Impact of Team-Based Care on Emergency Department Use

PURPOSE We sought to assess the impact of team-based care on emergency department (ED) use in the context of physicians transitioning from fee-for-service payment to capitation payment in Ontario, Canada.

METHODS We conducted an interrupted time series analysis to assess annual ED visit rates before and after transition from an enhanced fee-for-service model to either a team capitation model or a nonteam capitation model. We included Ontario residents aged 19 years and older who had at least 3 years of outcome data both pretransition and post-transition (N = 2,524,124). We adjusted for age, sex, income quintile, immigration status, comorbidity, and morbidity, and we stratified by rurality. A sensitivity analysis compared outcomes for team vs nonteam patients matched on year of transition, age, sex, rurality, and health region.

RESULTS We compared 387,607 team and 1,399,103 nonteam patients in big cities, 213,394 team and 380,009 nonteam patients in small towns, and 65,289 team and 78,722 nonteam patients in rural areas. In big cities, after adjustment, the ED visit rate increased by 2.4% (95% CI, 2.2% to 2.6%) per year for team patients and 5.2% (95% CI, 5.1% to 5.3%) per year for nonteam patients in the years after transition (P <.001). Similarly, there was a slower increase in ED visits for team relative to nonteam patients in small towns (0.9% [95% CI, 0.7% to 1.1%] vs 2.9% [95% CI, 2.8% to 3.1%], P <.001) and rural areas (−0.5% [95% CI, −0.8% to 0.2%] vs 1.3% [95% CI, 1.0% to 1.6%], P <.001). Results were much the same in the matched analysis.

CONCLUSIONS Adoption of team-based primary care may reduce ED use. Further research is needed to understand optimal team composition and roles.

INTRODUCTION
Strong primary care is the foundation of a high-functioning health care system and is associated with better outcomes, lower costs, and greater equity. 1 But for the last 2 decades, primary care has been in crisis. An outdated fee-for-service payment system was designed to support care for acute conditions and not the growing number of patients with complex chronic conditions who require long appointments and case management in between. 2 A rapidly increasing evidence base has made it harder to practice as a generalist physician, 3 with some estimating that implementing clinical practice guidelines for the 10 most common chronic conditions would take longer than the time available in an average work week. 4 Electronic health records were supposed to improve efficiency but instead have contributed to physician burnout, a growing problem. 5 Sharing the care within an interprofessional primary care team and implementing physician payment reform are 2 strategies to simultaneously improve patient care and reduce burnout among family physicians. 6,7 Team-based care is seen as a central pillar of high-functioning primary care by professional associations and policy experts in both Canada and the United States. 8-10 Yet, jurisdictions in both countries have been slow to scale up team-based care, 11,12 in large part because of concerns about return on investment. Early evaluations found primary care teams had a favorable impact on chronic condition management. 13 More recently, there have been attempts to understand the effect on health care use and cost. The impact on emergency department (ED) visits has been of particular interest.
Patients in the United States and Canada consistently report difficulties in accessing timely primary care, with 47% and 41%, respectively, reporting going to the ED for an issue that could have been addressed by their primary care clinician. 14 The evidence on the impact of team-based care on ED visits has been mixed, 15-18 however, and in some cases, it is difficult to disentangle from payment reform and other components associated with the patient-centered medical home. 19,20 Primary care teams can theoretically reduce ED use through improved timeliness of appointments, better chronic condition management, greater care coordination, and support for the social determinants of health. 21,22 In Ontario, Canada, approximately one-fifth of patients receive team-based primary care as part of a Family Health Team, whereby physicians formally enroll patients, are paid via blended capitation, and have mandatory after-hours care. 23,24 Approximately one-quarter of patients are part of a corresponding practice model that includes blended capitation and mandatory after-hours care but does not include government-funded nonphysician health professionals. We conducted an interrupted time series study to compare changes in ED use between patients who transitioned to a Family Health Team and those who transitioned to a similar practice model that did not involve a team.

Context and Setting
Ontario is Canada's largest province with a population of 14.7 million in 2020. 25 All permanent residents have health insurance through the provincial health plan, and necessary physician and hospital visits are free at the point of care. 26 Ninety-four percent of the population report having a primary care professional, usually a family physician. 27 Approximately 80% of patients see a family physician who practices in a Patient Enrolment Model, a set of new physician payment models introduced in the early 2000s that include formal patient enrollment, physicians in administrative groups with shared after-hours coverage, financial incentives, and varying degrees of blended payments. 23,24 The most common Patient Enrolment Model is the Family Health Organization, in which approximately 70% of physician payment is by capitation adjusted for age and sex, 10% by financial incentives and bonuses, and 20% by fee for service. 28 The second most common model is the Family Health Group, in which 80% of payment is by fee for service, 5% via financial incentives and bonuses, and 15% via capitation. About 40% of Family Health Organizations are part of Family Health Teams, whereby physicians and their patients have access to a government-funded extended health care team that can include nurses, nurse practitioners, social workers, dietitians, pharmacists, and other health professionals. Teams decide on the role of each professional; for example, pharmacists may support everything from medication reconciliation to smoking cessation, anticoagulation management, opioid stewardship, and physician education. The size and composition of the extended health care team, whether they are colocated with physicians, and the level of integration with the Family Health Organization are also variable. Family Health Teams are more likely to be located in rural areas, where family physicians spend more time delivering emergency, inpatient, and obstetrical care. There are usually no walk-in clinics in rural areas, and rates of ED use are higher compared with those in urban areas. 29
Family Health Teams were introduced in 2005, but no new teams have been funded since 2012.

Study Design and Population
We conducted a longitudinal study to compare the change in the annual ED visit rate for patients whose physician transitioned to a team vs a nonteam capitation practice. We included Ontario residents aged ≥19 years whose physician transitioned from a Family Health Group (enhanced fee-for-service payment) to a Family Health Organization (blended capitation payment) between April 1, 2006 and March 31, 2013 and had a minimum of 3 years of outcome data both before and after transition (the minimum number of time points required for the regression analysis). Some physicians who joined a Family Health Organization applied to and became part of a Family Health Team (team practice), whereas others did not (nonteam practice). We included patients from team practices if their physician joined a Family Health Team within a year of transitioning to a Family Health Organization to allow a clear before-after comparison with patients who transitioned to a Family Health Organization without a team. Patients in both groups needed to be in the respective model for at least 3 years (Supplemental Figure 1). We used linked administrative data sets to conduct a patient-level analysis comparing outcomes for patients who joined a team practice vs a nonteam practice. These data sets were linked using unique encoded identifiers and analyzed at ICES, an independent, nonprofit research institute whose legal status under Ontario's health information privacy law allows it to collect and analyze health care and demographic data, without consent, for health system evaluation and improvement. The use of data in this project was authorized under section 45 of Ontario's Personal Health Information Protection Act, which does not require review by a research ethics board.

Data Sources
For our exposure variable (team vs nonteam care), enrollment tables provided by the Ontario Ministry of Health and Long-Term Care allowed us to assign patients to a family physician as of March 31 of a given year. The National Ambulatory Care Reporting System provided us with patient-level data on ED use, our outcome of interest, for each year of the study. We used the provincial health insurance registry for information on patient age, sex, and postal code. We linked patient postal code to 2006 census data to ascertain patients' neighborhood income quintile. We assessed new registration with provincial health insurance within the last 10 years as a proxy for recent immigration, as we have done previously. 30 We defined rurality on the urban-rural spectrum using the Rurality Index of Ontario (values of 0-9 denote big cities, 10-39 small towns, and 40 and higher rural locations). 31 We used the Johns Hopkins Adjusted Clinical Group software to assess comorbidity using adjusted diagnosis groups (no health care use, 1 to 4, 5 to 9, and ≥10, with higher numbers signifying greater comorbidity) and morbidity using resource utilization bands (0 to 5, with higher numbers signifying greater morbidity). 32

Statistical Analysis
We calculated outcome data and descriptive characteristics at the patient level for patients of team and nonteam practices between April 1, 2003 and March 31, 2017. We defined the index date as the date on which the patient's physician transitioned to the team capitation model or the nonteam capitation model.
We decided a priori to stratify our analysis by rurality (big cities, small towns, rural) given rural differences in the ED visit rate and primary care delivery. Depending on the year their physician joined a capitated model, patients contributed 6 to 11 years of outcome data in the period spanning before and after the index date. Our primary analysis was an unmatched comparison of patients of physicians who transitioned to a team practice vs a nonteam practice. We calculated the crude and adjusted ED visit rate before and after the index date. We adjusted for several potential confounders: patient age, sex, neighborhood income quintile, recent immigration, comorbidity, and morbidity. We used a segmented regression (negative binomial) analysis to model the interrupted time series data, to assess both the immediate step change (change in intercept) and the change in trend (gradual change) in the rate of ED visits after the index date. 33 We then calculated the difference in change in trend between team and nonteam patients. Next, we conducted a sensitivity analysis, comparing team patients with nonteam patients matched by year of transition, age, sex, rurality, and local health region. We calculated the unadjusted ED visit rate before and after the index date to assess the immediate step change and change in trend in mean ED visits following that date. For all analyses, we calculated 95% confidence intervals and considered P values of <.005 to be significant a priori. Further analytic notes are available in Supplemental Figure 2. All analyses were conducted in SAS Enterprise Guide version 7.15 (SAS Institute Inc).
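The segmented regression described above was fitted in SAS. Purely as an illustration (not the authors' code), the sketch below shows how a segmented negative-binomial interrupted time series model of this kind could be specified in Python; the file and column names are hypothetical, and the covariate list is a simplification of the adjusted model described in the Methods.

```python
# Hedged sketch of a segmented negative-binomial interrupted time series model,
# assuming a long-format table with one row per patient-year. File and column
# names are hypothetical, not from the paper.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ed_visits_long.csv")  # hypothetical data set

# year_rel: years relative to the index date (negative = pretransition)
df["post"] = (df["year_rel"] >= 0).astype(int)      # immediate step change
df["post_years"] = df["year_rel"].clip(lower=0)     # change in trend

model = smf.glm(
    "ed_visits ~ year_rel + post + post_years + age + C(sex)"
    " + C(income_quintile) + recent_immigrant + C(adg_group) + C(rub)",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())

# exp(coefficient of post_years) - 1 approximates the annual percent change in
# the ED visit rate after the index date; fitting separately for team and
# nonteam patients within each rurality stratum allows the post-transition
# trends to be contrasted, which is the comparison reported in the paper.
```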
RESULTS
In our primary analysis, we compared 387,607 team and 1,399,103 nonteam patients in big cities, 213,394 team and 380,009 nonteam patients in small towns, and 65,289 team and 78,722 nonteam patients in rural areas (Table 1). Nonteam patients had lower comorbidity and morbidity, and were less likely to live in a neighborhood in the lowest income quintile. Urban residents were more likely to be recent registrants with the provincial health plan compared with residents in small towns or rural areas. Most patients' physicians transitioned to the new model by March 31, 2010; the peak year of transition was generally 1 to 2 years later for nonteam patients vs team patients. The number of patients included in the analysis varied in each year before and after the index date (Supplemental Table 2). The unadjusted mean ED visit rate was more than twice as high at baseline in rural areas compared with urban areas (Figure 1). Before transition, the unadjusted rate was higher for team patients compared with nonteam patients; after transition, the difference in the rate between groups narrowed. In adjusted analysis, the mean ED visit rate was still slightly higher for team patients compared with nonteam patients in both big cities and rural areas, but the difference disappeared 6 years after the index date (Figure 2A and Figure 2C). In small towns, the rate was similar between team and nonteam patients on the index date but higher for the nonteam group 6 years later (Figure 2B). Table 2 presents results of the interrupted time series analysis for team and nonteam patients after adjustment for confounders. In big cities, the ED visit rate increased by 2.4% (95% CI, 2.2% to 2.6%) per year for the team group and 5.2% (95% CI, 5.1% to 5.3%) per year for the nonteam group in the years after transition (P <.001). Similarly, patients in teams had a smaller annual increase in ED visits compared with nonteam counterparts in small towns (0.9% [95% CI, 0.7% to 1.1%] vs 2.9% [95% CI, 2.8% to 3.1%], P <.001), and they had a decrease in rate in rural areas (−0.5% [95% CI, −0.8% to 0.2%] vs 1.3% [95% CI, 1.0% to 1.6%], P <.001). The change in annual ED visit rate was 2.9%, 2.1%, and 1.8% higher for the nonteam group compared with the team group in big cities, small towns, and rural areas, respectively (P <.001 for each). The results of our sensitivity analysis comparing matched team and nonteam patients are presented in the Supplemental Appendix (Table 1, Table 3, and Figure 3). Despite matching, the ED visit rate was still higher pretransition for the team group. The overall results, however, were similar to those of the primary analysis. The change in the annual ED visit rate was 2.0%, 1.5%, and 3.3% higher after transition for nonteam patients compared with team counterparts in big cities, small towns, and rural areas, respectively (P <.001 for each).

DISCUSSION
We analyzed data for more than 2 million patients comparing ED use before and after their physician transitioned from a fee-for-service model to either a model with blended capitation plus an interprofessional care team or a model with blended capitation only. We found an overall increase in ED use in the time period following the transition for both groups, but there was less of an increase for patients in the team vs the nonteam model. Results were consistent using 2 analytic methods. Findings were similar in big cities, small towns, and rural areas despite differences in baseline ED visit rates by rurality.

Findings in Context
Our findings are consistent with the theory and evidence supporting the role of extended health care teams in improving outcomes. 21,22 There are also other contextual factors that may explain our findings. In Ontario, teams have specific accountabilities related to access and quality improvement as well as paid administrators, shared decision support specialists, and other supports 34,35 not present in other practices. Some teams have focused efforts specifically on reducing ED use. 36 Several US studies have assessed the impact of the patient-centered medical home on ED visits, but results have been heterogeneous and the addition of teams is one of many changes included in medical home reforms. 20 A few studies, like ours, have tried to isolate the impact of teams. Researchers at Intermountain Healthcare found an association between teams and better chronic care management and decreased ED use. 18 A study of academic primary care practices in Boston found a reduction in ED visits for those with 2 or more chronic conditions, 16 while an evaluation in the Veterans Health Administration found a reduction for those more reliant on Veterans Affairs care. 17 In Canada, a few studies suggest team-based models have led to reductions in ED use in Quebec and Alberta. 15,37,38 Of note, ED visits have been rising in Ontario for the last decade in part due to an increase in population, aging of the population, and a reduced number of hospital beds per capita. 39

Study Limitations
Our study has important limitations. First, it is observational, not experimental.
In our comparison of groups, we adjusted for (or matched on) available patient characteristics, but there may have been unmeasured differences between patients and physicians of team and nonteam practices, especially given that physicians were free to select between models. We noticed a drop in ED visits between the second and first year before transition in some of the adjusted models, but the reason for this drop is unclear. We did not include calendar year in our unmatched analysis, but we have confidence in our findings given similar results in the matched analysis. Second, our study does not elucidate what components of team-based care are most effective. Team-based practices in Ontario are heterogeneous in terms of the type of health professionals comprising the team, the ratio of team members per patient, and their roles, but these data are not available in administrative data holdings. We also did not have data on team stability and culture, factors that are associated with team effectiveness. 40 Third, we focused on the outcome of ED visits, which is only one of many important quality measures. An increase in ED visits does not necessarily imply worse-quality care. 41 Canada has a high ED visit rate, however, compared with other high-income countries, 14,39 and relative improvement is desirable. Future research will explore other health system impacts including potential cost savings.

Policy Implications
Since 2015, the Ontario government has halted expansion of team-based care because of concerns about the return on investment. 42 Our study, however, suggests primary care teams have led to improvements in ED use, which may be a surrogate for improved chronic care management, a finding consistent with those of other studies. 24 Lower ED use likely also relates to timely access, although studies from Ontario have indicated a mixed association between teams and access. 43 Currently, there is a 10-fold variation in availability of team-based care in regions across Ontario with no correspondence between availability and health care need. 44,45 This inequity in access to teams together with our findings and research from other jurisdictions all support government expansion of team-based primary care in tandem with more research to understand the team composition, ratios, and roles associated with better patient outcomes and reduced health system cost. Notably, it may take several years to see improvements in outcomes with expansion of team-based care.

Conclusions
In summary, we found that patients whose physicians joined a practice model combining capitation and team-based care had a slower increase in ED use compared with those whose physicians joined a capitation model without a team. Our findings add to the growing evidence supporting the value of the extended health care team in primary care.
Comparative Dose of Intracarotid Autologous Bone Marrow Mononuclear Therapy in Chronic Ischemic Stroke in Rats

BACKGROUND: Research on chronic ischemic stroke is limited. One of the more promising approaches showing positive effects in the acute stage is mononuclear bone marrow cell therapy. This research may be the first to present data on the optimum dose of bone marrow mononuclear cells (BM-MNCs) for chronic ischemic stroke in rats and to discuss factors influencing recovery in the chronic stage.

AIM: To elucidate the optimum dose of BM-MNCs for chronic ischemic stroke and to demonstrate factors influencing recovery in the chronic stage of ischemic stroke.

METHODS: Thirty-two male Sprague-Dawley rats sourced from the Kalbe Farma Institution (Bandung, Indonesia), aged 6-10 months and weighing 350-450 g, were used in this study. We performed temporary middle cerebral artery occlusion (MCAO) procedures on the rats, which were then randomly assigned to one of two experimental groups in which they were given either low or high doses of autologous BM-MNCs (5 million or 10 million cells per kg body weight, intracarotid) 4 weeks after MCAO. At 8 or 12 weeks, rats were necropsied and rat brains were fixed for hematoxylin and eosin (H&E), cluster of differentiation (CD) 31, and doublecortin staining for analysis of the effects. Rat behavior was assessed weekly using the cylinder test and a modified neurological severity score (NSS) test. Cylinder test scores and NSS scores were analyzed by one-way repeated-measures ANOVA and post hoc Bonferroni tests. The size of the infarct zone, the CD31 vessels, and the DCX neuroblasts were analyzed using one-way ANOVA and a post hoc Bonferroni test. To investigate the degree of correlation between time and dose, two-way ANOVA and simple main effect analyses were conducted. A linear regression test was used to evaluate the correlation between CD34 and other variables.

RESULTS: In the 4 weeks before administration of BM-MNC, cylinder test scores improved to near normal, and NSS test scores improved moderately. The infarct zone decreased significantly (p < 0.01), there was a nonsignificant improvement in angiogenesis (p = 0.1590), and there was a significant improvement in neurogenesis (p < 0.01). Reduction of the infarct zone was associated with a higher dose, whereas both higher and lower doses were found to have a similar effect on improving angiogenesis and neurogenesis. Recovery was superior after 12 weeks compared with the recovery assessment at 8 weeks.

CONCLUSION: A dose of 10 million cells was more effective than a dose of 5 million cells per kg body weight for reducing the infarct zone and improving neurogenesis. There was an improvement in histopathological parameters associated with the longer infarct period.

Edited by: Branislav Filipović
Citation: Makkiyah F, Sadewo W, Nurrizka RH. Comparative Dose of Intracarotid Autologous Bone Marrow Mononuclear Therapy in Chronic Ischemic Stroke in Rats. Open Access Maced J Med Sci. 2021 Apr 17; 9(A):233-243. https://doi.org/10.3889/oamjms.2021.5675

Introduction
Ischemic stroke represents a major cause of death and is the most prominent cause of permanent disability in adulthood. The chronic phase, where disabilities reach a plateau, has no known effective treatment regime, and it is this fact which drives the search for novel therapeutic options for ischemic stroke treatment. Cell therapies, in particular the use of bone-marrow-derived cell populations, are among the most promising approaches.
Studies to date have shown that bone marrow mononuclear cell (BM-MNC) therapy reduces infarct volume [1], decreases the thickness of the glial scar, and enhances proliferation of oligodendrocyte precursors along the subventricular zone in the ipsilateral hemisphere [2]. During the chronic phase of stroke, it is hypothesized that the formation of new vessels by angiogenesis contributes to brain plasticity and functional recovery after stroke. Angiogenesis induces neurogenesis and vice versa. A challenge in translating this research to humans is the fact that rodents differ substantially from humans. One of the main obstacles in the treatment of stroke is that the mechanism of recovery in chronic stroke ischemia is not yet fully understood and that the effective optimum dose has not yet been established. To better understand the possible mechanism underlying the effect of BM-MNC on chronic stroke ischemia, we measured the infarct zone, degree of angiogenesis, and neurogenesis at two different time periods and with two different dose levels.

Animals and experimental groups
Thirty-two male Sprague-Dawley rats sourced from the Kalbe Farma Institution (Bandung, Indonesia), aged 6-10 months and weighing 350-450 g, were used in this study. The animals were assigned randomly to the experimental groups (Figure 1a). The rats were maintained in accordance with the guidelines of the NIH (Guide for the Care and Use of Laboratory Animals, 1976). All protocols were approved by the Animal Care and Use Committee of the Faculty of the University of Indonesia. All rats were given free access to food and water throughout the study.

Temporary middle cerebral artery occlusion (MCAO) procedure
After a week's adaptation in the Animal Research Facilities of the Faculty of the University of Indonesia, 32 rats underwent surgery by two experienced neurosurgeons trained in MCAO techniques. The surgical procedures originally described by Koizumi and Longa [3], [4] were used, modified so that no coagulation was performed and no clips were applied [5]. Body weight, heart rate, and respiration rate were measured before the procedure. The rats were anesthetized with ketamine-xylazine (Xyla, Holland), 0.3-0.4 ml in a 1 ml syringe. Rectal temperature was maintained at 37 ± 0.5°C throughout the surgical procedure. An incision was made in the midline of the neck. Then, the left common carotid artery (CCA), external carotid artery (ECA), and internal carotid artery (ICA) were isolated through a midline incision. Temporary knots were made in the left ECA, CCA, and ICA, and two further temporary knots were then made in the ICA. Between these two knots, a small arteriotomy was made using a 26 G needle. A 4-0 nylon monofilament (Ethicon, NJ, USA) with a heat-blunted end was then inserted into the ICA until mild resistance was felt. Ninety minutes after occlusion, the nylon was withdrawn and the skin was sutured. One hour after the procedure was completed, neurological assessments were made using a 6-point neurological scale: (0) no deficit; (1) difficulty extending the front extremities, indicating mild deficit; (2) circular movement in the direction of the paretic limb, indicating moderate deficit; (3) falling to the left, indicating severe deficit; (4) inability to walk, indicating a decrease in consciousness; and (5) death due to severe brain ischemia [4]. All animals showed a circular movement favoring the paretic limb.
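For readers implementing the scoring, the 6-point scale quoted above maps directly to a small lookup table; the helper below is only an illustration (the scale is from the paper, the code is not).

```python
# Post-occlusion neurological scale as quoted in the text; the helper function
# is illustrative only, not part of the study's methods.
NEURO_SCALE = {
    0: "no deficit",
    1: "difficulty extending front extremities (mild deficit)",
    2: "circular movement toward the paretic limb (moderate deficit)",
    3: "falling to the left (severe deficit)",
    4: "unable to walk (decreased consciousness)",
    5: "death due to severe brain ischemia",
}

def describe(score: int) -> str:
    """Return the description for a post-MCAO neurological score (0-5)."""
    return NEURO_SCALE[score]

print(describe(2))  # the deficit observed in all animals in this study
```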
To alleviate pain, rats were given paracetamol syrup (Soho), 1-2 mg/cc, in their drinking water from 24 h before to 24 h after the procedure. The skin wound in the neck was treated with antibiotic skin ointment (gentamicin sulfate 0.1%, Kalbe). Two animals that appeared to be less active and hunched were excluded from the study. A postmortem autopsy showed subarachnoid hemorrhage in both rats.

Harvesting procedure
After 4 weeks (equivalent to the chronic phase of stroke in humans), 30 male Sprague-Dawley rats were anesthetized by the method described above. No intubation was performed during the procedure. The anterior right knee joint was shaved and the area was disinfected with alcohol. The right knee joint was chosen as the isolation site because it was the paretic side. The bone marrow isolation was performed by the technique described by Ordodi et al. [6]. Before needle insertion, the right knee joint was moved into a flexion-extension position to facilitate access to the upper part of the joint, the preferred point of entry to the femur. Using the 14 G needle of an IV catheter (SR-OX1451CA, Terumo, USA), the skin and muscle were pierced until the needle touched the bone. It was then carefully twisted until it was felt to be in the middle of the femur (diaphysis of the femur). The needle was attached to a 1 cc syringe and inserted until it reached 1 cm before the hip joint. The needle was then removed and a new needle attached to the syringe to mitigate the risk that the previous needle might contain bone chips after bone drilling. To prevent coagulation of the diaphysis content, the syringe was flushed with a 0.1 cc EDTA solution (60-00-4, Sigma Aldrich, USA). Bone marrow (0.5-1 ml) was aspirated while rotating and moving the needle back and forth. The medullary cavity was flushed with saline, and the content aspirated. The skin was cleaned with alcohol.

Isolation of bone marrow
Bone marrow that was isolated from the rat's femur was diluted 1:1 with phosphate-buffered saline (PBS). This suspension was placed on top of a Ficoll-Hypaque solution (1:1 ratio) in a glass tube and centrifuged for 10 min at 650 × g at 22°C with no brake. The mononuclear cell band was extracted with a pipette and washed immediately. This was centrifuged again for 10 min at 400 × g with brake at 22°C. The supernatant was then discarded and the cell pellet collected [7]; 10 µl were taken for cell counting, and 200 µl for analysis by flow cytometry. Using flow cytometry (BD FACSAria III), these cells were characterized and identified for the CD34+ marker. The overall procedure took 3 h to complete.

Intracarotid mononuclear cell injection
Before receiving the injection, the rats were sedated with the same anesthetic formula as for the MCAO procedure, followed by a midline neck incision and exposure of the CCA, ICA, and ECA. Then, a temporary ligation was applied with a double loop of silk around the CCA and ECA to reduce the blood flow to the injection site. A second double loop of silk was applied to the CCA. The site of arteriotomy was between the two loops of the CCA. Polyethylene PE 50 tubing (SIMS Portex Ltd; ID 0.635 mm, OD 1.19 mm) was inserted through the arteriotomy site, and 1 cc of mononuclear cells in saline solution was infused slowly (about 1 cc/min). The catheter was flushed with saline and then removed, and the skin was closed with a nylon suture.
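As a back-of-the-envelope check (not taken from the paper's methods), the arithmetic below relates the two dose arms to the body-weight range and to the bone marrow yields reported later; it assumes the dose scales linearly with body weight and is delivered in the 1 cc infusion described above.

```python
# Hedged dose arithmetic for the two treatment arms; assumptions noted above.
def cells_per_rat(dose_per_kg: float, body_weight_g: float) -> float:
    """Total BM-MNCs required for one rat at a given dose per kg body weight."""
    return dose_per_kg * (body_weight_g / 1000.0)

for dose in (5e6, 10e6):        # 5 or 10 million cells per kg body weight
    for bw_g in (350, 450):     # body-weight range of the study animals
        print(f"{dose / 1e6:.0f}M cells/kg, {bw_g} g rat: "
              f"{cells_per_rat(dose, bw_g) / 1e6:.2f} million cells")

# With yields of roughly 2.5-13 million BM-MNCs per ml of aspirate (see the
# Results), a 0.5-1 ml harvest can plausibly cover the 1.75-4.5 million cells
# needed per animal.
```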
Cylinder test and Neurological Severity Score (NSS)
The rats were assessed weekly using the cylinder test and behavioral observations to produce their NSS. The cylinder test provided a measure of the rats' spontaneous forelimb use. Each animal was placed in a transparent plexiglass container and observed by independent researchers who recorded the number of independent wall placements for the right forelimb, left forelimb, and both forelimbs simultaneously for 10 min. During each 10 min observation period, up to a total of twenty movements were classified and recorded. The modified NSS, commonly used in animal studies of stroke, rates neurological functioning on a scale from 1 to 18 to obtain a composite score of motor (muscle status and abnormal movement), sensory (visual, tactile, and proprioceptive), reflex, and balance tests. One point is given for the inability to perform each test and one point is deducted for the lack of a tested reflex, with the overall composite score indicating the degree of impairment.

Necropsy procedure
All animals survived the procedure and remained alive until necropsy. After 4 weeks or 8 weeks of administration of intracarotid autologous mononuclear cells (weeks 8 and 12 of the experiment), the rats were sacrificed. The brain was collected through the perfusion method using saline and neutral buffered formalin until the tissue lost color. For histological studies, two consecutive 2-mm slices were analyzed from the level of MCAO (central part of the lesion) and 2 mm distal to the first slice.

Infarct analysis
The H&E slides were macrophotographed using a microscope and a Canon camera. Healthy tissue and the border tissue around infarcted areas were outlined using ImageJ software (National Institutes of Health, Bethesda, MD). The infarct zone was determined by subtracting the normal area of the ipsilateral hemisphere from the area of the contralateral hemisphere.

Immunohistochemistry
Two biomarkers were used in this research: CD31 as a marker of angiogenesis and doublecortin as a marker of neurogenesis. The extent of angiogenesis was assessed from two consecutive 2-mm slices taken from the central part of the lesion and the distal area. The brain slices were embedded in paraffin and brain microvessels were evaluated by immunohistochemistry using an anti-CD31 antibody (Abcam [EPR17259], ab182981). The secondary antibody was goat anti-rabbit IgG H&L (HRP). Five random areas around the focal cerebral infarction were imaged at 200× and the number of CD31-positive vessels was counted. A single endothelial cell separate from adjacent microvessels was considered the criterion for a CD31-positive vessel [8], [9]. The extent of neurogenesis was evaluated by immunohistochemistry using a doublecortin (DCX) antibody (Abcam EPR19997, ab207175). The secondary antibody applied was goat anti-rabbit IgG H&L (HRP). Five random areas in the subventricular zone were imaged at 400×. ImageJ was used to count the number of doublecortin-positive cells.

Statistics
STATA 15 statistical software was used to analyze all data. All data were normally distributed. Cylinder test scores and NSS scores were analyzed by one-way repeated-measures ANOVA and post hoc Bonferroni tests. The size of the infarct zone, the CD31 vessels, and the DCX neuroblasts were analyzed using one-way ANOVA and a post hoc Bonferroni test. To investigate the degree of correlation between time and dose, two-way ANOVA and simple main effect analyses were conducted. A linear regression test was used to evaluate the correlation between CD34 and other variables. All data were displayed using GraphPad Prism software as mean ± SD. The asterisk symbol (*) indicates a statistically significant difference; p < 0.05 and p < 0.01 were considered significant.
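The analyses above were run in STATA 15. As an illustration only, the sketch below shows equivalent tests in Python on a hypothetical long-format table with columns 'group' (control, low dose, high dose), 'week' (8 or 12), 'infarct_area', and 'cd34_percent'; it is not the authors' code, and the repeated-measures ANOVA of the weekly behavioral scores is omitted.

```python
# Hedged sketch of the group comparisons named in the Statistics section.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("histology.csv")  # hypothetical data set

# One-way ANOVA across groups at a single time point
wk8 = df[df["week"] == 8]
print(stats.f_oneway(*[g["infarct_area"].values for _, g in wk8.groupby("group")]))

# Bonferroni-corrected pairwise comparisons (post hoc)
labels = sorted(wk8["group"].unique())
pvals = []
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        a = wk8.loc[wk8["group"] == labels[i], "infarct_area"]
        b = wk8.loc[wk8["group"] == labels[j], "infarct_area"]
        pvals.append(stats.ttest_ind(a, b).pvalue)
print(multipletests(pvals, method="bonferroni")[1])

# Two-way ANOVA for the dose x time (week) interaction
lm = smf.ols("infarct_area ~ C(group) * C(week)", data=df).fit()
print(sm.stats.anova_lm(lm, typ=2))

# Simple linear regression of an outcome on CD34+ content
print(smf.ols("infarct_area ~ cd34_percent", data=df).fit().summary())
```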
Isolation and characterization of BM-MNC
The BM-MNC yield from ten samples ranged from 2.5 million to 13 million cells per ml, and the proportion of CD34+ cells ranged from zero to 19.6%. All rats survived the bone marrow isolation procedure and were able to stand and walk the next day.

Cylinder test and neurological severity score assessment
The cylinder test was used to assess forelimb use in MCAO rats. Before the procedure, all animals showed equal forelimb use. Following the MCAO procedure, there was a decrease in the use of the impaired forelimb (contralateral to the lesion) and an increase in the use of the unimpaired forelimb. After 4 weeks, all animals exhibited equal use of both the impaired and unimpaired forelimbs, similar to their behavior before the procedure. For this reason, the cylinder test is considered to be unsuitable as a method for evaluating the long-term outcome of BM-MNC treatment. The NSS scores of all rats before undergoing the MCAO procedure showed no neurological deficits. Following the MCAO procedure, the rats began to show moderate neurological deficits which persisted until the week before BM-MNC administration commenced (Figure 1b). Four weeks after the MCAO procedure, NSS scores between groups were markedly different. Given the potential for statistical bias, both behavioral tests were excluded from statistical analysis. However, unlike the cylinder test, the comparison of NSS was considered appropriate for the evaluation of change in neurological deficit in the chronic phase of stroke ischemia.

BM-MNCs treatment significantly reduced the infarct zone
Three analyses yielded significant results. Compared with the saline group, the BM-MNC-treated rats showed a significant reduction in lesion size (p < 0.01). The Bonferroni post hoc test demonstrated that the size of the dose had a significant effect: 10 million cells per kg BW significantly reduced lesion size compared with a dose of 5 million cells per kg BW. This was noted at both week 8 and week 12. The two-way ANOVA showed a significant interaction of dose size and period (weeks) in the reduction of lesion size (p < 0.01). A simple main effect test was used to assess the significance of the interaction of dose and period (weeks) and found that increasing the dose from 5 million cells to 10 million cells per kg BW had a greater effect size (2.67) at week 12 than at week 8 (95% confidence interval −3.732991 to −1.607009, standard error 0.5150405).

Reduction of infarction size from week 8 to week 12
There was no reduction in infarction size for the control group. However, the BM-MNC treatment group showed a reduction in the size of infarction between week 8 and week 12. The reduction in infarction size was observed in both the high and low dose groups (16.58 ± 0.56 and 15.24 ± 0.70 vs. 19.74 ± 0.92 and 18.41 ± 0.20, p = 0.019) (Figure 2a).

BM-MNCs treatment did not significantly improve angiogenesis in chronic stroke
The rats in the BM-MNC treatment groups showed a non-significant increase in the number of peri-infarction vessels in chronic stroke compared with the control group (p = 0.1590) (Figure 2b).
The number of CD31-positive cells, as a marker of angiogenesis, increased after BM-MNC administration compared with saline administration, and a higher dose of BM-MNCs increased angiogenesis more than a lower dose; however, these differences were also not significant. Analysis of the interaction between dose and time (week) did not yield significant results (p = 0.3211).

Improvement in angiogenesis from week 8 to week 12
The longer post-infarct period increased angiogenesis in the BM-MNC groups. Even though a high dose increased the number of CD31 vessels more than was seen in the low dose groups, the difference was not statistically significant. There was no improvement in angiogenesis in the control groups at week 12.

BM-MNCs treatment significantly improved neurogenesis in chronic stroke
BM-MNC treatment enhanced neurogenesis significantly (p < 0.01) (Figure 2c). The post hoc Bonferroni test showed a significant difference between the control group necropsied at week 8 and the group given 5 million BM-MNCs per kg BW necropsied at week 8 (p < 0.01). A significant difference in neurogenesis was also seen between the control group necropsied at week 8 and the group given 10 million BM-MNCs per kg BW necropsied at week 8 (p < 0.01). Both the high and low doses of BM-MNCs enhanced neurogenesis at a similar level at week 8. A similar difference was shown for the rats necropsied at week 12.

Interaction between the level of dose and length of treatment on neurogenesis
A two-way ANOVA was performed to assess the interaction between dose and time (week) and found a significant interaction between the size of the dose and the length of time on neurogenesis. A simple main effect test indicated that increasing the dose from 5 million to 10 million BM-MNCs per kg BW had an effect 5.8 greater at week 12 than at week 8 (95% confidence interval −18.17441 to 29.77441, standard error 11.61608).

The suggestive trend in neurogenesis from week 8 to week 12
BM-MNC administration showed suggestively different trends of neurogenesis for the two periods (Figure 3). An upward trend was apparent in the control groups and a downward trend was seen after the administration of BM-MNCs.

Linear regression correlation between CD34, behavioral tests, and histopathology findings
The number of CD34+ cells contained in the BM-MNC showed a linear correlation with the reduction in the infarct zone (p = 0.05) and with the improvement in neurogenesis (p = 0.04). Angiogenesis did not show a linear correlation with the CD34+ content.

Results of the cylinder test and NSS scores after administration of BM-MNC
This current study showed deficits in standard behavioral measures such as the cylinder test from the first day following the MCAO procedure, and the behavioral deficits lessened during the first week after surgery. By the 14th day following the procedure, it was barely possible to detect the deficits with the cylinder test. This might be explained by spontaneous recovery of the rats or by rapid habituation due to frequent testing, resulting in less tendency to rear and explore the plexiglass. This disadvantage was also noted by Balkaya et al. (PMID 28760700) [10]. The markedly different NSS scores between groups at 4 weeks post-MCAO made this test unreliable as a means to interpret the outcome of BM-MNC therapy in this study.
However, since the NSS scores did not return to baseline after the 4th week post-MCAO, this test is still considered reliable for evaluating the severity of stroke ischemia. In this study, after week 5 the rats showed normal mNSS scores; this can be explained by (1) spontaneous recovery and (2) the limited usefulness of the NSS for detecting behavioral deficits in long-term studies [11], [12]. Both behavioral test results suggest that caution should be taken when selecting a simple behavioral test for detecting behavioral deficits in the chronic stage.

BM-MNC crosses the blood-brain barrier (BBB)
In this current study, BM-MNCs administered intracarotidally at 4 weeks post-MCAO presumably cross the BBB; as a result, these cells are situated in the peri-infarction zone. This hypothesis is supported by the study of Garbuzova-Davis et al., which showed significant damage to the BBB, especially in the ipsilateral striatum and motor cortex, in rats 30 days after MCAO. Their study also showed distinct microvascular changes in the ipsilateral and contralateral brain areas, indicating continuing BBB damage in chronic ischemia [13]. Strbian et al. [14] evaluated BBB leakage in rats after 90 min of temporary MCAO ischemia-reperfusion using gadolinium (small molecules) and Evans blue fluorescence (large molecules), and demonstrated leakage of both within 25 min after reperfusion. They found that the BBB remained open for 3 weeks for Evans blue and 5 weeks for gadolinium, suggesting persistent BBB leakage in rats after ischemia-reperfusion, with the BBB not closing for several weeks [14]. Kamiya et al. [15] used magnetic resonance imaging to monitor the distribution of intra-arterial BM-MNC administered 90 min after MCAO in rats. This study showed distribution of BM-MNC in the ischemic hemisphere 1 h after administration, with a decrease in the number of cells by day 7, and the BM-MNC treatment group showed smaller infarct volume. The fact that BM-MNC can enter the brain circulation was shown by the study of Prabhakar et al., in which BM-MNC was administered to mice intravenously 24 h post-MCAO. This produced an accumulation of cells labeled with carboxyfluorescein diacetate in the infarct zone, demonstrating that BM-MNC was not only able to cross the BBB but also remained alive for 1 week. However, the mechanism by which BM-MNC crosses the BBB is still uncertain [16]. One study suggested that mononuclear cells increase brain perfusion 1-15 min after injection through interaction with the host and release of nitric oxide (NO); the increased perfusion helps the cells enter the brain circulation. When an NO inhibitor was administered in that study, no improvement of the neurological deficit was seen in rats [17]. Our study administered ipsilateral intracarotid mononuclear cells at 4 weeks post-MCAO; however, we did not have any objective parameters to prove that BM-MNC entered the brain circulation.
[Figure 2 caption: (a) Infarction sizes differed significantly (p < 0.05, n = 5 per group); rats receiving 10 million cells per kg BW showed a greater reduction in infarction size than the lower dose, whereas the control group showed an increase; graphs show mean ± SD. (b) Bar graph comparing mean CD31 cells; the difference was not significant. (c) Mean DCX (doublecortin)-positive cells in the subventricular zone at week 8 and week 12 (* = significant, p < 0.01); the black arrow marks the change in the neurogenesis trend. (d) Schematic: BM-MNC treatment 4 weeks after middle cerebral artery occlusion decreases the infarct zone and improves angiogenesis and neurogenesis (blue square = infarct area, red columns = blood vessels, blue circles = neuroblasts); A, control group; B, mononuclear cells group. (e) Suggestive change in the trend of neurogenesis from week 8 to week 12 in the saline (a) and BM-MNC (b) treatment groups: the control group still showed an increase in the number of neuroblasts between these periods, whereas BM-MNC treatment resulted in a decreasing number of neuroblasts, many of which appear to have changed into mature neurons (blue round shape = neuroblast, yellow round shape = mature neuron).]

This is a limitation of our study. Nevertheless, the improvement of the infarct zone, neurogenesis, and angiogenesis makes us believe that BM-MNC works. Our route of administration was ipsilateral intracarotid. BM-MNC presumably enters through the open BBB in areas of the brain that have abundant fenestration and less permeable tight junctions, especially around the cerebral ventricles [18]. This might be a plausible explanation for the improvement of the infarct zone, angiogenesis, and neurogenesis: the effect of BM-MNC entering the brain circulation. As this study did not label the BM-MNCs that entered the rats' brains, another explanation for the recovery from chronic stroke in this study might be a paracrine effect of the BM-MNCs. This improvement in the chronic stage is supported by a study of ten patients by Bhasin et al. [19], [20]. They demonstrated that higher levels of growth factors such as serum vascular endothelial growth factor (VEGF) and brain-derived neurotrophic factor (BDNF) were released in the BM-MNC group than in the saline group, although the difference was not significant. They hypothesized that this was due to a paracrine effect of the stem cell niche and the neurorehabilitation regime, which released growth factors (VEGF and BDNF) into the microenvironment [19], [20].

The mechanism by which BM-MNCs enhance recovery in the chronic stage
Many papers have been written about the recovery mechanism during the acute stage of ischemia; however, very few papers discuss the mechanism in the chronic phase. Our study demonstrated that BM-MNC transplantation improved angiogenesis (not significantly), improved neurogenesis, and decreased the size of the infarct zone in the chronic stage of ischemic stroke. We also demonstrated that active neural proliferation was more obvious in the subventricular zone in the BM-MNC group than in the control group. The neural proliferation was not likely due to the ischemic injury itself, because ischemia-induced neural proliferation ends about 2 weeks post-stroke [21], [22], whereas we administered BM-MNC 4 weeks post-MCAO. To show the neurogenesis process, we measured the number of immature neurons (DCX) in the subventricular zone at week 8 and week 12. Thored et al. [23] labeled BrdU with DCX and suggested that DCX cells were immunoreactive over 2-3 weeks, but then gradually lost this expression and became marked with NeuN.
Intriguingly, we demonstrated that there was still a progression in the presence of neuroblasts at week 8 and week 12, especially in the BM-MNC treatment group. However, we observed different trends between the control group and the BM-MNC group. The downward trend in the number of DCX cells in the BM-MNC treatment group suggests that many of the immature neurons (DCX cells) turned into mature neurons, and we hypothesize that BM-MNC administration accelerates this neurogenesis process. This is a limitation of our study, as the entire process of neurogenesis could not be explained in detail (Figure 3). We suggest measuring the number of DCX neurons and mature neurons (NeuN) in the penumbra and subventricular zone to enhance the understanding of the neurogenesis mechanism in the chronic stage. Previous studies have shown that an increase in the number of new vessels in the penumbra is correlated with improved outcomes in ischemic animal models, and angiogenesis occurs 4-7 days after brain ischemia at the border of the ischemic core and periphery [24]. Similarly, we showed that angiogenesis continued at week 8 and week 12; however, the difference between the control group and the experimental group was not significant. There appear to be several explanations for the enhancement of angiogenesis by BM-MNC: BM-MNC contains mesenchymal stem cells (MSCs) that are able to differentiate into vascular smooth muscle and endothelium in MCAO rats; these differentiated cells ameliorate arteriogenesis (especially leptomeningeal anastomoses) and angiogenesis; and MSCs might decrease the inflammatory response through modulation of cytokine expression [25]. This is supported by the study of Wang et al., which showed that BM-MNC contains MMP-9 [26]; this MMP-9 acts as a proteolytic enzyme that degrades the extracellular matrix in the glial scar [27] and increases neovascularization [28], neurogenesis, and synaptogenesis. MMP-9 is also involved in the migration of neuroprogenitors from the subventricular zone in ischemic stroke [29]. The role of MMP-9 in neurogenesis during the chronic stage is supported by Lee et al. [30], and inhibitors of MMPs were observed to suppress the migration of neurons from the subventricular zone into the striatum [31]. However, MMP-9 acts differently in acute stroke, where it enhances leakage of the BBB, resulting in more edema formation and hemorrhagic transformation [32]. A more detailed mechanism of angiogenesis after BM-MNC administration was observed in a study by Kikuchi-Taura et al. [33]. Their study showed that BM-MNC administration (1) increased the uptake of VEGF into endothelial cells and (2) was able to transfer a glucose analog to endothelial cells (PMID: 3207554). The significant decrease in the size of the infarct zone in the BM-MNC group is presumed to occur because those cells are able to cross the BBB [17] and stimulate angiogenesis and neurogenesis (as reflected by the CD31 and DCX expression), or it may be due to a paracrine effect that modulates the inflammatory response and releases trophic factors and cytokines that support cytoprotection [34], [35]. Besides all the mechanisms mentioned above, the use of freshly prepared cells in this study is a major advantage, because it is clinically feasible, particularly in the chronic stage, and avoids problems related to cryopreservation.
In a study by Weise et al. [36], administration of cryopreserved human umbilical cord blood mononuclear cells was not shown to induce sustained recovery after stroke in spontaneously hypertensive rats (PMID 24169850). The pursuit of cell treatments that are effective and cost-effective is the ultimate goal in countries with a shortage of cutting-edge cell preparation equipment and skills. In this current study, the procedure to process BM-MNC took only 2 h and the cost of each procedure was <200 USD.

Recovery mechanism in the chronic phase of focal infarction at week 8 and week 12
Our study showed that the BM-MNC groups recovered markedly by week 12, the explanation being that stimulation of angiogenesis and neurogenesis had reduced infarct size by week 12. It also appears that BM-MNC therapy produces faster maturation of neurons in that period. This maturation trend was absent in the control group (Figure 3). The influence of angiogenesis follows a similar path to neurogenesis: BM-MNC enhances neovascularization in the peri-infarct area. The maturation trend of neurons is supported by Thored, who observed that neuroblasts were found at week 6 in the subventricular zone but that by week 16 more mature neurons than neuroblasts were found [37].

Optimal dose for the recovery of neurological function in the chronic phase of focal infarction
A higher dose (10 million cells per kg BW) of BM-MNC was found to be superior to the lower dose for improving the functional neurological deficit. This result is in accordance with the MSC dose of the "STem cell Application Researches and Trials In NeuroloGy" (STARTING) study (PMID 24083670) [38], 1 × 10^5 to 3 × 10^6 cells/rat. Another study, by Bhasin et al. [20], observed no difference in functional outcomes between 10 million, 8 million, and 7 million BM-MNCs per kg BW in ten chronic ischemic stroke patients. Their study administered BM-MNCs intravenously, whereas our study administered BM-MNCs via the ipsilateral intracarotid route; the different site/route of administration may also make a significant difference in outcomes. The reason the higher dose showed more efficacy may be partly that a higher number of mononuclear cells increases the potential to generate more progenitor cells, one type being CD34+ cells. These hematopoietic stem cells are an important indicator of the success of BM-MNC therapy [39]. However, due to the small sample of CD34+ cells in this study, we cannot draw a direct conclusion on the effect of CD34+ cells.

Limitation of the study
The key limitations of this study are the lack of an objective measure of the size of ischemia and the fact that only two behavioral tests were applied. A further limitation was that only two biomarkers were used. The availability of detailed and comprehensive immunohistochemistry markers of neurons and synapses would enable a more comprehensive assessment of the recovery mechanism of chronic ischemic stroke in rats.

Conclusion
Doses of 10 million cells per kilogram of body weight were superior to doses of 5 million cells per kilogram of body weight in reducing the infarct zone and improving angiogenesis and neurogenesis. The size of the dose did not produce significantly different behavioral measures on either the cylinder or NSS tests, as there was already an improvement in the behavioral test measures before BM-MNC administration. Histopathological recovery was also observed to be greater at 12 weeks post-MCAO than at 8 weeks.
We suggest performing behavioral assessments over a longer time frame and examining other endpoints, such as genetic mechanisms and synaptogenesis.
Selfconsistent approximations, symmetries and choice of representation

In thermal field theory selfconsistent (Phi-derivable) approximations are used to improve (resum) propagators at the level of two-particle irreducible diagrams. At the same time vertices are treated at the bare level. Therefore such approximations typically violate the Ward identities connected to internal symmetries. Examples are presented of how such violations can be tamed by a proper choice of representation for the fields which describe the system under consideration. These examples cover the issue of massless Goldstone bosons in the linear sigma model and the Nambu-Jona-Lasinio model and the problem of current conservation in theories with massive vector mesons.

I. INTRODUCTION AND SUMMARY
For the description of quantum field theories in and also out of thermal equilibrium Φ-derivable approximations [1,2,3] have gained a lot of attention in recent years [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. Such an approach provides a tool to go beyond purely perturbative calculations by resumming whole classes of diagrams. At the same time the approximation scheme is thermodynamically consistent [1,5]. Typically the generating functional which defines the approach is introduced as a functional of one- and two-point functions (classical fields and propagators). Consequently, the key quantity Φ is calculated from two-particle irreducible (2PI) diagrams (see e.g. [20] for a generalization to n-PI diagrams). In this way, one deals with full propagators, while vertices are treated at a perturbative level.
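As a reminder of the formalism being referred to, and only as a schematic sketch (the paper's own equations are not reproduced here, and signs and factors of i and 2 depend on conventions), the 2PI (CJT) effective action and the stationarity condition that resums the propagator read:

```latex
% Schematic 2PI/CJT effective action; conventions vary between references.
\Gamma[\phi,G] = S[\phi]
  + \tfrac{i}{2}\,\mathrm{Tr}\ln G^{-1}
  + \tfrac{i}{2}\,\mathrm{Tr}\!\left[D^{-1}(\phi)\,G\right]
  + \Phi[\phi,G] + \text{const},
\qquad
\frac{\delta\Gamma}{\delta G} = 0
\;\Longrightarrow\;
G^{-1} = D^{-1}(\phi) - \Sigma,
\quad
\Sigma = 2i\,\frac{\delta\Phi}{\delta G}.
% Here D(\phi) is the tree-level propagator in the background field \phi and
% \Phi is the sum of two-particle irreducible vacuum diagrams; truncating \Phi
% resums the propagator while the vertices stay at their bare values.
```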
In such a scheme, the only thing one can say about the (in-)dependence on the choice of representation is that the results should become less dependent on the representation if one includes more and more processes/diagrams (in a systematic way). Therefore, in practice, where one cannot solve the full quantum field theoretical problem, the choice of representation might matter. As will be outlined in the present work, it can indeed be used as a tool to improve the symmetry properties of a resummation scheme. The rest of the present work is structured as follows: in the next section we discuss the linear sigma model and in section III the Nambu-Jona-Lasinio model. In both sections we focus on the problem that the propagator of the (supposed-to-be) Goldstone bosons might not propagate massless modes if it is calculated within a Φ-derivable approximation with resummed propagators but bare vertices. We will show that such a problem appears if a linear representation for the Goldstone boson fields is used, whereas Goldstone modes remain massless for a non-linear representation. Note that a similar line of reasoning is also presented in [31], however not in the context of Φ-derivable schemes. For the Nambu-Jona-Lasinio model, a redefinition of the quark fields will also be important, on top of the change of representation for the Goldstone boson fields. In section IV we turn to a different problem, namely the current (non-)conservation in theories with massive vector mesons. Here it will turn out that the use of a tensor representation for the vector mesons is superior to the frequently used vector representation. We will also present a projector formalism which deals with the technical aspects of the tensor representation. As a side remark on the treatment of tadpoles in Φ-derivable schemes, we have added an appendix.

II. LINEAR SIGMA MODEL

As a first example we take the O(N + 1) linear sigma model [32]. Here φ is an (N + 1)-component vector (linear representation). The Lagrangian (1) is invariant with respect to the global transformations (3) with an arbitrary matrix S ∈ O(N + 1). For m² > 0 (and low enough temperatures), the system described by (1) has a non-trivial ground state which spontaneously breaks the symmetry (3). We choose the (positive) φ_0 direction and find the corresponding vacuum expectation value (4). To study the (quantum and thermal) fluctuations around this ground state, we perform a shift in (1), which yields the Lagrangian (6) with the mass (7) for the φ_0 mode. Note that the spontaneous symmetry breaking induces a new three-point interaction term, the last term on the right hand side of (6). Spontaneously broken global symmetries cause the appearance of massless Goldstone modes [32,33]. For the studied system these are the φ_i (i ≠ 0) modes. Indeed, on the tree level there are no mass terms for the φ_i modes. Single loop diagrams induce mass terms. In a perturbative expansion, however, such mass terms cancel in the sum of all contributing loop diagrams. An example of such a cancellation is depicted in figure 1. There, all one-loop self energy diagrams for the φ_i modes are shown. The sum of these contributions is proportional to I_a + I_b + I_c + I_d + I_e, where the individual contributions correspond to the φ_i snail diagram, the φ_0 snail diagram, the φ_i tadpole, and the φ_0 tadpole. We have introduced the Matsubara formalism [34] to calculate the diagrams at finite temperature. In the following, our considerations will be at a purely formal level. Therefore we are not concerned with the renormalization of the expressions in (8)-(12).
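For concreteness, one common way of writing the O(N + 1) linear sigma model and the masslessness condition for the would-be Goldstone modes is sketched below. The couplings and normalization are an illustrative convention and may differ from the paper's equations (1)-(7).

```latex
% O(N+1) linear sigma model in a common convention; phi is an (N+1)-component
% vector, phi^2 = phi_a phi_a:
\mathcal{L} \;=\; \tfrac{1}{2}\,\partial_\mu \phi\,\partial^\mu \phi
  \;+\; \tfrac{1}{2}\, m^2\, \phi^2 \;-\; \tfrac{\lambda}{4}\,\bigl(\phi^2\bigr)^2 .
% Non-trivial ground state along the phi_0 direction and fluctuations:
\langle \phi_0 \rangle \;=\; v \;=\; \sqrt{m^2/\lambda}, \qquad
\phi_0 \;\to\; v + \phi_0 , \qquad m_{\phi_0}^2 \;=\; 2\,m^2 .
% Goldstone condition for the phi_i (i != 0) modes: no tree-level mass, and the
% loop-induced self energy must vanish at zero external momentum,
\Pi_i(k)\big|_{k \to 0} \;=\; 0 .
```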
A mode remains massless if the corresponding self energy Π vanishes at zero external momentum. The following decomposition (14) for diagram (e) ensures that the φ_i modes remain massless: the first two terms on the right hand side of (14) cancel the result of the sum of I_a + I_b + I_c + I_d, whereas the last term vanishes with the external momentum k. In a Φ-derivable approximation, the self energy diagrams are again the ones shown in figure 1, with the important difference that now all internal lines should be regarded as full instead of bare propagators. The necessary cancellation between diagram (e) and the sum of the others would still take place if the corresponding difference vanished with the external momentum k (cf. equation (14)); here we have introduced self energies Π_0 and Π_i for the φ_0 and φ_i modes, respectively. Instead of an expression which vanishes with k, one obtains expression (16). In general, the right hand side of (16) does not vanish, since the self energies Π_0 and Π_i are not the same. For example, the self energy for the φ_0 mode has an imaginary part coming from the decay into two (ideally massless) φ_i modes. This decay channel is not present for a φ_i mode. There is a second way to see that the cancellation does not work any more. For that purpose we work out which perturbative diagrams are generated from the Φ functional of figure 3 and which are not. Obviously, one generates (besides infinitely many other perturbative diagrams) the (perturbative!) two-loop self energy shown in figure 2(a). (Note that the lines in figure 3 denote full propagators, whereas the lines in figure 2 denote bare propagators.) On the other hand, the diagrams depicted in figure 2(b) and (c) are not generated from Φ as given in figure 3. Three-loop diagrams for Φ would be necessary here. As already pointed out, all diagrams of figure 2 would be needed to ensure that the φ_i modes remain massless at the two-loop level. More generally, the symmetries (which dictate the appearance of Goldstone modes) lead to Ward identities which typically connect propagators and vertices. If one resums propagators to all orders but truncates the vertices, one might get problems, since the necessary cancellation of different diagrams is no longer ensured. Note that, e.g., diagram 2(a) is a propagator correction to diagram 1(e), while diagram 2(b) is a vertex correction. Also note that the inclusion of diagram 2(b), even with full propagators, would not solve the problem in a 2PI Φ-derivable scheme: using full propagators one would need a full vertex and not just bare plus one-loop. The previous discussion shows that one has to look for a formalism where such cancellations are not needed. To see that this is indeed possible and practically conceivable, we turn to the non-linear representation (see e.g. also [31]), in which φ is written in terms of a radial field and a unit vector U with U² = 1. Obviously, as a unit vector in N + 1 dimensions, U contains N degrees of freedom which we will call "π modes" in the following. In the non-linear representation the Lagrangian (1) takes a new form. Spontaneous symmetry breaking induces a finite vacuum expectation value for the σ mode, which of course agrees with (4). With the corresponding shift one gets the Lagrangian (21). Obviously, in (21) the π modes appear only with derivatives. Therefore, all interactions vanish in the limit of soft momenta and no mass terms are induced. This statement is separately true for each conceivable Feynman diagram for the π self energy. Therefore no subtle cancellations are needed to ensure the appearance of Goldstone modes. In any Φ-derivable scheme based on (21) the π modes remain massless.
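A minimal sketch of this non-linear (polar) parametrization, in the same illustrative convention as before and not necessarily in the normalization of the paper's equations (17)-(21), looks as follows.

```latex
% Polar decomposition: radial mode sigma and a unit vector U carrying the N
% would-be Goldstone degrees of freedom.
\phi \;=\; \sigma\, U , \qquad U \cdot U \;=\; 1 ,
\qquad
\mathcal{L} \;=\; \tfrac{1}{2}\,\partial_\mu \sigma\, \partial^\mu \sigma
  \;+\; \tfrac{1}{2}\,\sigma^2\, \partial_\mu U \cdot \partial^\mu U
  \;+\; \tfrac{1}{2}\, m^2 \sigma^2 \;-\; \tfrac{\lambda}{4}\, \sigma^4 .
% Shift around the ground state:
\sigma \;=\; v + \tilde{\sigma} , \qquad v = \sqrt{m^2/\lambda} ,
\qquad m_{\tilde\sigma}^2 \;=\; 2\,m^2 .
% One possible parametrization of U, e.g. U = (\sqrt{1-\pi^2/v^2},\,\pi/v),
% shows that the pi fields enter only through derivatives of U, so every pi
% vertex carries external momenta and no pi mass can be generated by loops.
```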
Therefore a non-linear representation is better suited for such a resummation scheme. For practical applications, the unit vector U must be expanded in powers of the pion field. This is completely analogous to the non-linear sigma model and its extension to chiral perturbation theory [35]. We will not elaborate on this issue here any further. We note, however, that in spite of the chosen non-linear representation in (21), this Lagrangian still describes the linear sigma model, since the sigma mode is not frozen but has a tree level mass as given in (7).

III. NAMBU-JONA-LASINIO MODEL

As a second example we study the Nambu-Jona-Lasinio (NJL) model [36,37,38]. It is widely used as a quark model which possesses the chiral symmetry of QCD. The spontaneous breakdown of this symmetry in vacuum and its restoration at finite temperatures and/or baryon densities have been studied extensively within the NJL model. Also the appearance of Goldstone modes can be studied explicitly within this model. For simplicity we restrict ourselves in the following to two quark flavors. One way to write down the Lagrangian is given in (22), with the current quark mass m. Integrating out the sigma and pion fields (which possess no dynamics at tree level) one obtains the usual NJL Lagrangian with its four-quark couplings. The NJL model possesses a systematic expansion scheme, namely an expansion in inverse powers of the number of quark colors N_c. In that context we note that G ∼ 1/N_c (see e.g. [37]). In leading order of the 1/N_c expansion the quarks get a dynamically generated constituent mass (Hartree approximation). The corresponding quark self energy is depicted in figure 4. Note that the solid line in the loop is supposed to be a full quark propagator, i.e. "bubbles within bubbles" are implicitly generated by this diagram. For the mesons, e.g. the pion, the leading order contribution to the self energy is given by the one-loop diagram shown in figure 5(a). Again the solid lines denote full (here Hartree) quark propagators. Note that this quark loop also yields a kinetic term for the pion which is obviously not present at tree level, i.e. in the Lagrangian (22). This kinetic term is proportional to the square of the pion decay constant (see e.g. [37]). At leading 1/N_c order the dynamics of the quarks (the generation of a constituent quark mass) influences the meson properties. On the other hand, the dynamics of the mesons is not fed back to influence the quark properties. Actually it is only the expectation value of the sigma, i.e. the one-point function, which causes the Hartree diagram. The connection of tadpoles and one-point functions is discussed from a somewhat more general point of view in the appendix. It is only at next-to-leading order that the meson propagators influence the quark properties. Therefore, processes like quark-quark or quark-meson scattering come into play only at next-to-leading order of the 1/N_c expansion. On the other hand, such processes are of interest, e.g., for a dynamical description of the chiral phase transition [39]. The next-to-leading order contributions, O(N_c^0), come from the other two diagrams of figure 6. To generate the mesons and couple the dynamics of quarks and mesons in a selfconsistent way, one has to involve (at least) all the diagrams of figure 6. However, as we will discuss next, such a selfconsistent scheme again has problems ensuring that the pions keep their character as Goldstone bosons. Obviously there is a mass term 1/(2G) for the pion at tree level.
In the chiral limit (m = 0) this mass term is canceled exactly by the quark loop depicted in figure 5(a), if the quark propagator is determined at the Hartree level depicted in figure 4. For finite quark masses one obtains the Gell-Mann-Oakes-Renner relation [40]. The cancellation between tree level and quark loop does not work any more if the quark propagator is changed beyond the Hartree level without changing the corresponding vertices. For example, the Φ-derivable approximation depicted in figure 6 generates the perturbative contribution shown in figure 5(b) but not the diagram of figure 5(c). Only both diagrams (b) and (c) of figure 5 together ensure that the pion remains massless (in the chiral limit). In the following, we will demonstrate how this problem can be circumvented. Now we turn to a non-linear realization by identifying σ + iτ_a π_a = σ̃ e^{iτ_a π̃_a/F} =: σ̃ U. Here we have introduced a constant F; to be more specific, at present F is an arbitrary parameter which drops out of all physical quantities. When a kinetic term for the pion is generated from the loops, this free parameter is properly replaced by the pion decay constant F_π. Inserting (23) in (22), we get a Lagrangian where we have introduced chiral projectors P_{R/L} := (1 ± γ_5)/2 and right/left-handed quarks q_{R/L} := P_{R/L} q. Obviously, in this non-linear representation of the boson fields there is no pion mass at the tree level. Still, it might happen that a pion mass is generated by loops. In the following, we will demonstrate how to avoid that. To this end, we also change the representation for the quark fields [41,42], equation (25). This yields the Lagrangian (26). In the following, we will only be concerned with the non-linear representation. Therefore we drop from now on the tilde and prime assignments to the fields in (23) and (25). Obviously, in (26) all interactions between the pion fields (encoded in U) and the quarks come with derivatives of the pion fields or with the current quark mass. Thus, in the chiral limit, soft pions decouple from the quarks. No mass terms for the pion can therefore be generated in any loop order (in the chiral limit). No cancellation of diagrams is needed to achieve that property, since each interaction vertex separately ensures the decoupling of pions. In powers of 1/N_c, the Φ-derivable approximation shown in figure 6 (using the linear representation (22)) yields the generating functional up to O(N_c^0). The same accuracy can be obtained in the non-linear representation if all U's appearing in (26) are expanded up to O(1/F²). Note that F² ∼ N_c, which can easily be obtained from the Gell-Mann-Oakes-Renner relation. We obtain the Lagrangian (27). Actually, there emerges one more term with two pion fields, the Weinberg-Tomozawa term ∼ q̄ γ^µ ε_{abc} π_a ∂_µ π_b τ_c q. On the level of approximation which we treat in figure 7, this term enters the calculation of Φ only as a tadpole type contribution. Since the Weinberg-Tomozawa term is flavor changing, this tadpole vanishes (as long as there is no isospin chemical potential). Of course, this would change if we were interested in a calculation of Φ beyond O(N_c^0). The pion mass M_π is obtained from the self energy diagrams depicted in figure 9. Obviously, diagrams (a), (b), (c) and (d) are proportional to M_π², m M_π, m² and m, respectively. Thus, comparing diagrams (a) and (d) yields M_π² ∼ m, and diagrams (b) and (c) are only subleading corrections.
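As a reminder, in standard two-flavor conventions (this is the textbook form of the relation, not the paper's own derivation), the Gell-Mann-Oakes-Renner relation referred to here reads:

```latex
% Gell-Mann-Oakes-Renner relation at leading order in the current quark mass m,
% with the two-flavor condensate <qbar q> = <ubar u + dbar d>:
M_\pi^2\, F_\pi^2 \;=\; -\, m\,\langle \bar{q} q \rangle \;+\; \mathcal{O}(m^2) ,
\qquad
\langle \bar{q} q \rangle \;\equiv\; \langle \bar{u}u + \bar{d}d \rangle .
```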
Indeed, it is easy to see that the correct Gell-Mann-Oakes-Renner relation emerges from diagrams (a) and (d): the quark condensate appears in diagram (d), together with the vertex ∼ m. On the other hand, diagram (a) is caused by the pseudovector interaction in (27) which couples the derivative of the pion field to the axial-vector current. The latter defines the pion decay constant. Thus, M_π² F_π² emerges from diagram (a). We conclude that the Goldstone boson character of the pion is respected in a Φ-derivable approach to the NJL model if the starting point is the Lagrangian (26) instead of (22). Therefore, the non-linear representation is clearly better suited for studies of the low-temperature phase where chiral symmetry is spontaneously broken. However, it is important to note that there is one aspect where the linear representation has its merits: in (22) one can clearly see the chiral partners of the model, σ and π, which become degenerate for temperatures above the phase transition [37]. (FIG. 9: Pion self energy obtained in the non-linear representation from the Φ-functional shown in figure 7. Note that all internal lines denote full propagators.) Naturally, a non-linear representation does not display the chiral partners so explicitly. The reason why the use of the non-linear representation becomes technically rather involved close to or even above the phase transition can be understood as follows: we have argued that 1/F is the proper expansion parameter to go from (26) to the practically useful Lagrangian (27). Close to the chiral transition, however, the relevant F becomes small. Therefore, an expansion in 1/F ceases to be useful. A numerical study of the Φ-functional shown in figure 7 as a function of the temperature is beyond the scope of the present work. At least for temperatures below the chiral transition, the non-linear representation is an appropriate tool, as it respects the Goldstone boson character of the pions also within resummation schemes.

IV. VECTOR MESONS AND CURRENT CONSERVATION

We now turn to a different problem encountered in resummation schemes, namely the violation of current conservation for systems with massive vector mesons. Typically, vector mesons are described by a Lorentz vector field V^µ, which has four components. On the other hand, a massive vector meson has only three polarizations. The (free) equation of motion (Proca equation [43]) is then constructed such that one component of the vector field is frozen by the condition (28). If interactions are switched on and the vector mesons are only coupled to currents j^µ which are conserved, then (28) remains valid in the presence of these interactions, at least for a full solution of the quantum field theoretical problem. However, the conservation of j^µ is typically a consequence of an internal symmetry which, on the level of n-point functions, connects propagators and vertices (an example is discussed in [6]). For approximate solutions, and especially when using resummation schemes, it might therefore happen that current conservation is spoiled. As a consequence, (28) is also violated and the current non-conservation is proliferated by the now non-vanishing longitudinal mode of the vector meson. Recipes for taming such problems, and possible new problems caused by the use of such recipes, are discussed in [6,44,45,46,47]. In the following we will demonstrate how the problem of current non-conservation can be circumvented by a different choice of representation for the vector meson field.
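For reference, the free Proca equation and the resulting constraint (presumably the condition labeled (28) above) can be sketched in standard form; the sign convention for the source term varies between references.

```latex
% Proca equation for a massive vector field V^mu with field strength F^{mu nu}:
\partial_\mu F^{\mu\nu} \;+\; m^2\, V^\nu \;=\; j^\nu ,
\qquad
F^{\mu\nu} \;\equiv\; \partial^\mu V^\nu - \partial^\nu V^\mu .
% Taking the divergence, the antisymmetry of F^{mu nu} removes the first term:
m^2\, \partial_\nu V^\nu \;=\; \partial_\nu j^\nu ,
% so for a conserved current (or for the free field) the longitudinal component
% is frozen:
\partial_\nu V^\nu \;=\; 0 .
```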
We start with the vector meson Lagrangian (29), which contains a source term j^µ and the field strength constructed from V^µ. Interactions of the vector mesons with other fields can be encoded in j^µ. Note that typical interactions of vector mesons with other fields can be written in the way (29). However, it is not the most general case: also terms like V_µ V_ν j^{µν} are conceivable. Indeed, if vector meson masses are generated via a Higgs field φ [48], terms with j^{µν} ∼ g^{µν} φ² appear. Terms with two vector fields in the interaction part are unfortunately not suitable for the field redefinitions which we discuss below. Therefore, our framework does not cover the case of dynamical vector meson mass generation and, as we will see below, also not the case of massless vector mesons. Nonetheless, the Lagrangian (29) covers a large body of frequently used hadronic Lagrangians with vector mesons. Concerning field redefinitions for vector states we also refer to appendix B in [49]. We introduce a new antisymmetric tensor field F̃_{µν} and study the new Lagrangians (30) and (31). Since F̃_{µν} appears at most in quadratic order in (30), this field can be integrated out. Since there are no derivatives acting on this field, one obtains again a local action. Actually, one ends up with the original Lagrangian (29). On the other hand, the Lagrangians (30) and (31) differ only by an irrelevant total derivative. Thus all three Lagrangians are equivalent. In (31) the field V^µ appears only up to quadratic order and without derivatives acting on it. Hence, V^µ can easily be integrated out. To achieve a proper normalization of the kinetic terms we introduce F̃_{µν} =: m V_{µν} and obtain the Lagrangian (33). The first two terms on the right hand side are just the mass and kinetic term of the free Lagrangian for vector fields in the tensor representation [35,50,51]. The other two terms represent the interaction of the vector field with the source and a quadratic source term. The latter induces a point interaction if the source is expressed in terms of the fields the vectors are supposed to interact with. In contrast to the original interaction term V_µ j^µ, the new interaction term j_µ ∂_ν V^{νµ} automatically projects onto transverse states; here we have used that ∂_µ ∂_ν V^{νµ} vanishes, which holds since V^{να} has been introduced as an antisymmetric tensor. Obviously, problems with current conservation are now no longer proliferated by the vector mesons. Therefore a tensor representation for the vector mesons provides a better starting point for selfconsistent approximations than the frequently used vector representation. Trading a vector field with one Lorentz index for a tensor field with two indices, one might get the feeling that in practice this becomes technically rather difficult. However, such difficulties always have a simple solution: one just needs the proper projectors to decompose everything into scalar quantities (times projectors). Without going into much detail, we present in the following the projectors for the tensor representation which are required for (equilibrium) in-medium calculations, i.e. for a situation where one has a Lorentz vector p which specifies the medium. In such a case one can distinguish vector mesons which move with respect to the medium from those which do not. (In vacuum one can always boost to the frame where the vector meson is at rest.) For moving (massive) vector mesons one can distinguish their respective polarization [52]: either it is longitudinal (l) or transverse (t) with respect to the three-momentum of the vector meson.
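Before turning to the explicit projectors, the transversality property used above can be made explicit with a short calculation; the decomposition of the source below is an illustration and is not taken from the paper.

```latex
% The doubly contracted derivative of an antisymmetric tensor vanishes,
\partial_\mu \partial_\nu V^{\nu\mu} \;=\; 0
\qquad \text{since} \qquad V^{\nu\mu} \;=\; -\, V^{\mu\nu} .
% Hence, writing the source as a transverse piece plus a gradient,
% j^\mu = j^\mu_T + \partial^\mu \chi, the non-conserved remainder drops out of
% the tensor-representation coupling after partial integration:
\int d^4x\; j_\mu\, \partial_\nu V^{\nu\mu}
 \;=\; \int d^4x\; j^{T}_{\mu}\, \partial_\nu V^{\nu\mu} .
```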
(We recall that the polarization is always transverse (T) with respect to the four-momentum.) The pertinent tensor structures are the unit tensor, the tensor transverse with respect to the four-momentum k, the tensor longitudinal with respect to the four-momentum k, the tensor transverse with respect to the four-momentum k and longitudinal with respect to the three-momentum k, and the tensor transverse with respect to both four- and three-momentum. Obviously, all tensors are constructed such that they are antisymmetric with respect to an exchange of the first (third) and the second (fourth) index. This just reflects the property of the basic object, the antisymmetric tensor field V_{µν}. It is easy to check that the corresponding projector relations hold, where the product "⊗" is of course defined by contracting the last two indices of the first tensor with the first two indices of the second tensor. The free propagator is given in [50]; it shows that only transverse (T) modes are propagated while the longitudinal (L) mode is frozen. Of course, for a free propagator there is no distinction between transverse (t) and longitudinal (l) polarizations. Finally, we note that we have nothing clever to say about massless vector mesons: from (33) it is obvious that the transformations which lead from the original Lagrangian (29) to (33) only work for m ≠ 0. Therefore, the formalism developed in the present section does not work for massless vector states (and, as already discussed at the beginning of this section, also not for the case where the mass is dynamically generated). The formalism does work, however, for typical hadronic Lagrangians involving massive vector mesons.

APPENDIX

(FIG. 10: Φ-functional in two-loop order for the φϕ² model. Solid lines denote ϕ modes, wiggly lines φ modes. It will turn out that the wiggly line in the left diagram must be a bare (non-dynamical) propagator, whereas the wiggly line in the right diagram is a full propagator. See main text for details.)

Here, O_c denotes the one-point functions (classical fields) and D_O the propagators. Φ is now given by all two-particle irreducible diagrams. These are all diagrams which do not fall apart if two lines are cut. In particular, the left diagram of figure 10, the tadpole diagram, does not enter here, since it already falls apart after cutting one line. On the other hand, tadpole diagrams do show up in perturbative evaluations of the self energy. So the question emerges: where are the tadpole diagrams in the Φ-approach? Indeed, a tadpole type diagram does emerge, but it is not the left diagram of figure 10. Instead, the diagram depicted in figure 11 must be considered (cf. also [5]). Note that here the wiggly line cannot be cut, since the cross together with the line denotes one object, namely φ_c. Of course, the right diagram of figure 10 and higher loop orders contribute to Φ. All these diagrams, however, do not contain φ_c (in our simple toy model (A1)). The contribution of figure 11 to the Φ-functional follows accordingly. In the following we aim at a generating functional for the two-point functions only. In other words, we want to write down a Φ-functional which only depends on the propagators and no longer on the classical fields. Of course, this new Φ-functional should still yield the correct self energy for the Dyson-Schwinger equation.
To derive this new Φ-functional, all we have to do is solve the equations of motion for the classical fields and plug the solutions into S_c[φ_c, ϕ_c] + Φ[φ_c, ϕ_c, D_φ, D_ϕ], which appears in (A2). The equation of motion for ϕ_c follows from the stationarity of this functional. In a thermal system, expectation values are independent of the coordinate. We do not consider spontaneous symmetry breaking for the ϕ mode and conclude that ϕ_c = 0. The equation of motion for φ_c is more involved, equation (A5); the last term on its right hand side is exactly the contribution generated from the diagram in figure 11. Again, in a thermal system, φ_c is also independent of the coordinate. Using ϕ_c = 0, one obtains an explicit solution for φ_c. By inspection of this last formula we see that in this way the left diagram of figure 10 emerges, with a subtle but important aspect: the wiggly line is a bare propagator and not a full one. In this way, taking the derivative of this diagram with respect to D_φ yields zero, since there is no full φ propagator. On the other hand, taking a derivative with respect to D_ϕ yields the tadpole diagram. We have chosen the simple model (A1) to illustrate in which way tadpole diagrams enter the Φ-formalism. In our simple case the equations of motion for the classical fields could be solved exactly. This can be different for more complicated theories. What is generic, however, is the fact that in a Φ-functional of propagators only, tadpole diagrams must be included, but with "tadpole tails" which are bare and not full propagators. The same line of reasoning applies to the models discussed in sections II and III.
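As a minimal illustration of this structure, consider a toy Lagrangian of the φϕ² type; this is an assumption about the form of the paper's model (A1), which may contain further mass or self-interaction terms.

```latex
% Two-field toy model with a phi varphi^2 coupling (illustrative form only):
\mathcal{L} \;=\; \tfrac{1}{2}\,\partial_\mu \varphi\,\partial^\mu \varphi
  - \tfrac{1}{2}\, m_\varphi^2\, \varphi^2
  + \tfrac{1}{2}\,\partial_\mu \phi\,\partial^\mu \phi
  - \tfrac{1}{2}\, m_\phi^2\, \phi^2
  - \tfrac{g}{2}\, \phi\, \varphi^2 .
% Stationarity with respect to the constant classical field phi_c (with
% varphi_c = 0 in thermal equilibrium) gives, schematically,
m_\phi^2\, \phi_c \;=\; -\,\tfrac{g}{2}\, \langle \varphi^2 \rangle
 \;=\; -\,\tfrac{g}{2}\, D_\varphi(x,x) ,
% i.e. eliminating phi_c in favor of D_varphi produces the tadpole diagram with
% a *bare* phi propagator 1/m_phi^2 attached to the varphi loop.
```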
Relative quantification of neuronal polar lipids by UPLC-MS reveals the brain protection mechanism of Danhong injection

Promising results from clinical trials have fueled a growing acceptance of Danhong injection (DHI) as a Chinese Materia Medica standardized product for the treatment of ischemic stroke. However, little information is available on the underlying mechanisms of DHI, especially in lipidomics. In this study, experiments on permanent middle cerebral artery occlusion (MCAO) in mice were carried out to confirm the protective effect of DHI. Furthermore, primary mouse cortical neurons subjected to oxygen-glucose deprivation (OGD) were used to investigate the protective mechanisms of DHI. A UPLC-MS profiling analysis for neuronal polar lipids including phosphatidylcholines (PCs), sphingomyelins (SMs) and ceramides (Cers) was carried out in order to study the potential biomarkers in this OGD-induced neuron injury model. The results showed that pretreatment with DHI resulted in a significantly smaller infarct volume and better neurological scores than pretreatment with saline in MCAO mice. In the OGD-induced neuron injury model, DHI exhibited remarkable neuroprotection by reducing the neuronal damage and the excessive accumulation of intracellular reactive oxygen species, and by suppressing intracellular free calcium influx and apoptosis. Meanwhile, 28 biomarkers of PC, SM and Cer sub-species of neuronal injury induced by OGD were identified for the first time. The perturbations could be partly reversed by DHI intervention, for example PC (17:0,0:0), PC (18:0,0:0), PC (16:0,0:0), PC (P-16:0,0:0) and SM (18:0,16:0). The results specifically provide information on the relationships between PC, SM and Cer sub-species and neuronal damage mechanisms during an ischemic stroke. Overall, we found that the therapeutic effects of DHI on cerebral ischemia are partially due to interferences with the PC and SM metabolisms.

Introduction

Stroke represents the second leading cause of death worldwide, with a high incidence of morbidity often observed in surviving victims. Stroke results in approximately 6,000,000 deaths annually, with ischemic strokes accounting for about 90%. 1,2 Ischemic stroke results in a series of complex and multifaceted pathological and physiological alterations, including reactive oxygen species (ROS) outburst, inflammatory mediator overproduction, lipid peroxidation, [Ca2+] overload, and blood-brain barrier (BBB) disruption. [3][4][5] However, the mechanisms of ischemic stroke are largely unclear, and clinical trials have failed to show positive effects in patients with ischemic stroke. Therefore, systematic investigations of the complex pathological cascades during ischemic brain injury may help in the development of effective treatments and novel therapeutic tools against cerebral ischemia. Traditional Chinese Medicine (TCM) has been practiced for millennia, using multiple components to treat as well as prevent many complex and refractory disease states. Danhong injection (DHI), a Chinese Materia Medica standardized product, has attracted a considerable amount of attention in the search for more agents and treatment options for ischemic cerebrovascular diseases. The raw materials of DHI are Radix Salviae miltiorrhizae and Flos Carthami tinctoria. Chemically, a number of phytochemical constituents, including catechols derived from Radix Salviae miltiorrhizae as well as quinochalcones and flavonoids derived from Flos Carthami tinctoria, have been identified in DHI.
6 Ten main components of DHI, including protocatechuic aldehyde, ferulic acid, salvianolic acid B, etc., have been quantitatively analyzed for quality control using HPLC-UV. 7 Potential mechanisms of DHI have been reported by various research groups, involving antifibrinolytic and antioxidant effects, 8 mediation of transcription factors, 9 activation of the Nrf2 and NF-κB signaling pathways, 10,11 as well as brain and heart co-protection effects by reinstating the arginine vasopressin level. 12 In a recent study, DHI showed a strong ameliorative effect on cerebral ischemia-reperfusion damage in rats due to its protective effect on the blood-brain barrier and the reversal of neutrophil infiltration by suppressing the upregulation of matrix metallopeptidase-9 expression. 13 Although investigations on the mechanism of DHI have resulted in significant breakthroughs, investigations on the mechanism of DHI from a lipidomics perspective are extremely rare. It has already been described in the literature that brain tissue contains a high concentration of phospholipids and sphingolipids. 14,15 Ischemic injury, characterized by low oxygen and insufficient glucose supply, may induce changes in the composition of membrane phospholipids and sphingolipids and may further initiate the production of second messengers for cellular signal transduction. Lipidomics, a rapidly expanding research field in metabonomics, offers powerful tools to understand the pathogenesis of ischemic injury and may further help uncover the therapeutic mechanisms involved in TCM. In neurons or brain tissue, polar lipids, particularly phosphatidylcholines (PCs), sphingomyelins (SMs) and ceramides (Cers), play a crucial role in physiological functions and pathological processes. PCs, the most abundant phospholipid on the outer surface of cellular membranes, are important for maintaining the cellular structure and exhibit important functions in signal transduction and membrane trafficking. 16,17 The metabolism of SMs creates a variety of byproducts that play a significant role in cell homeostasis. When ischemia occurs, PCs and SMs are susceptible to extensive hydrolysis. 18,19 SM hydrolysis results in the accumulation of Cers, which activates diverse signaling pathways and leads to cell damage and death. 20,21 PC hydrolysis usually causes functional disorders of cell membrane fluidity and may further result in surface-to-nucleus messenger transduction phenomena in neural cells. 22 Although changes in the amount of PCs, SMs and Cers may be highly significant to ischemic stroke, the relationships between these polar lipid changes, especially at the sub-species level, and neuron injury remain largely unclear. In this study, a UPLC-MS profiling analysis for neuronal polar lipids, including PCs, SMs and Cers, was carried out on neurons subjected to oxygen-glucose deprivation as a method to investigate the therapeutic mechanisms of DHI. In an effort to obtain reliable results and improve the accuracy of the lipid analysis, a target database including the identified 98 PCs, 28 SMs and 41 Cers was constructed based on a LC-MS platform using internal standards as well as the relative data of accurate masses and MS/MS fragments of the organic samples. The corresponding checklist can be found in the ESI.† Before the lipidomics analysis, permanent middle cerebral artery occlusion (MCAO) mice were used to confirm the protective effect of DHI in vivo.
Subsequently, primary mouse cortical neurons subjected to oxygen-glucose deprivation (OGD) were used to investigate the pathogenesis of acute ischemic stroke along with the underlying mechanisms of DHI.

MCAO surgery and drug administration

Male C57BL/6 mice were obtained from the Animal Breeding Centre of Beijing Vital River Laboratories Company. The project identification code was 20162003. All experimental procedures were approved by the Academy of Chinese Medical Science's Administrative Panel on Laboratory Animal Care and performed in accordance with the institutional guidelines and ethics of the committee of the China Academy of Chinese Medical Sciences (February 1st, 2016). In this study, a permanent middle cerebral artery occlusion (MCAO) mouse model was applied. According to the methods described previously, MCAO surgery was carried out by intraluminal occlusion using a monofilament. 23 Twenty-four male C57BL/6 mice were randomly divided into 4 groups: sham operation (Sham), MCAO with water treatment (Model), MCAO with DHI treatment (5 mL kg⁻¹) and MCAO with Ginaton (positive drug) treatment (5 mL kg⁻¹). The drugs were intraperitoneally injected twice a day, in the morning and evening, for a total of 3 days. On the third day, after the last injection in the morning, stroke was induced in the mice by MCAO. Sham mice were subjected to the same procedures with the exception of the nylon filament insertion into the common carotid artery.

Evaluation of neurological defects and infarct volume measurement

To evaluate the protective effects of DHI against ischemic stroke, the neurologic function and infarct area were measured. Six hours after the MCAO operation, the neurological function was blindly evaluated by Longa's Neurological Severity Score. 24 Then, all mice were euthanized with a lethal dose of isoflurane. Five coronal sections of the brain (1 mm thickness) were immediately cut and the slices were stained with 0.5% 2,3,5-triphenyltetrazolium chloride (Sigma, St. Louis, MO, USA) for 15 minutes at 37 °C. Finally, numeric images were captured for the quantification of the infarct volume. The infarct volume of each slice was calculated as the infarct area × thickness (1 mm). The summation of the infarct volumes of all brain slices was defined as the total infarct volume.

Primary mouse cortical neuron culture and drug administration

Primary mouse cortical neurons from embryos of timed pregnant (14 days) C57BL/6 mice were isolated and cultivated by methods described previously. 25 After isolation of the cortical neurons, the cells were counted and plated into poly-L-lysine-coated culture dishes with equal cell numbers (1 × 10⁶ cells per well in 6-well plates, 3 × 10⁴ cells per well in 96-well plates). The neurons were maintained at 37 °C in a humidified incubator with a 5% CO2 atmosphere. The culture medium, supplemented with 10% DES, 200 mM L-glutamine, 1 M D-glucose and 15 nM 5-FU, was half changed with fresh culture medium every 2 days. Eight days after plating, the purity of the cells was confirmed by mouse anti-MAP2 staining (Ab11267, 1:200; Abcam, Cambridge, UK). As the purity was calculated to be over 87%, this stage was deemed best for further studies (results of MAP2 immunoreactivity can be found in the ESI†). Neurons in the OGD group were washed with HBSS, and the culture media were then replaced with glucose-free HBSS and incubated for 6 h in an oxygen-free N2/CO2 (95%/5%) atmosphere at 37 °C.
To determine the effect of DHI, cultured cortical neurons were treated with DHI (1 to 0.01 mL mL⁻¹) or 0.01 mM EDA during the 6 h of OGD. EDA was used as a positive control during OGD. Cortical neurons undergoing neither drug treatment nor OGD served as the control. At the end of the cell treatments, different tests were carried out as described below. The 6-well plates and 96-well plates were used for the lipidomics analysis and all other evaluations, respectively.

Measurement of cell viability, reactive oxygen species, [Ca2+] levels and apoptosis

Cell viability was quantitatively assessed by measurement of the LDH released into the bathing medium. Cortical neuron supernatants were collected and the LDH activity in the medium was determined according to the manufacturer's protocol of the LDH assay kit (Dojindo, Kumamoto, Japan). 26 The levels of intracellular free calcium were determined by loading the cells with fluo-3/AM. The neurons were washed and subsequently incubated with 1 mM of fluo-3/AM in the dark at 37 °C for 30 min. The fluo-3 fluorescence was excited at 488 nm and measured at 520 nm with a microplate reader (SpectraMax M5, USA). 27 The formation of ROS was determined by using the fluorescent probe 2,7-dichlorofluorescein diacetate (DCFH-DA). Cell-permeant, nonfluorescent DCFH-DA has been shown to be oxidized to the highly fluorescent species 2,7-dichlorofluorescein in the presence of ROS. Neurons were washed with PBS and incubated with 10 mM DCFH-DA for 1 h at 37 °C in the dark. The fluorescence intensity was measured using a microplate reader (SpectraMax M5, USA) at an excitation wavelength of 488 nm and an emission wavelength of 525 nm. 28 The apoptosis rate was measured using an annexin V-FITC/PI apoptosis detection kit (Nanjing Jiancheng Biotech Co, Ltd., Nanjing, China) and flow cytometry (BD, CA, USA). Neurons were washed with PBS and subjected to annexin V-FITC and propidium iodide (PI) double staining as described in the manufacturer's instructions. After incubation for 30 minutes at 37 °C, the stained neurons were analyzed by flow cytometry and the rate of cell apoptosis was determined. 29

Lipid profiling analysis of cortical neurons

The cells (1 × 10⁶ cells per well) were gently scraped to dislodge them from the plate and then transferred to a 3 mL Eppendorf® tube. After centrifugation for 5 minutes at 100 g at 4 °C, the supernatant was removed. Then, 1.5 mL of chloroform/methanol/water (3:1:1, v/v) was added and the mixture was ultrasonicated in an ice-water bath for 1 h. The mixture was then centrifuged for 5 minutes at 100 g at 4 °C and the liquid phase was transferred and evaporated under a stream of nitrogen. Subsequently, the residue was dissolved in 200 µL of isopropanol/acetonitrile (1:1, v/v), followed by centrifugation at 4 °C (13,000 × g for 10 min). The supernatant was then analyzed by LC-MS. A Thermo Scientific™ Q Exactive hybrid quadrupole-Orbitrap mass spectrometer equipped with a HESI-II probe was used in the positive electrospray ionization mode. The HESI-II spray voltage was 3.7 kV, the heated capillary temperature was 320 °C, the sheath gas pressure was 30 psi, the auxiliary gas setting was 10 psi, and the heated vaporizer temperature was 300 °C. The parameters of the full mass scan were as follows: resolution of 70,000, auto gain control target under 1 × 10⁶, maximum isolation time of 50 ms, and m/z range of 150-1500.
Data processing and statistical analysis

The raw LC-MS data were imported into the Skyline software (http://skyline.gs.washington.edu/) for the relative quantification of the lipid species according to the retention time and accurate mass in the constructed polar lipid database. Briefly, the workflow for using Skyline for the analysis of the 165 targeted polar lipids consisted of the following steps: (1) prepare a flat file containing the 165 molecules; (2) import the small-molecule list into Skyline, building an analysis template; (3) import the raw data. The chromatographic data for each lipid were manually inspected to determine the quality of the signal and the peak shape. Before the chemometrics analysis, all of the detected ion signals in each sample were normalized to the obtained total ion count value. Multivariate analysis was performed using the SIMCA-P 12.0 software. A Principal Component Analysis (PCA) was first used as an unsupervised method to visualize the differences between all groups. Supervised regression modeling was then performed on the data set by using Partial Least Squares Discriminant Analysis (PLS-DA) to identify the potential biomarkers. The biomarkers were then filtered and confirmed by combining the results of the VIP values (VIP > 1) and t-tests (P < 0.05). All values measured in vivo and in vitro are presented as means ± standard error of the mean. Statistical significance was determined by one-way ANOVA followed by Tukey's multiple comparison test or Student's t-tests. A value of P < 0.05 was considered to be statistically significant.

DHI reduces MCAO-induced infarct size and improves neurologic function

In order to investigate the protective effects of DHI against MCAO in mice, the neurologic function and infarct area were determined. Six hours after the MCAO operation, the mean neurological score in the model group (2.45 ± 0.19) was found to be significantly (P < 0.001) greater than that of the sham group, indicating a neurological defect after MCAO. In the DHI and Ginaton (positive drug) groups, the neurological defect was determined to be significantly improved compared to the model group (P < 0.01, cf. Fig. 1A). A similar phenomenon also appeared in the cerebral infarct area in the serial coronal brain sections. As shown in Fig. 1B and C, MCAO-induced ischemia produced a marked infarct area in the serial coronal brain sections. TTC staining of the relevant mouse tissue after DHI treatment showed a significantly lower degree of ischemic injury compared to the MCAO mice. Moreover, the corresponding infarct volumes also demonstrated that both DHI and Ginaton exhibited significant protective effects against MCAO-induced ischemic injury. All of the experimental results taken in concert suggest a crucial protective effect of DHI on ischemic stroke in vivo.

DHI protects against the OGD-induced neuron injury

LDH is a stable cytoplasmic enzyme present in most cells and is often found to be rapidly released into the cell culture supernatant upon plasma membrane damage. As shown in Fig. 2A, the leakage rate of LDH in the OGD group dramatically increased compared to the control group, indicating cortical neuron injury. Treatment with DHI (3, 1, 0.3, 0.1, 0.03, and 0.01 mL mL⁻¹) or EDA (0.01 mM, positive drug) showed a remarkable reduction of the leakage rate of LDH compared to the OGD group. These results confirmed that DHI exhibits a degree of neuroprotection against the OGD-induced cortical neuron injury.
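As an illustration of the data-processing and biomarker-filtering workflow described above (total-ion-count normalization, PCA overview, PLS-DA with VIP > 1 and a t-test at P < 0.05), a minimal sketch in Python is given below. The file name, column layout and group labels are hypothetical, and the original analysis was performed with Skyline and SIMCA-P rather than with this code.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical input: rows = samples, columns = lipid intensities + a 'group' label
data = pd.read_csv("lipid_intensities.csv")
groups = data["group"]                      # e.g. "control", "OGD", "DHI", "EDA"
X = data.drop(columns="group")

# 1) Normalize each detected ion signal to the sample's total ion count
X_norm = X.div(X.sum(axis=1), axis=0)

# 2) Unsupervised PCA overview (3 components, as in a 3D score plot)
pca_scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X_norm))

# 3) Supervised PLS-DA (control vs. OGD) and VIP scores
mask = groups.isin(["control", "OGD"])
Xs = StandardScaler().fit_transform(X_norm[mask])
y = (groups[mask] == "OGD").astype(float).to_numpy()
pls = PLSRegression(n_components=2).fit(Xs, y)

W, T, Q = pls.x_weights_, pls.x_scores_, pls.y_loadings_
ssy_per_comp = (T ** 2).sum(axis=0) * (Q ** 2).sum(axis=0)   # y-variance explained per component
vip = np.sqrt(Xs.shape[1] * (W ** 2 @ ssy_per_comp) / ssy_per_comp.sum())

# 4) Univariate t-test per lipid, then combine VIP > 1 and P < 0.05
ctrl = X_norm[mask & (groups == "control")]
ogd = X_norm[mask & (groups == "OGD")]
pvals = np.array([ttest_ind(ctrl[c], ogd[c]).pvalue for c in X_norm.columns])
biomarkers = X_norm.columns[(vip > 1) & (pvals < 0.05)]
print(biomarkers.tolist())
```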
A calcium surge represents a crucial step in brain damage after a stroke. As shown in Fig. 2B, the levels of calcium in the OGD group were all found to be significantly increased compared to the control group. Compared to the OGD group, upon treatment with DHI (1, 0.3, 0.1, 0.03, and 0.01 mL mL⁻¹) or EDA (0.01 mM), the levels of calcium were found to be remarkably reduced. These results suggested that DHI can reduce the intracellular calcium influx to prevent further damage. Fig. 2C shows the changes in the ROS-dependent fluorescence intensity detected in the cortical neurons of each group. As shown here, the DCF fluorescence intensity significantly increased when compared to the control group, indicating that OGD induced an obvious elevation in the ROS level (P < 0.001). (Fig. 1 Neurological scores and infarct area by DHI pre-treatment: neurobehavioral score (A), infarct area rate (B) and TTC staining of the brain (C). #P < 0.05, ##P < 0.01, ###P < 0.001, the model group versus the sham group; *P < 0.05, **P < 0.01, ***P < 0.001, the DHI group (or positive group) versus the model group. Histograms represent mean ± SD, n = 6.) However, the overproduction of intracellular ROS was found to be significantly reduced following pretreatment with DHI (1, 0.3, 0.1, 0.03, and 0.01 mL mL⁻¹) or EDA (0.01 mM), further indicating that DHI reduced the intracellular ROS levels in OGD-treated cortical neurons (P < 0.01). Taken in concert, all experimental results demonstrated cerebral protective effects of DHI on ischemic stroke. DHI could reduce the LDH release and the excess generation of ROS. Furthermore, the intracellular calcium influx and apoptosis were found to be suppressed upon DHI treatment. Among the doses in the DHI-treated OGD groups, a DHI dose of 0.3 mL mL⁻¹ exhibited the best neuroprotective efficacy. Therefore, this dose was selected for the subsequent lipidomics analysis. QC samples were used to demonstrate the stability of the LC-MS system. Five quality control (QC) samples of cell blends were run at the beginning of the sequence, and QC samples were then run at regular intervals (every ten samples) throughout the entire sequence. The RSDs of the peak areas and retention times of all identified lipids in the QC samples were calculated, and we determined that more than 90% of the RSDs were less than 20% for the QC samples. The values with RSD > 20% in the QC group were excluded. Therefore, the repeatability and stability of the global experimental performance were high and suitable for this study. Furthermore, the clustering of the QC samples in the PCA scores scatter plot also demonstrated a satisfactory stability and repeatability of this lipidomics profiling analysis method (cf. Fig. 4).

Identification of potential biomarkers

To investigate the global lipidomics metabolism variations, PCA was used to analyze all observations acquired. PCA, an unsupervised pattern recognition method for handling metabolomics data, can classify the lipid metabolic phenotypes based on all imported samples. As shown in the PCA score 3D plot (cf. Fig. 4), an overview of all samples in the data can be obtained, and a clear grouping trend (R2X, 0.781; Q2, 0.706) between the control group, the OGD group, the DHI-treated group and the positive drug group could be observed. The OGD group versus the control group exhibited a pronounced separation. This observation indicates that OGD processing may disturb the metabolism of the lipids compared to the normal state.
DHI and the positive drug exhibited an effect on the OGD-induced damage, although the trajectories of the treated groups did not show a complete separation from the OGD group and did not return to the normal state. To further confirm which polar lipids can be used as selective and sensitive biomarkers for OGD-induced neuron injury, PLS-DA was applied to compare the lipid changes between the OGD model group and the control group. As demonstrated by the PLS-DA scores scatter plot (cf. Fig. 5A), a clear separation of the control group versus the OGD model group could be observed. The cumulative R2X and Q2 were 0.876 and 0.869, respectively, in the PLS-DA model. No over-fitting could be observed according to the results of the chance permutation test (cf. Fig. 5B). As shown there, the R2Y-intercept was 0.84 for the sham group compared to the model group. Furthermore, all green R2 values to the left were found to be lower than the original points to the right, indicating that the original model was valid. Afterwards, the significantly changed lipids of the OGD model group compared with the control group were filtered out based on the VIP values (VIP > 1) and t-test (P < 0.05). Subsequently, a total of 28 potential lipid biomarkers of OGD-induced cortical neuron damage, including 18 PCs, 9 SMs, and 1 Cer, were studied. The changed levels and names of the biomarkers are shown in Fig. 6.

Discussion

The pathophysiological processes of ischemic stroke-induced brain injuries are complex and generally poorly understood, with many questions remaining unanswered. A variety of studies have been devoted to explaining the mechanisms and providing better therapeutic approaches for ischemic stroke. DHI, a standardized commercial product derived from TCM, has long been used to treat and prevent ischemic stroke, although the mechanisms are still not fully understood. In this study, a MCAO-induced mouse injury model confirmed the protective effects of DHI by decreasing the infarct volume and improving neurological functions. Furthermore, an OGD-induced neuron injury model demonstrated the cell-protective effect of DHI from the various perspectives of cell viability, ROS, [Ca2+] levels and apoptosis. Based on the OGD-induced neuron injury model, a lipidomics profiling analysis with a targeted polar lipid extract was carried out to reveal the pathogenesis of acute ischemic stroke and the underlying mechanisms of DHI. For the first time, 28 biomarkers of PCs, SMs and Cers of neuron injury induced by OGD were identified according to the database generated with the standards, and the perturbations could be partly reversed by DHI intervention. Monitoring the changes in these lipids may help shed light on the mechanism of OGD-induced neuron injury and the efficacy of DHI.

Sphingomyelin metabolism

Sphingomyelin (SM), a sphingolipid species, is located in the membrane myelin sheath which surrounds some nerve cell axons. 30 As an essential modulator, SM influences the membrane clustering of proteins involved in cellular proliferation, growth and apoptosis. 31,32 Furthermore, SM represents an important source of ceramide. 33 Previous studies have shown that the levels of SMs are crucial to ischemic stroke, but mostly based on tissue-level measurements. In this study, lipidomics was conducted for the first time on primary mouse cortical neurons subjected to OGD. It has already been shown that the energy requirements of the brain are satisfied by the glucose metabolism and the oxygen needed for the phosphorylation of ADP to ATP.
ATP is vital to maintain intracellular homeostasis and the transmembrane ion gradients of sodium, potassium, and calcium. Oxygen and glucose deprivation results in the rapid loss of ATP, which further causes an uncontrolled calcium leakage in certain disease-related events. As shown in Fig. 2B, the calcium levels were found to be significantly increased after OGD compared to the control group, and the rapid increase of calcium leads to the activation of sphingomyelinase, which plays an important role in the sphingomyelin cycle. 34 In the sphingomyelin cycle, SMs are hydrolyzed to ceramides by the activated sphingomyelinases. 21 Ceramide, a hydrolysis product of sphingomyelin, is reported to play an important role in cell death as well as cell cycle arrest and generally serves as a second messenger. Several signaling pathways can be regulated by ceramide, including a diverse range of protein kinases and phosphatases. 35 Previous studies have shown that the further release of ceramide-1-phosphate (C1P), a hydrolysis product of Cer, results in the activation of phospholipases. 36 The activation of phospholipases further increases the hydrolysis of PCs. Moreover, Cer has been shown to promote the release of arachidonic acid (ARAC), which acts as an inflammatory intermediate. 36,37 The release of ARAC results in the activation of cyclooxygenase (COX) and the generation of ROS, which causes lipid peroxidation. 38,39 Eventually, this process leads to apoptosis, as highlighted in Fig. 7. In this study, compared to the control group, the levels of the 9 significantly modified SM species found in the OGD model were shown to decrease by 10-20%, a finding that is in accordance with the hydrolysis theory (cf. Fig. 6). As the levels of Cers were very low, only 3 Cer subspecies were found in the neuron samples and only the level of Cer (d14:1/18:0) changed significantly. However, the levels of ROS and apoptosis in the OGD group were confirmed and showed a remarkable increase compared to the control group.

Phosphatidylcholine metabolism

Phosphatidylcholines (PCs) are not only found in the composition of cell membranes, but also serve as ligands of the peroxisome proliferator-activated receptor α (PPARα), a transcription factor regulating the expression of various genes that govern lipid metabolism. 40 The metabolic products of PCs, e.g. fatty acids, may act as secondary messengers for cellular regulation. 41 Prior studies have provided evidence that the degradation of membrane phospholipids may play a key role in ischemic injury. 14,42 As shown in Fig. 7, with the uncontrolled leakage of calcium, phospholipases such as phospholipase A2 were also found to be activated, in addition to sphingomyelinase. Subsequently, free fatty acids (FFAs), biologically active lipid mediators in the brain, were released. The large polyunsaturated fatty acid (PUFA) pools lead to peroxidation of membrane lipids, which further results in functional impairment. 43 In this study, of the 18 PC biomarkers, we observed marked alterations in the levels of 12 PC sub-species (cf. Fig. 6). The increased PC levels may also serve as endogenous neuroprotective cytokines, similarly to gangliosides (GM1). 45 However, this notion, as well as the underlying mechanisms, requires further studies to be entirely confirmed.

Potential mechanism of DHI

DHI, a Chinese Materia Medica standardized product with multiple components, has been reported to exhibit various pharmacological activities.
In this work, an OGD-induced primary mouse cortical neuron injury model was used for the first time to confirm the activities and neuroprotective effects of DHI in reducing neuronal damage and the excess generation of ROS, and in suppressing intracellular calcium influx and apoptosis. As shown in Fig. 6, we found that after DHI treatment, of the 28 potential lipid biomarkers of OGD-induced cortical neuron injury, only one SM level, i.e. SM (18:0,16:0), and 4 PC levels, i.e. PC (17:0,0:0), PC (18:0,0:0), PC (16:0,0:0), and PC (P-16:0,0:0), in the DHI-treated group exhibited a significant tendency toward normal levels (P < 0.05), which suggests that the protective effects of DHI on cerebral ischemia are closely related to the regulation of the PC metabolism. Furthermore, among the 6 significantly increased PC biomarkers in the OGD model which may serve as endogenous neuroprotective cytokines, PC (18:0/22:5) was found to be increased in the DHI-treated group (P < 0.05). However, upon EDA treatment, the results were not the same as with DHI. It has already been shown that EDA, as a brain-protecting agent, may act as a free-radical scavenger. Of the 28 potential lipid biomarkers of OGD-induced cortical neuron injury, 2 PC levels demonstrated a trend toward normal (P < 0.05) in the EDA-treated group compared to the OGD model group. Between DHI and EDA, no common reverted biomarkers were present, further indicating that the mechanisms of DHI and EDA may not be the same.

Conclusions

A UPLC-MS profiling analysis for neuronal polar lipids including PCs, SMs and Cers was developed and applied to explore the mechanisms of OGD-induced primary mouse cortical neuron injury and the therapeutic effects of DHI. Before the lipidomics analysis, MCAO-induced mice were used to confirm the protective effects of DHI in vivo. After DHI treatment, we observed decreases in neurological deficits and cerebral infarct sizes, indicating that DHI exhibits therapeutic efficacy for the treatment of cerebral ischemia. Furthermore, primary mouse cortical neurons subjected to OGD were used to investigate the protective mechanisms of DHI in vitro. DHI was found to reduce neuronal damage and the excess generation of ROS. Furthermore, the intracellular calcium surge and apoptosis were observed to be suppressed. Based on the OGD-induced neuron injury model, 28 biomarkers of PC, SM and Cer sub-species of neuron injury induced by OGD have been accurately identified. The perturbations could be partly reversed by DHI intervention, for example PC (17:0,0:0), PC (18:0,0:0), PC (16:0,0:0), PC (P-16:0,0:0) and SM (18:0,16:0). We found that the therapeutic effects of DHI on cerebral ischemia are partially due to interferences with the PC and SM metabolisms. These results specifically shed light on the relationship between PC, SM and Cer sub-species and the mechanism of neuronal damage during ischemic strokes. Potentially, these modified lipids may be used as biomarkers of ischemic cerebral injury for clinical diagnosis and treatment.

Conflicts of interest

All authors have approved the manuscript and agree with its submission. We declare no conflicts of interest.
Towards Improving the Practical Energy Density of Li-Ion Batteries: Optimization and Evaluation of Silicon:Graphite Composites in Full Cells

Increasing the energy density of Li-ion batteries is crucial for the success of electric vehicles, grid-scale energy storage, and next-generation consumer electronics. One popular approach is to incrementally increase the capacity of the graphite anode by integrating silicon into composites with capacities between 500 and 1000 mAh/g, as a transient and practical alternative to the more challenging silicon-only anodes. In this work, we have calculated the percentage of improvement in the capacity of silicon:graphite composites and their impact on the energy density of a Li-ion full cell. We have used the Design of Experiment method to optimize composites using data from half cells, and it is found that a 16% improvement in the practical energy density of Li-ion full cells can be achieved using 15 to 25 wt% of silicon. However, full-cell assembly and testing of these composites using a LiNi0.33Mn0.33Co0.33O2 (NMC) cathode have proven to be challenging, and composites with no more than 10 wt% silicon were tested, giving 63% capacity retention (95 mAh/g) at only 50 cycles. The work demonstrates that introducing even the smallest amount of silicon into graphite anodes is still a challenge, and that overcoming it requires improvements to the different components of the Li-ion battery.

Most commercial Li-ion batteries still use carbon as the anode material, as they have since they were first commercialized in 1991, due to its low cost and excellent electrochemical performance, especially its long cycle life. 1 However, the reversible electrochemical intercalation of Li+ into the graphite structure is limited to one lithium per six carbons (LiC6), which results in a theoretical capacity of 372 mAh/g. To that end, there are ongoing efforts to explore higher-capacity anode materials to meet the increasing demand for batteries with higher energy density. This is done by exploring materials that store and release Li+ ions by electrochemical mechanisms other than intercalation, such as electrochemical alloying, e.g. tin 2 and silicon 3 or their composites (Sn-Co-C), 4,5 or conversion reactions, mostly in oxides 6 of transition metals such as Co, Ni, Cu, or Fe, or mixed oxides such as spinel-like ZnMn2O4. 7 However, the battery performance of both types is still not promising, as they show poor electronic conductivity, high irreversible first-cycle capacity, high insertion/conversion potential, large volume expansion, and large voltage hysteresis during cycling. The most promising materials that are currently under extensive R&D and have a better chance of replacing carbon are elements (mostly metals) that can electrochemically alloy with lithium (Si, Al, Sn, Ge, Bi, Sb, Ag, Mg, Pb). Table I shows the theoretical capacities of elements that can alloy with lithium. It is clear that silicon has the highest capacity, reaching 4200 mAh/g (gravimetric) and 9800 mAh/cm3 (volumetric). Germanium has the second highest gravimetric and volumetric capacity; this is due to the higher density of germanium, as shown in Table I. Aluminum has a density similar to that of silicon; however, due to its lower molar interaction with lithium (limited to 1:1), it gives a much lower capacity than the other metals. Even though aluminum has a low specific capacity among these metals, it has the highest electrical conductivity, which can reduce polarization resistance during charge/discharge and hence improve battery performance. Silicon is very attractive since it comes from an abundant source, it is cheap, and it has a high theoretical capacity of 4200 mAh/g.
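Where these theoretical capacities come from can be checked with a few lines of arithmetic. The sketch below simply applies Faraday's law; the molar masses and the Li4.4Si stoichiometry are standard values, and the snippet is an illustration rather than code from this study.

const F = 96485; // Faraday constant, C/mol

// Theoretical gravimetric capacity in mAh per gram of host:
// Q = n * F / (3.6 * M), with n = moles of Li stored per mole of host
// formula unit and M = molar mass of the host in g/mol.
function theoreticalCapacity(nLiPerHost, molarMassHost) {
  return (nLiPerHost * F) / (3.6 * molarMassHost);
}

// Graphite, LiC6: 1 Li per 6 carbons (6 * 12.011 g/mol)
console.log(theoreticalCapacity(1, 6 * 12.011).toFixed(0)); // ~372 mAh/g
// Silicon, Li4.4Si: 4.4 Li per Si (28.086 g/mol)
console.log(theoreticalCapacity(4.4, 28.086).toFixed(0));   // ~4199 mAh/g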
8,9 It reacts with lithium by forming the alloy SiLix with 0 ≤ x ≤ 4.4. Taking up such a high quantity of lithium involves large structural (volume) changes that can reach up to 400%. 1,10,11 This gives rise to mechanical stresses that lead to pulverization of the silicon structure, together with repeated solid electrolyte interface (SEI) formation, which eventually causes the failure of the battery. 12 Many solutions to the problem have been proposed: (1) The use of nano-sized or nanostructured silicon, which usually provides higher experimental capacity and better capacity retention 8,13-15 because the volume change can be accommodated by free volume or fast stress relaxation. (2) The use of silicon/metal composites, where a metal that does not alloy with Li+ acts as a matrix that minimizes the volume expansion. 7,16,17 (3) The use of an upper limit on the capacity so that silicon alloys only partially with Li+, to control the volume changes. 18 (4) The use of new binders that can accommodate the volume expansion better than the conventional binder, polyvinylidene fluoride (PVDF). 19-22 Examples of binders investigated in this regard are alginic acid (AA), 23 polyamide-imide (PAI), 24 sodium or lithium salts of polyacrylic acid (NaPAA or LiPAA), 14,22,25 polyimide (PI), 26 sodium or lithium salts of carboxymethyl cellulose (NaCMC or LiCMC), 21 and conductive binders. 27,28 (5) The use of silicon in a composite with graphitic carbon at low content, typically and preferably less than 20 wt%, taking advantage of graphite's favorable physical and chemical properties. This leads to lower anode capacities than using Si alone, but it shows better capacity retention and good cycle life. This solution has gained a lot of popularity among researchers and manufacturers as a short-term alternative to graphite because of the experimental difficulties faced in exploiting the full silicon capacity with adequate cycle life. Graphitic carbon in itself is still interesting as an anode material, and chances are it will remain the main anode material in commercial batteries for some time due to mature manufacturing processes and good battery performance. In this regard, it acts as a diluent/buffer that mitigates the total volume expansion of the composite; using less of the alloying element can also lower the cost when the element is more expensive than carbon. In this paper, we have studied silicon along with other elements that can alloy with lithium, to optimize the capacity of their composites with graphitic carbon and to evaluate their impact on the practical energy density of full Li-ion cells. We have modified a cell-based model developed by Obrovac et al., 29 applied it to silicon/graphitic carbon composites, and observed improvements in the energy density of the Li-ion full cell using different cathode materials. The Design of Experiment method has been used to optimize the capacity of the composites.

Characterization.-Battery cycling was carried out on half and full cells using 2325-type coin cells (supplied by the National Research Council of Canada) assembled in an argon-filled dry glove box. Capacity measurements were performed by galvanostatic experiments carried out on a multichannel Arbin battery cycler (BT2000). For half-cells, the working electrode was first discharged (lithiated) down to 5 mV and then charged (delithiated) up to 1.5 V versus Li/Li+ galvanostatically.
The anode and cathode electrode films were prepared on high-purity copper and aluminum foil current collectors, respectively (the copper foil was cleaned using a 2.5% HCl solution to remove the copper oxide layer), using an automated doctor blade, and were then dried overnight at 85 °C in a convection oven. Individual disk electrodes (Ø = 12.5 mm) were punched out, dried at 80 °C under vacuum overnight, and then pressed under a pressure of 0.5 metric ton. Electrodes contained 3-4 mg of active material. A lithium metal disk (Ø = 16.5 mm) was used as the negative electrode (counter and reference electrode). 70 μL of an electrolyte solution of 1 M LiPF6 in ethylene carbonate-dimethyl carbonate (EC:DMC, 1:1, v/v) or ethylene carbonate-diethyl carbonate (EC:DEC, 3:7, v/v) with 10% fluoroethylene carbonate (FEC) was spread over a double layer of microporous polypropylene separators (Celgard 3501 for EC:DMC or Celgard 2500 for EC:DEC, thickness = 30 μm, Ø = 21 mm). The cells were assembled in an argon-filled dry glove box at room temperature and rested overnight before testing.

Results and Discussion

Comparison of the theoretical capacity of alloyable elements and their composites.-Calculations of the effect of anode and cathode specific (gravimetric) capacity on the energy density of full cells based on active materials only.-Increasing the specific capacity of the anode can improve the total cell capacity; however, as we and others have discussed previously, 11,30 a high anode capacity is of limited benefit unless the cathode capacity also improves. To evaluate the effect of the anode specific discharge capacity on the full-cell capacity, we have followed the approach introduced by Kasavajjula et al., 11 who used Equation 1 to calculate the total cell capacity of a commercial 18650 cylindrical cell assuming cathode capacities of 140 and 200 mAh/g; they also reviewed methodologies to prevent the capacity fade of silicon-based anode materials. We have reproduced the results for the cathodes with 140 and 200 mAh/g capacity, as shown in Figure 1, but also extended the calculations to a cathode with higher capacity (300 mAh/g) and calculated the percentage of improvement in total cell capacity compared to the commonly used materials graphite and LiCoO2. Figure 1 shows the total cell capacity of a commercial 18650 cylindrical cell as a function of the anode specific capacity calculated using Equation 1, where QA and QC are the anode and cathode specific capacities (mAh/g) and QM is the term accounting for the mass of inactive materials (mAh/g). The figure clearly shows that in all cases a rapid increase of the total cell capacity occurs until the anode capacity reaches 1000 mAh/g. Above this value, the improvement in capacity is marginal and can be considered of low value when cost and other factors are taken into account. Figure 1 also shows (on the secondary y-axis) the percentage of improvement in total cell capacity relative to a baseline commercial battery with anode and cathode capacities of 300 and 140 mAh/g, respectively. Keeping the anode capacity at 300 mAh/g and increasing the cathode capacity results in an increase in total capacity of 13 and 27% for cathode capacities of 200 and 300 mAh/g, respectively. However, when the anode capacity is 1000 mAh/g, the total capacity increases by 33 and 51% for cathode capacities of 200 and 300 mAh/g, respectively. Also, an increase of 15% in total capacity can still be achieved with a cathode capacity of 140 mAh/g.
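Equation 1 itself is not reproduced above. A commonly used way to write this kind of estimate, and the one assumed in the sketch below, is a harmonic combination of the anode, cathode, and inactive-mass terms; the inactive-mass value used here is a made-up placeholder, so the numbers only illustrate the saturation trend of Figure 1, not its exact values.

// Assumed harmonic form of the total-cell-capacity estimate:
// 1/Q_cell = 1/Q_A + 1/Q_C + 1/Q_M (all terms in mAh per g of the
// respective mass; Q_M lumps the inactive materials together).
function cellCapacity(qAnode, qCathode, qInactive) {
  return 1 / (1 / qAnode + 1 / qCathode + 1 / qInactive);
}

const qM = 250; // mAh/g, hypothetical inactive-mass term
for (const qA of [300, 500, 1000, 2000, 4200]) {
  console.log(`${qA} mAh/g anode -> ${cellCapacity(qA, 140, qM).toFixed(1)} mAh/g cell`);
}
// The cell-level gain flattens once the anode exceeds ~1000 mAh/g.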
The calculations above are only a rough approximation, but they are very useful in demonstrating the effect of changes in the capacity of either the anode or the cathode material on the total capacity. In commercial batteries, other factors have to be taken into account, such as volume, voltages, irreversible capacities, electrode formulations, processing, and geometrical factors.

Calculations of the energy density of alloyable elements.-In commercial batteries, the energy density is one of the most important figures of merit, as it takes into account not only capacity but also the voltage, weight, and volume of all active and inactive components. The volumetric energy density is calculated using the average potential difference between cathode and anode and the molar volume, as shown in Equation 2; the average potentials for the anodes are listed in Table I. The volumetric energy density U is plotted as a function of the volume expansion calculated using Equation 2 and Equation 3, 31 where Equation 3 gives the volume expansion as a function of the number of moles of lithium per mole of host alloy atoms, x. The results are plotted in Figure 2. Plotting energy density versus volume expansion, rather than versus the number of moles of lithium, is more useful as it provides a better guide, later on, for choosing an alloyable element at a given energy density and a tolerable volume expansion. For all the elements, the capacity increases as a simple rational function. It is clear that silicon has the highest energy density but also the highest volume expansion. The differences in the volumetric energy densities of the elements are not large, because the very high density of the metals with the lowest capacities (Sn, Ge, Sb) offsets their high average voltage. However, their gravimetric energy densities vary significantly, with silicon still giving the highest values, followed by aluminum, while the other three give much lower values, as shown in Figure 2.

Calculation of the capacity of composites (alloyable element:graphitic carbon).-The theoretical capacities of composites made of two electrochemically active components, an alloyable element (Si, Al, Sn, Ge, Sb, Bi) and graphitic carbon, and one inactive component, a polymeric binder, were calculated and are shown as ternary diagrams in Figure 3. The capacities were calculated by multiplying the weight fraction of each active component by its theoretical capacity and dividing by the total (active and inactive) weight. A graphite capacity of 375 mAh/g was used, while the values for the alloyable elements were taken from Table I. The volume expansion of the composite was calculated assuming that the volume changes of the binder and graphitic carbon are 0 and 12%, 32 respectively, while the volume expansions of the alloyable elements are the maximum expansions in the fully lithiated state obtained from Equation 3. The color in the ternary diagrams highlights the variation in the theoretical capacity of the composite as a function of the three components in the fully lithiated state. We have also outlined the composites at a volume expansion of 40%. This value was chosen to simulate the free "void" volume available within a commercial battery composite that can accommodate the increase in volume when it is lithiated from the delithiated state. 33-36
For all the alloyable elements, the capacity of the composite is highest when the element content is at its maximum (red region of the diagrams), except for Bi, because of its lower capacity compared to graphite. Again, silicon gives the highest capacity values in composites compared to the other elements. The black line, which represents 40% volume expansion, crosses composites with a large variation in capacity, and Table II shows the minimum and maximum capacities for these composites. At 40% volume expansion, the maximum capacity achieved for a composite is 715 mAh/g (with 9 wt% silicon and 91 wt% graphite) when no binder is used; this is the point where the carbon axis and the black line meet in the ternary diagram in Figure 3. The minimum capacity of 518 mAh/g is obtained in a composite of 12% silicon and 88% binder, at the opposite end of the black line from the maximum capacity. In between, there is a whole range of compositions that can be selected based on the application. In commercial batteries, around 10% of binder and carbon additive is used, and we will assume the amount of binder represents the total of the two, i.e. the carbon additive makes a negligible contribution to the capacity. Also shown in Table II are the capacities of the practical composites (composites with 10% binder) calculated for all the elements. In the case of Si, this practical composite is composed of 10% binder, 9% Si, and 81% graphite and shows 692 mAh/g of reversible capacity, highlighted as a red star in the diagram. The results from Figure 3 and Table II clearly show that silicon is the best element to use, on its own if possible or in a composite, to achieve the highest capacity in a battery, despite its high volume expansion, which, as we have shown, can be limited to tolerable values such as the 40% volume expansion. If more capacity is needed, higher volume expansions will occur; Table III therefore shows the calculated volume expansions for composites with Si and Ge when the capacity is limited to 1000 and 1300 mAh/g. It shows that to reach a higher practical capacity of 1000 mAh/g, double the amount of silicon (17%) is needed, but with a much higher volume expansion (65%) compared to the composite with 40% volume expansion. As the elements have different volume expansions, capacities, and voltages, it is tempting to examine the effect of incorporating more than one element into a composite to obtain higher capacity and lower volume expansion. To study the effect of a two-element mixture, silicon and aluminum were chosen, and the corresponding ternary diagram is included in the supporting information. The maximum capacity at 40% volume expansion is 517 mAh/g, which is lower than for the silicon and graphite mixture. The lower capacity with the two elements is caused by the volume expansion: in the silicon and graphite mixture, the graphite has insignificant expansion compared to the metals, which helps to minimize the volume expansion and improve the capacity. The 40% volume expansion line lies on the right side of the triangle, which implies the use of more than 55% binder and is not practical. A ternary capacity diagram of Si, Al, and carbon with 10% binder is also provided in the supporting information, along with a table showing the capacities and ratios of the mixtures. The maximum capacity was achieved when no aluminum was mixed in.
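The composite bookkeeping described above can be reproduced in a few lines. The sketch below follows the stated recipe (weight-averaged theoretical capacities over the total mass; expansions of 0% for the binder and 12% for graphite); the maximum expansion assumed here for fully lithiated silicon (~300%) is our own illustrative value, which is why the numbers differ slightly from the 692 mAh/g and 40% quoted in the text.

// Composite capacity: weight-averaged theoretical capacities of the active
// components over the total (active + inactive) mass, as described above.
function compositeCapacity(wSi, wC, wBinder, qSi = 4200, qC = 375) {
  return (wSi * qSi + wC * qC) / (wSi + wC + wBinder); // mAh/g of composite
}

// Composite volume expansion: weighted sum of the component expansions.
// Binder 0% and graphite 12% follow the text; the maximum expansion of
// fully lithiated silicon (here ~300%) is an assumed illustrative value.
function compositeExpansion(wSi, wC, wBinder, expSi = 3.0) {
  return (wSi * expSi + wC * 0.12) / (wSi + wC + wBinder);
}

// The "practical" composite of 9% Si, 81% graphite, 10% binder:
console.log(compositeCapacity(0.09, 0.81, 0.10).toFixed(0));                  // ~682 mAh/g
console.log((compositeExpansion(0.09, 0.81, 0.10) * 100).toFixed(0) + " %");  // ~37 %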
This is because, even though aluminum provides three times more capacity than graphite, it increases the volume expansion by eight times, which makes this approach impractical.

Energy density improvements of full cells with silicon-based anodes.-It is apparent from the calculations that silicon is, so far, the best alloying element for improving the capacity of the anode. However, it remains to be seen whether this leads to an improvement in the total cell capacity. Obrovac et al. 29 have recently discussed the key parameters that affect the total capacity of the battery. We have used their equation (Equation 15 in Obrovac et al. 29) as a starting point to calculate the energy density of full cells and modified it by introducing the initial coulombic efficiencies (φ+0 and φ−0) and the volumes of inactive components and other volumes such as the porosity of the active components; we refer to the modified expression as Equation 4, in which t+ and t− are the thicknesses of the cathode and anode, t+cc and t−cc the thicknesses of the cathode and anode current collectors, ts the thickness of the separator, q+R and q−R the reversible capacities of the cathode and anode, and φ+0 and φ−0 the initial coulombic efficiencies of the cathode and anode. Firstly, the energy density was calculated for an anode of silicon only, not a composite, with different cathodes, as a function of the first-cycle irreversible capacity of the silicon. The thickness of the cathode and anode current collectors was 15 μm, the thickness of the separator was 20 μm, and the negative/positive (N/P) ratio was 1.1. Other parameters are summarized in Table IV. Figure 4 shows the percentage of improvement in full-cell energy density as a function of irreversible capacity. The figure clearly shows that the highest irreversible capacities (intersections with the x-axis) that still allow any improvement of the full cells are 33, 31, and 36% for LiCoO2 (LCO), LiNi0.33Mn0.33Co0.33O2 (NMC), and LiMn0.67Ni0.33O2 (LMNO), respectively. It also shows that the maximum improvements are 48, 43, and 58% for LCO, NMC, and LMNO, respectively, when no more than 6% irreversible capacity is allowed for the silicon. This is far less than what is obtained experimentally with silicon in half cells, which usually show about 20% 37 irreversible capacity, corresponding to 15 to 30% overall improvement. However, these improvements are also overestimated, since most experimental results have shown much lower reversible capacities for silicon-only anode materials. Secondly, the same equation was used to find the minimum amount of silicon, in a composite, required to improve the performance of the full cell. Figure 5 shows the improvements as a function of the active volume of silicon. The minimum volumes required to achieve any improvement are 8.19, 9.24, and 6.05% for LCO, NMC, and LMNO, respectively. These were calculated using the same parameters as in Table IV by changing the active volume of silicon. The minimum required silicon is the point where the theoretical capacity is equivalent to that of graphite in a full Li-ion battery. This figure looks similar to Figure 1, since the active amount of silicon corresponds to an increase in anode capacity. As a result, Figure 5 also shows that a larger amount of silicon in a composite (a silicon-rich composite) does not give large additional improvements; instead, the volume expansion due to the high amount of silicon will have detrimental effects on the battery performance. This estimation also assumes a fixed, low irreversible capacity for silicon.
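The text does not reproduce the modified Equation 4 itself, so the sketch below is not that equation; it only illustrates the kind of stack-level bookkeeping such a calculation performs (areal capacities limited by the smaller electrode and scaled by first-cycle efficiency, divided by the thickness of one repeating unit). All parameter values are hypothetical.

// Illustrative stack-level energy density estimate. This is NOT the
// authors' modified Equation 4; it only mimics the same kind of bookkeeping.
function stackEnergyDensity(cell) {
  const arealAnode = cell.qAnode * cell.effAnode * cell.loadAnode;         // mAh/cm2
  const arealCathode = cell.qCathode * cell.effCathode * cell.loadCathode; // mAh/cm2
  const usable = Math.min(arealAnode, arealCathode);   // limited by the smaller electrode
  const tStack = cell.tAnode + cell.tCathode + cell.tSep +
                 cell.tCuCC / 2 + cell.tAlCC / 2;      // one repeat unit, cm
  return (usable * cell.avgVoltage) / tStack / 1000;   // Wh/cm3
}

// Entirely hypothetical graphite/NMC-like parameters, just to run the function:
console.log(stackEnergyDensity({
  qAnode: 350, qCathode: 160,           // mAh/g
  effAnode: 0.92, effCathode: 0.90,     // first-cycle coulombic efficiencies
  loadAnode: 0.010, loadCathode: 0.020, // g of active material per cm2
  tAnode: 0.0070, tCathode: 0.0065, tSep: 0.0020,
  tCuCC: 0.0015, tAlCC: 0.0015,         // cm
  avgVoltage: 3.6                       // V
}).toFixed(2), "Wh/cm3");               // ~0.61 Wh/cm3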
Finally, the silicon content of the composite was entered into Equation 4 to estimate the improvement, and the results are shown in Figure 6, which plots the improvement as a function of the silicon content of the silicon/graphite composite. LMNO gave the highest improvement (11%) without using any silicon at all, due to its high average voltage and capacity. For NMC, at least 1.1% of silicon has to be used to achieve any improvement. The maximum improvements were achieved at around 80 to 90 wt% of silicon, i.e. for silicon-rich composites: improvements of 47, 42, and 57% were achieved for LCO, NMC, and LMNO, respectively. The improvement slows down as the composite becomes very rich in silicon; beyond about 80 wt% silicon, less improvement is achieved because the higher average potential of silicon reduces the overall cell voltage and consequently the energy density of the full cell. During the course of this work, Dash et al. 38 used simple mass-balance calculations to obtain the volumetric capacity and introduced an equation with a porosity/volume-accommodation parameter to determine the theoretical limit of Si in a Si-carbon composite anode that maximizes the volumetric energy density of Li-ion cells. From their calculations, they reported that the improvement in the volumetric and gravimetric energy density of Li-ion cells using a volume-constrained silicon-carbon composite is less than 15% compared to a Li-ion cell using a graphite-only anode. 38

Performance of silicon/graphite composites in half Li-ion cells.-To verify the calculations, we have assembled and tested coin-type half and full cells using Si-carbon composites as the anode, NMC and LMNO as the cathode, and a carbonate-based electrolyte. We have optimized the cell performance by examining the following parameters: type and amount of the electrode components (silicon, graphitic carbon, and binder), type of electrolyte, and thickness of the cast "laminate", as summarized in Table V (summary of the variables used in the DOE optimization of the silicon:carbon:binder anode; the selected variables were obtained from the battery data of 40 different composites, some of which are shown in Figure 7). We have used a design of experiment (DOE) approach to analyze the data and find the optimum performance. Figure 7 shows a selection of the cycling performance of the Li-ion half cells, chosen to represent the various parameters we tested. It can be seen that: (1) there is a large irreversible capacity, ranging from 15 to 200%, corresponding to initial coulombic efficiencies of 30 to 75%; and (2) the reversible capacity ranges from 350 to 2500 mAh/g, with the 33 wt% Si composite giving the highest capacity and the 11 wt% composite the lowest. The variables were factored into numerical values, and the results were fitted to linear and non-linear models representing the irreversible and reversible capacities. From the results, we have selected the best-performing variables, and the fitted models were simplified to optimize the silicon, carbon, and binder composition. The selected variables are also shown in Table V. Our results show some clear outcomes for the better-performing variables, summarized as follows:
- Type of silicon: Etched and non-ball-milled silicon performed better than the other types of silicon. It is well known that etching removes the silicon oxide layers formed on silicon during synthesis and handling. 39
Due to its nanometer size, the silicon has a high surface area that allows the formation of a significant amount of silicon oxide, which reduces the reversible capacity. Even though nano-sized silica has been shown to be electrochemically active, 40 our results indicate that the silicon oxide layer at the surface does not provide any benefit to the capacity, although it helps in interacting with the binder and other electrode components. 41,42 Our previous results showed that ball milling leads to amorphization of crystalline silicon; however, for silicon nanoparticles the amorphization did not improve the reversible capacity. The nano-sized silicon might be small enough that amorphization is not required. 30
- Type of graphitic carbon: Mesocarbon microbeads (MCMB) provided the best performance in the experimental results, the main reason being their good and well-known reversibility. MCMB also acts as a buffer that mitigates the overall volume expansion, owing to its negligible volume change during lithiation.
- Thickness of the cast "laminate": The thickness had no effect on the performance.

The battery data were fitted to linear and non-linear models to estimate the irreversible and reversible capacities. The linear model was selected because the theoretical capacities from the ternary diagram show a linear relation, as seen in Figure 3. However, it is also known experimentally that the capacity of a composite does not follow a linear relation, due to many factors such as irreversible capacity, volume expansion, particle size, and the type of electrolyte or binder. The results of the linear model, shown in Figures 8a and 8b, show a decrease in irreversible and reversible capacity as the amount of silicon increases. Also, some compositions show higher-than-theoretical capacities, an artifact of the linear fitting. For the non-linear fitting, shown in the supporting information (Figures S3b and S3d), the results include unreasonable values, such as negative irreversible and reversible capacities or higher-than-theoretical capacities. As a result, we set limits so that the capacity can neither fall below zero nor exceed the theoretical capacity; the results are shown in Figure 8c for the linear model and in the supporting information (Figure S4) for the non-linear models. Setting these limits was necessary in order to use Equation 4 and find the optimized composite ratios. The estimates from Figures 8b and 8c are used along with Equation 4 to calculate the improvements in a full-cell battery, shown in Figure 8d. Figure 9c shows that the maximum experimental reversible capacity from the fitted model is 1100 mAh/g. This holds for most of the ternary diagrams and is far below the full theoretical capacity of silicon (4200 mAh/g). The low capacity of the silicon composites could be improved in the future; however, the results obtained so far do not contain enough data points to cover the whole range of parameters, which explains the lower-than-expected capacity. Some research groups have reported capacities in the range of 1000 to 4000 mAh/g, 14,23,43 but these were obtained under special experimental conditions such as low loading, thin films, and low C-rates. Some of the cells tested here were assembled with close to the optimum composition of 10:75:15 (Si:carbon:binder, with a theoretical capacity of 700 mAh/g) but showed only 465 mAh/g.
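The capping just described can be sketched as follows; the linear coefficients below are placeholders rather than the fitted DOE values, so the snippet only illustrates the clamping logic, not the actual predictions of the fitted model.

// Capped linear composition model for the reversible capacity. The
// coefficients are illustrative placeholders, not the fitted DOE values.
const coeff = { si: 2500, carbon: 330, binder: -150 }; // mAh/g per unit weight fraction

function predictedCapacity(wSi, wC, wBinder) {
  const linear = coeff.si * wSi + coeff.carbon * wC + coeff.binder * wBinder;
  const theoretical = (wSi * 4200 + wC * 375) / (wSi + wC + wBinder);
  return Math.min(Math.max(linear, 0), theoretical); // clamp to [0, theoretical]
}

console.log(predictedCapacity(0.10, 0.75, 0.15).toFixed(0)); // the 10:75:15 composite discussed above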
The battery results for the silicon composites tested in this work nevertheless provide a trend that will help in the optimization of silicon composites. Figure 8d shows that a maximum improvement of about 16% can be achieved once the amount of silicon exceeds 10 wt%. However, the capacity yields are well below 25%, which suggests that the optimum amount of silicon giving the highest improvement in total cell capacity/energy density lies between 10 and 25 wt% of silicon in the composite. Figure 9b shows that the irreversible capacity averages around 20-23%, which is also an unrealistic estimate, since using carbon alone gives an irreversible capacity below 10%. This mismatch is likewise caused by the linear fitted model. Another reason might be the types of carbon tested in this work; for example, carbon Super S, which is a conductive additive, gives a higher irreversible capacity than MCMB, which showed around 6% irreversible capacity. Also, silicon usually has more than 20% irreversible capacity. Considering the irreversible capacity of each material, a composite with around 20% irreversible capacity is a reasonable basis for estimating the improvements in the full cell. The irreversible and reversible capacities from Figures 8b and 8c are used with Equation 4 to generate Figure 8d. More than 5 wt% of silicon has to be used, with the minimum amount of binder, to obtain any improvement in the full cell; however, more than 25 wt% of silicon is not necessary, since the capacity yields are less than 25%.

Performance of silicon/graphite composites in full Li-ion cells.-To verify the results, we have tested some of the composites in full cells using NMC as the cathode. One of the first challenges was to balance the capacities of the active materials in the two electrodes, cathode (P) and anode (N), in a coin cell. Full cells with N/P ratios ranging from 0.7 to 1.9 were assembled. The full-cell and NMC/Li half-cell discharge capacity results are shown in Figure 9. The composition of the anode was 10 wt% silicon, 85 wt% MCMB, and 5 wt% sodium alginate as the binder. The composition of the cathode was 90 wt% NMC, 5 wt% carbon Super P, and 5 wt% PVDF. The electrolyte was 1 M LiPF6 in EC:DEC (3:7) with 10% fluoroethylene carbonate (FEC) as an additive. The best discharge capacity was achieved at an N/P ratio of 1.76. However, even the optimized full cells show a continuous decrease in capacity, retaining 63% (95 mAh/g) after 50 cycles. For further analysis, the potential profiles and dQ/dV curves are plotted in Figure 10, Figure 11, and the supporting information; Figure 10 and Figure 11 correspond to the graphite/NMC full cell and the Si composite/NMC full cell (N/P = 1.76), respectively. Figure 10 shows the full-cell behavior of NMC with graphite as the anode. The distinctive peaks at 3.4, 3.5, and 3.7 V on charge and discharge are due to the lithiation and delithiation of graphite. The first dQ/dV cycle shows a shift of about 0.5 V to higher voltage, attributed to SEI formation on the graphite during the first lithiation and the resulting slightly higher average potential against Li/Li+. The consistent dQ/dV profile over subsequent cycles reflects well-balanced active materials in the cathode and anode. Unlike graphite, the Si composite in full cells exhibits broader peaks, because lithium is stored by an alloying reaction rather than by intercalation, which in graphite gives sharper peaks.
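The dQ/dV curves referred to above are obtained by numerically differentiating the galvanostatic capacity-voltage data. A minimal sketch of one common way to do this, using successive differences averaged into fixed voltage bins, follows; the field names are ours, not a specific data format.

// Differential capacity (dQ/dV) from galvanostatic data: successive
// differences averaged into fixed voltage bins. `points` is an array of
// {voltage, capacity} samples from one charge or discharge step.
function dQdV(points, binWidth = 0.005) { // 5 mV bins
  const bins = new Map();
  for (let i = 1; i < points.length; i++) {
    const dV = points[i].voltage - points[i - 1].voltage;
    const dQ = points[i].capacity - points[i - 1].capacity;
    if (Math.abs(dV) < 1e-6) continue; // skip near-flat steps
    const bin = Math.round(points[i].voltage / binWidth) * binWidth;
    const prev = bins.get(bin) || { sum: 0, n: 0 };
    bins.set(bin, { sum: prev.sum + dQ / dV, n: prev.n + 1 });
  }
  return [...bins.entries()]
    .map(([voltage, { sum, n }]) => ({ voltage, dqdv: sum / n }))
    .sort((a, b) => a.voltage - b.voltage);
}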
Even though the silicon/NMC full cell with N/P = 1.76 has the best capacity retention among those tested, its dQ/dV profile shows reduced peak intensity, a shift in potential, and even the disappearance of some peaks. The shift in potential has also been observed by others and was attributed to the higher end-of-charge voltages. 44 This effect might be caused by the continuous formation of SEI at the silicon surface and a decrease in the reversible capacity. We had hoped to improve the reversibility of the silicon composite by balancing the active materials of the cathode and anode, but clearly further work is required to optimize the silicon composite anode for use in a full Li-ion cell, as has also been pointed out by other researchers. 44-46 Moreover, because of the challenges faced with this new type of full cell, it was not practical to move to compositions with more than 10 wt% silicon until the capacity-fade issues are resolved. This, of course, requires the discovery of new electrolyte additives, binders, and other battery components.

Conclusions

In this work, we have evaluated the performance of silicon:graphitic carbon composites in half and full Li-ion cells. We first optimized the silicon:graphite composites using a newly modified equation based on the work of Obrovac et al. 29 From the calculations, we found that a 59% improvement in energy density can be obtained when only 6% irreversible capacity is assumed. However, based on the half-cell experimental results, lower improvements are achieved, in graphite-rich composites with low silicon content, reaching a 16% improvement when 20% silicon and 80% graphite are used with no binder; this is due to the high irreversible and low reversible capacities observed experimentally. Realistically, the silicon content would range from 15 to 25%, with 50 to 80% graphite and 5 to 10% binder. The improvements calculated in this work were based on gravimetric capacity, but similar results have been obtained when the theoretical limits were estimated using volumetric capacity and simple mass-balance equations. 38 We found that assembling full cells using the silicon:graphite composites is not straightforward and requires substantial optimization to improve coulombic efficiency and cycleability. For example, full cells assembled with a 10 wt% silicon:graphite composite gave 63% capacity retention (95 mAh/g) after only 50 cycles.
Empathic Autonomous Agents

Identifying and resolving conflicts of interests is a key challenge when designing autonomous agents. For example, such conflicts often occur when complex information systems interact persuasively with humans, and they are likely to arise in non-human agent-to-agent interaction in the future. We introduce a theoretical framework for an empathic autonomous agent that proactively identifies potential conflicts of interests in interactions with other agents (and humans) by considering their utility functions and comparing them with its own preferences, using a system of shared values to find a solution all agents consider acceptable. To illustrate how empathic autonomous agents work, we provide running examples and a simple prototype implementation in a general-purpose programming language. To give a high-level overview of our work, we propose a reasoning-loop architecture for our empathic agent.

Background and Problem Description

In modern information technologies, conflicts of interests between users and information systems that operate with a high degree of autonomy (autonomous agents) are increasingly prevalent. For example, complex web applications persuade end users, possibly against the interests of the persuaded individuals. 1 Given that the prevalence of autonomous systems will increase, conflicts between autonomous agents and humans (or between different autonomous agent instances and types) can be expected to occur more frequently in the future, e.g. in interactions with or among autonomous vehicles in scenarios that cannot be completely resolved by applying static traffic rules. Consequently, one can argue for the need to develop empathic intelligent agents that consider the preferences or utility functions of others, as well as ethics rules and social norms, when interacting with their environment, in order to avoid severe conflicts of interests. As a simple example, take two vehicles (A and B) that are about to enter a bottleneck. Assume they cannot enter the bottleneck at the same time. A and B can either wait or drive. Considering only its own utility function, A might determine that driving is the best action to execute, given that B will likely stop and wait to avoid a crash. However, A should ideally assess both its own and B's utility function and act accordingly. If B's utility for driving is considered higher than A's, A can then come to the conclusion that waiting is the best action. As A does not only consider its own goals, but also those of B, one can regard A as empathic, following Coplan's definition of empathy as "a process through which an observer simulates another's situated psychological states, while maintaining clear self-other differentiation" [12]. While the existing literature covers conflict resolution in multi-agent systems from a broad range of perspectives (see [2] for a partial overview), devising a theoretical framework for autonomous agents that consider the utility functions (or preferences) of agents in their environment and use a combined utilitarian/rule-based approach to identify and resolve conflicts of interests can be considered a novel idea. However, existing multi-agent systems research can be leveraged to implement core components of such a framework, as is discussed later. In this chapter, we provide the following research contributions:
1. We create a theoretical framework for an empathic agent that uses a combination of utility-based and rule-based concepts to compromise with other agents in its environment when deciding how to act.
2. We provide a set of running examples that illustrate how the empathic agent works and show how the examples can be implemented in a general-purpose programming language.
3. We propose a reasoning-loop architecture for a generic empathic agent.

The rest of this chapter is organized as follows: in Section 2, we present a theoretical framework for the problem in focus. Then, we illustrate the concepts with the help of different running examples and describe the example implementation in a general-purpose programming language in Section 3. Next, we outline a basic reasoning-loop architecture for the empathic agent in Section 4. In Section 5, we analyze how the architecture aligns with the belief-desire-intention approach and propose an implementation using the Jason multi-agent development framework. Finally, we discuss how our empathic agent concepts relate to existing work, propose potential use cases, highlight a set of limitations, and outline future work in Section 6, before we conclude the chapter in Section 7.

Empathic Agent Core Concepts

In this section, we describe the core concepts of the empathic agent. To allow for a precise description, we assume the following scenario 2:
- The scenario describes the interaction between a set of empathic agents {A_0, ..., A_n}.
- Each interaction scenario takes place at one specific point in time, at which all agents execute their actions simultaneously.
- At this point in time, each agent A_i (0 ≤ i ≤ n) has a finite set of possible actions Acts_i := {Act_i^0, ..., Act_i^m}, resulting in an overall set of action sets Acts := {Acts_0, ..., Acts_n}. Each agent can execute an action tuple that contains one or multiple actions. In each interaction scenario, all agents execute their actions simultaneously and receive their utility as a numeric reward based on the actions that have been executed.
- The utility of an agent A_i is determined by a function u_i of the actions of all agents. The utility function returns a numerical value or null 3: u_i := Acts_0 × ... × Acts_n → {null, −∞, R, ∞}.

The goal of the empathic agent is to maximize its own utility as long as no conflicts with other agents arise. We define a conflict of interests between several agents as any interaction scenario in which there is no tuple of possible actions that maximizes the utility functions of all agents; i.e., we need to compare arg max u_A0, ..., arg max u_An 4. Note that arg max u_Ai returns a set of tuples (containing all action tuples that yield the maximal utility for agent A_i). For this, we create a boolean function c that the empathic agent uses to determine conflicts between itself and other agents, based on the utility functions of all agents: c evaluates to true exactly when arg max u_A0 ∩ ... ∩ arg max u_An is empty, and to false otherwise. Considering the incomparability property of the von Neumann-Morgenstern utility theorem [24], such a conflict can be solved only if a system of values exists that is shared between the agents and used to determine comparable individual utility values. Hence, we introduce such a shared value system.
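To make the conflict check concrete, here is a small sketch in JavaScript (the language used for the prototype implementation described in Section 3). The bottleneck-style scenario and its utility numbers are made up for illustration and are not the formal definitions used below.

// Conflict check c: a conflict of interests exists if no joint action
// tuple maximizes every agent's utility at the same time.
function argmaxSet(utilityFn, jointActions) {
  const values = jointActions.map(utilityFn);
  const max = Math.max(...values.filter(v => v !== null));
  return jointActions.filter((acts, i) => values[i] === max);
}

function conflict(utilityFns, jointActions) {
  const maximizers = utilityFns.map(u => argmaxSet(u, jointActions));
  const shared = maximizers.reduce((acc, set) =>   // intersection of all maximizer sets
    acc.filter(a => set.some(b => JSON.stringify(a) === JSON.stringify(b))));
  return shared.length === 0; // true -> conflict of interests
}

// Toy bottleneck scenario with made-up utilities (a crash yields -Infinity):
const joint = [["drive", "drive"], ["drive", "wait"], ["wait", "drive"], ["wait", "wait"]];
const uA = ([a, b]) => (a === "drive" && b === "drive") ? -Infinity : (a === "drive" ? 1 : 0.9);
const uB = ([a, b]) => (a === "drive" && b === "drive") ? -Infinity : (b === "drive" ? 1 : 0.8);
console.log(conflict([uA, uB], joint)); // true: no tuple maximizes both agents' utility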
To provide a possible structure for this system, we deconstruct the utility functions into two parts:
- An actions-to-consequences mapping: a function a2c_i that takes the actions the agents potentially decide to execute and returns a set of consequences (propositional atoms) Consqs_i := {Consq_i^0, ..., Consq_i^n}.
- A consequences-to-utility mapping: a utility quantification function uq. Note that the actions-to-consequences mapping is agent-specific, while the utility quantification function is generically provided by the shared value system 5: uq := 2^Consqs → {null, −∞, R, ∞}.

Footnote 3: We allow for utility functions to return a null value for action tuples that are considered impossible, e.g. in case some actions are mutually exclusive. While we concede that the elegance of this approach is up for debate, we opted for it because of its simplicity.
Footnote 4: The arg max operator takes the function it precedes and returns all argument tuples that maximize the function.
Footnote 5: I.e., for the same actions, an agent should only receive a different utility outcome than another agent if the impact on the two is distinguishable in its consequences.

We again allow for null values to be returned in the case of impossible action tuples. Then, agents can agree on the utility value of a given tuple of actions, as long as the quality of the consequences is observable to all agents in the same way. In addition, the value system can introduce generally applicable rules, e.g. to hard-code a prioritization of individual freedom into an agent. With the help of the value system, we create a pragmatic definition of a conflict of interests as any situation in which there is no tuple of actions that is regarded as acceptable by all agents when considering the shared set of values, given that each agent executes the actions that maximize its individual utility function. To support the notion of acceptability, we introduce a set of agent-specific acceptability functions accs := {acc_A0, ..., acc_An}. The acceptability functions are derived from the corresponding utility functions and the shared system of values and take a set of actions as their input. Acceptability functions are domain-specific, and there is no generic logic to be described in this context. The notion of acceptability rules adds a normative aspect to the otherwise consequentialist empathic agent framework. Without this notion, our definition of a conflict of interests would cover many scenarios that most human societies regard as not conflict-worthy, e.g. when one agent would need to accept large utility losses to optimize its own actions towards improving another agent's utility. Considering the acceptability functions, we can now determine whether a conflict of interests in terms of the pragmatic definition exists for an agent A_i by using the following function cp, which takes the utility function u_i of agent A_i and the acceptability functions Accs := {acc_A0, ..., acc_An} as input arguments:

cp(u_i, Accs) := true, if there exists acts ∈ arg max u_i such that for all acc ∈ Accs: acc(acts) = true; false, otherwise.

We define an empathic agent A_i as an agent that, when determining the actions it executes, considers the utility functions of the agents it could potentially affect and maximizes its own utility only if doing so does not violate the acceptability function of any other agent; otherwise it acts to maximize the shared utility of all agents (while also considering the acceptability functions) 6.
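As an illustration of this decomposition, the fragment below sketches an agent-specific a2c mapping composed with a shared uq function for a bottleneck-style scenario; the consequence names and numbers are ours and only loosely anticipate Example 1 below.

// Agent-specific actions-to-consequences mapping for agent A ...
const a2cA = ([actA, actB]) => {
  if (actA === "drive" && actB === "drive") return ["crash"];
  if (actA === "wait") return ["A_waits_10_units", "A_passes"];
  return ["A_passes"];
};

// ... and a shared consequences-to-utility quantification uq.
const uq = consequences => {
  if (consequences.includes("crash")) return -Infinity;
  let utility = 1;                                   // made-up baseline
  if (consequences.includes("A_waits_10_units")) utility -= 0.1;
  return utility;
};

const utilityA = acts => uq(a2cA(acts)); // composed utility function of agent A
console.log(utilityA(["wait", "drive"]));  // 0.9
console.log(utilityA(["drive", "drive"])); // -Infinity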
Algorithm 1 specifies an initial, naive approach towards the empathic agent core algorithm. The empathic agent core algorithm of an agent A_i in its simplest form can be defined as a function that takes the utility functions {u_0, ..., u_n} of the different agents, the set of all acceptability functions Accs := {acc_0, ..., acc_l}, and all possible actions Acts_i of agent A_i, and returns the tuple of actions A_i should execute 7:

…
4: return Acts_i ∩ first(acts_k ∈ best_acceptable_acts)
5: else
6: return Acts_i ∩ first(arg max(aggregate(u'_0, ..., u'_n)))
7: end if
8: end procedure

Note that in the context of the empathic agent algorithms, the function first(set) turns the provided set of tuples into a sequence of tuples by sorting the elements in decreasing alphanumerical order and then returns the first element of the sequence. This enables a deterministic action tuple selection. Moreover, we construct a set of new utility functions {u'_0, ..., u'_n} that assign all not acceptable action tuples a utility of null (Algorithm 2) 8:

Algorithm 2 Helper function: new utility function based on u_i; all not acceptable action tuples yield a utility of null.
…
is_acceptable ← ∀ acc ∈ accs : acc(acts_i) = true
3: if is_acceptable then
4: return u_i(acts_i, ..., acts_n)
5: else
6: return null
7: end if
8: end procedure

In Algorithm 1, we specify that the agent picks the first item in the sequence of determined action tuples if it finds multiple optimal tuples of actions. Alternatively, the agent could employ one of the following approaches to select between the optimal action tuples:
- Random. The agent picks a random action tuple from the list of the tuples it determined as optimal. This would require empathic agents to use an additional protocol to agree on the action tuple that should be executed.
- Utilitarian. Among the action tuples that were determined as optimal, the agent picks the one that provides the maximal combined utility for all agents and falls back to a random or first-in-sequence selection between action tuples if several such tuples exist.

Still, the algorithm is somewhat naive, as agents that implement it will decide to execute suboptimal actions if the following conditions apply:
- Multiple agents find that the actions that optimize their individual utility are inconsistent with the actions that are optimal for at least one of the other agents.
- Multiple agents find that executing these conflicting actions is considered acceptable.
- Executing these acceptable actions generates a lower utility for both agents than optimizing the shared utility would.

Hence, we extend the algorithm so that the agent selects the tuple of actions that maximizes its own utility, but falls back to maximizing the shared utility if the utility-maximizing action tuple is either not acceptable or would lead to a lower utility outcome than maximizing the shared utility, considering that the other agents follow the same approach (Algorithm 3):

…
6: DETERMINE_GOOD_ACTS_MAX(u'_n, Accs, acts_max_n),
7: }
8: if good_acts_max_0 ∩ ... ∩ good_acts_max_n ≠ {} then
9: return Acts_i ∩ first(good_acts_max)
10: else
11: return Acts_i ∩ first(arg max(aggregate(u'_0, ..., u'_n)))
12: end if
13: end procedure

Algorithm 3 calls two helper functions.
Algorithm 4 determines the acceptable action tuples that maximize a provided utility function u_i. Algorithm 5 determines all action tuples that would maximize an agent's (A_i's) utility if this agent could dictate the actions of all other agents, provided these action tuples give this agent a better utility than the action tuples that maximize all agents' combined utility, under the assumption that every agent executes an action tuple that would maximize its own utility if it could dictate the other agents' actions. Note that Algorithm 5 makes use of the previously introduced Algorithm 1:

Algorithm 5 Helper function: determines all maximizing action tuples that would still yield a good utility result for agent A_i (0 ≤ i ≤ n), given that all other agents also pick an action tuple that would maximize their own utility, i.e. if all other agents "played along".
…
4: ≥ u_i(acts_max)
5: end procedure

However, this algorithm only considers two types of action tuples for execution: action tuples that provide the maximal individual utility for the agent, and action tuples that provide the maximal combined utility for all agents. Action tuples that do not maximize the agent's individual utility, but are still preferable over the action tuples that maximize the combined utility, remain unconsidered. Consequently, we call an agent that implements such an algorithm a lazy empathic agent. We extend the algorithm to also consider all action tuples that could possibly be relevant; i.e., if an action tuple is not considered acceptable, or if the tuple is considered acceptable but the agent chooses not to execute it, the agent falls back to the tuple of actions that provides the next best individual utility. We construct a function ne that returns the Nash equilibria based on the updated utility functions {u'_0, ..., u'_n}, considering the strategic game ⟨N, (A_i), (≽_i)⟩ with N := {A_0, ..., A_n}, A_i := Acts_Ai, and acts ≽_i acts' :⇔ u'_i(acts) ≥ u'_i(acts') 9. Then, we create the full empathic agent core algorithm D^F_Ai for an agent A_i, which takes the updated utility functions {u'_0, ..., u'_n} and all agents' possible actions {Acts_0, ..., Acts_n} as inputs. The algorithm determines the (first of the) Nash equilibria that provide the highest shared utility and, if no Nash equilibrium exists, chooses the first tuple of actions that maximizes the shared utility:

…
shared_max_equilibria ← acts* ∈ equilibria :
5: ∀ acts ∈ equilibria :
6: (u'_0(acts*) × ... × u'_n(acts*)) ≥ (u'_0(acts) × ... × u'_n(acts))
7: return Acts_i ∩ first(shared_max_equilibria)
8: else
9: return Acts_i ∩ first(arg max(aggregate(u'_0, ..., u'_n)))
10:
11: end if
12: end procedure

Going back to the selection between several action tuples that might be determined as optimal, it is now clear that a deterministic approach to selecting a final action tuple is preferable for both lazy and full empathic agents, as it avoids agents deciding to execute action tuples that are not aligned with one another and lead to an unnecessarily low utility outcome. Hence, we propose using a utilitarian approach with a first-in-sequence selection if the utilitarian approach is inconclusive 10. The proposed agent can be considered a rational agent following the definition by Russell and Norvig, in that it "acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome" [22, pp. 4-5],
and an artificially socially intelligent agent as defined by Dautenhahn, in that it instantiates "human-style social intelligence" by "manag[ing] the individual's [its own] interests in relationship to the interests of the social system of the next higher level" [13].

Running Examples

In this section, we present two simple running examples of empathic agents and describe the implementation of the examples in a general-purpose programming language (JavaScript).

Example 1: Vehicles

We provide a running example for the "vehicle/bottleneck" scenario introduced above. Consequently, we have a two-agent scenario {A, B}. Each agent has a utility function u_A, u_B := Acts_A × Acts_B → {−∞, R, ∞}, where Acts_A and Acts_B are the possible actions A and B, respectively, can execute. To fully specify the utility functions, we follow the approach outlined above and first construct the actions-to-consequences mappings a2c_A and a2c_B for both agents. The possible actions are Acts_A = {drive_A, wait_A} and Acts_B = {drive_B, wait_B}, i.e. Acts = {drive_A, wait_A, drive_B, wait_B}. To assess the consequences that include waiting, we assume B is twice as fast as A (without waiting, A needs 20 time units to pass the bottleneck, while B needs 10) 11. We construct the utility quantification functions so that an amount proportional to the waiting time is subtracted from the utility value of 1 obtained when not waiting. The actions-to-consequences mappings and utility quantification functions can then be combined into utility functions. We assume that scenarios in which both agents drive or both agents wait are not acceptable to either agent and introduce the corresponding acceptability rules. Based on the utility functions (u_A, u_B), we create new utility functions (u'_A, u'_B) that take the acceptability rules into account. Finally, we apply the empathic agent algorithms to our scenario. Using the naive algorithm, the agents apply the acceptability rules but do not consider the other agent's strategy. Hence, both agents decide to drive (and consequently crash); the resulting utility is −∞ for both agents. Neither of the two other algorithms (lazy, full) allows an agent to decide to execute an action tuple that does not optimize the shared utility, i.e. both algorithms yield the same result: the resulting utility is 0.9 for agent A and 1 for agent B. As can be seen, the difference between agent types is not always relevant. The following scenario provides a distinctive outcome for all three agent variants.

Example 2: Concert

As a second example, we introduce the following scenario 12. Two empathic agents {A, B} plan to attend a concert of music by either Bach, Stravinsky, or Mozart (Acts := {Bach_A, Stravinsky_A, Mozart_A, Bach_B, Stravinsky_B, Mozart_B}). A considers the Bach and Mozart concerts to be of much greater pleasure when attended in the company of B (utility of 6 and 3, respectively) than alone (either concert: 1). In contrast, the Stravinsky concert yields good utility even if A attends it alone (4); attending it in the company of B merely gives a utility bonus of 1 (total: 5). B prefers concerts in the company of A as well (2 for Stravinsky and 4 for Mozart), but gains little additional utility from attending a Bach concert with A (1.1 with A versus 1 alone) because they dislike listening to A's Bach appraisals. Attending any concert alone yields a utility of 1 for B.
As the utility in this scenario is largely derived from the subjective musical taste and social preferences of the agents, and to keep the example concise, we skip the actions-to-consequences mapping and construct the utility functions right away 13. We introduce the following acceptability function, which applies to both agents (although it is of primary importance for agent A): as agent A is banned from the venue that hosts the Stravinsky concert, the action Stravinsky_A is not acceptable. Considering the acceptability function, we create the corresponding updated utility functions. Now, we can run the empathic agent algorithms. The naive algorithm returns Bach for agent A and Mozart for agent B; the resulting utility is 1 for both agents. The lazy algorithm returns Mozart for both agents; the resulting utility is 3 for agent A and 4 for agent B. The full algorithm returns Bach for both agents; the resulting utility is 6 for agent A and 1.1 for agent B.

JavaScript Implementation

We implemented the running examples in JavaScript 14. As a basis for the implementation, we created a simple framework that consists of the following components:
- Web socket server: environment and communications manager. The environment and communications interface is implemented by a web socket server that consists of the following components:
  • Environment and communications manager. The web server provides a generic environment and communications manager that relays messages between agents and provides the shared value system of acceptability rules.
  • Environment specification. The environment specification contains scenario-specific information and enables the server to determine and propagate the utility rewards to the agents.
- Web socket clients: empathic agents. The empathic agents are implemented as web socket clients that interact via the server described above. Each agent consists of the following two components:
  • Generic empathic agent library. The generic empathic agent library provides a function to create an empathic agent object with the properties ID, utilityMappings, acceptabilityRules, and type (naive, lazy, or full). The empathic agent object is then equipped with an action determination function that implements the empathic agent algorithm as described above.
  • Agent specifications. The agent specification consists of the scenario-specific information of all agents in the environment, as well as the current agent's identifier and type (naive, lazy, or full), and is used to instantiate a specific empathic agent.

Note that in the implementation, we construct the utility functions right away and do not use actions-to-consequences mappings. The implementation assumes that the specifications provided to the agents and to the server are consistent. Fig. 1 depicts the architecture of the empathic agent JavaScript implementation for the vehicle scenario. We chose JavaScript as the language for implementing the scenario to show how basic empathic agents can be implemented in a popular general-purpose programming language, but we concede that a more powerful implementation in the context of MAS frameworks like Jason would be of value.

Reasoning-loop Architecture

We create a reasoning-loop architecture for the empathic agent and again assume a two-agent scenario to simplify the description. The architecture consists of the following components:
- Empathic agent (EA). The empathic agent is the system's top-level component.
Reasoning-loop Architecture
We create a reasoning-loop architecture for the empathic agent and again assume a two-agent scenario to simplify the description. The architecture consists of the following components:
- Empathic agent (EA). The empathic agent is the system's top-level component. It has three generic components (observer, negotiator, and interactor) and five dynamically generated functions/objects (the utility and acceptability functions of both agents, as well as a formalized model of the shared system of values).
- Target agent (TA). In the simplest scenario, the empathic agent interacts with exactly one other agent (the target agent), which is modeled as a black box. Preexisting knowledge about the target agent can be part of the models the empathic agent has of the target agent's utility and acceptability functions.
- Shared system of values. The shared system of values allows comparing the utility functions of the agents and creating their acceptability functions, as well as their actions-to-consequences mappings and utility quantification functions, from which the utility functions are derived.
- Utility function. Based on the actions-to-consequences mappings and utility quantification functions, each empathic agent maintains its own utility function, as well as models of the utility function of the agent it is interacting with.
- Acceptability function. Based on the shared system of values, the agent derives the acceptability functions (as described above) and incorporates them into updated utility functions, which it feeds into the empathic agent algorithm to determine the best possible tuple of actions.
- Observer. The observer component scans the environment, registers other agents, receives their utility functions, and keeps the agent's own functions updated. To construct and update the utility and acceptability functions without explicitly receiving them, the observer could make use of inverse reinforcement learning methods, as described, for example, by [10].
- Negotiator. The negotiator identifies and resolves conflicts of interest using the acceptability function models and instructs the interactor to engage with other agents if necessary, in particular to propose a solution for a conflict of interest or to resolve the conflict immediately (depending on the level of confidence that the solution is indeed acceptable). The negotiator could make use of argument-based negotiation (see, e.g., [3]).
- Interactor. The interactor component interacts with the agent's environment, and in particular with the target agent, to work towards conflict resolution. The means of communication is domain-specific and not covered by the generic architecture.
Fig. 2 presents a simple graphical model of the empathic agent's reasoning-loop architecture.

Alignment with BDI Architecture and Possible Implementation with Jason
Our architecture reflects the common belief-desire-intention (BDI) model as based on [7] to some extent:
- If available a priori to both agents in the form of rules or norms, beliefs and belief sets are part of the shared value system. Otherwise, they qualify the agents' utility and acceptability functions directly. In contrast, desires define the objective(s) towards which an agent's utility function is optimized and are, while depending on beliefs, not directly mutable through persuasive argumentation between the agents.
- Intentions are the tuples of actions the agents choose to execute.
- As it strives for simplicity, our architecture does not, for now, distinguish between desires and goals, or between intentions and plans.
We expect to improve the alignment of our framework with the BDI architecture to facilitate the integration with existing BDI-based theories and implementations using BDI frameworks.
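Pulling the loop described above together, a schematic JavaScript skeleton could look as follows. The component names (observer, negotiator, interactor) come from the text; the method names, the models container, and the loop body are illustrative assumptions only, not part of the described architecture.

```javascript
// Schematic sketch of the reasoning-loop architecture. Method names, the
// models container, and the loop body are assumptions for illustration.
class EmpathicAgentLoop {
  constructor({ observer, negotiator, interactor, sharedValues }) {
    this.observer = observer;       // scans environment, updates function models
    this.negotiator = negotiator;   // detects/resolves conflicts of interest
    this.interactor = interactor;   // communicates with the target agent
    this.sharedValues = sharedValues;
    this.models = { ownUtility: null, targetUtility: null,
                    ownAcceptability: null, targetAcceptability: null };
  }

  step(environment) {
    // 1. Observe: refresh utility/acceptability models from the environment.
    this.models = this.observer.update(environment, this.models, this.sharedValues);

    // 2. Negotiate: check for conflicts of interest given the current models.
    const proposal = this.negotiator.resolve(this.models);

    // 3. Interact: either execute the agreed actions or engage the target agent.
    if (proposal.requiresCommunication) {
      this.interactor.propose(proposal);
    } else {
      this.interactor.execute(proposal.actions);
    }
  }
}
```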
The Jason platform for multi-agent system development [6] can serve as the basis for implementing the empathic agent. While simplified running examples of our architecture can be implemented with Jason, extending the platform to provide an empathic agent-specific abstraction layer would better support complex scenarios. Discussion In this section, we place our empathic agent concepts into the context of existing work, highlight potential applications, analyze limitations, and outline future work. Similar Conflict Resolution Approaches Our empathic agent can be considered a generic and basic agent model that can draw upon a large body of existing research on multi-agent learning and negotiation techniques for possible extensions. A survey of research on agents that model other agents is provided by Albrecht and Stone [1]. The idea of combining a utility-based approach with acceptability rules to emulate empathic behavior is to our knowledge novel. However, a somewhat similar concept is presented by Black and Atkinson, who propose an argumentation-based approach for an agent that can find agreement with one other agent on acceptable actions and can develop a model of the other agent's preferences over time [5]. While Black's and Atkinson's approach is similar in that it reflects Coplan's definition of empathy (it maintains "a process through which [it] simulates another's situated psychological states, while maintaining clear self-other differentiation" [12]) to some extent we identify the following key differences: -The approach is limited to a two-agent scenario. -The agent model is preference-based and not utility-based. While this has the advantage that it does not require reducing complex preferences to a simple numeric value, it makes it harder to combine with existing learning concepts (see below). -The agent has the ability to learn another agent's preferences over time. However, the learning concept is-according to Black and Atkinson-"not intended to be complete" [5]. We suggest that while our empathic agent does not provide learning capabilities by default, it has the advantage that its utility-based concept allows for integration with established inverse reinforcement learning algorithms (see: Subsection 6.4). -The agent Black and Atkinson introduce is not empathic in that it tries to compromise with the other agent, but rather uses its ability to model the agent's preferences to improve its persuasive capabilities by tailoring the arguments it provides to this agent. Potential Real-World Use Cases In this chapter, we exemplified the empathic agent with two simple scenarios, with the primary purpose of better explaining our agent's core concepts. These scenarios do not fully reflect real-world use cases. However, the core concepts of the agent can form the basis of solutions for real-world applications. Below, we provide a non-exhaustive list of use case types empathic agents could potentially address: -Handling aspects of traffic navigation scenarios that cannot be covered by static rules. Besides adjusting the assertiveness levels to the preferences of their drivers, as suggested by Sikkenk and Terken [23], and Yusof et al. [26], autonomous vehicles could consider the driving style of other human-or agent-controlled vehicles to improve traffic flow, for example by adjusting speed or lane-changing behavior according to the (perceived) utility functions of all traffic participants or to resolve unexpected incidents (in particular emergencies). 
-Mitigating negative effects of large-scale web applications on their users. Evidence exists that suggests the well-being of passive (mainly content-consuming) users of social media is frequently negatively impacted by technology, while the well-being of at least some users, who actively engage with others through the technology, improves [20]. To facilitate social media use that is positive for the users' well-being, an empathic agent could serve as a mediator between user needs (social inclusion) and the business goals of the technology provider (often: maximization of advertisement revenue). -Decreasing the negotiation overhead for agent-based manufacturing systems. Autonomous agent-based manufacturing systems are an emerging alternative to traditional, hierarchically managed control architectures [16]. While agent-based systems are considered to increase the agility of manufacturing processes, one disadvantage of agent-based manufacturing systems is the need for negotiation between agents and the resulting overhead (see for example: Bruccoleri et al. [8]). Employing empathic agents in agent-based manufacturing scenarios can possibly help solve conflicts of interests efficiently. -Improving persuasive healthcare technology. Persuasive technology-"computerized software or information system designed to reinforce, change or shape attitudes or behaviours or both without using coercion or deception" [18]-is frequently applied in healthcare scenarios [11], in particular, to facilitate behavior change. Persuasive functionality is typically implemented using recommender systems [14], which in general struggle to compromise between system provider and end-user needs [21]. This can be considered as a severe limitation in healthcare scenarios, where trade-offs between serving public health needs (optimizing for a low burden on the healthcare system) and empowering patients (allowing for a subjective assessment of health impact, as well as for unhealthy choices to support individual freedom) need to be made. Hence, employing the empathic agent concepts in this context can be considered a promising endeavor. Limitations The purpose of this chapter is to introduce empathic agents as a general concept. When working towards a practically applicable empathic agent, the following limitations of our work need to be taken into account: -The agent is designed to act in a fully observable world, which is an unrealistic assumption for real-world use cases. For better applicability, the agent needs to support probabilistic models of the environment, the other agents, and the shared value system. -Our formal empathic agent description is logic-based. Integrating it with Markov decision process-based inverse reinforcement learning approaches is a non-trivial endeavor, although certainly possible. -In the example scenarios we provided, all agents are identically implemented empathic agents. An empathic agent that interacts with non-empathic agents will need to take into account further game-theoretic considerations and to have negotiation capabilities. -The presented empathic agent concepts use a simple numeric value to represent the utility an agent receives as a consequence of the execution of an action tuple. While this approach is commonly employed when designing utility-based autonomous agents, it is an oversimplification that can potentially limit the applicability of the agent. -Software engineering and technological aspects of empathic agents need to be further investigated. 
In particular, the implementation of an empathic agent library using a higher-level framework for multi-agent system development, as we discuss in Section 5 could provide a more powerful engineering framework for empathic agents. Future Work We suggest the following research to address the limitations presented in Subsection 6.3: -So far, we have chosen a logic-based approach to the problem in focus to allow for a minimalistic problem description with low complexity. Alternatively, the problem could be approached from a reinforcement learning perspective (see for an overview of multi-agent reinforcement learning: [9]). Using (partially observable) Markov decision processes, one can introduce a well-established temporal and probabilistic perspective 15 . A key capability our empathic agent needs to have is the ability to learn the utility function of other agents. A comprehensive body of research on enabling this ability by applying inverse reinforcement learning exists (for example: [10] and [17]). Hence, creating a Markovian perspective on the empathic agent to enable the application of reinforcement learning methods for the observational learning of the utility functions of other agents can be considered relevant future work. -To better assess the applicability of the empathic agent algorithms, it is important to analyze its computational complexity in general, as well as to evaluate it in the context of specific use cases that might allow for performance-improving adjustments. -To enable empathic agents to reach consensus in case of inconsistent beliefs argumentation-based negotiation approaches can be applied that consider uncertainty and subjectivity (e.g. [15]) for creating solvers for finding compromises between utility/acceptability functions. Similar approaches can be used to enhance utility quantification capabilities by considering preferences and probabilistic beliefs. -The design intention of the architectural framework we present in Section 4 is to form a high-level abstraction of an empathic agent that is to some extent agnostic of the concepts the different components implement. We are confident that the framework can be applied in combination with existing technologies to create a real-world applicable empathic agent framework, at least for use cases that allow making some assumptions regarding the interaction context and protocol. -The ultimate goal of this research is to apply the concept in a real-world scenario and evaluate to what extent the application of empathic agents provides practically relevant benefits. Conclusion In this chapter, we introduced the concept of an empathic agent that proactively identifies potential conflicts of interests in interactions with other agents and uses a mixed utility-based/rule-based approach to find a mutually acceptable solution. The theoretical framework can serve as a general purpose model, from which advanced implementations can be derived to develop socially intelligent systems that consider other agents' (and ultimately humans') welfare when interacting with their environment. The example implementation, the reasoning-loop architecture we introduced for our empathic agent, and the discussion of how the agent can be implemented with a belief-desireintention approach provide first insights into how a more generally capable empathic agent can be constructed. 
As the most important future research steps to advance the empathic agent, we regard the conceptualization and implementation of an empathic agent with learning capabilities, as well as the development of a first simple empathic agent that solves a particular real-world problem.
Water Dispersal of Methanotrophic Bacteria Maintains Functional Methane Oxidation in Sphagnum Mosses It is known that Sphagnum associated methanotrophy (SAM) changes in relation to the peatland water table (WT) level. After drought, rising WT is able to reactivate SAM. We aimed to reveal whether this reactivation is due to activation of indigenous methane (CH4) oxidizing bacteria (MOB) already present in the mosses or to MOB present in water. This was tested through two approaches: in a transplantation experiment, Sphagna lacking SAM activity were transplanted into flark water next to Sphagna oxidizing CH4. Already after 3 days, most of the transplants showed CH4 oxidation activity. Microarray showed that the MOB community compositions of the transplants and the original active mosses had become more similar within 28 days thus indicating MOB movement through water between mosses. Methylocystis-related type II MOB dominated the community. In a following experiment, SAM inactive mosses were bathed overnight in non-sterile and sterile-filtered SAM active site flark water. Only mosses bathed with non-sterile flark water became SAM active, which was also shown by the pmoA copy number increase of over 60 times. Thus, it was evident that MOB present in the water can colonize Sphagnum mosses. This colonization could act as a resilience mechanism for peatland CH4 dynamics by allowing the re-emergence of CH4 oxidation activity in Sphagnum. INTRODUCTION Peatlands store over one third of global terrestrial carbon (Gorham, 1991). Although these ecosystems are carbon dioxide (CO 2 ) sinks they are also a major source of methane (CH 4 ) , formed as the final product of anaerobic degradation of organic matter. Most carbon in these systems is derived from Sphagnum mosses (Clymo and Hayward, 1982), the dominant plant in bog-type northern peatlands. Mosses sequester atmospheric CO 2 directly through photosynthesis. Methanotrophic bacteria (MOB) living inside the moss hyaline cells and on leaf surfaces (Raghoebarsing et al., 2005;Kip et al., 2010) also play an important role in carbon binding. These bacteria provide CO 2 for the plant via CH 4 oxidation, a mechanism that is especially important in submerged conditions where CO 2 diffusion is slow (Kip et al., 2010). This phenomenon is of local and global importance as it has been detected in all 23 Sphagnum species of a peatland area (Larmola et al. (2010)) and in geographically distant peatlands (Kip et al., 2010) and may be partly responsible for the lower CH 4 emissions of Sphagnum bogs in relation to other peatland types (Nykänen et al., 1998). About 10-15 (Raghoebarsing et al., 2005) or 10-30% (Larmola et al., 2010) of Sphagnum biomass carbon is from CH 4 oxidation by MOB. Thus, it seems clear that mosses benefit from their partners and the relationship has been discussed to be symbiotic (Raghoebarsing et al., 2005). Still, there is evidence that the bacteria involved are only loosely connected to Sphagnum (Basiliko et al., 2004;Larmola et al., 2010). The study by Larmola et al. (2010) showed that peatland water table (WT) level is the main factor influencing MOB activity in mosses. Sphagnum associated methanotrophy (SAM) became de-/reactivated upon natural WT fluctuation. However, Larmola et al. (2010) did not provide evidence whether reactivated CH 4 oxidation was caused by reactivation of the original MOB community, invasion of new MOB from the surrounding water or by both mechanisms. 
The ability of MOB to colonize Sphagnum from surrounding water would make ecosystem CH4 dynamics less vulnerable to extended periods of drought than a tight symbiosis between MOB and Sphagnum or reliance on the reactivation of the original community. To test the importance of colonization, we examined the question more thoroughly. First, we conducted a transplantation trial similar to that of Larmola et al. (2010) in which inactive mosses were planted next to active ones. The colonization process was followed by measuring CH4 oxidation potentials and by community analysis with a microarray that profiles diversity within the pmoA gene coding for particulate methane mono-oxygenase (pMMO), a key enzyme in CH4 oxidation (Bodrossy et al., 2003). By using this method, which covers a wider range of MOB diversity than the approach of Larmola et al. (2010), we aimed to reveal more detailed changes in community composition. We hypothesized that colonization of MOB through the water phase is a substantial reason for methanotrophic reactivation. Since we presume that all mosses are colonized through the same pathway, this should be reflected in the MOB of the neighboring mosses influencing the microbial community of the transplanted moss. Second, we tested the hypothesis in the laboratory by treating inactive Sphagnum mosses with water from a wet depression (flark) harboring methanotrophically active Sphagnum mosses. As a control, parallel samples were treated with the same water after MOB removal through filtration. CH4 oxidation potentials were measured, and MOB communities were analyzed by pmoA-based quantitative PCR (qPCR) and denaturing gradient gel electrophoresis (DGGE) analysis followed by sequencing.

Sphagnum transplantation
The experiment was conducted at the Lakkasuo mire (61°48′ N, 24°19′ E; 150 m a.s.l.), a boreal raised bog complex in southern Finland. On 7 July 2008, patches (8 cm in diameter) of inactive Sphagnum rubellum from site O were transplanted to six different flark sites (A-F) showing high Sphagnum-associated methanotrophic (SAM; CH4 oxidation) activity (Figure 1). To control for the effect of transplantation, S. rubellum was replanted in the original site, and the native Sphagnum species (Table A1 in Appendix) gathered from each of the six flark sites were also returned to their original places. Thus, all samples were of transplanted Sphagnum. Moss samples were gathered at the beginning of the experiment (day 0), after 3 days, and after 28 days. After gathering, mosses were rinsed with deionized water and dried overnight at +4°C. Only the upper 10 cm of the moss plants were included in the following analysis. Ecological variables of the transplantation sites are listed in Table A1 in Appendix.

Methane oxidation potential
Methane oxidation potentials were measured as described in Larmola et al. (2010). Briefly, 30 g of moss was incubated in a 600 mL flask with an initial CH4 concentration of 10,000 ppm in the dark at +15°C, and the oxidation was monitored after 24 and 48 h by gas chromatography. Results are presented in micromoles of CH4 per gram dry weight per hour (μmol g dw−1 h−1).

Analysis of methanotrophic community composition by pmoA-microarray
Community composition of MOB in Sphagnum samples was investigated using a microarray (Bodrossy et al., 2003) designed to detect diversity within the pmoA gene. DNA was isolated as in Siljanen et al. (2011). A fragment of the pmoA gene was amplified using a semi-nested PCR approach with primer pairs A189f/T7-A682r and A189f/T7-mb661r as in Siljanen et al.
(2011) with the exception that after the first PCR-step, products not detectable on the gel were diluted 1:10 before being used as templates in the second PCR. Concentration of PCR products was quantified using a Qubit fluorometer (Invitrogen, Carlsbad, CA, USA). In vitro transcription and hybridization was performed as in Stralis-Pavese et al. (2004) and the applied probe set was similar to that applied by Abell et al. (2009). One of three parallel original inactive moss samples from the 0 day time point could be successfully analyzed (see Larmola et al. (2010) that SAM inactive Sphagnum mosses host MOB DNA). Probing pmoA diversity cannot detect Methylocella or the recently discovered Methyloferula (Vorobev et al., 2011) methanotrophs as the pMMO enzyme is not present in these bacteria. It should be noted that another newly discovered methanotroph group, Verrucomicrobia, is also not detectable by the probes we used. Statistical analysis of microarray data The quantitative nature of the microarray data was converted to a binary matrix (presence = 1, absence = 0) to reveal the community changes caused by different groups of MOB colonizing mosses after transplantation, and also to prevent false interpretation originating from non-quantitative nested PCR approach. The data were then analyzed using principal component analysis (PCA) carried out with CANOCO Version 4.52 (ter Braak and Smilauer, 2002). Sites A-F (Figure 1) were analyzed separately (n = 1) and as parallel samples (n = 6). Universal MOB probes (positive controls) and probes not hybridizing to any of the samples were excluded from the analyses (threshold for positive samples ≥3 after normalization of the data to the scale of 0-100). Flark water bathing Sphagnum mosses were gathered from Sallie's Fen in NH, USA (43˚12.5 N, 71˚03.5 W 110 m. a.s.l.). Triplicate (n = 3) fresh samples of approximately 30 mL (volume based on the volume of water replaced by the mosses) of inactive S. magellanicum were subjected to the following treatments: (I) no treatment; (II) overnight (11 h) incubation in SAM active S. majus 200 mL flark; (III) overnight incubation in 200 mL 0.45 μm filtered SAM active S. majus flark water; (IV) overnight incubation in 200 mL SAM active S. majus flark water followed by rinsing with deionized water. In the final treatment, S. majus gathered from the active (flark) site was included as a positive control (n = 3) in the analyses described below. SAM active flark water was collected directly from a wet depression next to S. majus vegetation and did not contain any macroscopic plant material. Each overnight incubation was conducted in the dark at +20˚C. Following treatment, all mosses were dried overnight at +4˚C. Only upper 10 cm of the moss plants were included in the following analysis. Methane oxidation potential and statistical analysis Methane oxidation potentials were measured as above in the transplantation experiment but on a Shimadzu 14A gas chromatograph equipped with a flame ionization detector (Shimadzu Corp., Kyoto, Japan). The results are presented in micromole CH 4 per gram dry weight per hour. The difference between sample treatments was tested using Kruskal-Wallis non-parametric analysis of variance followed by Nemenyi test for pairwise comparisons (p < 0.05; Zar, 1999). 
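As a worked illustration of how the headspace measurements described above translate into the reported units (μmol CH4 g dw−1 h−1), the sketch below applies the standard ideal-gas conversion. The 600 mL flask volume and +15°C incubation temperature are taken from the text; the pressure, the neglect of the moss volume, the dry weight, and the example concentrations are assumptions for illustration only, not the authors' data.

```javascript
// Illustrative sketch: converting a headspace CH4 decline (in ppm) into a
// potential oxidation rate in umol CH4 per g dry weight per hour.
// Flask volume (600 mL) and temperature (+15 C) come from the text; pressure,
// dry weight, and the example ppm values are assumed.

const R = 0.082057;        // L atm mol^-1 K^-1
const T = 288.15;          // 15 C in kelvin
const P = 1;               // atm, assumed
const headspaceL = 0.6;    // flask volume in litres (moss volume neglected here)

// micromoles of CH4 in the headspace for a given mixing ratio in ppm
const ppmToUmol = (ppm) => (ppm * 1e-6 * P * headspaceL) / (R * T) * 1e6;

// potential oxidation rate in umol CH4 g dw^-1 h^-1
function oxidationPotential(ppmStart, ppmEnd, hours, dryWeightG) {
  const consumedUmol = ppmToUmol(ppmStart) - ppmToUmol(ppmEnd);
  return consumedUmol / dryWeightG / hours;
}

// Example with assumed numbers: 10,000 ppm dropping to 6,000 ppm in 24 h
// for 2 g dry weight of moss.
console.log(oxidationPotential(10000, 6000, 24, 2).toFixed(2)); // ~2.11
```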
Analysis of community composition by DGGE and sequencing
Diversity of MOB in Sphagnum samples from the bathing experiment was explored by pmoA-based PCR-DGGE analysis and sequencing as previously described (Tuomivirta et al., 2009; Larmola et al., 2010), using the primer pair A189f/GC-621r designed to target methanotrophs abundant in boreal peatlands (Tuomivirta et al., 2009). DNA was isolated as above in the transplantation experiment. The determined pmoA gene sequences were submitted to GenBank under accession numbers HQ651182 and HQ651183.

Quantification of pmoA genes and statistical analysis
Quantitative PCR was carried out as previously described (Tuomivirta et al., 2009) using the same primer pair as in the DGGE analysis (A189f/GC-621r). Results are expressed as pmoA copy number per gram dry weight. To test the difference between sample treatments, values were ln transformed to normalize the data, followed by ANOVA and Tukey's HSD test (p < 0.05).

RESULTS AND DISCUSSION
In the transplantation experiment, SAM inactive Sphagnum rubellum mosses were planted on six different SAM active sites. Most of the originally inactive mosses showed detectable CH4 oxidation potentials (>0.005 μmol CH4 g−1 h−1) after 3 days and 28 days (Figure 1). Comparing transplantation sites individually indicated that initially different MOB communities became more similar with time (Figure A1 in Appendix). Averaging over the entire data set, the pmoA-microarray showed that the MOB community of the originally inactive mosses started to resemble that of the native mosses in the active site after 3 days (Figure 2). After 28 days, MOB communities from the majority of inactive Sphagnum mosses transplanted to active sites (immigrants in Figure 1) were more similar to those of native active mosses than to those in the inactive site at day 0. Thus, this field experiment indicated that MOB could be transferred between mosses through the water phase. In addition, the original inactive Sphagnum (site O) became active after transplantation in its original site and had a different MOB community than before transplantation, demonstrating possible new MOB movement through the water phase (Figures 2 and 3 and Figure A1 in Appendix). However, although SAM activity was induced in most of the samples together with invasion of new MOB, this was not always the case, and SAM activity could also be induced without major changes in community composition (Site E, Figure A1 in Appendix). Thus, there are methanotrophs that move through water, but we cannot state that for all community members. Some members of the methanotroph community seem to be permanently associated with the mosses regardless of whether conditions favor CH4 oxidation or not. In our transplantation experiment, this factor hindered us from seeing the invading members of the community and partly explains why large changes in community composition were not always seen when CH4 oxidation was reactivated. Whereas Methylocystis-related type II MOB were present in practically all samples in the transplantation experiment (Figure 3 and Figure A2 in Appendix), the other lineage, the γ-proteobacterial type I MOB, was present more rarely and with more variability. Prior to transplantation, no type I methanotrophs were detected in inactive mosses. After 28 days, they were present in four of six "immigrant" samples transplanted to active sites.
Type Ia subgroup was found only in mosses transplanted in the active flarks and the native mosses of these sites, suggesting that this group moved from the native mosses to the transplanted "immigrants." Type I MOB could not be clearly linked to the emergence of CH 4 oxidation activity, although they were present in most of the active samples and absent from most of the inactive ones. Consequently, based on the transplantation experiment alone, we cannot exclusively state that invasion by nearby MOB is an imperative route in the reactivation of SAM activity. To examine the hypothesis "the colonization of MOB through the water phase is a substantial reason for methanotrophic reactivation" further, a more simplified experiment was conducted in laboratory conditions. In this bathing experiment, SAM inactive S. magellanicum mosses exposed to unfiltered, SAM active flark water began CH 4 oxidation within 11 h, as indicated by CH 4 oxidation potential measurements and community analyses including pmoA-based qPCR, DGGE fingerprinting and sequencing. Maximum CH 4 oxidation potentials and pmoA copy numbers were measured for mosses treated with unfiltered water (averages for rinsed mosses 0.33 μmol CH 4 g −1 h −1 , 1.9 × 10 7 pmoA copies g dw −1 , and for unrinsed 0.29 μmol CH 4 g −1 h −1 , 2.6 × 10 7 pmoA copies g dw −1 ) and positive control S. majus mosses (average 0.63 μmol CH 4 g −1 h −1 , 20 × 10 7 pmoA copies g dw −1 ) from the active flark site (Figure 4). Respective values for mosses treated with filtered water (no CH 4 oxidation detected, 0.03 × 10 7 pmoA copies g dw −1 ) and the negative control (<0.005 μmol CH 4 g −1 h −1 , 0.06 × 10 7 pmoA copies g dw −1 ) were clearly lower. The pmoA copy number between filtered and non-filtered flark water treated S. magellanicum increased in average by a factor of 63. Also, DGGE revealed the transfer of two Methylocystis-related methanotrophs through unfiltered water (Figure 5). Filtered water did not induce CH 4 oxidation activity. This experiment clearly demonstrated the watermediated dispersal of MOB, but it also showed that compared to invasion by new MOB, reactivation of the original MOB was not a major mechanism in the reactivation of CH 4 oxidation process in the studied mosses. In the DGGE gel the two Methylocystis bands are faintly present already in the unbathed negative control. Bathing of the mosses with sterile-filtered water caused these bands to fade away as shown also by the qPCR. On the contrary, treatment with unfiltered water caused emergence of high numbers of MOB and also high SAM activity. Thus the reactivation of the CH 4 oxidation activity must have been brought up by MOB invading the moss through the water phase or the growth in MOB numbers should have been seen also in the moss bathed with filtered water. Moreover, known Methylocystis strains have doubling times of several hours when growing on CH 4 in laboratory conditions (Wise et al., 1999;Dedysh et al., 2007;Baani and Liesack, 2008) suggesting that it would be highly unlikely for these MOB to increase their numbers over 60 times higher in 11 h, as seen in the bathing experiment. Therefore we accept the posed hypothesis with a minor modification: MOB colonization through the water phase occurs and it obviously supports the reactivation of CH 4 oxidation in Sphagnum mosses, but our experiments cannot rule out the possibility of reactivation of original community members. 
This could be investigated in a prolonged bathing experiment in combination with a diurnal light rhythm, but that was beyond the scope of this investigation. Based on our results that MOB colonize the mosses from water, the relationship between MOB and Sphagnum seems to be a loose, mutually beneficial association rather than a tight symbiosis. This result is in line with a recent finding by Bragina et al. (in press), who showed by pyrosequencing that some bacteria are passed from the Sphagnum sporophyte to the gametophyte but that no known methanotrophs were among them. Representatives of the genus Methylocystis, however, were detected in the gametophyte. Still, even though methanotrophs may not be obligately dependent on the mosses, they most likely prefer the plant cells over life in the water phase. This is supported by our results from the bathing experiment. Although rinsing slightly lowered the amount of pmoA detected, mosses rinsed with sterile water had almost the same potential CH4 oxidation activity as unrinsed ones, indicating that loosely attached methanotrophs play only a minor role in the process. In addition, the rapid (<11 h) increase in pmoA copy number suggests that methanotrophs present in the water phase quickly colonize Sphagnum. Compared to free-living bacteria, those associated with plants may gain an advantage from a stable CH4 gradient and a supply of oxygen from photosynthesis, but this has yet to be demonstrated. In the bathing experiment, methanotrophs moved to the mosses even in the dark, when oxygen was not formed by photosynthesis, indicating that, at the very least, oxygen is not the only advantage the bacteria gain from the mosses. In another study, no CH4 oxidation activity was detected in peat water surrounding Sphagnum mosses (Kip et al., 2010), also indicating that, although present, MOB are not actively oxidizing CH4 in the water phase. Since the only MOB genus detected in the bathing experiment, Methylocystis, is non-motile (Dedysh, 2009), it remains open how these MOB cells end up on the moss surface and inside the hyaline cells. On the other hand, another peatland-inhabiting type II MOB genus, Methylosinus (Dedysh et al., 2003; Chen et al., 2008), does contain motile species (Bowman et al., 1993) and has been isolated from Sphagnum mosses (Kip et al., 2011a). Similar to previous studies of Finnish peatlands (Tuomivirta et al., 2009; Larmola et al., 2010; Yrjälä et al., 2011), DGGE of our bathing experiment samples from Sallie's Fen, located in NH, USA, detected only Methylocystis-like MOB. These were also the dominant methanotrophs in our transplantation experiment, run on the Finnish Lakkasuo raised bog complex, when the pmoA-microarray was used. Dominance of type II MOB in our samples is in line with previous studies. Especially the high prevalence of Methylocystis in Sphagnum samples is not surprising, as it is commonly found in northern peatlands (McDonald et al., 1996; Morris et al., 2002; Jaatinen et al., 2005; Dedysh et al., 2006; Chen et al., 2008). Although Kip et al. (2010) found, in contrast to our results, a high diversity of type I methanotrophs, Methylocystis was still dominant in their globally gathered Sphagnum samples (Kip et al., 2010) and also very abundant in mosses from a Dutch peat bog (Kip et al., 2011b). We have demonstrated that water serves as an essential route for methanotroph dispersal and is thus an imperative part of the Sphagnum-methanotroph association. This is likely to act as a backup mechanism for peatland CH4 dynamics.
Drainage of peatlands can alter the methanotroph community composition (Jaatinen et al., 2005;Yrjälä et al., 2011) and reduce Sphagnum coverage (Yrjälä et al., 2011), consequently compromising this mutualistic association. A case study (Yrjälä et al., 2011) found that the particular Methylocystis sp., which was found now also in the mosses of Sallie's Fen (northeastern USA) of our bathing experiment and in the mosses of Lakkasuo (Larmola et al. (2010)), was lost when the WT dropped by 14 cm, which is similar to the predicted drawdown for northern peatlands in the global warming scenario by 3˚C (Roulet et al., 1992). Restoration of drained peatlands aims to reactivate ecosystem function and restart methanogenesis (Tuittila et al., 2000). Any peatland restoration program should also aim to re-establish the conditions for the mutualistic association between methanotrophs and Sphagnum. Our study indicates that this could be done via transplantations of Sphagnum from donor sites with undisturbed CH 4 dynamics. In natural environments Sphagnum associated methanotrophic communities may reduce the methane flux by as much as 80% (Kip et al., 2010). It is not yet known whether this phenomenon can reach that scale also in compromised ecosystems. CONCLUSION Here we showed, by two complementing experiments, that invasion of new MOB through water occurs and that it can be an important mechanism in the reactivation of CH 4 oxidation in Sphagnum mosses. Based on this result, the relationship between Sphagnum and methanotrophs is a loose, mutually beneficial association, although some methanotrophs may have an even tighter connection to the mosses. ACKNOWLEDGMENTS We thank T. Ronkainen and L. Maanavilja for field and laboratory assistance, R. Varner, J. Bubier, A. Saari, and P. J. Martikainen for facilities, P. Crill, S. Whitlow, J. Digg, and N. Blake for access to Sallie's Fen, M. Hardman for checking the language, and S. Elomaa for the graphics. This work was mainly funded by Maj and Tor Nessling Foundation and the Academy of Finland (Project 121535). Additional funding was received from Maa-ja vesitekniikan tuki Foundation, Kone Foundation, and the Academy of Finland (Projects 118493 and 218101).
Modulation of the Gut Microbiota in Memory Impairment and Alzheimer's Disease via the Inhibition of the Parasympathetic Nervous System

The gut microbiota has been demonstrated to play a critical role in maintaining cognitive function via the gut-brain axis, which may be related to the parasympathetic nervous system (PNS). However, the exact mechanism remains to be determined. We investigated whether patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD) exhibit an altered gut microbiota through suppression of the PNS, compared with healthy individuals, using combined gut microbiota data from previous human studies. The hypothesis was then validated in rats in which the PNS was suppressed by scopolamine injections. The human fecal bacterial FASTA/Q files were selected and combined from four different AD studies (n = 410). All rats received a high-fat diet and the respective treatments for six weeks. Memory impairment was induced by scopolamine injection (2 mg/kg body weight; MD group), while rats injected with saline served as the control with no memory impairment. Scopolamine-injected rats given donepezil served as the positive group. In the optimal model generated from the XGBoost analysis, Blautia luti, Pseudomonas mucidolens, Escherichia marmotae, and Gemmiger formicilis showed a positive correlation with MCI, while Escherichia fergusonii, Mycobacterium neglectum, and Lawsonibacter asaccharolyticus were positively correlated with AD in the participants with enterotype Bacteroides (ET-B, n = 369). In the network analysis, the predominant bacteria in the AD group were negatively associated with the bacteria in the healthy group of ET-B participants. In the animal study, the relative abundance of Bacteroides and Bilophila was lower, and that of Escherichia, Blautia, and Clostridium was higher, in the scopolamine-induced memory deficit (MD) group than in the normal group. These results suggest that MCI was associated with PNS suppression and could progress to AD by exacerbating the gut dysbiosis. MCI increased Clostridium and Blautia, and its progression to AD elevated Escherichia and Pseudomonas. Therefore, the modulation of the PNS might be linked to an altered gut microbiota and brain function, potentially through the gut-brain axis.

Introduction
With increasing life expectancy, cognitive impairment is a significant health problem worldwide. Mild cognitive impairment (MCI) is an early stage of loss of memory or cognitive function in people who are still able to perform independent daily activities [1]. The causes of the development of MCI have not been completely elucidated; however, it is known to be related to changes in the brain during the early stages of neurodegenerative diseases, including Alzheimer's disease (AD) [1]. People with MCI are more susceptible to dementia, including AD. About 10-20% of people aged over 65 years with MCI develop dementia within one year, and about 80% progress to AD, as seen in a 6-year follow-up study [2]. Therefore, MCI can be considered a pre-dementia stage that can progress to AD.

In the PCA, all participants (n = 410) from the four studies were categorized into two clusters, namely ET-B (n = 369) and the Halomonas enterotype (ET-H, n = 41) (Figure 1A). The number of ET-H samples was too small for further evaluation, and the analysis was therefore conducted on participants in the ET-B group. In the present study, we analyzed the gut microbiota composition and metagenome function in the participants with MCI and AD in the ET-B group.
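As a worked illustration of the enterotype-by-diagnosis comparison reported in the next paragraph (125/141/103 healthy/MCI/AD participants in ET-B versus 28/6/7 in ET-H), the following sketch computes a generic Pearson χ2 statistic for that 2 × 3 table. It is not the authors' statistical code; the value of 9.21 used for comparison is the standard χ2 critical value for p < 0.01 with 2 degrees of freedom.

```javascript
// Generic Pearson chi-square computation for the enterotype-by-diagnosis
// table reported below (rows: ET-B, ET-H; columns: healthy, MCI, AD).
const observed = [
  [125, 141, 103], // ET-B
  [28, 6, 7]       // ET-H
];

function chiSquare(table) {
  const rowTotals = table.map(row => row.reduce((a, b) => a + b, 0));
  const colTotals = table[0].map((_, j) =>
    table.reduce((sum, row) => sum + row[j], 0));
  const grand = rowTotals.reduce((a, b) => a + b, 0);
  let stat = 0;
  table.forEach((row, i) =>
    row.forEach((obs, j) => {
      const expected = (rowTotals[i] * colTotals[j]) / grand;
      stat += (obs - expected) ** 2 / expected;
    }));
  return stat;
}

const stat = chiSquare(observed);                            // ~19.1 for these counts
const df = (observed.length - 1) * (observed[0].length - 1); // 2 degrees of freedom
console.log(stat.toFixed(1), stat > 9.21);                   // "19.1 true": exceeds the
                                                             // p < 0.01 critical value
```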
The number of participants in the healthy, MCI, and AD groups was 125 (33.9%), 141 (38.2%), and 103 (27.9%) in ET-B and 28 (68.3%), 6 (14.6%), and 7 (17.1%) in ET-H, and the distribution differed between the enterotypes by χ2 test (p < 0.01; Figure 1A). The relative abundance of fecal bacteria in the ET-B and ET-H groups is shown in Figure 1B,C, respectively. In the ET-B participants, the relative abundance of Bacteroidaceae, Phocaeicola, Halomonadaceae, Pseudomonadaceae, Enterobacteriaceae, and Streptococcaceae decreased in the order of the healthy (n = 125), MCI (n = 141), and AD (n = 103) groups (Figure 1B; p < 0.001). However, there was a higher relative abundance of Lachnospiraceae, Oscillospiraceae, and Bifidobacteriaceae in the MCI group than in the AD group in the ET-B participants (Figure 1B; p < 0.001). In the participants in the ET-H group, the relative abundance of Halomonadaceae, Pseudomonadaceae, Moraxellaceae, and Microbacteriaceae was markedly lower, and that of Enterobacteriaceae, Rhizobiaceae, and Bacillaceae was higher, in the healthy group (n = 28) than in the MCI (n = 6) and AD groups (n = 7; Figure 1C; p < 0.001). The relative abundance of Enterobacteriaceae and Rhizobiaceae was higher in the MCI group than in the AD group in the ET-H participants (Figure 1C; p < 0.001).

α-Diversity and β-Diversity in the Participants with ET-B
The α-diversity represents species richness, dominance, and evenness by different indices. The observed OTUs represent the number of species per sample, while the Chao1 and Shannon indices estimate the richness of the species present in the sample. The Chao1 index was higher in the healthy group than in the MCI and AD groups, while the Shannon index increased in the order of the AD, MCI, and healthy groups of the ET-B participants (Figure 2A,B; p < 0.05). In the β-diversity analysis, OTUs varied among the healthy, MCI, and AD groups of the ET-B participants in the PCoA (Figure 2C), and they were statistically significantly different among the groups in the Analysis of Molecular Variance (AMOVA) (p < 0.001). At the family level in the ET-B participants, Bacteroidaceae, Halomonadaceae, and Prevotellaceae decreased, whereas Enterobacteriaceae, Pseudomonadaceae, and Streptococcaceae increased, in the order of the healthy, MCI, and AD groups (p < 0.001; Figure 2D). Lachnospiraceae increased only in the MCI group compared to the other groups. At the genus level, Bacteroides, Phocaeicola, Prevotella, Ruminococcus, and Parabacteroides were lower, and Escherichia, Enterobacter, and Pseudomonas were higher, in the order of the healthy, MCI, and AD groups (p < 0.001; Figure 2E). Interestingly, Faecalibacterium and Blautia were higher in the MCI group compared to the others.

Primary Fecal Bacteria in Each Group by XGBoost and SHapley Additive exPlanations (SHAP) Analysis in the Participants with ET-B
The primary bacteria were identified at the genus level with linear discriminant analysis (LDA) scores. Bacteria with LDA scores higher than 3 were as follows: Bacteroides, Phocaeicola, Alistipes, Parabacteroides, and Ruminococcus in the healthy group; Blautia, Faecalibacterium, Streptococcus, Collinsella, Erysipelatoclostridium, and Lachnospira in the MCI group; and Escherichia in the AD group (Figure 3A). The prediction model for the healthy, MCI, and AD groups was generated from the relative importance of species in the fecal bacteria by XGBoost in ET-B. As seen in Figure 3B, the relative abundance of 20 species differed among the healthy, MCI, and AD groups.
These included Bacteroides uniformis, Parabacteroides merdae, Streptococcus salivarius, Blautia luti, Escherichia fergusonii, and others (Figure 3B). Because differences across all three groups were difficult to compare directly, the relative abundance of bacteria in the healthy group was compared with that in the MCI or AD group separately. In the comparison between the healthy and MCI groups, the area under the curve (AUC) of the ROC was 93.5%, and the 10-fold accuracy of the training and test sets was 0.78 ± 0.02 and 0.85 ± 0.03, respectively (Figure 3C). The participants in the MCI group had a higher relative abundance of Blautia luti, Streptococcus salivarius, Desulfovibrio simplex, Escherichia marmotae, Bacteroides faecis, and Gemmiger formicilis than the healthy group in the healthy-MCI model (Figure 3C). In the model for the healthy-AD group comparison, the AUC of the ROC was 94.7%, and the 10-fold accuracy of the training and test sets was 0.77 ± 0.03 and 0.81 ± 0.03, respectively (Figure 3D). Escherichia fergusonii, Streptococcus thermophilus, Mogibacterium neglectum, Lawsonibacter asaccharolyticus, and Dorea longicatena were higher in the AD group than in the healthy group (Figure 3D). At the same time, Parabacteroides merdae, Lachnospira eligens, Enterobacter hormaechei, and Catonella morbi were lower in the AD group than in the healthy group (Figure 3D).

Network of Fecal Microbiota and Metagenome Function in the Participants with ET-B
In the network analysis, Pseudomonas fidesensis, Pseudomonas syringae, Escherichia marmotae, and Escherichia fergusonii showed high positive correlations with one another in the AD group (p < 0.001; Figure 4A). However, they were negatively correlated with the bacteria in the MCI and healthy groups, and the negative correlation was stronger in the healthy group than in the MCI group. Lawsonibacter asaccharolyticus, although it belonged to the AD network, had a negative correlation with the four bacteria in the AD group and a positive correlation with those in the MCI and healthy groups (Figure 4A). The bacteria in the healthy group were positively correlated with each other. Most bacteria in the healthy group were negatively correlated with the primary bacteria in the AD group, such as Pseudomonas fidesensis, Pseudomonas syringae, Escherichia marmotae, and Escherichia fergusonii. Some bacteria in the healthy group were also negatively correlated with those in the MCI group, including Gracilibacter thermotolerans, Bifidobacterium longum, and Streptococcus salivarius (Figure 4A). These results suggested that the gut bacterial network in the healthy group might protect against the survival of AD-related gut bacteria, whereas the protective networking in the MCI group appeared too weak, allowing a faster move toward an AD state. Therefore, a robust bacterial network may prevent the increase in harmful bacteria that could result in AD. The primary bacteria in the healthy participants can protect against AD induction by improving the gut microbiota-brain axis. Nucleotide, purine, and pyrimidine metabolism was negatively correlated with the primary bacteria in the AD group but positively correlated with those in the healthy group (Figure 4B).
Protein digestion and most amino acid metabolism, such as cysteine, lysine, alanine, aspartate, and glutamate metabolism, exhibited a significant negative correlation with the bacteria in the AD group, while they had a positive correlation with those in the healthy group (Figure 4B). However, the metabolism of valine, leucine, isoleucine, tyrosine, tryptophan, and β-alanine was positively correlated with the bacteria in the AD group, in contrast to that seen with the other amino acids. The metabolism of starch, sucrose, and glucose, fat biosynthesis, and glucose-related pathways, such as the insulin and glucagon signaling pathways, were significantly negatively correlated with the main bacteria in the AD group and positively correlated with those in the healthy group (Figure 4B). However, fat degradation, digestion, and absorption were significantly positively correlated with the primary bacteria in the AD group, but negatively correlated with those in the healthy group (Figure 4B).

Memory Deficit in the Animal Study
A spatial memory deficit determined by the water maze test was induced in the MD group (intraperitoneal scopolamine injection; n = 10), compared to the normal group (intraperitoneal saline injection; n = 10). The latency time of the first visit to zone 5, where the platform was located, was longer in the MD group than in the positive (intraperitoneal scopolamine injection plus donepezil intake; n = 10) and normal groups in the water maze test (p < 0.001; Figure 5). The frequency of visits to zone 5 was also lower in the MD group than in the normal group (p < 0.05; Figure 5). Following the first two training sessions in the passive avoidance tests, short-term memory was measured in the third session. The latency time to enter the dark room was reduced in the order of the normal, positive, and MD groups in the third session (p < 0.001; Figure 5). The results suggested that the MD rats exhibited short-term and spatial memory impairment, compared to those in the positive and normal groups. However, the memory improvement in the positive group did not reach the level of the normal group.

Fecal Bacterial Analysis in the Animal Study
The Chao1 and Shannon indices of the fecal bacteria, representing the α-diversity, were lower in the MD group than in the normal group (p < 0.001; Figure 6A,B). The positive group (donepezil intake) had a higher Chao1 index than the MD group, suggesting prevention of the decrease in α-diversity of the gut bacteria caused by scopolamine injections (Figure 6A,B). The fecal bacteria were clearly clustered into three groups in the PCoA, indicating that the bacterial species were different in the three groups (p < 0.01; Figure 6C).
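For readers unfamiliar with the two α-diversity indices used above, the following sketch applies the standard formulas (Shannon H' = −Σ p_i ln p_i; Chao1 = S_obs + F1²/(2·F2), where F1 and F2 are singleton and doubleton counts) to a hypothetical OTU count vector. It is purely illustrative and is not the analysis pipeline used in the study.

```javascript
// Toy illustration of the Shannon and Chao1 alpha-diversity indices,
// computed from a single vector of OTU counts (hypothetical sample).
const otuCounts = [120, 45, 3, 1, 1, 2, 17, 1, 2, 60];

function shannon(counts) {
  const total = counts.reduce((a, b) => a + b, 0);
  return -counts
    .filter(c => c > 0)
    .reduce((h, c) => h + (c / total) * Math.log(c / total), 0);
}

function chao1(counts) {
  const sObs = counts.filter(c => c > 0).length; // observed OTUs
  const f1 = counts.filter(c => c === 1).length; // singletons
  const f2 = counts.filter(c => c === 2).length; // doubletons
  // classic estimator, with a simple fallback when no doubletons are present
  return f2 > 0 ? sObs + (f1 * f1) / (2 * f2) : sObs + (f1 * (f1 - 1)) / 2;
}

console.log(shannon(otuCounts).toFixed(2), chao1(otuCounts).toFixed(1));
```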
Rats fed on a high-fat diet exhibited a gut microbiota composition similar to ET-B. At the genus level, the relative abundance of Bacteroides and Bilophila was much higher, and that of Escherichia, Blautia, and Clostridium was much lower, in the normal group than in the MD group (Figure 6D). The changes in these bacteria were also seen in the positive group, but some protection was detected. Lactobacillus and Blautia increased in the positive group, compared to the normal group (Figure 6D). The relative abundance of Clostridium was much higher in the MD group than in the normal group, and it was lower in the positive group than in the MD group (p < 0.001; Figure 6E). The relative abundance of Escherichia was also higher in the MD group than in the normal group, and in the positive group it decreased to a level similar to that of the normal group (p < 0.05; Figure 6E). The primary bacteria in each group were identified from the LDA scores of the bacterial species: the LDA scores of Staphylococcus aureus, Clostridium aldenense, Ruminococcus torques, Faecalibacterium prausnitzii, Clostridium symbiosum, Lactobacillus vaginalis, Ruminococcus fauvreauii, and Bacteroides uniformis were higher in the normal group than in the other groups (Figure 6F). The relative abundance of Ruminococcus gnavus, Clostridium saccharophila, Lactobacillus mucosae, Clostridium citroniae, Clostridium ramosum, Parabacteroides distasonis, Bacteroides ovatus, Clostridium celatum, Escherichia coli, and Clostridium perfringens was higher in the MD group (Figure 6F). Therefore, different species in the same genus were either beneficial or harmful for the development of the disease, and although Clostridium and Escherichia were mainly harmful in MD, some species may inhibit MD from developing by interacting with other bacteria.

Metagenome Function of the Fecal Bacteria in the Animal Study
The metagenome results of the animals were similar to those in humans, but the correlations were stronger in animals than in humans (Figure 6G). The metabolism of amino acids, including glycine, arginine, cysteine, methionine, proline, serine, and threonine, the biosynthesis of tryptophan, tyrosine, and phenylalanine, and the degradation of valine, isoleucine, and leucine were much lower in the MD group than in the normal group. The metagenome pathways in the positive group were altered from those in the MD group and also differed from those in the normal group (Figure 6G). Starch and sucrose metabolism, glycolysis, fat digestion and absorption, and secondary bile acid biosynthesis were much lower in the normal group than in the MD and positive groups (Figure 6G). Interestingly, the pathways related to AD, such as the glutamatergic synapse, GABAergic synapse, peroxisome proliferator-activated receptor (PPAR-γ), and insulin signaling pathways, were reduced in the MD and positive groups, compared to the normal group. However, glucagon signaling showed the opposite pattern (Figure 6G).

Discussion
The autonomic nervous system regulates involuntary physiologic processes, such as heart rate, blood pressure, digestion, temperature, fluid balance, and urination, through the opposing actions of the sympathetic nervous system (SNS) and the PNS. The SNS is responsible for the fight-or-flight response through adrenergic synaptic neurons, while the PNS helps reduce stress and the heart rate through cholinergic synaptic neurons [17]. Autonomic dysfunction is prevalent in people with dementia, and impairment of the PNS, especially of the cholinergic neurotransmitter system, contributes to cognitive dysfunction, leading to AD [18].
The autonomic nervous system is bidirectionally involved in the microbiota-gut-brain axis, mainly through the vagus nerve, the principal component of the PNS. It plays a role in the cross-talk between the brain and gut microbiota and delivers messages to the brain regarding the gut microbiota and related metabolites. Moreover, factors such as stress inhibit the vagus nerve, potentiating gut inflammation and increasing intestinal permeability, possibly contributing to the modulation of the gut microbiota [11]. Therefore, suppressing the cholinergic nervous system can contribute to memory dysfunction and inflammatory bowel disease by modulating the microbiota-gut-brain axis. The present study showed that the gut microbiota composition differed among healthy, MCI, and AD patients of the Bacteroides enterotype: Bacteroides and Phocaeicola decreased, and Escherichia and Pseudomonas increased, in the MCI and AD patients, while Blautia was elevated only in MCI patients. When rats were fed a high-fat diet, Bacteroides was elevated to mimic the Bacteroides enterotype in humans. In the present animal study, the suppression of the PNS by scopolamine in the MD group decreased Bacteroides and increased Clostridium, Lactobacillus, and Escherichia compared to the normal group. Previous studies have demonstrated that a high-fat diet increases Bacteroidaceae compared to a low-fat diet in rats with intact gallbladders, but not in rats without gallbladders [19], suggesting that bile acid is responsible for the increased Bacteroidaceae. PNS inhibition suppresses bile acid synthesis and secretion, which may change the gut microbiota under a high-fat diet [19,20]. In the present study, the reduced Bacteroidaceae in the scopolamine-injected rats was related to the inhibition of bile acid secretion by the suppressed PNS. Therefore, PNS inhibition promotes memory impairment that is linked to changes in the gut microbiota. Cognitive function is modulated through the gut-brain axis, which involves PNS (vagus nerve) activity, circulating hormones, and proinflammatory cytokines. Increased sympathetic activity suppresses the PNS, which contributes to gut dysbiosis, further elevating inflammation and sympathetic activity [21]. PNS suppression is linked to memory impairment, which progresses to neurodegenerative diseases, including Alzheimer's disease [18,22]. Bacteroides decreased, and Clostridium, Escherichia, and Lactobacillus increased, in rats injected with scopolamine. Donepezil treatment (positive group) prevented the decrease of Bacteroides and the increase of Clostridium, but it elevated Blautia and Lactobacillus more than in the MD and normal groups. The α-diversity was lower in the MD group than in the normal group. Previous studies have shown that scopolamine injections decreased the α-diversity, increased Firmicutes, and decreased Bacteroides [23]. Scopolamine injections have been demonstrated to positively correlate with Clostridium, Bifidobacterium, Ruminococcaceae_unclassified, Lachnospiraceae_unclassified, and Lactobacillus while being negatively related to Desulfovibrio, Akkermansia, and Blautia [23]. Scopolamine injections increase intestinal permeability and induce memory impairment [23,24]. The present study also showed similar changes in the gut microbiota and memory function. Therefore, the changes in the fecal bacterial composition were associated with PNS suppression, which increases intestinal permeability and decreases the digestive fluids, especially bile acids.
Moreover, donepezil partially prevents the PNS inhibition caused by scopolamine injections. The primary gut bacteria in the different groups can be determined by LDA, which identifies the bacteria that are more abundant in one group than another, adopting a principle of analysis closely related to principal component analysis and linear regression analysis [24]. The present study demonstrated that only Escherichia was selected with a high LDA score in the AD group. Moreover, in the SHAP analysis using XGBoost, Escherichia and Pseudomonas were high in the AD group; both genera belong to the class Gammaproteobacteria within the phylum Proteobacteria. These bacteria infect the host through the consumption of contaminated foods. They cannot proliferate in healthy people, owing to their healthier gut conditions. However, they can grow under certain conditions, such as reduced secretion of digestive juices, especially bile acid, and increased intestinal permeability related to a suppressed PNS [25]. In AD patients, the serum concentrations of bile acid metabolites are altered compared to elderly persons without AD [25][26][27]. This suggests that AD bidirectionally increases harmful bacteria, namely Escherichia and Pseudomonas, through the disturbed microbiota-gut-brain axis. However, the primary bacteria in the MCI group overlapped with those of both the AD and healthy groups, suggesting that MCI can either progress into AD or be prevented from doing so by modulating the gut microbiota. Some beneficial gut bacteria in the MCI group could prevent or delay the progression to AD, but other harmful bacteria could drive the progression to AD. Previous studies have reported that an Escherichia coli infection increases the risk of AD by 20.8 (95% CI = 17.7-24.3) times [27], and that the genus Escherichia increases in both fecal and blood samples of AD and MCI patients [28]. Pseudomonas aeruginosa is linked to AD pathogenesis and is believed to promote amyloid-β deposition, called amyloidosis [29,30]. Indeed, the amyloid-associated pathogenesis of AD may be triggered by a shift in the gut microbiota from MCI to AD. Escherichia coli K99 pili protein and lipopolysaccharide (LPS) have been shown to be co-localized with the amyloid plaques in postmortem brain tissues of AD patients [28,31]. The present study also demonstrated that the relative abundance of Escherichia decreased in the order of the AD, MCI, and healthy groups. The lower Escherichia in the MCI and healthy groups could be due to beneficial bacteria, such as Faecalibacterium, Butyricicoccus faecihominis, and Bifidobacterium longum, which produce butyrate and prevent its increase. The network analysis represents the predominant bacteria with co-occurrence and positive and negative relations among the groups [32]. Bacteria with positive connections grow and survive with each other, harmful bacteria are then unable to cause infections, and bacteria with negative connections find it difficult to live together [33]. Therefore, harmful bacteria can be eliminated by increasing the beneficial bacteria that have negative connections with them [33]. In the present study, the network analysis showed that Escherichia marmotae, Escherichia fergusonii, Pseudomonas fidesensis, and Pseudomonas syringae were the primary gut bacteria in the AD group, and they were negatively associated with the predominant gut bacteria in the healthy and MCI groups.
However, the number of bacteria with negative connections to the AD group was much higher among the co-occurring bacteria of the healthy group than among those of the MCI group. The present network analysis results can be applied to explore therapeutic approaches for preventing or delaying the progression of MCI and AD by modulating the gut microbiota. The present study was novel in its intention to show potential therapeutic approaches to prevent and delay AD progression by modulating the gut microbiota with higher statistical power. Most studies on the gut microbiota are based on small sample sizes, and their power can be weak; the present study combined data from several studies to increase its power. The limitations of the current study were as follows: (1) Age and gender of the participants were the only data provided with the fecal bacteria files. Lifestyle factors, including nutrient intake, smoking, alcohol consumption, medications, and antibiotic use, play critical roles in modulating the gut microbiota [15]. However, these environmental factors could not be controlled for to identify potential confounders, since the availability of these data was limited. (2) The data were collected in case-control studies, so the results cannot be used to evaluate cause and effect. However, observational studies support that changes in the microbial community can influence host metabolism through metabolite production [34]. Further studies should be conducted to establish causality between the gut bacteria and AD. (3) In the present animal study, modulation of the fecal bacteria by scopolamine injections was conducted under a high-fat diet to mimic ET-B [16]. However, it also needs to be studied under a normal-fat diet (25-30 energy% diet), since a high-fat diet itself can change the fecal bacteria in ways related to memory impairment [35]. In conclusion, the fecal bacteria of the participants with the Bacteroides enterotype in the AD and MCI groups significantly differed from those in the healthy group. The relative abundance of Escherichia fergusonii, Mycobacterium neglectum, and Lawsonibacter asaccharolyticus was much higher in the AD group than in the normal group. That of Blautia, Faecalibacterium, Streptococcus, Collinsella, and Erysipelatoclostridium was higher in the MCI group than in the other groups, according to the optimal model generated from the XGBoost analysis. In the animal study, the relative abundance of Bacteroides and Bilophila was lower, and that of Blautia, Escherichia, and Clostridium was higher, in the MD group than in the normal group. The results suggested that MCI and MD were associated with PNS suppression, which was thought to eventually progress into AD. MCI increased Clostridium and Blautia, and its progression to AD elevated Escherichia and Pseudomonas. Therefore, the modulation of the PNS altered the gut microbiota and brain function, potentially through the gut-brain axis. Further studies are needed to identify therapeutic agents that enhance the activation of the PNS and decrease the harmful bacteria associated with AD by increasing the beneficial bacteria that are inversely related to them. Animal Care Thirty-eight-week-old male Sprague Dawley rats (n = 10 per group; total n = 30) were purchased from Daehan Bio Inc. (Eum-Sung, Korea). They were acclimatized for one week in an animal facility at Hoseo University.
Each animal was housed in an individual stainless-steel cage in a controlled environment (23 °C, 12 h light/dark cycle) and was given a high-fat diet (43 energy percent diet) and water ad libitum. All procedures conformed with the Guide for the Care and Use of Laboratory Animals (8th edition) issued by the National Institutes of Health (Washington DC, USA) and were approved by the Institutional Animal Care and Use Committee of Hoseo University (HSIACUC-201836). Experimental Design for the Animal Study Previous studies have shown an increase in the relative abundance of Bacteroides in animals and humans fed a high-fat diet [16]. Since the fecal bacteria of the participants in the present study mainly belonged to ET-B, all rats were fed a high-fat diet to mimic an ET-B-like fecal bacterial composition. The experimental design is presented in Figure 8. The rats were divided into the memory deficit (MD), positive, and normal groups. Scopolamine and donepezil (Sigma Aldrich, St. Louis, MO, USA) were dissolved in 0.9% saline and water, respectively. The rats in the positive group were orally administered donepezil at 1 mg/kg body weight/day using a feeding needle, and those in the MD and normal groups were given water for seven weeks. The dosage of donepezil (10 mg/kg bw/day) was assigned based on previous studies [36]. The scopolamine dosage (2 mg/kg bw/day) and the timing of measurement (50 min after injection) used to induce memory impairment were determined from our preliminary study.
At the beginning of the fourth week, the rats in the MD and positive groups received a daily oral administration of water or donepezil, respectively, and then, 30 min later, an intraperitoneal injection of scopolamine (2 mg/kg body weight/day). The rats in the normal group were intraperitoneally injected with saline, without the induction of memory deficit. At the end of the experiment, fecal samples were collected from the cecum of all rats. Memory Assessment Using the Passive Avoidance Test and the Morris Water Maze Test in the Animal Study The passive avoidance apparatus is equipped with a two-compartment dark/light shuttle box, wherein the rat quickly enters the dark room when left in the light shuttle box [37]. Electrostimulation (75 V, 0.2 mA, 50 Hz) is delivered to the feet of the rat when it enters the dark room, training the rat not to enter the dark room. The rat is trained in this manner in two acquisition trials lasting 8 h. Then, 16 h after the second trial, the latency time to enter the dark chamber is reassessed in the same manner but without electrical stimulation. Latency is checked up to a maximum of 600 s; the longer the latency time, the better the memory function.
The spatial memory function is evaluated using the Morris water maze test, which assesses hippocampus-dependent learning, including the acquisition of spatial memory [38]. At the start, a rat was placed at zone 1 of the pool and then began to search for the platform located at zone 5 [37,39]. The water maze test was conducted three times: the rat learned to find the platform located in zone 5 on days 1, 2, and 5, and the first two sessions were the training sessions. The latency time, frequency of visits, and duration in zone 5 in the third session were measured to evaluate spatial memory. The test was performed with a cut-off time of 600 s. Fecal Bacteria Sequencing for the Animal Study The fecal microbiome communities were investigated in cecal feces by MiSeq next-generation sequencing (NGS) [40]. Bacterial DNA was extracted from the feces of the rats, and the sequencing was performed using the Illumina MiSeq standard operating procedure and a Genome Sequencer FLX plus (454 Life Sciences; Macrogen, Seoul, Korea). The DNA was amplified with 16S amplicon primers targeting the V3-V4 region by PCR, and libraries were prepared from the PCR products according to the GS FLX plus library prep guide, as described previously [41]. Fecal Bacterial Community Analysis The fecal FASTQ files for the 410 human participants were downloaded using the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) toolkits (https://trace.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?view=software; accessed on 13 January 2022). The FASTQ files from the humans and animals were separately filtered and cleaned up with qiime2 tools (https://view.qiime2.org/; accessed on 10 February 2022). Operational taxonomic units (OTUs) were obtained for the healthy, MCI, and AD groups from the human samples and for the normal, positive, and MD groups from the animal study. Using the Qiime2 program, sequences from the 410 samples were obtained after merging the paired-end reads with the "make.contigs" command; the sequences were aligned with the SILVA v138.1 database, and non-target sequences, such as mitochondria, archaea, fungi, and unknown sequences, were removed. The remaining sequences were preclustered, and the chimeras were eliminated using the "chimera.vsearch" command [36]. The sequences were then clustered at 97% similarity. The taxonomy of the OTUs in the FASTQ files was annotated according to the NCBI Basic Local Alignment Search Tool (BLAST) (https://blast.ncbi.nlm.nih.gov/Blast.cgi; accessed on 24 February 2022). Finally, 40,289 representative sequences were obtained for the subsequent analyses, and their biome files containing the taxonomy and counts were used for further analysis. Enterotypes The enterotypes were classified by principal component analysis (PCA) using the taxonomy and counts of the fecal bacteria in the human fecal FASTQ samples, including the control (healthy, n = 153), MCI (n = 147), and AD (n = 110) groups. The number of enterotypes was assigned based on eigenvalues >1.5 in the PCA. Two enterotypes satisfied this criterion, using the FactoMineR and factoextra packages in the R software [42]. The enterotypes were named after their primary bacteria: the main bacteria in enterotypes 1 and 2 were Bacteroidaceae and Halomonadaceae, and they were designated ET-B and ET-H, respectively. ET-B and ET-H included 369 and 41 participants, respectively.
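For readers who want to reproduce the enterotype step, the following is a minimal sketch of the logic described above: PCA on the taxon abundance table, the number of enterotypes taken from the count of eigenvalues greater than 1.5, and each cluster named after its dominant family. The original analysis used FactoMineR and factoextra in R; this Python version, the k-means clustering step, and the input file name are illustrative assumptions rather than the authors' code.

```python
# Minimal sketch (not the authors' code): choose the number of enterotypes from
# PCA eigenvalues > 1.5 and label each cluster by its dominant bacterial family.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# rel_abund: samples x families, relative abundances (hypothetical input file)
rel_abund = pd.read_csv("family_relative_abundance.csv", index_col=0)

pca = PCA().fit(rel_abund.values)
eigenvalues = pca.explained_variance_
n_enterotypes = int((eigenvalues > 1.5).sum())   # paper's criterion: eigenvalue > 1.5

clusters = KMeans(n_clusters=n_enterotypes, n_init=10, random_state=0).fit_predict(rel_abund.values)

# Name each enterotype after its most abundant family (e.g., Bacteroidaceae -> ET-B)
for k in range(n_enterotypes):
    dominant = rel_abund[clusters == k].mean().idxmax()
    print(f"Enterotype {k}: dominant family = {dominant}, n = {(clusters == k).sum()}")
```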
The number of participants with ET-H was too small to analyze the fecal bacteria; hence, the fecal bacteria in ET-B were used to identify MCI- and AD-related bacteria. α-Diversity, β-Diversity, and LDA Scores Alpha-diversity (α-diversity) is the species diversity at a local scale, β-diversity is the ratio between the regional and local species diversity, and the LDA scores represent the effect size of each abundant species. The α-diversity metrics were calculated with the "summary.single" command in the Mothur software package, and the Chao1 and Shannon indices were obtained. For the β-diversity measurement, the clearcut command in Mothur was used to construct a phylogenetic tree, the "unifrac.unweighted" command was applied to calculate the unweighted UniFrac distance matrix, and the principal coordinate analysis (PCoA) was then used for visualization. The AMOVA command was used to test for significant differences in β-diversity among the groups. The LDA scores were analyzed with the LEfSe command in the Mothur program. XGBoost Classifier Training and the SHAP Interpreter The fecal bacteria compositions of the Bacteroides (ET-B) participants in the healthy, MCI, and AD groups were analyzed. The characteristic features included the relative abundance of the fecal bacteria at the species level, the anthropometric variables, and the serum biochemical and metabolic variables. The fecal data were divided randomly into 80% (n = 295) for the training set and 20% (n = 74) for the testing set. A random grid search was used to find the best hyperparameter settings, and the search was carried out 1000 times in the XGBoost algorithm [43]. We first trained the XGBoost algorithm with all of the variables to find the top 10 most important variables and then used these ten variables to retrain the XGBoost algorithm. The best model, with the highest receiver operating characteristic (ROC) area, accuracy, and 10-fold cross-validation score in the test data set, was selected from the random forest and XGBoost algorithm models. The 10-fold cross-validation was calculated using the cross_val_score function in the sklearn package. The function split the original training data into ten subsets and alternately used nine as the training data and one as the test data, iterating ten times. Finally, ten sets of results were generated to obtain the mean and variance, which were used as the final accuracy result of the model. A 10-fold cross-validation value of 0.9 indicated that the accuracy of the selected model was 90%. The SHAP analysis is a method used to explain the output of the XGBoost model [44]. We used the SHAP (0.39.0) package to calculate the SHAP value of each variable relative to the classifier (the healthy, MCI, and AD groups). We observed the importance of each variable and its impact on the classification. The network analysis to determine the links among the gut bacteria at the species level was carried out using the Cytoscape program, downloaded from its website (https://cytoscape.org/; accessed on 9 March 2022). Metagenome Function of the Fecal Bacteria by Picrust2 The metabolic functions of the fecal bacteria were estimated from the genes they contained. They were determined from the FASTA/Q files and count tables of the fecal bacteria using Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (Picrust2), a software tool for predicting functional abundances based only on marker gene sequences [45].
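As a point of reference for the α-diversity metrics reported above (computed in Mothur with the "summary.single" command), the sketch below applies the standard Chao1 and Shannon formulas to a single sample's OTU counts. The toy count vector is purely illustrative.

```python
# Minimal sketch of the two alpha-diversity metrics used in the paper (Mothur was used there).
# counts: 1-D array of OTU read counts for a single sample.
import numpy as np

def chao1(counts):
    counts = np.asarray(counts)
    s_obs = np.count_nonzero(counts)          # observed OTUs
    f1 = np.sum(counts == 1)                  # singletons
    f2 = np.sum(counts == 2)                  # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0    # bias-corrected form when no doubletons
    return s_obs + f1 ** 2 / (2.0 * f2)

def shannon(counts):
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()     # relative abundances of observed OTUs
    return -np.sum(p * np.log(p))             # natural-log Shannon index

sample = [120, 45, 3, 1, 1, 2, 0, 7]          # toy OTU counts, for illustration only
print(chao1(sample), shannon(sample))
```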
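The classifier workflow described above (an 80/20 split, a random hyperparameter search over XGBoost, retraining on the top 10 features, 10-fold cross-validation with cross_val_score, and SHAP interpretation) can be sketched as follows. The hyperparameter ranges, file names, label coding, and number of search iterations are assumptions for illustration, not the authors' settings.

```python
# Illustrative sketch of the XGBoost + SHAP workflow described in the text.
import pandas as pd
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split, RandomizedSearchCV, cross_val_score

X = pd.read_csv("species_relative_abundance.csv", index_col=0)   # hypothetical feature table
y = pd.read_csv("group_labels.csv", index_col=0)["group"]        # 0=healthy, 1=MCI, 2=AD (assumed coding)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# Random hyperparameter search (the paper reports 1000 iterations; 100 used here for brevity)
search = RandomizedSearchCV(
    XGBClassifier(eval_metric="mlogloss"),
    param_distributions={"max_depth": range(2, 8),
                         "n_estimators": range(50, 500, 50),
                         "learning_rate": [0.01, 0.05, 0.1, 0.3]},
    n_iter=100, cv=5, random_state=0,
).fit(X_train, y_train)

# Retrain on the 10 most important features, as described in the text
top10 = pd.Series(search.best_estimator_.feature_importances_, index=X.columns).nlargest(10).index
model = XGBClassifier(**search.best_params_, eval_metric="mlogloss").fit(X_train[top10], y_train)

print("10-fold CV accuracy (training data):", cross_val_score(model, X_train[top10], y_train, cv=10).mean())
print("held-out test accuracy:", model.score(X_test[top10], y_test))

# SHAP values explain each feature's contribution to the class prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[top10])
shap.summary_plot(shap_values, X_test[top10], show=False)
```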
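The species-level network itself was assembled and visualized in Cytoscape, and the text does not state how edges were defined. The sketch below therefore assumes a common approach, thresholded Spearman correlations between species abundances, and simply exports a graph that Cytoscape can import; the file names and cut-offs are hypothetical.

```python
# Illustrative co-occurrence network sketch; the edge criterion is an assumption, not the authors' method.
import pandas as pd
import networkx as nx
from scipy.stats import spearmanr

abundance = pd.read_csv("species_relative_abundance.csv", index_col=0)  # hypothetical samples x species
rho, pval = spearmanr(abundance.values)          # pairwise Spearman correlations between species (columns)

G = nx.Graph()
species = list(abundance.columns)
for i in range(len(species)):
    for j in range(i + 1, len(species)):
        if pval[i, j] < 0.05 and abs(rho[i, j]) > 0.6:    # assumed significance/strength cut-offs
            G.add_edge(species[i], species[j],
                       weight=float(rho[i, j]),
                       sign="positive" if rho[i, j] > 0 else "negative")

# Export for visualization in Cytoscape (GraphML is one of the formats Cytoscape imports)
nx.write_graphml(G, "cooccurrence_network.graphml")
```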
The metabolic functions were based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) Orthologues (KO), mapped using the KEGG mapper (https://www.genome.jp/kegg/tool/map_pathway1.html; accessed on 30 March 2022) [40]. The gut microbiome was used to explore the differences in metabolic functions among the groups. Statistical Analysis The statistical analysis was performed using SAS version 7 (SAS Institute; Cary, NC, USA) and the R package. The data were expressed as the mean ± standard deviation (SD), and statistical significance was set at p < 0.05. When the three groups differed significantly, multiple comparisons were conducted with Tukey's test. Visualization of the data was conducted using R-studio and the ggplot2 package. Institutional Review Board Statement: Each study from which the fecal sample FASTA/Q files were obtained was approved by the corresponding Institutional Review Board. Informed Consent Statement: Not applicable. Data Availability Statement: The authors confirm that the data supporting the findings of this study are available within the article and its Supplementary Materials. The FASTA/Q files for the human studies were downloaded from the NCBI, and the data for the animal studies are available upon request from the corresponding author.
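To make the group-comparison procedure in the Statistical Analysis paragraph concrete (a comparison across the three groups followed by Tukey's test when significant), here is a minimal sketch using Python's scipy and statsmodels in place of the SAS/R workflow the authors used; the latency values are toy numbers for illustration only.

```python
# Minimal sketch of the three-group comparison followed by Tukey's test when significant.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({                       # toy latency data (seconds), for illustration only
    "group":   ["normal"]*5 + ["positive"]*5 + ["MD"]*5,
    "latency": [540, 510, 580, 560, 530,  420, 390, 450, 400, 430,  120, 150, 90, 110, 140],
})

groups = [g["latency"].values for _, g in df.groupby("group")]
f_stat, p_val = f_oneway(*groups)         # one-way ANOVA across the three groups
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

if p_val < 0.05:                          # Tukey's test only when the omnibus test is significant
    print(pairwise_tukeyhsd(df["latency"], df["group"]))
```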
Recommendations and Action Plans to Improve Ex Situ Nutrition and Health of Marine Teleosts Abstract The International Workshop for Ex‐Situ Marine Teleost Nutrition and Health, hosted by Disney's Animals, Science and Environment in conjunction with the Comparative Nutrition Society, brought together over 50 animal experts and scientists representing 20 institutions to review current science and identify challenges of marine teleost nutrition and health. Invited speakers presented critical information and current research topics for areas of emphasis and expertise. Subject matter experts identified knowledge gaps and primary areas of focus to guide the scientific community's research efforts to improve the care of ex situ marine teleosts. The clinical medicine working group highlighted standardized approaches to ante‐ and postmortem sample collection, diet biosecurity and supplementation, advanced diagnostic methods, and expanded training in fish nutrition. Nutrition identified the creation of a husbandry and feeding management manual, comprehensive feeding program review and design, and specialty feeder/life stage nutrition as areas of focus, while animal husbandry focused on body condition scoring, feed delivery techniques, and behavioral husbandry topics. The physiology and chemistry and water quality working groups discussed components of the aquatic environment and their effects on fish health, including organic matter constituents, microbial diversity, disinfection, and managing microbiota. Finally, we reviewed how epidemiological approaches and considerations can improve our evaluation of aquarium teleost nutrition and health. The goals outlined by each working group and supporting literature discussion are detailed in this communication and represent our goals for the next 3 to 5 years, with the ultimate objective of the workshop being the production of a husbandry manual for marine teleost nutrition and health. Any scientists who feel that their experience, research, or interests align with these goals are invited to participate by contacting the authors. In January 2018, Disney's Animals, Science and Environment partnered with the Comparative Nutrition Society to present the International Workshop for Ex-Situ Marine Teleost Nutrition and Health. Over 50 experts in nutrition, clinical medicine and pathology, animal husbandry, physiology, water chemistry, toxicology, and epidemiology, representing 20 institutions, discussed the current science, research, and knowledge base for marine teleost nutrition and health. The purpose of the workshop was for experts to identify knowledge gaps and develop action plan recommendations that could guide the scientific community's research priorities to improve the care of ex situ marine teleosts. In the ex situ management of teleost species, the challenges can be as numerous and varied as the number of teleost species themselves found worldwide. Much of our current knowledge is limited to aquaculture species and institution-specific best practices. In aquaria, we often design nutrition plans to offer a broadly balanced diet to many unique species, often originating from many habitats throughout the world's oceans; various species often represent unique dietary strategies and/or feeding adaptations within their natural ecosystems. Our nutrition, husbandry, and medical care teams seek to provide the highest quality animal welfare possible. Each aspect of animal care is interdependent and often necessitates collaboration to maximize health outcomes. 
It was under this spirit of connection and cooperation that we chose to bring together this diverse group of specialists. By fostering connections and working together as a cohesive, multidisciplinary group of scientists across specialties, we sought to maximize research outcomes and, in turn, maximize animal welfare outcomes for all ex situ marine teleosts. It is critical in the care of aquarium animals that interdisciplinary groups communicate and coordinate animal care to maximize animal wellness outcomes. One example of this collaboration would be in assessing the feeding rate of a group-fed mixed-species exhibit based on animal health status, nutrient composition of the diet, and diet consumption by the animals. By collaborating on animal care, each group of specialists can offer feedback and unique perspectives on how their specialty can be leveraged to improve or enhance animal care. This communication contains a brief review of some of the available literature that supports the summarized reports from each of the six working groups of subject experts. While some points are specific to the working group's specialty, others are multidisciplinary and will facilitate collaboration across disciplines. While working groups have some overlapping recommendations, their specific, individual perspectives and framing are considered valuable to each audience and retained rather than summarized. Each group has suggested goals for the next 3 to 5 years, with the ultimate objective of the workshop being the production of a husbandry manual for ex situ marine teleost nutrition and health. CLINICAL MEDICINE AND PATHOLOGY WORKING GROUP The fishes comprise a large, paraphyletic group with over 34,000 species (Froese and Pauly 2021). Marine teleosts are an important subset and include species that are important for public aquaria and the aquarium fish hobby, food, bait, research, and restocking of natural communities. Fishes collectively have diverse feeding strategies and correspondingly varied species-specific macro- and micronutrient needs. Basic nutritional requirements and related nutritional diseases are well documented in only a small number of commercially important species, most of which are reared in freshwater and used for human food fish production, with nutritional emphasis placed on attainment of rapid growth rates and efficient production (NRC 2011; Hoopes and Koutsos 2021). In 2016, aquacultured products contributed 46.8% of global seafood production (FAO 2018). We simply do not know which (if any) of these physiologic models that have been developed for commercially cultured food fish are suitable for the majority of marine teleost fishes that are maintained in zoos and aquariums. Furthermore, the production goals (reproduction and longevity) of exhibit fishes or those maintained for conservation purposes can differ drastically from those that are reared for aquaculture purposes. Nutritional requirements likely vary accordingly (Hoopes and Koutsos 2021). Feeding requirements of various life stages, particularly broodstock and larvae, are of particular priority for aquarium fish that are reared for display or to achieve conservation goals (Hoopes and Koutsos 2021). In contrast, nutritional requirements of juvenile fish during grow-out may be of greater interest for species that are cultured for food production. The use of low-cost diets, while optimizing survival and yield, is critical to managing production costs in aquaculture operations.
Noninfectious and infectious diseases and syndromes may also have nutritional components (Blazer 1992; Davies et al. 2019), and these important interactions are largely unrecognized in the majority of aquatic species. Although some specific nutritional disorders are recognized, such as skeletal disorders, ascorbic acid, and thiamine deficiencies (Halver et al. 1975; Fitzsimmons et al. 2005; NRC 2011), many cases of nutritional disease are difficult to diagnose and only arrived at by exclusion. Hepatic lipidosis is an excellent example of a disorder that, while normal in some wild fish, is a common finding that is usually considered a pathology in animals under human care, particularly marine teleosts (Spisni et al. 1998; Wolf 2019). Despite being a frequent finding, the clinical significance of the condition is not always obvious (Wolf 2019). Additionally, fish that are maintained with suboptimal nutrition may present subtle anomalies that are characterized by poor growth, lack of vigor, poor reproductive performance, or increased susceptibility to infection (Davies et al. 2019). Thus, the role of nutrition, and even specific feed ingredients, is increasingly recognized as critical to optimal immune function and disease resistance (Zhao et al. 2015; Martin and Król 2017). Six complementary focus areas were identified by this working group as critical to improving the clinical recognition and management of nutritional disease in marine teleosts: (1) development of a standardized approach for antemortem (clinical) sample collection, (2) development of a standardized approach for postmortem (pathology) sample collection, (3) food biosecurity, (4) dietary supplementation, (5) application of advanced diagnostic methods, and (6) education and training in clinical fish nutrition. Development of a Standardized Approach for Antemortem Evaluation and Clinical Sample Collection During a clinical evaluation of a fish, morphometrics (including relative weight and body condition), the animal's general appearance, position in the water column, movement, and behavior have been used as a rudimentary proxy for nutritional status (Hoopes and Koutsos 2021). Occasionally, imaging or clinical pathology provide additional data; however, current levels of diagnostic investigation have limitations and can easily overlook subtle changes and effects to an animal's nutritional status. While radiology is an excellent tool for antemortem detection of skeletal and swim bladder anomalies (Soto et al. 2019), detecting liver pathology that is associated with nutritional abnormalities has been more dependent on necropsy findings and histopathology than premortem examination and clinical pathology (Wolf and Wolfe 2005). While some advances in fish health monitoring have been developed in aquaculture species (for example, automated blood cell count analysis; Fazio 2019), widespread aquarium adoption has been slow. A more robust, standardized, and comprehensive approach for antemortem clinical evaluation is needed. This will require multidisciplinary collaboration among veterinarians, nutritionists, epidemiologists, aquarists, and systems engineers. Concurrently, standards of communication between fish suppliers and institutions need to be developed to ensure consistency of feeding, diet, and husbandry and to help with acclimation of wild or aquaculture fish into new environments.
Both clinical evaluations and communication standards must remain flexible and evolve as new scientific information becomes available. As part of this focus area, a baseline data set for nutritional disease needs to be developed. A number of questions must be addressed during this process, including those regarding (1) which diseases and disease processes can be accurately diagnosed and evaluated and (2) what samples and diagnostic tests are relevant and appropriate. For example, in addition to morphometrics, gross visual scoring, and standard clinical pathology, how can imaging (ultrasound, radiography, MRI, CT), metabolite and/or microbiome evaluations, and other emerging technologies be standardized for various types of samples, validated, and prioritized considering resource limitations? Processes for evaluating functional feeds, defined as feeds that are supplemented with the intent of optimizing fish health as well as growth, need to be enhanced through the use of "omics" technologies to clarify molecular and cellular processes that are influenced by various additives (Martin and Król 2017). Similarly, species-specific baseline data (wild vs. captive, age-related, and sex-related) for feeding response behaviors as well as relevant organ and tissue samples must be gathered, standardized, and quantified. Approaches to best assess the nutritional status of mixed species tanks need to be established and evaluated. Development of a Standardized Approach for Postmortem Sample Collection A standardized process for postmortem evaluation and sample collection of a fish's nutritional and disease status will help inform clinical evaluation standards. Standardization will also provide more specific baseline data at both the gross and microscopic levels of structure. Likewise, diagnostic imaging, clinical pathology, and other morphometric and pathophysiologic indicators can be evaluated for their diagnostic or predictive value, especially in cases that are euthanized prior to necropsy. One example would be to develop a more specific and standardized definition of hepatic lipidosis, including gross, histologic, and clinical pathologic evaluation. As with clinical evaluations, postmortem standards must be flexible and updated over time as new information is learned. Diet Item Biosecurity Food biosecurity, defined here as the concern for accidental introduction of pathogens via food items, is another critical topic in need of further evaluation and standardization. Food items can serve as potential reservoirs for communicable pathogens such as viruses, bacteria, parasites, and fungi, but they can also contain heavy metals, microplastics, or adulterating ingredients. An example is liver pathology caused by aflatoxicosis resulting from the contamination of foodstuffs with Aspergillus flavus (Frasca et al. 2018). How does food biosecurity affect gut health and immune function? This focus area requires multidisciplinary collaboration among nutritionists, collectors and distributors of food sources (live, frozen, and commercial feeds), aquarists, microbiologists, immunologists, and epidemiologists to summarize the current literature and determine best practices for (1) pathogen reduction/destruction, including irradiation, freezing, and other methods; (2) food handling and storage; (3) pathogen/microplastics/toxin testing; and (4) quality control.
Supplemented Feeds A fourth relevant area of teleost nutrition in need of further investigation, expansion, and standardization is the use of food for oral delivery of desired supplements and medication. Food is frequently used as a primary delivery mechanism for drugs, vaccines, and vitamin supplementation. Specific protocols for nutritional support are needed and may include the modification of delivery methods for diverse taxa and the use of food items for delivering feed additives, which could include specific vitamins or minerals, appetite stimulants, probiotics, or prebiotics (Hoopes and Koutsos 2021). The consideration of equipment, carrier methods (e.g., gel food, microencapsulation, and biodegradable gels), and administration logistics (e.g., target or tube feeding), as well as further investigation into the application of markers to ensure proper administration/consumption, is warranted. In addition to troubleshooting the practical approach to delivery, there is also a need to better understand vitamin and mineral requirements as well as drug pharmacokinetics, which likely depend on species, life stage, and reproductive status. Advanced Diagnostics While advanced diagnostic techniques may be a subsection of focus areas 1 and 2 (standardization of approaches for clinical and pathologic evaluation of nutritional disease), this topic is so broad, technical, and rapidly evolving that the authors believe it deserves special attention. As advanced human and domestic animal diagnostic approaches develop, progress, and become conventional, many of these methods become more readily adaptable and available for other species, including fish. Concurrently, many advances in teleost biology and physiology, environmental microbiology, and related fields may have clinical or pathologic applications. Areas that are currently underused in teleost clinical diagnostics and pathologic investigations include comparisons with established species-specific baseline values and health assessments using immunologic methods and species-specific markers (including evaluation of blood and mucus samples, genomics, transcriptomics, proteomics, metabolomics, and other relevant molecular markers). Additionally, the evaluation of microbial communities in the fish gut and any apparent alterations in response to different external and internal conditions should be considered. The role of gut microbes in digestion and nutrient availability is poorly understood in teleosts and is another area that is in need of investigation (Hoopes and Koutsos 2021). Education and Training in Clinical Fish Nutrition Tackling the basic knowledge gaps and developing standard approaches within the previously discussed focus areas are critical to the advancement of marine teleost nutrition. However, without targeted programs in education and training, the knowledge gained will not be disseminated effectively to all of the relevant stakeholder groups, including those at the front lines of husbandry and veterinary care. Coordinated collaboration among nutritionists, veterinarians, and husbandry staff is critical to success, as is proper identification and targeting of other stakeholder groups, such as collection fish wholesalers and producers, veterinary students, and diet item and feed suppliers.
NUTRITION WORKING GROUP The nutrition working group consisted of participants from a wide range of backgrounds and expertise, including applied nutritionists and husbandry specialists for marine teleosts as well as commercial aquaculture species, nutritionists from commercial feed manufacturers and ingredient suppliers, and representatives from academia and conservation communities. The group recommended several areas of focus for which the current body of knowledge should be summarized and additional information be further developed. These areas of focus are detailed below and include creation of a husbandry manual containing practical feeding management information; a comprehensive review of current knowledge of nutrient requirements of marine teleosts; a summary and/or database of available food items, including assessment of their sustainability and considerations for food options for the future; and a review of current knowledge of larval and broodstock nutrition, as well as that specifically related to nutrition of herbivorous fish. Practical Feeding Management The nutrition working group recommends and supports the creation of a readily accessible husbandry manual to provide comprehensive information regarding practical feeding and nutritional management for marine teleosts under human care. Specific areas to address include techniques for nutrient delivery, including target feeding and methods for diet distribution and usage, and provisioning feed at an appropriate rate for the physiology and behavior of the target animal. Examples of this type of contribution can be found in many species survival plans as well as for some other aquatic species (e.g., elasmobranchs; Janse et al. 2004). Additionally, methodology is needed for health and welfare assessment in applied nutrition programs, including the development of behavioral tools to assess feeding response, body condition scoring methods, and tools for the estimation of biomass in large, mixed-species exhibits, which may be guided by behavioral work that was intended to facilitate other interventions (e.g., Corwin 2012). Finally, guidelines for food item and diet preparation are necessary, including methods for quantifying food items, frequency and types of analyses required, and standard procedures for maintaining food safety and quality through diet item delivery, storage, and feeding, such as recommendations for handling fish that will be fed to fish-eating animals (Crissey 1998) and for food preparation and feeding fish (Hoopes and Koutsos 2021). Designing Feeding Programs for Marine Teleost Fish The nutrition working group recommends a comprehensive peer-reviewed manuscript of the current knowledge of nutrient requirements and feeding programs for marine teleosts under human care. Specifically, this manuscript/review will summarize the current knowledge of nutrient requirements of various marine fish species to serve as a guide for diet development and assessment as well as integration of nutritional ecology knowledge to establish feeding guidelines for a species based on in situ cohorts, with the ultimate goal of using this expanded knowledge base to make actionable recommendations of feed management for marine teleosts. The vast majority of literature concerning the nutrient requirements of teleost fish has been generated in and for aquaculture species for which growth rate is often the primary variable by which titration of nutrient requirements is quantified. 
Thus, factors such as longevity, display needs (e.g., pigmentation of fish on display), and reproductive success will have to be taken into consideration when applying and adapting data that are generated in aquaculture species. Additionally, a summary of best practices for animals with special dietary needs, including quarantine animals and animals with acute or chronic medical and rehabilitation challenges, will be important. A substantial portion of this information has recently been published (Hoopes and Koutsos 2021). Diet Items for Marine Teleosts The nutrition working group recommends the development of reference materials that summarize the availability of and opportunities for incorporation of various diet items for marine teleosts under human care. The working group recommends that the dissemination of this information include at least one peer-reviewed journal article and the establishment of a framework and protocol for an open-source, online database of foodstuff nutrient composition and a summary of the available diet items (current and historical), encompassing the broad range of fresh and frozen aquatic food items, supplements, and dry/prepared diet items that are typically used in marine teleost feeding programs. Additionally, the sustainability of these items should be assessed and improved. As a result of this workshop, research has been initiated to investigate the application of black soldier fly Hermetia illucens larvae meal as a sustainable replacement for fish meal in marine teleost diets with no significant differences in growth performance (S. Williams and coworkers, unpublished data). Other opportunities considered for future diet development (e.g., culture methods for live feed organisms, alternative sources of nutrients, improving formulation of diets for water quality, value of diversity in the diet) should also be investigated. Larval and Broodstock Nutrition The nutrition working group identified the critical need to facilitate breeding within marine teleost facilities and thus reduce the need for collection from wild fish stocks. The working group recommends the development of a summary of current knowledge of larval and broodstock fish nutrition, expanding on previous publications (e.g., Hamre et al. 2013). Recent successes with captive propagation of acanthurid, chaetodontid, and labrid fishes should be emphasized, with a focus on how these advances collectively contribute to the aquaculture of marine teleosts. A summary review article on this topic and contributions to an online database detailing foodstuff nutrient composition are anticipated outputs of this focus area, particularly identifying information gaps that can be targeted for future applied activities. Specialty Diets Including Herbivorous Fish Nutrition The nutrition working group recognized the need for a comprehensive review of the current knowledge of herbivorous fish nutrition, expanding upon work previously published (e.g., Clements et al. 2009). Recent work has improved our understanding of both feeding strategies and nutrient usage by a variety of herbivorous fish species; however, the majority of information available on the natural diets of herbivorous fish that graze on coral reefs amounts to little more than feeding observations. It is recommended that newer data sets, including biomarker data that identify actual dietary targets and specific assimilation of dietary elements, be a focus area of this review.
A separate database detailing nutrient composition (and/or utilization) of foodstuffs consumed by this subgroup of fishes may be an additional useful output. Overall, the nutrition working group identified these five areas for further summary and, ideally, additional research and future funding focus. Integration of current knowledge and recognition of data gaps will provide a foundation for future collaborations and targeted activities for greatest progress moving forward. ANIMAL HUSBANDRY WORKING GROUP A variety of methods are used for managing nutrition and delivering food to fish in large aquariums. These methods vary depending on species, age of individuals, fish population, facility preferences, behavioral husbandry application, exhibit size and shape, cohabitants in the exhibit, and feed type. Because of these distinctive variables, there is no simple solution for determining a best method that applies to every system. However, there is information, albeit limited and dispersed, on many strategies used in aquariums. Every aquarium team tends to develop its own set of institutional knowledge based on its physical facility, species, and staffing. Modifications of common feeding methods developed by on-site staff are often trialed before a specialized practice is developed for that particular operation. Most of the peer-reviewed information on teleost nutrition is based on aquaculture species. This information is often based on economics and fast growth, thus not taking into account a balanced nutrition plan for all life stages and mixed-species habitats as we would see in aquarium settings. Applying this information for use in large aquariums containing various species and sizes of fish often requires modification of practices that were successfully used by aquaculture industries. Behavioral observations, health assessments, necropsies, and animal body scoring are all used as indicators of feeding strategy effectiveness and evidence that proper nutrition requirements are being met. Body Condition Scoring Applying techniques to evaluate body condition and feeding response of teleost fishes as a way to score the population is a newer practice within the industry. While detailed body scoring criteria have been developed for certain species of teleosts (Priestley et al. 2006) and elasmobranchs (Kamerman et al. 2017), they are limited in comparison to the mammalian, avian, reptile, and amphibian species resources available (AZA Nutrition Advisory Group 2021); such extensive resources do not yet exist for most teleost species. Further, the subjective nature of body condition scoring can be challenging due to differences in human perception; more work is needed to develop objective descriptions to improve body condition scoring accuracy relative to clinical evaluation. Feeding Techniques Traditional feeding techniques, designed to reduce competition between specific animals and/or species in a large multitaxa exhibit, can include broadcast feeding, location-specific feeding, simple target feeding, mechanical feeders, and the use of nonspecific diet items. To ensure that an adequate amount of diet is consumed, aquarists may feed a greater amount of food to multiple species at one time in a "broadcast" feed over a large surface area or choose to feed smaller amounts of food in specific locations targeted to certain animals or species.
A disadvantage of broadcast feeding, whether in a large system or smaller "jewel" tanks, is the potential for more food waste by overfeeding or underfeeding certain individuals due to competition. Modified mechanical feeding techniques, such as pumping the food to underwater outlets and offering nonspecific diet items like lettuce in underwater feeders and bags, can help disperse food at various depths and provide different feeding locations for a variety of species in large exhibits. Feeding a variety of taxa often involves various feeding groups (i.e., carnivores, herbivores, and omnivores) of fish. It is very important to be familiar with your collection and each species' life history to ensure that the fish are offered the correct diets in the correct manner. This is often overlooked in the development of feeding regimes. Knowledge of your collection is essential for development of effective diets and feeding methods. The development of specific care plans that are focused on caring for particular groups (genus or species, life stage, habitat) can improve the specificity of feed delivery and health outcomes. While the Nutrition Working Group section of this manuscript has previously discussed nutritional areas of potential future research, there is plenty of potential for improvement in husbandry feeding protocols as well. Record keeping within institutions and sharing techniques across institutions could lead to codification of best practices for many diverse species. Behavioral Husbandry Training husbandry behaviors is an essential part of well-rounded and excellent animal care programs, and it can provide animals with the mental stimulation, physical exercise, and cooperative veterinary attention and treatment they need to successfully survive in the environment that is provided for them in zoological settings (Ramirez 1999). Studies have shown that fish have demonstrated a high capacity for learning through observational, spatial, and aversion techniques (Helfman and Schultz 1984). More advanced behavioral husbandry practices can be implemented to create a comprehensive and effective feeding strategy in even the most complex aquatic environments. By using operant conditioning techniques through positive reinforcement to modify behavior, aquarists have been able to condition animals (individuals or groups) to come to a recognized "target" (typically a discernable shape) or location for a feeding session. Target-trained fish are often fed by hand or tongs, which allows for an exact amount of diet and supplements to be delivered. Many times, these animals are fed at the same time as others in an effort to keep them separated and prevent potential interruptions and/or competition with tank mates. Animals can also be trained to voluntarily move into a net or holding area for feedings (Corwin 2012). This not only eliminates competition and ensures accurate delivery of diets and supplements, but can also help set these animals up for future success in training them to participate in other aspects of their husbandry care. As more innovative feeding methods for teleost fishes are designed and implemented, it will be vital for institutions to both share their trials and successes and promote industry standards for marine teleost nutrition. Currently, the details and complexities of providing for teleost nutrition in public aquariums are generally shared between facilities informally through conversations between colleagues, industry listservs, and annual conferences.
To efficiently capture these information exchanges, share best practices, and archive them for the future, there is a considerable need for a single common electronic hub that can be easily accessed and used among institutions. PHYSIOLOGY AND CHEMISTRY WORKING GROUP The physiology and chemistry working group consisted of participants from a wide range of backgrounds and areas of expertise, including applied chemists, animal husbandry experts, and academic nutritional physiologists. After discussing the needs for understanding the aquarium environment and the physiological demands of the aquarium inhabitants, the group generated two focus areas that are critical for moving the field forward. The Organic Matter Constituents of Dissolved Organic Matter Using Fourier-transform ion cyclotron resonance mass spectrometry, it is possible to identify the number, and sometimes types, of compounds that exist in the dissolved organic matter (DOM) of seawater (Hansman et al. 2015). Seawater that is taken from the world's oceans is composed of tens of thousands of different compounds, whereas DOM from recirculating aquaria has significantly fewer (Semmen, unpublished data). Dissolved organic matter constituents may play a role in aquarium health, and thus we ask the following questions: what are the compounds composing the DOM of natural seawater (sensu Hansman et al. 2015), and what is missing in aquarium water? Do the constituents of DOM affect the "health" of the aquarium? This should be a primary area of research and will dovetail with microbial diversity and function. What roles do microbes in the aquarium environments play in DOM chemical diversity and variation? Which microbes matter more, those in the aquarium environment (including those associated with specific animals or plants) or those in the biological filtration systems? And finally, is sterilization with ozone/UV a good idea if beneficial microbes are killed in the process? Does this affect DOM concentrations and diversity? Understanding the Needs of the Consumers within the Aquarium Environment Each aquarium's physical environment will be different, and clearly not all community members can eat the same thing or require the same amount of space. By using energetics models (based on respirometry and accelerometry of at least closely related species to those in captivity [e.g., Parsons 1990] and digestibility estimates for different foods [German 2011]), different guidelines could be developed for each group or individual within a population. This requires significant work to be performed on many different species but with their management in mind. Data from aquaculture efforts cannot be easily extrapolated to aquarium fishes, especially because of the diversity of species kept in aquarium environments (aquaculture is focused on a handful of mostly carnivorous species; Clements et al. 2014). Thus, we are calling for deliberate studies of digestibility and energetic needs of as many species held in aquaria as possible to develop a better understanding of the true needs of each species. With these kinds of data, trophic guilds (Clements et al. 2017) can be identified for each environment based on their needs. Although covered in other sections (e.g., Nutrition Working Group), providing as realistic food as possible will be crucial for lower trophic level consumers (herbivores/detritivores).
Some of the most visually dynamic members of aquatic communities, like herbivorous surgeonfishes, cannot subsist on diets of lettuce or kale alone: these are terrestrial plants, whereas surgeonfishes naturally graze or browse on algae (Choat et al. 2002, 2004), and algae are very different from terrestrial plants in terms of nutrients and fiber (Choat and Clements 1998). What can be done to increase the utility of food for consumers that feed at lower trophic levels? We recommend a dedicated focus on determining the energetic needs of each species as well as the digestibility of different diets. Additionally, the potential of outdoor enclosures with naturally occurring algae should be examined. Are such enclosures sufficient to grow the requisite algal diversity (especially if the algal community is seeded with algae from the fishes' natural environment)? Should herbivorous/detritivorous animals be given time in outdoor enclosures as a means of increasing their access to algae as a food source, or is it sufficient to grow algae and biofilms in tanks exposed to sunlight (e.g., Hauter and Hauter 2019) and then bring these food items inside to offer to the consumers? Finally, in exploring the possibilities of feeding supplemented diet items, some experimentation with different compounds (like humic substances; Yılmaz et al. 2018) may help, although a natural diet and environment are best when feeding fish species. Matching the natural diet as much as possible is ideal, and meeting the energetic and nutrient requirements is critical. This is an area in which current practice is inadequate, especially for lower trophic level consumers in marine aquarium environments; we must therefore work to improve managed diets where we cannot practically feed the natural diet (Yılmaz et al. 2018). Finally, ensuring the palatability of offered diet items, so that they are not left unconsumed to degrade in the water, is critical. Overall, the limits of recirculating systems are probably linked to water chemistry and microbes on some level (see the Water Quality Working Group section). However, attempting to understand the constituents of seawater beyond ionic or nitrogenous compounds (e.g., ammonium, nitrite, nitrate) will be critical for maintaining long-term recirculating systems. Moreover, improved tools are needed to better understand individual animal performance within a larger community, thereby improving our decisions regarding animal health and maintenance.

WATER QUALITY WORKING GROUP

Water is one of the richest and most diverse microbial reservoirs on Earth, and it shapes the biota of exposed fish tissues. Perhaps not surprisingly, the establishment of interactions with the water biota is critical for fish to adapt to many adverse aquatic environments. Like other animals, fish host and live among communities of microbes that influence a wide variety of their biological processes. Recent surveys of healthy fish microbiomes have begun to document which species are present, how they facilitate fish health and functioning, and the role of water quality in selecting, promoting, and controlling them. More comparative studies are needed to determine whether characteristics such as nutrient and mineral availability are major determinants of the fish microbiome. Just as digestive tract microbes interact with the food consumed by terrestrial vertebrates, the fish gut and gill microbiomes mediate the aquatic-based diet and nutrient ion exchange.
Microbial Diversity

Microbial diversity is influenced by environmental complexity. The density of microbes in many aquatic systems is staggering, with tens of millions of organisms and thousands of species per liter having been described (Sogin et al. 2006). For example, seasonal time series analysis revealed repeated annual patterns in marine microbial communities off the coast of San Pedro, California (Fuhrman et al. 2008). These repeating patterns indicate that environmental parameters are ecological drivers that shape marine microbial communities, and they include chemical and physical parameters such as temperature, pH, nutrients, and salinity (Van Bonn et al. 2015). In contrast, water treatment processes in artificial systems hold chemical and physical values within a restricted range of variation, and their management typically includes reduction in microbial abundance. As a result, the microbial assemblages in aquarium systems likely differ significantly from those in natural systems, with the former having lower microbial diversity. This makes niche space available for potential pathogens and may reduce immune system memory in resident animals. Microbes found in managed aquatic systems are subject to powerful selection pressures, and the equilibrium state of managed systems is characterized by a much different microbial ecology than naturally occurring systems (Vadstein et al. 2018). This, in turn, may have a profound effect on the adaptive immune responses of host organisms sharing the environment. For these reasons, disinfection and water conditioning practices and procedures for aquarium systems should be reviewed.

Disinfection

The disinfection of managed systems is used to reduce infectious organisms as well as to control undesirable algae and the color of the water column and surfaces. It is a "point source" process, as the inline application of ozone during filtration dissipates before reaching the habitat. Advances in disinfection processes, such as more efficient ozone mass transfer, automated control systems, and less disruptive ozone contact used in combination with foam fractionation, help mitigate some of the undesirable effects of residual oxidants. We propose that, in light of recent findings, it is time to eliminate the word "disinfection" from our aquarium vocabulary. There are indirect effects on health caused by the disruption of the fishes' host and environmental microecology brought about by oxidative disinfection. The literature suggests that ozone could also affect fish health indirectly by shifting metals to more biologically reactive valence states of iron (Bagnyukova et al. 2006), copper (Craig et al. 2007; Bopp et al. 2008), and chromium (Lushchak 2008; Lushchak et al. 2009a, 2009b; Kubrak et al. 2010; Vasylkiv et al. 2010), which leads to oxidative stress. In some cases, this process may convert trace metal nutrients into toxic ions. There is also evidence that ozone or its derivative by-products might cause oxidative stress in fish directly (Fukunaga et al. 1999; Hébert et al. 2008). The totality of environmentally induced oxidative stress has been shown to have a cumulative effect on an animal's physiology, resulting in a stabilized, prolonged "quasi-stationary" state (Lushchak 2011).
This could help explain why certain fish species, especially elasmobranchs, are more easily pushed over the edge by ozone-produced total residual oxidants in the aquarium environment than teleosts (Rudneva et al. 2014).

Managing the Microbiota

Current research provides a thorough description and characterization of the gut microbiomes of aquaculture species (Wong and Rawls 2012; Llewellyn et al. 2014; Trinh et al. 2017). The changes that these communities exhibit when prebiotics, probiotics, or other feed additives are incorporated into the diet have been documented (Abdel-Wahab et al. 2012; Sihag and Sharma 2012; Karlsen et al. 2017). Humic substances, when added to the water or diet, may support the animal's defense system by inducing a number of nonspecific host immune responses, including reduced metal bioaccumulation in fish tissues and increased production of biotransformation enzymes and stress defense proteins such as chaperones or heat shock proteins in fish and invertebrates (Menzel et al. 2005; Abdel-Wahab et al. 2012). In a study featuring Kelp Grouper Epinephelus bruneus, the addition of 1.0% chitin or chitosan extracted from shrimp shells to the diet stimulated the immune response and enhanced disease resistance against infections of the protozoan parasite Philasterides dicentrarchi (Harikrishnan et al. 2012). Prebiotics and probiotics can also lead to health-promoting postbiotics, generated when a healthy gut microbiota metabolizes ingested food into various beneficial compounds. From human gut microbiota studies, we know these might include amino acids, vitamins, and short-chain fatty acids, and they may be anti-inflammatory, immunomodulatory, antiobesogenic, antihypertensive, hypocholesterolemic, or antiproliferative and may enhance antioxidant activities (Shenderov 2013; Sharma and Shukla 2016).

Microbial Maturation

Many studies on the effects of prebiotics and probiotics on farm-reared fish and their associated microbiomes have been published in the past decade (Goldin and Gorbach 2008; Sihag and Sharma 2012). This includes a new way of thinking about how hygienic barriers (e.g., antibiotic regimens, ozone and UV disinfection), organics, and other nutrients are managed to allow for the microbial maturation of water and systems (Attramadal 2011; Attramadal et al. 2012). Microbial maturation is defined in part as the selective promotion of slow-growing competition specialists, the K-strategist bacteria (Skjermo et al. 1997; Salvesen et al. 1999; Skjermo and Vadstein 1999; De Schryver and Vadstein 2014), which are assumed to act as a barrier against invasion and establishment by opportunistic r-strategists (Stecher and Hardt 2008). In the natural environment, heterotrophic bacteria obtain part or all of their carbon (C) resources from algae (phototrophs), making the interdependence between bacteria and algae inseparable. At the same time, heterotrophs compete with algae for the available reactive phosphorus (ortho-P) in the water column and for biofilm recruitment space. Therefore, a healthy biofilm community, characterized by high diversity and stability, depends on a healthy ratio of heterotrophic to phototrophic microorganisms. This ratio is dictated by the C:P ratio (determined to be around C:P = 1) in the water column and by controlled niche spaces or surfaces (Hall and Pepe-Ranney 2015).
Application

We must keep in mind that we are feeding not only the fish host but also its symbiotic microbiome. Microecology principles dictate feeding as close to a natural diet as possible. Fresh and raw foods are good, but facilitating the feeding of living biota could go beyond improved nutritional content to the infusion of natural probiotics and prebiotics. These are critical to the host's utilization of the food's nutritional content, including naturally occurring extracellular polymeric substances, chitin, and humic substances. Enriching the water might include routinely adding water-conditioning probiotics and prebiotics to avoid r-strategist takeovers, tapering traditional disinfection and organic matter control, and fostering a more ecologically diverse biofauna. The latter might be accomplished with a diurnal rotation routine that moves water and fish through interconnected microcosm modules whose environments and substrates facilitate the culturing of biota native to the fishes' grazing environments.

EPIDEMIOLOGY

The epidemiology working group concluded that there is a critical need to first design and distribute a survey to collect the information needed to assess and evaluate factors that affect teleost health. It is important to be able to accurately quantify disease status and the risk factors (including the environment) that are associated with individual and population teleost health. A consensus was reached among all workshop participants regarding the need to gain knowledge for different teleost species and life stages. General approaches and considerations to address current challenges in teleost health were outlined by the epidemiology group and subsequently complemented by specific questions arising from each working group.

General Approaches and Considerations

It was agreed that species-specific diet requirements are needed, as well as behavioral standards and better knowledge of environmental conditions, including water quality. Understanding changing nutritional, behavioral, and clinical demands at different life stages within species was also highlighted as an important area of investigation. The quality and quantity of available records and data are a primary consideration when moving forward and trying to better understand teleost health. Procedures for obtaining future data will be important, but access to centralized resources of currently available data would be beneficial as well. The "unit of interest" (tank, population of a certain species, individual fish) needs to be clearly determined when assessing teleost health. At the same time, it is important to identify parameters and approaches that can be measured and used effectively across multiple species (e.g., delivery of food and managing feeding behavior); these should be differentiated from instances in which species- or age-specific approaches are more appropriate. Although the goal is not always to compare ex situ populations to wild ones, it is important to have reference data from wild populations to understand baseline parameters. The biggest challenge in the care of ex situ marine fish is to establish more complete definitions and standards for animal health, reproduction, nutrition, environment, and behavior. Thus, each working group developed central questions that will serve as the foundation for developing the proposed survey.
This group thinks it is extremely important to address current gaps in knowledge regarding teleost health and nutrition to better address the challenges faced by ex situ teleost populations. Collecting and analyzing this information will be a first and critical step toward maintaining the health of ex situ teleost populations worldwide.

CONCLUSION

In analyzing the framing used by media outlets when discussing zoos and aquaria, we found that institutions can be viewed in many contexts, with animal welfare, business interests, and their function as entertainment/recreation accounting for 85% of the articles that we studied. Additionally, while a majority of these media articles were supportive of zoos and aquaria, the negative articles were overwhelmingly focused on animal welfare topics (Maynard 2017). This growing public focus on animal welfare is an example of why scientists and animal care professionals must continue to collaborate and engage in science to improve the care of our aquarium species. The importance of animal welfare to the public, and the ability of zoos and aquaria to affect the public, show how excellence in animal care directly influences guests' perceptions and an institution's ability to educate the public and advance its conservation efforts. The goals and areas of focus outlined above represent information that needs to be expanded and developed over the coming years to provide a strong foundation for the production of a husbandry manual for marine teleost nutrition and health. Much of the existing knowledge in teleost health and nutrition is focused on production aquaculture, which seeks to maximize outputs while minimizing input costs. In aquaria, our goals are often the opposite, focusing on maximizing life span and maintaining animal health with vibrant coloration. Our goal in developing a robust animal care manual with a multidisciplinary focus is to continue improving the care and condition of our collection species. It is important to consider all aspects that affect fish health and wellness when designing an animal care plan, including clinical medicine, nutrition, animal husbandry, and water quality. Enhancing our knowledge of both in situ and ex situ systems will improve our understanding of how fish interact in complex environments and better support their diverse requirements in aquaria settings. By enhancing our animal care practices, we can better serve both our local communities and our conservation and education goals as zoos and aquaria. Through fostering a strong communication network among fish professionals, we hope to gain greater insight into best practices as well as emerging science to drive innovation and excellence in teleost care. By collaborating across institutions and disciplines, we hope to promote and enhance the welfare of fishes under human care through improved nutrition and health.
Reduction of Losses and Operating Costs in Distribution Networks Using a Genetic Algorithm and Mathematical Optimization

This study deals with the minimization of investment and operating costs in distribution networks, considering the installation of fixed-step capacitor banks. The problem is represented by a mixed-integer nonlinear programming model, which is solved by applying the Chu and Beasley genetic algorithm (CBGA). Although this algorithm is a classical method for this type of optimization problem, the solutions found with this approach are better than those reported in the literature using metaheuristic techniques and the General Algebraic Modeling System (GAMS). In addition, the CBGA obtains results in a few seconds, making it a robust, efficient, and capable tool for distribution system analysis. Finally, the computational tools used in this study were developed in the MATLAB programming environment and applied to test feeders composed of 10, 33, and 69 nodes with radial and meshed configurations.

General Context

Distribution networks are a fundamental component of power systems, responsible for guaranteeing the power supply to end customers (i.e., residential, commercial, mixed, and industrial customers) [1,2]. Due to their radial topology, a large percentage of power is often lost in these systems (i.e., dissipated as heat). Around 70% of the energy losses in power systems occur in distribution systems, and 13% of the energy delivered to these systems is lost during the distribution stage [3,4]. The author of [5] discussed the amount of money invested in these systems and the types of losses present, noting that two-thirds of the investment in power systems is associated with distribution, for which reason distribution is known as the "invisible giant" of the power system. Power losses can be categorized into technical and non-technical losses [6]. Technical losses arise from energy dissipation in power system components such as transmission and distribution lines and primary and secondary conductors, as well as in transformer windings and cores [7,8]. Non-technical losses are caused by actions external to the power system, such as illegal electrical service connections, non-payment of utility bills by end customers, or errors in accounting and metering maintenance [9]. Illegal connections may cause overloading and malfunctioning of electrical equipment, and the resulting unidentified power consumption affects utility companies' profitability. The large percentage of losses found in distribution systems and the high levels of investment, operation, and maintenance required have prompted network operators to carry out a significant amount of research to determine the most practical and economical solutions for reducing the operating costs associated with energy losses [10,11], as good management of this issue greatly affects the system's efficiency and the profitability of the power supply service [8]. Network operators have found that locating distributed generators and shunt capacitor banks is the most popular mechanism to improve system performance, as these reduce losses and improve the voltage profile [12,13]. Capacitor banks are only useful when their optimal location and suitable size are chosen; when they are, the power factor and voltage profile are improved.
Incorrect siting and/or sizing can generate problems such as an increase in power losses or can push voltage magnitudes beyond acceptable limits [4,14]. Therefore, it is important to employ efficient mathematical models to address the problem of capacitor bank location, guarantee their efficient use in the distribution system, improve the technical features of the network for end customers [4,13], and increase the economic benefits of energy trading for the network operator.

Motivation

Reactive power management presents a challenge for network operators and end customers. The former seek to decrease the technical losses present in the wires of the primary and/or secondary circuits and in transformer cores and windings [7,15]. Additionally, they intend to eradicate the non-technical losses caused mainly by illegal connections to the system [6,16]. End customers, in turn, seek to avoid the economic penalties associated with poor power factor caused by the use of electric machines at the industrial level. To regulate reactive power in Colombia, the Energy and Gas Regulation Commission (CREG in Spanish) established, with resolution CREG 018 of 2005, a minimum power factor of 0.9 [17]. This resolution also satisfies the requirements of the agents (entities entrusted with the generation, transmission, and distribution of power to end customers), delineates the role played by capacitor banks in reactive compensation, and addresses the control of power losses in distribution. It is important to mention that the voltage control performed by connecting and disconnecting shunt capacitor banks corresponds to discrete control. In 2018, resolution CREG 015 established that reactive power consumption cannot be greater than 50% of active power consumption. A set of rates was created for this purpose, which the network operator must report to the liquidator or accounts manager within the first 10 days of each year [18]. In order to guarantee service quality for end customers and increase the economic benefits for network operators, this work proposes a discrete genetic algorithm for the optimal siting and sizing of fixed-step capacitor banks, solving the mixed-integer nonlinear programming (MINLP) model that represents the problem [11]. Additionally, this mathematical model is implemented in the General Algebraic Modeling System (GAMS) software to compare its results with those of the proposed method [19]. It is important to mention that the objective function considered is related to the reduction of power losses and the corresponding costs during a year of operation [13].

Review of the State of the Art

In the specialized literature of the last 20 years, several works can be found that use the installation of capacitor banks and different strategies to improve the operation of distribution networks. Some of these works are discussed as follows: In [20], the authors developed a methodology that considers node sensitivity with respect to active power losses. The proposed methodology was tested in two systems, and it was concluded that installing capacitor banks at the most sensitive nodes provides economic benefits and improves voltage profiles.
The authors of [21] proposed locating capacitor banks on the transformer's low-voltage side using a mixed-integer nonlinear method and an operating control method for the capacitors, with the objective of maximizing the net present value (NPV) of the project and obtaining cost benefits. The project was implemented in a network located in Macao (China); it was found to be computationally efficient, and a notable NPV can be obtained if the capacitor banks are optimally located. The authors of [22] developed a fuzzy method to find candidate nodes for capacitor bank locations; in that work, a set of constraints was included for the voltage limits and the number and siting of capacitors. In addition, sensitivity factors with respect to power losses were used, and the model was implemented in two radial systems of 12 and 34 nodes, respectively. As a result, a substantial reduction in losses and an improvement in the voltage profile were obtained, confirming that the methodology is suitable for solving this optimization problem. In [23], the authors developed a genetic algorithm with a codification in terms of possible capacitor sizes (discrete sizes with the desired resolution). In addition, a pairwise comparison was used in the tournament selection operator, and the approach was tested in 18-, 69-, and 141-node test systems. Likewise, different types of loads were tested (commercial, residential, and industrial), obtaining adequate results regarding the minimization of the objective function under these load conditions. In [24], a biogeography-based optimization algorithm that minimizes the cost function and meets the proposed constraints was discussed. It was applied to radial systems of 10, 15, 34, and 69 nodes, reducing the active power losses after compensation and stabilizing the voltage level within the range established by the regulatory entity. The authors of [25] performed their study on a 14-node network with a genetic algorithm for cost optimization and the sizing of capacitor banks. They determined that the method's accuracy depends on the population size and that the voltage profile improves proportionally with the reduction of losses. A binary codification was employed, which increases the required processing times and reduces the possibility of infeasibilities arising during the optimization process. In [26], two novel algorithms were proposed. The first is a hybrid of the particle swarm optimization algorithm and the quasi-Newton algorithm, and the second combines the particle swarm and gravitational search algorithms. The authors employed loss sensitivity factors, and the methods were validated in 33-, 69-, and 111-node test systems. The approach was considered robust and efficient, as the voltage profiles are enhanced, losses are minimized, and net saving is maximized. In [27], an optimization method based on the behavior of ant lions was presented, which minimizes losses and total annual costs in the objective functions. The method was validated in 33- and 64-node test systems, and significant annual savings and a reduction in power losses were found when the capacitor bank locations were determined. In [28], capacitor banks were sized and sited using a discrete genetic algorithm to reduce losses and improve voltage profiles in networks belonging to the eastern and western regions of Saudi Arabia. Two objective functions were formulated: the cost equilibrium of the capacitors and the total cost of the system after the capacitors' location.
As a result, the genetic algorithm provided feasible solutions for the system, as the voltage profile, losses, and power factor were improved. The authors of [29] employed an artificial electric field algorithm and a sensitivity factor. Net saving was maximized by reducing the losses, and the algorithm was validated in 69- and 118-node test systems with different capacitor bank installation scenarios. They concluded that their algorithm is able to maximize the net saving with low-capacity capacitors, reducing losses and improving voltage profiles. In [30], a heuristic method was developed based on power flows and demand curves over a one-day period. From the capacitors' location and the nature of the bank, it is possible to determine whether they should be fixed or variable. Likewise, the solution obtained is beneficial, with the areas upstream and downstream of every node compensated; in addition, the power factor gets closer to unity, and the voltage profile is improved when compensation is performed. On the other hand, the authors of [31] performed a study under three scenarios, of which the second is of interest to this work because it involves the capacitor banks' location. Using mixed-integer nonlinear programming, a methodology was developed to find the optimal location of the capacitors, with the following result: injecting active or reactive power into the system improves the voltage profile and reduces losses. In the case study of the 33-node test system, node 18 is determined to be the worst in terms of voltage regulation, given its distance from the slack node. When active and reactive power is injected, the power flows are observed to be better distributed compared with the benchmark case. Finally, the authors of [11] presented a master-slave optimization algorithm for the selection and location of fixed-step capacitor banks in distribution systems. In the master stage, a discrete version of the vortex search optimization algorithm was employed to determine the siting and sizing of the capacitors via a single codification vector. In the slave stage, a power flow method known as successive approximations was employed to assess the operating conditions of the network with regard to technical losses and voltage profiles. The numerical results demonstrated the methodology's efficiency, showing the best results recently presented in the specialized literature on the reduction of operating costs during one year of operation at maximum load, as compared with different heuristic and metaheuristic methods used in the scientific literature. The main aspects of the state of the art, that is, the main objective functions and solution strategies, are summarized in Table 1.

Table 1. Summary of the main approaches reported in the literature regarding optimal placement and sizing of fixed-step capacitor banks.
Optimization Method | Objective Function | Reference | Year
Linear sensitivities to reduce the set of nodes | NPV (i.e., energy losses and investment costs) | [20] | 2005
Mixed-integer linear programming formulation | NPV | [21] | 2013
Hybrid approach between fuzzy logic and particle swarm optimization | NPV | [22] | 2014
Penalty-free genetic algorithm | NPV | [23] | 2016
Biogeography-based optimization | NPV | [24] | 2015
Binary genetic algorithm | Improve the grid voltage level | [25] | 2017
Hybrid optimization algorithms based on heuristics | Active power losses, voltage profile improvements, and NPV | [26] | 2017
Ant lion optimizer | Energy losses and investment costs | [27] | 2018
Genetic algorithm | Voltage profile improvement and power losses reduction | [28] | 2018
Artificial electric field algorithm | NPV and voltage profile improvement | [29] | 2019
Heuristic method based on grid sensitivities | Power factor and voltage profile improvement | [30] | 2019
Mixed-integer nonlinear programming model solved in the GAMS optimization package | Power losses minimization | [31] | 2018
Discrete vortex search algorithm | NPV | [11] | 2020

With regard to the literature review summarized in Table 1, the main advantages of using combinatorial optimization methods are as follows [2]: (i) the optimization procedure for locating and selecting the set of capacitor banks for reactive power compensation in distribution networks can be implemented in multiple programming languages, as these approaches can be structured using sequential programming; and (ii) it is possible to work in the infeasible region of the solution space by using a fitness function, which allows promising solution regions to be found. However, heuristic and metaheuristic methods have some disadvantages, such as the following: (i) because of their random nature, there is no guarantee that each run of the algorithm will find the same numerical solution and behavior; (ii) a high number of parameters must be tuned to ensure an adequate trade-off between the quality of the solution and the processing time required to achieve it; and (iii) there is a high dependence on the programmer, as demonstrated when the same algorithm is implemented by different authors who then report different solutions for the same optimization problem. To deal with the disadvantages of metaheuristics and solve the problem under investigation, the following aspects have been taken into account in the implementation of the Chu and Beasley genetic algorithm (CBGA): (i) a statistical study will be conducted to determine the rate of convergence of the algorithm to the same solution, by consecutively running the algorithm 100 times; (ii) different population sizes will be evaluated to determine the best trade-off between the optimal solution and the required processing times; and (iii) the methodology will be applied to distribution networks with radial and meshed configurations, and comparisons will be made with powerful solvers available in the GAMS optimization package for solving MINLP problems. The main contributions of the proposed approach are summarized in the next section.
Contributions

Based on the review of the state of the art presented in the previous section, the contributions of this work are summarized as follows: The fixed-step capacitor bank siting and sizing problem is represented via an integer codification, and a classical optimization method based on the CBGA is applied, which reaches the optimal solution of the problem in minimal computational time when compared with the results reported in [11]. The solution developed from the interaction between the CBGA and the successive approximations method can be implemented in any radial or meshed distribution system. The selection and location of capacitor banks are not restricted with regard to size (reactive power supplied) or the number to be installed. It is important to mention that, while genetic algorithms have been widely reported in the specialized literature for the optimal selection and location of capacitor banks, as presented in the review of the state of the art, the solutions reported tend to get stuck in local optima due to the two-stage binary codification commonly used for node selection, which increases the algorithm's complexity and the possibility of obtaining infeasible configurations. In this work, the problem of binary codification is avoided by using the discrete codification presented in [11], thus accelerating the computational times and reaching convergence to the best optimal solution reported in the specialized literature.

Paper Structure

The rest of this paper is organized as follows: Section 2 presents the mathematical formulation of the optimal location of fixed-step capacitor banks in distribution systems, Section 3 presents the features of the CBGA, Section 4 describes the test systems used in this study, Section 5 shows the results obtained for each of the proposed cases and compares them with the solutions found in the specialized literature, and Section 6 discusses the conclusions drawn from this work as well as possible future work.

Mathematical Formulation

In this section, the general mathematical formulation for the location and selection of capacitor banks is presented [32]. The function to minimize is given in Equation (1), where K_p is the average cost of energy during the period of analysis, Z_1 is the objective function term corresponding to the active power losses in all the branches of the network (Equation (2)), K_c is the cost of each selected capacitor bank, Q_bc is the value in kvar of each capacitor bank, and X_ibc is a binary variable that represents the node and the type of bank to install. In Equation (2), V_i and V_j are the voltages at nodes i and j, with angles θ_i and θ_j, respectively, Y_ij is the magnitude of the admittance that connects nodes i and j, φ_ij is its angle, and Ω_N is the set that contains all the network nodes. The active and reactive power balance equations defined in Equations (3) and (4) are applied to each node of the system, i.e., ∀i ∈ Ω_N, where P_gi and P_di represent the generated and demanded active power at each node, respectively, Q_gi is the generated reactive power, Q_di is the demanded reactive power, and k represents the maximum number of capacitor banks available for installation in the power network.
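Because the equation bodies of Equations (1)-(4) are not reproduced above, the following LaTeX block gives a hedged reconstruction inferred from the variable definitions in the text and from standard formulations of this problem (e.g., as in [11,32]); the authors' exact notation and indexing may differ.

```latex
% Hedged reconstruction of Equations (1)-(4), inferred from the variable
% definitions in the text; the original notation may differ.
\begin{align}
\min \; Z &= K_p Z_1 + K_c \sum_{i \in \Omega_N} \sum_{bc} Q_{bc} X_{i,bc} \tag{1}\\
Z_1 &= \sum_{i \in \Omega_N} \sum_{j \in \Omega_N} V_i V_j Y_{ij}
      \cos(\theta_i - \theta_j - \varphi_{ij}) \tag{2}\\
P_{gi} - P_{di} &= V_i \sum_{j \in \Omega_N} Y_{ij} V_j
      \cos(\theta_i - \theta_j - \varphi_{ij}), \quad \forall i \in \Omega_N \tag{3}\\
Q_{gi} - Q_{di} + \sum_{bc=1}^{k} Q_{bc} X_{i,bc} &= V_i \sum_{j \in \Omega_N} Y_{ij} V_j
      \sin(\theta_i - \theta_j - \varphi_{ij}), \quad \forall i \in \Omega_N \tag{4}
\end{align}
```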
Equation (5) imposes the maximum and minimum voltage limits, Equation (6) determines the number of banks to be located at each node, Equation (7) defines the values that the binary variable can take, and Equation (8) defines the maximum number of capacitor banks to be installed in the system (N_BC^ins). It is important to mention that the mathematical model in Equations (1)-(8) is nonlinear and non-convex, with a mixed-integer nonlinear programming structure, which greatly complicates its solution, as there are no methodologies in the specialized literature that guarantee finding its global optimum. Therefore, in this work, a hybrid method combining the Chu and Beasley genetic algorithm with the power flow method known as successive approximations is proposed. The advantage of this proposal is its ability to find the optimal solution in the test systems under investigation with low computational cost and ease of implementation.

Proposed Methodology

To solve the problem of reducing losses and operating costs in distribution systems through the installation of fixed-step capacitor banks, the implementation of a CBGA is proposed. This algorithm belongs to the family of metaheuristic optimization methods inspired by natural selection, in which the best-adapted individuals are more likely to survive and transmit their genes to their offspring. The implementation of this optimization strategy involves three stages: selection, crossing, and mutation, as presented in [33,34]. Mathematically, the CBGA is considered a combinatorial optimization technique with a high probability of finding global solutions for large, complex problems with multiple local optima [35]. A method for solving the power flow problem is also required; in this case, it is the successive approximations method proposed in [36], which the simulation results confirm to be faster in terms of computational time and the number of iterations required. This leads to a typical master-slave optimization scheme, as indicated in [37], where the CBGA is the master algorithm and the successive approximations method is the slave. Below, each of the stages is presented, and the CBGA implementation is briefly explained with regard to solving the problem of the optimal selection and location of fixed-step capacitors in distribution systems for reducing operating costs.

Codification

To apply the CBGA to the problem under investigation, an appropriate codification via a vector is necessary. This codification is denominated the individual, which will be part of the initial population (IP). The size of the codification vector is 1 × 2N_BC^ins, where N_BC^ins corresponds to the number of capacitor banks available to be installed in the test system. The first N_BC^ins positions of the vector each contain a random number (represented by "Node") between 2 and the number of nodes of the system, excluding the reference node, which is usually located at node 1. The remaining positions of the individual each contain a random number (represented by "Ncap") between 1 and the number of possible banks that can be installed. The proposed codification is presented in Figure 1, where the individual I_n is represented by Equation (9); a minimal sketch of this codification is given below.
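The following minimal Python sketch illustrates the integer codification just described; names such as random_individual and initial_population are illustrative assumptions, and the number of capacitor options used in the example is a placeholder rather than the actual content of Table 3.

```python
import random

def random_individual(n_nodes, n_cap_types, n_banks):
    """Integer-coded candidate: [node_1..node_Nbc | type_1..type_Nbc].

    Nodes are sampled without repetition from 2..n_nodes (node 1 is the slack),
    capacitor types from 1..n_cap_types, mirroring the codification above.
    """
    nodes = random.sample(range(2, n_nodes + 1), n_banks)   # distinct nodes
    caps = [random.randint(1, n_cap_types) for _ in range(n_banks)]
    return nodes + caps

def initial_population(pop_size, n_nodes, n_cap_types, n_banks):
    """Build a population of unique individuals (diversity criterion)."""
    population, seen = [], set()
    while len(population) < pop_size:
        ind = random_individual(n_nodes, n_cap_types, n_banks)
        key = tuple(ind)
        if key not in seen:          # enforce uniqueness of individuals
            seen.add(key)
            population.append(ind)
    return population

# Example (hypothetical parameters): 33-node feeder, 14 capacitor options, 3 banks.
pop = initial_population(pop_size=20, n_nodes=33, n_cap_types=14, n_banks=3)
```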
In addition, Equation (10) guarantees that each individual of the population is unique, which is known as the diversity criterion [35]. The individual is written as I_n = [I_n1, I_n2, ..., I_n,2N_BC^ins] (9). It is important to highlight that, in order to guarantee the feasibility of each individual, it is necessary to make sure that no nodes are repeated in the first N_BC^ins positions, as this would imply locating more than one type of fixed-step capacitor at the same node, which is not recommended in the specialized literature.

IP

The IP is created as a matrix of size N_i × 2N_BC^ins, where N_i is the number of individuals that will be part of the IP, as presented in Equation (11). It is worth mentioning that the individuals have to be different from one another to promote a better solution and avoid premature convergence, as in [35].

Fitness Function Assessment

For the correct performance of the CBGA, it is necessary to rewrite the objective function presented in Equation (1) by adding two terms, as shown in Equation (12). This fitness function is then assessed for each individual of the IP by employing the successive approximations method for the solution of the power flow problem, which will be addressed later. In Equation (12), α is the penalization factor; as its name suggests, it penalizes those individuals of the IP that do not comply with the constraint defined in Equation (5). Notice that V is the voltage vector obtained when the power flow is solved via the successive approximations method, considering the installation of the capacitor banks at the respective nodes according to the codification vector that represents each individual.

Selection

As mentioned earlier, the first stage of the CBGA is selection, as described in [35], which consists of arbitrarily choosing a subset of individuals of the IP that will be submitted to a tournament to provide the two individuals with the best fitness function (in this case, the individuals whose solutions minimize the cost of the power losses to the greatest extent) [38]. It is important to mention that the number of individuals involved in the tournament is arbitrary and depends largely on the programmer's expertise; therefore, in this study, the recommendation of Ref. [10] that the selection process be performed with a tournament composed of four individuals is adopted.

Crossing

In this stage, two individuals are created with part of the information belonging to the winners of the tournament, as demonstrated in [39]. If the first winner of the tournament is called Pi_i and the second Pi_j, and a random number is generated between 1 and 2N_BC^ins − 1, the new individuals are obtained by crossing the information of Pi_i and Pi_j, as shown in Equation (13).

Mutation

Once the crossing is done according to Equation (13), an arbitrary position is chosen within each of the new individuals Hi_1 and Hi_2 (between 1 and N_BC^ins), and the value in that position is changed using a random number between 2 and the number of nodes in the test system, i.e., n, which guarantees compliance with Equation (10). After this, the fitness function for Hi_1 and Hi_2 is evaluated to determine the better of the two and choose the winner, as shown in Equation (14), where R is the winning offspring. A minimal sketch of these operators follows below.
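The following Python sketch illustrates the selection, crossing, and mutation operators and the penalized fitness of Equation (12) described above. The function names, voltage limits, and penalization factor are illustrative assumptions, and the fitness evaluation is a placeholder that a full implementation would compute by running the successive approximations power flow.

```python
import random

def tournament(population, fitness, k=4):
    """Pick k random individuals and return the two with the best (lowest) fitness."""
    contenders = random.sample(population, k)
    contenders.sort(key=fitness)
    return contenders[0], contenders[1]

def cross(parent_i, parent_j):
    """Single-point crossing at a random point in 1..len-1, as in Equation (13)."""
    point = random.randint(1, len(parent_i) - 1)
    return (parent_i[:point] + parent_j[point:],
            parent_j[:point] + parent_i[point:])

def mutate(child, n_nodes, n_banks):
    """Replace one node gene (positions 0..n_banks-1) with a random node in 2..n_nodes,
    re-drawing if the node is already used, so nodes remain distinct."""
    child = child[:]
    pos = random.randrange(n_banks)
    new_node = random.randint(2, n_nodes)
    while new_node in child[:n_banks]:
        new_node = random.randint(2, n_nodes)
    child[pos] = new_node
    return child

def penalized_cost(base_cost, voltages, v_min=0.90, v_max=1.10, alpha=1e5):
    """Fitness in the spirit of Equation (12): add alpha times the total
    voltage-limit violation (limits and alpha are assumed values)."""
    violation = sum(max(0.0, v_min - v) + max(0.0, v - v_max) for v in voltages)
    return base_cost + alpha * violation
```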
Replacement of an Individual in the Population

In this stage, the IP matrix is sorted with respect to the fitness function value of each individual. The matrix is sorted in descending order, so the first and last individuals are those with the greatest and lowest fitness function values, respectively. Once this ordering is carried out, the fitness of the last individual of the matrix is compared with that of the winning individual from Equation (14). If the fitness function of the winning individual is lower than that of the last individual of the IP, R takes the place of the loser, as long as R is different from all other individuals of the IP [3]. If these conditions are not met, the process is repeated from the selection stage.

Stopping Criteria

The stopping criteria of the CBGA allow the iterative process to be terminated once certain conditions are met, as shown in [40]. The most important conditions mentioned in [40] are as follows: a high percentage of the population converges to a single value after a fixed number of evaluations, the objective function does not improve over a given number of iterations, or a limit on central processing unit (CPU) time is reached. Therefore, choosing the number of times the CBGA is evaluated is contingent on the programmer's expertise and criteria. In this case, different numbers of evaluations were tested, and it was observed that beyond 300 iterations the results do not improve while the CPU time increases significantly, whereas with fewer than 40 iterations the solution gets stuck in local optima and the CPU time decreases. Accordingly, the stopping criteria were established as reported in Table 2.

Successive Approximations Method

Using Equations (3) and (4), and considering that the compensated reactive power has been added to the demand with the corresponding sign, the nodal formulation in Equation (15) is obtained, where Y_bus is the admittance matrix of the system. Within the Y_bus matrix, it is possible to distinguish the generation and demand terms as in Equation (16), where Y_gg is the admittance submatrix between the generation nodes, Y_dd is the admittance submatrix between the demand nodes, and Y_gd = Y_dg^T is the admittance submatrix that relates the generation and demand nodes. Applying this separation to Equation (15) yields the partitioned form in Equation (18), where V_g contains the voltages at the slack nodes of the distribution system, i.e., the known voltages, and V_d contains the voltages at all the demand nodes, which are the variables of interest. Notice that, from Equation (18), it is possible to solve for V_d as in Equation (19). From Equation (19), the iterative form of the successive approximations method reported in [41] can be obtained, as presented in Equation (20), where t is the iteration counter. The successive approximations method guarantees convergence, as shown in [42], because it is a particular case of the Banach fixed-point theorem. The solution is considered to have been reached when the error between the voltages of two consecutive iterations is less than a given tolerance, i.e., ||V_d^(t+1) − V_d^t|| ≤ ε, where ε is the convergence error, assigned as 1 × 10^−10 for distribution systems, as recommended in [36]. A compact sketch of this iteration is given below.
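The following Python sketch shows the successive approximations iteration of Equation (20) in compact form, assuming a complex nodal admittance matrix partitioned into slack and demand blocks; variable names and the flat-start initialization are illustrative choices, not the authors' implementation.

```python
import numpy as np

def successive_approximations(Ybus, slack_idx, demand_idx, V_slack, S_demand,
                              tol=1e-10, max_iter=1000):
    """Sketch of the successive-approximations power flow (Equation (20)).

    Ybus:       complex nodal admittance matrix
    slack_idx:  indices of slack (generation) nodes with known voltages
    demand_idx: indices of demand nodes (unknown voltages)
    V_slack:    complex voltages at the slack nodes (p.u.)
    S_demand:   complex power demanded at the demand nodes (p.u.), with any
                capacitor compensation already subtracted from the reactive part
    """
    Ydd = Ybus[np.ix_(demand_idx, demand_idx)]
    Ydg = Ybus[np.ix_(demand_idx, slack_idx)]
    Ydd_inv = np.linalg.inv(Ydd)

    Vd = np.ones(len(demand_idx), dtype=complex)   # flat start
    for _ in range(max_iter):
        # V_d^{t+1} = -Y_dd^{-1} [ diag(V_d^*)^{-1} S_d^* + Y_dg V_g ]
        Vd_new = -Ydd_inv @ (np.conj(S_demand) / np.conj(Vd) + Ydg @ V_slack)
        if np.max(np.abs(Vd_new - Vd)) <= tol:     # convergence check
            return Vd_new
        Vd = Vd_new
    return Vd
```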
Figure 2 presents the flow diagram of the proposed optimization methodology for solving the problem of the optimal location of fixed-step capacitor banks in distribution systems and reducing technical losses and operating costs. It is a master-slave optimization strategy composed of the CBGA with integer codification and the power flow algorithm known as successive approximations.

Implementation of the Proposed Methodology

The implementation features of the CBGA are reported in Table 2. These parameters were adjusted via a heuristic process based on multiple simulations in which the trade-off between simulation time and the quality of the solutions obtained was verified. At the end of the Simulation Results section, a simulation case using different population sizes is presented, confirming that 20 individuals are enough for the proposed CBGA to explore and exploit the solution space when selecting and locating fixed-step capacitor banks in distribution networks with radial and meshed topologies.

Test Systems

In this section, the characteristics of the capacitor banks are presented, as well as information regarding the radial test systems employed in the simulation cases. These test feeders are composed of 10, 33, and 69 nodes. These systems operate at the medium-voltage level and have been used in several publications related to technical loss reduction and power flow analysis [36]. Table 3 shows the capacitor banks that can be installed in the radial distribution systems, including their ratings and costs [13]. Note that for all the distribution test feeders presented below, all the fixed-step capacitor bank options reported in Table 3 are considered; the difference between two consecutive capacitor bank sizes is 150 kvar, which corresponds to the minimum step size considered for reactive power compensation in this study.

10-Node Test System

This system is composed of 10 nodes and 9 branches, whose information has been taken from [43]. Its configuration is shown in Figure 3. The bases for the system are 23 kV and 100 kVA, and the initial network losses are 783.77 kW. In this case, four fixed-step capacitor banks will be installed for the purpose of comparison. Information regarding the demands and impedances of the 10-node system is reported in Table 4.

33-Node Test System

This radial distribution network is composed of 33 nodes and 32 lines [44], as shown in Figure 4. The base values for this system are 12.66 kV and 10 MVA, and the benchmark losses are 210.9867 kW [36]. In this case, three fixed-step capacitor banks are installed for comparison with the main reports in the specialized literature. Table 5 presents the branch information and demand nodes.

69-Node Test System

This radial distribution network is composed of 69 nodes and 68 lines [45]. Its configuration is shown in Figure 5. The base values for this system are 12.66 kV and 10 MVA, and the benchmark losses are 224.9352 kW. In this system, three fixed-step capacitor banks are installed for comparison with the findings reported in the specialized literature. System data are reported in Table 6.

69-Node Meshed Test System

The 69-node test feeder with a mesh configuration is composed of 69 nodes and 73 lines, taken from [46].
Figure 6 presents the electrical configuration of this test feeder, and Table 7 lists the additional lines that are added to the information reported in Table 6 for the radial configuration, with the same voltage and power bases. The information for the new lines was taken from [47]. The initial power losses of this test feeder under peak load conditions (see the loads in Table 6) are 82.5290 kW. For this test feeder, the installation of three fixed-step capacitor banks using the proposed CBGA is considered in order to compare its results with the GAMS optimization package, as the problem of optimally locating and sizing capacitor banks in this meshed configuration has not been reported in the literature.

Simulation Results

To validate the proposed optimization methodology, the optimization model in Equations (1)-(7) is implemented in the commercial optimization software GAMS, which can solve mixed-integer nonlinear programming models as described in [48,49] by combining interior point and branch-and-bound methods. For comparison purposes, note that the results reported in [11] with the discrete vortex search method do not take the capacitors' cost into account, so the costs reported there are lower than the actual ones; in this work, these costs have been included in the objective function.

Computational Implementation

To solve the mixed-integer nonlinear programming model that represents the problem of the location and selection of capacitor banks, the GAMS software is used as a comparison tool, and the MATLAB 2020b programming environment is used on a desktop computer with an AMD Ryzen 7 3700U processor (AMD, Santa Clara, CA, USA) at 2.3 GHz and 16 GB of RAM, running 64-bit Windows 10 Home Single Language.

Results of the 10-Node Test System

According to Table 8, the active power loss in the benchmark case is 783.79 kW. When the proposed methodology is applied to the system, an improvement over the method developed by the authors of [43] is evident: the losses are 694.93 kW for the PGSA and 783.31 kW for GAMS, while for the proposed method (i.e., the CBGA) the losses are 691.99 kW. This represents decreases with respect to the benchmark case of 11.34%, 0.06%, and 11.71%, respectively. In relation to the operating costs of the capacitor banks and the cost of energy losses, Table 8 shows that the CBGA presents a better solution than the other methods: the benchmark case presents a cost of US$131,674, the PGSA yields US$118,340, GAMS yields US$117,771, and the proposed methodology yields US$117,655. These amounts translate into reductions of 10.12%, 10.55%, and 10.65%, respectively, compared with the benchmark case. To demonstrate that the location of the fixed-step capacitor banks effectively reduces the annual operating costs caused by energy losses, Figure 7 presents the disaggregated costs for the 10-node test feeder. The bar plot in Figure 7 shows that the PGSA, GAMS, and the CBGA all reduce the annual operating costs of the network when capacitor banks are installed, as the sum of the final costs of the energy losses and the costs of the capacitor banks is lower than in the benchmark case of the grid.
In addition, in the case of the proposed CBGA, the investment cost in capacitors is about US$13,995, which yields a reduction in the cost of energy losses of about US$28,014; this clearly compensates the investment cost of these fixed-step capacitor banks, with an additional gain of about US$14,019 in the annual operating costs for the 10-node test feeder. With regard to the CBGA, it is worth noting the distribution of solutions across runs: as observed in Figure 8, 5% of the solutions obtained fall between US$121,000 and US$121,500. Figure 9 shows the voltage profile for the 10-node system, considering the initial operating state and the effect of including the reactive compensation. It can be observed that at all nodes (other than the slack node) the voltage magnitude is significantly improved, which indicates improved voltage regulation and system efficiency. Note that for the remaining test feeders considered in this research, i.e., the 33- and 69-node systems, the flower pollination algorithm (FPA) and the discrete vortex search algorithm (DVSA) are taken as comparative methods, as no reports using the PGSA approach proposed in [43] were found for these test feeders in the literature. For this reason, the PGSA approach was only considered in the 10-node test feeder.

Results of the 33-Node Test System

Table 9 presents the results obtained with the comparative methodologies and the proposed approach. In the case of the commercial software GAMS, a reduction of 34.214% was obtained compared with the losses mentioned in Section 4.3. Moreover, with respect to the costs reported in [13], GAMS yields an increase of US$19, and in comparison with [11], the increase is US$54.16, which means that the solution found is of good quality but not the best. The CBGA provides a reduction of 34.397% with respect to the benchmark case, and compared with [13] and [11], the costs are reduced by US$36 and US$0.85, respectively, which confirms the robustness and effectiveness of the approach proposed in this study. Figure 10 presents the histogram for the CBGA in the 33-node test system; the best result converges to the optimal solution in 5% of the runs, as observed in the US$23,600 to US$23,800 interval. Figure 11 presents the disaggregated behavior of the costs in the 33-node test feeder for the proposed and comparative approaches, including the benchmark case. The bar plot in Figure 11 shows that the FPA, GAMS, the DVSA, and the CBGA all reduce the annual operating costs of the network when capacitor banks are installed, as the sum of the final costs of the energy losses and the costs of the capacitor banks is lower than in the benchmark case of the grid. In addition, in the case of the proposed CBGA, the investment cost in capacitors is about US$467.1, which produces a reduction in the cost of energy losses of about US$12,191. On the other hand, Figure 12 presents the voltage profiles for the 33-node system. The solution of the CBGA shows that, with the reactive power compensation provided by the fixed-step capacitor banks at nodes 18 and 33, voltage regulation is improved, with all voltages greater than or equal to 0.930 pu, which is significantly better than the benchmark case of the system.

Results of the 69-Node Test System

According to Table 10, the best results for the operating costs when installing capacitor banks are provided by the CBGA.
Results of the 69-Node Test System According to Table 10, the best results for the operative costs when installing capacitor banks are provided by the CBGA. This is because US$158.78 is saved with respect to the results reported by the FPA; moreover, compared with the solution provided by the DVSA method, a decrease of US$27.65 is observed. Therefore, the savings in annual operative costs reach 34.35%, i.e., an improvement of 0.42% and 0.07% in relation to the FPA and DVSA methods, respectively. If a comparison is carried out with regard to the losses, the proposed methodology shows an improvement of 0.49 kW and 0.027 kW relative to the FPA and the DVSA, respectively; that is, the reduction in losses with respect to the benchmark case corresponds to 35.17%, 35.37%, and 35.39% for the FPA, DVSA, and CBGA, respectively. To demonstrate the effectiveness of the installation of fixed-step capacitor banks in the 69-node test feeder, Figure 13 presents the discriminated behavior of the costs for the proposed and the comparative approaches, including the benchmark case. The bar plot in Figure 13 shows that the FPA, GAMS, the DVSA, and the CBGA reduce the annual operative costs of the network when capacitor banks are installed, as the sum of the final costs of the energy losses and the costs of the capacitor banks is lower than in the benchmark case of the grid. In addition, in the case of the proposed CBGA, the investment cost in capacitors is about US$392.85, which leads to a reduction in the cost of the energy losses of about US$12,975.08; this clearly compensates the investment in these fixed-step capacitor banks and leaves an additional gain of about US$12,582.23 in the annual operative costs for the 69-node test feeder. On the other hand, Figure 14 presents the number of times the CBGA provides a result within the established range. It can be noted that the best results are found between US$24,800 and US$25,000, which corresponds to 12% of the total evaluations. Figure 15 depicts the voltage profiles of the 69-node radial distribution system before and after the capacitors' location. It can be clearly observed from this figure that when the losses and operative costs are reduced, the voltage profiles are improved and voltage regulation is enhanced accordingly.
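The same check applies to the 69-node radial feeder; the short Python sketch below (again illustrative, not the authors' MATLAB implementation) reproduces the reported gain from the two figures quoted above.

```python
# 69-node radial feeder: US$392.85 invested versus a US$12,975.08 reduction
# in the annual cost of energy losses (figures quoted in the text); the
# result matches the reported US$12,582.23 gain.

investment_usd = 392.85
loss_cost_reduction_usd = 12_975.08

net_gain_usd = loss_cost_reduction_usd - investment_usd
print(f"69-node net gain: US${net_gain_usd:,.2f}")   # US$12,582.23
```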
Results of the 69-Node Meshed Test System In this subsection, we present the application of the proposed CBGA to locate and select capacitor banks in meshed distribution networks; to do so, the 69-node test feeder with the mesh configuration presented in Figure 6 is considered. In addition, because this system is typically used to validate network reconfiguration strategies [50], here we only compare the proposed approach with the GAMS optimization package, as no studies investigating the location of capacitor banks in this configuration have been previously reported in the specialized literature. Table 11 presents the numerical achievements of the proposed CBGA and the GAMS optimization package. Note that, with regard to the base case, the GAMS optimization software leads to a reduction of about 33.21% in active power losses, while the proposed CBGA leads to a reduction of 33.34%. Regarding the operative costs, the CBGA presents a better solution than the GAMS package, as it allows an additional saving of US$37.2. Figure 16 shows the histogram of the proposed CBGA for the 69-node test system with meshed configuration; we note that the best solution is reached in 2% of the runs and is contained in the US$9600 to US$9700 interval. Figure 17 presents the voltage profiles of the 69-node test feeder with meshed configuration. Note that an important improvement in the voltage profiles is observed at the nodes located in the neighborhood of the capacitor banks. In addition, the minimum voltage in the benchmark case was 0.9653 pu, which is improved to 0.9765 pu when the capacitors are installed, i.e., an improvement of 141.792 V. To observe the effect that the reactive power compensation has on the minimization of energy losses, Figure 18 presents the magnitude of the current in each branch of the 69-node test feeder with meshed configuration. The following should be noted: (i) in lines 1 to 20 as well as lines 52 to 60, the magnitude of the current decreases with respect to the benchmark case, which implies that the active and reactive power losses decrease, as these are a function of the square of the current magnitude; and (ii) the reactive power injected downstream of nodes 12, 22, and 61 supplies part of the loads, which implies that the magnitude of the current upstream of these nodes (i.e., lines 1 to 20 and 52 to 60) decreases due to the reduction in the equivalent load, as can be observed in Figure 18. Analysis of the Processing Times Table 12 shows the average run times of the two strategies. For the GAMS case, these are fairly high due to the large number of variables managed by the model and its non-linear nature. The run times of the techniques from the specialized literature are those reported in the corresponding publications. According to the results of Table 12, the best computational times are achieved by the methodology proposed in this work, which confirms its efficiency and applicability to mid-sized and large-sized systems. To present the effect of the population size on the behavior of the proposed CBGA, Table 13 reports the solutions reached by the algorithm in all the test feeders. This simulation considers population sizes of 20, 50, 100, and 150 for the CBGA. From the results in Table 13, it is possible to note the following: (i) the active power losses and annual operative costs show small variations independent of the population size, and (ii) the processing time required to solve the optimization problem is directly proportional to the number of individuals in the population, with the minimum at 20 individuals and the maximum at 150 individuals for all the test feeders.
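The stated proportionality between run time and population size can be made concrete with a minimal sketch. This is illustrative Python only (the authors' implementation is in MATLAB), and the factors below follow purely from the proportionality claim; the actual timings in Table 13 are not reproduced here.

```python
# Expected run-time scaling under the direct proportionality between run time
# and population size noted for Table 13. Only relative factors are computed;
# no absolute timings from the source are assumed.

population_sizes = (20, 50, 100, 150)  # sizes evaluated for the CBGA
reference = population_sizes[0]        # 20 individuals, the fastest setting

for n in population_sizes:
    print(f"population {n:3d}: expected run-time factor {n / reference:.1f}x")
# population  20: expected run-time factor 1.0x
# population  50: expected run-time factor 2.5x
# population 100: expected run-time factor 5.0x
# population 150: expected run-time factor 7.5x
```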
Conclusions and Future Works The methodology proposed to minimize the losses and operative costs of distribution systems employing fixed-step capacitor banks, based on the interaction between the CBGA and the successive approximations method, allowed power losses and operative costs to be reduced by over 34% and 35%, respectively. This methodology significantly decreased the computational times, with an average of 0.205 s for the 33-node test system and 1.07 s for the 69-node test system, which confirmed its superiority in relation to the methodologies available in the specialized literature, including the GAMS software. An evaluation of different population sizes for the proposed CBGA showed that the best performance is reached with a population of about 20 individuals: better solutions regarding power losses and annual operative costs were found than with GAMS and the literature reports, with the main advantage that minimum processing times are required to solve the optimization problem. Simulation results also showed that, with the installation of the capacitor banks, the voltage at the nodes farthest from the source node tends toward better regulation. Besides this, applying the CBGA to the distribution systems composed of 10, 33, and 69 nodes with radial and meshed structures confirms that this method is outstanding when solving mathematical models of an MINLP nature, as the solutions found in the specialized literature are substantially improved with minimum computational times. As future work, it is proposed to extend the reactive power compensation problem to 24-hour operational environments with high penetration of renewable generation, and to recast the MINLP formulation that represents this problem as a convex equivalent in order to guarantee the optimal solution without requiring multiple evaluations, i.e., statistical studies. Caldas under grant 1643-12-2020, associated with the project "Desarrollo de una metodología de optimización para la gestión óptima de recursos energéticos distribuidos en redes de distribución de energía eléctrica", and in part by the Dirección de Investigaciones de la Universidad Tecnológica de Bolívar under grant PS2020002, associated with the project "Ubicación óptima de bancos de capacitores de paso fijo en redes eléctricas de distribución para reducción de costos y pérdidas de energía: Aplicación de métodos exactos y metaheurísticos". Conflicts of Interest: The authors declare no conflict of interest.
Regulation of Intestinal Inflammation by Soybean and Soy-Derived Compounds Environmental factors, particularly diet, are considered central to the pathogenesis of the inflammatory bowel diseases (IBD), Crohn's disease and ulcerative colitis. In particular, the Westernization of diet, characterized by high intake of animal protein, saturated fat, and refined carbohydrates, has been shown to contribute to the development and progression of IBD. During the last decade, soybean, as well as soy-derived bioactive compounds (e.g., isoflavones, phytosterols, Bowman-Birk inhibitors), has been increasingly investigated because of its anti-inflammatory properties in animal models of IBD. Herein we provide a scoping review of the most studied disease mechanisms associated with disease induction and progression in IBD rodent models after feeding of either the whole food or a bioactive present in soybean. Introduction The burden of the inflammatory bowel diseases (IBD), Crohn's disease (CD) and ulcerative colitis (UC), is on the rise globally, and these diseases represent one of the most prevalent chronic inflammatory conditions, particularly in the United States. Nearly half of Americans suffer from one or more chronic diseases, accounting for nearly 75% of aggregate healthcare spending [1]. Numerous lines of evidence indicate that alterations in the gut microbiota, shifting toward a pro-inflammatory state, play a fundamental role in the development and progression of intestinal inflammation [2]. In this regard, the pathophysiology of IBD is attributed to a dysregulated T-helper cell immune response to the gut microbiota catalyzed by the genetic susceptibility of an individual, which leads to a progressive and chronic loss of epithelial barrier integrity. As a result, intestinal microbiota and dietary antigens can easily translocate across the mucosal barrier and trigger mucosal immunity in the lamina propria, which serves to perpetuate the ongoing inflammatory response and chronic inflammatory state. Given the close relationship between inflammation and the generation of free radical species, oxidative stress has also been proposed as a potential underlying mechanism in IBD pathogenesis [3]. Although the causative factors of IBD remain unclear, disease incidence has been, in part, attributed to environmental factors, of which diet is considered the most important, associated with influencing the gut microbiota composition and, in turn, disease severity and progression. For instance, two major types of bacterial metabolites, short-chain fatty acids (SCFAs) and secondary bile acids, are known for their role in immune modulation, each causing opposing effects on intestinal inflammation at chronically high physiological levels [4]. Therefore, dietary management has been increasingly investigated for its therapeutic potential in IBD, focused on its ability to modulate the functional profile of the gut microbiota and promote a balanced immunological response. Current approaches have focused primarily on chemically defined elemental diets (e.g., exclusive enteral/parenteral nutrition) or the restriction of specific food items (e.g., the specific carbohydrate diet) [5,6]. However, the efficacy of such diets has been largely limited to pediatric populations [6]. More recently, plant-based diets, including the anti-oxidative agents contained therein, which interfere with cellular oxidative stress and cytokine production, have been among the dietary modalities investigated for their therapeutic potential in IBD [7,8].
With respect to individual food items, a growing body of preclinical evidence indicates that soybean, as well as soy-derived bioactives (e.g., isoflavones), has potent anti-inflammatory/antioxidant activity and can mitigate inflammatory changes in the gut induced either chemically or by diet (e.g., high-fat diet; HFD) [9][10][11]. In our own recent work, we demonstrated that soy, as a plant-based substitute for animal-based protein, in the context of an American diet (designed to mirror the NHANES survey), exerted a remarkable anti-inflammatory effect in the treatment and prevention of chronic CD-ileitis in mice genetically predisposed to CD [12]. Soybean or soya bean (Glycine max) is a species of legume native to East Asia that today serves as an economically important crop in Western countries by providing a source of good-quality protein for both animals and humans. Soybeans are an exceptional source of essential nutrients, especially protein and bioactive proteins (e.g., Bowman-Birk inhibitor; BBI), lipids such as monounsaturated fatty acids (oleic acid) and polyunsaturated fatty acids (PUFAs: n-3, α-linolenic acid; n-6, linoleic acid), as well as soluble and insoluble carbohydrates (e.g., raffinose, cellulose, pectin). Several lines of evidence from human and animal studies support the notion that high soybean intake provides significant health benefits, including the prevention of heart disease [13,14] and certain cancers [15][16][17]. However, soy also contains a unique mixture of ~139 phytochemicals (e.g., isoflavones, phytosterols, saponins) that are known to confer health benefits, many of which hold strong therapeutic potential in IBD [10,17]. Owing to the experimental advantages of animal models, particularly in the context of well-controlled dietary settings, we focus this review on the current understanding of how soybean can influence intestinal biology and inflammation in animal models, with validation from cell lines. Herein, we provide a scoping review of the major macronutrient, bioactive, and phytochemical components of soybean and their role in IBD, followed by the most studied disease mechanisms associated with disease induction and progression in IBD rodent models after feeding of either the whole food or a bioactive present in soybean. The Bioactive Composition of Soy and Its Effect in Experimental IBD Numerous studies have demonstrated the various health benefits of soy products in preventing heart disease, obesity, cancer, diabetes, and osteoporosis, and in regulating blood pressure and menopause symptoms [10,18]. Based on this evidence, in 1999, the Food and Drug Administration (FDA) authorized the "Soy Protein Health Claim" that 25 g of soy protein per day may reduce the risk of heart disease (Available at: https://www.fda.gov/media/108701/download, accessed on 5 March 2021). Today, the global soybean market is valued at around $148 billion as of 2018 and is projected to grow at a CAGR of 4% during the period 2019-2025 [19]. During the last decade, soybean and soybean bioactives have been increasingly investigated because of their anti-inflammatory properties in animal models of IBD [10,20]. Below we provide an overview of the bioactive compounds relevant to the macronutrient (lipid, carbohydrate, protein) composition of soybean in the context of their effects in IBD. We summarize the beneficial effects of soy and the bioactive compounds derived from soy in the context of gut inflammation in Table 1.
The effect of dietary fat on inflammation in IBD depends on the type and amount of dietary fat consumed [23]. Current evidence suggests that the ratio between dietary omega-3 (n-3) and n-6 PUFA intake is directly linked to the pathology of inflammation-mediated human diseases such as IBD, obesity, cancer, atherosclerosis, and rheumatoid arthritis [24]. For instance, diets high in saturated fat, particularly milk fat, and/or excessive n-6 PUFAs (e.g., the Western diet) exert a pro-inflammatory effect in IBD, the latter serving as a substrate for the production of pro-inflammatory prostaglandins, leukotrienes, and thromboxanes. By contrast, n-3 fatty acids, namely alpha-linolenic acid (ALA; C18:3, n-3; plant oils), eicosapentaenoic acid (EPA; C20:5, n-3), and docosahexaenoic acid (DHA; C22:6, n-3), are generally considered anti-inflammatory, serving to displace arachidonic acid and decrease the severity of the inflammatory response [25], albeit findings have varied [23]. In this context, raw soybean oil is not considered an anti-inflammatory therapeutic dietary supplement because of its fairly high saturated fat content and its high n-6 to low n-3 PUFA content (7:1 ratio) [26]. However, other bioactive compounds, including phospholipids and soyasaponins contained within the lipid fraction of soybean, have indeed been shown to exert anti-inflammatory effects [10]. Phospholipids Phospholipids in the diet are ingested in the form of glycerophospholipids, forming an important structural component and influencing the fatty acid composition and microstructure of cell membranes. Found in the yellow-brown fraction of the soybean, lecithin is considered a rich source of dietary glycerophospholipids, as well as of phosphatidylcholine, which forms an important component of the intestinal mucus layer (along with protein mucins) that is essential to maintaining the barrier between the gut microbiota and the host intestinal mucosa [27]. Dietary phospholipids have been shown to reduce oxidative stress in the brain, reduce cardiovascular risks, and be effective in reducing inflammatory reactions in murine models [24,28,29]. However, it is the PUFA content of phospholipids, namely n-3 and n-6 PUFAs, that influences the nature of inflammatory responses, primarily through the biosynthesis of phospholipid-derived lipid mediators that can have pro- or anti-inflammatory effects [24]. Phosphatidylcholine, a component of glycerophospholipids and mucus, has been observed to have therapeutic applications in IBD [30]. This stems from the observation that patients with UC in remission have decreased levels of phosphatidylcholine in rectal mucus samples, which is consistent with the weakened intestinal barrier typical of IBD patients [31]. Soybean phosphatidylcholine supplementation is reported to increase mucus secretion in the colon and improve mucus layer integrity [10]. As a vital component of mucus, phosphatidylcholine has a role in blocking hydrophobic bacteria and hydrophilic antigens from entering the intestine. These mucus-regenerating effects of phosphatidylcholine may additionally be effective for those who cannot accept traditional UC therapies, such as in the case of refractory UC [27]. Despite the potential therapeutic effect of phosphatidylcholine against UC, its pathophysiological mechanism for increased mucus secretion is unknown and requires further research.
Despite these promising findings for the therapeutic potential of soyasaponins in IBD, various factors require further investigation, namely, (i) the ability of intact soyasaponins to reach peritoneal macrophages in vivo, (ii) the bioactivity of soyasapogenols in enterocytes, and (iii) the safety of oral soyasaponin consumption in vivo [10]. Phytosterols Phytosterols are plant-derived sterols that are commonly found in the human diet. Although more than 100 types of phytosterols exist, the most common plant sterols, including those present in soybean (~300 mg/100 g of phytosterols), comprise β-sitosterol, campesterol, stigmasterol, and ∆5-avenasterol [45,46]. Phytosterols have a structure and physiological function similar to those of cholesterol and are well characterized for their ability to lower total cholesterol and low-density lipoprotein (LDL) levels, whereby they affect bile acid homeostasis to reduce lipid absorption [47][48][49]. There is also evidence suggesting that colitis pathology can be mediated by phytosterol-induced T-cell changes, and that this effect may reflect the cholesterol-lowering properties of phytosterols due to the important role cholesterol metabolism plays in the activation of the adaptive immune response [50]. Besides the well-characterized effect on lipid profiles, in vitro [51,52] and in vivo [53,54] evidence is emerging that phytosterols also exert anti-inflammatory and anti-oxidative effects to reduce the inflammatory activity of immune cells [55,56]. In fact, both β-sitosterol and stigmasterol have been shown to elicit anti-colitic benefit in HFD- and chemically-induced colitis mouse models [54,57,58]. This occurs, in part, by suppression of NF-κB activation, with stigmasterol found to also downregulate COX-2 expression [58,59]. Stigmasterol also acts as an antagonist of the farnesoid X receptor (FXR), a nuclear receptor responsible for maintaining intracellular bile acid homeostasis [60]. Soy Protein Fraction Soybean, comprising ~35-40% protein based on the dry weight of a mature seed, is an easily digestible, non-animal, complete protein source (containing all nine essential amino acids, albeit with low methionine content) that has a high Digestible Indispensable Amino Acid Score (DIAAS), on par with animal protein sources such as egg and dairy [21,[63][64][65]. Because of this, soy has been used for decades in the food industry as an alternative protein and meat analogue and is one of the most used protein sources in many commercial laboratory rodent diets. Beta-conglycinin and glycinin are the major sources of soy protein, accounting for ~65-80% of total proteins, and form the precursors of most peptides isolated from soybean. Of the various peptides, many have been shown to exert antioxidant, immunomodulatory, anticancer, antibacterial, angiotensin-converting enzyme (ACE)-inhibitory, and insulin-modulating activities (reviewed in [21]). Minor proteins in soy with bioactive properties in IBD include lunasin, lectin, and Bowman-Birk protease inhibitors [21].
β-Conglycinin and Glycinin Of the soy protein fraction, 90% is comprised of two storage proteins: β-conglycinin (7S globulin) and glycinin (11S globulin) [68]. These two storage proteins are significantly related to the allergenic effects of soy, as both remain stable when heated [69]. β-Conglycinin is composed of α', α, and β subunits, while glycinin is composed of five subunits that are a combination of acidic and basic parts [70]. Major storage proteins in soybean seeds, such as β-conglycinin, produce hydrolysates that have been shown to help maintain intestinal mucosa integrity [71,72], regulate intestinal flora balance, maintain intestinal health, and reduce the amount of enteric pathogen colonization [71,73]. In vitro studies also suggest that soybean glycopeptides (prepared from hydrolysates of β-conglycinin) inhibit enteropathogen adhesion, specifically of Escherichia coli, Salmonella typhimurium, and Salmonella enteritidis, as well as prevent damage caused by bacterial infection to the plasma membrane of LoVo cells [74]. Further, high molecular weight β-conglycinin hydrolysates improved epithelial cell growth and, in Caco-2 cells, effectively increased transepithelial monolayer resistance (TER) and reduced the likelihood of S. typhimurium monolayer translocation [74]. In a dextran sodium sulfate (DSS)-induced intestinal mucosa injury model with female BALB/c mice, soybean β-conglycinin peptide treatment (50 or 500 mg/kg in 0.2 mL with 21.77% glutamic acid) for 28 days significantly reduced histological scores and MPO activity (indicative of neutrophil infiltration) in both protective and reparative settings compared to positive and negative controls [71]. Mechanistically, expression of the inflammatory factor NF-κB/p65 was inhibited. Recent studies have correlated specific globulins with beneficial effects on metabolic disease. A β-conglycinin diet was shown to reduce atherosclerosis in mice, as well as to reduce liver and plasma cholesterol in rats fed a high-fat diet (HFD) [75]. Dietary soy glycinin protein has also been shown to prevent muscle atrophy after denervation in mice [76]. A study on the anti-inflammatory activities of soy proteins in DSS-treated pigs showed that soy peptides exerted inhibitory effects on pro-inflammatory pathways mediated by T helper-1 (Th1)- and Th17-type responses and upregulated the FOXP3+ T-regulatory (Treg) response in the colon and ileum [72]. There is also evidence that tripeptides derived from the enzymatic hydrolysis of glycinin, specifically valine-proline-tyrosine (VPY), play a role in inhibiting the production of pro-inflammatory mediators and in down-regulating the expression of pro-inflammatory cytokines.
It is suggested that VPY may be a potential therapeutic agent for IBD by targeting the human peptide transporter 1 (PEPT1), an uptake transporter in the small intestinal lumen [77][78][79], which is up-regulated during intestinal inflammation [77]. Lectin First characterized by Peter Hermann Stillmark in 1888, lectins are a class of carbohydrate-binding proteins of non-immune origin found in all eukaryotes and many bacterial species and viruses, and they are widely distributed in grain legumes, including soybean. In plants, lectins have a defensive role against predators. Orally ingested lectin remains undigested in the gut and is able to bind to various cell membranes, including epithelial cells and glycoconjugates in the intestinal and colonic mucosa. Lectin binding in the gut can negatively affect gut immune function and microbiota profiles and can damage mucosal cells [81]. Soybean agglutinins (SBA), also known as soybean lectins, are non-fiber carbohydrate-related proteins that represent 5-7% of the soybean and are considered the main antinutritional factors affecting the quality of soybean [82]. While various beneficial bioactive effects have been attributed to SBA (antitumor, antifungal, antiviral, and antibacterial activities) [83][84][85], SBA are also able to negatively affect gut health by disrupting gut barrier function [86,87], inducing local inflammatory responses [88,89], decreasing immunological responses [88], and interfering with the balance of the intestinal microbiota [82,90]. Modulation of the gut microbiota by SBA occurs via three possible mechanisms: (i) SBA binds to small bowel epithelial cells, resulting in alterations to the glycan structures of the intestinal mucosa and changes to bacterial binding sites, which, in turn, selectively stimulates the growth of some bacteria; (ii) SBA serves as a nutrient source for bacteria; and (iii) SBA induces alterations to the gut mucosal system, resulting in reduced immunoglobulin A (IgA) secretion and inhibition of bacterial proliferation [82]. Intake of SBA by intestinal epithelial cells can also elicit a toxic effect in most animals [82]. In rats, intraperitoneal injection of SBA induced a dose-dependent inflammatory response, which was blocked by pretreatment with glucocorticoid or by co-injection of N-acetyl-galactosamine, but not of other sugars [91]. Similarly, high-dose administration of SBA has been shown to increase intestinal permeability in pigs, whereas a low dose had no effect [92]. On the other hand, when present in circulating blood, SBA has been shown to elicit an anti-inflammatory effect [91]. Despite the above deleterious effects of SBA, aqueous heat treatment, particularly pre-soaking soybean in water and cooking (212 °F, 100 °C, for at least 10 min) [93], almost completely deactivates SBA, and thus the presence of SBA in human foodstuffs is relatively low [94,95]. Lunasin Lunasin is a naturally occurring 43-amino acid peptide with a high aspartic acid content [96]. Besides soybean, lunasin is found in other beans, grains, and herbal plants, including wheat, barley, rye, sunberry, wonderberry, bladder-cherry, jimson weed, tofu, tempeh, and whole wheat bread, at concentrations ranging from 0.013 to 70.5 mg lunasin/g of protein [96,97].
Lunasin has been intensely investigated for almost two decades for its potential use as a dietary supplement, given its rare composition and the unusual aspects of its polypeptide structure [96,98], which have various proposed health benefits, including anti-carcinogenic, anti-oxidant, and anti-inflammatory properties [99][100][101]. In vitro studies have demonstrated the ability of lunasin to suppress LPS-induced inflammatory reactions in macrophages via reduction of pro-inflammatory cytokine production (IL-6, TNF-α), as well as of other pro-inflammatory mediators, such as PGE2, through modulation of COX-2 and the iNOS/nitric oxide pathway via NF-κB pathway inhibition [101][102][103]. Lunasin has also been reported to inhibit the translocation of the p50 and p65 subunits of NF-κB into the cell nucleus, thereby inhibiting gene transcription and the production of pro-inflammatory molecules [102,103]. In a similar fashion, lunasin administration significantly attenuated the severity of DSS-induced inflammation, in part by decreasing colonic COX-2 expression [104], an important mediator of the inflammatory response that is known to increase in DSS-colitis [105,106]. While lunasin is heat stable and readily bioavailable [72,107], there is evidence to suggest that processing methods may affect lunasin efficacy. For example, commercial lunasin from soy was found to be more protective against DSS-induced inflammation in Swiss Webster mice than lunasin from soybean extract at higher doses. This suggests that lunasin purity and its anti-inflammatory properties are affected by the method of lunasin extraction [104]. Bowman-Birk Inhibitor (BBI) The Bowman-Birk inhibitor is a serine protease inhibitor present as an anti-nutritional factor in soybean and various other types of legumes. BBIs resist digestion and have been shown to reach the small and large bowel intact [108,109], or to be absorbed through the gut lumen and act systemically before being excreted in urine [110]. The inhibition of trypsin-like and chymotrypsin-like proteases by BBI [111,112] is, however, thought to decrease protein digestibility [113] and possibly promote pancreatic disease. As such, BBI is inactivated in the processing of soy concentrate (e.g., soymilk) [10]. However, BBI is known to have anti-inflammatory activity in both in vitro and in vivo systems and has long been recognized as a potent inhibitor of malignant transformation across various cell lines [114,115]. BBI also exerts potent anti-inflammatory activity, particularly in the gut [116], acting to effectively inhibit serine proteases released from inflammation-mediating cells [117], as well as to suppress the proteolytic and oxidative damage that occurs during inflammation [118]. Given that serine proteases are known for their active involvement in pro-inflammatory actions [117] and have been implicated in both the production of pro-inflammatory cytokines [119] and the aberrant inflammatory process in IBD [117], BBIs have been investigated for their therapeutic potential as a natural alternative to protease inhibitors. In DSS-treated Swiss Webster mice, 0.5% BBI concentrate supplementation before and after colitis induction resulted in a significant reduction in mortality (by 15%) and mortality scores (by 50%) compared to non-supplemented mice fed a standard diet [118]. The beneficial effects of BBI were, however, not seen when BBI was supplemented only after DSS-colitis induction [118].
In line with these findings, feeding a BBI-containing fermented soy germ extract was shown to significantly reduce TNBS-colitis in Wistar rats [120]. The role of BBI in regulating inflammation is, in part, via its ability to decrease LPS-induced pro-inflammatory cytokines (IL-1β, TNF-α, IL-6) and increase the anti-inflammatory cytokine IL-10 in macrophages [121,122], known mediators of inflammation and immune activation. In addition, BBI has been shown to increase the proportion of CD4+CD25+Foxp3+ Tregs, which exert immune-suppressive activities [123]. Soy Carbohydrate Fraction The carbohydrate composition of soybean, comprising 9% dietary fiber by total weight, consists mostly of oligosaccharides ('soy oligosaccharides'), including stachyose, raffinose, and sucrose, and of common components formed by various linkages of mono- and oligosaccharides [124,125]. Raffinose and stachyose are non-digestible in the gut and thus remain intact until reaching the lower intestine, where they are metabolized by certain bacteria that possess the alpha-galactosidase enzyme. Soy Oligosaccharides Several studies have investigated the prebiotic potential of soy oligosaccharides [126,127], particularly in terms of gut health. Overall, soy oligosaccharides have been shown to benefit immune function by promoting the abundance and metabolism of beneficial commensal gut bacteria [128], in part via enhanced T-lymphocyte and lymphocyte proliferation [125]. Further, soybean meal oligosaccharides have been shown to promote competitive exclusion of pathogenic bacteria [129]. There is also evidence that soybean oligosaccharides influence hematological and immunological parameters, for instance, (i) by increasing levels of superoxide dismutase (SOD) and IgG, (ii) by promoting splenocyte proliferation, as well as enhancing the number of antibody-forming cells in normal mice, and (iii) via attenuation of immune effects in SAM- and S180-treated mice [130]. In general, soybean oligosaccharides have been shown to increase microbial diversity in the gut, including the abundance of short-chain fatty acid (SCFA)-producing bacterial taxa such as Bifidobacterium and Lactobacillus [124,125,131] (although soy protein intake has also been shown to exert similar effects in vivo) [132]. Unfortunately, bacterial metabolism of these oligosaccharides in the colon results in gas production, lowering the acceptability of intake. Soy fermentation eliminates this problem and increases the bioavailability of soy isoflavones. The physiological effects of isoflavones depend on their bioavailability, with the bioavailability of genistein being greater than that of daidzein [135]. In the small intestine, Bifidobacteria and lactic acid bacteria that possess β-glucosidase activity are able to hydrolyze isoflavone glucosides into aglycones (reviewed in [136]). The derived metabolites are either absorbed by the host or metabolized further in the colon by colonic bacteria into metabolites of various estrogenic potential, such as equol, O-desmethylangolensin, and p-ethylphenol [137][138][139][140]. These metabolites are then absorbed via the portal vein and can persist in plasma for ~24 h [141]. Isoflavones have been reported for their beneficial effects in cardiovascular disease, osteoporosis, and cancer, and in the alleviation of menopausal symptoms (reviewed in [142]).
Isoflavones are classified as phytoestrogens and exhibit both functional and structural similarities to the mammalian estradiol molecule, giving isoflavones their 'estrogen-like' activity via the estrogen receptor (ER) [140,[143][144][145][146][147][148]. Although isoflavones bind to both the α and β isoforms of the ER, they bind to and activate ERβ approximately 20 times more strongly than ERα [143,144]. Notably, the predominant ER subtype expressed in colon tissue is ERβ, and it serves to maintain a normal epithelial architecture, protecting against chronic colitis [149,150]. In recent decades, extensive epidemiological evidence, together with preclinical in vivo and in vitro studies, indicates that isoflavones also exert potent anti-inflammatory activity in a range of inflammatory diseases via increased antioxidative activities, NF-κB regulation, and reduced pro-inflammatory factors, including enzymatic activity and cytokine levels (reviewed in [151,152]). The antioxidant activity of isoflavones is largely attributed to their inhibitory effect on COX-2, an enzyme that mediates the conversion of arachidonic acid to pro-inflammatory prostaglandins. Prostaglandins are important mediators of the inflammation process, and their synthesis is increased in inflamed tissues [153]. By comparison, raw soybean oil, which is void of isoflavones, has been shown to significantly raise the levels of arachidonic acid [154]. Numerous studies support the anti-oxidant and anti-inflammatory activity of soy isoflavones; however, their therapeutic role in human IBD remains largely unknown. Data from preclinical rodent studies suggest that the antioxidant activity of soy isoflavones occurs via the scavenging of free radicals, upregulation of antioxidant enzyme systems, promotion of tight-junction protein expression, and modulation of TLR4 signaling activity [149,150]. Diets containing high isoflavone contents showed consistent and significant elevation of antioxidant enzymes in various organs [155,156]. Fermented soy germ contains phytoestrogen that is similar to 17β-estradiol in women [150,157], which has been demonstrated to mitigate effects of IBD, such as decreased paracellular permeability and increased tight junction sealing [150,158]. In a partial restraint stress female Wistar rat model, the estrogenic and protease inhibitor properties of phytoestrogen-rich soy germ (34.7 µmol/g of isoflavones vs. 17β-estradiol benzoate) were shown to prevent stress-induced intestinal hyperpermeability and hypersensitivity, although they had no effect on plasma corticosterone [48]. Genistein Among the soybean isoflavones, genistein is considered the most predominant in the human diet [159]. Genistein has been shown to act as a potent agent in both the prevention and treatment of cancer and various chronic inflammatory diseases [160]. The anti-cancer activity of genistein is mainly attributed to its ability to mediate apoptosis, the cell cycle, and angiogenesis, as well as to inhibit metastasis [161,162]. Genistein has also been suggested to reduce obesity in adults due to its estrogenic activity on genes associated with the regulation of lipolysis, lipogenesis, and adipocyte differentiation via ERβ, and on 5' adenosine monophosphate-activated protein kinase (AMPK) signaling within muscle and adipose tissue via ERα [162,163]. However, the effects in adipose tissue appear to be sex- and dose-dependent, and the effects of soy may vary based on the time of administration (early development vs.
adult), likely due to differences in ER expression and levels of endogenous estrogens [163]. In IBD, the anti-inflammatory properties of genistein have been reported both in vitro and in vivo. For example, at physiological concentrations (0.1 µM-5 µM), genistein was found to inhibit TNF-α-induced endothelial and vascular inflammation in C57BL/6 mice via the protein kinase A pathway [164]. Other studies have also demonstrated genistein inactivation of NF-κB signaling [165,166]. Rodent models of chemically-induced colitis have also demonstrated marked attenuation of colitis severity and reduced pro-inflammatory cytokine profiles following genistein treatment [20,167,168]. In vitro administration of genistein to Caco-2 cells was further shown to improve both cell viability and cellular permeability and to inhibit DSS-induced activation of TLR4/NF-κB signaling [167]. Further, genistein treatment was shown to skew M1 macrophages toward the M2 phenotype, marked specifically by increased expression of arginase-1 and reduced systemic cytokine profiles. Additionally, genistein increased the number of dendritic cells and IL-10-producing CD4+ T cells, which in part attenuated colitis symptoms [168]. Another route by which isoflavones modulate intestinal inflammation in epithelial cells is via the Janus kinase (JAK)/signal transducer and activator of transcription (STAT) pathway activated by cytokine signaling. During inflammation, STAT activation is needed for CD4+ T-cell differentiation into the T-helper phenotypes (i.e., Th1, Th2, Th17) [169,170]. Dysregulated STAT phosphorylation is implicated in the development of chronic inflammation due to an excessive T-helper cell response resulting from inhibited immune cell apoptosis [171]. In vitro, isoflavones have been shown to modulate JAK-STAT activity in intestinal epithelial cells. In murine macrophages, genistein was shown to inhibit LPS-induced STAT1 translocation [172]. In Caco-2 cells, low-dose genistein treatment (3 µM) decreased STAT3 translocation by 56%, whereas higher doses (30 µM) decreased STAT1 nuclear translocation by 23% [173]. STAT inhibition by genistein, and possibly other soy isoflavones, is viewed as one of the main mechanisms of action of genistein in inflammatory diseases [173][174][175]. It is important to note that genistein (like other flavonoids) is susceptible to oxidative degradation, particularly when exposed to oxygen, light, moisture, heat, and food processing conditions, which might affect its nutraceutical value, make it less active, and reduce its absorption efficiency [176][177][178]. In this regard, encapsulation systems, such as microencapsulation (e.g., with water-soluble chitosan obtained by the Maillard reaction) and nanoencapsulation, could serve as useful tools for the oral administration of genistein to preserve its biological, antioxidant, and anti-inflammatory properties, as well as its functionality in vitro and in vivo [179][180][181][182]. Equol The soy isoflavone equol is a daidzein-derived metabolite that exists in two enantiomeric forms, R-equol and S-equol, the latter naturally metabolized by the microbiota in the intestines of humans and rodents [183]. In the gut, S-equol is produced from daidzein via 21 different intestinal bacterial strains, which have been identified in 30-50% of the population. However, prevalence rates are lower (20-30%) in Westerners compared to Asians (50-80%) [184,185].
While studies from northeast Asia have suggested that S-equol, and not dietary soy, is inversely associated with incident coronary heart disease, various short-term RCTs have found that both soy isoflavones and S-equol improve arterial stiffness [186][187][188][189], an independent and important predictor of coronary heart disease [190]. Both clinical and experimental evidence shows that equol holds unique immune properties, having greater estrogenic and antioxidant activity compared to most other isoflavones [157,183,191,192]. In vitro, genistein, daidzein, and equol were shown to significantly decrease nitric oxide production via inhibition of iNOS mRNA expression and protein in a dose-dependent manner. In addition, pre-treatment of human intestinal cells (Caco-2 cell line) with these isoflavones reduced LPS-induced inflammatory responses via decreased NF-κB activation [193]. However, in a DSS-colitis female BALB/c mouse model comparing the effects of genistein, daidzein, and equol, daidzein and, particularly, equol were found to severely perpetuate the DSS-induced effects on body weight, resulting in 14% survival in equol-treated mice. It is important to note that the effects were dose-dependent, with significant reductions in body weight seen at equol dosages of 20 mg/kg body weight but not at 2 or 10 mg/kg [194]. Equol administration also resulted in decreased production of the anti-inflammatory cytokine IL-10 by mesenteric lymph node T cells. The worsening of colitis by equol was proposed to reflect both T cell-dependent and -independent mechanisms, given the significantly lower survival rate seen in equol-treated severe combined immunodeficiency (SCID) mice compared to controls [194]. Notably, the consumption of purified isoflavones by equol producers results in significantly higher levels of equol in both urine and blood plasma, about 10 to 1000 times higher than in non-producers fed the same supplement [195]. Intestinal Mucosa Permeability The epithelial mucous layer consists of mucus glycoproteins (mucins; MUC) and trefoil factors (TFF) secreted by goblet cells that collectively provide the first line of defense against pathogens, both of which are critical to protecting the intestine from inflammation [196,197]. Impairments in the gut epithelial barrier are characteristic of IBD and result in increased gut permeability to bacterial pathogens and other antigens, which, in turn, triggers and perpetuates ongoing immune responses and chronic inflammation [198]. There are more than 40 different tight junction-associated proteins involved, with occludin, claudin, zonula occludens-1 (ZO-1), and junction adhesion molecule (JAM) proteins considered the most important in maintaining epithelial integrity. These are integral proteins associated with peripheral membrane proteins, such as ZO-1, which play a role in scaffolding and anchoring the integral proteins [199,200]. Chemically-induced colitis studies have demonstrated the protective effects of soy isoflavones on colonic inflammation and, specifically, on intestinal barrier integrity, albeit alterations in the expression of tight junction proteins appear to vary not only between the different isoflavones but also between the various fractions (isoflavones, proteins) found within soy.
For instance, in DSS-treated female Institute of Cancer Research (ICR) mice, supplementation of a standard rodent diet with 0.5% soy isoflavones significantly enhanced colonic occludin mRNA [149], whereas in DSS-treated female BALB/c mice, genistein supplementation (600 mg/kg diet) enhanced both ZO-1 and occludin colonic mRNA, as well as lowered serum LPS (as a marker of colonic permeability) following colitis induction [167]. The combination of a barley and soybean mixture enriched in β-glucans, genistein, and daidzein was shown to significantly prevent the loss of the tight junction proteins ZO-1, occludin, and claudin-1 in colonic epithelial cells of female C57BL/6 mice following DSS treatment. In vitro, the barley-soybean mixture dose-dependently recovered the DSS-induced loss of tight junction proteins within the monolayer of Caco-2 cells, an epithelial cell line broadly used as a model of the intestinal barrier [201]. Protease-activated receptors (PAR) are a subfamily of the G-protein-coupled receptor family that are activated through the cleavage of a part of their extracellular domain. Among the PARs, PAR-2 is highly expressed on endothelial cells and has been widely studied for its ability to modulate inflammatory responses, with various proteases able to act as inflammatory mediators and disrupt barrier function due to their ability to cleave and activate PAR [202][203][204][205][206][207]. Indeed, several studies have shown that patients with IBD have increased levels of protease activity in feces [204,[206][207][208][209][210][211]. While the exact mechanism behind the regulation of fecal proteolytic activity is not fully understood in the context of IBD, soy protein, soy isoflavones, and soy-derived serine protease inhibitors (soy BBI) are promising candidates for its regulation. For example, the protease inhibitor activity of a fermented soy germ extract containing both BBI and isoflavones (55% daidzein, 30% glycitein, and 15% genistein in aglycone forms) was shown to significantly reduce the severity of TNBS-induced colitis and the associated increase in intestinal permeability at 24 h and 3 days post-colitis in Wistar rats [120]. Additionally, fermented soy germ prevented luminal increases in protease activity and decreased epithelial protease-activated receptor-2 (PAR-2) expression, independently of ER-ligand activity [120]. In a model of stress-induced irritable bowel syndrome (IBS), the same group found that fermented soy germ extract also prevented the decreases in occludin expression resulting from prolonged stress when compared to non-supplemented controls [150]. In both studies, however, the protective effects of soy germ extract were reversed by administration of the estrogen receptor antagonist ICI 182,780, suggesting that the protective properties are linked to ER-ligand activity [120,150]. The protective effects of soy on intestinal permeability are, however, not entirely dependent on the ER-ligand binding activity of isoflavones. For example, supplementation with isoflavone-free soy protein concentrate was shown to attenuate the effects of DSS-colitis and prevent the loss of gut barrier function in male CF-1 mice, suggesting that components within soybean other than isoflavones exert protective effects on intestinal permeability [11].
Isoflavone-free soy protein supplementation also reduced colonic glucagon-like peptide-2 (GLP-2) protein levels (a key regulator of the intestinal mucosa via regulation of epithelial cell growth) but had no effect on the mRNA expression of claudin-1, occludin, or the ratio between the two [11]. In vitro, co-treatment with soy protein concentrate in Caco-2 cells mitigated both intracellular oxidative stress and DSS-induced increases in monolayer permeability. Intriguingly, hydrolysis of soy protein concentrate with pepsin and pancreatin (reducing thiol content) reduced the radical scavenging activity but not the effect on monolayer permeability, suggesting that the beneficial antioxidant effect and the effect on permeability result from different underlying mechanisms of action or different components within soy protein [11]. By comparison, administration of soy protein BBI to DSS-treated male C57BL/6 mice was found to only increase colonic mRNA expression of occludin, whereas colitic mice treated with pea seed albumin extract (known to contain BBI) exhibited increased mRNA expression of MUC-3, occludin, and ZO-1 [212]. Treatment with the albumin fraction of pea seed extract, however, had no significant effect on tight junction protein expression [212]. The effect of soy protein on mucosal integrity may also vary based on the time of administration. For example, in DSS-colitis C57BL/6 mice fed, from 7 weeks of age, either a soy-, casein-, or whey-based diet with or without the addition of the probiotic Lactobacillus rhamnosus GG, colonic Muc1 but not Muc2 expression was significantly lower in soy-fed DSS mice. The addition of L. rhamnosus GG to the diet had no effect on colonic MUC expression [213]. By comparison, feeding a soy protein isolate-based AIN-93G diet for 21 days to 3-week-old C57BL/6 weanling mice resulted in suppression of secretory IgA and mucin (decreased expression of Muc2, Tff3, Grp94, and Agr2) in the ileum compared to mice fed the referent casein-based diet [214]. Current data indicate that excess dietary fats (i.e., HFD) can promote LPS-induced gut barrier dysfunction, resulting in enhanced local/serum LPS levels, which, in turn, promotes intestinal inflammation, alterations in tight junction signaling, intestinal epithelial cell dysfunction, and hyperpermeability [215]. Emerging evidence indicates that various soy bioactives can attenuate the inflammatory activity induced by an HFD. For instance, in an HFD-induced obese Sprague Dawley rat model, soy isoflavone supplementation increased ZO-1 expression and reduced LPS concentrations compared to non-supplemented HFD-fed rats, with increases in both occludin and Muc-2 found in rats supplemented with high doses of isoflavones [216]. In another study, dietary supplementation (21 d) of either a high- or low-fat diet (30% vs. 6% fat) with tempeh, a fermented soy product, resulted in markedly elevated fecal mucins (indices of intestinal barrier function) and IgA in male Sprague Dawley rats compared to non-supplemented rats [217]. HFD-fed C57BL/6 mice supplemented with genistein for 6 months exhibited lower circulating levels of LPS (vs. non-supplemented mice), suggesting a protective effect on mucosal gut barrier health (preventing LPS influx) [218]. Similar protective effects on intestinal permeability were reported in SAMP1/YitFc mice challenged with and without DSS-colitis and fed a Western-style diet [12].
In a recent RCT, combined supplementation of soy and vitamin D significantly reduced plasma inflammatory markers and fecal protease activity, as well as improved gut permeability, in women with irritable bowel syndrome (IBS) [219]. Despite the protective effects on intestinal permeability observed from both the isoflavone and protein fractions of soy, such effects are not seen with soybean oil. In fact, a soybean oil-supplemented elemental diet increased intestinal permeability (measured by urinary excretion of phenolsulfonphthalein; PSP) and intestinal damage to a level comparable to that of standard chow-fed male Sprague Dawley rats following indomethacin-induced small bowel inflammation [220]. The pro-inflammatory effect of soybean oil compared to that of soybean may be attributed to its high n-6 to n-3 ratio, as well as to the absence of isoflavones or other soy-derived peptides. Oxidative Stress Various lines of evidence suggest that the modification of macromolecules (e.g., DNA, lipids, proteins) induced by reactive oxygen species (ROS) plays an important role in DNA damage, genotoxicity, and carcinogenesis [221,222]. Overproduction of ROS and reactive nitrogen species is known to exacerbate symptoms in IBD, affecting the redox equilibrium within the gut mucosa. Antioxidant enzymes, for example SOD, are capable of eliminating ROS and byproducts of lipid peroxidation, which, in turn, protects tissues and cells from oxidative damage. Recent studies show that soy isoflavones and soy protein concentrate exhibit free radical scavenging activity [149,216]. High-fat diets are known to induce oxidative stress and lipid peroxidation and to reduce antioxidant enzyme activities in various organs of obese mice, particularly the intestinal mucosa [44]. Soy isoflavones, particularly at high dosages (450 mg/kg and 150 mg/kg), were shown to significantly attenuate HFD-induced intestinal oxidative stress in the colon via upregulation of important ROS scavengers, including SOD, total antioxidant capacity, glutathione peroxidase (GSH-Px), and catalase, with a concomitant reduction in malondialdehyde, a product of lipid peroxidation and an indicator of oxidative damage [216]. Significant reductions in free radical activity have also been reported with soy protein concentrate supplementation of a standard rodent diet in vivo, as well as in vitro, although pre-oxidation of soy protein concentrate or the blocking of free thiols abolished the latter effects [11]. In another study, soy isoflavone extract exhibited strong antioxidant activity in vitro; however, a dose-dependent response was observed, with antioxidant activity found to plateau at higher concentrations [155]. When administered to rats, soybean isoflavone extract (250 ppm) enhanced SOD activity in various organs (lungs, small intestine, kidney) compared with vitamin E, with SOD activities found to markedly increase from the 8th to the 16th week of feeding and the most notable effects observed after 24 weeks [155]. Of interest, however, laboratory-prepared tofu (containing ~50 ppm isoflavones) had better effects than the soy extract (containing ~250 ppm), suggesting that molecules other than isoflavones present in tofu (a soybean-based product) may have a synergistic effect on the in vivo induction of antioxidant enzyme activity [155]. Laboratory studies with genistein have yielded mixed results regarding oxidative stress.
While some have shown that dietary supplementation significantly decreases the expression of molecular and biochemical markers of inflammation [20,120] and increases antioxidant enzyme activity in various mouse organs [147], others have reported genistein to enhance colonic oxidative stress and colon carcinogenesis [223], with the combination of genistein and epigallocatechin-3-gallate, a green tea polyphenol, shown to enhance tumorigenesis in Apcmin/+ mice [221]. By contrast, microencapsulated genistein, but not non-encapsulated genistein, was found to significantly reduce oxidative stress in murine colonic tissue, supporting the idea that differences in the delivery system could explain earlier discrepancies in genistein antioxidant activity [182]. Oral administration of soyasaponin I to TNBS-treated mice was shown to significantly reduce both inflammatory markers and lipid peroxide (malondialdehyde and 4-hydroxy-2-nonenal) levels, while increasing glutathione content and SOD and catalase activity [44]. Myeloperoxidase (MPO) Activity Myeloperoxidase is a peroxidase enzyme that is mostly expressed in neutrophil granulocytes and produces hypohalous acids to conduct antimicrobial activity. Elevated levels of MPO can cause oxidative damage in host tissue [222]. Reductions in MPO have been reported in various animal studies following supplementation with soy-derived bioactives. For example, oral supplementation of fermented soy germ (55% daidzein, 30% glycitein, and 15% genistein in aglycone forms, for 15 days) to TNBS-induced colitis Wistar rats significantly suppressed colonic MPO compared to untreated animals [120]. In a comparable study, TNBS-induced MPO activity was suppressed in Wistar rats treated orally with genistein (100 mg/kg for 14 days) compared to untreated colitic rats [20]. Studies on TNBS-treated ICR mice fed soyasaponin Ab and soyasaponin I (10 mg/kg) for 5 days revealed similar results regarding MPO activity inhibition [42,44]. Reductions in MPO activity have also been reported in DSS-colitis BALB/c mice administered soybean β-conglycinin (50 or 500 mg/kg for 28 days) [71] and the transported tripeptide VPY (Val-Pro-Tyr, 2-week pretreatment of drinking water with 0.1 and 1 mg/mL) [76]. Taken together, it appears that the reductions in MPO, oxidative stress, and intestinal inflammation are attributable to various bioactive components of soy, although the exact mechanisms remain unclear. Cytokines Studies have shown various effects of soybean and soy derivatives on cytokine pathways, which cannot be easily integrated into a single narrative. However, decreased expression of the pro-inflammatory cytokines TNF-α, IL-1β, IL-6, and IFN-γ (which stimulates macrophages to induce innate/adaptive immune responses), among others, has been frequently reported following administration of soy in models of inflammation with and without chemically induced colitis [42,44,77,120,168,201,213,[224][225][226]. The effect of soy on the anti-inflammatory cytokine IL-10 has, however, yielded variable results [131], with some studies showing increased production following administration of soy isoflavones [216], whereas others showed no effect [120,131,149]. It is possible that this variability of effect reflects differences in processing, dosage, and animal model. Cyclooxygenase 2 (COX-2) The pro-inflammatory enzyme COX-2, or prostaglandin-endoperoxide synthase 2 (PTGS2), is a key inducible enzyme encoded by the PTGS2 gene that is rapidly upregulated by cytokines and growth factors and is thus an important mediator of the inflammatory response.
Increased COX-2 expression in the colonic epithelium has been associated with IBD and with DSS-colitis rodent models [102,106], with affected areas of the gut known to exhibit increased prostaglandin production [124,125]. Across the various chemically-induced colitis models reviewed, colitis induction was uniformly found to increase the mRNA expression of COX-2 in the colonic mucosa. While genistein [20], soyasaponin [42,44], and lunasin [104] have been shown to markedly attenuate colitis-induced increases in colonic COX-2 mRNA, additional studies are needed to determine potential differences in effect related to the timing of treatment administration. For instance, the effects of genistein and soyasaponin were evaluated when the compounds were administered to rodents prior to colitis induction [20,42,44], whereas lunasin was evaluated for its effect after the induction of colitis [104], with effects seen at doses of both 20 mg/kg and 40 mg/kg [104]. Fermented soy sauce (prepared from defatted soybeans or other protein-rich materials) was also shown to inhibit colonic COX-2 in DSS-induced C57BL/6J mice, although low-dose treatment (4 mL/kg) resulted in greater anti-colitic effects than high-dose treatment (8 mL/kg) [225]. Of interest, COX-2 was not inhibited in DSS-treated C57BL/6 mice fed pure soy BBI (despite the anti-inflammatory effect of pure soy BBI in the DSS model), whereas in the same model, the attenuation of DSS colitis in mice fed pea seed extract containing BBIs was accompanied by significant reductions in COX-2 mRNA expression [212]. The same study also found that pure soy BBI upregulated the expression of matrix-degrading proteases, specifically matrix metalloproteinase (MMP)-14, whereas pea seed extract significantly reduced mRNA expression of MMP-2, MMP-9, and MMP-14 [212], suggesting that the effects on the colonic immune response vary with the dietary source of BBI.

Toll-Like Receptors (TLRs)

Toll-like receptors are a class of proteins expressed in innate immune cells, including macrophages and dendritic cells, that recognize microbial structural components by binding to pathogen-associated molecular patterns (PAMPs). TLRs, in turn, trigger the production of pro-inflammatory mediators, including cytokines such as TNF-α and IFN-γ, through the NF-κB and interferon regulatory factor 3 (IRF3) signaling pathways [227]. In this regard, TLRs play an important role in innate immune pathways linked to inflammation and, in turn, in the pathogenesis of multiple diseases involving the innate and adaptive immune systems. Indeed, mutations in TLR genes are linked with chronic inflammatory conditions, including IBD [228]. Numerous studies have investigated how soy or soy bioactives modulate TLR expression and inflammation in rodent IBD models. However, these studies have varied considerably in the type of soy-derived bioactive administered, rodent genetics, and the macronutrient composition (e.g., high-fat vs. low-fat) of the underlying diet. For instance, both the soluble and the insoluble fiber components isolated from soy hulls were found to block the TLR4/NF-κB inflammatory signaling pathway in DSS-treated BALB/c mice compared to control mice [229]. In another study, soy-derived BBI (50 mg/kg/day for 23 days) was shown to significantly reduce the expression of TLRs, including TLR2, TLR4, TLR6, and TLR9, in C57BL/6 mice following DSS treatment [212].
In another study, replacement of 6 or 12% of the dietary protein with isoflavone-free soy protein concentrate for 10 days was shown to blunt the DSS-induced increases in colonic IL-1β and TLR4 expression in CF-1 mice [11]. By comparison, in DSS-colitis ICR (Institute of Cancer Research) mice, dietary supplementation with soy isoflavones (0.5% for 7 days) alleviated colitis severity and inactivated MyD88 but did not significantly alter TLR4 expression following colitis induction [149]. In a TNBS-colitis ICR mouse model, soyasaponin Ab treatment was shown to attenuate colitis severity and inhibit NF-κB activation and TLR4 expression [42]. In vitro, soyasaponin Ab administration to TLR4 siRNA-treated peritoneal macrophages did not affect TLR4 expression or LPS-induced NF-κB activation, suggesting that soyasaponin may ameliorate colitis through the inhibition of LPS binding to TLR4 on macrophages [42]. There is also evidence that the inflammatory potential of HFDs can be enhanced or suppressed by soy bioactives. For example, in an HFD-induced obese Sprague Dawley rat model, soy isoflavones blocked TLR4 and NF-κB expression in colonic tissue [216], whereas the long-term addition of genistein to an HFD fed to C57BL/6J mice resulted in a significant increase in TLR4 expression, as well as in the expression of TNF-α and IL-6, compared to controls [224].

Peroxisome Proliferator-Activated Receptors (PPARs)

Peroxisome proliferator-activated receptors are ligand-activated nuclear hormone receptors for lipid-derived substrates. The PPAR family plays an essential role in energy metabolism and consists of three members: PPAR-α, PPAR-δ, and PPAR-γ, with the latter recently implicated in bacteria-induced inflammation and the IBDs [230]. Studies investigating the effects of soy-derived isoflavones compared with soy protein isolate on PPAR activity have, however, yielded inconsistent findings. While some in vivo studies show PPAR-α activation due to the isoflavone content of soy [231,232], others demonstrate PPAR-α activation as a result of non-isoflavone phytochemicals or their metabolites derived from soy [233]. With regard to PPAR-γ, activation by isoflavones has been shown in vitro [231,232], whereas the in vivo effects of soy on PPAR-γ appear to be tissue-specific [234,235].

Microbiome

The gut microbiota is shaped by diet and plays a critical role in both the etiology and the progression of IBD. Decreases in microbial diversity, as well as alterations to the Firmicutes:Bacteroidetes ratio, with reductions in beneficial taxa such as Bacteroidetes, Lactobacillus, and Faecalibacterium prausnitzii, have been reported in chronic inflammatory conditions including obesity and IBD [216,[236][237][238]. Comparable alterations in taxa, particularly an increased abundance of Firmicutes and decreased Bacteroidetes, are reported in animal models of chemically-induced colitis [212], and such models have therefore been frequently used to test the in vivo effects of soy on microbiota composition. Several studies have shown that the consumption of soy protein exerts a beneficial effect on the gut microbiota, leading to greater microbial diversity, decreased Firmicutes and increased Lactobacillus abundance, and enhanced bacterial production of SCFAs, especially lactic and butyric acids [132,239,240]. Further, the anti-inflammatory effects of soy protein have been linked to changes in bile acid pool composition and metabolism [241,242], which are themselves subject to gut microbiota modulation [243,244].
Although the interplay between the latter two factors remains unclear, bile acids act as hormone-like regulators of inflammation, and evidence suggests that dysbiosis of the gut microbiota and its reciprocal interaction with the composition and pool size of bile acids is associated with the pathophysiology of metabolic disease [245,246]. HFDs are known to alter the gut microbiota and the intestinal pool of bile acids and to promote inflammation [242]. In the context of an HFD, soy protein intake has been shown to promote microbiota-driven transformation of primary bile acids toward their secondary forms, as well as to favor reabsorption of bile acids in the colon [242]. For instance, the impairments in intestinal permeability observed in C57BL/6 mice fed an HFD composed primarily of soybean oil (40% fat, with no isoflavones or peptides) were found to correlate significantly with increases in cecal concentrations of the primary bile acids cholic, chenodeoxycholic, and alpha-muricholic acid, as well as the secondary bile acids lithocholic, hyodeoxycholic, and ursodeoxycholic acid, compared to control-fed mice [242]. While these findings are consistent with previous in vitro studies demonstrating that certain bile acids, including cholic, chenodeoxycholic, and ursodeoxycholic acid, increase tight junction permeability [247], other studies have reported a protective effect of ursodeoxycholic and lithocholic acid on intestinal inflammation both in vivo and in vitro [248,249]. In one study, the protective effect of fermented soy on permeability and the concomitant reductions in lithocholic acid occurred in supplemented rats fed either a high- or a low-fat diet composed primarily of beef tallow as the fat source [217], suggesting that soy-derived components such as peptides and isoflavones can modulate the pro-inflammatory HFD-induced shift in bile acids, although this requires further study. Some of the beneficial health effects of soybean and soy isoflavones may be attributed to their ability to stimulate or inhibit the growth of gut microbiota populations [250]. In an HFD study, the addition of 0.2% genistein significantly increased the relative abundance of Firmicutes (from 21.35% to 23.8%) and decreased Bacteroidetes (from 67.1% to 56.8%), with a concomitant increase in both Verrucomicrobia and Prevotellaceae, compared to C57BL/6 mice fed a non-supplemented HFD [224]. At the species level, these changes could be explained by shifts in Bacteroides acidifaciens and Bacteroides uniformis, whereas the genus-level changes in Prevotella and Akkermansia were mainly attributed to Prevotella copri and Akkermansia muciniphila, respectively [224]. By comparison, rats fed an HFD supplemented with soy isoflavone (150 mg/kg vs. 450 mg/kg) exhibited a significantly higher relative abundance of Bacteroidetes and Proteobacteria, a reduced proportion of Firmicutes, and a lower Firmicutes:Bacteroidetes ratio [216]. In line with previous studies [251,252], the increased proportion of Phascolarctobacterium and the concomitant decrease in Oscillibacter, Morganella, and Pasteurella were believed to contribute to the improvements in gut barrier integrity and reduced inflammation observed in soy isoflavone-supplemented animals [216,253,254]. In another study, the addition of fermented soy to an HFD resulted in a lower proportion of Bacteroides and a higher proportion of Clostridium cluster XIVa, as well as higher cecal acetate, butyrate, propionate, and succinate levels, compared to an HFD alone [217].
Soybean-hydrolysate media have been shown in vitro to promote a higher growth rate of both Bifidobacteria and Lactobacillus [255], whereas the addition of soy protein did not increase probiotic microorganism growth [256]. Various animal models, however, with or without colitis induction, have reported increases in the abundance of the anti-inflammatory taxa Bifidobacterium and Lactobacillus [12,257,258], with improvements in the Firmicutes to Bacteroidetes ratio, following the feeding of soy oligosaccharides, soluble and insoluble soy-derived fibers, or soy-based products [201,229,259]. Lactic acid bacteria, such as Lactobacillus plantarum strains [257,258], which are known for their anti-inflammatory and antioxidant effects in IBD [257,260,261], are promoted by consumption of a soy-based diet [12]. Several studies have demonstrated a protective effect of soy-based products fermented with probiotic bacterial strains on chemically-induced colitis and cancer severity. For example, fermentation of a soy-based product by Enterococcus faecium CRL 183 and Lactobacillus helveticus 416 with Bifidobacterium longum ATCC 15707 significantly reduced DSS-colitis symptom severity in male Wistar rats and increased the abundance of Lactobacillus spp. and Bifidobacterium spp., the latter accompanied by increases in SCFA levels (propionate, acetate) compared to controls [131]. Similarly, fermentation of soy milk by Lactococcus lactis subsp. lactis S-SU2 [257] or by the riboflavin-producing strain Lactobacillus plantarum CRL 2130 [262] has been shown to significantly reduce chemically induced colitis severity compared to unfermented soy milk. Findings from the latter study suggest that riboflavin-producing lactic acid bacteria may serve as an effective anti-inflammatory therapy to promote mucosal integrity [263]. Fermented soybean pastes produced with the probiotic species Aspergillus oryzae, Bacillus subtilis-SKm, and Lactococcus lactis-GAm have also been shown to inhibit both DSS-colitis and azoxymethane (AOM)-induced colon carcinogenesis [264,265].

Discussion

During the last decade, preclinical and human studies have shown beneficial and therapeutic anti-inflammatory activity of soybean and soybean-derived compounds in chronic inflammatory disorders, including IBD. This paper reviews the preclinical evidence from animal models of IBD (with or without chemically-induced colitis) that tested the role of soybean and bioactive components of soybeans in the severity of intestinal inflammation, in order to closely examine the mechanistic effects of soybean on inflammation, oxidative stress, intestinal permeability, gut microbiota profiles, and the immune system under controlled conditions. Our review highlights the overlapping anti-inflammatory potential of soybean and soybean bioactive compounds in experimental IBD, as well as the differences in mechanisms of action attributable to the distinct components within soybean. However, our review also highlights the variability in findings between studies, which appears to depend on various factors, including rodent genetics, the method of colitis induction, the duration of the feeding trial, and the dosage, source, fermentation, and structural composition of the soy compound or bioactive used. Other factors, including the accompanying diet profile (e.g., HFD), the quality and processing methods of the soybean, and the gut microbiota, could also play a role in the mechanisms and effectiveness of soy in modulating intestinal inflammation.
Despite these advances and the generation of relevant data, a limitation to note is that many studies do not report in detail the nutritional composition of the soy compound or the background diet, making it difficult to separate the exact role (or interaction) of soy from that of other dietary factors and creating less reproducible experimental conditions, as previously described by our group [266]. In the future, it will be important to improve reporting and to design experiments that relate the described mechanisms to host genetics and the host microbiome.

Conclusions

Soybeans have been a major component of Asian cuisine for centuries, and soy products have increasingly become a popular and preferred choice in Western countries because of their diverse nutritional content, particularly as a high-protein legume. Over the last decade, a growing body of evidence has revealed a wealth of potential health benefits from consuming soybean products, attributed primarily to the rich source of antioxidants and immunomodulatory molecules present in soy. In particular, soybean bioactives have attracted attention from a therapeutic perspective in IBD because of the anti-inflammatory, anti-oxidative, and barrier-protective effects demonstrated in rodent models of IBD. While the preclinical findings to date are promising, more studies are needed to understand and characterize the complex biochemical mechanisms through which soy bioactives interact and exert their effects, especially in the context of the genetics and microbiome of the host.
Segmenting Dynamic Network Data

Networks and graphs arise naturally in many complex systems, often exhibiting dynamic behavior that can be modeled using dynamic networks. Two major research problems in dynamic networks are (1) community detection, which aims to find specific sub-structures within the networks, and (2) change point detection, which tries to find the time points at which the sub-structures change. This paper proposes a new methodology to solve both problems simultaneously, using a model selection framework in which the Minimum Description Length Principle (MDL) is used as the objective criterion to be minimized. The derived detection algorithm is compatible with many existing methods, and is supported by empirical results and data analysis.

In such settings, one is often interested in analyzing how the network evolves. There are two main areas of research for dynamic networks: consensus clustering, where one tries to find a community structure that fits well for all the snapshots in the data sequence, and change point detection, where one aims at locating the time points at which community structures change. In terms of consensus clustering, several main techniques have been developed in the literature, closely related to static network community detection methods. These include sum graphs and average Louvain (Aynaud & Guillaume 2011), which start by constructing a special graph that captures the topology of all snapshots in a given graph sequence and then apply any static community detection method to this summary graph. This assumes that the discovered structure fits all snapshots in the sequence reasonably well. The construction of this special graph can be done in many ways; the simplest is to add up the adjacency matrices of the snapshots to create a new matrix that represents this summary graph (see Section 4 for more details). Another detection method, by Lancichinetti & Fortunato (2012), aims to find a partition for the sequence using the individual partitions of the snapshots. That is, using the community structure of each snapshot as input, the method constructs an adjacency matrix M that captures the community assignment relationships between the nodes across all snapshots, and conducts community detection on M. As for change point detection, a few well-known methods take different approaches to the problem. They include GraphScope (Sun et al. 2007), Multi-Step (Aynaud & Guillaume 2011), the generalized hierarchical random graph (GHRG) (Peel & Clauset 2015), and SCOUT (Hulovatyy & Milenković 2016). GraphScope sequentially evaluates each incoming snapshot to see whether its community structure matches the one from the current segment, according to an evaluation criterion derived from the Minimum Description Length Principle. This method also works well for online change point detection, i.e., streaming data. However, the algorithm assumes the nodes can be partitioned into two sets, sources and sinks, and finds the partition within each set. Multi-Step, on the other hand, starts by assuming each snapshot belongs to its own segment; at each iteration, the two snapshots that are most similar (measured by an averaged modularity quantity) are grouped together, similar to a hierarchical clustering approach. GHRG works by first assuming a parametric model for the individual networks and a fixed-length moving window of snapshots, and statistically tests whether a given time point in the window is a change point.
Lastly, SCOUT works by finding the set of change points and community structures that minimizes an objective criterion derived from the Akaike Information Criterion (AIC) (Akaike 1974) or the Bayesian Information Criterion (BIC) (Schwarz 1978) (Hulovatyy & Milenković 2016).

This paper proposes to conduct change point detection and community detection simultaneously using the Minimum Description Length Principle (MDL) (Rissanen 1989, 2007). In short, the detection problem is cast as a model selection problem, in which one selects the number of change points and the community assignments by minimizing an objective criterion. Note that although GraphScope also uses the MDL principle as its objective criterion, its model assumptions differ from the ones made in this paper. Also, unlike many existing papers, this paper provides a thorough analysis of the proposed method via simulated data to assess its accuracy when the ground truth is known, an important validation step. It is specifically shown that, when the underlying model is correctly specified, the proposed method detects change points with very high accuracy. Even when the model is misspecified, the proposed method can still capture the change points, while competitor methods tend to over-estimate the number of change points in this scenario. The rest of the paper is organized as follows. Section 2 formally defines the problem. Sections 3 and 4 introduce the proposed methodology. Section 5 presents an empirical analysis of the proposed methodology and Section 6 concludes.

Notations

Denote a sequence of graphs of length $T$ as $\mathcal{G} = \{G^{(1)}, \ldots, G^{(T)}\}$. Each graph $G^{(t)}$ consists of a vertex set $V^{(t)}$ and an edge set $E^{(t)}$, where the degree of each $v \in V^{(t)}$ is at least 1. Note that there is no restriction on the sizes of the vertex sets, implying that the graphs $G^{(t)}$ and $G^{(t')}$ may have different sizes -- a quite natural assumption for time-evolving networks. For example, in the popular Enron email dataset (Priebe et al. 2005), each graph $G^{(t)}$ represents the email communication pattern between employees over one week. The nodes represent employees of the company, and an edge between two nodes means there is at least one email communication between the two employees within the time frame of the graph. It is possible that some employees have no email connection with the subjects of interest in the data set for some time $t$; these employees will be missing from $V^{(t)}$ and may show up again at another time. Denote the overall node set as $V = \bigcup_t V^{(t)}$, with $|V| = N$. In general, each graph $G^{(t)} \in \mathcal{G}$ can be represented as a binary adjacency matrix $A^{(t)}$ of dimension $N \times N$, where $A^{(t)}_{ij} = 1$ represents a connection between nodes $i$ and $j$, and 0 otherwise. If $|V^{(t)}| < N$, one can simply insert rows and columns of 0 at the appropriate locations so that the row and column arrangements of all matrices $A^{(t)}$ have the same meaning. Note that $\sum_j A^{(t)}_{ij} = 0$ means that no edge is connected to node $i$ (i.e., node $i$ is a singleton). Of interest are the nodes such that $\sum_j A^{(t)}_{ij} \neq 0$, but for simplicity of notation and computation, all adjacency matrices are kept at the same size. As stated in the Introduction, this paper focuses on simple undirected networks, so $A^{(t)}_{ij} = A^{(t)}_{ji} = 1$ if there is at least one connection between nodes $i$ and $j$ at time $t$. The graph is also assumed to have no self-loops, i.e., $A^{(t)}_{ii} = 0$.
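To make the notation concrete, the following sketch builds the common-size binary adjacency matrices described above from per-snapshot edge lists, zero-padding the rows and columns of nodes that are absent from a given snapshot. The function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def padded_adjacency_matrices(edge_lists, all_nodes):
    """Return one N x N symmetric binary matrix per snapshot.

    edge_lists : list of iterables of (u, v) pairs, one per snapshot.
    all_nodes  : the overall node set V (any fixed ordering), with |V| = N.
    Nodes missing from a snapshot simply keep all-zero rows and columns.
    """
    index = {v: i for i, v in enumerate(all_nodes)}
    N = len(all_nodes)
    matrices = []
    for edges in edge_lists:
        A = np.zeros((N, N), dtype=np.int8)
        for u, v in edges:
            if u == v:                      # simple graph: no self-loops
                continue
            i, j = index[u], index[v]
            A[i, j] = A[j, i] = 1           # undirected: symmetric entries
        matrices.append(A)
    return matrices

# Example: three snapshots over the node set {"a", "b", "c", "d"}
snaps = padded_adjacency_matrices(
    [[("a", "b"), ("b", "c")], [("a", "b")], [("c", "d"), ("a", "d")]],
    all_nodes=["a", "b", "c", "d"])
```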
Problem Statement

Suppose the sequence of graphs $\mathcal{G}$ can be segmented into $M + 1$ segments, with the graphs in each segment satisfying some homogeneity property. For $m = 1, \ldots, M + 1$, define the graph segment $\mathcal{G}^{(m)} = \{G^{(t_{m-1})}, \ldots, G^{(t_m - 1)}\}$, using the conventions $t_0 = 1$ and $t_{M+1} = T + 1$. The problem of change point detection in dynamic networks can then be defined as follows:

Problem 2.1. Given a sequence of graphs $\mathcal{G}$, find the locations $t_1, \ldots, t_M$ such that the community structure of each resulting graph segment $\mathcal{G}^{(m)}$ is homogeneous but different from the community structure of any adjoining graph segment. The time points $t_1, \ldots, t_M$ are called change point locations.

It is important to note that, as mentioned above, the number of nodes within each graph can differ, even within the same time segment. However, if a change in node size were itself considered a change in community structure, the segmentation could easily degenerate into $T$ segments of one graph each. Hence a more robust definition of 'change' is needed to prevent overestimating the number of segments. It is possible that some nodes do not appear in all of the graphs in a segment. However, if the original community structure is strong, adding nodes to the existing network can only strengthen the existing communities unless the new nodes introduce a large number of new connections. Similarly, removing certain nodes will not significantly weaken the existing structure unless the removed nodes play a central role in their communities. Hence this is a valid definition of community assignments. Because of this, for simplicity we will write $V^{(t)}$ instead of $\tilde{V}^{(t)}$ in what follows.

Change Point Detection and Community Detection Using MDL

This section describes the modeling procedure for the dynamic network and introduces the proposed methodology for change point and community detection. The statistical model used for the individual networks is presented first.

The Stochastic Block Model

Many statistical models have been proposed to analyze network data, with the Stochastic Block Model (SBM) the most widely used. Below we briefly review the non-degree-corrected SBM. Recall that the adjacency matrix is a symmetric binary matrix, with 1 representing the existence of a connection between two nodes. Given the community assignment vector $c$ and the link probabilities $P_{kl}$ between communities $k$ and $l$, one can model the edges with a Bernoulli distribution:
$$A_{ij} \mid \mathbf{P}, c \sim \operatorname{Ber}(P_{c_i c_j}),$$
where $c_i$ and $c_j$ are the community assignments of nodes $i$ and $j$, and $\mathbf{P}$ is a symmetric matrix with $[\mathbf{P}]_{kl} = P_{kl}$. The standard assumption entails that $P_{kl}$ should be large if $k = l$; that is, if two nodes belong to the same community, there is a high probability of an edge existing between them. This results in denser intra-community connections than inter-community connections. Extending this notation to the segmented setting above gives $A^{(t)}_{ij} \mid \mathbf{P}^{(t)}, c^{(m)} \sim \operatorname{Ber}\big(P^{(t)}_{c_i c_j}\big)$ for any time $t$ that belongs to the $m$th segment. Note that the link probabilities are not assumed to remain the same throughout a given segment. The link probabilities can be estimated via maximum likelihood. Suppose the community assignment $c$ at time $t$ is known, where $t \in [t_m, t_{m+1} - 1]$. The log-likelihood function is then
$$\log L\big(\mathbf{P}^{(t)} \mid A^{(t)}, c\big) = \sum_{i < j} \Big[ A^{(t)}_{ij} \log P^{(t)}_{c_i c_j} + \big(1 - A^{(t)}_{ij}\big) \log\big(1 - P^{(t)}_{c_i c_j}\big) \Big]. \tag{1}$$
Equation (1) gives the representation when the edges are assumed to follow Bernoulli distributions.
Equation (2) aggregates all edges between a given pair of communities into one group:
$$\log L\big(\mathbf{P}^{(t)} \mid A^{(t)}, c\big) = \sum_{k \le l} \Big[ e^{(t)}_{kl} \log P^{(t)}_{kl} + \big(n^{(t)}_{kl} - e^{(t)}_{kl}\big) \log\big(1 - P^{(t)}_{kl}\big) \Big], \tag{2}$$
with $n^{(t)}_{kl}$ the total number of possible edges between communities $k$ and $l$, and $e^{(t)}_{kl}$ the number of observed edges between communities $k$ and $l$. The parameters can then be estimated by finding the $\hat{P}^{(t)}_{kl}$ that maximize Equation (2), namely $\hat{P}^{(t)}_{kl} = e^{(t)}_{kl} / n^{(t)}_{kl}$.

The MDL Principle

Using the SBM as the base model for the graphs, one can write down a complete likelihood for modeling the change points and the community assignments for each segment (call this the segmented time-evolving network). As seen in Section 3.1, the estimation of the link probabilities is trivial if the change point locations and community assignments are given. However, the estimation of the community structures and change points is less straightforward. In terms of community detection, various algorithms and objective criteria have been proposed to solve the problem (see the Introduction). If the change point locations are known, one can easily adapt existing methods to derive the community assignments. The rest of this section applies the MDL principle to derive an estimate of the change point locations as well as the community assignments for each segment.

The MDL principle is a model selection criterion. When applying the MDL principle, the "best" model is defined as the one allowing the greatest compression of the data $\mathbf{A} = (A^{(1)}, A^{(2)}, \ldots, A^{(T)})$; that is, the "best" model enables us to store the data in a computer with the shortest code length. There are several versions of MDL, and the "two-part" variant is used here (see (3)). The first part encodes the fitted model being considered, denoted by $\hat{F}$, and the second part encodes the residuals left unexplained by the fitted model, denoted by $\hat{E}$. Denoting by $\mathrm{CL}_{\hat{F}}(\mathbf{A})$ the code length of the data under the fitted model,
$$\mathrm{CL}_{\hat{F}}(\mathbf{A}) = \mathrm{CL}(\hat{F}) + \mathrm{CL}(\hat{E} \mid \hat{F}). \tag{3}$$
The goal is to find the model $\hat{F}$ that minimizes (3). Readers can refer to Lee (2001) for more examples of how to apply the two-part MDL in different models.

To use (3) for finding the best segmentation as well as community assignments for a given evolving network sequence, the two terms on the right side of (3) need to be calculated. To fit a model for the segmented time-evolving network, one first identifies the change point locations. Once the locations are determined, one can proceed to estimate the community assignments as well as the link probabilities. Denote by $c^{(m)} = (c^{(m)}_1, \ldots, c^{(m)}_{|V^{(m)}|})$ the community assignment for the $m$th segment, and let $\mathcal{C} = \{c^{(1)}, \ldots, c^{(M+1)}\}$. Since $\hat{F}$ is completely characterized by $\mathcal{T} = (t_1, \ldots, t_M)$, $\mathcal{C}$, and $\mathcal{P} = \{\mathbf{P}^{(1)}, \ldots, \mathbf{P}^{(T)}\}$, the code length of $\hat{F}$ can be decomposed into
$$\mathrm{CL}(\hat{F}) = \mathrm{CL}_F(M) + \mathrm{CL}_F(\mathcal{T}) + \mathrm{CL}_F(\mathcal{C}) + \mathrm{CL}_F(\mathcal{P}). \tag{4}$$
According to Rissanen (1989), it requires approximately $\log_2 I$ bits to encode an integer $I$ if its upper bound is unknown, and $\log_2 I_u$ bits if $I$ is bounded from above by $I_u$. Hence $\mathrm{CL}_F(M)$, the code length for the number of change points, translates to $\log_2(M + 1)$, where the additional 1 allows the case $M = 0$ (no change point) to be distinguished from $M \ge 1$. To encode the change point locations $\mathcal{T}$, one can encode the distances between consecutive change points rather than the locations themselves. Once the change points are encoded, one can encode the community structures and link probabilities, i.e., the networks themselves. Recall from Problem 2.1 that the goal is to partition each node set $V^{(m)}$ into $c_m$ non-overlapping communities. Therefore,
$$\mathrm{CL}_F(\mathcal{C}) = \sum_{m=1}^{M+1} \Big[ \log_2 c_m + |V^{(m)}| \log_2 c_m \Big], \tag{5}$$
where the first term encodes the number of communities for the $m$th segment ($c_m \ge 1$), and the second term encodes the community assignment of each node.
Lastly, by Rissanen (1989), it takes $\tfrac{1}{2}\log_2 N$ bits to encode a maximum likelihood estimate of a parameter computed from $N$ observations. Hence,
$$\mathrm{CL}_F(\mathcal{P}) = \sum_{t=1}^{T} \sum_{k \le l} \tfrac{1}{2} \log_2 n^{(t)}_{kl}.$$
To obtain the second term of (3), one can use the result of Rissanen (1989) that the code length of the residuals $\hat{E}$ is the negative of the log-likelihood of the fitted model $\hat{F}$. With the assumption that, given the community structures and link probabilities, each $A^{(t)}_{ij}$ follows a Bernoulli distribution,
$$\mathrm{CL}(\hat{E} \mid \hat{F}) = - \sum_{t=1}^{T} \sum_{i < j} \Big[ A^{(t)}_{ij} \log_2 \hat{P}^{(t)}_{c_i c_j} + \big(1 - A^{(t)}_{ij}\big) \log_2\big(1 - \hat{P}^{(t)}_{c_i c_j}\big) \Big]. \tag{6}$$
Putting these code lengths together with (6), the proposed MDL criterion for estimating the change point locations and community structures is
$$\mathrm{MDL}(M, \mathcal{T}, \mathcal{C}) = \log_2(M + 1) + \sum_{m=1}^{M} \log_2(t_m - t_{m-1}) + \sum_{m=1}^{M+1} \Big[ \log_2 c_m + |V^{(m)}| \log_2 c_m \Big] + \sum_{t=1}^{T} \sum_{k \le l} \tfrac{1}{2} \log_2 n^{(t)}_{kl} + \mathrm{CL}(\hat{E} \mid \hat{F}). \tag{7}$$
The goal is to find the change point locations and community assignments that minimize (7).

Change Point and Community Assignment Search

As pointed out in Section 3.1, the estimates of the link probabilities $\mathcal{P}$ are easy to obtain if the change points $\mathcal{T}$ and community assignments $\mathcal{C}$ are known. However, the estimation of $\mathcal{T}$ and $\mathcal{C}$ is non-trivial. The following describes the procedure for estimating these two quantities, which combine to estimate the segmented time-evolving network.

Community Detection

The procedure for community detection within a given segment of networks is described first. Recall from Problem 2.1 that the goal is to find, for the $m$th segment, an assignment $c^{(m)}$ such that each node in $V^{(m)}$ belongs to exactly one community. However, it is possible that some nodes only appear in certain snapshots within the $m$th segment, so the community search procedure should be robust enough to deal with this. Consider the set of adjacency matrices $\mathbf{A}^{(m)} = (A^{(t_m)}, \ldots, A^{(t_{m+1}-1)})$. These $t_{m+1} - t_m$ matrices can be aggregated by simply adding them up. The resulting matrix forms a super network that overlays all the networks between $G^{(t_m)}$ and $G^{(t_{m+1}-1)}$, and community detection can be conducted over this super network. Since only simple undirected networks are considered, all values larger than 1 in the aggregated adjacency matrix are replaced by 1.

As seen in the Introduction, community detection has been a popular research area in the past few decades, and many fast algorithms have been developed for the task. However, most of these algorithms aim at maximizing the modularity of the network, so they cannot be applied directly here, since the objective function of interest is the MDL criterion. Nonetheless, one can still borrow ideas from the algorithmic portion of these methodologies. The Louvain method of Blondel et al. (2008) is known to be one of the fastest community detection algorithms for static networks. It works as follows. First, each node is assigned to its own community. In the first iteration, each node (in some random order) is moved to a neighboring community if there is a positive gain in modularity; if multiple neighboring communities give a positive gain, the one with the maximum gain is picked. This is repeated for all nodes, possibly multiple times per node, until no modularity gain is achieved. Then the newly formed communities are treated as nodes and the merging procedure is repeated until no further modularity gain is achieved (at this step a neighboring community is a group of vertices that has at least one connection with the current community). This method is fast and suitable for large graphs. However, it may be prone to overestimating the number of communities since it is a bottom-up search method.
Also, the number of communities is usually much smaller than the number of nodes, so it seems unnecessary to initialize N communities for the N nodes in the graph. Instead of a bottom-up search, a top-down algorithm for detecting communities is proposed here. The main idea is to recursively split the network into smaller communities until no further improvement can be achieved. The algorithm starts by randomly assigning each node to one of two communities. In the first iteration, each node (in some random order) is switched to the opposite community if the switch leads to a decrease in the MDL value. This is repeated, possibly multiple times, until no switch decreases the MDL value. Then the same procedure is repeated on each sub-community until no further split can be found. To prevent overestimating the number of communities, a merging step is conducted after the splits: at each iteration, a community is merged with a neighboring community if the merge produces a drop in the MDL value, and the merge with the biggest drop is picked if there are multiple such neighbors. This is repeated for all communities. One can think of this procedure as a top-down search (splitting communities) followed by a bottom-up search (merging communities). The entire procedure can be repeated after the merge step to avoid being trapped in a locally optimal solution.

Notice that since all the segments are assumed to be independent of each other, there is no need to calculate the entire MDL value (7) when conducting the community search. Instead, one can consider the sub-MDL criterion for the $m$th segment,
$$\mathrm{MDL}_m = \log_2 c_m + |V^{(m)}| \log_2 c_m + \sum_{t=t_m}^{t_{m+1}-1} \sum_{k \le l} \Big[ \tfrac{1}{2} \log_2 n^{(t)}_{kl} - e^{(t)}_{kl} \log_2 \hat{P}^{(t)}_{kl} - \big(n^{(t)}_{kl} - e^{(t)}_{kl}\big) \log_2\big(1 - \hat{P}^{(t)}_{kl}\big) \Big], \tag{8}$$
when performing the splitting and merging steps described above. This also means that all the segments can be searched simultaneously, which speeds up computation. Algorithm 1 lays out the community assignment search procedure.

Algorithm 1: Community Detection for the m-th Segment
1. Assign each node to one of two communities; to speed up the initialization, existing methods can be used to identify the two communities.
2. For each node in V^(m) (in random order), switch its community assignment if the value of (8) decreases; repeat until no switch decreases (8).
3. For each community found, repeat steps 1-2 on the corresponding subset of V^(m) (splitting).
4. Merge neighboring communities whenever doing so decreases (8) (merging).
5. Update MDL_m.
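As a concrete illustration of the quantity compared by the split and merge moves, the sketch below evaluates the segment criterion (8) for a candidate assignment, using the block edge counts and the maximum-likelihood link probabilities. The function and symbol names are mine, and this is only a minimal, unoptimized version of what Algorithm 1 requires.

```python
import numpy as np

def segment_mdl(A_segment, c, K):
    """Sub-MDL value (8) of one segment for community assignment c (labels 0..K-1).

    A_segment : list of N x N binary adjacency matrices belonging to the segment.
    The value comprises bits for the number of communities, for each node's label,
    for each estimated link probability, and the negative log2-likelihood of the edges.
    """
    c = np.asarray(c)
    N = len(c)
    bits = np.log2(K) + N * np.log2(K)          # log2(c_m) + |V^(m)| log2(c_m)
    rows, cols = np.triu_indices(N, k=1)
    kk = np.minimum(c[rows], c[cols])           # block label pair of each node pair
    ll = np.maximum(c[rows], c[cols])
    for A in A_segment:
        edges = A[rows, cols]
        for k in range(K):
            for l in range(k, K):
                mask = (kk == k) & (ll == l)
                n_kl = int(mask.sum())          # possible edges between k and l
                if n_kl == 0:
                    continue
                e_kl = int(edges[mask].sum())   # observed edges between k and l
                bits += 0.5 * np.log2(n_kl)     # cost of the MLE of P_kl
                p = e_kl / n_kl
                if 0 < p < 1:                   # Bernoulli code length of the edges
                    bits -= e_kl * np.log2(p) + (n_kl - e_kl) * np.log2(1 - p)
    return bits
```

A split or merge move is accepted whenever it lowers this value for the segment, and because segments are treated as independent, the search can be run on all segments in parallel.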
An exhaustive search over all possible segmentations would guarantee a globally optimal solution, but the computation becomes intractable once T is large (with T snapshots, there are $2^{T-1}$ combinations to loop through). One can use dynamic programming to reduce the computational complexity, but one still needs to search through a large solution space before finalizing a global solution.

Change Point Detection

Both top-down and bottom-up searches are greedy algorithms, and their computation can be complex. For a top-down search, one starts with the entire sequence of T graphs and finds the location $t_1 \in [2, T]$ that minimizes the objective criterion (while also decreasing its value). Then one finds the location $t_2 \in [2, T] \setminus \{t_1\}$ that minimizes the objective criterion (with $t_1$ already in the model), and repeats until no further change point can be found. By doing so, one needs to go through $T - i$ evaluations at the $i$th iteration. A bottom-up search, on the other hand, starts by assuming each location t is a change point and merges the adjacent segments whose merge minimizes the objective function; this is repeated until no further merge can be found. This paper proposes a top-down search for finding the change point locations. However, instead of naively testing each location for the possibility of being a change location, a screening process is first conducted to select a set of candidate change locations. Then each candidate location (in a specific order) is checked to see whether it is a change point or not. The details of the search algorithm are as follows.

The screening process is conducted as follows. First, calculate the difference between each pair of consecutive adjacency matrices. The distance used is the 1-norm of the difference between the two matrices, normalized by the geometric mean of their 1-norms:
$$d_t = \frac{\big\| \operatorname{vec}\big(A^{(t)}\big) - \operatorname{vec}\big(A^{(t-1)}\big) \big\|_1}{\sqrt{\big\| \operatorname{vec}\big(A^{(t)}\big) \big\|_1 \, \big\| \operatorname{vec}\big(A^{(t-1)}\big) \big\|_1}}, \tag{9}$$
where vec(A) is the vector form of A. The idea is that if the community structure between two consecutive networks does not change, then regardless of differences in the link probabilities, the edge pattern should remain roughly the same, and hence the distance should be relatively small. Therefore, a large value of $d_t$ is an indicator that there is a change in the community structure at time t. Set the locations whose distances are above the median of the $d_t$'s as the candidate change locations. This is equivalent to assuming that the maximum number of change points is T/2, which is a reasonable assumption in most situations. Once the candidate locations are determined, order them by their $d_t$ values from largest to smallest. Starting with the first location (denote it by $\tilde{t}_1$), segment the data into two pieces, conduct the community search within each segment, and calculate the MDL value (7). If this value is smaller than the MDL value with no segmentation, set $\tilde{t}_1$ as a change location; otherwise segment the data at $\tilde{t}_2$ and repeat. Every time a change location is found, remove it from the candidate set and reset the search procedure, keeping the previously selected locations in the estimated model. Algorithm 2 lays out the change point search procedure.

Algorithm 2: Change Point Detection in Dynamic Networks
1. Calculate the consecutive distances $d_t$ using (9) for adjacency matrices $A^{(2)}, \ldots, A^{(T)}$.
2. Retain the times whose distance exceeds the median of the $d_t$'s as candidate change locations, ordered by decreasing $d_t$; initialize the set of detected change points $\tau$ as empty.
3. For the current candidate t, segment the network sequence at time t (given the change points already in $\tau$) and conduct community detection with Algorithm 1.
4. Calculate the MDL value (7). If it is smaller than the value without the split at t, add t to $\tau$, remove it from the candidate set, and restart the scan of the remaining candidates; otherwise, merge the consecutive segments at t and conduct community detection with Algorithm 1 (given the change points in $\tau$), then move to the next candidate.
5. Stop when no remaining candidate decreases the MDL value.
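The screening step of Algorithm 2 can be sketched as follows, computing the distances in (9) and returning the candidate change locations ordered by decreasing distance; again, the names are illustrative.

```python
import numpy as np

def screening_distances(A_list):
    """d_t for t = 2, ..., T: the 1-norm of the difference between consecutive
    adjacency matrices, normalized by the geometric mean of their 1-norms."""
    d = []
    for prev, curr in zip(A_list[:-1], A_list[1:]):
        diff = np.abs(curr.astype(int) - prev.astype(int)).sum()
        norm = np.sqrt(prev.sum() * curr.sum())
        d.append(diff / norm if norm > 0 else 0.0)
    return np.array(d)

def candidate_change_locations(A_list):
    """Times whose distance exceeds the median of the d_t's, ordered by decreasing d_t."""
    d = screening_distances(A_list)
    times = np.arange(2, len(A_list) + 1)       # d_t is defined for t = 2, ..., T
    keep = d > np.median(d)
    return times[keep][np.argsort(-d[keep])]
```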
Empirical Analysis

To assess the performance of the proposed methodology, multiple simulation studies were conducted. An application to a real data set is also presented to showcase the practical use of the proposed method.

Simulation

This section focuses on analyzing the performance of the proposed method on synthetic data. Of the four settings compared, three involved networks generated according to the SBM discussed in Section 3.1, with each snapshot independent of the others; the last setting involved networks with correlated edges. Table 1 shows a summary of each setting, and detailed descriptions can be found in the Appendix. Figures 1-4 show the histograms of the estimated change point locations for Settings 1 through 4, respectively. All settings were repeated for 100 trials. As listed in Table 1, three of the settings involved dense networks while the remaining one was sparse. It is seen that the method was not able to correctly identify the number of change points once this restriction was relaxed; as the number of change points is often unknown in real data, it is more reasonable to compare with the results of the automatic selection case (with BIC).

To also evaluate the performance of the proposed community detection algorithm, the normalized mutual information (NMI) was used. In brief, NMI is an evaluation criterion used to assess the quality of clustering results. It is defined as
$$\mathrm{NMI}(\hat{c}, c) = \frac{2\, I(\hat{c}, c)}{H(\hat{c}) + H(c)},$$
where the entropy $H(\cdot)$ and the mutual information $I(\cdot, \cdot)$ are defined as
$$H(c) = -\sum_{l} \frac{b_l}{N} \log \frac{b_l}{N}, \qquad I(\hat{c}, c) = \sum_{k, l} \frac{n_{kl}}{N} \log \frac{n_{kl}\, N}{a_k\, b_l},$$
with $n_{kl}$ the number of nodes assigned to estimated community $k$ whose true community is $l$, $a_k = \sum_l n_{kl}$, and $b_l = \sum_k n_{kl}$. The overall NMI for the sequence of networks is defined as the mean of the individual NMIs, $\frac{1}{M+1} \sum_{m=1}^{M+1} \mathrm{NMI}(\hat{c}^{(m)}, c^{(m)})$. Notice that $\mathrm{NMI}(\hat{c}^{(m)}, c^{(m)})$ ranges between 0 and 1, where 0 means the estimated community structure is a completely random guess, while 1 means it is perfectly matched. Table 2 presents the community detection results of the proposed algorithm, as well as the detection results from SCOUT; the comparison was set up so that the community detection results of the two methods are comparable.

Data Analysis

In this application, the World Trade Web (WTW), also known as the International Trade Network (ITN), is considered. This data set is publicly available from Gleditsch (2002). In brief, it captures the trading flow between 196 countries from 1948 to 2000 and consists of the total amount of imports and exports between each pair of countries in each year. Several papers have been published on analyses of the WTW, including Tzekina et al. (2008), Bhattacharya et al. (2007), Bhattacharya et al. (2008), and Barigozzi et al. (2011). Since the import/export information is given, many of these analyses treated the trade network as a directed weighted network, where the weights represent the amount of goods going from country A to country B. For this analysis, however, since the focus of this paper is on undirected networks, the data set was modified such that an edge exists between two countries if there is some trading between them. As there is data for each year from 1948 to 2000 (53 years), it is straightforward to treat this as a dynamic network. Table 3 shows a summary of the data set. The proposed algorithm detected 5 change points in this data set. As a comparison, the SCOUT algorithm (with BIC used to select the number of change points) was also applied to the data set, and it detected 5 change points as well. The results are listed in Table 4 below.

Table 4. Detected segments for the World Trade Web data.
Proposed method: 1948-1959, 1960-1965, 1966-1974, 1975-1980, 1981-1990, 1991-2000
SCOUT: 1948-1961, 1962-1972, 1973-1980, 1981-1990, 1991-1992, 1993-2000

Only the community assignments of the proposed method are investigated here. Figures 5-10 show the trading communities for the six detected segments. Since multiple communities were detected for each segment, only the top 7 largest communities are analyzed for each time period (the top 7 communities cover a majority of the countries in most cases). For each map, blue denotes the largest community, green the second largest, then yellow, red, pink, orange, and purple for the third to seventh largest communities, respectively. One can see that the largest community consists of all the largest nations in the world, including the US, Canada, China, Russia, and many others. The pattern observed in the first segment can be explained by the lack of data for such an early period, as well as by the fact that most countries were still in their developing phase. Starting from the second segment (1960 to 1965), most countries in Africa started to get involved in trading, but mostly among themselves. One possible event that triggered this behavior was the independence gained by many African countries around 1960.

Conclusion

This paper presented a new methodology for analyzing dynamic network data. By assuming each individual network follows a Stochastic Block Model, an objective criterion based on the Minimum Description Length Principle was derived for detecting change points and community structures in dynamic networks.
Simulations showed promising results for the proposed algorithm, and a data analysis confirmed that the proposed methodology is able to detect major changes.

Appendix

This section provides the details of the simulation settings.

Setting 1: Table 5 lists the specification for this setting. T = 30 for this and all following settings. The number of nodes for each snapshot ranged between 280 and 300. The community sizes were specified according to the ratios listed in the column 'Community Size Ratio': the ratios (1/3, 1/3, 1/3) mean there are three communities, each containing roughly 1/3 of the total nodes of the graph. The link probabilities are listed in the 'Link Probability' column, with P_W representing the probability of an edge existing within a community, and P_B the probability of an edge existing between two communities. Note that these quantities satisfy the assumption P_W > P_B. For this setting, all networks within the same segment had the same within- and between-community link probabilities. The true segments are listed in the column 'Segment Number'. The final rows of Table 5, for example, read:

Segment Number | Snapshots | Community Size Ratio | Link Probability | Number of Nodes
5 | 23-28 | 1/5, 1/5, 1/10, 3/10, 1/5 | P_W = 0.80, P_B = 0.15 | 280-300
6 | 29-30 | 3/10, 2/5, 3/10 | P_W = 0.90, P_B = 0.10 | 280-300

Setting 2: The previous setting assumed that the link probabilities remain the same within each segment. However, this is not necessarily a valid assumption for real-world data. This setting therefore gives each graph its own intra- and inter-community link probabilities. For all graphs, the intra- and inter-link probabilities followed Uniform distributions: P_W ~ U(0.70, 0.95) and P_B ~ U(0.05, 0.3). The rest of the specifications are listed in Table 6.

Setting 3: Both settings considered so far consist of dense networks. Often, however, observed networks have a sparse structure. Instead of a high P_W value, this setting used P_W ~ U(0.35, 0.40) and P_B ~ U(0.05, 0.10). The rest of the specifications are listed in Table 7.

Setting 4: The last setting involved networks with correlated edges. Such networks have been studied by Saldaña et al. (2017); in their paper, the parameter ρ controls the correlation between network edges. The correlation used here was ρ = 0.7, with a dense setting. The specifications of this setting are listed in Table 8.
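For readers who wish to reproduce experiments of this kind, the following sketch generates a dynamic network in the spirit of the settings above: each snapshot is drawn independently from an SBM whose community sizes and link probabilities are constant within a segment. The field names and the two-segment example are illustrative; the full specifications are those given in Tables 5-8, and for simplicity every snapshot uses a fixed node count rather than a count varying between 280 and 300.

```python
import numpy as np

def generate_piecewise_sbm(segments, n_nodes=300, rng=None):
    """Generate snapshots from a piecewise-constant SBM (illustrative helper).

    segments : list of dicts with keys 'length', 'ratios', 'p_w', 'p_b'.
    Returns the list of adjacency matrices and the true labels per snapshot.
    """
    rng = np.random.default_rng(rng)
    snapshots, labels = [], []
    for seg in segments:
        ratios = np.asarray(seg["ratios"], dtype=float)
        sizes = np.floor(ratios / ratios.sum() * n_nodes).astype(int)
        sizes[-1] = n_nodes - sizes[:-1].sum()            # make the sizes add up
        c = np.repeat(np.arange(len(sizes)), sizes)
        K = len(sizes)
        P = np.full((K, K), seg["p_b"])
        np.fill_diagonal(P, seg["p_w"])                   # P_W on the diagonal
        probs = P[c[:, None], c[None, :]]
        for _ in range(seg["length"]):
            A = (rng.random((n_nodes, n_nodes)) < probs).astype(np.int8)
            A = np.triu(A, 1)
            A = A + A.T                                   # symmetric, no self-loops
            snapshots.append(A)
            labels.append(c.copy())
    return snapshots, labels

# A two-segment toy sequence loosely echoing the last rows of Table 5
snaps, truth = generate_piecewise_sbm(
    [{"length": 6, "ratios": [1/5, 1/5, 1/10, 3/10, 1/5], "p_w": 0.80, "p_b": 0.15},
     {"length": 2, "ratios": [3/10, 2/5, 3/10], "p_w": 0.90, "p_b": 0.10}],
    rng=0)
```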
Modeling the oxygen transport to the myocardium at maximal exercise at high altitude

Abstract

Exposure to high altitude induces a decrease in oxygen pressure and saturation in the arterial blood, which is aggravated by exercise. Heart rate (HR) at maximal exercise decreases as altitude increases during prolonged exposure to hypoxia. We developed a simple model of myocardial oxygenation in order to demonstrate that the observed blunting of maximal HR at high altitude is necessary for the maintenance of normal myocardial oxygenation. Using data from the available scientific literature, we estimated the myocardial venous oxygen pressure and saturation at maximal exercise in two conditions: (1) with actual values of maximal HR (decreasing with altitude); (2) with sea-level values of maximal heart rate, whatever the altitude (no change in HR). We demonstrated that, in the absence of autoregulation of maximal HR, myocardial tissue oxygenation would be incompatible with life above 6200 m to 7600 m, depending on the hypothesis concerning a possible increase in coronary reserve (increase in coronary blood flow at exercise). The decrease in maximal HR at high altitude could be explained by several biological mechanisms involving the autonomic nervous system and its receptors on myocytes. These experimental and clinical observations support the hypothesis that there exists an integrated system at the cellular level, which protects the myocardium from a hazardous disequilibrium between O2 supply and O2 consumption at high altitude.

for oxygen diffusion through the alveolo-capillary barrier; (2) due to a lower arterial O2 content, peripheral O2 extraction increases and the PO2 of the venous blood coming back to the lungs is lowered, rendering proper reloading of O2 in the capillaries more difficult (Mollard et al., 2007; Van Thienen & Hespel, 2016). The myocardium is very sensitive to O2 availability, especially when energetic demand is high, such as during exercise. Therefore, the myocardium is subjected to a high constraint in terms of O2 availability when exposed to both hypoxia and intense exercise. In this respect, if the maximal work of the myocardium depends on mitochondrial O2 content, and the latter itself follows the variation of venous PO2 (Gnaiger et al., 1995; Sutton et al., 1988), then we can assume that myocardial venous PO2 is a valuable index of cardiac O2 consumption, even if the relationship is unlikely to be linear. Paradoxically, in alpinists exercising in extreme conditions above an altitude of 8000 m with an arterial PO2 of around 35 mmHg, no cardiac failure, coronary insufficiency, angina pectoris, or myocardial infarct has ever been reported (Mallet et al., 2021; Reeves et al., 1987). In parallel, heart rate at high altitude, although increased at submaximal exercise for any given workload, is greatly reduced at maximal exercise (Richalet, 2016), thereby protecting the myocardium against excessive energy consumption in conditions of low O2 availability. An important series of studies in animals and humans has been performed to explain this decrease in maximal heart rate, leading to the hypothesis of a downregulation of beta-adrenergic receptors in the myocardium during prolonged exposure to hypoxia, together with an increase in parasympathetic influence (Antezana et al., 1994; Boushel et al., 2001; Favret et al., 2001; Hartley et al., 1974; Kacimi et al., 1993; León-Velarde et al., 2001; Richalet, Mehdioui, et al., 1988; Siebenmann et al., 2017; Voelkel et al., 1981).
This modulation of cardiac receptors would reduce the chronotropic response to the hypoxia-induced adrenergic activation and protect the myocardium in these extreme conditions (Richalet, 2016). The present study aims to develop a model of O2 transport in the myocardium at exercise in hypoxia in acclimatized subjects, in order to demonstrate that the decrease in maximal heart rate at high altitude is necessary for the survival of myocardial tissue in these extreme conditions.

Model description

Monitoring the level of oxygenation of the myocardial tissue would require measuring PO2 within the tissue, which is not readily feasible in humans exercising in altitude conditions. Therefore, we aimed to determine an alternative method that would give an indirect measure of tissue and mitochondrial oxygenation, represented by myocardial venous blood PO2. A model of O2 transport to the myocardium is given in Figure 1. Along the myocardial capillary, blood PO2 progressively decreases from the arterial to the venous end while O2 diffuses into the tissue. We can assume that end-capillary PO2 is in equilibrium with tissue PO2; therefore venous PO2, equal to end-capillary PO2, is a reliable substitute for tissue PO2 (Gnaiger et al., 1995; Herrmann & Feigl, 1992; Rubio & Berne, 1975; Sutton et al., 1988). The objective is therefore to calculate myocardial venous PO2, a marker of myocardial tissue oxygenation, as a function of altitude in the condition of maximal exercise.

Determinants of myocardial tissue PO2

Myocardial tissue PO2 is the result of O2 consumption and O2 availability. Oxygen consumption is determined by the cardiac mechanical power of the left and right ventricles ($\dot{W}_{LV}$ and $\dot{W}_{RV}$), which depends on heart rate (HR), stroke volume (SV), and the mean ejection pressure of each ventricle, in the aorta and in the pulmonary artery (PejAo and PejPa, respectively) (Opie, 1991):
$$\dot{W}_{LV} + \dot{W}_{RV} = \mathrm{HR} \times \mathrm{SV} \times (\mathrm{PejAo} + \mathrm{PejPa}), \tag{1}$$
so that myocardial O2 consumption is proportional to HR × SV × (PejAo + PejPa) through an energetic equivalent. O2 availability, in turn, is the product of myocardial blood flow $\dot{Q}$, hemoglobin concentration [Hb], the O2-binding capacity of hemoglobin, and the arterio-venous difference in O2 saturation (Sa − Sv). Equating consumption and availability and solving for heart rate gives
$$\mathrm{HR} = \frac{\dot{Q} \times [\mathrm{Hb}] \times (\mathrm{Sa} - \mathrm{Sv})}{A}, \tag{2}$$
where A lumps together the energetic equivalent, the stroke volume, the ejection pressures, and the O2-binding capacity of hemoglobin. Let us write this equation for heart rate at maximal exercise in normoxic (mn) and hypoxic (mh) conditions (Equations (2) and (3) with the corresponding subscripts) and take the ratio HRmh/HRmn:
$$\frac{\mathrm{HRmh}}{\mathrm{HRmn}} = \frac{\dot{Q}\mathrm{mh}}{\dot{Q}\mathrm{mn}} \times \frac{[\mathrm{Hb}]\mathrm{mh}}{[\mathrm{Hb}]\mathrm{mn}} \times \frac{\mathrm{Samh} - \mathrm{Svmh}}{\mathrm{Samn} - \mathrm{Svmn}} \times \frac{A\mathrm{mn}}{A\mathrm{mh}}. \tag{4}$$
In order to estimate HRmh as a function of HRmn, we need to evaluate the changes induced by hypoxia in the ratios appearing in Equation (4). First, the ratio $\dot{Q}\mathrm{mh}/\dot{Q}\mathrm{mn}$ is the ratio of myocardial blood flow at maximal exercise in hypoxia to that in normoxia, that is, the "coronary reserve" that can be mobilized in hypoxia. Although there are no data in the literature above 4500 m, it is likely that coronary reserve is near maximal in normoxia and can hardly increase in hypoxia (Wyss et al., 2003); therefore, this ratio is close to unity. In a second part of the study, we evaluate the possible influence of a substantial increase in coronary reserve (see below). Second, the ratio [Hb]mh/[Hb]mn represents the intensity of the erythropoiesis induced by prolonged exposure to high altitude. It is 1 in acute hypoxia and increases with acclimatization: for example, if [Hb] is 15 g/dl in normoxia and rises to 20 g/dl in prolonged hypoxia, this ratio will be 1.33. Third, the ratio (Samh − Svmh)/(Samn − Svmn) represents the change in the arterio-venous difference in O2 saturation at maximal exercise from normoxia to hypoxia. We know from the literature that Samn is normally around 98% and that Svmn is around 30%, so that the arterio-venous difference in saturation in normoxia is around 68% (Heiss et al., 1976; Richalet et al., 1981). Altitude-induced changes in arterial O2 saturation at maximal exercise are known from the literature.
However, myocardial venous O2 saturation at maximal exercise (Svmh) has never been measured. Finally, the ratio Amh/Amn depends on the ratio of energetic equivalents, the ratio of stroke volumes, and the ratio of ejection pressures. Although no data are available, the energetic equivalent is probably not modified by altitude, unless profound changes in substrate utilization occur in hypoxia. Stroke volume is marginally modified in hypoxia: while a 10% decrease has been measured at rest, its value at maximal exercise at altitude (7620 m) has been estimated at 86% of its sea-level value (Reeves et al., 1987; Sutton et al., 1988). Mean aortic pressure at exercise does not consistently increase at high altitude, while mean pulmonary pressure increases through pulmonary vasoconstriction (Boussuges et al., 2000). The sum of mean aortic and pulmonary pressures has been estimated to go from 153 mmHg at sea level to 150, 169, and 157 mmHg at 6100 m, 7620 m, and 8840 m, respectively (Sutton et al., 1988). Altogether, the ratio Amh/Amn probably stays around unity, since a decrease in stroke volume would compensate for an increase in ejection pressures (Stembridge et al., 2016; Sutton et al., 1988). Finally, if we summarize our first assumptions (no change in coronary reserve and compensation between the variations of ejection volumes and pressures), we can write
$$\frac{\dot{Q}\mathrm{mh}}{\dot{Q}\mathrm{mn}} \approx 1 \quad \text{and} \quad \frac{A\mathrm{mh}}{A\mathrm{mn}} \approx 1. \tag{5}$$
Therefore, combining Equations (4) and (5) and estimating Samn − Svmn at 68% (see above), we can calculate Svmh as a function of Samh as
$$\mathrm{Svmh} = \mathrm{Samh} - 68 \times \frac{\mathrm{HRmh}}{\mathrm{HRmn}} \times \frac{[\mathrm{Hb}]\mathrm{mn}}{[\mathrm{Hb}]\mathrm{mh}}. \tag{6}$$
Samh can be estimated as a linear function of altitude by regression from our data (Table 1, Figure 2) (Equation (7)). Equation (6) then allows calculating myocardial venous O2 saturation at maximal exercise in various altitude conditions if arterial O2 saturation, heart rate, and hemoglobin concentration are known. From O2 saturation (SO2), we can estimate O2 pressure (PO2) using a standard equation of the oxyhemoglobin dissociation curve (Dash et al., 2016) and an estimated venous pH of 7.32. Therefore, we reach our main objective: estimating venous tissue O2 pressure at maximal exercise at various altitudes and evaluating the influence of maximal heart rate on tissue oxygenation.

Summary of main assumptions

In order to build the present model, we made the following assumptions:
• There is no significant increase in coronary reserve at high altitude (in a first approach).
• The arterio-venous difference in oxygen saturation in normoxia equals 68%.

Data from the literature

In order to feed our model, we reviewed all available studies in the literature that simultaneously reported values of heart rate, hemoglobin concentration, and arterial O2 saturation at maximal exercise at altitudes above 4000 m. Studies of prolonged exposure to hypoxia (>3 days) were included; studies of acute hypoxia were excluded. The first historical values come from the "International High Altitude Expedition to Chile" in 1935 (Christensen & Forbes, 1937). Values are presented in Table 1.
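As a simple numerical illustration of Equation (6) and the saturation-to-pressure step, the sketch below computes Svmh from the arterial saturation, the heart rate ratio, and the hemoglobin ratio, then converts the result to a PO2. The paper relies on the Dash et al. (2016) dissociation model at pH 7.32; here a plain Hill curve with assumed parameters (P50 = 26.8 mmHg, n = 2.7) is used as a stand-in, and the input numbers are purely illustrative rather than the Table 1 values.

```python
import numpy as np

def venous_saturation(samh, hr_mh, hr_mn, hb_mh, hb_mn, av_diff_normoxia=68.0):
    """Myocardial venous O2 saturation (%) at maximal exercise in hypoxia, Equation (6):
    Svmh = Samh - (Samn - Svmn) * (HRmh / HRmn) * ([Hb]mn / [Hb]mh),
    with the normoxic arterio-venous saturation difference taken as 68%."""
    return samh - av_diff_normoxia * (hr_mh / hr_mn) * (hb_mn / hb_mh)

def saturation_to_po2(so2_percent, p50=26.8, hill_n=2.7):
    """Convert an O2 saturation (%) to PO2 (mmHg) with a Hill curve.
    P50 and the Hill coefficient are assumed stand-ins for the Dash et al. (2016) model."""
    s = np.clip(so2_percent / 100.0, 1e-6, 1 - 1e-6)
    return p50 * (s / (1.0 - s)) ** (1.0 / hill_n)

# Illustrative example (hypothetical values, not those of Table 1):
sv = venous_saturation(samh=70.0, hr_mh=140, hr_mn=190, hb_mh=20.0, hb_mn=15.0)
print(round(sv, 1), "%", round(saturation_to_po2(sv), 1), "mmHg")
```

Running the same calculation with hr_mh equal to hr_mn reproduces the "no autoregulation" scenario discussed in the Results.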
Role of coronary reserve

Very few studies are available on coronary reserve at maximal exercise, especially at high altitude. Wyss and coworkers found no significant increase in acute hypoxia (4500 m) (Wyss et al., 2003), whereas studies by Kaufmann and colleagues have shown that it may increase by 20% at 4559 m (Kaufmann et al., 2008). To our knowledge, no value is available at higher altitudes. However, we evaluated how our model is modified if coronary reserve at maximal exercise is assumed to increase from sea level to high altitude. If we suppose that the minimal value of myocardial venous O2 saturation compatible with adequate O2 supply to the myocardium is 10% (Goodwill et al., 2017), we can calculate from Equations (4) and (7) the maximal altitude (max-Alt) compatible with this minimal O2 saturation as a function of an estimated percentage increase in coronary reserve at maximal exercise (ΔQhn) from sea level to a given altitude.

RESULTS

Using Equation (6) and Table 1, we calculated Svmh in two scenarios: (1) using the actual values of HRmh observed in the studies quoted in Table 1; (2) considering that there is no decrease in HRmh at altitude, so that the ratio HRmh/HRmn equals 1. Results are shown in Figure 3. Under the second hypothesis of no decrease in maximal heart rate at altitude, venous O2 saturation decreases with altitude and becomes negative above 8000 m, a condition that is not compatible with life. Similarly, values of venous PO2 become negative around 8000 m (Figure 4). In contrast, under the first hypothesis, there is only a slight decrease in venous saturation and pressure, far less pronounced than under the second hypothesis (Figures 3 and 4). Figure 5 shows that if we suppose that coronary reserve at maximal exercise is already maximal at sea level, the maximal reachable altitude compatible with myocardial euoxia is around 6200 m in the case of no regulation of maximal heart rate. To reach the summit of Mount Everest without a decrease in maximal heart rate, the increase in coronary reserve would have to be as high as 44.5%.

DISCUSSION

The present model was constructed from the physiological data available in the literature. However, as expected, very few measurements are available in humans in these extreme conditions of exercise and altitude, so we had to make some reasonable assumptions. To reduce the uncertainty of these assumptions, future studies may include measurements of myocardial blood flow, cardiac venous and mitochondrial PO2 at maximal exercise, both at sea level and at high altitude. Let us reconsider the above assumptions and estimate the effects on the results if some of them do not hold. First, arterial hypoxemia is probably the most powerful stimulus for coronary vasodilation, either directly or through active metabolites such as adenosine, NO, or prostaglandins. However, hypoxia-induced vasodilation is limited (coronary reserve). If myocardial blood flow at maximal exercise can increase significantly at high altitude, let us suppose that the maximal value of the ratio Q̇mh/Q̇mn is 1.2 (a 20% increase), as previously suggested (Kaufmann et al., 2008). In that case, the maximal reachable altitude would be around 7600 m (Figure 5). The minimal increase in coronary reserve required to reach the summit of Mount Everest (8848 m) would be 44.5%, which is incompatible with our present understanding of the regulation of myocardial blood flow and adequate myocardial oxygenation.

Figure 3. Calculated values of myocardial venous O2 saturation (SvO2) at maximal exercise as a function of altitude in prolonged exposure to hypoxia. Open squares: values calculated using data from the literature (Table 1) with the actual value of maximal heart rate (decreasing from sea level). Black triangles: values re-calculated using the same data but with a value of maximal heart rate at altitude identical to the sea-level value.
Second, if the increase in ejection pressures largely exceeds the decrease in stroke volume, the conditions for myocardial oxygenation would be worse, as inferred from Equation (4). Conversely, if pressures do not change and stroke volume decreases markedly, the conditions for oxygenation would be better, but this hypothesis is incompatible with the values of ejection pressures and volumes available in the literature (Naeije, 2010; Stembridge et al., 2016; Sutton et al., 1988).

From the present modeling study, based on measured values from the literature, we suggest that the hypothesis of a preservation of maximal heart rate at high altitude at its sea-level value would necessarily lead to values of myocardial tissue PO2 incompatible with viable myocardial oxygenation. The alternative hypothesis of a mechanism limiting heart rate at exercise in hypoxic conditions therefore appears realistic (Figure 2). We hypothesize that cardiac chronotropic function could be controlled by a local mechanism linked to myocardial PO2 (White et al., 1995). Several pathways have been mentioned in the literature. A downregulation of the adrenergic system has been shown in prolonged hypoxia, in both humans and animal models. Adrenergic activation is well documented in acute and prolonged hypoxia (Antezana et al., 1994; Richalet et al., 1990), but the response to this activation is blunted, as shown by a lower heart rate for a given value of plasma norepinephrine at exercise (Antezana et al., 1994; Richalet, Mehdioui, et al., 1988) or for a given dose of infused isoproterenol (Richalet, Larmignat, et al., 1988). In parallel, although chronic exposure to 3500 m triggers a long-term reduction of vagal tone at rest (Ponchia et al., 1994; Siebenmann et al., 2017), the parasympathetic system may be activated, as shown by the restoration of heart rate at exercise after infusion of a muscarinic blocker (Bogaard et al., 2002; Boushel et al., 2001; Hartley et al., 1974).
In a model of rats exposed to prolonged hypoxia, the density of beta-adrenergic receptors has been shown to decrease while, conversely, the density of muscarinic receptors increases (Kacimi et al., 1992, 1993; Voelkel et al., 1981). The complex pathway connecting adrenergic, muscarinic, and adenosinergic receptors to adenylate cyclase in the cardiomyocyte is modified by exposure to hypoxia: the activity of the Gs protein is reduced while the expression of the Gi protein is enhanced, both phenomena leading to a blunting of adenylate cyclase activity and a reduced chronotropic function (Fowler et al., 1986; Kacimi et al., 1995; León-Velarde et al., 2001; White et al., 1995). Moreover, extensive evidence exists concerning the role of downregulation of adrenergic receptors in cardiac failure, another representative condition of imbalance between cardiac oxygen supply and consumption (Hamdani & Linke, 2012; Soltysinska et al., 2011). The heart is not the only organ in which these desensitization mechanisms appear in hypoxia. Fat cells also show a decreased response to adrenergic activation in prolonged hypoxia (de Glisezinski et al., 1999). Renal handling of calcium is subject to a down-regulation of parathormone effects in hypoxia (Souberbielle et al., 1995). Similarly, growth hormone production is subject to a down-regulation of its specific receptor (Richalet et al., 2010). Lactate release by the muscle could be modulated by a down-regulation of beta-receptors (Reeves et al., 1992). The common element in all these signaling pathways seems to be receptors regulated by a G-protein complex (Hamdani & Linke, 2012; Richalet, 2016).

| CONCLUSION

Altogether, there appears to exist an integrated system at the cellular level that protects the myocardium from a hazardous disequilibrium between O2 supply and O2 consumption at high altitude. This system would fully explain the decrease in heart rate at maximal exercise at high altitude. This autoregulation of O2 supply in the myocardium efficiently protects this vital organ against myocardial ischemia and its potentially serious clinical consequences (Richalet, 1997, 2016). Simple modeling of biological mechanisms may help improve the understanding of regulatory systems in complex environmental conditions. The present work offers significant advances in the knowledge of physiological adaptation to stressors such as hypoxia. It is a remarkable example of the autoregulation of a vital organ submitted to a severe metabolic challenge, contributing to an overall process of homeodynamics (Hermand et al., 2021; Richalet, 2021). Future studies may include measurements of myocardial blood flow, cardiac venous, and mitochondrial PO2 at maximal exercise, both at sea level and at high altitude, to validate and refine our model.

CONFLICTS OF INTEREST
None.

AUTHOR CONTRIBUTIONS
Both authors contributed to data management and writing of the paper.
Neurosciences and Philosophy of Mind: A Reductive Interpretation of the “Mirror Neurons System” (MNS) The first group of reflections I want to develop is about the way in which the mind/body problem is placed within the sciences that, in virtue of a well-established convention, are unified under the name of “neurosciences”. I turn now to a second set of considerations that enter into the merits of mind/body problem. In order to this problem, like many other scholars, I support a thesis to which I give the name of “ontological monism”. From the ontological point of view, mental activity must necessarily (almost under an irrefutable postulate) be thought as a product of the work of the brain. The two neurobiological and mental conceptual systems may come into relation only if we place some ad hoc assumptions which work as a bridge between the two systems, allowing to relate psychological knowledge with neurobiological one. Another group of preliminary reflections concerns the distinction between possession and use of mental functions. In relation to higher mental functions (perception, attention, memory, language, thought, aimed behavior, etc.) I think very useful, both for theoretical investigation and for clinical distinction, between possession and use of the function. One of the richest and most exciting books in reporting experiments and implications arising from this important discovery is Rizzolatti and Sinigaglia (2006). The central argument around which the seven chapters of the book are articulated is that «the brain that acts is also and above all a brain that understands» (Ib., p.3). MNs, or “cerebral cells of empathy”, as they are sometimes called, are in these last years the focus of an important debate between many scientists, namely V.Gallese and A.Caramazza. Pascolo, anticipating in a study of 2008 the criticism of the research group of Caramazza about the existence or, at least, the cognitive role played by mirror neurons, criticized in a radical way the theories of Rizzolatti and his collegues at the University of Parma about the existence and the role of mirror neurons.P. Jacob underlies the fact that MNs are not able to explain all the cognitive functions which are necessary to make possible the understanding of intentions in other people and the complex phenomenon of empathy.L. Boella, Professor of Moral Philosophy at the University of Milan, states that the discovery of mirror neurons has certainly contributed to the actual and popular success of neurosciences. From Boella’s point of view this success has been favoured by the special evidence and simplicity of the result of this discovery. By this way MNs discovery has spread through many disciplines, being interpreted beyond the specific context of the experimental research. MN activity, on my view, is more closely related to understanding goal-directed behavior than intentional actions. Now I’m going to show the theoretical concordance of this discovery with a reductive theory of mind, and particularly with the supervenience reductive theory of mind claimed by Kim (1993, 1996, 1998, 2005). The Multi-layered Building of Neurosciences The first group of reflections I want to develop is about the way in which the mind/body problem is placed within the sciences that, in virtue of a well-established convention, are unified under the name of "neurosciences". The word "neurosciences" refers to a broad spectrum of sciences which constitute a multi-layered building, with its foundations, with low and high levels. 
I will use this metaphor of the building to express my thoughts. Starting from the lowest level, we find, in ascending order (from the ground to the upper floors), sciences such as genetics, embryology, molecular biology, microscopic and macroscopic anatomy of the nervous system, neurophysiology, the biochemistry of brain processes, and so on. It is in these disciplines that neuroscientific knowledge and discoveries are evolving, including, most recently, the much debated discovery of the so-called "mirror neurons". Knowledge of the brain and nervous system began to develop during the nineteenth century, and then over the last century, and particularly in recent decades, it has progressed by leaps and bounds. The knowledge and discoveries in neuroscience I refer to are characterized by two important factors: 1) the use of highly sophisticated technologies made possible by research grounded in the strong sciences, such as physics and chemistry; I allude to instruments such as Transcranial Magnetic Stimulation (TMS), functional Magnetic Resonance Imaging (fMRI), ElectroEncephaloGraphy (EEG), etc.; 2) the systematic use of the rigorous testing procedures of the natural sciences, which usually exclude any possible ambiguity. From the point of view of the nature and reliability of neuroscientific knowledge, in these key disciplines the mind/body problem does not pose any particular epistemological or methodological difficulties. At this level of research the assumption is taken for granted that mental activity depends on the work of the brain, and this assumption does not influence the investigation of the formation, morphology, and functioning of the brain itself. Let me explain: the problem of the nature of mental activity does not enter the field of inquiry in the study of the genetics, anatomy, or physiology of the brain. From their point of view, it is a problem that concerns speculative philosophy rather than scientific research, or at most only marginally. Things start to get complicated if we continue to climb the neurosciences building toward the higher and top floors. Here we meet disciplines closely related to the earlier ones, genetics, anatomy, and physiology. They are based on them and yet are very different from the epistemological point of view, namely in terms of how their knowledge is acquired. At the top of the building we meet the neurobiology of cognition and emotion, neuropsychology, cognitive psychology, artificial intelligence, philosophy of mind, linguistics, etc. Scientists and scholars engaged in research at the top of the neurosciences building do not deal with the important "small things" of our brain, but with its high functions and phenomena. Making use of the words of A. Lurija [1], they deal with the higher cortical functions: perception, attention, memory, recognition, thought, consciousness, emotions, the ability to plan and implement aimed behaviors. They would also deal with the realm of moral and artistic creativity. The mind/body problem clearly emerges for these disciplines, not only as a speculative problem, but as a practical issue that guides theoretical investigation and clinical practice.

[1] A. R. Lurija was a physician, psychologist, and sociologist in the Soviet Union. He was a disciple and collaborator of Lev Vygotskij, one of the founders of the historical-cultural school and of neuropsychology. The architecture of brain functions that Lurija described consists of three major systems or functional units: 1) the first regulates the sleep-wake cycle and modulates cortical activity in attention, the selection of information, and the perception of emotions; 2) the second has its primary function in perception, analysis, and memory, and involves the temporal, parietal, and occipital cortices; 3) the third provides for the regulation, modulation, and control of voluntary actions, involving the frontal and prefrontal cortical and motor areas, the cerebellum, and the deep nuclei. These functional units are not, according to Lurija and to Vygotskij as well, genetically determined, but ontogenetically determined, through the pressure of historical and cultural factors, and therefore they take on different characteristics depending on the periods and contexts of history and human society.
I will try to explain this briefly. In theoretical research, as in clinical work, the neurosciences of the upper floors are interested in topics such as memory, thinking, emotions, and so on. It is now clear that in order to study these things scientifically you should know what kind of things they are. For example, to study the brain bases of memory, you need a reliable scientific concept of memory. In general, when we approach the neurosciences from the point of view of the higher mental functions, the mind/body problem acquires a clear theoretical and practical importance for the simple reason that the higher mental functions do not lend themselves to strictly scientific inquiry, as happens, on the contrary, for disciplines such as genetics, embryology, anatomy, and so on. The question "why are the higher functions not amenable to scientific treatment?" opens a conversation without end. I limit myself to a banal observation: mental functions are objects that I would call "multi-purpose". In fact, they lend themselves to being treated and conceptualized in different ways, through different linguistic and conceptual systems. There is no universal criterion for describing mental functions. Various theories of the mind and its functions are sometimes different, or even seemingly incompatible, perhaps because they focus on different aspects or speak different languages. We are probably faced with an apparent paradox for scientific inquiry.

The Ontological Monism

I turn now to a second set of considerations that enter into the merits of the mind/body problem. With regard to this problem, like many other scholars, I support a thesis to which I give the name of "ontological monism". We may explain it in the following way. From the ontological point of view, namely from the perspective of how things are in themselves, regardless of how we know them, the thesis of ontological monism claims that mental activities and behaviors are products of the organism, and in particular of the central nervous system (CNS). The idea of a mental activity which takes place in the absence of a corresponding brain activity is unthinkable; it is a "material nonsense", as Husserl [2] would say. The physician and philosopher Pierre Cabanis [3], a member of the movement of the late Enlightenment idéologues, liked to say that as the stomach secretes gastric juice, so the brain secretes thought.

[2] E. G. A. Husserl was an Austrian-German philosopher and mathematician, founder of phenomenology, and a member of the School of Brentano. Phenomenology has influenced much of European culture of the twentieth century and beyond. In addition to M. Scheler, it had a profound influence on existentialism and on M. Heidegger's philosophy. Finally, Husserl's thinking has indirectly influenced a certain part of today's cognitive science and philosophy of mind (for example, according to H. Dreyfus, Husserl can be considered the "father of contemporary research in cognitive psychology and artificial intelligence"; see H. Dreyfus (ed.), Husserl, Intentionality and Cognitive Science, Bradford/MIT Press, 1982).

[3] P. J. G. Cabanis (1757-1808) was a French physician, physiologist, and philosopher. According to Cabanis, our formation is guided by the ideas of our organic sensibility, which also direct the activities of our organs and thus of the living being as a whole. From the observation of pathological conditions, and of the effects of narcotics and the psychological states associated with them, he presented our thoughts as the physiological results of perception by an appropriate organ, the brain. In this way, Cabanis connected instinct to the material structure of every living being, just as every organ has its own predisposition to perform this or that specific task in the body.
This statement contains, in a provocative version, the essence of the ontological monism we are arguing for here. If there is no brain function, there can be no perceptions, thoughts, emotions, or other mental activities. Before concluding with the ontological point of view, I will try to deepen its philosophical meaning. The idea of a mental activity which unfolds on its own, in the absence of a brain function which produces it, cannot be the object of scientific thought. Similarly, and for the same reasons, we cannot think that a chair is able to speak or a toy to feel emotions. These things can happen only in fairy tales. They can be produced by the imagination, and therefore as images, sometimes very seductive, but nothing more. Theology and metaphysics take the liberty of believing in the existence of an immaterial soul, capable of thinking and feeling even after its separation from the body. The metaphysical realm of images and symbols is as compelling as it is far from scientific thought. In other words, the theory of an ontological dualism, in the manner of Descartes [4] or, in our times, of John Eccles [5] and a few other neuroscientists and philosophers, is not the product of a good way of reasoning, even when cloaked in scientific language; rather, it is generated by a kind of science fiction capable of producing images and concepts that are certainly seductive, but not rational.

The Epistemological Dualism

The third set of reflections concerns epistemological dualism. It is a thesis in which I firmly believe and which, in different versions, is supported by many scholars. From the ontological point of view, as I argued before, mental activity must necessarily (almost as an irrefutable postulate) be thought of as a product of the work of the brain. Which is to say that it is unthinkable that a cognitive or phenomenal mental process or quale (reasoning, perception, intention, emotion, etc.) occurs in the absence of an adequate brain activity which generates it. But if we change the point of view, moving for a moment from the objective standpoint to the subjective one, things change abruptly and radically, because we are "dualist beings", in spite of the irrefutable ontological monism which identifies mental states with brain states.

[4] R. Descartes (1596-1650) was a French philosopher and mathematician, considered the founder of modern philosophy and the father of analytic geometry. In particular, Descartes argued for an interactionist dualism between mind and body, holding that the mind ("res cogitans") interacts with the body ("res extensa") through the pineal gland.

[5] Sir J. C. Eccles was an Australian neurophysiologist. He was the author of fundamental discoveries on the physiology of neurons (nerve cells) and in particular on the biochemical mechanism of nerve impulses. In clarifying this ionic mechanism, which leads to the excitation and inhibition underlying the physiology of the nervous system, the contribution of the British physiologists A. L. Hodgkin and A. F. Huxley, with whom he shared the Nobel Prize in Physiology or Medicine in 1963, was also important. Together with a few other neuroscientists and philosophers, he endorsed a mind/body dualism.
The crucial point, in my view, is that neurobiology cannot do the work within subjective thought and language, because all knowledge and representation of our inner mental states takes place in the domain of psychology. So, if we take into account both the way in which we know and act on things, and the way in which they are physically realized and scientifically described, we are obliged, from my point of view, to endorse a position that I (and many scholars) call "epistemological dualism" or the "duality of knowledge". I will try to explain this crucial concept briefly. The conceptual and linguistic framework and methods by which we know and describe, through the neurosciences and cognitive sciences, the structure and functioning of the brain and nervous system are completely different from the conceptual and linguistic system that we use to understand and describe mental activity and behavior. They are completely different in the sense that the two conceptual and linguistic systems are not translatable one into the other. This untranslatability raises hard problems for the prima facie assumption of a full reduction of the mental domain to the physical domain. The untranslatability of the two systems can be described in the following way: the concepts we use to describe a brain process are not suitable (Wittgenstein [6] would say that "they spin freely") for describing the corresponding mental process. Vice versa, the concepts we use to describe a mental process spin freely when applied to the underlying brain process. According to Wittgenstein's theory of language games, I cannot say in a precise way, for instance, that Broca's area (the area of the brain, located at the base of the left frontal lobe, which physically realizes the ability to speak) speaks, because this ability and its features belong to another conceptual and linguistic grammar. Another example. It is universally accepted biological knowledge that intellectual activities heavily depend on the functioning of the frontal lobes. But it would be meaningless to assert that the frontal lobes think. It is clear that while it is correct to say that the frontal lobes make our thinking possible, it is not at all correct to say that the frontal lobes think. Hence the importance, as may be seen in these two simple examples, of the distinction between possession and use of a mental function. In Broca's area, physical and biochemical processes take place which make it possible to speak, while it would be nonsense to say that Broca's area speaks.
Coming back to Wittgenstein's argument, it is not logically possible to apply the mentalistic concept of "speaking" to the neurobiological concept of "Broca's area". The same applies to thinking and the frontal lobes, and so on. But if it is not Broca's area that speaks, who is speaking? This is a typical way of arguing that Descartes and a Cartesian thinker would endorse. In this case, and in every case like this one, the correct answer is that the predicates of speaking or thinking, etc., properly refer and belong to a human being or to some other animal. This is the classic philosophical problem of identity (who am I?) and, leaving aside the many theories about it throughout the history of philosophy, we may see that it is certainly a mentalistic concept, perhaps the most mentalistic one, from which every other mentalistic concept takes its origin. In this way we come to a conclusion which many scholars share: although the mind is a product of the brain, the conceptual system that allows us to know and speak of the brain is immeasurably different from the conceptual system that we use to study and learn about mental activities. This leads to two distinct domains of knowledge: the neurobiological domain and the psychological/behavioral domain. For the above logical and linguistic reasons, there is no possibility of passing directly from one to the other, unless we argue, as the American philosophers Paul and Patricia Churchland [7] do, that in the near future the conceptual and linguistic vocabularies of philosophy and the neurosciences will merge to create a new science of the mind, neurophilosophy. This relation between the neurobiological domain and the psychological/behavioral domain, or between neurosciences and philosophy, brings us to the fourth set of observations.

The ad Hoc Theories or Bridge Theories

The two conceptual systems, the neurobiological and the mental, may come into relation only if we introduce some ad hoc assumptions which work as a bridge between them, allowing psychological knowledge to be related to neurobiological knowledge. The use of ad hoc assumptions is perfectly legitimate as long as we are aware of using them. I will give an example concerning the concept of consciousness. Wittgenstein, in his Philosophical Investigations (1953), as I wrote before, said that the meaning of a word is the use we make of it. Therefore, he argued that a word has different meanings depending on the context of communication (language game) in which the word occurs. Now we have to ask: how many different language games are there for the word and concept of consciousness?

[6] L. J. J. Wittgenstein (1889-1951) was an Austrian-born philosopher who held the professorship in philosophy at the University of Cambridge. Wittgenstein inspired two of the century's principal philosophical movements, logical positivism and ordinary language philosophy, though in his lifetime he published just one book review, one article, a children's dictionary, and the 75-page Tractatus Logico-Philosophicus (1921). His Philosophical Investigations, published posthumously (1953), is considered one of the most important books of 20th-century philosophy. In this last book he provided a detailed account of the many possible uses of ordinary language, considering language as a series of interchangeable language games in which the meaning of words is derived from their public use.
There are certainly several language games for it: we may speak of consciousness to refer to moral responsibility and free will, or we may speak of consciousness to refer, in a minimal way, to a simple biological state of awareness, and so on. But, despite the different uses in which "consciousness" may occur, the different meanings of consciousness rest on one single and basic concept, which is the ontological concept, the biological one, without which we could not assign any other. Nevertheless, it is also clear that the everyday concept of "consciousness", relating to moral responsibility and free will, etc., is essential for our linguistic communication. And in everyday conversation misunderstandings or differences of opinion about its meaning rarely arise. The places for disputes of meaning are almost always discussions between philosophers and scientists. Therefore, it is clear that the concept of consciousness cannot be totally sacrificed, because we would deprive our language of an essential piece. Vice versa, in the scientific and clinical domains some uses or aspects of the concept of consciousness can be sacrificed. This leads us to the ad hoc theories that are introduced to establish a bridge between the mental and the brain. Let us examine one example to understand this important point better. In neurology, the concept of consciousness is characterized in terms of awareness. This characterization allows a strict translation of the mentalistic concept of consciousness into a rigorous neurological one. The neurologist defines consciousness negatively, from the detailed description of situations characterized by a decrease or absence of consciousness. To understand better what I am going to say, consider two situations. First situation: a person who is asleep, and we add, not to complicate things too much, that it is non-REM sleep. This person is unconscious in two main ways: i) within certain limits, he does not respond to external stimuli; ii) he has a characteristic electroencephalographic (EEG) trace. Second situation: a person in a deep coma. In this situation the absence of consciousness is defined by specific clinical and instrumental criteria. On the clinical level the main features are a prolonged state of unconsciousness, like sleep, but marked by a complete non-responsiveness to external stimuli, even the most energetic, with a complete abolition of sensibility and movement, and serious vegetative disorders. On the instrumental level we have an EEG which shows a profound impairment of brain electrical activity, along with other important physiological changes that can be tested instrumentally. Therefore it is correct to say that, in neurology, a person has consciousness if he is not in one of these states or in other similar ones, such as fainting or syncope. Here we have a precise and rigorous example of translation from the mental to the biological. But it is clear that the concept of consciousness used by the neurologist is a concept aimed solely and exclusively at the needs of neurology. That is precisely an ad hoc concept.

[7] Patricia S. Churchland has focused her philosophical investigation on the interface between neuroscience and philosophy. According to her, philosophers are increasingly realizing that to understand the mind one must understand the brain, and in the near future the neurosciences, together with philosophy, will create a new language with a neuroscientific basis, thanks to their many discoveries (Churchland, 2002).
The neurologist adopt a theory ad hoc, or a bridge theory of consciousness. Consciousness is translated by neurologist in a well-defined NS and body state. He has no interest for all other aspects or conceptual and linguistic uses of "consciousness" and he may ignore or sacrifice them. For example, he is not interested in the fact that we often use in our everyday language the word "moral consciousness" to refer to responsibility, or that sometimes we use the word "consciousness" to say that someone is very conscientius or not conscientius at all. It is clear that the moral concept of "conscientiousness" used in these language games has nothing to do with the biological concept of "consciousness" in which neurology is basically interested. This is a clear example of a rigorous conceptualization of the ad hoc concept of consciousness: some aspects (those that are useful for the neurologist) are themed, the others, which are not considered interesting for the neurologist's purposes, are discarded. In other cases, ad hoc theories may be used in a less rigorous, absolutist way. I'll take an example from the field of psychopathology. Consider a delirium. A delirium is a false belief that is supported even in front of the most obvious evidence of its falsity. Delirium is a serious mental symptom just because it expresses a deep alteration of the subject's relationship with reality. It is possible, at least in part, to conceptualize the delirium to enable its translation into the language of neuroscience. Delirium is a mental phenomenon and as such can not be entirely described in an objective way. Nevertheless, through an appropriate ad hoc theory, it can be described as something objective. It is sufficient, in describing it, to take into account its legitimately objectivizable goals, rejecting everything appears refractory to objectivity. A delirium has an onset and its course. For example, it can occur at twenty or sixty. And these are objective data. Other objective data are about the duration of delirium, if it is a chronic, persistent delirium, or an episode delirium which appears and disappears in a short time. These are objective data too. But there are many others objective data about it: if delirium is confused or organized, if it's weird, if it is shiny, it is a delirium of persecution, or omnipotence, if it is responding to antipsychotic drugs (and which) or resistant to each drug. We can also take into consideration the informations of the patient's medical history and family: age, sex, social status, past illnesses and so on. On the base of this ad hoc conceptualization, the delirium can be treated in the research such as a precise objective entity, which can be classified and investigated with statistical and epidemiological instruments, and it could become the subject of a research around the related neurobiology. This way of considering this mental pathological phenomenon has important advantages for theoretical research and for psychiatric practice, but there is a cost to be paid. Translating delirium in an objective entity means to completely overlook the subjective side of the delirium experience, its being a personal life story, as something private and unrepeatable. The reification of the delirium involves the waiver of the psychological and existential search for meaning that it expresses. If on one hand the research can clearly go in the direction of the objectivity, on the other hand, it abandons the dimension of subjectivity and story of life. 
In short, the ad hoc theories conceptualize mental activity with the aim of making it fit for empirical research and, where possible, to establish a bridge with the basic neuroscience research. This is perfectly legitimate and right, provided you know that you are sacrificing many aspects of the mental activity. The risk is that the ad hoc theory can become for the community of researchers the real and only truth. Possession and Use of Mental Functions Another group of preliminary reflections concerns the distinction between possession and use of mental functions. In relation to higher mental functions (perception, attention, memory, language, thought, aimed behavior, etc.) I think very useful, both for theoretical investigation and for clinical distinction, between possession and use of the function. Take for example the function of the linguistic expression. The possession of this function depends on the level of the brain, the integrity of certain cortical areas well known in neurobiology, including Broca's area. If these structures suffer a damage, the expressive function is altered. The person loses all or part of the ability to speak. Here you have the possession of the function, which may exist or be questioned in the case of a brain damage. Possession is entirely in the domains of neurobiology and neuropsychology. But in addition to the possession of a function there is the use we make of it. The use of a function has a more qualitative feature. For example, the full possession of the expressive function of language is consistent with the fact that the subject can make a good or a bad use of the words. In fact we can speak saying something inappropriate or stupid, helpful or harmful things, things you understand or that are unintelligible. This applies to the language, but also to all the other higher functions: memory, reasoning, learning, and so on. There's no doubt that also the quality of the use of a function depends (according to the theory of ontological monism) on the brain and NS, but the at the present state of neurobiology there is no satisfactory answer to the question of the use of a function. I mean that, for example, the brain of a Nobel prize for literature and the brain of a healthy illiterate man usually do not show any difference in relation to the present acquisition and tools of neurobiology. And if the illiterate man is young, and the Nobel Prize is old, the brain of the latter will certainly be less healthy. This reflection is useful to make clear the fact that the use of function calls into question a wide variety of variables that are largely foreign to the issue of possession, that is to the integrity of the organ. It's clear that the problem of the use is correspondingly more difficult and elusive than the problem of possession. A Short History of the Discovery I'm going to start considering mirror neurons discovery with a short selection of many recent quotes from wellknown neuroscientists and philosophers who are writing and debating very much on this discovery, its interpretation and its consequences for our theories of mind and cognitive functions. The mirror neurons are for psychology what DNA was for biology. 
"The discovery of the motor resonance mechanism of mirror neurons has shown that the motor system, far from being a mere controller of muscles and a simple executor of commands coded elsewhere, is able to perform cognitive functions that for a long time have been erroneously considered the prerogative of psychological processes and neural mechanisms of a purely associative kind... Surely mirror neurons bother those who look at neuroscience as a simple method of tracking and validating mental mechanisms deemed valid a priori."

"It is important to keep several questions distinct. When do we have simulation? When do we have mirror neurons? When do we have social-intentional relationships? I do not argue that conceptualization and imputation are necessary for the existence or activation of mirror neurons (MNs), only that they (or something similar) are necessary for social-intentional relationships. If MNs themselves do not guarantee such elements, then they don't, all by themselves, guarantee social-intentional relationships."

"The human ability to represent psychological states (beliefs, intentions, desires, emotions) and to attribute them to others (so-called mindreading) goes beyond the mechanism of mirror neurons. Consequently, the idea that autism stems from a lack of mirror neurons is also wrong."

"Right now it has not been demonstrated that mirror neurons play a real functional role in understanding action. And, even if they do, how they do it."

In the '80s and '90s the group of researchers at the University of Parma headed by Giacomo Rizzolatti and composed of Luciano Fadiga, Leonardo Fogassi, Vittorio Gallese, and Giuseppe di Pellegrino was devoted to the study of the pre-motor cortex. They had placed electrodes in the inferior frontal cortex of a macaque monkey to study the neurons specialized in the control of hand movements, such as collecting or handling objects. During each experiment the behaviour of individual neurons in the monkey's brain was recorded while the macaque was allowed to reach for bits of food, in order to measure the neural response to specific movements. Like many other important discoveries, that of mirror neurons was due to chance. The anecdote reports that while one investigator took a banana from a fruit basket prepared for the experiments, some neurons in and around the so-called F5 brain area (in which there are visuo-motor neurons) of the monkey, which was watching the scene, reacted. How could that happen if the monkey had not moved? How could it happen if until then it was thought that these neurons are activated only for motor functions? At first, the investigators thought it was a defect in the measurements or a failure of the instrumentation, but everything turned out to be in order, and the reactions were repeated whenever the action of grasping was repeated. The work has since been published, and the discovery of mirror neurons, located in both the inferior frontal and parietal regions of the brain, has been confirmed. In 1995, Luciano Fadiga, Leonardo Fogassi, Giovanni Pavesi, and Giacomo Rizzolatti demonstrated for the first time the existence in humans of a system similar to that found in monkeys. Using Transcranial Magnetic Stimulation (TMS), they found that the human motor cortex is facilitated by the observation of the actions and movements of others.
More recently, further evidence obtained by functional Magnetic Resonance Imaging (fMRI), TMS, ElectroEncephaloGraphy (EEG), and behavioural tests has confirmed that similar systems exist in the human brain and are highly developed, and the regions which respond to action observation have been precisely identified. Given the genetic similarity between primates (including humans), it is not surprising that these brain regions are closely similar across species. In recent years researchers have devised more and more sophisticated experiments on monkeys. In particular, I want to mention a recent ingenious experiment. The usual critics had argued that the alleged "mirror neuron" is only sensitive to a movement of grabbing, handling, or tightening the fingers, not to the intention of an action with a clear and precise purpose. To show that the activation of the neuron is also sensitive to the purpose, researchers used the typical pliers for French gourmet escargots. These pliers open when you tighten the grip and, vice versa, close when you release it. By training a monkey to grab a nut with these pliers, one can see the activation of the mirror neuron, which fires specifically when there is an action of grabbing food in order to eat it. The fingers are relaxed rather than contracted, but the intentional action, on an abstract level, is the same, and therefore the mirror neuron fires. This interesting and important experiment clearly shows that MNs are sensitive not only to a movement of grabbing, handling, or tightening the fingers, but also to the intention of an action with a clear and precise purpose. Finally, at the beginning of April 2010, the team of neuroscientists headed by Roy Mukamel and Marco Iacoboni, of the University of California at Los Angeles (UCLA), announced the important news that the problem of proving in a direct way the existence of such a mirror neuron system in humans (it is considered unethical to implant electrodes in people's brains for these research purposes) had been overcome, thanks to twenty-one patients treated for epilepsy that was resistant to medication (reference: Mukamel et al., Single-Neuron Responses in Humans during Execution and Observation, Current Biology, 2010). Electrodes had been implanted in their brains with the aim of identifying the epileptogenic foci for a neurosurgical intervention, so that electrode placement was based solely on clinical criteria. During their hospitalization, the researchers asked them to perform certain actions, such as grasping objects, or to observe facial expressions on a screen and then to repeat them. The authors report that, among the 1177 neurons under observation, a significant portion responded to both stimuli (action and observation of grasping and facial emotions) in the supplementary motor area of the medial frontal lobe (SMA) and in the medial temporal lobe, specifically in the hippocampus and the entorhinal cortex, while in the amygdala and in the pre-SMA the number of cells did not reach significant levels of response. In line with the theory of the MNS, mirror neurons (MNs) fired both for action and for observation, and their activity was directly recorded by the electrodes. These findings suggest, according to these researchers, that in humans there are probably multiple systems with mirror neural mechanisms, both for the integration and for the differentiation of the perceptual and motor aspects of actions performed by oneself and by others.
Since these new areas of the cortex perform different functions (vision, movement, memory), Iacoboni thinks that this discovery suggests that mirror neurons provide a very rich and complex system for the pre-logical reproduction and interpretation of the actions and emotions of others.

G. Rizzolatti and C. Sinigaglia's Interpretation of the MNS

One of the richest and most exciting books reporting the experiments and implications arising from this important discovery is Rizzolatti and Sinigaglia (2006). The central argument around which the seven chapters of the book are articulated is that «the brain that acts is also and above all a brain that understands» (Ib., p. 3). The meaning and scope of this statement lie at the heart of the neural mechanism identified by the neurophysiologists at the University of Parma headed by G. Rizzolatti. As I already said in recounting the history of this discovery, in a series of studies conducted over the past two decades these researchers discovered, in the pre-motor cortex of monkeys and later also in the human one, by means of cerebral imaging instruments, the existence of two groups of neurons which are both active during the execution of actions directed at objects: simple and familiar gestures like grabbing something with your hand or bringing food to your mouth. The surprising thing is that these two groups of premotor neurons are also activated in the absence of any actual execution, during purely observational tasks: the neurons of the first group respond to the sight of the object to which the action could be directed, while those of the second group respond to the observation of another individual performing the same action. Following the authors, we may take the example of the coffee cup: the pre-motor neurons are activated while you grasp the handle, but for some of them activation is triggered even by the simple observation of the cup resting on the table, and for others also by the observation of our neighbour grabbing the cup to drink his coffee. In both cases, therefore, we have bimodal neurons, which are activated both by motor and by perceptual processes. Their activity may be described by the mechanism of "embodied neural simulation": during the observation of an object, a motor pattern appropriate to its characteristics (such as size and orientation in space) is activated, "as if" the viewer were entering into interaction with it. In the same way, during the observation of an action performed by another individual, the neural system of the observer is activated "as if" he were performing the same action he observes. The neurons of the first group were called "canonical neurons" because the involvement of the pre-motor areas in processing visual information about an object into the motor acts required to interact with it had been suggested since the 1930s; those of the second group were called "mirror neurons" because they cause a mirror reaction in the neural system of the observer, in which a simulation of the observed action takes place. In the light of this mechanism of embodied neural simulation, the role played by the motor system within the whole cognitive system, which was usually associated only with the planning and execution of actions, could be reinterpreted. Indeed, it seems that the bimodal neurons found in the pre-motor cortex are strongly implicated in high-level cognitive processes, particularly in the perceptual recognition of objects and actions, and in the understanding of their meaning.
This new way of seeing and explaining the motor system, which turns out to be involved also in the perceptual recognition of objects and actions, and in the understanding of their meaning, undermines the rigid boundary between perceptual-cognitive processes and motor ones. This rigid boundary between motor and cognitive processes has for years characterized interpretations of the architecture of the brain. On the contrary, it seems that perception, understanding, and action are grouped together into a unified mechanism, according to which «the brain that acts is also and above all a brain that understands» (Ib., p. 3). With regard to objects, the brain's understanding is related to their functional significance or "affordance". Canonical neurons allow an immediate understanding of the possible interactions that certain objects afford a perceiving subject (in the case of the handle of a coffee cup, the possibility of being grasped). With regard to actions, the understanding is related to the purpose behind them. Mirror neurons, instead, enable an immediate understanding of the intentions of other individuals (e.g., the intention of a man to bring the cup to his mouth to drink the coffee), making possible a prediction of their future behaviour. Many experiments were conducted on monkeys and humans to arrive at these theories. Obviously the techniques used for monkeys and for humans are usually very different (except for the experiment of Mukamel and Iacoboni of April 2010): while in monkeys it is possible to record a single neuron via the intra-cortical insertion of electrodes, in human subjects only noninvasive methods of brain imaging are used, such as Positron Emission Tomography (PET) or fMRI, which allow us to visualize the activity of whole brain areas but not of individual nerve cells. This is, so far, the insurmountable limit of such experiments on humans. In particular, in the fourth chapter of Rizzolatti and Sinigaglia (2006), entitled "Action and understanding", there are two experiments which are considered very important for defining the role of mirror neurons in our understanding of the purpose underlying actions. The first revealed the existence of a mechanism that operates not only in the motor and visual modes, but also in the auditory mode. Indeed, when the monkey is in darkness and listens to the noise produced by an action such as breaking a nut, the same neuron fires when the animal breaks the nut, when it sees someone breaking a nut, and when it hears the sound of someone breaking it. The interpretation of this experiment is that, whatever the mode, the same neuron fires to encode the "breaking-of-a-nut", which coincides with the purpose, that is, the intention of the action. The second experiment made it possible to discriminate between a grasping gesture aimed at bringing food to the mouth and one aimed at putting it in a container. This experiment goes in the same direction as the one on the typical pliers for French gourmet escargots I talked about before: during the execution of the same action (grasping), MNs fired in different ways depending on the ultimate goal of the action, that is, whether the intention was to bring the food to the mouth or to move it into the container. Some results obtained with humans in an fMRI experiment seem to go in the same direction.
It was noted a particularly significant activation of the mirror system in experimental subjects during the observation of actions which were not "pure", but precisely included in the context, from which one could clearly infer the intention that was implied. All these experiments would allow us to state that the mirror neuron system is able to encode not only the act, but also the intention with which it is made. According with the paradigm of the embodied cognition (endorsed by many philosophers and neurobiologists, namely A. Clark, A. Damasio, etc.), the intentions of others can be understood without any reflective conceptual or linguistic mediation. It would be nothing but a pragmatic understanding based solely on the motor knowledge on which it depends our own capacity to act. Another very interesting chapter of the book is the sixth, entitled "Imitation and language". Two other important functions assigned to the mirror system are described, as basilar capacities which would make the verbal and non verbal language possible. They are: 1) an "imitative function" intended as the ability to replicate gestures already belonging to our own motor repertoire; 2) the "capacity of learning new motor patterns by imitation". It is a common function that would also outline a possible scenario for the origin of human language related to the evolution of the mirror system, which could be interpreted in this sense as an instrument of integration between gestures and sounds to foster a more precise understanding of social behaviours. Dulcis in fundo, the last chapter of the book is dedicated to the sharing of emotions. The central thesis is that the recognition of the emotions of others is based on a set of different neural circuits which share the mirror properties already seen in the case of action understanding. It was possible to study experimentally some primary emotions such as pain and disgust, and the results clearly show that observing in the other an expression of sorrow or disgust activates the same neural substrate underlying the perception in first person of the same kind of emotion, as if it were a sort of involuntary perceptual and motor imitation. Further confirmation comes from clinical trials in patients suffering from neurological diseases. Once lost the ability to feel and express a given emotion, it becomes impossible to recognize even when expressed by others. As in the case of the actions, also for the emotions one can speak of an immediate, pre-logic understanding that do not require cognitive processes of the kind of conceptual inference or association. This immediate understanding of the emotions of others would be the necessary precondition for that empathic behaviour underlying much of our inter-individual relations. Moreover, as the authors rightly note, already Darwin himself (in The Expression of Emotions in Man and Animals, 1872) has emphasized the adaptive value of emotions and the evidence of the perceptual and emotional empathy in the animal kingdom. Far from being confined to the functioning of certain nerve cells, the mirror properties would pervade the entire system of the brain: the same logic that allows us to pair execution and action understanding in a single neural mechanism, allows us also to describe the emotional sharing and perhaps the arising of the phenomenon of consciousness too. The neuro-psychologist Anna Berti has identified a similar mode of "neural coupling" for the execution of actions and the awareness of having (or not) performed them. 
This motor awareness, that allows us to be conscious of our actions would share the same neural substrate underlying the motor control of these actions. How to Interpret the New Results and Challanges of Neurosciences? V. Gallese Versus A. Caramazza (May 29, 2009) "The discovery of motor resonance mechanism of mirror neurons has shown that the motor system, far from being a mere muscles controller and a simple executor of commands coded elsewhere, is able to perform cognitive functions that for a long time have been erroneously considered prerogative of psychological processes and neural mechanisms of a purely associative kind... Surely mirror neurons bother who look at neuroscience as a simple method of tracking and validation of mental mechanisms deemed valid a priori." (Vittorio Gallese, Medical neurologist, is Professor of Physiology at the University of Parma) "Till now we haven't had any important clinical demonstration that mirror neurons really have a functional role in the action understanding and, even if they do, in which way they can do it." (Alfonso Caramazza, Director of the Laboratory of Cognitive Neuropsychology at Harvard University and Director of the International Center for Mind/Brain Science of the University of Trento). MNs, or "cerebral cells of empathy", as they are sometimes called, are in these last years the focus of an important debate between many scientists, namely V.Gallese and A.Caramazza. As I told in the Chapter 2, MNs were discovered in the '90s by the group of researchers at the University of Parma headed by G. Rizzolatti and composed by L. Fadiga, L. Fogassi, V. Gallese and G. di Pellegrino during their experiments on pre-motor cortex of macaque monkeys brain, while they were collecting or handling objects. Finally, at the beginning of April 2010, the team of neuroscientists headed by Roy Mukamel and Marco Iacoboni, University of California at Los Angeles, gave the important new that the problem of proving in a direct way the existence of such a mirror neurons system in humans has been overcome, thanks to twenty-one patients treated for epilepsy. Some electrodes have been planted in their brain for medical purposes. During their hospitalization, the researchers told them to perform certain actions, such as grasping objects, or to observe facial expressions. According to the theory of MNS, MNs fired both for action and observation of actions, and their excitement was directly recorded by the electrodes. The authors say that among the 1177 neurons under observation a significant portion responds to both stimuli (action and observation of grasping and facial emotions) in the supplementary motor area of the medial frontal lobe (SMA) and in the medial temporal lobe, specifically in the hippocampus, parahippocampal and the entorhinal cortex, while in the amygdala and in the pre-SMA the number of cells would not reach significant levels of response. Since these new areas of the cortex perform different functions (vision, movement, memory), Iacoboni thinks that this discovery suggests us the idea that mirror neurons provide a very rich and complex system of pre-logic reproduction and interpretation of actions and emotions of others. But, in the opinion of A. Caramazza this is not the crucial proof for mirror neurons existence, at least with the cognitive features that Gallese, Iacoboni and Rizzolatti assign to them. According to Caramazza, to value if human brain contains mirror neurons, may be useful a medical technique called fMRI adaptation. 
This technique allows us to test whether a specific cerebral area is sensitive to a change in some property of a stimulus (e.g., its colour or shape) or insensitive to such a change. The principle is that the repetition of a stimulus produces a weaker response in the nerve cells involved. Mirror neurons should therefore be sensitive to a change of motor act, regardless of whether the motor act is observed or performed (a toy illustration of this adaptation logic is sketched below). But this is not what happens in the research carried out so far. In fact, the motor acts taken into consideration usually involve "target objects", such as a pen or a cup, so it is not clear whether the neural activation reflects the properties of the target object or the movement itself. From Caramazza's point of view there are two principal models, or paradigms, of the architecture of the mind in the brain: 1) the reductive-eliminativist perspective, which claims that the whole of cognition can be reduced to sensorimotor representations; 2) the non-reductive perspective, which claims that cognition is not at all reducible to sensorimotor representations. The most important point of disagreement with Gallese, and with Rizzolatti too, concerns the interpretation of the role of mirror neurons. Their existence is consistent with a potential role in action understanding, but until now there has been no important clinical demonstration that mirror neurons really have a functional role in action understanding, or of the way in which they could play it. In Caramazza's perspective there is a significant gap between the original discovery of mirror neurons, which show selectivity for motor acts, and their alleged involvement in human cognitive functions. Higher human cognitive functions cannot usually be reduced to simple associations of the kind that link the sound of a nut being cracked to the corresponding motor act. There are many examples in our everyday experience in which the same visual input (e.g., a yawn) can take on different meanings (fatigue, boredom, provocation, malaise) that can be understood only thanks to background information which is probably not available, on a selective basis, to the motor system. It is therefore necessary to distinguish between data that are merely consistent with the involvement of MNs in action understanding and data that actually show that mirror neurons play a crucial role in action understanding. On the other side of the dispute over the discovery of MNs and its interpretations, V. Gallese claims that mirror neurons certainly annoy those who look at neuroscience only as a mere instrument for localizing and validating mental or psychological mechanisms deemed valid a priori. When neuroscience produces results that challenge or even refute these models, the first reaction is to deny the existence of those results. Gallese states that what Caramazza says about mirror neurons (beyond the inherent limitations of his recent work, taken as proof of the non-existence of mirror neurons in man) is a clear example of this attitude. In the ten to fifteen years since the discovery of MNs in the macaque brain, a large body of research has deeply changed both the traditional way of conceiving the relation between perception and action and the role that perception and action play in the construction of social cognition.
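To make Caramazza's proposed test concrete, here is a minimal toy simulation in Python of the repetition-suppression logic described above. It is only an illustrative sketch, not an analysis pipeline anyone has published; the response values, the suppression factor and the trial labels are invented for illustration.

    # Toy illustration of the fMRI-adaptation (repetition-suppression) logic.
    # Assumption: a region that truly encodes the motor act itself should respond
    # more weakly when the same act is repeated, even across the observe/execute
    # boundary; a region coding modality- or object-specific features should not.
    # All numbers are invented for illustration.

    BASE_RESPONSE = 1.0   # hypothetical response to a novel motor act
    SUPPRESSION = 0.4     # hypothetical attenuation when the act is repeated

    def trial_response(region_codes_act, act, previous_act):
        """Toy response of a region on one trial."""
        repeated = (previous_act == act)
        if region_codes_act and repeated:
            return BASE_RESPONSE * (1 - SUPPRESSION)  # cross-modal adaptation
        return BASE_RESPONSE

    def run_sequence(region_codes_act, trials):
        """Trials are strings like 'execute:grasp' or 'observe:tear'."""
        responses, previous_act = [], None
        for trial in trials:
            _modality, act = trial.split(":")
            responses.append(trial_response(region_codes_act, act, previous_act))
            previous_act = act
        return responses

    sequence = ["execute:grasp", "observe:grasp", "observe:tear", "execute:tear"]
    print("act-coding ('mirror-like') region:", run_sequence(True, sequence))
    print("region insensitive to act repetition:", run_sequence(False, sequence))
    # The act-coding region adapts on trials 2 and 4, where the same act recurs
    # across modalities; Caramazza's point is that this cross-modal suppression
    # was not found in the adaptation studies carried out.

The sketch simply encodes the signature the adaptation test looks for: suppression that follows the identity of the motor act rather than the modality (observed versus executed) or the target object.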
The discovery of the mechanism of motor resonance in MNs has shown that the motor system, far from being merely a controller of muscles and a simple executor of commands coded elsewhere, is able to perform cognitive functions that have long been erroneously considered the prerogative of psychological processes and neural mechanisms of a purely associative kind. The perception of action in others, that is, the understanding of the specific intentional aim of physical movements which are never pure and neutral, appears to be an intrinsic mode of action, since it is rooted in the motor knowledge that underlies our own capacity to act. Classic cognitive science, in Gallese's opinion, has a hard time accepting this idea because, according to its traditional architecture of the mind, the motor system cannot in principle have cognitive features. The embodied mechanism of MNs would instead provide a much richer set of physical processes underlying social interactions, beginning with phylogenetic and ontogenetic processes. Our understanding of actions and motor intentions, made possible by the MNS, puts in jeopardy the abstract mentalism, or "mentalese", supported by many cognitive psychological models, starting from the classic paradigm of modularity defended by Fodor in his influential theory of mind. The debate on the capacity to understand others has, in Gallese's opinion, been framed badly for many years: this capacity has been conceived solely as the capacity to "mindread" (the term used by Alvin Goldman; see the "theory of mind" entry on Wikipedia), that is, to attribute beliefs and desires to other people in the light of a priori, abstract psychological theories based on propositional attitudes. In this way a psychological explanation is taken to be right a priori, without explaining either the real neural mechanisms or, consequently, the psychological processes underlying social cognition. For many years it has been said that when we understand the behaviour of others, specific cerebral areas are activated, such as the anterior cingulate cortex (ACC) and the temporo-parietal junction (TPJ), which were considered the cerebral seat of an alleged module for the cognitive theory of mind. But this is false: it has been demonstrated that these areas are activated even by tasks completely unrelated to "mind-reading", such as attention or sexual arousal. In Gallese's opinion, cognitive theories about mind-reading are a sort of phrenology of the twenty-first century, considering also that cognitive psychologists know little or nothing about how the brain works. Neuroscientists, on the contrary, try to understand which neurophysiological mechanism determines the activation of a given cortical circuit during a task. The reality is that nobody yet knows why the ACC and TPJ systematically activate during mentalistic tasks, because we still do not know the neurophysiological mechanism that underlies their activation. Caramazza claims that it is unlikely that MNs, if they really exist, can play a role in complex functions such as empathy and language understanding, or in explaining cognitive pathologies such as autism. What philosophers, sociologists and people interested in marketing have written in recent years about mirror neurons, about how they work and about what they can explain, is often wrong. Gallese disagrees with Caramazza both in denying the existence of MNs and in reducing their role so drastically.
According to Gallese, we cannot prove that they do not exist, while we have many experimental data and scientific works in favour of their existence, obtained with different research techniques such as PET, fMRI, TMS, EEG and MEG and, finally, at the beginning of April 2010, the implanting of electrodes in the brains of the twenty-one epilepsy patients studied by Iacoboni's group, which would clearly show the existence of mirror neurons. Indeed, as the theory of the MNS predicts, MNs fired both for the execution and for the observation of actions, and their activity was recorded directly by the electrodes. The activity of 1,177 neurons was recorded, including in areas in which the presence of MNs had not previously been assumed. The presence in the human brain of a mechanism similar to the MNS represents, in Gallese's opinion, the most parsimonious unifying explanation for a range of behavioural and clinical data. Since it has been shown that MNs exist and have an adaptive-evolutionary role in birds and monkeys, it would be nonsensical, or at least a strange thing in evolutionary terms, if such a system, so useful for the survival of many natural species, were absent in human beings, whose evolutionary history is not longer than that of birds and monkeys. But it is clear that the cognitive complexity of the human being leads us to wonder what mirror neurons and the mirror neuron system can explain about human behaviour and what they cannot explain. This is the reason why there is such wide interest, and such a fruitful debate, among researchers in disciplines such as philosophy, psychology and sociology about the interpretation of this discovery. Gallese is one of the best-known supporters of so-called "cognitive neuroscience", whose aim is to look for the neurophysiological bases of social cognition. He is convinced that the hard problem of how to explain intersubjectivity cannot be faced and solved by philosophy, neuroscience or psychology alone, but requires an interdisciplinary approach. As argued in Chapter 1.3 (The epistemological dualism), the mind-body problem requires an interdisciplinary approach, and the status and role of cognitive neuroscience is a good example of this multilevel approach to understanding and explaining complex phenomena such as consciousness, the self and intersubjectivity. The idea of "neurophilosophy" supported and endorsed by P. and P. Churchland goes precisely in this direction. Gallese, the Churchlands and other (neuro)researchers also stress the need for a new common language able to bring together, in a harmonious way, the different research perspectives offered by new theories from neuroscience, philosophy, psychology and other disciplines, in order to understand and explain complex cognitive phenomena. Thanks to better popular science communication, Gallese thinks, there is growing interest in the discovery of mirror neurons and in cognitive neuroscience generally, because people are coming to realize that this is the best way to know who we are and how we function. Notably, MNs capture people's attention because they show in a clear and simple way the crucial role of empathy in allowing mutual communication and understanding through emotional modalities rather than through sophisticated and abstract logical inferences and operations. Since the discovery of MNs, it has become possible to investigate scientifically intersubjectivity and its pathological alterations from a physical and neural perspective, one closely tied to the body.
In this way it has been possible to reactivate and revitalize a philosophical tradition, above all the phenomenological one, in the direction of a phenomenology of perception notably deepened by M. Merleau-Ponty. Nevertheless, Gallese himself is clear in stating that he has never claimed, and does not claim, that MNs explain everything there is to explain about social cognition. From his point of view they allow us to understand basic aspects of subjectivity from both a phylogenetic and an ontogenetic point of view, and this understanding may have important repercussions for the understanding of the mechanisms underlying more sophisticated forms of social cognition. It is an empirical question (luckily, not an article of faith!) how far one can go in using the mechanism of mirror neurons as a key for reading social cognition. Finally, Gallese does not agree with what the psychologist Paolo Legrenzi and the neuropsychologist Carlo Umiltà state in their recent book "Neuro-manie" (Il Mulino, Bologna 2009), namely that the brain does not explain who we are. He disagrees because this statement, and others like it, promote in Italian society, already historically skeptical of and poorly informed about science, an even greater separation between scientific and humanistic culture. Certainly the book received its warmest approval in extra-scientific circles, as with the enthusiastic review published in the Vatican newspaper "L'Osservatore Romano". Gallese shares with Legrenzi and Umiltà their alarm at the excessive use of neuroimaging techniques, but he does not agree with their criticism of cognitive neuroscience. The neuroscientific explanation of a cognitive or behavioural feature cannot be reduced to a mere localization; it is really an explanation when it is able to identify the mechanisms that enable the activation of a specific brain circuit during the performance of a specific task. This is the important and distinctive contribution of cognitive neuroscience. Otherwise, there is the risk of falling into bad reasoning.

Are We Sure that MNs Exist? P. Pascolo's Opinion about Them (August 5, 2009)

"If the neurons that mirror an action (a goal-directed action) are the same neurons performing the action, what happens in the case of simultaneous execution of these two kinds of task? A competition? A double circuitry? How does a mirror system arise in an animal, for example a horse, when it performs an "equivalent" act such as opening the door of the stable with its mouth?" (Paolo Pascolo, Extraordinary Professor of Industrial Bioengineering at the University of Udine)

Pascolo, anticipating in a 2008 study the criticism that Caramazza's research group would later direct at the existence, or at least at the cognitive role, of mirror neurons, radically criticized the theories of Rizzolatti and his colleagues at the University of Parma about the existence and role of mirror neurons. The first pair of questions that Pascolo takes into consideration are: i) if the neurons that mirror an action (a goal-directed action) are the same neurons performing the action, what happens in the case of simultaneous execution of these two kinds of task? A competition? A double circuitry? ii) How does a MNS arise in an animal, for example a horse, when it performs an "equivalent" act such as opening the door of the stable with its mouth?
If we examine the many studies published since 1996 (the year of the official announcement of the existence of mirror neurons in the monkey), we realize that "neuronal times" and "gesture times" are linked on a single time base which does not distinguish precisely the different phases in the execution of a gesture, that is, facial expressions, intentions, the onset of the movement, contact with the food, grasping, and so on. The neural data recorded in the monkey therefore have an uncertain timing. In some cases the monkey lagged behind, while in others it was ahead. Which is to say that the monkey was not copying the gesture but was thinking, in its own way: "I know what you are doing", or "if I were in your shoes I would do it this way", or "I already know what you are going to do", and so on. It often happens that we anticipate, in our own thought, the gesture that we expect to see in the person we are observing. It is usual for a parent observing his child to anticipate, in thought or in action, what he expects from him. So it would be appropriate to re-read the experiments of the Parma team in a way that takes nothing for granted. Pascolo underlines that, in the whole literature on mirror neurons, it has been taken far too much for granted that mirror neurons are "seen" in the monkey brain. The experiments on monkeys are assumed to have "certified" the existence of mirror neurons, and several researchers then concentrated on experiments with human beings. Until now it has simply been assumed that those experiments were valid and consistent. Experiments on human beings are neither invasive nor direct; they therefore have only a circumstantial nature, and many researchers have questioned the capacity of fMRI experiments to show the existence or non-existence of mirror neurons in humans. Moreover, the Parma team did not instrument the animal or the investigator with suitable sensors to register movements, facial expressions, and so on. The "anticipatory maneuvers", according to Pascolo, could be explained without appealing to the MNS. A simple anticipatory maneuver is the following: when I am about to grab an object by reaching my hands forward, I execute a backward shift of the pelvis in advance, so as to counteract the effects induced by the forward inertial movement and to prepare the body to stay balanced and accommodate the load. In practice, I assess the hypothetical load I will have to manage and implement strategies drawing on the necessary experience (Pastore et al., Chaos, Journal of Biomechanics). You can apply your experience also to the Inno di Mameli, the Italian national anthem: if you know the words you can sing in chorus, anticipate, delay, counterpoint, and so on, while if you do not know the words you will stutter or sing badly and behind the beat. Anticipation would be nothing but experience. When you have put in many hours of training, as in the case of a boxer, and have fought many matches, you are able to anticipate your opponent's blows. By contrast, when I use a given neural circuit both to examine a gesture and to execute it, I have to allow a certain amount of time for the mirroring function of the neuron to take place. This means a delay of at least 130-150 milliseconds, because several distinct physiological processes are necessarily involved: interpretation, transmission to the motor neurons and muscle recruitment.
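Pascolo's timing argument can be made concrete with a small back-of-the-envelope calculation. The sketch below is purely illustrative: the component latencies are hypothetical placeholders, and only the overall lower bound of 130-150 ms comes from the text above.

    # Back-of-the-envelope version of Pascolo's timing argument. The component
    # latencies are hypothetical placeholders; only the 130-150 ms lower bound
    # is taken from the text.

    mirror_route_ms = {
        "interpretation of the observed gesture": 80,  # hypothetical
        "transmission to the motor neurons": 35,       # hypothetical
        "muscle involvement": 25,                      # hypothetical
    }
    minimum_mirror_latency = sum(mirror_route_ms.values())
    print("toy minimum latency of a mirroring route:", minimum_mirror_latency, "ms")

    # A response that precedes the observed gesture (the monkey "was ahead")
    # cannot be the output of such a route; anticipation based on experience,
    # Pascolo argues, explains it more naturally.
    observed_response_time_ms = -50  # hypothetical: 50 ms before the gesture
    if observed_response_time_ms < minimum_mirror_latency:
        print("response occurs too early to be driven by mirroring")

The point of the arithmetic is simply that any mirroring route has a minimum latency, so responses that coincide with or precede the observed gesture fit an experience-based anticipation account better than a mirroring one.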
It is clear to everybody that when we are used to moving and behaving in a certain way, playing football or tennis and so on, we consequently tend to anticipate movements and behaviours according to an experiential map in our brain and body. For Pascolo there is no conclusive scientific evidence for the existence of MNs in monkeys or humans, since the experimental subject lies motionless in the fMRI scanner and the data refer to the thermal effect produced by visual stimulation, detectable across a considerable number of neurons. In conclusion, Pascolo states that the theory of MNs should be revised; and if we are not sure about the existence of MNs, we should also revise some experimental protocols for rehabilitation and the interpretation of autism.

The Critique of the Cognitive Role of Mirror Neurons: P. Jacob's Theory (June 8, 2009)

"The human ability to represent psychological states (beliefs, intentions, desires, emotions) and to attribute them to others (so-called mindreading) goes beyond the mechanism of mirror neurons. Consequently, the idea that autism stems from a lack of mirror neurons is also wrong." (Pierre Jacob, philosopher of mind and cognitive scientist, currently President of the European Society for Philosophy and Psychology and Director of the Jean Nicod Institute in Paris.)

Cognitive neuroscientists, particularly those of the University of Parma team headed by Rizzolatti, starting from the discovery of MNs in the ventral premotor cortex of macaque monkeys, have claimed on the basis of neuroimaging that such a MNS exists in man; but what they call the MNS includes an important area (the superior temporal sulcus) which does not itself contain MNs. P. Jacob stresses that MNs cannot explain all the cognitive functions that are necessary to make possible the understanding of other people's intentions and the complex phenomenon of empathy. If MNs fire in the brain of a monkey, or of a man, watching a conspecific grasping an object, what the activity of MNs generates in the observer would be only a mental repetition of the act of the agent. But there is a fundamental gap between the mental repetition of the act of a man who is grabbing a cup of coffee and the cognitive capacity to know whether the agent is going to drink from the cup, to give it to somebody, to put it on the table, to throw it out of the window, and so on. Repeating or simulating in a mental way the motor act of an agent is clearly not sufficient to understand the intention behind that action, for example giving the cup to a dear friend. It is not even certain that this mental repetition or simulation is necessary for understanding the intention. Moreover, we are speaking here of transitive acts, such as grasping or squashing, for which the target is considered essential to the act itself. By contrast, probably nobody would say that while we are watching an act of grasping we may, or should, feel empathy in relation to the agent toward whom the act is directed. It seems that empathy is important, and perhaps necessary, only to understand and respond to the affective internal states (technically, qualia) of other people, such as pain or happiness, while it would be neither important nor necessary for understanding motor acts. Naturally, affective internal states may be manifested in behaviour, but they are not, properly speaking, acts.
The human capacity to represent psychological states (beliefs, intentions, desires, emotions) and to attribute them to others (so-called mindreading) goes beyond the mechanism of mirror neurons. Therefore the idea that autism arises from a deficit of mirror neurons is wrong. It is possible that autistic people have a deficit of mirror neurons (if mirror neurons really exist), but even so it is entirely possible that the deficit of mirror neurons derives from a deficit in the capacity for mindreading, and not vice versa. Perhaps it is necessary to represent and understand the intention of an agent in order for mirror neurons to become active. In that case the activity of mirror neurons does not generate the understanding of the agent's intention; the causal chain runs in the opposite direction: first you understand the intention of the agent, and then the MNs internally imitate (or mentally reproduce) the observed act.

A Phenomenological Interpretation of MNs: L. Boella's View

L. Boella, Professor of Moral Philosophy at the University of Milan, notes that the discovery of mirror neurons has certainly contributed to the current popular success of the neurosciences. From Boella's point of view this success has been favoured by the particular clarity and simplicity of the result of this discovery. In this way the discovery of MNs has spread through many disciplines and has been interpreted well beyond the specific context of the experimental research. It is particularly significant that this discovery has interested philosophers, for it bears on a central point of contemporary thought: the conviction that the intersubjective relationship and the acknowledgement of the other are essential for the individual and for society. A particular convergence has therefore arisen with French phenomenology, represented above all by Maurice Merleau-Ponty, in the direction of a new interpretation of perception, considered not merely as an assembly of sense data (visual, auditory, etc.) but as a real dialogue with the external world (things and people) in which the body is the protagonist. The body, in fact, shows through its own movement intentions and preferences, and at the same time knows the world by discovering it. In France, researchers and philosophers (such as Jean-Luc Petit) have updated the scientific references to Merleau-Ponty, working on the link between perception and action and using experimental data such as those produced by the experiments on MNs. The meeting point between the discovery of MNs and philosophy concerns first of all the visuo-motor role of MNs and their role in the perception of goal-directed actions. Recently (G. Hickok, 2009), the non-cognitive mechanism of association between actor and spectator ("I know what you are doing") attributed to mirror neurons has been connected with the "motor theory of knowledge", which was very common among psychologists at the beginning of the twentieth century. Certainly the experiments on the MNS have become an obligatory reference for every philosopher who carries out research on empathy. The "rediscovery of empathy" (Stueber, 2006; Goldman, 2006) is due to the interpretation of the MNS as an "enlarged empathy". In the MNS, in fact, researchers have seen the neurobiological basis for a constellation of private states, notably qualia but also behavioural attitudes, from sympathy to the comprehension of other people and on to care.
Having studied closely the many philosophers and psychologists who have dealt with empathy, Boella is struck by, and skeptical of, some of the forced connections drawn between the discovery of the MNS and the theories of empathy taken up by certain philosophers and psychologists (e.g., Theodor Lipps) who played an important role in the reflection on empathy but often offered an impoverished version of it. Philosophical reflection on empathy reached its greatest depth in the first phase of the phenomenological movement, between 1910 and the first half of the 1920s, thanks to Husserl, Scheler and Stern. The most important point is that empathy is no longer considered a moral feeling (pity, compassion, sympathy) but an autonomous experience. Before sharing the pain of somebody else, I must recognize that there is a "somebody else", understanding that he has feelings, thoughts and volitions just as I do, even though he is different from me. So first we have what is called "the discovery of the other", the acknowledgement of his existence, his "being part of the world" in which I live. Only on this basis does it make sense to speak of a passage of feelings from me to another, of a sharing of feelings such as solidarity. This premise is fundamental, because it is a crucial point for philosophical thought about empathy. If we look at contemporary philosophy, in fact, we see an interrupted or uncompleted path. The reason lies in the fact that the phenomenologists separated the empathic experience from its bodily origin (the reference here is above all to the late thought of Husserl and to Heidegger, and to all their students). Notably, they directed their attention to the "Lebenswelt" (the world of life), that is, the natural, social, linguistic and cultural horizon in which we relate to others in an unconscious way, thereby losing all interest in the biological-organic roots of empathy.

MNs and Intention Understanding

Actions, according to standard philosophical wisdom, are in some way conceptually tied to intentions. An action is something an agent intentionally does.13 So, to recognize that some behavior (a hand movement) is an action (waving hello), one must understand the intention with which the agent acts (to greet you). An intention is standardly regarded as a mental state (Davidson, 2001; Grice, 1972; Searle, 1983). As such, intentions are something distinct from the behavioral sequence and are not directly detectable from the behavioral sequence. I endorse the view that intentions are mental states and that explaining intentional action involves attributing some sort of mental state.14 Strictly and broadly congruent mirror MNs fire in response to particular details about a behavior (that it is a whole-handed grasp, or that it is an eating-related grasp), but there is a metaphysical and epistemological gap between this mirror neuron activation and the mental state attribution required for action understanding. On my view, the activation of broadly and strictly congruent mirror neurons is like yawning in response to observing another's yawn. My yawning reflex does not constitute understanding or in any way imply that I understand that you are bored. Understanding that you are bored requires recognizing that you are in a particular mental state, and my yawning response does not constitute that recognition. Likewise, the automatic resonance of my MNs does not constitute recognizing your mental states.

13 In the action theory literature, theorists distinguish three senses of intentional action: acting with a certain intention, acting intentionally, and intention-for-the-future. There have been various attempts to give a unified account of these senses of intentional action (Anscombe, 1957; Davidson, 2001).
14 There is a debate over exactly what kind of mental state an intention is. I shall sidestep this debate here. In endorsing the intention-as-mental-state view, I am rejecting a host of non-standard views, inspired by Anscombe's account, that understand intention in terms of goal-oriented behavior (Moran and Stone, 2008; Thompson, 2008). These views hold that intending to do A is not a mental state; it is a form of being in progress toward some goal. Thus, on these views, explanation of intentional action need not invoke any psychological terms. These non-standard views will come up again later in this chapter.
Broadly and strictly congruent mirror neuron activation in an observer entails nothing about whether the observer attributes an intentional mental state to the actor. The same is true for logically related MNs. Neural firing in expectation of an event in a behavioral sequence does not constitute understanding another's mental states. These different kinds of mirror neuron activity may all be partially causally related to action understanding, with each kind of mirror neuron providing different kinds of information about the observed behavior, but none of the kinds of mirror neuron activity, individually or collectively, constitutes action understanding. Analogously, my yawning in response to your yawn may play a role in my inference that you are bored, but the yawning plays a metaphysically and epistemologically indirect role. Moreover, no kind of mirror MN activity is necessary or sufficient for genuine intention understanding. It is not necessary because we can infer an intention without mirror neuron firing. For example, suppose I tell you that Johnny went to the store and bought graham crackers, chocolate and marshmallows. From this you infer that Johnny intends to make s'mores. Although I have not done any brain scanning to test whether your mirror neurons would be activated in this scenario, I find it unlikely that they would be. What exactly would the mirror neurons mirror? The typical motor mirror neuron activations are for hand, foot and mouth movements, but none of that information plays a role in the story I describe, and yet you are still able to infer the intention. Furthermore, mirroring is not sufficient for intention understanding. Automatic neural resonance and anticipatory neural activation are not sufficient for understanding an intention. The automatic motor resonance, which I likened to yawning in response to observing a yawn, is not sufficient for understanding an intention, a mental representation. And neural firing in expectation of an event in a behavioral sequence is not sufficient for intention understanding, either.

MNs and Goal Understanding

MN activity, on my view, is more closely related to understanding goal-directed behavior than intentional actions. Understanding a target's goal-directed behavior amounts to understanding the target's orientation toward some thing in the world, which requires various motor and sensory representations. In contrast, actions transcend mere behaviors. An action is something an agent does intentionally.
Understanding a target's actions requires not only understanding the target's orientation toward some thing, state or event, but also understanding how the target represents her orientation toward that thing, state or event. That is, understanding an action requires understanding the target's mental representations (Davidson, 2001; Searle, 1983). It is standard in philosophy to distinguish behavior, the understanding of which requires motor and sensory representations, and action, the understanding of which additionally requires mental representations. My insistence on making this distinction between goal-directed behaviors and actions may seem like a mere terminological squabble, but this is not the case. I do not care what we call these two categories so long as we keep them distinct. The reason I am at pains to distinguish the two categories now is that, as will become clear in the next section, in the discussion of the cognitive importance of mirror neurons this distinction is often neglected. If mirror neuron activity constituted action understanding in the full philosophical sense of the term, then mirror neuron activity would constitute intention understanding, and this would have important implications for how mindreading is accomplished. But if MNs simply activate for goal-directed behavior, then the relationship between mirror neurons and intention understanding is less direct and less clear, and the relationship between mirror neurons and mindreading is even more tenuous. I shall argue that, contrary to the many bold claims about mirror neurons and social cognition, the latter claim is true. On my account, MNs are more closely related to understanding goals than intentions. However, MNs are still only tenuously related to goal understanding. Mirror neurons do not constitute, and are neither necessary nor sufficient for, understanding goal-directed behavior. MNs can be causally relevant to goal understanding. Importantly, though, they are not the only relevant areas of the brain for understanding goal-directed behavior. In addition to MNs, neurons in the superior temporal sulcus (STS), canonical neurons, and non-motor perceptual cues play key roles in understanding goal-directed behavior. Neurons in the STS have the same perceptual properties as mirror neurons but lack first-person motor properties. In other words, these neurons fire only when observing the target's goal-directed behavior, never simply when the subject acts. The STS is an area where others' behaviors are visually processed and has long been recognized as part of the neural circuitry underlying the perception of others' behaviors (Gazzaniga, et al., 2009, p. 549). Canonical neurons are the inverse of STS neurons; they have the same motor properties as mirror neurons but differ in their perceptual properties. These neurons fire when the subject grasps objects and when the subject sees a graspable object, but not when a target grasps an object. Canonical neurons are thought to process, in the first-person case, one's own motor movements toward an object and, in the third-person case, the potential for behavior directed toward the observed object (Gazzaniga, et al., 2009, p. 550). Non-motor perceptual cues are also relevant to understanding goal-directed behavior. In an influential study on mirror neurons, researchers found that mirror neurons in monkeys preferentially responded to grasping-to-eat over grasping-to-place behaviors even when these behaviors were motorically very similar (Fogassi, et al., 2005).
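Before turning to the details of that study, the division of labor among the three cell types just described can be summarized with a small schematic. The Python snippet below is only a didactic sketch of the response profiles as characterized in this section; the labels and the simple true/false coding are mine, not a published model.

    # Schematic response profiles of the three cell types discussed above.
    # True/False record whether the text describes that cell type as firing in
    # the given situation; this is a didactic summary, not experimental data.

    RESPONSE_PROFILES = {
        "mirror neuron": {
            "subject performs the act": True,
            "subject observes another's act": True,
            "subject merely sees a graspable object": False,
        },
        "STS neuron": {
            "subject performs the act": False,
            "subject observes another's act": True,
            "subject merely sees a graspable object": False,
        },
        "canonical neuron": {
            "subject performs the act": True,
            "subject observes another's act": False,
            "subject merely sees a graspable object": True,
        },
    }

    def responders(situation):
        """List the cell types described as responding in the given situation."""
        return [cell for cell, profile in RESPONSE_PROFILES.items() if profile[situation]]

    for situation in ["subject performs the act",
                      "subject observes another's act",
                      "subject merely sees a graspable object"]:
        print(situation, "->", responders(situation))

The table simply makes visible the claim of this section: mirror neurons are one of several populations contributing to the recognition of goal-directed behavior, each with a different response profile.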
For our purpose, the important feature of this study is that two factors helped the monkeys discriminate between grasping for eating and grasping for placing: whether the object grasped is food and whether a container is present in the context of the perceived action (Jacob, 2008). Both of these factors are purely perceptual cues. Importantly, purely perceptual cues do not themselves cause mirror neuron activity. If shown a picture of a container and a piece of food, one's mirror neurons would not fire. Only observing or performing motor acts causes mirror neuron activity. And yet perceptual cues are relevant factors in recognizing that some movement is a goal-directed behavior. My hypothesis is that perceptual cues modulate mirror neuron responses. That is, when observing some movement, perceptual cues aid in determining whether the movement is a goal-directed behavior. Thus, even if we narrow our focus from actions to goal-directed behavior, mirror neurons are still only a limited part of the story. Non-motor perceptual cues and STS and canonical neurons are proof that mirror neuron activity does not constitute, nor is it sufficient for, understanding goal-directed behavior. On my account, MNs causally contribute to understanding goal-directed behaviors, and this may play a role in understanding intentional actions. I regard mirror neuron activity as a contributory cause of understanding goal-directed behavior. In other words, MNs are neither necessary nor sufficient for goal understanding, yet they still causally contribute to understanding goal-directed behavior. Mirror neuron activity is not sufficient for understanding goal-directed behavior because determining whether some behavior is goal-directed depends on non-motor perceptual cues and non-mirror neuron areas of the brain. Furthermore, although mirror neuron activity may in fact be a mechanism we use to understand goal-directed behavior, it is certainly not logically necessary for understanding goal-directed behavior. We can imagine creatures (or computers!) that can recognize goal-directed behavior and yet lack MNs. I doubt that mirror neuron activity is even nomologically necessary for goal understanding, but my arguments do not hinge on this claim.

MN Firing and Action Understanding

Empirical evidence supports the hypothesis about the relation between mirror neuron firing and action understanding. First, mirror neurons are sensitive to the mode of presentation of actions. For example, monkeys' MNs do not fire when watching a familiar behavior on a video monitor (Ferrari et al., 2003; Keysers and Perrett, 2004) despite the fact that there is evidence that the monkey understands the behavior on the monitor. 15 This casts doubt on the claim that mirror neurons constitute or directly cause action understanding (Jellema et al., 2000). Second, humans' mirror neuron activity is insensitive to the difficulty of interpreting an action. Brass, et al. (2007) hypothesize that our remarkable capacity to flexibly interpret observed behaviors as intentional actions is mediated not by the mirror neuron system but by an inferential interpretive system located in the STS and anterior frontomedian cortex (aFMC), areas independently associated with perception of social stimuli, mentalizing, and action understanding. This study tests that hypothesis by having subjects, while in an fMRI machine, watch three short videos in which an actor operates a light switch with her knee.
The three videos show the actor operating a light switch with her knee in a plausible context (both of the actor's hands are fully occupied), an implausible context (the actor uses two hands to hold a small, lightweight item), and no context (the actor's hands are unoccupied). The subjects are required to come up with a rationale for each case. The experimenters found that the STS and aFMC activated to a level corresponding to the difficulty of ascribing a rationale to the actor. In other words, attributing an intention to the actor in the plausible context elicited the lowest activation of the STS and aFMC, and attributing an intention to the actor in the no-context scene elicited the highest activation of the STS and aFMC. The more difficult it is to ascribe an intention to the actor, the more strongly these areas activate. Subjects' mirror neuron activity, in contrast, was the same for each condition. Mirror neuron activity does not differentiate between harder-to-interpret actions and easy-to-interpret actions. This undermines the idea that mirror neurons constitute or directly cause genuine action understanding. Genuine intentional action understanding is mediated by the STS and aFMC, an inferential interpretive system. A third source of evidence for my account comes from studies on social non-human animals. Many non-human animals understand goal-directed behavior, and as new research continues to reveal, many non-human animal species have mirror neuron systems. Scientists argue that various monkey, dolphin, and elephant species have rudimentary mirror neuron systems (Blakeslee, 2006). The animals in which mirror neurons are present are capable of understanding goal-directed behaviors, as evidenced by various studies on the social behavior of these animal species. For example, classic behavioral experiments in cognitive ethology, which test monkeys' social cognitive skills, reveal clear limitations on monkeys' social cognitive abilities (Povinelli and Vonk, 2003), but they also show that monkeys are not mere behavior-readers. Experiments in cognitive ethology reveal that some monkeys are sensitive to the gaze direction of conspecifics and humans, follow others' gazes to out-of-view objects, and take into account opaque barriers (Tomasello, et al., 2003). They can also adapt their food retrieval strategy based on whether a dominant competitor can see or has seen the food location, and they can even manipulate whether a competitor can see them to gain strategic advantage (Hare, et al., 2000; Hare, et al., 2001; Hare, et al., 2006). These experiments indicate that monkeys are capable of understanding goal-directed behavior, but there is no unequivocal evidence that all these species are capable of genuine intention understanding. My account offers a unified explanation of these findings. MNs, in conjunction with other motor and perceptual mechanisms, work to detect goal-directed behavior, which the behavioral studies have shown these non-human animals are capable of detecting. If, contrary to my account, mirror neuron activity were to constitute action understanding, and thereby genuine intention understanding, then either we would have to accept the prima facie dubious conclusion that all these animal species have low-level mindreading abilities similar to ours in virtue of the similarity between our mirror neuron systems, or we would have to find some substantial, non-ad hoc difference between their mirror neuron systems and ours.
I think neither of these options is particularly compelling.16 16 It may be the case that some kinds of monkeys are capable of mentalistically understanding behavior. That is, some species may be capable of genuine intention understanding. If that is the case, it is not, on my view, in virtue of their mirror neurons. For all monkey species have mirror neurons, yet social cognitive skills vary across species. Chimpanzees, for example, exhibit more sophisticated social cognitive behaviors than macaques. As a matter of fact, this is one of the reasons why research protocols allow single-cell experiments on macaques but not on chimpanzees. Thus, even if some monkeys understand intentional attitudes, mirror neurons are not the relevant causal factor.

The Tea Party Experiment

Let's look at a particular study to see how my account explains the results. This example is from a foundational study on motor mirror neurons in humans referred to as the Tea Party experiment (Iacoboni, et al., 2005). Subjects observe the following scenes: context, action, and intention. The context scene contains only objects, e.g., a teapot, a cup, a plate with cookies. There are two kinds of context scenes. In one context scene, the objects are arranged neatly, suggesting that someone is going to have tea. In the other context scene are crumbs, a dirty napkin, and a tipped-over cup, suggesting that someone has already had tea. In the action scene, subjects observe a hand grasping a cup without any contextual cues. The intention scene combines the context and action scenes, and subjects observe a hand grasping a cup in the neat or messy context. 17 The researchers found higher activation in MN areas while subjects observed the intention scene embedded in the clean and dirty contexts, compared to observing the action scene. They also found higher activity in mirror neuron areas while subjects observed the intention scene with the context that suggested drinking, compared to the context that suggested cleaning up. In addition, half of the participants in this study were instructed to pay attention to the intention displayed by the behavior they were observing, while the other half were not told anything about intentions. The researchers found no difference in MN activation between the participants in each group, but in the debriefing session all participants were able to report accurately the intentions associated with each version of the intention scene. What should we make of the data from the Tea Party experiment? There is some mirror neuron activity in subjects observing the action scene, but there is higher activity when they are observing the intention scene, especially the neat version of the intention scene. On my view, the mirror neuron activity in the action scene is due to the activation of strictly and broadly congruent MNs, which motorically resonate in response to the observed features of the behavior, providing particular bits of information about the observed behaviors. The increased activity during the intention scene is due to the additional activation of logically related mirror neurons, which function as an anticipatory or predictive mechanism. It has been shown that mirror neurons activate more strongly when observing familiar behaviors (Calvo-Merino, et al., 2006). When a subject observes a familiar behavioral sequence, his logically related mirror neurons fire in expectation of a certain behavioral event.
In the experiment, logically related mirror neurons do not activate for the context scene because there is no goal-directed behavior to predict. The context scene contains only objects. Logically related MNs do not activate for the action scene because, again, there is nothing to predict. The action scene shows a hand grasping a cup, and there is no information that indicates any goal beyond grasping the cup. The increased activation of logically related mirror neurons for the intention scene indicates an expectation of further unobserved behavior, drinking (in the neat version) or cleaning, perhaps (in the messy version).18 This mirror neuron activity, in conjunction with the perceptual cues (e.g., crumbs and dirty napkins), neurons in the STS and canonical neurons, causes one to recognize, and anticipate the completion of, a goal-directed behavior. This expectation or prediction may sometimes be involved with intention understanding in the sense that it may provide information relevant to inferring intentions, but it is not constitutive of intention understanding. Finally, what should we make of the post-experiment debriefing about the intentions associated with the neat and messy versions of the intention scene? In light of the Brass, et al. study explained above, I think we have pretty good evidence that mirror neuron activation does not reflect understanding of the intentions associated with each scene. I will discuss this fully in the next section. For now, I shall just say that I think the post-experiment debriefing is uninformative with regard to the neural correlates of intention understanding. There are many possible explanations of this feature of the Tea Party experiment, and the hypothesis that this feature proves that mirror neuron activity constitutes intention understanding is plausible only if you already believe that MNs underlie intention understanding.

18 The higher activation while observing the neat version of the intention scene may reflect the fact that drinking is pretty clearly the goal of a hand reaching for a cup in a scene set up as a tea party, whereas the goal of a grasping motion in the scene with crumbs and dirty napkins is more ambiguous. The scene does not unequivocally suggest cleaning. Perhaps the hand is reaching for crumbs to eat. Another possibility is that drinking is a more basic action than cleaning, and logically related mirror neurons fire more in response to basic actions. Either way, these results are compatible with my explanation of the results of the study.
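To keep the design and the qualitative pattern of the Tea Party experiment straight, here is a small schematic in Python. It only restates, in compact form, the conditions and the direction of the effects described above; the comparisons are ordinal summaries of the reported pattern, not data from the study.

    # Schematic of the Tea Party experiment (Iacoboni, et al., 2005) as described
    # above. The pairs encode only the qualitative comparisons reported in the
    # text (higher vs. lower mirror-neuron-area activation); they are not data.

    SCENES = ["context (neat)", "context (messy)", "action only",
              "intention (messy)", "intention (neat)"]

    REPORTED_COMPARISONS = [
        ("intention (neat)", "action only"),
        ("intention (messy)", "action only"),
        ("intention (neat)", "intention (messy)"),
    ]

    # Further reported result: instructing half of the subjects to attend to the
    # intention made no difference to mirror-neuron activation, although all
    # subjects could report the intentions afterwards.

    for higher, lower in REPORTED_COMPARISONS:
        print("MN activity:", higher, ">", lower)

Nothing in the schematic goes beyond what is already stated in the text; it is meant only as a reading aid before the discussion changes topic.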
Psychophysical Supervenience and Mental Causation (MC)

I now want to show the theoretical concordance of this discovery with a reductive theory of mind, and particularly with the reductive supervenience theory of mind defended by Kim (1993, 1996, 1998, 2005). Indeed, it seems to me that the way in which the so-called "mirror system" works is theoretically compatible with some crucial concepts and principles of Kim's metaphysics of mind. They are, notably: 1) the concept of reductive psychophysical supervenience, according to which a mental property is realized by a species-specific physical/neural property, making use of a functional model of reduction; 2) the pre-emption of a physical cause over a mental cause, and the consequent redundancy and unintelligibility of mental causation; 3) the principle of physical causal closure, according to which causes in a genuine sense are always found in the physical domain; 4) the multi-layered metaphysical model of the world, which distinguishes between ontological "levels" (micro/macro properties) and theoretical/conceptual "orders" (physical, mental, social, etc.). 1) Let us consider these points in order, starting from the concept of the "psychophysical supervenience principle", canonically formulated by Kim: «The mental supervenes on the physical in that any two things (objects, events, organisms, persons etc.) exactly alike in all physical properties cannot differ in respect of mental properties» (Kim, 1996, p. 10). We could say, very synthetically: no mental difference without physical difference. It is important to stress that this principle does not state that things which are psychologically indiscernible must therefore be alike in every physical respect, but only the converse thesis. So psychophysical supervenience claims only, as a matter of necessity, that two or more creatures cannot be psychologically different while being physically identical (in a negative formulation), or that physically identical creatures must also be psychologically identical (in a positive formulation). We can recognize three basic principles underlying the canonical concept of psychophysical supervenience, as it was officially introduced into the philosophy of mind by Davidson in his "Mental Events" (1970). (i) Covariance (only apparently asymmetrical) of properties: if two things or individuals are indiscernible in relation to their base physical properties, they will be indiscernible in relation to their mental or higher-level properties too, while the converse does not necessarily hold. (ii) Dependence: supervenient properties depend on, or are determined by, their basic physical properties. (iii) Irreducibility: supervenient mental properties are not reducible (in the canonical version of supervenience) to their physical base properties, from a nomological and explanatory point of view. Leaving aside the long and complex history of this concept, and starting from Kim's most recent, reductive interpretation of it (1998), the principle is formulated in different ways, including the following two: [Indiscernibility definition of mind-body supervenience] Mental properties supervene on physical properties, in that necessarily any two things (in the same or different possible worlds) indiscernible in all physical properties are indiscernible in mental respects (Kim, 1998, p. 10). [Mind-body supervenience in relation to time] Mental properties supervene on physical properties in the sense that if something instantiates any mental property M at t, there is a physical base property P such that the thing has P at t, and necessarily anything with P at a time has M at that time (Kim, 1998, p. 39). These formulations mean that a physical base property ("P") is necessarily sufficient for the supervenient mental property ("M"), because supervenient properties depend on, or are determined by, their subvenient species-specific properties, and there is an ontological identity between them, both being instantiated at the same time t. It seems to me that these distinctive features of the concept of psychophysical supervenience are fully consistent, at the theoretical level, with the functioning of mirror neurons and the "mirror system", according to which perceptual and cognitive processes are realized on the same neural circuitry as motor processes, depending on how that circuitry is engaged. In this way, a volition or an intention may be read as a motor disposition realized on its own neural circuitry.
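The two Kim formulations quoted above can be written compactly in first-order modal notation. The rendering below is a standard textbook-style formalization offered for clarity, not Kim's own symbolism; \(\mathcal{P}\) and \(\mathcal{M}\) stand for the sets of physical and mental properties.

\[
\text{(Indiscernibility)}\qquad \Box\,\forall x\,\forall y\,\Big[\forall P\!\in\!\mathcal{P}\,\big(Px\leftrightarrow Py\big)\;\rightarrow\;\forall M\!\in\!\mathcal{M}\,\big(Mx\leftrightarrow My\big)\Big]
\]

\[
\text{(Time-indexed)}\qquad \forall x\,\forall t\,\forall M\!\in\!\mathcal{M}\,\Big[M(x,t)\;\rightarrow\;\exists P\!\in\!\mathcal{P}\,\big(P(x,t)\;\wedge\;\Box\,\forall y\,\forall t'\,\big(P(y,t')\rightarrow M(y,t')\big)\big)\Big]
\]

Read in this way, the second formula makes explicit the claim in the text that the physical base property P is necessarily sufficient for the supervenient mental property M. The first formula, as written, compares individuals within a world; Kim's parenthetical "in the same or different possible worlds" strengthens the comparison across worlds, which would require a slightly more elaborate notation.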
2) Hence it is clear that genuine causal action, or causation, takes place first of all at the physical level, in the activation of the neural circuits that realize the motor processes, configuring intentionality itself as a preparation for action. In this way, as Kim himself says (1998), mental causation turns out to be redundant, because physical causation is necessary and sufficient. Similarly, Kim's functional model of reduction, according to which the gene is the role played by DNA and lightning is the function performed by the electric discharge, forms part of a reductive explanation of many behaviours that we would erroneously consider wholly cognitive. Instead, in light of these discoveries, studies and experiments, many cognitive behaviours appear to be largely "embodied" and involuntary, in spite of appearances (as with awareness processes such as decision making and the will, notably as described by Libet, 2004). Consider the figure in which Kim (1996, p. 51) distinguishes supervenience from physical causation and from mental or supervenient causation. Figure 1. According to the supervenient-causation model, it is in the first place the neural state that causes, on one side, a certain neurophysiological event, the muscle contraction, on which wincing supervenes, while, on the other side, it constitutes the supervenience base of pain, which causes wincing at the mental level. Generally speaking, this model treats higher-order properties as necessarily dependent on, or supervenient on, their realizers at a deeper physical level, and treats mental causal processes as necessarily dependent on, or supervenient on, those at a deeper physical level, which is the only genuinely causal level. 3) The reason is that the physical domain is causally closed: there is in fact no mental or non-physical cause in its own right, that is, without a physical base of realization. Kim (1998) describes his crucial "principle of causal closure" in these words: if you pick any physical event and trace out its causal ancestry and posterity, that will never take you outside the physical domain. That is, no causal chain will ever cross the boundary between the physical and the nonphysical. (Kim, 1998, p. 40) Thus an intrinsic mental causation cannot exist, and it turns out to be completely redundant and unintelligible. But this does not automatically mean its exclusion, because its absence would raise insuperable explanatory problems for understanding and explaining our cognitive faculties, such as language, reasoning and memory. From these considerations there emerges the increasingly crucial role of the body and its physical laws in describing, ontologically, even the most abstract cognitive processes, following the idea of so-called "embodied cognition". But, as is well known in the tradition of analytic philosophy, reasons are not causes. So if causes are physical descriptions, reasons and purposes may well cross the narrow domain of the physical world, and thus make room for other, more suitable explanatory theoretical domains, such as psychology, sociology, and so on.

MC between Level and Order Hierarchy

4) Finally, I consider Kim's distinction between "levels" (micro/macro) and "orders" (physical/mental), drawn above all in the third chapter of Mind in a Physical World, very interesting and important from a logical and metaphysical point of view for a reductive theory of mind, and it deserves adequate analysis and reflection both for its ontological and for its causal consequences. Let us get to the heart of the matter, following the analytic style of argument notably used by Kim.
His starting point, accepted by the majority of contemporary philosophers of mind, is that mental properties, or, more generally, second-order properties, are realized in physical/neural properties, which serve as their supervenience base. This realization relation is what is supposed to give rise to the multilayered structure of levels. But what, then, is the relation between the level hierarchy and the order hierarchy? Kim stresses that «both second-order properties and their first-order realizers are properties of the same entities and systems» (Ib., p. 82). Which is to say that the order hierarchy unfolds within a single level, first-, second- and n-order properties being nothing but properties of the very same object or system at a given micro/macro level. So, in Kim's telling words, «when we talk of second-order properties and their realizers, there's no movement downward, or upward, in the hierarchy of entities and their properties ordered by the micro-macro relation» (Ib., p. 91). And again, to distinguish the order series from the level series: «the series created by the second-order/realizer relation does not track the ordered series of micro-macro levels; it stays entirely within a single level in the micro-macro hierarchy» (Ib., p. 82). I insist on this distinction between the order hierarchy and the level hierarchy because I am convinced that it is much more than a terminological distinction. Indeed, it seems crucial at least for the purpose of attributing causal efficacy to certain properties. We may observe, for example, that order-properties within a supervenient progression from first (physical) properties to second (mental) properties, such as the property of having one's C-fibres firing and the feeling of pain, are all properties of entities at a single micro-macro level, with no further injection of causal powers at the higher orders. By contrast, «spin, charm, and such are properties of elementary particles, and they have no application to atoms, molecules, and higher objects in the micro-macro hierarchy; transparency and inflammability are properties of aggregates of molecules, and they have no place for atoms or more basic particles» (Ib., p. 83). In the same way, consciousness and intentionality are properties of biological organisms, or at least of neural systems, and they have no application to entities which are micro relative to them. If this is right, we should properly speak of first-, second- and n-order properties within a metaphysical hierarchy of orders only for one and the same object or system, and it is with respect to this order hierarchy that we should conceive the relation of psychophysical supervenience. It is well known, moreover, that this logical and metaphysical relation of dependence and determination of mental properties on physical properties entails, according to Kim (1998), the reducibility and causal inefficacy of mentality, whose apparent causal powers would be inherited from its physical base. This idea, from my point of view, clearly leads to a reductive way of seeing every mental process, as the working of the MNS has shown. Vice versa, the micro-macro level hierarchy does not concern properties of one and the same object, but different properties of different objects, depending on their complexity within the micro-macro progression.
Within this progression one may speak of supervenience too, but only in a mereological sense, that is, of a "micro-based property" (a "structural property" in David Armstrong's terms) on its micro-constituents: a water molecule, for example, mereologically supervenes on two hydrogen atoms and one oxygen atom. But no micro-constituent, none of the atoms that constitute water, already has the causal powers the molecule exhibits. Following Kim's argument, «H 2 O molecules have causal powers that no oxygen or hydrogen atoms have» (Ib., p. 85). In the same way, «a neural assembly consisting of many thousands of neurons will have properties whose causal powers go beyond the causal powers of the properties of its constituent neurons, or subassemblies, and human beings have causal powers that none of our individual organs have» (Ib., p. 85). In this light, Kim's functional model of reduction seems consistent with the idea of the "emergence" of new causal powers as we climb the level hierarchy, from the level of micro-constituents (such as atoms, neurons, etc.) up through higher-level properties (organ properties, apparatus properties, up to the properties of human beings), but only if these complex properties are micro-based properties. In this way, we might claim proper causal powers for emergent complex properties, such as cognitive faculties and consciousness in human beings, in spite of their reductive and functional explanation via realization. So macro-properties in the level hierarchy, unlike second-order properties in the order hierarchy, can, and in general do, have their own causal powers, which go beyond the causal powers of their micro-constituents. Herein lies the importance of the distinction between orders and levels: its aim is not only to make our language clearer, but also to clarify which properties have causal powers of their own and which do not. Now, if macro-properties can have, and generally do have, their own distinctive causal powers, in addition to the properties of their micro-constituents, then we should recognize the possibility of downward causation, at least from macro-properties to the properties of their own micro-constituents. If we consider consciousness a macro-property that is emergent from, or mereologically supervenient on, several micro-based properties (as most philosophers of mind do), such as the properties of organs and apparatuses, we should conclude that it could have new, genuine causal powers able to causally influence lower-level systems and constituents. A key factor for attributing causal efficacy to macro-properties is their temporal relation to micro-properties. Important experiments by the physiologist B. Libet (2005) on the "time of consciousness" show how the "timing" of the activation of brain processes, on the one hand, and of "mental" processes and/or causes (will, decision-making processes, and conscious phenomena generally), on the other, is essential for maintaining a reductive theory of mind. Coming back to Kim and mental properties (1998), the macro-properties of complex systems can have, and generally do have, their own distinctive causal powers that go beyond the causal powers of their micro-constituents. Kim himself says it explicitly: «a neural assembly consisting of many thousands of neurons will have properties whose causal powers go beyond the causal powers of the properties of its constituent neurons, or subassemblies, and human beings have causal powers that none of our individual organs have» (Kim, 1998, p. 85).
Among the macro-properties Kim undoubtedly includes intentionality and consciousness, attributing special causal powers to them because they are located at a higher ontological level: «consciousness and intentionality are properties of biological organisms» (Kim, 1998, p. 83). As regards consciousness, intentionality, and complex mental phenomena, the discovery of mirror neurons, showing as it does a close neuro-physiological link between motor processes and cognitive functions such as perception and vision, confirms, in my opinion, that our descriptive orders for a behaviour, such as intentionality and will, lie on the same physical level as the implementation of that behaviour, often even before our awareness of it, as shown by Libet's (2004) experiments on consciousness and will. In conclusion, the discovery of MNs, besides giving an immediate biological foundation to the concept of empathy and related ideas, makes us understand, in my opinion, as is well observed in the Foreword to Rizzolatti and Sinigaglia (2006), that «The same rigid boundary between perceptual, cognitive and motor processes ends up being largely artificial: not only is perception immersed in the dynamics of action, being more articulated and complex than previously thought, but the brain that acts is first and foremost a brain that understands. This is [...] a pragmatic, pre-conceptual and pre-linguistic understanding, and yet no less important, since many of our much celebrated cognitive abilities rest on it» (Ibid., p. 3).
Role of Prostaglandins in Neuroinflammatory and Neurodegenerative Diseases Increasing data demonstrate that inflammation participates in the pathophysiology of neurodegenerative diseases. Among the different inflammatory mediators involved, prostaglandins play an important role. The effects induced by prostaglandins might be mediated by activation of their known receptors or by nonclassical mechanisms. In the present paper, we discuss the evidence that links prostaglandins, as well as the enzymes that produce them, to some neurological diseases. Neuroinflammation and Neurodegeneration Neuroinflammation plays a key role in the progression or resolution of pathological conditions. Inflammatory responses in the brain parenchyma have been associated with the etiopathogenesis of different neurological disorders, including central nervous system (CNS) infection, brain ischemia, multiple sclerosis, Alzheimer's disease, and Parkinson's disease [1][2][3][4][5][6][7]. Thus, it is now clear that neuroinflammation is a key feature shared by many neurodegenerative disorders [8,9]. Different CNS cells, such as microglia, astrocytes, oligodendrocytes, and neurons, produce a plethora of inflammatory mediators, which act either in a paracrine or an autocrine fashion, leading to an intricate cross-talk between these different cell types. Among these mediators, many studies have demonstrated that CNS cells produce prostanoids and that these mediators might contribute to normal CNS function or enhance neuroinflammatory and neurodegenerative processes [10]. Herein, we review the current knowledge on the role of prostaglandins, as well as the enzymes that synthesize them, in neuroinflammatory and neurodegenerative diseases. Roles of Prostaglandins in Neuroinflammation: In Vitro and In Vivo Evidence Given the variety of prostaglandins presently known, it is reasonable to speculate that these lipid mediators might play different roles in the CNS. Below, we describe some in vivo and in vitro data regarding the potential role of specific prostanoids in neuroinflammation. PGE 2 . To date, three prostaglandin (PG) E synthases (PGESs) have been characterized: the microsomal PGESs (mPGES-1 and mPGES-2) and the cytosolic PGES (cPGES) [11][12][13][14]. mPGES-1 is an inducible enzyme and is also expressed in activated microglia [15,16]. There are at least four characterized PGE 2 receptors, namely, EP1, EP2, EP3, and EP4. This prostaglandin modulates the expression of inflammatory mediators by microglial cells. For example, PGE 2 and EP agonists inhibited the expression of inducible nitric oxide synthase (iNOS) and nitric oxide (NO) generation [17] and enhanced the expression of cyclooxygenase (COX)-2 induced by lipopolysaccharide (LPS) in cultured microglia [18]. Moreover, an EP2 agonist inhibited interleukin (IL)-1β release by cultured primary rat microglia stimulated with LPS, although no reduction of this cytokine was observed with EP1, EP3, and EP4 agonists [19]. Intraperitoneal injection of LPS increased the expression of EP4 receptors in microglial cells and in the hippocampus of mice [20]. Interestingly, activation of EP4 receptors reduced the expression of different cytokines, COX-2, and iNOS in BV-2 and primary mouse microglial cells [20]. PGD 2 . PGD 2 has also been shown to be important in neuroinflammatory conditions. A 6-day infusion of LPS into the fourth cerebral ventricle of rats enhanced PGD 2 production in the brain [21].
It has been shown that PGD 2 produced by microglia acts on DP1 receptors of astrocytes, leading to astrogliosis. Moreover, oligodendroglial apoptosis was reduced by a hematopoietic prostaglandin D synthase (HPGDS) inhibitor and in HPGDS-null mice, suggesting an important effect of PGD 2 on demyelination in twitcher mice, a model of Krabbe disease [22]. Expression of DP1 and HPGDS is also increased in the brains of patients with Alzheimer's disease [23]. PGD 2 also induced apoptosis of mouse oligodendrocyte precursor (mOP) cells, which could interfere with the demyelination process that occurs in multiple sclerosis [24]. It was shown that mice deficient in lipocalin-PGDS have an increased number of apoptotic neurons and oligodendrocytes, suggesting a protective role of lipocalin-type PGDS in the genetic demyelinating twitcher mouse [25]. 2.3. 15-Deoxy-Δ 12,14 -Prostaglandin J 2 (15d-PGJ 2 ). 15d-PGJ 2 is a metabolite of PGD 2 and is formed from PGD 2 by the elimination of two molecules of water. At least some effects of 15d-PGJ 2 are mediated by activation of peroxisome proliferator-activated receptor (PPAR)γ. This prostaglandin has been shown to inhibit NO and tumor necrosis factor (TNF)-α production as well as expression of major histocompatibility complex (MHC) class II in activated microglia, suggesting that this prostaglandin might be important in modulating microglial functions [26]. Similar effects, such as downregulation of iNOS and cytokines, have also been observed in astrocytes [27]. PGI 2 . Few studies have been carried out to investigate the role of PGI 2 in the CNS. In general, these studies suggest a neuroprotective role for PGI 2 against different stimuli. 2.5. PGF 2α . In rat primary neuronal culture, hypoxia increased PGF 2α content. Importantly, previous addition of this prostaglandin to the culture medium exacerbated hypoxic injury [31]. PGF 2α reduced TNF-α in primary spinal cord cultures stimulated with LPS [32]. In a model of unilateral middle cerebral artery occlusion, mice knocked out (KO) for FP, the receptor for PGF 2α , had less neurological deficit and smaller infarct volumes [33]. The KO animals were also less sensitive to excitotoxicity induced by unilateral intrastriatal N-methyl-D-aspartate injection. In agreement with that, in the same model, the FP agonist latanoprost increased neurological deficit and infarct size in wildtype (WT) mice [33]. Roles of Prostaglandins in Neurodegenerative Diseases As previously mentioned, there is strong evidence that inflammation contributes to the etiopathogenesis of neuroinflammatory and neurodegenerative diseases. Below, we discuss the involvement of prostaglandins in these neuropathological conditions. Multiple Sclerosis (MS). A neuroinflammatory component is very evident in the etiopathogenesis of MS. MS is an autoimmune demyelinating disorder characterized by distinct episodes of neurologic deficits attributable to white matter lesions. It is the most common of the demyelinating disorders and affects predominantly northern Europeans. The disease becomes clinically apparent at any age, although onset in childhood or after 50 years of age is relatively rare. Women are affected twice as often as men. In most individuals with MS, the illness shows relapsing and remitting episodes of neurologic deficits. The frequency of relapses tends to decrease during the course of the disease, but there is a steady neurologic deterioration in a subset of patients [34].
Modeling clinical aspects of any human disease in rodents and cells is a big challenge in all fields of research. However, it is especially challenging to model MS, because this is an exclusively human disease, its etiopathogenesis is unknown, and it is multifaceted, occurring in a relapsing-remitting manner. As the toxin-induced models of demyelination, such as those induced by cuprizone, ethidium bromide, and lysolecithin, are important for understanding demyelination and remyelination but do not resemble the human disease as closely as the autoimmune model (experimental autoimmune encephalomyelitis, EAE), this paper focuses on the roles played by prostaglandins in the EAE model because of its presumed higher predictive validity [35]. 3.1.1. Phospholipase A 2 (PLA 2 ) and COX. There is a large body of evidence demonstrating the role played by prostanoids in the onset and progression of EAE in a wide variety of animal models as well as in in vitro studies. Within the last decade, some studies have demonstrated that cytosolic PLA 2 (cPLA 2 ) plays a key role in the etiopathogenesis of EAE [36][37][38][39]. There is evidence supporting distinct roles played by different isoforms of PLA 2 in the onset or progression of EAE [40]: cPLA 2 plays a role in the onset of EAE, calcium-independent PLA 2 in the onset and progression, and secretory type II PLA 2 in the later remission phase. Immunohistochemical labeling of cPLA 2 was shown in immune and endothelial cells in the spinal cord lesions of mice with EAE induced by myelin oligodendrocyte glycoprotein (MOG). Both preemptive and therapeutic treatments with a selective cPLA 2 inhibitor resulted in a marked reduction in the onset and progression of EAE. Accordingly, the reduced clinical score paralleled reduced spinal protein concentrations of COX-2 and reduced gene expression and protein concentrations of dozens of inflammatory mediators, including several cytokines and chemokines implicated in the etiopathogenesis of EAE [36]. Moreover, selective inhibition of cPLA 2α prevents EAE and suppresses Th1 and Th17 responses [38]. cPLA 2α inhibitors diminish the ability of antigen-presenting cells to induce antigen-specific effector T-cell proliferation and inflammatory cytokine production, inhibit microglial activation, and increase oligodendrocyte survival [39]. The latter study also showed that if cPLA 2α inhibitors are administered at the peak of disease or during remission in a relapsing-remitting model, the subsequent relapse is abolished. Consistent with these pharmacological studies, a genetic study showed that cPLA 2α -deficient mice are resistant to EAE [37]. COX-1 and -2 are upregulated in the CNS of animals in different EAE models [36,38,41]. Accordingly, different selective and nonselective inhibitors of COX isoforms induce beneficial effects in different animal models of EAE. EAE onset is delayed if the diet is supplemented with acetylsalicylic acid shortly after its induction in Lewis rats [42]. Indomethacin, another non-selective COX inhibitor, attenuates the progression of EAE [43]. PGE 2 . PGE 2 seems to be the eicosanoid most strongly implicated in EAE onset and progression. Bolton and colleagues investigated the CNS concentrations of PGE 2 , 6-oxo-PGF 1α , and PGF 2α in acute EAE-affected guinea pigs [44].
They showed that an increase in PGE 2 concentration in the spinal cord and cerebellum precedes EAE onset, whereas the other two prostanoids were found to peak after observation of the first clinical signs of EAE. The behavioral syndrome associated with EAE is also preceded by an increased CNS concentration of PGE 2 in mice [45]. A wide screening that examined the correlation between many arachidonic acid (AA) pathway products and EAE onset and progression showed that PGE 2 (concomitantly with its receptors EP1, EP2, and EP4) is synthesized more markedly than other eicosanoids [46], suggesting an important role in exacerbating EAE. However, dual roles played by PGE 2 have recently been shown in mouse EAE: PGE 2 exacerbates Th1 and Th2 responses via EP2 and EP4 receptors during mouse EAE onset and protects the brain from immune cell infiltration via the EP4 receptor [47]. mPGES-1 upregulation occurs in microglia/macrophages in the spinal cord lesions of mice with EAE induced by MOG as well as in brain tissues from MS patients. mPGES-1-deficient mice exhibit a better clinical score and suppressed Th1 and Th17 responses compared with nongenetically modified control mice after EAE induction [46]. Given the untoward gastric and cardiovascular effects induced by COX inhibitors [48], there is an eagerness to discover compounds that target mPGES-1 for treating inflammatory diseases [49][50][51], because this enzyme is downstream of COX-2 in the AA pathway. 15d-PGJ 2 . Systemic treatment with 15d-PGJ 2 inhibits EAE progression in mice, and this is associated with reduced demyelination, neuroinflammation, IL-12 production by macrophage/microglial cells, T-cell proliferation, and IL-12-induced T-cell responses [52]. Moreover, pretreatment with this agonist of PPARγ delays the onset of EAE and reduces the spinal cord infiltration of CD4 + T cells and macrophages [53]. 15d-PGJ 2 suppresses the production of cytokines and/or chemokines in cultured T cells, microglia, and astrocytes [53][54][55]. Providing further support for the role played by 15d-PGJ 2 in EAE etiopathogenesis, it was shown that PPARγ antagonists reverse the inhibition of EAE clinical signs and Th1 responses by this cyclopentenone prostaglandin [56]. Other Prostaglandins. As there is a correlation between increased spinal PGDS concentration and the initiation of the relapsing phase of EAE, a role for this isomerase in this phenomenon has been suggested [57]. Indeed, PGD 2 is released from mast cells in allergic reactions, and it is suggested to modulate allergic inflammation [58,59]. On the other hand, a more recent study showed that the PGD 2 , PGI 2 , and 5-lipoxygenase pathways are suppressed in the acute phase of EAE and return to constitutive levels in the chronic phase [46]. However, in a relapsing-remitting model, PGD 2 remained unaffected throughout all phases [41]. Alzheimer's Disease (AD). The first evidence supporting a role for inflammation in AD onset emerged in the late 1980s, when many signs of inflammation were observed in postmortem brains from AD patients, such as activated lymphocytes and microglial cells in plaque and tangle lesions, presence of complement proteins, cell lysis, and opsonisation of debris [60][61][62][63][64]. 3.2.1. PLA 2 and COX. It was hypothesized that the long-term use of nonsteroidal anti-inflammatory drugs (NSAIDs) could reduce the risk for AD or delay disease onset. Indeed, McGeer et al.
[65] observed a clear negative correlation between the prevalence of AD in the general population and that in rheumatoid arthritis patients taking NSAIDs, mainly salicylates. Reinforcing this evidence, a clinical trial conducted shortly afterwards showed that treatment with indomethacin, a nonselective COX inhibitor, improves cognitive deficits in AD patients [66]. Since then, epidemiological studies have shown either beneficial or detrimental effects of COX inhibitors on AD risk and delay of onset, though beneficial effects are mostly observed [67]. Despite the controversy, these studies clearly show that prostanoids play an important role in AD etiopathogenesis. cPLA 2 , which cleaves AA from cellular membrane phospholipids, is elevated in the AD brain [68]. The cyclooxygenation and subsequent isomerization of AA produces prostaglandins, which regulate immune responses and neurotransmission [69,70]. Accordingly, increased expression of COX-1 and -2 is observed in AD-affected brains [71,72]. One of the most versatile products of this cascade is PGE 2 , which is produced by glial cells and neurons. PGE 2 . An increased expression of mPGES-1 and mPGES-2 is observed in the brains of AD patients [73,74]. Moreover, patients with probable AD have higher cerebrospinal fluid (CSF) concentrations of PGE 2 than age-matched control subjects [75]. It has been shown that PGE 2 increases amyloid precursor protein (APP) gene expression and production in vitro [76][77][78]. This effect is inhibited by immunosuppressants in astrocytes [77] and is associated with EP2 receptor activation in microglial cells [78]. On the other hand, there is evidence supporting an anti-inflammatory role played by PGE 2 mediated by the EP4 receptor in LPS-stimulated cultured microglial cells [20]. However, PGE 2 increases APP production via both EP2 and EP4 receptors (but not via EP1 and EP3) both in vitro and in vivo [76,79]. Hoshino et al. [76] showed that PGE 2 -dependent internalization of the EP4 receptor increases γ-secretase activity, which in turn leads to higher proteolysis of APP. In transgenic mice overexpressing APP, selective inhibition of COX-2 blocks amyloid β (Aβ)-induced suppression of hippocampal long-term potentiation (LTP) and memory function independently of reductions in Aβ42 and inflammatory cytokines, but markedly dependent on PGE 2 concentrations, showing an additional mechanism by which NSAIDs may protect against AD progression and an important synaptic role of PGE 2 in this setting [80]. EP2 receptors are important mediators of PGE 2 actions on the electrophysiological properties of hippocampal neurons, as EP2 −/− mice exhibit cognitive deficits in social memory tests associated with a deficit in long-term depression in the hippocampus [81]. Pharmacological studies corroborate these findings. Either exogenous or endogenous PGE 2 , but not exogenously applied PGD 2 or PGF 2α , regulates hippocampal neuronal plasticity [69,70]. PGD 2 and 15d-PGJ 2 . One of the first studies to assess prostaglandin concentrations in postmortem cerebral cortices of probable AD patients showed that only PGD 2 was increased in comparison with age-matched control subjects [82]. Indeed, PGDS expression was found to be localized in microglial cells surrounding senile plaques, and DP1 receptor expression was observed in microglial cells and astrocytes within senile plaques in human AD brains.
In Tg2576 transgenic mice, a model of AD, DP1 receptor expression increases in parallel with Aβ deposition [23]. As 15d-PGJ 2 induces neuronal apoptosis [83], it was initially suggested that this prostanoid is associated with neurodegeneration. However, it was later shown that 15d-PGJ 2 reduces the microglial production of NO, IL-6, and TNF-α induced by Aβ40, which suggests an anti-inflammatory, indirectly neuroprotective effect [84]. Accordingly, not only 15d-PGJ 2 but also troglitazone and ciglitazone, other compounds known to activate PPARγ, attenuate the Aβ-induced impairment of hippocampal LTP in vitro, supporting a possible beneficial effect on AD progression. Parkinson's Disease (PD). PD is the second most common neurodegenerative disease, characterized by abnormal motor symptoms such as stiffness, postural instability, slowness of movement, resting tremor, and bradykinesia. The neuropathological features of PD are the progressive death of dopaminergic neurons in the substantia nigra (SN) pars compacta that project to the striatum. The exact cause of this cell death is not clear, but recent studies have shown that the process may involve inflammatory reactions, in addition to oxidative stress, mitochondrial dysfunction, neural excitotoxicity, and insufficient neurotrophic factors [85][86][87]. It is known that, in the SN of PD brains, microglia are activated [5], and this activation has been strongly associated with the CNS pathology of PD through the production of proinflammatory and cytotoxic factors, such as cytokines, chemokines, NO, reactive oxygen species (ROS), and AA metabolites [88,89]. Many studies have also demonstrated an alteration of COX-2 expression in PD. In fact, different studies have shown an upregulation of COX-2 in animal models of PD [92][93][94]. Increased COX-2 expression has also been demonstrated in the SN of postmortem PD specimens in comparison to normal controls [95,96]. Moreover, it has been shown that COX inhibition [93,94,97,98] and COX transgenic ablation [99][100][101] in in vivo models of PD increased the survival of dopaminergic neurons. However, this effect was not observed in all studies. Rofecoxib, a COX-2 inhibitor, did not change MPTP-induced neurodegeneration and, paradoxically, caused a significantly augmented basal prostaglandin production [92]. Regular use of NSAIDs is associated with a lower risk of PD compared with nonregular use of these drugs [85,102]. However, this is still controversial, since recent studies could not demonstrate a protective effect of NSAIDs in PD [103][104][105]. Considering that these drugs might have other mechanisms of action unrelated to COX inhibition, it is important to evaluate the effect of specific compounds in the prevention or treatment of PD. PGE 2 . It has been observed that PGE 2 is significantly elevated in the CSF and SN of PD patients in comparison to control subjects. Moreover, incubation of slices of SN with AA induced an increased production of PGE 2 , suggesting an enhancement of the enzymes responsible for its production [106]. Release of aggregated α-synuclein, a major component of Lewy bodies in PD, after neuronal damage may activate microglia. This activation could, in turn, lead to the production of proinflammatory mediators, such as PGE 2 [107], contributing to the progression of nigral neurodegeneration. Pretreatment of primary mesencephalic neuron-glia mouse cultures with α-synuclein enhances the production of PGE 2 .
Apparently, phagocytosis of α-synuclein activates NADPH oxidase, which produces ROS and has a crucial role in microglial activation and associated neurotoxicity [107]. In primary mesencephalic mixed neuron-microglia cultures, MPP + , a neurotoxin that causes dopaminergic neuronal death, induced PGE 2 production. However, this effect was not observed in enriched microglia or enriched neuron cultures, indicating that an interaction between microglia and neurons is necessary for the MPP + -induced increase of PGE 2 production, probably due to COX-2 activity. Moreover, PGE 2 was enhanced neither in enriched astroglia nor in neuron-astroglia cultures [94]. Conversely, PGE 2 was significantly reduced in the hippocampus, striatum, and cortex of animals injected with 6-hydroxydopamine (6-OHDA) [108]. It has been shown that EP receptors are expressed differently in the SN. To date, in the rat, EP1 is restricted to dopaminergic neurons, while EP3 is expressed exclusively by nondopaminergic cells. On the other hand, EP2 is localized to both dopaminergic and nondopaminergic cells [109]. In rats, EP1, but not EP2 or EP3, receptor antagonists reduced the dopaminergic neuronal death induced by 6-OHDA, suggesting an important effect of the EP1 receptor in the neurotoxicity induced by PGE 2 [109]. Also, cultured dopaminergic neurons displayed EP2 receptors after 6-OHDA neurotoxicity, and butaprost, a selective EP2 agonist, significantly increased the survival of tyrosine hydroxylase-positive cells, suggesting a possible neuroprotective role of EP2 activation [110]. Interestingly, in comparison to microglia obtained from WT animals, microglia from EP2 KO mice show an enhanced capacity to clear aggregated α-synuclein in human mesocortex tissue of patients with Lewy body disease. Moreover, EP2 −/− mice were more resistant to neurotoxicity induced by MPTP, an effect that is associated with attenuated formation of aggregated α-synuclein in the SN and striatum [111]. PGD 2 , PGJ 2 , and Other Prostaglandins. PGJ 2 and its metabolites might alter the process of protein folding and aggregation, contributing to the development of PD. In human neuroblastoma SK-N-SH cells, PGJ 2 disrupts the structural integrity of microtubules and actin filaments [112]. In vitro, this molecule also hindered the polymerization of highly purified tubulin from bovine brain [113]. Interestingly, in cells treated with PGJ 2 , microtubule/endoplasmic reticulum collapse coincides with the formation of protein aggregates, such as ubiquitinated proteins and α-synuclein [113]. In mouse and human neuroblastoma cells, as well as in rat primary embryonic mesencephalic cultures, PGA 1 , PGD 2 , PGJ 2 , and its metabolite Δ 12 -PGJ 2 induced accumulation of ubiquitinated proteins and cell death [114]. PGE 2 only exhibited neurotoxic effects at high concentrations. The ubiquitination induced by Δ 12 -PGJ 2 might be due to inhibition of ubiquitin C-terminal hydrolase (UCH) L3 and UCH-L1, implicating an alteration of deubiquitinating enzymes and possibly contributing to the accumulation and aggregation of ubiquitinated proteins, which leads to the inflammation associated with the neurodegenerative process [114]. Modification of UCH-L1, an enzyme that functions predominantly during monoubiquitin recycling in the ubiquitin-proteasome system, by cyclopentenone prostaglandins induced unfolding and aggregation of the protein. Therefore, the deleterious effect of COX-2 in PD could be due to the production of cyclopentenone prostaglandins [115].
In addition, PGA 1 has been shown to reduce nuclear factor kappa B translocation to the nucleus, caspase 3 activation, and apoptosis of human dopaminergic SH-SY5Y cells induced by rotenone [116]. Amyotrophic Lateral Sclerosis (ALS). ALS is a progressive neurodegenerative condition characterized by the selective death of motor neurons [117]. This neuropathological condition can be classified as familial, in which mutations in the enzyme superoxide dismutase-1 (SOD1) can occur, or as sporadic, which encompasses 90% of ALS patients [118]. Neuroinflammation seems to play an important role in the progression of this disorder. In ALS, microglial activation and proliferation are observed in regions where there is neuron loss, such as the motor cortex, motor nuclei of the brainstem, and corticospinal tract. Microglia might be essential for the motor neuron toxicity [119]. PLA 2 and COX. It has been shown that cPLA 2 is expressed in astrocytes and motor neurons of the spinal cord of transgenic mice carrying the gene encoding a mutant form of human SOD1 [120,121]. In agreement with that, cPLA 2 immunoreactivity was also observed in the spinal cord of patients with SOD1-mutated familial ALS and sporadic ALS [120,122]. An increase in COX-2 expression is observed in the spinal cord of SOD1 G93A transgenic mice [123,124] and in human cases of ALS [125,126]. Postmortem examination of the ventral horn of the spinal cord of sporadic ALS patients revealed that COX-2 immunoreactivity was increased in motor neurons and interneurons, as well as in glia, in comparison with non-ALS controls [127]. On the other hand, COX-1 expression was detected in microglia, but not in neurons, of ALS and control tissues, albeit no difference was observed between the two groups of patients [127]. A few attempts have also been made to elucidate the effect induced by COX inhibitors in models of ALS. In organotypic spinal cord cultures, the COX-2 selective inhibitor SC236 significantly reduced the excitotoxic damage of motor neurons induced by threo-hydroxyaspartate, a compound that inhibits astroglial transport of glutamate [128]. Therefore, it is possible that COX-2 might be involved in the excitotoxicity induced by glutamate. Moreover, in vivo studies also suggested that COX might be a potential target for ALS treatment. It has been shown that traditional NSAIDs and COX-2 inhibitors reduced different pathological features developed by SOD1 G93A transgenic mice, such as loss of motor neurons and glial activation in the spinal cord, motor impairment, and weight loss, and prolonged the survival of the animals [120,[129][130][131]. Considering this evidence, Minghetti [132] suggested that COX-2 enhancement could be deleterious in ALS not only because of the enhancement of glutamate release by PGE 2 [133], but also because of the ROS produced by COX peroxidase activity. On the other hand, Almer et al. [134] have shown a drastically reduced PGE 2 production in the spinal cord of transgenic SOD1 G93A /COX-1 −/− mice, suggesting a minor role for COX-2 in the production of PGE 2 in the disease. Moreover, deficiency of COX-1 did not affect motor neuron loss or the survival of the animals [134]. These results challenge the concept that COX-2 is the main enzyme involved in ALS. PGE 2 and 15d-PGJ 2 . PGE 2 is elevated in the spinal cord of SOD1 G93A mice [130] and in the serum and CSF of ALS patients [127,135], though the levels of this prostaglandin did not correlate with the clinical state of the patients [135].
The role of PGE 2 was further investigated in in vitro models of ALS. In an organotypic spinal cord slice model, motor neuronal death induced by D,L-threo-hydroxyaspartate is reduced by PGE 2 , as well as by butaprost and sulprostone, EP2 and EP3 receptor agonists, respectively [136]. Interestingly, in the same study, SC58236, a COX-2 inhibitor, also reduced motor neuron loss. EP2 receptor expression is increased in astrocytes and microglia of SOD1 G93A mice and in astrocytes of human ALS spinal cord. Deficiency of the EP2 receptor in SOD1 G93A mice increased survival and grip strength in comparison with SOD1 G93A /EP2 +/+ and SOD1 G93A /EP2 +/− mice. The absence of the EP2 receptor also reduced the production of different inflammatory mediators in this animal model of ALS [124]. Recently, it has been shown that mPGES-1 is enhanced in the spinal cord of SOD1 G93A mice in comparison with WT mice. Interestingly, AAD-2004, a molecule that inhibits mPGES-1 and free radical formation, reduced microglial activation and motor neuron loss, improved motor function, and increased survival [137]. 15d-PGJ 2 immunoreactivity is increased not only in motor neurons, but also in astrocytes and reactive microglia in the spinal cord of ALS patients [138]. Huntington's Disease (HD). HD is a progressive neurodegenerative disease whose main features are movement disorders and dementia. It is an autosomal-dominant disease [139,140]. Although there is evidence that neuroinflammation is present in HD, it is not known whether it contributes to the etiopathogenesis of the disease or whether it is solely an epiphenomenon [141]. It has been shown that in R6/2 mice, an animal model of HD, the number of microglia is reduced in some brain regions in comparison with their WT littermates. Microglia of animals at 14.5 weeks of age were also smaller in size than the same cells in animals at 7 weeks of age, and they also revealed a condensed nucleus and fragmentation of the cytoplasm within processes, suggesting an impaired function of these cells in this pathological condition [142]. On the other hand, activated microglia are present in the neostriatum, cortex, and globus pallidus of HD brains. Importantly, the reactive microglia appeared in association with pyramidal neurons presenting huntingtin-positive intranuclear inclusions [143]. Although a causal link between neuroinflammation and HD onset or progression has not been demonstrated, it is reasonable to assume that microglia might play a role in its development. COX. Although there are different genetic models of HD, some compounds such as 3-nitropropionic acid (3-NP) and quinolinic acid (QA) are also used to induce striatal neuron toxicity and are therefore also considered HD animal models [144][145][146]. COX-2 immunoreactivity is enhanced in striatal tissues 12 h after treatment of animals with QA. This enhancement was observed predominantly in neurons and microglia [147]. Chronic treatment with different COX inhibitors, such as rofecoxib, celecoxib, nimesulide, and meloxicam, improved spontaneous locomotor activity and motor performance and reduced the biochemical and mitochondrial alterations induced by QA [148][149][150]. Naproxen and valdecoxib, two COX inhibitors, also reduced 3-NP-induced motor and cognitive impairment [151]. This study suggested that these effects could be due to a reduction in the oxidative stress induced by the drugs.
Although COX inhibitors induced beneficial effects in drug-induced models of HD, similar effects are not observed in transgenic mice. For example, administration of acetylsalicylate from weaning did not induce any alteration of rotarod performance or ventricle enlargement in N171-82Q mice in comparison with untreated animals. Rofecoxib also did not change the motor performance or lifespan of R6/2 mice [152]. On the other hand, acetylsalicylate and celecoxib shortened the life expectancy of R6/2 and N171-82Q mice, respectively [152,153]. PGE 2 , PGF 2α , and PGA 1 . Administration of 3-NP enhances PGE 2 and PGF 2α in the striatum [154,155]. These prostaglandins are reduced by licofelone, a competitive inhibitor of the COX-1, COX-2, and 5-LOX isoenzymes. In addition, this compound reduced the impairment in locomotor activity and motor performance, as well as apoptotic markers [155]. Expression of COX-2, as well as PGE 2 production, is increased in the striatum and cortex on the ipsilateral side compared with the contralateral vehicle-injected side in rats given a unilateral intrastriatal injection of QA [156]. Moreover, it has also been shown that QA injection induced EP3-positive striatal neuronal loss, whereas activated microglia expressed EP3 in vivo after excitotoxic injury [157]. A role for PGA 1 has also been suggested. This prostaglandin attenuated DNA fragmentation and neuronal loss and increased dopamine D1 receptor expression induced by QA in the striatum; it also reduced the QA-induced activation of nuclear factor kappa B, but not of activator protein-1, in this brain region [158]. Discussion There is an intricate relationship between neuroinflammation and neurodegeneration. In general, acute inflammation in the CNS is triggered by a neuronal injury or infection and is short-lived. This acute response is believed to have protective aspects, since it could avoid further injury and induce tissue repair [159]. Although an acute stimulus may trigger, for example, oxidative stress, this short-term event would not interfere with long-term neuronal survival [160]. It is known that moderate microglial activation might induce neuroprotective effects, such as scavenging neurotoxins, removing cell debris, and secreting mediators that are important for neuronal survival [160]. Acute activation of these cells is a normal response to injury, and it contributes to wound healing [161]. On the other hand, chronic neuroinflammation persists for a long time after the initial insult and is normally self-perpetuating [160]. This condition induces neuronal death, and the molecules released by the dead neurons can further activate microglia, which enhances cell death. This vicious cycle, together with the continuous production of factors that activate microglia, contributes to the chronicity of this process. Again, microglia might play an important role in this long-term process. Intense activation and accumulation of these cells at the site of injury can induce neuronal damage, since they release a variety of neurotoxic substances. For example, the Aβ protein, which is involved in AD, can activate microglia and lead them to release neurotoxic factors such as NO, TNF-α, and superoxide, leading to the progression of this disorder [162]. An interesting finding is that chronic inflammation induced by the infusion of LPS (a substance that strongly activates microglia) in the brain of rats resembles different features observed in AD patients [163].
Actually, it is presently not clear why neuronal or glial cells cannot prevent the chronicity of the inflammatory process. However, it might be due to a plethora of effects. Abnormal synthesis of some proteins by neurons could continuously activate microglia, leading them to release neurotoxic factors. Moreover, oxidative stress is another important event that contributes to the neuronal damage observed in chronic neuroinflammation [164]. It is also possible that the senescence of the immune system in the CNS could contribute to the chronicity of this process. For example, it has been shown that microglia from old transgenic PS1-APP mice release an increased amount of inflammatory mediators and do not phagocytose Aβ properly in comparison to microglia from young mice [165]. Therefore, microglial senescence could play a role in the development of some neurodegenerative conditions [161,166]. Beyond these factors, the adaptive immune system might also play a role, as it has been shown to be involved in the etiopathogenesis of PD [167]. In this context, one might assume that the production of lipid mediators, such as prostaglandins, might differentially modulate neuroinflammation and neurodegeneration. Considering the roles of prostaglandins, and depending on the stage of inflammation as well as the different microenvironments generated by a variety of substances, these lipid mediators could determine the survival or death of neurons. Conclusion Here we summarized the evidence that prostaglandins might play a key role in the etiopathogenesis of neuroinflammatory and neurodegenerative diseases. Prostaglandins have a plethora of actions in CNS cells that differentially affect the progression of inflammation and neuronal death or survival. Therefore, inhibiting the production of a specific prostanoid, or its action on its receptor, might be a better strategy to control some pathological processes. On the other hand, inhibiting the effects of some prostaglandins could also be deleterious. Thus, further studies are important to build a more complete picture of the role of these lipid mediators in neuroinflammation and neurodegeneration. This knowledge might serve to develop pharmacological strategies for the treatment of neurological diseases.
Simulating seasonal drivers of aphid dynamics to explore agronomic scenarios With the regulation of pesticides in European agricultural landscapes, it is important to understand how pest populations respond to climate and landscape variables in the absence of pesticides at different spatial–temporal scales. While models have described individual biological processes, few have simulated complete life cycles at such scales. We developed a spatially explicit simulation model of the dynamics of the bird cherry–oat aphid (Rhopalosiphum padi) in a pesticide-free simulated landscape using data from an agricultural landscape located in southwest France. Using GLMMs, we ran two statistical analyses, one at the crop level, focusing on aphid densities within each crop individually (wheat and its regrowth, corn, and sorghum), and another at the landscape level where aphid densities were not differentiated by crop. For each season, we analyzed how temperature, immigration, and habitat availability affected aphid densities. Predictors of aphid densities varied between crops and between seasons, and models for each individual crop resulted in better predictions of aphid densities than landscape-level models. Aphid immigration and temperature were important predictors of aphid densities across models but varied in the directionality of their effects. Moreover, landscape composition was a significant predictor in only four of the nine seasonal crop models. This highlights the complexity of pest–landscape interactions and the necessity of considering fine spatial–temporal scales to identify key factors that influence aphid densities, essential for developing future regulation methods. We used our model to explore the potential effects of two agronomic scenarios on aphid densities: (1) replacement of corn with sorghum, where increases in available sorghum led to the dilution of aphid populations in sorghum in spring and their concentration in summer, and (2) abandonment of pastures for wheat fields, which had no significant effect on aphid densities at the landscape scale. By simulating potential future agronomic practices, we can identify the risks of such changes and inform policy and decision-makers to better anticipate pest dynamics in the absence of pesticides. This approach can be applied to other systems where agronomic and land cover data are available, and to other pest species for which biological processes are described in the literature. INTRODUCTION Agriculture is highly impacted by the presence of pests (Deutsch et al. 2018). Aphids, in particular, affect a wide variety of crops and can be found worldwide (Van Emden and Harrington 2017). Climate drives many of the processes involved in the life cycle of these ectotherm organisms, such as survival (Pons et al. 1993), reproduction (Simon et al. 2002), and development (Campbell et al. 1974). Besides favorable climatic conditions, aphids also require different habitats to fulfill their biological cycles, which can be unevenly distributed through space and time (Schellhorn et al. 2015). Therefore, the composition and configuration of habitats within landscapes (Chaplin-Kramer et al. 2011) should be considered when studying aphid population dynamics. Recent meta-analyses have, however, highlighted that natural pest regulation exhibits inconsistent responses to surrounding landscape structure (Rusch et al. 2016, Karp et al. 2018).
One hypothesis explaining this result is that agricultural landscapes are subject to frequent changes due to their strong anthropogenic nature, which leads to highly variable spatial-temporal dynamics (Urruty et al. 2016). Farm management shapes the spatial-temporal variability of land cover through factors such as crop rotation (Wibberley 1996), crop variety (Asrat et al. 2010), or individual crops managed differently according to the experience and ideological beliefs of farmers (McGuire et al. 2015). With agricultural practices changing rapidly due to economic (Van Vliet et al. 2012) and climatic (Rickards and Howden 2012) drivers, it is essential to develop a conceptual framework exploring how pests respond to these shifts. Such a framework would assist in anticipating undesired effects on pest dynamics and help identify new management methods. This tool should build upon the Integrated Pest Management framework (IPM; Elliott et al. 1995), which focuses on understanding the interactions between pests and landscapes. The aim of IPM strategies is not to eradicate pests, but to maintain populations below economically injurious levels (Stenberg 2017). The general IPM framework can be divided into two complementary approaches: a bottom-up approach regulating resource availability for the pest to minimize population dynamics, and a top-down approach increasing resource availability for natural enemies, in order to increase their abundance and thus their predation pressure on pest populations. Both approaches aim at managing resources in space and time. Therefore, it is necessary to consider a wide range of spatial-temporal scales, ranging from the finest scale (the individual plant) to the broadest (the landscape on a pluriannual scale). Fine-scale control methods are relatively well documented, such as delaying sowing dates to temporally avoid pest migration windows and thus limit colonization (McLeod et al. 1992), or push-pull strategies (Cook et al. 2007), which control pest movement behavior through the use of natural chemical signals produced by specific plants. Landscape-scale control involves looking at the spatial-temporal organization of crops and semi-natural habitats within the landscape, and much is still to be learnt about how mobile organisms respond to these parameters. For example, the source-sink processes of aphid colonization in agricultural landscapes represent an important challenge for crop management (Vialatte et al. 2006, Bianchi et al. 2007). Resource availability at the landscape level can have contrasting impacts on the abundance of pests. High abundance of host crops can decrease (e.g., through a dilution effect; Thies et al. 2008) or increase (e.g., through a concentration effect; Root 1973) the abundance of a pest in a given year at the field level. Agricultural practices conducted in neighboring host crop fields are an important factor in these concentration or dilution effects (Monteiro et al. 2013). Interannual effects can also be observed, such as population explosions following a year in which resource availability was high (Marrec et al. 2017). Finally, Batáry et al. (2011) have underlined that species respond differently to increases in landscape heterogeneity, with generalists responding positively and specialists negatively. The top-down approach to pest regulation has been studied mostly through predator-prey interaction models (Liere et al. 2012, Thierry et al. 2015).
By contrast, fewer studies have focused on regulating pest populations through bottom-up approaches, with some examples on cereal aphids (Parry et al. 2006) and weevils (Vinatier et al. 2012). The dynamics of pests represented in these models usually focus on fine spatial-temporal scales, modeling one process (i.e., dispersal) during a small time frame (mainly during spring colonization of crops). These models also tend to use a binary approach when modeling habitat, with all habitat types sharing the same properties. Kieckhefer and Gellner (1988) have shown that crop type and phenological stage can have different impacts on the reproductive success of cereal aphids. Thus, modeling habitat quality as a range of different values rather than as a binary process should be considered. In this study, we used a spatially explicit simulation model to (1) determine how climate, pest immigration, and spatial-temporal variation in habitat availability influence seasonal variation of pest populations in both individual crops and at the landscape level, and (2) explore agronomic scenarios to identify potential effects of changes in practices on aphid populations and help inform future decision-making. To explore how these variables influence the entire life cycle of our pest, population dynamics were simulated in a theoretical environment with no pesticide applications to avoid chemically induced population crashes. To validate our approach, we focused on a specific case study of cereal aphid populations in France, the bird cherry-oat aphid, Rhopalosiphum padi (L.). This pest causes both direct damage to cereal crops and indirect damage through transmission of the barley yellow dwarf virus (Plumb 1983). Our theoretical landscape was based on the long-term ecological research site Vallées et Coteaux de Gascogne (part of the Zone Atelier Pygar) located in the southwest of France. We selected this site due to both the availability of agronomic data and its high heterogeneity of natural elements, which studies have shown to be the ideal structure to reduce potential pest infestations in agricultural landscapes (Veres et al. 2013). Landscape dynamics were simulated using the Agricultural Landscape Simulator ATLAS (Thierry et al. 2017). We also explored two agronomic scenarios, based on changes in agricultural practices currently observed in our study region, to determine whether R. padi populations are likely to increase in the future with agricultural landscape changes. Simulating the agricultural landscape The studied agricultural landscape is a 2 × 2 km landscape from the region of Vallées et Coteaux de Gascogne. This site is part of the Long Term Ecological Research network (LTER_EU_FR_003) located in the southwest of France, near the city of Toulouse (43°17′ N, 0°54′ E). The climate is semi-oceanic, with hot and dry summers and cool and humid winters. The landscape is composed of parallel hillsides and valleys in which water bodies, such as streams, can be found. The agricultural system is essentially composed of crop-livestock farming. The geographical distribution of fields and pastures is directly influenced by the topography, with pastures located along the hillsides, whereas crops are generally sown in the bottom of the valleys (Choisis et al. 2012). To simulate the spatial-temporal evolution of the agricultural landscape, we used the ATLAS simulator (Thierry et al. 2017). ATLAS simulates daily agricultural practices in a spatially explicit landscape.
The advantage of this simulator is the possibility of reproducing realistic configuration and composition values for crops at the landscape level by considering user-defined crop rotations and crop phenological stages. The crop phenology and crop rotations used in this paper (see Appendix S1: Table S1) are the ones described in Thierry et al. (2017), with the addition of wheat volunteer, which is regrowth of wheat shortly after it is harvested. Wheat volunteer is common in our study system but is usually removed by farmers in early November. In our model, wheat volunteer is simulated using the same properties as wheat seedlings and was removed at the latest on 7 November, based on observed events in the study system. Simulating aphid dynamics A detailed description of the model can be found in the supplementary material (see Appendix S2) using the ODD (overview, design concepts, detail) protocol for describing models (Grimm et al. 2010). The aim of this model was to represent the dynamics of R. padi throughout the year in relation to climate, immigration, and habitat quality and availability. Four potential host crops were included in the model: corn, sorghum, wheat, and wheat volunteer. Habitat quality was represented by different aphid reproduction rates depending on the host and its growth stage (Kieckhefer and Gellner 1988), and through different carrying capacities (crops vs volunteer). We did not consider pastures or other natural elements of the landscape as potential habitat in this model, given the evidence of habitat specialization of cereal aphids in French agricultural landscapes (Gilabert et al. 2014). To model the population dynamics, different submodels were put together simulating aphid development, reproduction, mortality, and dispersal. Populations were represented using a cellular automaton, with cells of 30 × 30 m, each possibly containing an aphid population composed of adults and nymphs, covering the entirety of the 2 × 2 km landscape (a 67 × 67 grid for a total of 4489 cells). For the sake of simplicity, plant density was not considered in the model to estimate populations. Aphid immigration was derived from 5 yr of suction trap data collected in Montpellier, France, from the AGRAPHID network between 1997 and 2001. To avoid population explosions due to favorable conditions (no pesticide interventions), we considered a daily mortality rate due to natural enemies (a 30% potential daily mortality rate; Arrignon et al. 2007) in relation to daily mean temperature (see Appendix S2, mortality submodel). Exploring agronomic scenarios We explored the potential effects on R. padi dynamics of two scenarios describing plausible changes in agricultural practices in the Vallées et Coteaux de Gascogne region. In the first scenario, we simulated the effect of a change in crop type by replacing all corn with sorghum. Sorghum is much less demanding in terms of water needs than corn (Farré and Faci 2006) and is increasingly popular in local crop rotations, where summers are getting drier over time due to a decreasing trend in rainfall (Juvanon Du Vachat 2014). In our second scenario, we simulated the effect of a drastic land-use change, through the abandonment of livestock farming. We replaced all temporary pastures with wheat in the crop rotations. This change was already initiated in the early 2000s in this region (Choisis et al. 2010), mainly due to the intensity of work needed for livestock farming and its low financial benefits compared with crop farming (Ryschawy et al. 2013).
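As a rough illustration of the mortality submodel mentioned above, the R sketch below applies a potential daily predation mortality to every cell of the aphid grid. It is only an illustrative sketch under stated assumptions, not the published submodel: the actual relation to daily mean temperature is described in Appendix S2, and natural-enemy activity is represented here by a purely hypothetical temperature threshold.

```r
# Illustrative sketch only: apply a potential daily predation mortality to every
# 30 x 30 m cell of the aphid grid. The published submodel (Appendix S2) relates
# the potential rate (0.1, 0.3, or 0.5) to daily mean temperature; the activity
# threshold used below is a hypothetical stand-in for that relation.
apply_predation <- function(aphid_grid, mean_temp,
                            potential_mortality = 0.3,
                            activity_threshold  = 10) {
  # Natural enemies are assumed active only above the (hypothetical) threshold
  realized_mortality <- if (mean_temp >= activity_threshold) potential_mortality else 0
  # The same daily mortality is applied to the population in every cell
  aphid_grid * (1 - realized_mortality)
}

# Example: a 67 x 67 grid covering the 2 x 2 km landscape
set.seed(1)
grid      <- matrix(rpois(67 * 67, lambda = 50), nrow = 67, ncol = 67)
grid_next <- apply_predation(grid, mean_temp = 14)
```

The same structure could be reused for the other submodels (development, reproduction, dispersal), each updating the grid once per simulated day.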
Simulation planning We explored six different versions of the simulation model (Table 1): (1) a reference model integrating habitat quality and a 30% potential daily mortality rate due to predation pressure, (2) a null model where the phenological stages of all crops had the same effect on aphid dynamics, (3) a high-predation model where predation pressure was increased (50% potential daily mortality rate), (4) a low-predation model where predation pressure was decreased (10% potential daily mortality rate), (5) a pasture scenario where livestock was abandoned in favor of wheat, and (6) a sorghum scenario where corn was replaced entirely with sorghum. Models (2), (3), and (4) were compared to the reference model (1) to explore the effects of habitat quality and predation pressure in the model. Models (5) and (6) were compared to the reference model (1) for scenario exploration. Each model was simulated a total of ten times, and each simulation was set to last 10 yr. The 10-yr window was chosen since it allowed the longest crop rotation to be simulated once and the others multiple times. The agricultural practices observed within a 10-yr window are usually relatively stable, and this period appears appropriate with respect to socioeconomic and global changes. To fit this 10-yr window, our 5 yr of suction trap data were repeated twice. With aphid immigration being highly correlated with climate (Klueken et al. 2009), we also repeated twice the corresponding 5 yr of weather data associated with the location of the trap. Details of both data sets are available in the supplementary material (see Appendix S1). Statistical analysis Using the R statistical software (Version 3.4.3, R Development Core Team 2016), we conducted the statistical analyses in two steps: (1) identification of the effects of climate, aphid immigration, habitat availability, and previous-season aphid densities on seasonal aphid densities at both the crop scale and the landscape scale, and (2) comparison of seasonal aphid densities at the crop level between the reference scenario and the two theoretical agronomic scenarios. The simulation model produced daily outputs for all parameters, and aphid densities were recorded at the cell level. To facilitate the analysis of our outputs, we divided the year into four 3-month periods (winter, January-March; spring, April-June; summer, July-September; and fall, October-December), representing seasonality throughout the year. Temperature was summarized over the 3-month intervals by calculating cumulated degrees. Aphid immigration was expressed as the total sum of immigrating aphids over each season. Landscape composition was considered as the total area assigned to each crop at the seasonal level. Finally, aphid densities were summed daily at the crop level as a density per square meter and were averaged across all days on which the crop was available during the season. An example of the data used to analyze model outputs is available in the supplementary material (see Appendix S3: Table S1). While crop metrics and aphid densities are simulated, climate and immigration data are based on a 5-yr data set and thus comprise a sample of 20 unique seasonal values. Correlations between climate and landscape variables were tested using Pearson's correlation tests (see Appendix S3: Table S2). The variables retained in our analyses were chosen so that no pair showed strong correlations (>|0.5|), following Cohen's benchmark for a large effect (Cohen 2013).
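As a rough sketch of this seasonal aggregation and collinearity screening, the R code below builds seasonal summaries from hypothetical daily outputs. The column names and the synthetic data are assumptions introduced for illustration only; the published analysis uses the simulation outputs and the AGRAPHID and weather data sets described in Appendices S1 and S3.

```r
# Illustrative sketch: aggregate hypothetical daily outputs into seasonal values
# (cumulated degrees, total immigration, mean aphid density) and screen
# predictor pairs for strong Pearson correlations (|r| > 0.5).
library(dplyr)

set.seed(1)
daily <- data.frame(
  date          = rep(seq(as.Date("1997-01-01"), as.Date("1997-12-31"), by = "day"), 2),
  crop          = rep(c("wheat", "corn"), each = 365),
  temp          = runif(730, 0, 25),
  immigrants    = rpois(730, 2),
  aphid_density = rpois(730, 30)
)

season_of <- function(month) {
  cut(month, breaks = c(0, 3, 6, 9, 12),
      labels = c("winter", "spring", "summer", "fall"))
}

seasonal <- daily %>%
  mutate(year   = as.integer(format(date, "%Y")),
         season = season_of(as.integer(format(date, "%m")))) %>%
  group_by(year, season, crop) %>%
  summarise(cum_degrees  = sum(temp),            # cumulated degrees over the season
            immigration  = sum(immigrants),      # total immigrating aphids
            mean_density = mean(aphid_density),  # mean density per square meter
            .groups = "drop")

# Retain only predictors whose pairwise Pearson |r| stays below 0.5
cor(seasonal[, c("cum_degrees", "immigration")], method = "pearson")
```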
Correlation between weather and crop metrics is bound to occur because of seasonality; thus, correlations between the two were not used to select variables. Strong correlations between sorghum and corn, and between wheat and wheat volunteer, led us to consider only sorghum and wheat in our models, as they are the covers that increase in our scenarios. To explore the effects of climatic and habitat availability parameters on simulated aphid densities, we ran two types of statistical models: one at the crop level, focusing on aphid densities within each crop individually to see how well models could predict densities, and another at the landscape level, where aphid densities were not differentiated by crop. We compared these two approaches, using the reference scenario outputs, because explanatory variables and seasonal availability varied across crops. The unit of observation for these statistical models was the seasonal average aphid density per square meter within fields, estimated by averaging model outputs (daily aphid densities) into a unique seasonal value. Since our simulation spanned a 10-yr window, each replicate produced a total of ten data points per season, and we repeated the simulation ten times. The main difference between these two approaches was how the unit of observation (aphid densities) was estimated: for crop-level models, we only considered aphid densities within each crop individually, while for landscape-level models, aphid densities were averaged across all crops. (Table 1 notes: The parameters that could differ among model versions were how habitat quality was considered [same across all crops/phenological stages or specific for each crop/phenological stage pair], the daily potential mortality rate from predation pressure [low, average, or high], and the crop rotations considered in the simulation [current practices, pasture scenario, or sorghum scenario].) The models used were all generalized linear mixed models (GLMMs). Each explanatory variable was standardized between 0 and 1 using the range01 function from the modEvA package (Márcia Barbosa et al. 2013) to allow estimates to be comparable within and between models. All GLMMs were modeled using a Poisson distribution (see Appendix S3: Table S4) and an observation-level random intercept, which was added to the model to account for possible overdispersion of the data (Lee and Nelder 2000). We used the Akaike information criterion for model selection through the dredge function in the MuMIn package (Bartoń and Barton 2013). Marginal R² was calculated for the fixed effects using the r.squaredGLMM function (MuMIn package) and reported for every model to assess how well the model predicted our response variable (Nakagawa and Schielzeth 2013). Significance of fixed effects for each model was estimated using a type II ANOVA (car package; Fox et al. 2012). To compare seasonal aphid densities among the reference model and the two agronomic scenarios, we used the emmeans package (Lenth et al. 2018) with a generalized linear mixed model with a random intercept on crop type (see Appendix S3: Table S5).

Application to the Vallées et Coteaux de Gascogne landscape

By coupling the aphid model with ATLAS, we simulated multiannual population dynamics of R. padi in a dynamic agricultural landscape. We explored the effects of different predation rates by natural enemies in the model (Fig. 1).
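The following sketch illustrates the output processing and a simplified version of the regression structure described above. All file and column names are assumptions; the published analysis was run in R (modEvA, MuMIn, car, emmeans) as a Poisson GLMM with an observation-level random intercept and AIC-based model selection, so the plain Poisson GLM below only shows the [0, 1] rescaling and the fixed-effect structure.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

SEASONS = {1: "winter", 2: "winter", 3: "winter", 4: "spring", 5: "spring", 6: "spring",
           7: "summer", 8: "summer", 9: "summer", 10: "fall", 11: "fall", 12: "fall"}

# Hypothetical daily model output: one row per replicate x date x crop.
daily = pd.read_csv("daily_outputs.csv", parse_dates=["date"])
daily["season"] = daily["date"].dt.month.map(SEASONS)
daily["year"] = daily["date"].dt.year

# Collapse daily outputs into seasonal values: cumulated degrees, total immigration,
# seasonal crop areas, and mean aphid density over the days the crop was available.
seasonal = (daily.groupby(["replicate", "year", "season", "crop"])
                 .agg(cumulated_degrees=("mean_temp", "sum"),
                      immigration=("immigrating_aphids", "sum"),
                      wheat_area=("wheat_area", "mean"),
                      sorghum_area=("sorghum_area", "mean"),
                      aphid_density=("aphid_density_m2", "mean"))
                 .reset_index())

def range01(x):
    """Rescale a predictor to [0, 1] (the role of modEvA's range01 in the R workflow)."""
    return (x - x.min()) / (x.max() - x.min())

predictors = ["cumulated_degrees", "immigration", "wheat_area", "sorghum_area"]
seasonal[predictors] = seasonal[predictors].apply(range01)

# Illustrative crop-level model (e.g., wheat in fall). The published analysis also
# included previous-season densities and an observation-level random intercept.
wheat_fall = (seasonal.query("crop == 'wheat' and season == 'fall'")
                      .assign(aphid_count=lambda d: d["aphid_density"].round()))
fit = smf.glm("aphid_count ~ " + " + ".join(predictors),
              data=wheat_fall, family=sm.families.Poisson()).fit()
print(fit.summary())
```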
Higher predation pressure (0.5 mortality rate when natural enemies are active) caused extremely low population levels in late summer. Lower predation pressure (0.1 mortality rate when natural enemies are active) created much higher population concentrations in spring compared with the other two scenarios. The reference scenario, with a predation pressure of a 0.3 daily mortality rate when predators are active, produced in-field dynamics matching known patterns, with a period of colonization and settlement followed by a strong population reduction (specifically in wheat fields) due to the intervention of natural enemies. We also explored the effect of considering the quality of the successive host crops and their impact on aphid dynamics through different reproductive rates by comparing the reference model to the null model (Fig. 1). Considering stage-specific reproductive rates led to wheat harboring higher densities of aphids at its late phenological stages, while summer crops were of lower habitat quality for aphids overall in comparison with the null model.

The relation between environmental parameters and aphid densities

In the crop-level models, the significance and directionality of the effects of explanatory variables varied greatly across crops and seasons (Table 2). We used marginal pseudo-R² to identify models where total variance was poorly explained by our fixed effects. The corn-spring, sorghum-spring, and volunteer-fall models performed the poorest (marginal R² ≤ 0.1) and are thus not discussed further. Temperature was a significant predictor of aphid densities across all wheat models (negative effect of higher temperature on aphid abundance in winter and spring, positive effect in fall) and in the volunteer-summer model (negative effect of higher temperature on aphid abundance). Aphid immigration was also a significant predictor across models, with higher aphid immigration leading to lower aphid densities in the wheat-spring and volunteer-summer models, while leading to higher aphid densities in the wheat-fall and corn-summer models. The two landscape composition variables (area assigned to wheat and area assigned to sorghum) had contrasting effects when significant, with wheat area having a negative effect in the wheat-fall model, while sorghum area had a positive effect in the volunteer-summer and corn-summer models (Table 2). Finally, aphid densities within crops in the previous season appeared to be a strong predictor of aphid densities across the six models that performed best. The directionality of these effects varied across crops and seasons, with a strong negative effect of aphid abundance in volunteer in the previous season on aphids in wheat during the winter. In the landscape-level models, where aphid densities were averaged at the landscape level, the spring, summer, and fall models performed poorly (marginal R² ≤ 0.1) and had no predictive power (Table 3). In winter, populations responded to the same drivers as in wheat, since it was the only available crop during that season.

Exploring agronomic scenarios

Daily areas assigned to each crop varied between the reference scenario (1) and the two crop change scenarios (5 and 6; Fig. 2a). In the pasture scenario, all temporary pastures in the crop rotations were replaced by wheat (see Appendix S1: Table S1). This led to a 50% increase in the average area assigned to wheat and wheat volunteer throughout the seasons compared with the other two scenarios.
This represents a change from wheat covering about 30% of the area assigned to fields each year to almost 45% on average. In the sorghum scenario, all corn in the crop rotations was replaced by sorghum (see Appendix S1: Table S1). This led to a 200% increase in the area assigned to sorghum yearly, representing an increase from sorghum covering around 3.5% of the area assigned to fields to 10% on average. Overall, the pattern of simulated aphid dynamics remained consistent across all scenarios, except for the sorghum scenario, where corn was no longer present as a possible habitat for aphids (Fig. 2b). Mean aphid densities within wheat did not vary across scenarios (Fig. 3). Aphid densities in wheat volunteer (summer and fall), corn (summer), and sorghum (spring and summer) significantly differed among scenarios. Aphid densities in wheat volunteer were higher in the pasture scenario during fall and lower during summer compared with the other two scenarios. Aphid densities in sorghum were lower in the sorghum scenario in spring and higher in summer compared with both other scenarios. Finally, aphid densities in corn in summer were lower in the pasture scenario compared with the reference scenario.

(Fig. 1 caption: Mean fitted aphid density per square meter of host crop in the landscape throughout a specific year [year 3] in a randomly chosen simulation, for different predation pressure values and phenology models. Dashed lines separate the year into four seasons, and crop availability throughout the year is indicated below the graph. Predation pressure is applied as a daily mortality rate when temperature values are high enough for natural enemy activity. The habitat quality model takes into account different reproduction values for aphids depending on the crop they are currently feeding on and its growth stage, in contrast to the null model, which uses a similar rate across all crops and stages. Three values of predation pressure were explored with the habitat quality model [top], and the null phenology model was compared with the habitat quality phenology model [bottom] with predation pressure fixed at 0.3. Mean aphid values were fitted using generalized additive model [GAM] smoothing.)

(Tables 2 and 3 notes: Each fixed effect is described by its coefficient value. Chi-square, degrees of freedom, and estimated significance [P value; ns = not significant, †P < 0.1, *P < 0.05, **P < 0.01, ***P < 0.001] were obtained using a type II Wald chi-square test. For each model, we also report a marginal pseudo-R² representing the variance explained by fixed factors. Bold values indicate significant effects.)

(Fig. 3 caption: Bar plots of mean aphid densities per hectare across all simulations for each crop. All data points are represented by black dots. Aphid densities are separated by season and by scenario [reference, pasture, sorghum]. Statistical differences are represented using lowercase letters, where different letters represent a statistically significant difference between two scenarios; the absence of letters indicates no statistical difference among scenarios within the respective season.)

DISCUSSION

Our study highlights the complexity of interactions between climate, landscape, and population dynamics of aphids. Ecological predictors of aphid densities varied not only temporally (among seasons) but also among crop types. Our framework offers an easy-to-apply method to inform integrated pest management while considering these levels of complexity and exploring the consequences of potential changes in agricultural practices through the simulation of scenarios.
Comparison between crop-level and landscape-level models indicates that crop-level models had better predictive accuracy for estimating aphid densities within the landscape, highlighting strong crop-specific dynamics. Our results highlight key crop-specific seasonal processes that potentially play a role in regulating aphid densities at the landscape scale and that can help inform future decision making.

Simulating seasonal aphid dynamics and understanding the effects of landscape parameters

In our simulations, there were no year-to-year dynamics of aphid densities because of a consistent natural reset of aphid populations. Simulated R. padi dynamics were affected by a strong bottleneck at the end of summer and start of fall, when wheat volunteer is the major remaining resource available within the landscape. This cover is of limited quality, leading to low aphid reproduction rates due to poor nutritional value, and was not sufficient to maintain high densities of aphids within the landscape. This led to habitat discontinuity, which, coupled with high predation pressure during warmer periods, reduced aphid populations drastically. While it is difficult to compare this with field data because of the use of pesticides in situ, such population crashes have been commonly observed for arthropods in agricultural systems (Karley et al. 2003, Heiniger et al. 2014). Nevertheless, our proposed approach could be applied to systems where organic agriculture is predominant (Wyss et al. 2005). Krauss et al. (2011) found that pesticide applications might actually release aphid dynamics from predation pressure, which could readily be tested with our approach by adding a natural enemy population model on top of our current submodels and introducing pesticide application scenarios. Habitat quality played an important role in the spatial-temporal distribution of R. padi populations. R. padi population dynamics within the two simultaneously available summer crops were highly contrasted, with clear temporal discrepancies between aphid dynamics in sorghum and in corn. Aphid populations tended to develop in sorghum in spring and reach very low levels in summer, while the opposite occurred in corn, owing to the different quality values of these crops and their growth stages (Kieckhefer and Gellner 1988). We found the same patterns in the literature, where sorghum infestations usually occur as soon as the crop appears (Chantereau et al. 2013), whereas corn infestations occur at the flowering stages (Brown et al. 1984). Drivers of aphid densities varied greatly among seasons and among crops. Overall, crop-level models performed better in predicting aphid densities compared with landscape-level models, highlighting the importance of considering inter-habitat variability. Temperature, which plays a key role in regulating ectotherm population dynamics (Huey and Berrigan 2001), was a significant predictor of aphid densities in wheat across all seasons in which the crop is present.
Temperature not only regulated population dynamics through development and mortality but also influenced predation pressure by natural enemies, an important driver of aphid mortality at the end of winter and the start of spring. This echoes empirical studies that highlighted early predation as a strong regulator of pest population dynamics during spring and summer (Raymond et al. 2014). Habitat availability was not a consistent predictor of aphid densities across seasons and crops. This could be related to R. padi's dispersal behavior, which allows rapid colonization of agricultural landscapes in favorable conditions (Parry 2013), so that individuals almost always find a suitable habitat within the landscape. Immigration was a strong predictor of aphid densities throughout the year and across the landscape. During fall, a season particularly sensitive to aphid infestations within wheat (Pike and Schaffner 1985), we observed a significant effect of the number of aphids immigrating into the landscape on aphid densities within wheat. In R. padi, fall is the season during which part of the population produces sexual, winged aphids. In the western and southern parts of France, where the primary host is rather rare, most of the population is anholocyclic (Simon et al. 1991) and can survive within crops during winter if conditions are not too extreme. A minor fraction of the population migrates to the primary host (Prunus padus) as sexual individuals, which lay overwintering eggs in anticipation of unfavorable winter conditions (Leather and Dixon 1981). This hypothesis should be evaluated by gathering precise data on the proportion of sexual aphids caught in suction traps in fall in our region. In more temperate areas, such as Brittany in France, the main source of aphids colonizing wheat in fall is corn (Vialatte et al. 2006). In our case study region, there was no overlap between summer and winter crops, and such transfers were impossible. On the other hand, wheat volunteer is known as an alternative source of aphid colonizers (Hawkes and Jones 2005, Vialatte et al. 2007). Aphid densities in volunteer in fall had a strong negative impact on densities in wheat during the next season. Competition and density-dependent mortality in late fall could make it difficult for R. padi populations to survive within the landscape until wheat emerges later in the winter.

Studying the potential effects of agronomic scenarios

Changing agricultural practices led to variation in aphid densities among both crops and seasons. The pasture scenario, which strongly increased the area assigned to wheat and thus wheat volunteer, presented lower aphid densities in wheat volunteer in summer but higher densities in fall. In the sorghum scenario, increasing the area assigned to sorghum within the landscape (in place of corn) led to reduced densities in sorghum in spring and increased densities in summer. These observations in both volunteer and sorghum can be related to the concentration/dilution theory proposed by Tscharntke et al. (2012), who state that increasing the area of a resource can increase the density of a specialist herbivore until a threshold is reached at which a dilution effect occurs, given that it is impossible for the population to forage over the whole area available. This hypothesis was illustrated with coffee-pollinating bees (Veddeler et al.
2006), where increasing the concentration of flowers at the field scale leads to a concentration effect on bee densities, while doing the same at the landscape scale leads to a dilution effect. This contrasts with Root's resource concentration hypothesis (Root 1973), which states that the more a resource is present within the landscape, the higher the chances of detection and successful colonization by herbivores. Such effects can also be partly explained by reproductive behavior: bees reproduce in limited numbers, whereas the continuous reproduction of aphids favors population accumulation. Sorghum is a favorable habitat during spring and summer, when local dispersal is favored. Hambäck and Englund (2005) highlighted the importance of considering dispersal and immigration as key drivers of the resource concentration hypothesis. Thus, increasing sorghum in spring led to the dilution of aphid populations, which were unable to forage over the whole available area, while increasing sorghum in summer led to a concentration effect, owing to the limited number of other cereal habitats in the landscape. Concentration of R. padi populations in sorghum could be problematic with regard to agricultural changes. Increasing the relative abundance of sorghum within crop rotations could lead to sorghum fields acting as local reservoirs for aphid dispersal and recolonization of other crops after pesticide treatments. This could be particularly true in ecosystems where local transfers of aphids from crop to crop play an important role, such as in Brittany (Vialatte et al. 2007). Thus, land managers should be aware of this potential risk when integrating sorghum into their crop rotations and act accordingly to prevent potential outbreaks of R. padi during spring and summer. The abandonment of pastures for winter crops, explored in the pasture scenario, does not seem to influence R. padi densities, and thus no recommendations can be made for land managers with regard to this agronomic change.

CONCLUSION

Our model is a first attempt at modeling the spatial-temporal dynamics of both an agricultural landscape and cereal aphid populations to better understand the main drivers of seasonal aphid densities. We recognize that this is an application focused on a single aphid species in a single agricultural landscape of southwestern France. While many assumptions behind model calibration, such as the choice of landscape simulator, crop rotations, and pest species, directly affect the model outputs, our study highlights the importance of considering individual crops as unique habitats to better understand the interactions between landscape and pest dynamics. With the integration of future empirical studies to validate the hypotheses emerging from model simulations, our scenario-exploration framework could help identify key processes for designing future agricultural landscapes that allow for low pest densities.

ACKNOWLEDGMENTS

We would like to thank the AGRAPHID network for their data on aphid monitoring. We would also like to thank Benoît Persyn from INRA (Avignon) and Météo France for the weather station data. HT developed the simulation model, analyzed the results, and wrote the manuscript. AV developed the theoretical concept around this study. HP helped with developing the biological processes behind the aphid dynamics within the model. CM helped with the conceptualization of the simulation model. All coauthors contributed to manuscript writing and gave final approval for publication.
Timely Access to Mental Health Services for Patients with Pain

Introduction: Efficient access to pediatric mental health services is a growing concern as the number of patients increases and outpaces efforts to expand services. This study outlines interventions implemented using quality improvement (QI) science and methodology to demonstrate how a clinic embedded in a large children's hospital can improve access to the first appointment for a population seeking pain management services. Methods: A process improvement project started with a QI team, whose members designed interventions to change scheduling practices. Initial changes involved decreasing the time between calls to families and streamlining notifications among clinicians. Additional interventions included a close examination of waitlist assignment based on appropriateness and assessment of patient interest in treatment. Results: Within 3 months of implementation, a significant decline in wait time occurred for patients seeking services for pain management, from 106 to 48 days. This change remained stable for 6 months. In light of a sharp increase in referrals and wait time during the study period, efforts to engage additional clinicians in managing referrals resulted in wait time stabilizing at an average of 63 days to the first appointment. This change remained for 10 months. Scheduling changes did not negatively affect other providers. Conclusions: This study demonstrates the application of QI science to improve patient access to mental health care. Future directions will focus on enhancing the use of the electronic health record, along with previsit family engagement.

INTRODUCTION

The number of young people seeking outpatient mental health services has increased significantly over the past 20 years. 1 The National Council for Behavioral Health predicts that a substantial increase in demand for mental health services will occur by 2019, posing new challenges for providers, with 15 million people eligible for Medicaid and an additional 16 million covered by private insurance. 2 An increase in wait time length seems to be a growing concern among mental health systems of care. 3 Wait time following intake poses a serious complication to patient access in several treatment settings. In 2012, the Children's Hospital Association indicated an average wait time of 7.5 weeks for child and adolescent psychiatry appointments, with similar wait times observed in 5 US cities. 4 In serious psychiatric cases, waiting for an appointment may increase the hospitalization rate, the chance of relapse, and even suicide risk. 5 Researchers have attempted to identify a link between wait times and overall patient care. Osadchiy and Diwas developed a "willingness to wait" variable and found that long wait times appeared to dissuade many potential patients from seeking help, evidenced by a decrease in booked appointments and an increase in no-shows among those already booked but with lower willingness to wait. 6 Similarly, Westin et al reported an increase in refusal rates for the first appointment when patients were informed of long waits, and patients who scheduled an appointment after a long wait time tended to terminate treatment early. 7 Schraeder and Reid suggested patients are more likely to contact other providers as wait time grows, reflecting a tendency to "shop around" that may inflate other providers' waitlists. 8 Corso and Greenspan found that delayed access to care was the largest obstacle to patient satisfaction.
9 Unfortunately, wait time seems to affect each phase of treatment: patient scheduling, engagement during treatment, and satisfaction following treatment. Strategies to remedy the waitlist problem include reorganizing the scheduling process and creating more immediate options for families. Williams and colleagues recommended adding clinician capacity to counteract waitlist inflation. 5 When that capacity was not available, others advised implementing a mid-level assessment team able to quickly assess patients, address pressing concerns, and provide further recommendations. 10,11 Some solutions favor restructuring the intake process itself. Weaver et al recommend a direct intake process that involves scheduling patients' first appointment during the first call, thereby eliminating waitlists. 12 However, this strategy does not guarantee that patients will show, especially if the appointment is scheduled several weeks or months in the future. Clow et al suggest reducing process error by improving waitlist management. 13 By reviewing waiting lists, the authors found that time to the first appointment decreased, because patients could be matched to an appointment type with a shorter wait or perhaps to a different referral source altogether. Trends in waitlist management use a combination of process improvements, including various functions during the first point of contact, streamlining steps, and assuring waitlist accuracy. All of these interventions are attempts to reduce time to the first appointment and improve access to treatment. The following study combines several of these processes in a quality improvement (QI) initiative to improve access to care. According to the authors of The Improvement Guide, basic tenets of a successful process improvement include: innovation that is measurable, launching the project on a small scale, securing feedback during the process to minimize disruption, and an end result that will benefit all customers.
14 This study utilized these principles to determine whether changes in scheduling procedures improve wait times and show rates and increase satisfaction.

METHODS

This study launched a process improvement project to develop a mechanism to decrease wait time by 30 days for patients referred to a hospital-based pediatric psychology clinic, using the QI methodology of the Institute for Healthcare Improvement (IHI). 15 The IHI standards call for executing a process improvement change that results in lasting improvement for those most affected by the change and that can be sustained indefinitely. 14 Study procedures were consistent with Nationwide Children's Hospital institutional review board guidelines and considered exempt from the review process. This hospital is a large pediatric primary and tertiary hospital serving a population of more than 2 million people across contiguous counties. The Psychology Department includes 16 providers who are integrated into 19 pediatric medical subspecialties and provide more than 23,000 visits per year.

Procedures

The first step in the IHI methodology consisted of the development of an interdisciplinary QI team. A format for suggested interventions, called a key driver diagram, created guidelines for the timing of proposed changes. Data collection took place at multiple points, gathering baseline data before the start of the change process and then at times coinciding with the implementation of new interventions. This process improvement project started on a small scale, targeting 1 clinician who scheduled over 130 new patients yearly and had a waitlist time above the department average. Using 1 clinician for the initial implementation helped to refine the process and ensure that the extension of these practices did not negatively interfere with scheduling practices for other clinicians. All referrals to the targeted clinician were for pain management. There was not a set number of new patient slots per month; rather, scheduling staff filled slots when the clinician expressed availability for new patients.

Interventions

An interprofessional team consisting of the targeted psychologist (project lead), a QI coordinator, a statistician, and administrative staff met to identify current challenges to scheduling and brainstorm potential changes. This team developed a process map to outline the current state of the scheduling process and identify potential points of improvement, such as delays in communication between the clinician and the scheduler and the amount of time schedulers spent reaching families (see Fig. 1). Information from this map helped to create the key driver diagram and develop targeted interventions to standardize communication and improve scheduling efficiency. Ultimately, this process identified a way to enlist additional providers to treat patients with pain (see Fig. 2). The team conducted serial plan-do-study-act (PDSA) cycles, periods during which the QI team observes and collects data to determine whether a proposed intervention is associated with the expected change. The scheduling staff suggested 2 interventions to test during the first PDSA cycle: (1) limit the amount of time between attempts to contact patients to 2 days, and (2) limit the number of scheduling attempts to 2. By using a notification feature in the electronic health record (EHR) scheduling screens, schedulers were able to initiate deferment options to remind them to contact families 48 hours after the initial call.
Once schedulers made a second attempt to reach families, they closed the charts and moved on to another patient. Of note, if the family of a closed chart contacted the office, they were scheduled. During this phase, scheduling staff used the process map tool and proposed additional modifications to improve the rate of patient contact. Before this study, it was customary for clinicians to "hand pick" specific patients for scheduling. The new method instead had the clinician designate several new patients needing appointments at a time. This change allowed scheduling staff to move rapidly through the patient list, requiring less communication from the clinician. Wait times were remeasured 3 months after the start of the project. During a second PDSA cycle, team members reviewed the scheduling procedures and proposed further changes. First, schedulers examined the targeted clinician's waitlist to determine the appropriateness of referrals, reviewing referral details to confirm the need for specialization. Also, staff contacted waiting families to inform them of the approximate wait time based on their position on the list. Schedulers asked families whether they wished to continue to wait or receive another referral to an outside agency. Families not reached during this inquiry stage remained on the waitlist. The team conducted a third PDSA cycle 9 months after the start of the project and made further changes to address a significant increase in referrals. Schedulers conducted a thorough evaluation of the waitlists across 4 additional clinicians. There were large disparities in the length of waitlists, ranging from 1 to 5 months. As a result, providers with similar specialties and openings in their caseloads agreed to see additional patients waiting for pain management services. The same procedures from our first 2 PDSA cycles were extended to these additional clinicians, opening more possibilities for referral assignment.

Measures

Wait time was defined as the number of days between the placement of the referral and the date of the first scheduled appointment. Because of the large size of the department, monthly tracking of this metric started with 1 targeted clinician. A balance measure examined whether the process improvement initiative resulted in any unintended negative consequences to the scheduling process for 8 other clinicians. Such consequences might include delays in scheduling for other providers because of reallocation of resources or decreased work satisfaction among schedulers, potentially leading to staff turnover. Schedulers tracked time taken away from scheduling for other providers because of these changes, and they discussed their work satisfaction as the changes took place. Both outcome and balance measures were tracked using statistical process control methodology. Control charts, or x-bar charts, plotted the wait time to first scheduled appointment by the number of patients scheduled. Baseline (preintervention) data were entered into the x-bar chart with the centerline (mean) and control limits (±3 SD) of variation for this period. Monthly data were plotted on the control chart while holding the centerline constant from the baseline period. Following IHI guidelines, the centerline was revised when there was a significant change in values. Additionally, a 2-sample t test determined whether changes to the mean wait time were statistically significant.
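To illustrate the tracking approach just described, the following sketch computes a baseline centerline with ±3 SD limits and a 2-sample t test on placeholder data. The values below are synthetic, not the study's data, and formal x-bar limits would additionally scale the variation by subgroup size.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline_waits = rng.normal(106, 25, size=90)    # synthetic baseline referrals (~106-day mean)
post_waits = rng.normal(48, 20, size=60)         # synthetic post-intervention referrals

centerline = baseline_waits.mean()               # baseline mean wait time
sd = baseline_waits.std(ddof=1)
ucl = centerline + 3 * sd                        # upper control limit
lcl = max(centerline - 3 * sd, 0)                # lower control limit (wait time cannot be negative)

monthly_means = post_waits.reshape(10, 6).mean(axis=1)         # e.g., 10 months of 6 referrals each
outside_limits = (monthly_means > ucl) | (monthly_means < lcl)

t_stat, p_value = stats.ttest_ind(baseline_waits, post_waits, equal_var=False)
print(f"centerline={centerline:.1f} d, UCL={ucl:.1f}, LCL={lcl:.1f}, "
      f"months outside limits={int(outside_limits.sum())}, t test P={p_value:.4f}")
```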
RESULTS

Data collection covered 24 months, capturing an additional 17 months of baseline data before the change in procedures. The x-bar chart in Figure 3 reveals a baseline mean of 106 days that families were waiting for services from the targeted clinician. On average, the staff scheduled 5.2 patients per month during the baseline period. After the first PDSA cycle, wait time decreased to a mean of 66 days after 3 months of implementation. The mean number of days continued to drop after the second PDSA cycle, resulting in an overall low of 33 days of wait time. Nine months after the start of process improvement implementation, a 2-sample t test revealed a significant change in wait time from the beginning of the study (P < 0.001, 95% CI), resulting in a midline shift to 48.2 days. Also, the average number of patients scheduled increased gradually from 5.2 to 5.9 patients per month (see Fig. 3). Wait time increased to a mean of 108 days midway through data collection. While undesirable, this increase reflected program growth associated with the hiring of new faculty, resulting in a 68% increase in referrals over 3 months because of program development with additional pain populations. A subsequent centerline shift occurred after the third PDSA cycle, when the wait time decreased again to a mean of 63.4 days (P < 0.001, 95% CI). Wait times have remained at this lower level for the last year. Contacting families twice within 2 days, and subsequently closing the chart, did not prevent families from scheduling: based on record keeping over 1 week, only 8% of families called back to schedule an appointment after a chart was closed. When scheduling staff contacted patients to assess interest in remaining on the waitlist, some families no longer needed scheduling because of linkage with other services (7%). Only 1 family requested other referrals. Special cause data points appeared at 2 points before the study start, revealing a dramatic dip in wait time for patients scheduled in month 3 and month 9. This shift did not reflect any preliminary change in process improvement; rather, requests to prioritize cases with urgent needs significantly decreased the typical wait time for patients during these months. The x-bar chart of wait time for the 8 additional clinicians not associated with these interventions (the balancing measure) revealed no negative impact on their scheduling procedures. Time to the first appointment for these clinicians remained the same throughout the intervention period, as shown in Figure 4. Also, schedulers expressed high satisfaction with the process change, noting more independence and efficiency with reduced steps in the scheduling process. Ultimately, the schedulers requested expansion of this process to other providers. There was no staff turnover during this time. The department collects satisfaction ratings quarterly for all clinicians. During the study period, a few families did express dissatisfaction with the amount of wait time for services. Because respondents remain anonymous, it is not clear whether these comments came from pain patients. However, the no-show rate dropped from 18% preintervention to 10% in the 2 years following the process improvements.

DISCUSSION

Ensuring timely access to therapeutic interventions is critically important, especially in light of the growing number of children seeking help for mental health services and the urgency that may come with pain or chronic medical conditions. For patients not in crisis but still in need of prompt interventions, short access times can improve treatment engagement and overall satisfaction.
Our pediatric hospital organization has made an unprecedented commitment of significant financial and personnel resources and new physical infrastructure to improve access to care and promote the best outcomes for children and adolescents with behavioral health diagnoses. 16 Despite this unparalleled investment and growth in resources, long wait times and limited access to specialty mental health services remain a challenge. Responsible and effective stewardship of the organization's expansion in behavioral health services will assure the long-term impact of the investment. This project demonstrated several interventions that led to a decrease in wait times despite a changing landscape of providers and an increase in referral numbers. There did not seem to be an additional treatment burden, as the average number of patients scheduled with the target clinician remained relatively stable. Improvements to administrative procedures that systematized communication and the frequency of calls decreased days to first appointment and improved the ability to move more rapidly through a waitlist. Maintaining accurate waitlists and inquiring about patient interest also minimized inflation of wait times. Conducting regular checks on waitlists helped identify other possible referral linkages for patients while shrinking wait times when families no longer needed services. Further, capitalizing on the general strengths of additional clinicians and minimizing the need for specialization facilitated the contribution of other providers toward a common goal of improved access. 17 This finding is contrary to other studies that suggest specialization is a way to decrease inflated wait times by limiting services to a specialized patient population. 18 By expanding the pool of clinicians, patients received efficient and effective care. There was an increase in days to first appointment midway through the data collection period, which paralleled changes in personnel and a large increase in referrals. The increased efficiency in scheduling for the primary clinician led to a decrease in no-shows and to plans for extending this process change to all clinicians. Moreover, administrative staff reported higher satisfaction with these new procedures, and patients experienced a decrease in access time overall. Despite improvements, inconsistencies in scheduling emerged as some clinicians left and new clinicians were hired. The scheduling staff optimized the use of the EHR to monitor call frequency and to stay within the designated time frame. As such, information from this project has been instrumental in designing subsequent QI initiatives that further streamline the scheduling process and explore increased functionality of the EHR (eg, creating alerts, using shared patient lists). Also, cross-training personnel will help to improve sustainability despite periodic changes in staffing and referrals. Additional efforts aimed at increasing the number of professionals who can cross-cover different patient populations are helping to meet the demand from referral increases among particular patient populations. There were some limitations to this study. First, we did not collect satisfaction surveys from the targeted population. It would be important to seek input about whether patients felt the current wait time was tolerable and to determine how many patients did not schedule because they identified other treatment options with shorter wait times.
Given the small scope of this study, it will be interesting to see how this same methodology affects other clinicians with high wait times and long waitlists. QI projects are currently underway in the department to increase patient and family engagement before the start of treatment. In particular, future studies will assess patient readiness and attitudes toward treatment as predictors of engagement in and completion of specialized treatment for pain management. Considering the vast numbers of patients seeking treatment for pain-related problems, information collected before treatment that can be linked to successful treatment completion may help guide schedulers in triaging cases to other types of services. Immediate, brief consultation via phone rather than traditional therapy could benefit some patients. 10,11 Providing alternatives to traditional therapy, in conjunction with a better triage process, is a possible next step to sustain short wait times into the future. The QI team is considering additional modifications to the scheduling process for future measurement. Currently, when families do not respond to prescheduling calls to assess interest, they remain on the waitlist. One proposed change involves sending letters to assess interest in treatment and requiring a response before scheduling. If families never call to indicate interest, their referral is closed. Families who eventually call can still be scheduled; however, implementing these steps earlier in the process may be a viable way to move through waitlists more rapidly, decrease no-show rates, and help children receive care faster.

CONCLUSIONS

Patients needing mental health care should never have to wait a long time for their first appointment. Although engagement in treatment involves several factors, lengthy wait times certainly complicate the odds of a positive outcome. This QI study illustrated the effectiveness of several interventions suggested in previous studies, demonstrating the sustainability of a process improvement initiative in a busy psychology clinic.
BSAC: Bayesian Strategy Network Based Soft Actor-Critic in Deep Reinforcement Learning

Adopting reasonable strategies is challenging but crucial for an intelligent agent with limited resources working in hazardous, unstructured, and dynamic environments to improve system utility, decrease overall cost, and increase mission success probability. Deep Reinforcement Learning (DRL) helps organize agents' behaviors and actions based on their state and can represent complex strategies (compositions of actions). This paper proposes a novel hierarchical strategy decomposition approach based on Bayesian chaining to separate an intricate policy into several simple sub-policies and organize their relationships as Bayesian strategy networks (BSN). We integrate this approach into the state-of-the-art DRL method, soft actor-critic (SAC), and build the corresponding Bayesian soft actor-critic (BSAC) model by organizing several sub-policies as a joint policy. We compare the proposed BSAC method with SAC and other state-of-the-art approaches such as TD3, DDPG, and PPO on standard continuous control benchmarks -- Hopper-v2, Walker2d-v2, and Humanoid-v2 -- in MuJoCo with the OpenAI Gym environment. The results demonstrate the promising potential of the BSAC method to significantly improve training efficiency. The open-source code for BSAC can be accessed at https://github.com/herolab-uga/bsac.

Introduction

In Artificial Intelligence (AI) methods, a strategy describes the general plan of an AI agent for achieving short-term or long-term goals under conditions of uncertainty, which involves setting sub-goals and priorities, determining action sequences to fulfill the tasks, and mobilizing resources to execute the actions [1]. It exhibits the fundamental properties of agents' perception, reasoning, planning, decision-making, learning, problem-solving, and communication in interaction with dynamic and complex environments [2]. Especially in the field of real-time strategy (RTS) games [3] and real-world implementation scenarios such as robot-aided urban search and rescue (USAR) missions [4], agents need to dynamically change strategies to adapt to the current situation based on the environment and their expected utilities or needs [5,6]. From a single-agent perspective, a strategy is a rule used by the agent to select actions to pursue goals, which is equivalent to a policy in a Markov Decision Process (MDP) [7]. More specifically, in reinforcement learning (RL), the policy dictates the actions that the agent takes as a function of its state and the environment, and the goal of the agent is to learn a policy maximizing the expected cumulative reward in the process. With advancements in deep neural networks, deep reinforcement learning (DRL) helps AI agents master more complex strategies (policies) and represents a step toward building autonomous systems with a higher-level understanding of the visual world [8]. Furthermore, in task-oriented decision-making, hierarchical reinforcement learning (HRL) enables autonomous decomposition of challenging long-horizon decision-making tasks into simpler subtasks [9]. Moreover, the hierarchy of policies collectively determines the agent's behavior by solving subtasks with low-level policy learning [10]. However, a single strategy might involve learning several policies simultaneously, which means the strategy consists of several tactics (sub-strategies) or actions, each executing a simple task, especially in the robot locomotion [11] and RTS game [12] domains.
As a branching variant of the Dueling Double Deep Q-Network (Dueling DDQN) [13], the Branching Dueling Q-Network (BDQ) [14] introduced an action branching architecture that concatenates the selected sub-actions into a joint-action tuple. Recently, the soft actor-critic (SAC) approach [15], an off-policy actor-critic algorithm based on the maximum entropy framework, has proven to be one of the leading approaches for model-free off-policy DRL and one of the most promising algorithms implemented in the real-robot domain [16]. Although significant progress has been achieved in those domains, it remains hard to explain formally how and why randomization works in DRL, which makes it difficult to design efficient models expressing the relationships between various strategies (policies) [17]. In particular, it is well known that the naive distribution of the value function (or the policy representation) across several independent function approximators can lead to convergence problems [18].

Contributions

In this paper, we first introduce the Bayesian Strategy Network (BSN), based on the Bayesian network, to decompose a complex strategy or intricate behavior into several simple tactics or actions. An example of a BSN-based strategy decomposition (or action dependencies) for a biped robot is shown in Fig. 1.

Related Work

Reinforcement learning is a framework that helps develop self-learning capability in AI agents (such as robots), but it is limited to lower-dimensional problems because of memory and computational complexity; deep RL integrates deep neural networks implementing function approximation and representation learning to overcome this limitation of RL [24]. On the other hand, current research and industrial communities have sought more software-based control solutions using low-cost sensors with fewer operating environment requirements and less calibration [25]. With promising algorithms such as SAC [16], DRL is ideally suited to robotic manipulation and locomotion because no predefined training data are required. Furthermore, the control policy can be obtained by learning and updating instead of hard-coding directions to coordinate all the joints. More specifically, compared with value-based RL, policy-based RL can avoid the policy degradation caused by value function error and is easier to apply to continuous action space problems [8]. Actor-critic algorithms can overcome common drawbacks of policy-based methods, such as poor data efficiency, but as classical policy gradient algorithms derived from policy iteration, they hardly converge in large-scale RL problems [15]. As a deterministic off-policy actor-critic algorithm, DDPG [20] can learn competitive policies from low-dimensional observations using the same hyperparameters and network structure, but it is impractical in complex environments with noise interference. On the other hand, many model-free DRL algorithms, such as trust region policy optimization (TRPO) [26], PPO [19], and A3C [27], require many new samples in each gradient step, which makes policy learning inefficient and increases the complexity of the tasks. As a maximum entropy framework method, SAC [15] substantially improves model performance and sample efficiency by integrating off-policy updates with a stable stochastic actor-critic formulation. However, like conventional DRL approaches, SAC still uses one actor policy network to fit the Q-value distribution.
Considering problems in multidimensional strategy or action spaces, if we can optimize each action or strategy dimension with a degree of independence and organize them appropriately, we can potentially trigger a dramatic reduction in the number of required network outputs [14]. To address this gap, we propose a novel DRL architecture termed Bayesian Soft Actor-Critic (BSAC), which is based on the soft actor-critic (SAC) approach. In BSAC, we decompose the agent's strategy (or action) into sub-actions and hierarchically organize them as various Bayesian Strategy Networks (BSN). In this way, several sub-policies (sub-actors) can be combined into the corresponding joint policy generating the distribution of a complex strategy or action, which can better fit the Q-value distribution and improve convergence and efficiency. This approach also enables imposing prior knowledge about the domain at hand in a systematic way.

Background and Preliminaries

This section provides the essential background on Bayesian Networks and Deep Reinforcement Learning. When describing a specific method, we use the notations and definitions from the corresponding papers.

Bayesian Networks

A Bayesian network structure G is a directed acyclic graph whose nodes represent random variables X_1, ..., X_n. Let Pa^G_{X_i} denote the parents of X_i in G, and NonDescendants_{X_i} denote the variables in the graph that are not descendants of X_i. Then G encodes the following set of conditional independence assumptions, called the local independencies and denoted by I(G): for each variable X_i, (X_i ⊥ NonDescendants_{X_i} | Pa^G_{X_i}). In other words, the local independencies state that each node X_i is conditionally independent of its non-descendants given its parents [28]. Furthermore, a Bayesian network can be represented by the chain rule of conditional probabilities (Eq. (1)).

Deep Reinforcement Learning

The essence of reinforcement learning (RL) is learning from interaction. When an RL agent interacts with the environment, it can observe the consequences of its actions and learn to change its behavior based on the corresponding rewards received. Moreover, the theoretical foundation of RL is the paradigm of trial-and-error learning rooted in behaviorist psychology [29]. Furthermore, deep reinforcement learning (DRL) trains deep neural networks to approximate the optimal policy and/or the value function. The deep neural network serving as a function approximator enables powerful generalization, especially in visual domains, general AI systems, robotics, and multiagent/multirobot systems (MAS/MRS) [30]. The various DRL methods can be divided into three groups: value-based methods, such as DQN [31]; policy gradient methods, such as PPO [19]; and actor-critic methods, such as the Asynchronous Advantage Actor-Critic (A3C) [27]. From the deterministic policy perspective, DDPG [20] provides a sample-efficient learning approach. On the other hand, from the entropy angle, SAC [15] considers a more general maximum entropy objective retaining the benefits of efficiency and stability. Here, we briefly discuss these methods as follows.

Value-based methods

The Deep Q-Network (DQN) is the breakthrough work in DRL, which learns policies directly from high-dimensional inputs. It uses the experience replay method to break sample correlations and stabilizes the learning process with a target Q-network [31].
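For reference, the target-network loss summarized above is usually written in the following standard form, taken from the DQN literature rather than reproduced from this manuscript's Eq. (2):

```latex
L(\theta) \;=\; \mathbb{E}_{(s, a, r, s') \sim \mathcal{D}}
\Big[ \big( r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta) \big)^{2} \Big]
```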
DQN minimizes the mean-squared error between the Q-network and its target network using the loss function in Eq. (2).

Policy gradient methods

Policy gradient methods optimize the parameterized policy directly. Specifically, the PPO method samples data by interacting with the environment and optimizes the objective function in Eq. (3) with stochastic gradient ascent [26]. Here, r_t(θ) denotes the probability ratio.

Actor-critic methods

The actor-critic architecture computes the policy gradient using a value-based critic function to estimate expected future rewards. In particular, A3C trains multiple agents in parallel environments and computes gradients locally [27]. The objective function of the actor is shown in Eq. (4), where H_θ(π(s_t)) is an entropy term used to encourage exploration.

Soft Actor-Critic

SAC is an off-policy actor-critic method that can be derived from a maximum entropy variant of the policy iteration method. The architecture considers a parameterized state value function V_ψ(s_t), a soft Q-function Q_θ(s_t, a_t), and a tractable policy π_φ(a_t|s_t). It updates the policy parameters by minimizing the Kullback-Leibler divergence between the policy π and the Boltzmann policy in Eq. (5).

Deep Deterministic Policy Gradient

DDPG is based on the actor-critic architecture and constructs an exploration policy µ by adding noise sampled from a noise process N to the actor policy. Meanwhile, the Q-function, as the critic, tells the actor which behaviors will obtain more value. The training process uses the loss function of the critic (expressed in Eq. (6)). Through the sampled policy gradient, the actor can be updated using Eq. (7).

Methodology

Building on top of the well-established suite of actor-critic methods, we introduce the incorporation of Bayesian networks to decompose a complex actor (policy) into several simple, independent sub-actors or sub-policies, termed a Bayesian Strategy Network (BSN). Then, we integrate the idea of the maximum entropy reinforcement learning framework from SAC, designing the method that results in our Bayesian soft actor-critic (BSAC) approach.

Bayesian Strategy Networks (BSN)

An overview of the BSN implementation in the actor-critic architecture is presented in Fig. 2. Suppose that the strategy T consists of m tactics (t_1, ..., t_m) and that their specific relationships can be described as the Bayesian Strategy Network (BSN) (Fig. 2(a)). We consider the probability distribution P_i as the policy for tactic t_i. Then, according to Eq. (1), the policy π(a_T ∈ T, s) can be described as the joint probability function (Eq. (8)) over the sub-policies (sub-actors) π_i(t_i, s). The training process using the actor-critic architecture with multiple actors organized by the BSN (i.e., the Bayesian chain) is represented in Fig. 2. Furthermore, different BSN models for a specific scenario might have distinct performance, since they present different joint policy (action) distributions to fit the Q-value distribution. In other words, it is essential to understand the dependency relationships among the tactics (actions or strategies) when building the BSN, and a specific scenario can be expressed as the corresponding BSN if we can clarify the conditional probability for each tactic.
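As a point of reference, the chain rule invoked as Eq. (1) and the joint-policy factorization described as Eq. (8) can be written as follows; this is a hedged reconstruction consistent with the surrounding text, and the manuscript's exact notation may differ:

```latex
% Eq. (1): chain rule of a Bayesian network over variables X_1, ..., X_n
P(X_1, \dots, X_n) \;=\; \prod_{i=1}^{n} P\!\left(X_i \,\middle|\, \mathrm{Pa}^{G}_{X_i}\right)

% Eq. (8)-style factorization: the joint policy over a strategy with m tactics is the
% product of sub-policies, each conditioned on the state and on its BSN parents
\pi\!\left(A_t \mid s_t\right) \;=\; \prod_{i=1}^{m} \pi_i\!\left(a_{i,t} \,\middle|\, \mathrm{Pa}(a_{i,t}),\, s_t\right),
\qquad A_t = (a_{1,t}, \dots, a_{m,t})
```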
From the reward perspective, if the reward mechanism also reflects the corresponding Q-value distribution of the expected combinations of dependent actions or strategies [32], it will significantly improve sample efficiency and reduce the training time for the specific BSN model. Derivation of Sub-Policy Iteration Considering that the agent interacts with an environment through a sequence of observations, strategies (action combinations), and rewards, the agent's goal is to select strategies in a fashion that maximizes cumulative future reward. Accordingly, we can describe the relationships between the actions in the strategy as a BSN, represented in Eq. (8). More formally, we can use corresponding deep convolutional neural networks to approximate the strategy policy network (actor) in Eq. (8) as Eq. (9). Here, A is the joint action or strategy space for the policy π; θ_i and a_it are the parameters and action space of each sub-policy network π_i. On the other hand, the value network (critic) q(s, A_t; w) evaluates the performance of the specific joint action A using the value function in Eq. (10) with parameter w. We can calculate the corresponding parameter gradients using Eq. (11). Through this process, we decompose the strategy policy network π_T into several sub-policy networks π_i and organize them as the corresponding BSN. Furthermore, according to Eq. (11), each sub-policy uses the same value network to update its parameters in every iteration. Bayesian Soft Actor-Critic (BSAC) Our method incorporates the maximum entropy concept into the actor-critic deep RL algorithm. By the additivity of entropy, the system's entropy can be expressed as the sum of the entropies of several independent sub-systems [33]. In our method, for each sub-policy evaluation step of soft policy iteration, the joint policy π is evaluated so as to maximize the sum of the entropies of the sub-policies π_i in the BSN, using the objective function in Eq. (12). To simplify the problem, we assume that the weights and the corresponding temperature parameters α_i for each action are the same in every sub-system. The soft Q-value can be computed iteratively, starting from any function Q : S × A → R and repeatedly applying a modified Bellman backup operator T^π [15]. In the Bayesian Soft Actor-Critic, T^π is given by Eq. (13). Considering that the evaluation of each sub-policy applies the same Q-value and weight, the soft state value function can be represented as in Eq. (14). Specifically, in each improvement step for sub-policy π_i, we update the corresponding policy for each state according to Eq. (15). Here, Z_π_old(s_t) is the partition function that normalizes the distribution. Furthermore, alternating soft policy evaluation and soft policy improvement in each soft sub-policy iteration guarantees convergence to the optimal maximum entropy joint policy among the sub-policy combinations. We use function approximators for both the Q-function and each sub-policy, optimizing the networks with stochastic gradient descent. Instead of using one policy to generate the actions, we organize the agent's behaviors and actions as a BSN and implement several sub-policies whose outputs are integrated into the corresponding action or tactic combinations connected through a Bayesian chain.
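Eqs. (12)-(15) are likewise not reproduced in this excerpt. Under the stated simplification of equal weights and a shared temperature α, one plausible form of the decomposed maximum-entropy objective and the corresponding soft state value (with the conditioning of each sub-policy on its parent tactics suppressed for brevity) is:

```latex
% Assumed reconstruction of the decomposed maximum-entropy objective (cf. Eq. (12))
% and the soft state value (cf. Eq. (14)); equal weights and a shared temperature are assumed.
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, A_t) \sim \rho_\pi}
  \Big[ r(s_t, A_t) + \alpha \sum_{i=1}^{m} \mathcal{H}\big(\pi_i(\,\cdot \mid s_t)\big) \Big]

V(s_t) = \mathbb{E}_{A_t \sim \pi}
  \Big[ Q(s_t, A_t) - \alpha \sum_{i=1}^{m} \log \pi_i(a_{i,t} \mid s_t) \Big]
```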
We consider a parameterized state value function V_ψ(s_t), a soft Q-function Q_θ(s_t, A_t), and several tractable sub-policies π_φi(a_it|s_t); the parameters of these networks are ψ, θ, and φ_i, respectively. The joint policy parameters can then be updated by minimizing the expected KL-divergence in Eq. (16). For each sub-policy network, we implement a Gaussian distribution whose mean and variance are generated by the neural network, and the corresponding sub-action is sampled from this action distribution. By integrating every sub-policy, we can form a policy combination that fits the specific joint action or strategy distribution. Furthermore, the joint actions can be generated by sampling from the different sub-distributions, which are domain-specific. As discussed above, we integrate the BSN into the SAC [15] algorithm and extend SAC to the proposed BSAC policy evaluation, improvement, and iteration, following the corresponding foundations in SAC: Lemma 1 (Bayesian Soft Policy Evaluation). Consider the soft Bellman backup operator T^π in Eq. (13) and define Q_{k+1} = T^π Q_k. The sequence Q_k will converge to the soft Q-value of the joint policy π for each sub-policy π_i as k → ∞, i ∈ {1, ..., m}. Proof. Due to the additivity of entropy [33], we can rewrite the entropy-augmented rewards and the update rule as Eq. (17) and Eq. (18), respectively. Then, by the standard convergence results for policy evaluation [29], the assumption |A| < ∞ guarantees that the BSAC entropy-augmented reward is bounded. Here, we assume that each sub-policy π_i has the same weight in the joint policy π. Lemma 2 (Bayesian Soft Policy Improvement). Let the joint policy π_new optimize the minimization problem defined in Eq. (15), and let π_old ∈ Π; then Q_π_new(s_t, A_t) ≥ Q_π_old(s_t, A_t) for all (s_t, A_t) ∈ S × A with |A| < ∞. Proof. Suppose the joint policy is π_old ∈ Π, with soft state joint-action value Q_π_old and soft state value V_π_old. By defining π_new as in Eq. (15) and Eq. (19), we can deduce Eq. (20). Theorem 1 (Bayesian Soft Policy Iteration). Iterating the Bayesian soft policy evaluation and Bayesian soft policy improvement from any joint policy π ∈ Π converges to a joint policy π* such that Q_π*(s_t, A_t) ≥ Q_π(s_t, A_t) for all π ∈ Π and all (s_t, A_t) ∈ S × A, assuming |A| < ∞. Proof. According to Lemma 2, the sequence Q_π_k is monotonically increasing and converges to the specific π*, where k is a finite iteration index. Furthermore, considering that for any joint policy π ∈ Π with π ≠ π* at convergence, Eq. (22) gives a soft value Q*(s_t, a_t) larger than the soft value of any other joint policy in Π, the joint policy π* is optimal in Π. Loss Function There are numerous ways to aggregate the distributed temporal-difference (TD) errors across the sub-actions into a specific loss; a straightforward approach defines the loss as the expected value of a function of the averaged TD errors across the sub-actions [14]. More specifically, the soft value function is trained to minimize the squared residual error of each sub-policy in the BSN, and we define the loss as the expected value of the mean squared TD error across the sub-policies, as expressed in Eq. (23). Here, D_i is the sub-distribution of previously sampled states and sub-actions. According to Eq. (16), we can write the corresponding objective for updating each sub-policy's parameters as Eq. (24). Each sub-policy applies a neural network transformation in the reparameterization process (Eq. (25)),
where ε_t is an input noise vector sampled from some fixed distribution. Instead of using one policy network with a single Gaussian distribution, as in SAC, to fit the distribution of the Q-value, BSAC generates several simple distributions based on the BSN to adapt to the given model. Algorithm 1: Bayesian Soft Actor-Critic (BSAC). Initialize the parameter vectors ψ, ψ̄, θ, φ_1, ..., φ_m; for each iteration: for each environment step: sample a_1t ∼ π_φ1(a_1t|s_t), ... In other words, we can generate a more suitable joint policy distribution by organizing several simple sub-policy networks to fit the Q-value distribution induced by the reward mechanism. Like SAC, we use two Q-functions to mitigate positive bias in the policy improvement step and take the minimum of the two for the value gradient. Furthermore, we use a replay buffer that collects experience from the environment under the current policy, and we update the parameters of the approximators with stochastic gradients computed from sampled batches. The proposed Bayesian Soft Actor-Critic (BSAC) algorithm is described in Alg. 1. Experiments and Results We evaluate the performance of the proposed BSAC agent in several challenging continuous control environments with varying action combinations and complexity. We use the MuJoCo physics engine [22] to simulate our experiments in the OpenAI Gym environment [23]. In our experiments, we use three of the standard continuous control benchmark domains - Hopper-v2, Walker2d-v2, and Humanoid-v2. We first study the performance of the proposed BSAC against the state-of-the-art continuous control algorithm SAC [15] and other benchmark DRL algorithms: PPO [19], DDPG [20], and TD3 [21]. Then, we compare the performance of BSAC variants built on different BSN models. Hopper-v2 experiments In this experiment, we decompose the hopper's behaviors into three sub-actions - hip action, knee action, and ankle action - and organize them as a chain in the corresponding BSN (Fig. 3). In this BSN, the outputs of tactics t_1 and t_2 are the inputs of tactics t_2 and t_3, respectively, expressing the conditional dependence relationships among the three actions. In other words, the hopper's joint action is described as the combination of three distributions, corresponding to the joint distribution of the BSN formalized in Eq. (26). Furthermore, in this BSAC model, we implement three sub-policy networks (sub-actors) that generate the three policy distributions - P(t_1), P(t_2|t_1), and P(t_3|t_2). We then obtain the hip, knee, and ankle actions by sampling from those distributions and integrate them into one joint action. Both BSAC and SAC outperformed TD3, DDPG, and PPO in this implementation, and the results shown in Fig. 3 demonstrate that BSAC performs competitively against SAC, presenting faster convergence and higher average rewards per episode. Owing to the maximum entropy formulation, our method exhibits both sample efficiency and learning stability compared with the other methods. Walker2d-v2 experiments Here, we build a BSN model for the Walker2d-v2 domain and decompose the walker's behaviors into five actions - hip action, left knee action, right knee action, left ankle action, and right ankle action. According to the structure of the walker, these actions can be organized as a tree in the BSN, which is slightly more complex than the Hopper-v2 BSN (see Fig. 4). Similarly, we can formalize the BSN for this Walker domain in Eq. (27).
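The hip-knee-ankle chain described for Hopper-v2 can be made concrete with a short sketch. This is not the authors' implementation: the module sizes, the observation dimension, and the tanh squashing are illustrative assumptions, and the same pattern extends to the tree-structured Walker2d-v2 model discussed next.

```python
# Illustrative chain of Gaussian sub-policies (hip -> knee -> ankle); hypothetical sizes.
import torch
import torch.nn as nn

class SubPolicy(nn.Module):
    """Gaussian head conditioned on the state and the parent sub-action (if any)."""
    def __init__(self, state_dim, parent_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + parent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)

    def forward(self, state, parent=None):
        x = state if parent is None else torch.cat([state, parent], dim=-1)
        h = self.net(x)
        std = self.log_std(h).clamp(-5, 2).exp()
        dist = torch.distributions.Normal(self.mu(h), std)
        a = dist.rsample()                   # reparameterized sample
        logp = dist.log_prob(a).sum(-1)      # Gaussian log-density (tanh correction omitted)
        return torch.tanh(a), logp           # squash to the joint limits

state_dim = 11                               # assumed Hopper-v2 observation size
hip   = SubPolicy(state_dim, 0, 1)
knee  = SubPolicy(state_dim, 1, 1)           # conditioned on the hip sub-action
ankle = SubPolicy(state_dim, 1, 1)           # conditioned on the knee sub-action

s = torch.randn(1, state_dim)
a1, lp1 = hip(s)
a2, lp2 = knee(s, a1)
a3, lp3 = ankle(s, a2)
joint_action = torch.cat([a1, a2, a3], dim=-1)
joint_logp = lp1 + lp2 + lp3                 # chain rule: log pi = sum_i log pi_i
```

Summing the conditional log-probabilities recovers the joint log-probability, which is what the entropy terms in a decomposed objective of this kind would act on.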
For the corresponding BSAC model, we use five sub-policy networks to approximate the distributions - P(t_1), P(t_2|t_1), P(t_3|t_1), P(t_4|t_2), and P(t_5|t_3) - in the BSN, respectively. In particular, for tactic t_1, the sub-policy needs to generate two actions - the left hip action and the right hip action - as inputs for the subsequent tactics t_2 and t_3. By sampling from the five distributions, we can integrate the results into one joint action for the walker. Comparing the performance of BSAC with SAC, TD3, DDPG, and PPO in Walker2d-v2, we further show that BSAC can achieve higher performance than the other DRL algorithms. Furthermore, as the complexity of the agent's behaviors and strategy increases, decomposing the complex behaviors into simple actions or tactics, organizing them as a suitable BSN, and building the corresponding joint policy model in BSAC can substantially increase training efficiency. Humanoid-v2 experiments The Humanoid-v2 domain has higher complexity than the other domains in the MuJoCo Gym collection. We implement different BSN models in this domain based on different action decomposition approaches for the humanoid body. First, we consider a composition of five sub-policies (tactics) (BSAC-5P), as shown in Fig. 5. Here, we organize the joints of the humanoid into the specific BSN formalized in Eq. (28). In the BSAC-5P model, the joint policy is represented by five different sub-policies, which generate 1) the abdomen actions, 2) the actions of the right hip and right knee, 3) the actions of the left hip and left knee, 4) the actions of the right shoulder and right elbow, and 5) the actions of the left shoulder and left elbow, respectively. Fig. 5 shows that BSAC again converges faster and achieves higher performance than the other methods in the Humanoid-v2 experiments. Comparing different BSN models This section analyzes two other BSN models corresponding to different action decomposition methods in the Humanoid-v2 domain. As shown in Fig. 6, the BSAC three-sub-policy model (BSAC-3P) generates the distribution P(t_1) of the abdomen actions, the distribution P(t_2|t_1) of the shoulder and elbow actions, and the distribution P(t_3|t_1) of the hip and knee actions. Within a sub-space, the actions are treated as independent. For example, we do not model the conditional dependence between the shoulder and elbow actions; instead, we merge them into one joint action (t_2) that depends on the abdomen joint (t_1), and the same applies to the hip and knee actions (t_3). The results shown in Fig. 6 demonstrate that all BSAC models achieve higher performance than SAC. Among the BSAC variants, BSAC-9P presents more advantages than BSAC-5P and BSAC-3P, and the five-sub-policy model performs worst. This implies that the joint policy distribution designed in the BSAC-9P model is closer to the Q-value distribution than those of the other BSAC models, and that BSAC-9P describes more reasonable relationships among the actions under the current reward mechanism. Specifically, a conventional RL approach is to specify a unimodal policy distribution centered at the maximal Q-value and extending to the neighboring actions to provide noise for exploration [32]. When exploration is biased toward a local passage in this way, the agent refines its policy there and ignores other options completely [35].
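One way to read the BSAC-3P/5P/9P comparison is that each variant is simply a different parent map over groups of joints. In the hypothetical sketch below, the BSAC-3P parents follow the text, while the BSAC-5P parents are an assumption, since Eq. (28) is not reproduced in this excerpt.

```python
# Hypothetical parent maps for alternative Humanoid-v2 decompositions.
# Keys are sub-policy names; values are (parent name or None, controlled joint groups).
BSAC_3P = {
    "abdomen": (None,      ["abdomen"]),
    "arms":    ("abdomen", ["shoulders", "elbows"]),
    "legs":    ("abdomen", ["hips", "knees"]),
}

BSAC_5P = {   # parent assignments assumed, not taken from the paper
    "abdomen":   (None,      ["abdomen"]),
    "right_leg": ("abdomen", ["right_hip", "right_knee"]),
    "left_leg":  ("abdomen", ["left_hip", "left_knee"]),
    "right_arm": ("abdomen", ["right_shoulder", "right_elbow"]),
    "left_arm":  ("abdomen", ["left_shoulder", "left_elbow"]),
}

def topological_order(parent_map):
    """Order sub-policies so every parent is sampled before its children."""
    order, placed = [], set()
    while len(order) < len(parent_map):
        for name, (parent, _) in parent_map.items():
            if name not in placed and (parent is None or parent in placed):
                order.append(name)
                placed.add(name)
    return order

# Sampling follows this order, feeding each parent's sub-action to its children,
# exactly as in the chain example above.
print(topological_order(BSAC_5P))
# ['abdomen', 'right_leg', 'left_leg', 'right_arm', 'left_arm']
```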
In other words, if we can design a suitable joint policy distribution consisting of several simple policy distributions to fit the corresponding Q-value distribution, it can substantially boost the sample efficiency of the agent's training. Generally speaking, the reward mechanism plays a crucial role in the agent's training, directly affecting the agent's final behaviors and strategies. Based on the reward mechanism, BSAC provides an approach to generating a more suitable joint action or strategy distribution to fit the value distribution, which improves the convergence efficiency and the performance of the model. How to design the BSAC for a specific domain therefore needs to be studied further. Limitations of Our Work As discussed in Sec. 1, SAC theoretically guarantees convergence to the optimal solution within a finite number of episodes, but it considers only a single policy (actor) to fit the complex Q-value distribution, leading to inefficient sample exploration. To overcome this limitation, we introduce Bayesian strategy networks (BSN) into DRL and propose the Bayesian SAC (BSAC), which learns the Q-function through several sub-policies organized as a BSN tailored to the specific scenario. Although the proposed BSAC shows excellent performance compared to state-of-the-art algorithms, it still has issues that need further analysis and improvement, such as the saturation problem and the efficiency of BSAC model optimization. Moreover, BSAC is more complex than SAC, since it involves more parameters to learn. However, considering the balance between sample complexity and superior performance, BSAC provides a more general, flexible, and efficient approach to learning complex problems. Conclusions We introduce a novel agent strategy composition approach termed the Bayesian Strategy Network (BSN) for achieving efficient deep reinforcement learning (DRL). Through the conditional coupling of individual tactics or actions based on the Bayesian chain rule, the BSN decomposes an intricate strategy or joint action into several simple tactics or actions and organizes them as a knowledge graph. Then, by designing corresponding sub-policy networks according to the BSN, we can build a joint policy that generates the complex strategy or action distribution. Furthermore, based on the Soft Actor-Critic (SAC) algorithm, we propose a new DRL model termed the Bayesian Soft Actor-Critic (BSAC), which integrates the BSN and forms a joint policy that better adapts to the Q-value distribution.
Contemporary Drug Therapy for Renal Cell Carcinoma—Evidence Accumulation and Histological Implications in Treatment Strategy Renal cell carcinoma (RCC) is a heterogeneous disease comprising a variety of histological subtypes. Approximately 70–80% of RCC cases are clear cell carcinoma (ccRCC), while the remaining subtypes constitute non-clear cell carcinoma (nccRCC). The medical treatment of RCC has greatly changed in recent years through advances in molecularly targeted therapies and immunotherapies. Most of the novel systemic therapies currently available have been approved based on ccRCC clinical trial data. nccRCC can be subdivided into more than 40 histological subtypes that have distinct clinical, histomorphological, immunohistochemical, and molecular features. These entities are listed as emerging in the 2022 World Health Organization classification. The diagnosis of nccRCC and treatments based on cancer histology and biology remain challenging due to the disease's rarity. We reviewed clinical trials focused on recent discoveries regarding clinicopathological features. Introduction Approximately 70–80% of renal cell carcinoma (RCC) cases are diagnosed as clear cell carcinoma (ccRCC); the remaining subtypes are categorized as non-clear cell carcinoma (nccRCC). Accordingly, the development of new drugs has focused on ccRCC, with little attention given to nccRCC. The prognosis of patients with metastatic renal cell carcinoma (mRCC) has improved with the use of immuno-oncology (IO) agents compared to that achieved through the use of cytokines [1] and molecularly targeted therapy (TT) [2][3][4]. Novel systemic therapies are now available, having been approved based on ccRCC clinical trial data. Patients with nccRCC have few evidence-based treatment options and tend to have poor prognoses. As a disease type, nccRCC includes various histological subtypes with distinct clinical and biological characteristics. Herein, we review the pivotal clinical trials for advanced RCC therapy, organized by histological type, and highlight avenues for further research. Target Therapy Era Since December 2005, several molecularly targeted agents have been approved for the treatment of advanced RCC [9][10][11][12][13][14][15][16][24]. ccRCC is associated with mutation or inactivation of the von Hippel-Lindau (VHL) gene and the resultant over-expression of vascular endothelial growth factor (VEGF) [25]. The first drug to target VEGF in the treatment of ccRCC was the monoclonal antibody bevacizumab [24]. In addition, multi-targeted tyrosine kinase inhibitors (TKI) including sorafenib, sunitinib, pazopanib, axitinib, cabozantinib, and lenvatinib are currently in use [9][10][11][12][13][14]. Mammalian target of rapamycin (mTOR) is the second validated therapeutic target, addressed by the inhibitors temsirolimus and everolimus [15,16]. For more than a decade, sequential treatment with targeted agent monotherapy was the leading approach, helping to improve survival. However, for first-line therapies, the response duration is estimated at 12 months; subsequently, treatment resistance may develop, highlighting a need for alternative strategies [13,14]. ccRCC with Sarcomatoid and/or Rhabdoid Differentiation All RCC types involving sarcomatoid or rhabdoid features are associated with poor prognosis [26]. In ccRCC, these features reduce the efficacy of targeted therapies such as VEGF-TKI and mTOR-I.
A phase II single-arm trial of sunitinib and gemcitabine in patients with sarcomatoid features aimed to assess the impact of adding cytotoxic agents to the regimen. The overall response rate (ORR) was 26%, with a median time to progression (TTP) of 5 months and a median OS of 10 months. The combination might be more efficacious than either therapy alone; however, it is not as valuable in RCC patients without sarcomatoid features [27,28] (Table 2). Papillary RCC (pRCC) is the most common subtype of nccRCC, and some evidence regarding cytokine efficacy in this context is available. The Program Etude Rein Cytokines (PERCY) Quattro trial investigated IFNα, IL-2, medroxyprogesterone, and a combination thereof in the treatment of this disease. Patients with various RCC histology types, including nccRCC, were randomized in a two-by-two factorial design. No objective response was observed in the nccRCC patients with papillary (n = 21), chromophobe (n = 4), collecting duct (n = 1), or sarcomatoid (n = 3) subtypes, and no evidence of a survival benefit was reported [34] (Table 3). Target therapy era: target VEGF or mTOR SWOG 1107 was an RCT dedicated to pRCC patients (n = 50). The study compared the MET inhibitor tivantinib with or without the EGFR-TKI erlotinib as a first- and second-line treatment. The median PFS (2.0 months vs 5.4 months) and OS (10.3 months vs 11.3 months) were comparable in both arms [35]. In the phase III Advanced Renal Cell Carcinoma (ARCC) trial, high-risk RCC patients with various histological types were randomized to receive IFNα or the mTOR inhibitor temsirolimus. A subgroup analysis of the outcomes for nccRCC (n = 37), which included mostly patients with pRCC, showed a median PFS (mPFS) of 7.0 months and a median OS (mOS) of 11.6 months for temsirolimus; the corresponding values for IFNα were 1.8 months and 4.3 months, respectively. The disease control rate (DCR) was 41% and 8% in the patients receiving temsirolimus and IFNα, respectively. Temsirolimus was more effective at improving PFS and OS than IFNα in patients with nccRCC [36]. Two principal RCTs have compared everolimus to sunitinib as a first-line treatment for advanced nccRCC with various histological types. In the ASPEN trial [37], pRCC (n = 70 of 109) was associated with an ORR of 5% and an mPFS of 5.5 months for everolimus, and an ORR of 24% and an mPFS of 8.1 months for sunitinib; the OS estimates were not reported. In the ESPN trial [38], pRCC (n = 27 of 72) was associated with an mPFS of 4.1 months and an mOS of 14.9 months for everolimus; the corresponding values for sunitinib were 5.7 months and 16.6 months, respectively. Overall, sunitinib was preferred over everolimus as a first-line treatment in pRCC. Furthermore, several single-arm phase II trials involved patients with papillary histology alone. The RAPTOR trial (n = 50), which evaluated everolimus as a first-line treatment, reported an mPFS of 4.1 months and an mOS of 21.4 months. The SUPAP trial (n = 61) evaluated sunitinib as a first-line therapy for pRCC in two cohorts of patients with either type 1 or type 2 disease; the mPFS values were 6.6 months and 5.5 months, and the mOS values were 17.8 months and 12.4 months, respectively. The two trials showed only slight differences, and both sunitinib and everolimus remain in use as first-line treatments [39,40]. A separate trial evaluated combination therapy with everolimus plus bevacizumab as a first-line treatment, which resulted in an ORR of 29% for nccRCC.
In a subgroup analysis (n = 18 of 34), pRCC, including unclassified RCC with papillary features (uRCC; renamed RCC NOS by the WHO in 2022), was associated with an ORR of 43% vs 11%, an mPFS of 12.9 vs 1.9 months, and an mOS of 28.2 vs 9.3 months compared with the other disease types, respectively [41,42]. Recently, a phase II study of first-line lenvatinib plus everolimus in nccRCC reported that in pRCC (n = 20 of 31), the ORR and DCR were 15% and 85%, with an mPFS of 9.2 months and an mOS of 11.7 months, respectively [43]. Biology-driven era: target MET MET is a well-documented alteration in pRCC, and MET inhibitors are potential treatments for diseases with papillary histology. Foretinib is a dual MET/VEGFR2-targeting inhibitor. In a phase II study (n = 67), patients treated with first- and second-line foretinib had an mPFS of 9.3 months (mOS not reached); the ORR ranged from 9% in patients without a germline MET mutation to 50% in those with one [47]. A phase II trial (n = 109) of savolitinib, a highly selective MET inhibitor, as an any-line treatment for pRCC reported an ORR of 18% for MET-driven disease but no responses in MET-independent disease. Meanwhile, the mPFS for patients with MET-driven and MET-independent pRCC was 6.2 months and 1.4 months, respectively [48]. Crizotinib is a TKI that targets MET in addition to ALK and ROS1. The CREATE trial (n = 23) of any-line crizotinib for type 1 pRCC showed a partial response in two of four (50%) patients with MET alterations but an ORR of 6.3% in MET wild-type patients [44]. These results suggest that molecular characterization of MET status is a useful predictive marker of response to MET inhibitors in pRCC. In 2020, the results of the SAVOIR phase III RCT were published [45]. This trial randomized pRCC patients with MET-driven tumors (chromosome 7 gain, MET or HGF amplification, or MET kinase mutation) in a 1:1 ratio to receive either savolitinib or sunitinib. Although 254 patients were screened, only 60 met the criteria because of a lack of MET-driven alterations. The mPFS was 7.0 months with savolitinib and 5.6 months with sunitinib (hazard ratio (HR), 0.71; p = 0.313). The mOS was not reached for the patients on savolitinib, while it was 13.2 months for sunitinib (HR, 0.51; p = 0.110). The ORR estimates were 27% and 7% for savolitinib and sunitinib, respectively, with some evidence of better efficacy and lower toxicity for savolitinib than for sunitinib [48]. In 2021, the SWOG 1500 PAPMET phase II trial reported results that may change the standard of care for advanced pRCC. Cabozantinib was evaluated alongside sunitinib, savolitinib, and crizotinib in primarily treatment-naïve patients with pRCC (n = 147). The PFS was better for cabozantinib than for sunitinib (median 9.0 vs 5.6 months; HR, 0.60; p = 0.019), with corresponding ORR estimates of 23% and 4%, respectively. Savolitinib and crizotinib were removed from the trial due to poor outcomes. The mOS was 20.0 months for cabozantinib and 16.4 months for sunitinib. These results are consistent with those of the CABOSUN trial, which applied the same randomization in ccRCC [46]. Chromophobe RCC (chRCC) Cytokine era Metastatic chromophobe RCC (chRCC) is a very rare disease; consequently, it has had no dedicated trials. The PERCY Quattro trial investigated patients with various RCC histology types including nccRCC.
No objective response was observed in the evaluable patients with chRCC (n = 4) [34]. Target therapy era In general, chRCC is an indolent subtype of RCC; however, the 5–10% of patients whose disease progresses have poor outcomes [61]. No RCT has been dedicated to chRCC. However, the ASPEN and ESPN trials included chRCC patients (Table 3). The ASPEN trial (n = 16 of 109) reported an mPFS of 11.4 months for everolimus and 5.5 months for sunitinib (OS data not shown). The ESPN trial (n = 12 of 72) reported that the mPFS was not reached and the mOS was 25.1 months for everolimus; the corresponding values for sunitinib were 8.9 months and 31.6 months, respectively. These results suggest a slight PFS benefit for everolimus, although the differences were not significant. Recent findings from a phase II trial of first-line lenvatinib plus everolimus in nccRCC are available: in patients with chRCC (n = 9 of 31), the ORR and DCR were estimated at 44% and 78%, respectively, with an mPFS of 13.1 months (mOS not reached) [43]. Biology-driven era Knowledge of the genetic basis of sporadic chRCC remains limited; consequently, no RCT has been conducted to date in this setting. Collecting Duct Carcinoma (CDC) Chemotherapy Collecting duct carcinoma (CDC) is an aggressive subtype; however, there have been no dedicated RCTs to date. CDC behaves more like an aggressive urothelial cancer than like RCC. Accordingly, a commonly used medical treatment for advanced CDC is platinum-based chemotherapy. There have been three single-arm phase II trials of traditional chemotherapy in this context. First, a study of gemcitabine plus cisplatin or carboplatin (n = 23) reported an ORR of 26%, with an mPFS of 7.1 months and an mOS of 10.5 months [54]. Second, a study of the VEGF-TKI sorafenib in combination with gemcitabine and cisplatin (n = 26) reported an ORR of 30.8%, a DCR of 84.6%, an mPFS of 8.8 months, and an mOS of 12.5 months [55]. Third, the BEVAEL trial using bevacizumab plus gemcitabine and platinum in CDC and SMARCB1-deficient renal medullary carcinoma (RMC) reported an ORR of 39% with an mOS of 11 months (n = 26 of 34). Based on these results, gemcitabine and cisplatin regimens, without the addition of other agents, remain the standard treatment for patients with CDC; nevertheless, the outcomes remain poor [56]. Target therapy era A phase II trial of sunitinib for nccRCC (n = 6 of 57) included the CDC subtype, reporting an ORR of 0% and an mPFS of 3.1 months [57]. In addition, the BONSAI (n = 23) phase II single-arm trial of cabozantinib as a first-line treatment for mCDC was presented at ESMO 2021, reporting an ORR of 35% with an mPFS of 4 months and an mOS of 7 months. The authors concluded that cabozantinib has promising efficacy and acceptable tolerability in mCDC patients [58]. TFE3- and TFEB-Rearranged RCCs (Formerly Microphthalmia Transcription Factor (MiT) Family Translocation RCC (tRCC) in WHO 2016) tRCCs are very rare tumors that are more aggressive in adults [61,62]. No RCTs have focused on these patients. Some retrospective studies have shown a modest response to targeted therapy [63]. Fumarate Hydratase (FH)-Deficient RCC Hereditary leiomyomatosis and renal cell cancer (HLRCC) is a familial cancer syndrome associated with an aggressive RCC type and is caused by a germline FH mutation. Sporadic FH mutations can also occur [61,64]. An FH mutation may inactivate the enzyme and alter the function of the tricarboxylic acid cycle.
A phase II study of bevacizumab and erlotinib enrolled a total of 83 patients with pRCC undergoing first-and second-line treatments (AVATAR trial); the sample was split approximately evenly between HLRCC and sporadic papillary RCC. HLRCC was associated with an ORR of 64% and a PFS of 21.1 months; the corresponding values for pRCC were 35% and 8.8 months, respectively. This regimen may be a suitable option for a select population [53]. SMARCB1-Deficient Renal Medullary Carcinoma (RMC) Chemotherapy RMC is a rare RCC type characterized by the loss of tumor suppressor SMARCB1 and high mortality rates. No RCT has focused on this subtype; however, several retrospective studies have been reported. RMC does not respond to TKIs; thus, platinum-based chemotherapy such as carboplatin plus paclitaxel is the preferred first-line therapy. Nevertheless, the associated response rate remains at 29% and the response duration tends to be brief (n = 52). As a second-line treatment, gemcitabine plus doxorubicin (n = 16) has shown some clinical activity in patients with platinum-refractory RMC (ORR, 18.8%; PFS and OS of 2.8 months and 8.1 months, respectively) [59,60]. Biology-driven era-target EZH2 An EZH2 inhibitor termed tazemetostat was recently approved by the Food and Drug Administration (FDA) for the treatment of another SMARCB1-deficient malignancy, namely, epithelioid sarcoma [65]. A phase II trial (NCT02601950) involved 14 patients with RMC and one patient with RCCU-MP; however, enrolment in the tazemetostat trial has been suspended due to safety concerns. The loss of SMARCB1 may induce proteotoxic and replication stress; thus, a proteasome inhibitor is a potential therapeutic agent. A phase II clinical trial (NCT03587662) is evaluating the combination of the proteasome inhibitor ixazomib with gemcitabine and doxorubicin in RMC [66]. The results of these studies are forthcoming. ccRCC The CheckMate 025 and CheckMate 214 studies, pivotal phase III studies of nivolumab (NIVO) and nivolumab + ipilimumab (NIVO + IPI), reported a significant OS benefit with a moderate improvement in PFS. In addition, several IO + VEGF or TKI combination therapy regimens were approved after the publication of results from pivotal phase III trials such as KEYNOTE 426, JAVELIN Renal 101, Immotion 151, Checkmate 9ER, and CLEAR. Most of these regimens replace sunitinib, which had previously been the standard treatment (Table 1). Currently, we are experiencing the era of IO combination therapy (IO combo). The latest ESMO guidelines [67] for all risk groups recommend an IO combo as a first-line therapy. Meanwhile, the OS signals in favorable risk patients are still immature and not yet superior to sunitinib. IO Monotherapy CheckMate 025 has shown superior efficacy for nivolumab over everolimus in patients with ccRCC previously treated with one or two antiangiogenic regimens with improved safety and tolerability [68]. Concurrently, the use of an IO re-challenge following other IO combination therapies is considered an experimental approach rather than the standard of care. IO Doublet The CheckMate 214 data demonstrated a survival benefit for patients treated with nivolumab and ipilimumab compared with those treated with sunitinib in intermediate/poor risk advanced RCC, ushering in the front-line IO era. 
Data from a 5-year follow-up of the IO doublet showed a durable clinical benefit with NIVO + IPI, suggesting that patients who respond and remain alive at the 3-year mark may maintain those outcomes at the 5-year mark, with a plateau in the survival-curve tails [17,18]. An IO-doublet rechallenge following IO monotherapy is an experimental approach, with a lower response rate than expected at this point [69]. Target Therapy in IO Era The role of targeted agents, specifically as monotherapy, is controversial. In first-line therapy, VEGF-TKI is recommended in combination with IO agents; VEGF-TKI monotherapy remains an acceptable alternative when IO therapy is contraindicated or not available. Targeted therapy alone may also be an option in selected patients, such as those presenting with low-volume, asymptomatic, and slow-growing disease. We should continue to monitor mature, longer-follow-up OS data in these favorable-risk patients, as a recent report showed a loss of the long-term OS benefits observed in the KEYNOTE-426 study [70]. As a subsequent therapy, targeted agents that have not yet been given are recommended. Robust prospective data after IO combo therapy are lacking, but a few prospective and retrospective data sets support the expectations for sequencing therapy. In the AFTER I-O study, our group retrospectively analyzed patients who had participated in CheckMate 025 or CheckMate 214 (n = 45) and received targeted agents after nivolumab with or without ipilimumab. The median PFS2 was 36.7 months after NIVO and 32.0 months after NIVO + IPI. The median OS from first-line therapy was 70.5 months for patients treated with NIVO, while it was not reached with NIVO + IPI. The safety profile of each TT after each IO was similar to previous reports on their use as first-line therapies. These results indicate that sequential targeted therapy after IO may improve survival; nevertheless, these findings should be approached with caution, as they come from a small retrospective study [71,72]. A separate retrospective study assessed the clinical effectiveness of targeted therapy after IO therapy; patients who received a VEGF-TKI had better clinical outcomes than those who received an mTOR inhibitor following IO therapy [73]. ccRCC with Sarcomatoid and/or Rhabdoid Differentiation The CheckMate 214 post hoc analyses of nivolumab plus ipilimumab (NIVO + IPI) for ccRCC with sarcomatoid features showed promising results. With a minimum follow-up of 42 months, the median OS was better for NIVO + IPI (not reached) than for sunitinib (14.2 months), with a corresponding HR of 0.45. The PFS was also better for NIVO + IPI than for sunitinib (median 26.5 vs 5.1 months; HR, 0.54). The reported ORR was 60.8% with NIVO + IPI vs 23.1% with sunitinib, with CR rates of 18.9% vs 3.1%, respectively. Furthermore, other IO combination trials such as Keynote 426, Immotion 151, and JAVELIN Renal 101 evaluated sarcomatoid RCC in subgroup analyses. Updated analyses of the CheckMate 9ER and CLEAR trials in 2021 evaluated outcomes stratified by sarcomatoid features, showing that an IO combo achieved better outcomes than sunitinib as a first-line treatment in advanced sarcomatoid ccRCC. Recent studies have suggested that the benefits of IO combo therapy may be associated with high genomic instability, an elevated T-effector signature, and higher PD-L1 expression and tumor mutational burden compared to those in RCC without sarcomatoid features.
These results suggest that an IO combo may be a suitable first-line treatment for RCC with sarcomatoid differentiation [30][31][32][33] (Table 2). Non-ccRCC pRCC The CALYPSO study was a phase II trial investigating the combination of MET and PD-L1 inhibition in advanced pRCC. This trial enrolled 41 patients who were either VEGF-TKI-naïve or -refractory. The patients received both the highly selective MET inhibitor savolitinib and the anti-PD-L1 agent durvalumab. The ORR was 29%, with an mPFS of 4.9 months and an mOS of 12.3 months. Among 14 patients with MET-driven tumors, the confirmed response rate was 57%, with a response duration of 9.4 months. Further, the mPFS and mOS in MET-driven tumors were 10.5 months and 27.4 months, respectively; the PFS was substantially longer for patients with MET-driven than with non-MET-driven tumors. This IO combo of savolitinib and durvalumab has encouraging clinical activity in patients with MET-driven pRCC [51]. In some trials for nccRCC with various histologies, pRCC was the most common subtype. Two trials with a single IO agent, the anti-PD-1 antibodies pembrolizumab and nivolumab, showed inconsistent results. The phase II KEYNOTE-427 cohort B, treated with first-line pembrolizumab, included 165 nccRCC patients. The 118 of 165 patients with pRCC showed ORR and DCR rates of 28.8% and 47.5%, respectively, compared to corresponding values of 26.7% and 43.0% in the intention-to-treat (ITT) population. The phase III/IV CheckMate 374 trial with nivolumab enrolled 44 nccRCC patients who had received zero to three prior treatments, and reported an ORR of 8.3% and a DCR of 50.0% in the 24 of 44 patients with pRCC, compared to 13.6% and 50.0% in the ITT population. The discrepancy in ORR between the two trials may be accounted for by the studies' eligibility criteria and the inclusion of pretreated patients [49,50]. A phase II trial (NCT03635892) of nivolumab and cabozantinib (IO combo) in patients with nccRCC is ongoing. Cabozantinib, a multi-targeted agent with activity against MET, and IO drugs have each shown favorable effects as monotherapies in pRCC, and a combination therapy is expected to show synergistic effects. This trial enrolled 47 patients with advanced nccRCC who had received either no prior systemic therapy or a single line of non-IO treatment. The patients were divided into Cohort 1 (n = 40; pRCC, tRCC, or uRCC) and Cohort 2 (n = 7; chRCC). The median follow-up period was 13.1 months, and the results were reported at ASCO 2021. In Cohort 1, most patients (n = 26, 65%) were previously untreated, while 14 (35%) patients had one prior treatment with a VEGF-TKI or mTOR-I. In this cohort, the ORR, DCR, mPFS, and mOS were 47.5%, 97.5%, 12.5 months, and 28 months, respectively, suggesting that this combination has promising efficacy and safety profiles in pRCC, tRCC, and uRCC [52] (Table 3). chRCC Four trials involving IO drugs for nccRCC reported subgroup data for chRCC. KEYNOTE-427 cohort B, using pembrolizumab (n = 21) as a first-line treatment, showed an ORR of 9.5% and a DCR of 33.3%. The CheckMate 374 trial with nivolumab (n = 7) reported an ORR of 28.5% and a DCR of 85.7%. A third study, using atezolizumab plus bevacizumab, reported an ORR of 10%. In the recent phase II trial (NCT03635892) of nivolumab and cabozantinib in patients with nccRCC, Cohort 2 included chRCC (n = 7) and showed no responses; the ORR and DCR were 0% and 71.4%, respectively (Table 3).
In general, chRCC is a low-malignancy tumor type with a 5–10% risk of progression and metastasis. A multi-center re-evaluation study by Ohashi et al. suggested a new grading system based only on the presence of sarcomatoid differentiation and necrosis, which are indicators of a limited response to treatment and poor prognosis [74]. In chRCC, mTOR-I and VEGF-TKI yielded responses comparable to those observed in other nccRCC subtypes, whereas IO therapies combined with other agents did not improve outcomes, despite their greater potential in sarcomatoid disease [49,50,52,73]. tRCC One retrospective study (n = 24) of various IO drugs used as a second-line treatment for metastatic tRCC reported ORR and DCR rates of 16.7% and 29.2%, respectively, with a median PFS of 2.5 months. A recent retrospective analysis combining the IMDC and Harvard datasets reported ORR (25.0% with IO and 0% with TKI) and mOS (62.4 months with IO and 10.3 months with TKI) estimates. The authors concluded that IO therapy may be more beneficial than VEGF-targeted therapy in tRCC [75,76]. RMC Three trials are exploring the use of IO in patients with RMC. Most recently, a phase II trial (NCT03274258) enrolled RMC patients to assess the efficacy and safety of treatment with NIVO + IPI. Another phase II trial (NCT02721732) using pembrolizumab for rare tumors (n = 4 of 127) and a phase I study (NCT02496208) using cabozantinib and nivolumab alone or with ipilimumab for metastatic UC and other genitourinary tumors (n = 3 of 54) included some RMC patients. These data may help develop novel treatments, including IO therapies and biologic agents [77][78][79]. Among cytokine therapies, high-dose IL-2 therapy can provoke durable responses, but its treatment-related adverse events have limited its use [5]. Bempegaldesleukin (BEMPEG) is a PEGylated IL-2 and a novel type of IL-2 receptor agonist; it is a stable fusion protein designed to activate and proliferate CD8+ T cells and NK cells. In a phase I study, BEMPEG was well tolerated; combined with nivolumab, it showed a promising ORR (71%) and manageable toxicity in untreated RCC patients. The phase III PIVOT-09 trial is investigating BEMPEG + nivolumab vs sunitinib or cabozantinib (investigator's choice) as a first-line treatment for advanced ccRCC. This trial aims to evaluate the ORR and OS in the IMDC intermediate/poor-risk and ITT populations. The secondary aims are to estimate the PFS in the IMDC intermediate/poor-risk and ITT populations and to evaluate the safety profile, PD-L1 expression as a predictive biomarker, and patients' quality of life [80,81]. In April 2022, it was announced that this study did not meet the prespecified threshold for statistical significance. The data have not been shared; however, a review and publication of the interim findings are expected. Triple Combination IO + IO and IO + TKI combination strategies have shown promising results, and trials aiming to maximize their benefits are ongoing. COSMIC-313 is a phase III study evaluating the efficacy and safety of nivolumab + ipilimumab with or without cabozantinib in previously untreated patients with IMDC intermediate- or poor-risk aRCC. A prolonged PFS (HR, 0.73) with the triplet therapy was presented at ESMO 2022, and the secondary endpoint of OS requires further follow-up [82]. Another phase III study is comparing triple combinations (pembrolizumab + quavonlimab + lenvatinib or pembrolizumab + belzutifan + lenvatinib).
Subsequent Therapy IO-IO Combination The phase II FRACTION-RCC/NCT02996110 trial involves a combination of NIVO + IPI after immunotherapy for patients with advanced RCC. The primary outcomes will be the ORR, DOR, and PFS rates. The secondary outcomes will include adverse events and serious adverse events. This trial will assess novel IO-IO combination therapy in patients with disease that was refractory to previous-line treatments [69]. IO-TKI Combination IO + IO combo and IO + TKI combination may be feasible subsequent-line therapies post-frontline immunotherapy. The CONTACT-3 study is a phase III trial of cabozantinib with or without atezolizumab in several advanced RCC histology types, aiming to evaluate PFS and OS rates. The TiNivo-2 study of tivozanib with or without nivolumab aims to evaluate the PFS as the primary outcome, as well as the OS, ORR, and DOR rates and safety profile as the secondary outcomes [83,84]. HIF2α Inhibitor Emerging agents are being designed to inhibit the transcription factor hypoxia-inducible factor (HIF), specifically, the HIF2α subunit. The phase I/II data concerning an oral HIF2α inhibitor, belzutifan, used for patients who experienced disease progression on IO/TKI therapy, has shown an ORR of 24% and a disease control rate of 80% across all risk groups [85]. A phase III trial of belzutifan monotherapy vs everolimus in previously treated patients is ongoing [86]. Synergistic effects have been observed in treatments with a combination of modalities; therefore, HIF2α inhibitors are being studied in combination to determine their efficacy and safety. A phase II study is investigating the combination of belzutifan with cabozantinib in patients who experienced disease progression after first-and second-line therapies. A separate phase III study is examining the efficacy of the combination of belzutifan with lenvatinib vs cabozantinib in patients with disease progression after first-line immunotherapy. The results from HIF2α inhibitor trials may add novel treatment options for patients with disease progression after immunotherapy and/or target therapy [87,88]. A three-arm phase III study aims to evaluate the efficacy and safety of pembrolizumab + belzutifan + lenvatinib or a coformulation of pembrolizumab and quavonlimab (CTLA4 inhibitor) + lenvatinib versus pembrolizumab + lenvatinib for patients with advanced ccRCC [89]. GAS6-AXL Pathway Inhibitor AXL is a member of the TAM family together with the high-affinity ligand growth arrest-specific protein 6 (GAS6). The GAS6/AXL signaling pathway is associated with tumor cell growth, metastasis, invasion, angiogenesis, drug resistance, and immune regulation. In ccRCC, the constitutive expression of hypoxia-induced factor 1α leads to an increased expression of AXL. AXL overexpression has been associated with the development of resistance to VEGF inhibitors and the suppression of the innate immune response. Batiraxcept, a GAS6-AXL pathway inhibitor, had been tested in a phase Ib/II trial in combination with SOC drugs such as cabozantinib and nivolumab in patients with advanced ccRCC (NCT4300140) [84]. In this trial, early data have suggested that batiraxcept added to cabozantinib has no dose-limiting toxicities while showing some evidence of favorable clinical activity. The phase II portion of this study is currently open to recruitment [90]. nccRCC 4.2.1. 
IO Doublet CheckMate 920 is a multi-arm, phase IIIb/IV clinical trial of nivolumab plus ipilimumab in patients with previously untreated advanced RCC and clinical features that were mostly excluded from the CheckMate 214 study (i.e., non-clear cell RCC, brain metastases, and poor performance status). The primary endpoint was the incidence of grade ≥3 immune-mediated adverse events. The key secondary endpoints included PFS and the ORR, DOR, and TTF rates; the exploratory endpoints included OS [91]. IO-TKI Combination (Triple Combination) Several phase II trials are ongoing to elucidate the role of IO + TKIs in nccRCC. Enrollment has been completed for a highly anticipated phase II trial of triplet therapy with cabozantinib, nivolumab, and ipilimumab in nccRCC [92]. Discussion Historically, RCC has been regarded as an immunogenic cancer, supported by various anecdotal clinical reports, and has shown a modest degree of susceptibility to cytokine therapy. The VHL gene, identified as the causative gene in VHL disease, is a hallmark gene in sporadic ccRCC as well; thus, the VEGF cascade has become a target for molecular therapies, and the efficacy of several targeted drugs has been shown in key clinical trials. In addition, mTOR is another validated target, and the mTOR inhibitors everolimus and temsirolimus have been introduced for the treatment of advanced and/or metastatic RCC. As with various other cancers, the molecular understanding of carcinogenesis and tumor development has led genetic alterations in cancer cells to be subcategorized as 'driver' and 'passenger' mutations and has given rise to the term 'dirty cancer' for cancers carrying a large number of genetic changes; such cancers are generally considered inappropriate candidates for targeted therapy. For ccRCC, the new rationale of immune checkpoint blockade, targeting PD-1, PD-L1, and CTLA-4, has been tested in clinical trials, and drastic changes have been made to the treatment strategies for many types of cancer. Vigorous efforts to identify the specific key characteristics that determine favorable responses have not yet yielded robust evidence beyond PD-L1 expression. Nevertheless, the theoretical assumption that a higher mutation burden provides more neoantigens, which might be responsible for tumor rejection, transforms a 'dirty cancer' into a 'favorable cancer' for new immunotherapies. From a pathological and molecular pathological point of view, molecular profiles do not currently affect the care of patients with RCC of any histological subtype; however, emerging data from clinical trials of immuno-oncology agents combined with or compared to VEGF inhibitors suggest that distinct gene expression signatures, reflecting a prominence of angiogenesis or of immune infiltration, correlate with the presence of sarcomatoid differentiation and with response to therapy, and could support personalized therapy choices in the future [17,30,93,94]. However, nccRCC is a very heterogeneous disease that can be further subdivided into more than 40 histological subtypes with distinct clinical, histomorphological, immunohistochemical, and molecular features; these entities are listed as emerging in the new 2022 World Health Organization (WHO) classification [64]. It should be noted that the tumor microenvironment-related gene expression signatures observed in nccRCCs may arise from molecular pathways different from those in ccRCC and may involve other, hidden gene expression signatures, resulting in different therapeutic effects.
The correlations between genomic alterations (such as the tumor mutation burden, neoantigen load, and chromosomal copy number alterations), the tumor microenvironment's characteristics, and clinical response for target and IO therapies are under investigation [94,95]. Given the limited data on nccRCCs with a small number of cases in the previous studies, comprehensive clinicopathological and molecular genetic investigations with a large cohort for nccRCC patients at risk of metastasis are desirable for each specific nccRCC subtype in order to choose and develop more effective treatment strategies. Another major challenge in nccRCC is diagnosis. Due to the rarity of nccRCC, differential diagnosis remains difficult in some cases [96][97][98][99]. Notably, in a re-appraisal series of 33 cases diagnosed originally as so-called 'unclassified' RCC in patients aged 35 years or younger, 22 of 33 (66%) were reclassified as eosinophilic-solid and -cystic RCCs, FH-deficient RCCs, and succinate dehydrogenase-deficient RCCs [100]. Importantly, in clinical practice, the differential diagnosis between RCCs from UC, especially for FH-deficient RCCs, and highgrade distal nephron-related adenocarcinomas with overlapping morphology, including CDC and RMC with poorer prognoses, is sometimes extremely difficult but essential, nonetheless. In one large multi-institutional cohort study, 25% of cases initially diagnosed as potential CDCs were reclassified as FH-deficient RCC by immunostaining for FH and 2-succinocysteine [93]. A similar rate of reclassification from CDCs to FH-deficient RCCs or SMARCB1-deficient RMC occurred in a recent comprehensive genomic-profiling study detecting FH and SMARCB1 mutations [101]. Combination therapy with conventional anticancer drugs remains the standard of care. Comprehensive pathological investigations are needed to choose appropriate treatments. Conclusions Regarding ccRCC, we have so many treatment options. One of the important questions is which of these options are appropriate: to combine these agents or to sequence them to maximize the outcome. We need good biomarkers for IO and/or target therapy to resolve certain issues. HIF inhibition is a novel, promising treatment target; several trials involving HIF2α inhibitors are ongoing. nccRCC are composed of various genetically and histologically different cancers. However, most of the active advancing prospective trials for patients with nccRCC emulate the developed regimens for ccRCC. Insights from molecular biology have helped elucidate oncogenic mechanisms, which fall into several subsets, based on biological characteristics. The treatments based on cancer histology and biology require further evidence regarding said characteristics.
Going Granular: Equity of Health Financing at the District and Facility Level in India Abstract Health financing equity analysis rarely goes below the state level in India. This paper assesses the equity and effectiveness of public spending on health in the state of Odisha. Using district-level public spending data for the first time, it sheds light on the incidence of public spending by geography and by type of services. There are three key findings. First, it identifies the weak link between district spending and district need, proxied by poverty rates or lagging sectoral outcomes, highlighting the potential for a more needs-based approach to public resource allocation. Second, the results indicate that at the household level health spending by the state is not pro-poor, especially in public hospitals, underscoring the need to improve access to care for the bottom 40% at these facilities. Third, an exhaustive analysis of micro-level treasury data brings into focus the importance of reforming public finance data systems to support evidence-based policy at the sub-state level. Significant district-wise variation in key health financing and equity indicators, combined with growing policy interest in the district level, underscore the utility of further empirical work in this area. Introduction There has been a rapid increase in health equity research in recent years. This trend can be attributed to several factors, including advances in methods, the popularization of tools, the development of rich databases for country comparison, and an elevation of the profile of health equity metrics in the context of international goals. [1][2][3][4] Taken together, these factors are helping to make the case that equity considerations should figure prominently in national and international policy initiatives aimed at promoting the achievement of Universal Health Coverage (UHC). Health equity matters in India. There is a wealth of data showing significant inequalities in all aspects of health, including risk factors, access to care and outcomes. This is true of socioeconomic inequality but also for gender, caste, religion, and other groups. 5 At the same time, India matters hugely for the global pursuit of equitable UHC. It ranks 79th out of 111 countries in a comprehensive cross-national analysis of UHC achievement. 6 Nearly one-quarter of all households suffering catastrophic health expenditures worldwide are Indian. 7 In brief, India is probably the single most important country for the attainment of Sustainable Development Goal 3.8 which calls for the achievement of UHC globally. Health equity is best analyzed at the sub-national level in India. The average population of an Indian state, at nearly 50 million, is larger than the global average for countries. The average population of an Indian district, about 1.9 million, is larger than almost 50 countries. The National Family Health Survey's most recently released round was representative at the district level for the first time. The significance of district health systems is reinforced by the fact that 96% of outpatient care and 85% of inpatient care happens in the patient's home district. But with few exceptions, health equity analysis seen through a financing lens rarely goes below the state level in India, largely due to the opaque structure of government budgets. 8 This paper aims to address this gap. 
It highlights the challenges and possibilities of health financing equity analysis at the district level in India by presenting the methods and results of a benefit incidence analysis undertaken in the eastern state of Odisha. The case for focusing on the district level is also supported by India's federal policy environment. Health is a state subject under the constitution. In recent years there has been greater devolution of general-purpose funds from the center to the states and greater discretionary spending powers in the hands of state governments, presenting an opportunity for state policymakers to recast policies and programs to address local needs. The need for policy localization also reflects a broader shift whereby the district has emerged as a key unit of policy focus. Led by NITI Aayog, the Government of India's premier policy think tank, the Transformation of Aspirational Districts Program, launched in 2018, aims to expedite socioeconomic progress in approximately 115 priority districts nationwide. The goal is to support these lagging areas in implementing core schemes across six priority sectors, through coordinated action that unites the districts, states, and the center in a common purpose. Odisha has 10 aspirational districts, the third most of any state. Background on Odisha Odisha has a population of about 42 million as of the 2011 Census and a per capita gross state domestic product of about 1,450 USD. The urban share of the population is 17%, compared to 31% nationally. Between 2005 and 2012 (the latest available figure), absolute poverty fell from 59% to 33%, the fastest decline among all Indian states. The decline was sharp in both rural and urban areas. 9 There was also significant progress in non-monetary indicators of well-being, such as access to basic services including electricity, water, and sanitation. 10 But progress has been uneven across people and places. A poverty rate of 63% among the Scheduled Tribes (who account for one-fifth of the state population) is the highest in the country. Poverty is also concentrated in the south and west (Figure 1). Districts in Odisha are highly unequal. Using monthly per capita consumption expenditure, we find that 85% of the inequality in the state is explained by within-district inequality. The Gini coefficient varies from 0.14 in Malkangiri to 0.40 in Sundargarh, with a state-wide average of 0.33. 11 There are also stark spatial differences within the state in health outcomes. A striking feature in Odisha is the high reliance on publicly provided health services, even among the better off. The reliance on government health facilities is much higher than in the rest of India, particularly for outpatient care. The public sector accounts for 81% of all inpatient visits (compared to 45% in India) and 72% of all outpatient visits (25% in India). In this context, understanding who benefits from public spending in health, and by how much, is important. Against a background of wide spatial inequalities, high reliance on public services, and an increasingly local policy landscape, it is important to understand the link between public expenditures and sectoral outcomes, including equity, at the district level. While district-level data on key outcomes are typically available for many indicators and the associated spatial patterns are familiar to most state-level policy makers, there is far less knowledge of how sectoral expenditures vary at the district level.
Little is known about how health spending varies from one district to another, or how effective public spending is in reaching the poor. These are important policy questions that merit closer study through benefit incidence analysis (BIA). Methods This paper applies benefit incidence analysis to determine whether public spending in health is progressive (pro-poor) or regressive (pro-rich) along the welfare distribution of households. 12 The objective is to assess the performance of the state's public health system in delivering services to its citizens, especially the poor and vulnerable. The data collection required to conduct BIA can also serve to shed light on other aspects of health system performance, as described below. The first step in standard BIA is to assign a value to public services. For this, we use the cost at which health services are delivered. The unit cost of providing a service is calculated as total government spending on a service divided by the number of users of that service (for example, total outpatient hospital spending per outpatient visit). The number of users can be estimated from either administrative sources or household surveys. The potential advantage of administrative data is that it is intended as a census of all users, whereas household surveys may be constrained by small sample sizes and low frequency. The use of household surveys is more common in developing countries, as administrative data typically suffer from poor data quality and access issues. In Odisha, both administrative (Health Management Information System, HMIS) and household survey (National Sample Survey, NSS) data are available to identify users of public services. The administrative sources point to nearly 50% more inpatient cases at public health care facilities than in the NSS. We compiled three years of administrative data on patient visits, namely 2014-15, 2015-16, and 2016-17, and found that the numbers were quite stable over time and across districts. On balance, the administrative data appear more reliable than the NSS data for calculating unit costs at granular levels, in part because, even after pooling the state and central samples, the NSS sample sizes are not always large enough in some districts to calculate unit costs by facility and type of service. The two approaches yield very different unit costs in absolute terms, but the distributional impacts using the two approaches are similar. The results using administrative data for estimating unit costs are presented here. The next key step is to combine unit costs of public services or subsidies with users of these services. Information on the latter is typically obtained from household surveys that, in addition to data on utilization of public services by households, also provide information on out-of-pocket spending and income or consumption status. Users are then aggregated by income or consumption levels to compare how subsidies are distributed along the welfare distribution. In Odisha, 96% of outpatient care and 81% of inpatient care take place in the patient's home district, and therefore it can reasonably be assumed that service users are from the same district where the facility is located. The main exception is admission to medical college hospitals, of which there were only three across Odisha in the relevant year, and therefore these costs are excluded when calculating district-wise unit costs. The last step is to net out the user fees that households pay to access public services.
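To make the first two steps concrete, the following is a minimal sketch in Python (pandas) of the logic just described: unit costs derived from administrative spending and visit counts, and subsidies imputed to surveyed households and aggregated by consumption quintile. All table names, column names, and figures are hypothetical and illustrate the calculation only.

```python
import pandas as pd

# Hypothetical inputs: district-level spending and visit counts by facility and
# service type (administrative data), and household-level utilization with a
# consumption quintile and survey weight (pooled NSS-type survey).
spend = pd.DataFrame({
    "district": ["A", "A", "B", "B"],
    "facility": ["phc_chc", "hospital", "phc_chc", "hospital"],
    "service": ["inpatient", "inpatient", "inpatient", "inpatient"],
    "spending_inr": [26_000_000, 64_000_000, 13_000_000, 48_000_000],
    "visits_admin": [10_000, 40_000, 5_000, 30_000],
})

# Step 1: unit cost = total spending on a service / number of users of that service.
spend["unit_cost_inr"] = spend["spending_inr"] / spend["visits_admin"]

hh = pd.DataFrame({
    "district": ["A", "A", "B"],
    "facility": ["phc_chc", "hospital", "hospital"],
    "service": ["inpatient", "inpatient", "inpatient"],
    "quintile": [1, 5, 2],
    "visits": [2, 1, 1],
    "weight": [1200.0, 900.0, 1500.0],   # survey expansion weights
})

# Step 2: assign each surveyed visit the unit cost of the matching
# district/facility/service cell, i.e. the imputed public subsidy.
merged = hh.merge(spend[["district", "facility", "service", "unit_cost_inr"]],
                  on=["district", "facility", "service"], how="left")
merged["subsidy_inr"] = merged["visits"] * merged["unit_cost_inr"] * merged["weight"]

# Distribution of subsidies along the welfare distribution.
share_by_quintile = (merged.groupby("quintile")["subsidy_inr"].sum()
                     / merged["subsidy_inr"].sum())
print(share_by_quintile)
```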
In some contexts, households contribute substantially to service provision despite large government subsidies, and this contribution often varies by income group. 13 However, in Odisha and in India more broadly, the cost recovery by government to cover service delivery costs tends to be low. For example, as presented below, about 85% of out-of-pocket spending for inpatient care is for medicines and tests not provided by the facility. We make three key contributions to the methodology on BIA in India. First, we use district-level public spending and utilization data for the first time. By doing so, we reveal the geography of incidence within the state that is often masked by regional aggregation. Previous inequality and BIA studies in India have a national or state focus. 14-17 Those with a state-level focus assume the same unit cost for service delivery across the entire state. Such an approach overlooks the heterogeneity within states: between the state capital and other districts, and between villages and cities. Second, we calculate unit costs and incidence by type of service. Specifically, we look at incidence by both type of care (inpatient or outpatient) and facility (primary health care/community health care center or public hospital). This is important to account for heterogeneity in the consumption of health care by socioeconomic status. The third contribution is to document the complex process of compiling fund flows within the state. This is useful both to identify ways in which the state government can strengthen how public spending data are compiled and disseminated to support evidence-based policymaking, and to create a roadmap for other researchers wishing to conduct similar analysis in other states. It should be noted that benefit incidence analysis has a number of limitations. First, the term "benefit" implies a certain level of quality of care during service delivery such that the patient's condition improves, which might not be the case. It is sometimes called "expenditure incidence" as a result. Second, standard BIA reveals average benefit incidence, based on the entire stock of public spending, whereas a more policy-relevant metric is often marginal benefit incidence, that is, which population groups would benefit from an additional rupee spent at the margin. Third, BIA does not illuminate any model of demand behavior that explains why a particular pattern is observed. 18 Other caveats are noted in the data section. Data BIA requires administrative data on public spending at a granular level and household survey data showing utilization patterns. In India, information on public spending within states is complex and opaque due to fragmented financing streams and nonstandard accounting and Public Finance Management (PFM) structures. Moreover, the concept of a district budget does not exist, making the estimation of local spending an arduous task. A detailed description of how public spending data were compiled at granular levels can be found in a background note. 19 A brief summary is provided here. There are two main sources of district-level public spending data in Odisha: state treasury transactions and Centrally Sponsored Schemes (CSSs). The state treasury maintains an account of all government fund receipts, transfers, and expenditure across all districts through a system of Drawing and Disbursing Officers (DDOs). There are nearly one thousand DDOs mapped to the 168 treasuries in Odisha.
The DDOs operate at different administrative levels and act as the main financial intermediaries within the sub-national public finance structure. In recent years, the Integrated Financial Management System (IFMS) portal has allowed public access to granular spending data. However, data in the IFMS portal in Odisha were found to be incomplete and the documentation limited. A more complete dataset of DDO-level public spending, organized by detailed budget heads and obtained directly from the state Finance Department for the financial year 2015-16, is the source of granular public spending data for this study. Capital spending is not included due to significant data gaps. The year 2015-16 was chosen to align closely with the year of household survey data collection, which was 2014. Public spending data for 2014-15 were missing for some districts. The CSSs are the second source of public spending data. These schemes are the primary vehicle through which the Union government finances and manages social policy spending in India. While the aggregate CSS spending at the state level is available from the treasury, the district-wise breakup is not. These are captured in a separate information system known as the Public Finance Management System (PFMS). The PFMS and the treasury data are not linked. The district-level CSS data from the PFMS were sourced from the line departments and then merged with the treasury data. For health, the National Health Mission (NHM) is the main CSS, accounting for one-third of the total health spending in Odisha. NHM spending is assigned to the primary care level, although we recognize that some is incurred as community-level care for which no service delivery volumes could be obtained. The data collection exercise, which entailed scraping data from the publicly available IFMS portal, accessing detailed budgetary data directly from the Finance Department, and augmenting these with CSS data from the line departments, was lengthy and cumbersome. The budget codes for health spending are organized by location of delivery of health services (rural or urban) and the systems of medicine (allopathy or traditional systems). Mapping this to spending by health facilities or by type of service (inpatient or outpatient) is challenging. An approach that combined information on the DDOs with the object-head level treasury data was adopted to classify spending by health facility. Altogether, about 22,000 treasury entries for health were analyzed, and nearly 86% of the revenue expenditures were classified by health facility. Information on health spending by type of service (inpatient or outpatient care) within a facility type is not available in any budgetary data. We refer instead to a costing study done for government facilities in the state of Punjab to arrive at expenditure ratios by type of care in Odisha. 20 Notably, the above approach was also attempted in Maharashtra and Rajasthan but could not be completed due to data limitations, highlighting that states differ in their PFM systems. Availability of, and access to, granular household data is an additional challenge. The National Sample Surveys (NSS) on Health and Social Consumption are the standard source for household and individual-level data on utilization and out-of-pocket spending on health services in India. The central statistical agency, the Ministry of Statistics and Programme Implementation (MoSPI), conducts sample surveys that provide national and state-level estimates.
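As an illustration of the compilation just described, the sketch below shows one way the DDO-level treasury extract could be combined with district-wise CSS (PFMS) data and classified by facility type. The column names, example values, and keyword rules are all hypothetical; the actual classification combined DDO designations with object-head codes, which is not reproduced here.

```python
import pandas as pd

# Hypothetical extracts; the real treasury and PFMS data are far messier, so this
# only sketches the merge-and-classify logic described in the text.
treasury = pd.DataFrame({
    "district": ["A", "A", "B"],
    "ddo_designation": ["CDMO, District Headquarters Hospital",
                        "Medical Officer, CHC Block X",
                        "Medical Officer, PHC Block Y"],
    "amount_inr": [40_000_000, 9_000_000, 6_000_000],
})
pfms_css = pd.DataFrame({          # district-wise NHM (CSS) spending
    "district": ["A", "B"],
    "amount_inr": [25_000_000, 18_000_000],
})

# Classify treasury entries by facility type from the DDO designation
# (illustrative keyword rules only).
def classify_facility(designation: str) -> str:
    name = designation.lower()
    if "medical college" in name:
        return "medical_college"
    if "headquarters hospital" in name or "sub-divisional hospital" in name:
        return "hospital"
    if "chc" in name or "phc" in name:
        return "phc_chc"
    return "unclassified"

treasury["facility"] = treasury["ddo_designation"].map(classify_facility)

# NHM spending is assigned to the primary care level, as described in the text.
pfms_css = pfms_css.assign(facility="phc_chc")

district_spending = (pd.concat([treasury[["district", "facility", "amount_inr"]], pfms_css])
                       .groupby(["district", "facility"], as_index=False)["amount_inr"].sum())
print(district_spending)
```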
The central sample is usually not large enough to conduct in-depth district-level analysis. The Directorate of Economics and Statistics (DES), the nodal statistical agency of a state, administers the same surveys at the state level, and the data from the two samples are pooled to generate robust district-level estimates. There are often lengthy delays in processing the state sample data. The pooled NSS data from 2014 are the key source of household data in this paper. A more recent NSS round was completed in 2017-18, but only the central sample has been released, and the questionnaire no longer distinguishes between utilization at the PHC/CHC level versus hospitals. Government Health Spending at the District Level Public spending on health was 4.8% of total government spending in Odisha in 2015/16, or 1.1% of state GDP. While low by international standards, this is similar to what the central and other state governments in India spend on health. The analysis revealed that two-thirds of this spending is on outpatient care, mostly at Primary Health Care (PHC) and Community Health Care (CHC) centers. Inpatient care accounts for 19% of the total (Table 1). These shares are broadly similar across most districts, with the main exceptions being the three districts with medical colleges: Cuttack, Ganjam, and Sambalpur. Overall, the spending pattern reveals the relative prioritization of primary care and a comparatively modest share for hospitals, suggesting good allocative efficiency. The average cost of service delivery is not uniform across districts. For inpatient care, the unit cost is about 2,600 INR per visit at a PHC/CHC center compared to 1,600 INR per visit at a public hospital, reflecting lower volumes at the PHC/CHC level. (PHC and CHC spending is combined to align with the NSS data, which do not make a distinction between the two, although their service profiles differ, with CHCs providing more inpatient care.) While unit costs for outpatient care vary widely by district, they do not vary much by facility type. The unit costs for outpatient care at a PHC/CHC center are similar to those at a public hospital (Figure 2). What explains the wide variation in public spending across districts in Odisha? One candidate is the degree of urbanization and its impact on service delivery costs and/or the cost of living. However, using population density as a proxy for urbanization, we find that this relationship is weak. The spatial variation could also be linked to varying district needs, as more public resources may be directed to places with higher poverty levels or worse health outcomes. However, this is not the case. Public spending on health is poorly correlated with poverty and with key health outcomes such as the infant mortality rate (Figure 3). Mirroring the spending pattern, the number of doctors and beds per 1,000 population also varies significantly by district but is not correlated with health needs. These findings suggest weak targeting, as fewer public resources are flowing to the districts with the greatest need. Average out-of-pocket spending on health for those who seek inpatient care at a public facility is about 6,800 INR, or 10% of total annual household consumption expenditure. But this varies widely by quintile. The largest share is spent on medicines, followed by tests (Figure 4). For outpatient care, the average cost per visit is about 600 INR, almost all of which is spent on medicines.
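The targeting question raised above can be checked with a very simple calculation: rank correlations between per capita district health spending and proxies of need such as the poverty rate or the infant mortality rate. The sketch below illustrates this; the district names and values are hypothetical.

```python
import pandas as pd

# Hypothetical district-level table; real values would come from the compiled
# treasury/PFMS data and from survey-based poverty and IMR estimates.
df = pd.DataFrame({
    "district": ["A", "B", "C", "D", "E"],
    "health_spend_pc_inr": [820, 560, 910, 480, 700],   # per capita public health spending
    "poverty_rate": [0.22, 0.41, 0.18, 0.55, 0.30],
    "imr_per_1000": [38, 52, 35, 60, 44],
})

# Spearman rank correlations: values near zero (or negative) indicate that more
# money is not flowing to needier districts, i.e. weak needs-based targeting.
print(df["health_spend_pc_inr"].corr(df["poverty_rate"], method="spearman"))
print(df["health_spend_pc_inr"].corr(df["imr_per_1000"], method="spearman"))
```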
The standard approach in BIA is to net out cost recoveries to government, or user fees, to calculate net benefits. While out-of-pocket spending on health for families using public services is significant, a very small share of it appears to accrue to government, as the bulk is spent on medicines and diagnostics. Doctor fees are less than 10% of total spending. Thus, the difference between gross and net subsidies is likely to be very small. Benefit incidence results using gross subsidies are presented below. Benefits from Public Spending on Health The first-generation approach to calculating subsidies in health in the Indian literature was to apply the state-level unit cost to everyone seeking health care. Under this approach, the incidence results are driven by differences in utilization rates in the household survey. This paper instead uses granular information on public spending by type of health facility (PHC/CHC or public hospital) and service (inpatient or outpatient). We find that unit costs differ significantly across these categories (Tables 2 and 3). A simple delineation of inpatient and outpatient care helps, but still misses important differences across facilities. On average, unit costs are higher at PHC/CHCs than at public hospitals, possibly because patient volumes are much higher at hospitals and therefore greater economies of scale can be realized. When unit costs are combined with household utilization patterns, it is apparent that the distribution of subsidies in Odisha is not very progressive (pro-poor), especially at the hospital level. On a state-wide basis, the results are similar to those attained from the first-generation approach. However, additional insights are gained at a more granular level. Combining data on service and facility type reveals that inpatient care at PHC/CHCs is less regressive than at public hospitals (Figure 5). The bottom 40% receive 48% of the subsidies on inpatient care at PHC/CHCs, in contrast to just 22% at public hospitals. We find similar results for outpatient care, though at public hospitals it is not as regressive as inpatient care (Figure 6). The bottom 40% receive 50% of the subsidies at PHC/CHCs and 33% at public hospitals. Lastly, there is also wide variation in the incidence of public spending at the district level. Government spending at public hospitals is regressive in most districts for both inpatient and outpatient care (Figure 7 and Figure 8). Interestingly, in some districts public spending at government hospitals is more regressive for outpatient care than for inpatient care. However, further analysis indicated that there are few significant correlations between spending levels and inequality of access at the district level. A better understanding of why some districts are more unequal than others could help to identify tailored policy interventions. Discussion Equity is a core objective in the pursuit of universal health coverage, in India and globally. The Odisha benefit incidence results presented here suggest there is ample scope for improvement in this regard: prevailing health financing arrangements could be revisited with a sharper focus on equity objectives so that public spending better reaches the poor. One policy implication relates to the equity of public spending and resource allocation at the district level. Spatially, with fewer public resources flowing to districts with higher poverty rates and worse health outcomes, public spending in health appears to be delinked from population need.
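The distributional summaries reported above (for example, the bottom 40% receiving 22% of inpatient hospital subsidies) reduce to simple share calculations over quintile-level subsidy totals, optionally summarized by a concentration index. The sketch below illustrates both; the quintile totals are hypothetical and the grouped-data concentration index is shown only as one common summary, not necessarily the one used in the paper.

```python
# Hypothetical subsidy totals by consumption quintile (Q1 = poorest) for one
# service/facility combination.
subsidy_by_quintile = {"Q1": 10.0, "Q2": 12.0, "Q3": 18.0, "Q4": 25.0, "Q5": 35.0}

total = sum(subsidy_by_quintile.values())
shares = {q: v / total for q, v in subsidy_by_quintile.items()}

# Share of subsidies reaching the bottom 40% (Q1 + Q2); with equal quintile
# population shares, a value below 0.40 suggests a regressive (pro-rich) pattern.
bottom40 = shares["Q1"] + shares["Q2"]

# A simple concentration index from grouped data: twice the covariance between
# the quintile subsidy values and their fractional ranks, divided by the mean.
ranks = [0.1, 0.3, 0.5, 0.7, 0.9]            # midpoint fractional ranks of the quintiles
vals = [shares[q] for q in ("Q1", "Q2", "Q3", "Q4", "Q5")]
mean_val = sum(vals) / 5
cov = sum((v - mean_val) * (r - 0.5) for v, r in zip(vals, ranks)) / 5
concentration_index = 2 * cov / mean_val     # > 0 indicates a pro-rich distribution

print(f"bottom 40% share = {bottom40:.2f}, concentration index = {concentration_index:.3f}")
```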
The weak correlation between spending and key outcomes at the district level is in part a reflection of historical input- or norm-based budgeting. Health spending is based largely on the number of facilities in a district, and on the number of doctors, nurses, and other staff who (are willing to) work at those facilities. There is minimal use of evidence on population needs (e.g., disease burden) or demand when resources are allocated across districts. A gradual shift toward needs-based resource allocation formulas, which are common in many health systems but not prevalent in India, to inform fund flows to the district level could help mitigate this mismatch. The expansion of still-nascent demand-side financing initiatives, in which money follows the patient, could also help. But the district perspective is only part of the story. There is extensive socioeconomic inequality within districts. A second policy implication is that there is significant room to improve the targeting of public spending to the poor and vulnerable, with a special emphasis on government hospitals, which are disproportionately used by the better off. These results suggest that accessibility is a key factor determining health care use at public hospitals. Barriers to access could include the physical distance to reach district hospitals (including transport costs), social barriers to care-seeking at certain facilities, or prior knowledge of the financial costs of obtaining care (including drugs and lab tests), which are typically higher at hospitals. Although it is currently not participating in the national Pradhan Mantri Jan Arogya Yojana (PM-JAY) program launched in 2018, Odisha has recently expanded its own investments in targeted government-sponsored health insurance schemes covering hospitalization for the poor, including the Biju Swasthya Kalyan Yojana. On a pan-India basis, eligibility under these schemes, even with imperfect targeting, indicates a more pro-poor gradient than the benefit incidence of government subsidies at the hospital level, as shown here. Other measures, such as targeted interventions to address socioeconomic barriers to access, including transport subsidies or better availability of medicines and diagnostics at government hospitals (for which high OOP spending is often incurred), could also help. More broadly, out-of-pocket expenditures for households could be reduced if governments spent more on health. Finally, the paper also makes a case for strengthening public finance data systems, and the state's statistical architecture more broadly. At present, gaining insight into the level and distribution of public spending at the local level and by type of service is extremely cumbersome due to data challenges. Straightforward questions, such as how health spending varies by district and how much is spent on PHCs, are not readily answerable without the kind of exhaustive analysis of hard-to-access data undertaken here. In a complex federal structure and fund flow system, data should be an aid, not a bottleneck, to public policy. Simple steps could be taken to improve public finance data management with the aim of supporting evidence-based policy making. First, a unique geographic code could be institutionalized in the computerized treasury system to allow for easier tracking of fund flows at the local level and to facilitate spatial analysis. Local authorities could also benefit from PFM system data.
Second, the treasury and PFMS systems could be integrated to allow for improved coordination and expenditure monitoring across line departments and with centrally sponsored schemes. Third, citizen access to treasury data should be enhanced. Together, these steps could significantly enrich the possibilities for health financing equity analysis at a granular level. Conclusion India's quest to achieve pro-poor UHC should be supported with robust health financing equity analysis. Given the country's size and diversity, such work is best undertaken at a granular level to generate more nuanced and policy-relevant findings. To our knowledge, this paper presents the first benefit incidence analysis in India that aims to unpack spending patterns at both the district and facility level. Further empirical work in this direction would be valuable in other states, and this study identifies simple PFM data system changes that could make such exercises much easier to implement in the future. The findings highlight the need for ongoing health financing reforms, on both the supply side and the demand side, to make government health spending more pro-poor.
A transient cortical state with sleep-like sensory responses precedes emergence from general anesthesia in humans During awake consciousness, the brain intrinsically maintains a dynamical state in which it can coordinate complex responses to sensory input. How the brain reaches this state spontaneously is not known. General anesthesia provides a unique opportunity to examine how the human brain recovers its functional capabilities after profound unconsciousness. We used intracranial electrocorticography and scalp EEG in humans to track neural dynamics during emergence from propofol general anesthesia. We identify a distinct transient brain state that occurs immediately prior to recovery of behavioral responsiveness. This state is characterized by large, spatially distributed, slow sensory-evoked potentials that resemble the K-complexes that are hallmarks of stage two sleep. However, the ongoing spontaneous dynamics in this transitional state differ from sleep. These results identify an asymmetry in the neurophysiology of induction and emergence, as the emerging brain can enter a state with a sleep-like sensory blockade before regaining responsivity to arousing stimuli. Electrophysiological evidence shows that many anesthetic-induced neurophysiological dynamics undergo relatively symmetric transitions: shifts in spectral power, spatial correlations, phase-amplitude coupling, and spike coherence that are observed during anesthetic induction gradually reverse as drug concentrations are lowered (Breshears et al., 2010;Lee et al., 2010;Purdon et al., 2013;Mukamel et al., 2014;Vizuete et al., 2014). However, it is also clear that the process of emerging from anesthesia is not identical to anesthetic induction. Emergence occurs at lower anesthetic doses than induction (Friedman et al., 2010), and this hysteresis suggests that state-dependent processes also shape the transitions in and out of anesthesia. Behaviorally, some patients experience delirium, a transient state of agitation and confusion which can arise during emergence from anesthesia (O'Brien, 2002), suggesting that distinct neural mechanisms may underlie emergence. EEG and local field potential recordings have suggested that the process of emergence may involve stepping through discrete dynamical states (Lee et al., 2011;Hudson et al., 2014). Electrophysiological studies in rodents show that propofol-induced coherent alpha and delta oscillations, which appear to mediate the functional disruption of thalamus and cortex during anesthesia, recover in a spatiotemporal sequence during emergence that is different from induction (Flores et al., 2017). These observations are consistent with a history-dependent process, in which the current brain state influences the process by which the next brain state is reached. However, a neurophysiological mechanism or network dynamic that is engaged selectively during emergence, rather than induction, is not known. eLife digest General anesthesia is essential to modern medicine. It allows physicians to temporarily keep people in an unconscious state. When infusions of the anesthetic drug stop, patients gradually recover consciousness and awaken, a process called emergence. Previous studies using recordings of electrical activity in the brain have documented spontaneous changes during anesthesia. In addition, the way the brain responds to sounds or other stimulation is altered. How the brain switches between the anesthetized and awake states is not well understood. 
Studying the changes that happen during emergence may help scientists learn how the brain awakens after anesthesia. A key question is whether the changes that occur during emergence are the reverse of what happens when someone is anesthetized, or whether emergence is a completely different process. Knowing this could help clinicians monitor patients under anesthesia, and help scientists understand more about how the brain transitions into the awake state. Now, Lewis et al. show that people go through a sleep-like state right before awakening from anesthesia-induced unconsciousness. In the experiments, recordings were made of the electrical activity in the brains of people emerging from anesthesia. One set of recordings was taken in people with epilepsy, who had electrodes implanted in their brains as part of their treatment. Similar recordings of brain electrical activity during emergence were also made on healthy volunteers using electrodes placed on their scalps. In both groups of people, Lewis et al. documented large changes in electrical activity in the brain's response to sound in the minutes before emergence. These patterns of electrical activity during emergence were similar to those seen in patients during a normal stage of sleep (stage 2). Patients who were about to wake up from general anesthesia had suppressed brain activity in response to sounds, such as their name. Moreover, this sleep-like state happened only during emergence, indicating it is a distinct process from going under anesthesia. The experiments also suggest that the brain may use a common process to wake up after sleep or anesthesia. More studies may help scientists understand this process and how to better care for patients who need anesthesia. In addition to shifts in spontaneous neurophysiological dynamics, sensory processing is also strongly affected by induction and emergence from general anesthesia. Sensory-evoked potentials (event-related potentials, ERPs) index specific phases of cognitive information processing and can provide diagnostic measures of unconscious patients (King et al., 2013). Several studies of ERPs during anesthesia have shown that disruption of higher-level cognitive processing is reflected by a reduction in amplitude of the mismatch negativity (MMN), a potential evoked by unexpected sensory input. The MMN declines in amplitude during induction of anesthesia (Simpson et al., 2002; Heinke et al., 2004), whereas lower-level responses such as the auditory steady-state response persist during sedation and are abolished at deep anesthetic levels (Plourde and Picton, 1990). Cortical responses to direct stimulation using TMS are more spatially constrained and less complex during propofol-induced unconsciousness (Sarasso et al., 2015), consistent with fragmentation of large-scale brain network activity during propofol anesthesia (Lewis et al., 2012). The propagation of sensory information through thalamocortical circuits is thus differentially affected at increasing doses of anesthesia, with higher-level, longer-latency responses extinguished at low drug levels and then further suppression of short-latency evoked activity at high drug levels. At the deepest levels of anesthesia, when brain activity enters a state of 'burst suppression' alternating between periods of isoelectric silence (suppressions) and periods of high-amplitude activity (bursts), sensory stimuli can trigger the onset of a burst (Hartikainen et al., 1995; Kroeger and Amzica, 2007).
It is therefore clear that external sensory input can still influence cortical activity during profound anesthesia. However, evoked responses during burst suppression are qualitatively different from those observed during normal sensory processing, as they typically manifest as a large-amplitude burst containing the spectral dynamics of the pre-bursting state (Lewis et al., 2013), rather than the distinct ERP waveform with classical components related to specific phases of cognitive information processing seen in the waking state. Sensory input during burst suppression thus appears to drive nonspecific cortical activity rather than effective processing of sensory information. The neural dynamics supporting the brain's ability to spontaneously recover wakeful consciousness, regain sensory perception, and resume complex cortical responses following the profound disruption caused by general anesthesia are not well understood. Late components of the ERP continue to be disrupted even after patients have recovered consciousness and early components have returned to baseline (Plourde and Picton, 1991; Koelsch et al., 2006), suggesting that emergence represents a graded and prolonged return to the normal awake state rather than a simple reversal of anesthesia induction. It is still unclear what ongoing brain dynamics contribute to altered sensory processing during emergence from anesthesia. Here, we use two independent datasets (intracranial recordings from patients emerging from anesthesia after surgery, and high-density EEG recordings from a study of emergence in healthy volunteers under controlled laboratory conditions) to provide a multiscale analysis of neural dynamics during emergence from anesthesia. By defining the trajectory of changes in ongoing neural dynamics and sensory evoked responses during the process of emergence, we identify a new transitional brain state that occurs just before emergence from anesthesia. This state is marked by stimulus-evoked cortical down states that resemble the K-complexes that are hallmarks of stage two non-rapid eye movement (N2) sleep. However, its spontaneous dynamics qualitatively differ from sleep. We show that this state occurs primarily in the minutes prior to awakening, identifying a novel transitional brain state that is selective to anesthetic emergence. Results We analyzed intracranial recordings from 12 patients (13 sessions) with intractable epilepsy during emergence from propofol general anesthesia. Subjects were implanted with subdural electrocorticography (ECoG) and/or penetrating depth electrodes (1095 total electrodes). Emergence recordings took place immediately after completion of clinically indicated surgery to implant intracranial electrodes. Recordings began during maintenance of anesthesia through the clinical infusion of propofol (Figure 1a), and continued after the infusion was stopped as the patient emerged from anesthesia and regained consciousness. In 8 of these subjects, recordings were also obtained during a gradual anesthetic induction when patients returned for a second surgery 1-3 weeks later. We presented auditory stimuli every ~3-6 s throughout the emergence period, allowing us to assess cortical evoked responses throughout the transition.
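Although the exact preprocessing pipeline is not described here, the stimulus-locked averaging that underlies the ERP analyses in the following sections can be sketched as follows; the sampling rate, array sizes, and stimulus times are placeholders rather than the study's actual data.

```python
import numpy as np

# Hypothetical inputs: a single-channel recording (microvolts) sampled at fs,
# and stimulus onset times in seconds.
fs = 1000                                   # sampling rate (Hz)
ecog = np.random.randn(60 * fs)             # 60 s of synthetic data
stim_onsets_s = np.arange(2.0, 55.0, 4.0)   # stimuli roughly every 4 s

pre_s, post_s = 0.5, 2.0                    # epoch window around each stimulus
pre, post = int(pre_s * fs), int(post_s * fs)

epochs = []
for onset in stim_onsets_s:
    i = int(onset * fs)
    if i - pre >= 0 and i + post <= ecog.size:
        seg = ecog[i - pre:i + post]
        epochs.append(seg - seg[:pre].mean())   # baseline-correct to the pre-stimulus mean

epochs = np.asarray(epochs)                 # trials x samples
erp = epochs.mean(axis=0)                   # stimulus-locked average (the ERP)
t = np.arange(-pre, post) / fs              # time axis (s) relative to stimulus onset
print(erp.shape, t.shape)
```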
[Figure 1 legend, continued: (D) Median of events within 2 s of stimulus onset across all subjects, including all channels with at least five event trials (n = 190 channels); shaded region shows quartiles; the sign has been flipped to be negative across all channels. (E) Mean waveform in the channel with the most events for an example subject, aligned to the event peak (n = 38 events); shaded region is standard error. (F) Mean waveform aligned to peak across all subjects and mean gamma power during the event (n = 13 sessions, 13 channels, 1339 events); shaded region is standard error.]
Auditory stimuli can induce large-amplitude evoked potentials during emergence During emergence from general anesthesia, we observed that in a subset of trials, auditory stimuli elicited a large (>100 µV) and slow (duration >1 s) evoked potential (Figure 1b,c) across many electrodes. We developed an automatic detection algorithm to identify these events, which we termed large potentials (LPs). LPs were defined as events of >400 µV amplitude lasting >400 ms (see Materials and methods for additional details). We chose these thresholds to conservatively detect only large events while ignoring small or ambiguous LP-like events. 16% of electrodes (n = 1095 electrodes) exhibited at least five events using this detector. This number included electrodes from every patient, as at least two electrodes with ≥5 LPs were detected in each emergence session. To characterize the relationship between these events and the auditory stimulation, we analyzed all trials on which an LP occurred within two seconds of stimulus onset. The mean stimulus-triggered event on each electrode (Figure 1d) had a median peak amplitude of 236 µV (quartile range (QR): 183-295), a value lower than the detection threshold due to averaging together events with slightly different peak times. The peak of the mean stimulus-triggered event occurred 1.01 s (QR: 0.7-1.38) after stimulus onset and lasted 0.28 s (full-width at half max; QR: 0.05-0.47), a waveform that was far slower and larger in amplitude than typical auditory-evoked responses in the awake state. The LPs thus rank among the largest electrophysiological signals observed in human cortex, indicating synchronization of electrical signaling among a substantial fraction of the local neuronal population. The average stimulus-aligned waveform across patients can be temporally blurred due to differences in timing across subjects and electrode locations. To more precisely assess the amplitude and waveform of these events, we selected the electrode with the most events in each subject and analyzed the mean waveform of all detected events aligned to their peak. The peak-aligned events on these electrodes were larger (median amplitude = 550 µV) and had an asymmetric morphology (Figure 1c,d,e), with a sharper onset than offset (mean rise = 165 ms, mean fall = 285 ms, 95% confidence interval (CI) for the difference = [84, 156] ms, bootstrap; p=0.0002, Wilcoxon signed-rank test) and a large post-peak rebound. Aligning to stimulus onset thus confirmed that these events were auditory-evoked, whereas analyses aligned to the peak demonstrated that the waveform of the events was large and asymmetric, with substantial variability in the exact time-to-peak. The large, slow, and asymmetric waveform of the LPs resembles K-complexes (KCs), a characteristic electrophysiological graphoelement that occurs spontaneously or following sensory stimulation during stage two non-rapid eye movement (NREM) sleep (Loomis et al., 1938; Colrain, 2005; Halász, 2016). The KC corresponds to a cortical DOWN state (Cash et al., 2009), in which local neuronal firing is suppressed.
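A minimal sketch of an amplitude- and duration-threshold detector of the kind described above (a deflection exceeding 400 µV for at least 400 ms) is shown below. The published Materials and methods may include additional criteria; the synthetic example data are purely illustrative.

```python
import numpy as np

def detect_large_potentials(x, fs, amp_uv=400.0, min_dur_s=0.4):
    """Return (start, end) sample indices of candidate large potentials (LPs):
    contiguous stretches where |x| exceeds amp_uv for at least min_dur_s.
    x is a single-channel trace in microvolts sampled at fs Hz."""
    above = np.abs(x) > amp_uv
    edges = np.diff(above.astype(int))          # rising/falling edges of the mask
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    min_len = int(min_dur_s * fs)
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]

# Example with a synthetic trace containing one slow ~600 µV deflection
# (above threshold for roughly 0.5 s, so it should be detected).
fs = 1000
trace = 20 * np.random.randn(10 * fs)
trace[3000:4200] -= 600 * np.hanning(1200)
print(detect_large_potentials(trace, fs))
```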
To test whether LPs mark a similar cortical dynamic, we analyzed high-frequency power in the LFP, which is correlated with local spike rates (Ray and Maunsell, 2011), during all detected LPs. We selected the electrode with the most LPs in each subject and computed the peak-triggered power, and found that LPs correlated with a strong reduction in broadband gamma-range (40-100 Hz) power (−1.29 dB, CI = [−2.5, −0.4], bootstrap; p=0.04, Wilcoxon signed-rank test, Figure 1f,g), suggesting that they too represent a DOWN state with suppression of neural activity. This peak-locked analysis included both stimulus-evoked and spontaneous events. A substantial proportion (28%) of detected LPs appeared to occur spontaneously, as they were not preceded by an experimental stimulus within 2 s, although other auditory input present in the clinical environment may have contributed to their generation. When the spectral analysis was instead performed relative to the onset of the auditory stimulus, including only trials where LPs appeared within 2 s of a stimulus, we found that this decrease in high-frequency power reached a minimum at 1.3 s post-stimulus, suggesting that the auditory-evoked potentials were also associated with prolonged suppression of neuronal activity. This slow timecourse is also similar to the timing of auditory-evoked KCs during sleep (Colrain et al., 1999). Large evoked potentials involve a spatially distributed frontotemporal network Intracranial recordings provide precise, millimeter-scale spatial resolution, enabling mapping of the cortical sources of LPs. We measured the amplitude of the mean stimulus-evoked response across all electrodes, on trials that evoked an LP in at least one electrode. We aligned mean responses to stimulus onset, to allow consistent comparisons across channels that could exhibit different peak times. Most subjects exhibited LPs on multiple electrodes, with the amplitude of the evoked potential varying widely across regions (Figure 2a). However, many electrodes exhibited no sign of an LP despite showing ongoing local electrophysiological activity, indicating that these were not global cortical events.
[Figure 3 legend: (A) Evoked potentials in a patient with a long emergence recording, showing that LPs appear as the propofol concentration declines and then subside shortly after the patient's first spontaneous movement (seen at ~2200-2900 s); z-scored ERPs averaged in a sliding window of 60 s every 15 s; gray shading covers windows with insufficient (<8) events for averaging. (B) Normalized evoked potentials in a patient with both an emergence and an induction recording; the pattern is asymmetric, with stimulus-locked LPs occurring only during emergence; the z-score shows the mean ERP normalized to the prestimulus baseline in each time window; this patient was under light anesthesia at the end of surgery, and LPs appeared even before the propofol infusion was turned off. (C) Amplitude of the peak ERP across all subjects, locked to ROC (movement onset) and normalized to the pre-stimulus baseline; evoked potential amplitude across all subjects peaks in the 400 s prior to ROC and then returns to baseline after ROC, indicating that LPs mostly occur in the minutes preceding ROC; as a control, the peak pre-stimulus baseline z-score across subjects is plotted in black, with gray shading indicating its mean value ±3 standard deviations over time. (D) Boxplot of the absolute value of the mean ERP amplitude at 0.5-1 s post-stimulus in the eight subjects with both induction and emergence recordings; ERPs are small at baseline, during sedation, and post-LOC, and are largest in the bin after propofol is turned off and before ROC. (E) Mean spectra across patients within the same 3 min time bins; red bars indicate frequency bands with a significant difference (p<0.05, bootstrap); the post-ROC state has greater low-frequency (<2 Hz), alpha/beta (~10-24 Hz), and gamma (~30-50 Hz) power than the awake pre-anesthesia baseline (n = 7 subjects). (F) Same, demonstrating a broadband increase in power above 10 Hz in the emerging state relative to immediately after LOC during induction (n = 8 subjects); red bars indicate significant differences at p<0.05 (bootstrap). Source data 1: mean amplitudes of the ERP for each intracranial subject across conditions.]
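The peak-triggered gamma-power analysis described at the start of this section can be approximated as follows; the filter settings, window lengths, and synthetic data are assumptions for illustration rather than the authors' exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_power_db(x, fs, band=(40.0, 100.0)):
    """Band-limited (40-100 Hz) power envelope of a single-channel trace, in dB."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    analytic = hilbert(filtfilt(b, a, x))
    return 10 * np.log10(np.abs(analytic) ** 2 + 1e-12)

def peak_triggered_change(x, fs, peak_idx, win_s=0.5, base_s=(1.5, 0.5)):
    """Mean change in gamma power (dB) around event peaks, relative to a
    pre-event baseline window (base_s[0] to base_s[1] seconds before the peak)."""
    p = gamma_power_db(x, fs)
    deltas = []
    for i in peak_idx:
        b0, b1 = i - int(base_s[0] * fs), i - int(base_s[1] * fs)
        w0, w1 = i - int(win_s * fs), i + int(win_s * fs)
        if b0 >= 0 and w1 <= x.size:
            deltas.append(p[w0:w1].mean() - p[b0:b1].mean())
    return float(np.mean(deltas)) if deltas else np.nan

# Example on synthetic data: broadband noise with activity suppressed around "peaks".
fs = 1000
x = np.random.randn(30 * fs)
peaks = [5000, 12000, 21000]
for i in peaks:
    x[i - 400:i + 400] *= 0.3                   # suppress activity around each event
print(peak_triggered_change(x, fs, peaks))      # expected to be clearly negative
```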
The percentage of grid and strip electrodes with at least five detected LPs was highest in frontal and temporal cortex (39% of frontal electrodes, 36% of temporal electrodes, Figure 2b), which had significantly higher proportions than the mean rate (26%, CI = [24, 29], p<0.05, Bonferroni-corrected binomial test). Fewer parietal electrodes exhibited detectable LPs (11%). We also found that LPs were recorded on 35 of the 129 depth electrodes placed in gray matter (27%), including on deep contacts placed in the hippocampus. The peak timing and morphology of the evoked potential varied across space within individuals (Figure 2c). Overall, our intracranial recordings suggest that LPs were restricted to a specific frontotemporal network of cortical regions rather than reflecting a globally coherent slow wave. Sensory-evoked LPs occur during a time-limited transitional state To determine the timecourse of the stimulus-evoked LPs, we computed sliding-window measures of the mean evoked response over time, including all trials, on the electrode in each subject that exhibited the most LPs. The LPs were primarily observed after propofol was turned off but before the patient exhibited signs of recovery (Figure 3a,b). This effect was seen in the mean amplitude of the ERP over time: the normalized ERP amplitude increased across subjects in the ~300 s prior to the first behavioral sign of emergence, and subsided again thereafter (Figure 3c). In the eight patients who had recordings in both induction and emergence, we analyzed the mean ERP amplitude relative to behavioral state changes and found that the LPs occurred predominantly during emergence, particularly in the pre-return of consciousness (pre-ROC) period, and not during induction (Figure 3d). Since our induction used a gradual infusion (Figure 1a), patients were guaranteed to pass through a plasma concentration level during induction that matched their level at emergence, demonstrating that this transient state was selective to the process of emergence rather than only to a particular dosage level. To test what ongoing dynamics accompanied this transitional LP state, we analyzed spectral content within each epoch. We found that the dynamics during emergence were substantially different from induction, exhibiting significantly greater low-frequency (<2 Hz) and alpha power even after awakening (Figure 3e). Comparing the three minutes immediately after behaviorally defined loss of consciousness (LOC) during induction and the three minutes immediately prior to return of consciousness (ROC), a smaller but otherwise similar power difference was evident (Figure 3f). Evoked LPs detected in scalp EEG reveal asymmetric induction and emergence dynamics While the intracranial recordings suggested asymmetry between induction and emergence, due to time constraints in the operating room we were not able to measure intracranially over prolonged periods.
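A minimal sketch of the sliding-window ERP measure referred to above (60 s windows stepped every 15 s, z-scored against the pre-stimulus baseline, requiring at least 8 events per window, as in the Figure 3 legend) is given below, with hypothetical inputs.

```python
import numpy as np

def sliding_erp_amplitude(epochs, stim_times, pre, win_s=60.0, step_s=15.0):
    """epochs: trials x samples array with `pre` baseline samples before stimulus onset;
    stim_times: onset time (s) of each trial. For each 60 s window (stepped every 15 s),
    trials in the window are averaged and the post-stimulus ERP is z-scored against
    that window's pre-stimulus baseline; the peak |z| is returned per window."""
    t0, t1 = stim_times.min(), stim_times.max()
    centers, scores = [], []
    for start in np.arange(t0, t1 - win_s, step_s):
        sel = (stim_times >= start) & (stim_times < start + win_s)
        if sel.sum() < 8:                       # require enough events per window
            continue
        erp = epochs[sel].mean(axis=0)
        base_mu, base_sd = erp[:pre].mean(), erp[:pre].std()
        z = (erp[pre:] - base_mu) / (base_sd + 1e-12)
        centers.append(start + win_s / 2)
        scores.append(np.abs(z).max())
    return np.array(centers), np.array(scores)

# Example with synthetic epochs (200 trials, 0.5 s baseline + 2 s post-stimulus at 1 kHz).
pre = 500
epochs = np.random.randn(200, pre + 2000)
stim_times = np.sort(np.random.uniform(0, 900, 200))
centers, scores = sliding_erp_amplitude(epochs, stim_times, pre)
print(centers.shape, scores.shape)
```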
To test these dynamics in a more controlled setting and in a population of healthy subjects, we next analyzed scalp EEG data recorded during a stepped infusion of propofol in healthy volunteers (Mukamel et al., 2014), during presentation of auditory stimuli that were click trains, words, and the subject's own name. This stepped infusion protocol induced slow changes in propofol concentration and behavioral responses (Figure 4a,b). The steady-state auditory evoked potential to the click-train stimuli also declined, quantified as the induced power at 40 Hz, corresponding to the click frequency (Figure 4c). To confirm that this decrease was selective to the auditory-evoked band rather than broadband, we also analyzed power at a 'control' frequency of 22 Hz (i.e., not the stimulus frequency) and found no change. We next examined the traces and found that large evoked potentials were clearly visible during emergence after large-amplitude slow oscillations subside (Figure 4d,e,f). To apply the same LP detector, we focused on a frontal EEG electrode, as frontal electrodes had high LP rates in our intracranial data (Figure 2b) and did not exhibit the large auditory-evoked potentials of the temporal electrodes. We detected stimulus-evoked LPs (peak >7 s.d., Figure 4) during emergence from propofol anesthesia in 4 out of 10 subjects, despite the lower spatial resolution of the scalp recordings (Figure 4e,f,g). If the detection threshold was lowered (peak >5 s.d.), we could also observe brief traces of similar events in the induction period in 3 of the 10 subjects. However, these periods were brief and infrequent (Figure 4e), suggesting that this brain state occurs primarily (but not exclusively) during emergence (Figure 4-figure supplement 1). While we detected these events in a frontal electrode, LP events were observed broadly across the scalp (Figure 4-figure supplement 1), consistent with the widespread spatial profiles we observed in the intracranial data. These results in healthy volunteers confirmed that the LPs were not related to epileptic events in the patients. Furthermore, they show that LPs occurred primarily during emergence (Figure 4h,i, Figure 4-figure supplement 1) even in these experiments with a prolonged induction period, lasting more than twice as long as the emergence period. In these subjects, the LPs were also found to be stimulus-selective: they occurred preferentially in response to the sound of words and names, and did not occur following click-train stimuli (Figure 4h, Figure 4-figure supplement 1). In contrast, no such stimulus selectivity was observed in the intracranial patients, as each stimulus type could elicit LPs (in the channel with the most events in each subject, LPs occurred within 2 s of 21% of word stimuli, 20% of sound stimuli, and 22% of click-train stimuli). A key difference between these two datasets was the relative frequency of the name and word stimulus categories, which were infrequent (20% names/words, 80% clicks) in the scalp data but were evenly distributed in the intracranial data (30% words, 40% clicks, 30% sounds).
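The 40 Hz steady-state quantification and the 22 Hz control comparison can be sketched as below; the same thresholding idea used for the scalp LP detector (a peak exceeding several standard deviations of a baseline period) follows analogously. The sampling rate and synthetic epochs are placeholders.

```python
import numpy as np
from scipy.signal import welch

def induced_band_power(epochs, fs, f0, half_bw=1.0):
    """Mean power (dB) in a narrow band around f0 across stimulus-locked epochs
    (trials x samples), e.g. 40 Hz for the click-train steady-state response and
    a non-stimulus 'control' frequency such as 22 Hz."""
    f, pxx = welch(epochs, fs=fs, nperseg=min(epochs.shape[1], fs), axis=-1)
    band = (f >= f0 - half_bw) & (f <= f0 + half_bw)
    return 10 * np.log10(pxx[:, band].mean())

# Example: synthetic click-train epochs with a 40 Hz component buried in noise.
fs = 500
t = np.arange(0, 2, 1 / fs)
epochs = 0.5 * np.sin(2 * np.pi * 40 * t) + np.random.randn(50, t.size)
print(induced_band_power(epochs, fs, 40.0))   # should exceed the control band
print(induced_band_power(epochs, fs, 22.0))
```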
The increased saliency of the infrequent word and name stimuli in the scalp dataset may thus have increased the probability of an LP, similar to reports for KCs during sleep (Colrain et al., 1999). We observed LPs for a prolonged period that could extend after the initial ROC in the scalp EEG dataset (Figure 4h,i, Figure 4-figure supplement 1), whereas LPs were only present before ROC in the intracranial dataset (Figure 3d, Figure 4-figure supplement 1). This difference likely reflects the differences in arousal state across these two datasets: in the intracranial study, the drug was completely shut off and patients emerged rapidly as drug levels monotonically decreased. In contrast, in the scalp EEG dataset, the propofol levels were lowered in a gradual, stepped fashion (Figure 4a), leading to a prolonged emergence period over tens of minutes. These large LPs may therefore be present not only in the minutes prior to any sign of ROC, but may continue through emergence until a relatively heightened arousal state is reached. Evoked responses strongly resemble spontaneous K-complexes during sleep Although the LPs shared some properties with the spontaneous KCs that occur during N2 sleep, the propofol emergence period could be expected to also exhibit significant differences from natural sleep. To test the similarity between events during sleep and during emergence, we obtained intracranial recordings during sleep from a subset of the patients (n = 3 patients). To compare the LPs detected during propofol with the spontaneous KCs during sleep, we first verified that the automatic detection algorithm could identify events in the sleep datasets. We found that 64% of manually identified KCs were also flagged by the automatic detector, suggesting this approach could be used to quantitatively compare the two phenomena within this patient cohort (although the high number of misses, 36%, suggests it should not be employed as a KC detector for more general purposes). The LPs recorded during emergence and the sleep KCs shared an overall profile of large (>100 µV), slow waveforms (Figure 5b). The spatial distribution of sleep KCs also appeared very similar to that seen during emergence from propofol anesthesia (Figure 5c,d). To test this spatial similarity, we computed the mean event waveform across all electrodes, triggered on the peak of each event detected in a single electrode with a large number of events in both the sleep and propofol recordings. We found that the mean event waveforms were spatially similar across the two states (Figure 5c, Figure 5-figure supplement 1), meaning that electrodes with large KCs were likely to also exhibit large LPs during emergence. Overall, the shared waveform, spatial distribution, and timing of these events suggest that the LPs observed during propofol emergence may engage the circuit mechanisms that generate KCs during natural sleep. Given the resemblance of the LPs to KCs, we next tested whether ongoing spectral dynamics within the LP period resembled N2 sleep. We computed the power spectrum during a manually selected period exhibiting LPs during emergence, and compared these to segments of sleep recordings manually identified as N2 sleep. We found substantial differences in these spectra, with propofol emergence exhibiting more power across a broad frequency range of 10 to 40 Hz throughout all recorded cortical regions (Figure 6, median difference = 5.6 dB, CI = [3.9, 6.1], bootstrap; p<0.001 in each subject, Wilcoxon signed-rank test).
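The spectral comparison between the LP-rich emergence period and N2 sleep reduces to a band-limited difference of Welch spectra. A minimal sketch with synthetic segments is shown below; the 10-40 Hz band follows the comparison reported above, and everything else (segment lengths, scaling) is illustrative.

```python
import numpy as np
from scipy.signal import welch

def psd_db(x, fs, nperseg=2048):
    """Welch power spectral density of a single-channel segment, in dB."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    return f, 10 * np.log10(pxx + 1e-20)

def band_difference_db(x_emerge, x_sleep, fs, band=(10.0, 40.0)):
    """Median dB difference (emergence minus N2 sleep) within a frequency band,
    analogous to the 10-40 Hz comparison reported above."""
    f, p_e = psd_db(x_emerge, fs)
    _, p_s = psd_db(x_sleep, fs)
    sel = (f >= band[0]) & (f <= band[1])
    return float(np.median(p_e[sel] - p_s[sel]))

# Example on synthetic segments; real segments would be manually selected
# LP-rich emergence data and N2 sleep data from the same electrode.
fs = 1000
emerge = np.random.randn(120 * fs)
sleep = 0.5 * np.random.randn(120 * fs)       # lower broadband power
print(band_difference_db(emerge, sleep, fs))  # expected to be positive (~6 dB)
```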
In addition, the sleep spectra exhibited clear spindle power (10-14 Hz) peaks across cortical regions, whereas the emergence spectra exhibited either no peak or a spatially restricted frontal alpha (~10 Hz) peak characteristic of deep propofol anesthesia (Figure 6c). These results demonstrate that while some common neurophysiological events can be observed in stage two sleep and in this transient emergence period, emergence is a distinct brain state that is not identical to sleep. Discussion Using both intracranial ECoG and scalp EEG recordings, we found that emergence from general anesthesia is accompanied by a transient state in which auditory stimuli can evoke large potentials (LPs) corresponding to all-or-none cortical suppressions lasting several hundred milliseconds. LPs strongly resemble the K-complexes observed in N2 sleep, although the neural dynamics of emergence from general anesthesia nevertheless represent a distinct state. This state appeared primarily during emergence and foreshadowed the return of behavioral responsiveness, suggesting it represents a distinct brain state through which patients transition as they recover consciousness. Our data indicate that the brain's response to propofol is hysteretic, such that the current state is determined not only by the drug concentration but also by the recent history of the brain's activation. The brain state we observed appears to be distinct from the sedated state experienced by patients during slow anesthetic induction, as it was exclusively observed during emergence and not induction of general anesthesia in the intracranial recordings. A small number of LPs were detected during induction of anesthesia in a subset of subjects over the course of extremely long (>1 hr) inductions in the scalp EEG dataset, but these were rare and vastly outnumbered by the more frequent LPs occurring during emergence. In addition, a previous intracranial study of slow (~1 hr) inductions of propofol general anesthesia did not report analogous events (Nourski et al., 2017), suggesting this phenomenon is primarily a signature of emergence. We also found that this transitional state is not identical to sleep: comparing neural dynamics during sleep and emergence from general anesthesia in the same subjects identified substantial differences in the power spectrum. The frontal alpha rhythm characteristic of propofol anesthesia is still present during emergence (Feshchenko et al., 2004;Murphy et al., 2011;Purdon et al., 2013), but not during N2 sleep, indicating these are distinct brain states. Spontaneous alpha rhythms during propofol are thought to be generated by increased inhibitory tone in thalamic circuits, causing an intrinsic~10 Hz dynamic to emerge (Ching et al., 2010). These alpha rhythms are still present during the LPs, suggesting the thalamus may be exhibiting an altered excitatory/inhibitory balance as compared to sleep. However, despite the difference in spontaneous dynamics, the LP events themselves share many common properties with sleep, exhibiting highly similar waveforms and spatial profiles. In addition, LPs occurred at higher rates in response to more salient stimuli. A similar effect has also been found in sleep, as salient stimuli (such as rare stimuli or the subject's own name) produce larger KC peaks during sleep (Colrain et al., 1999;Perrin et al., 1999). 
These events may therefore reflect an analogous effect of arousing stimuli in sleep and emergence, which could conceivably be related to some similarity in circuit state, such as ongoing tonic vs. bursting dynamics in thalamus. The common morphology of the LPs we observe during emergence and the KCs characteristic of sleep suggest that similar circuit mechanisms are engaged by auditory stimuli despite differences in the ongoing spontaneous dynamics. There is evidence that neuromodulatory arousal systems mediate emergence from general anesthesia, distinct from induction. Disruption of orexinergic signaling increases the time required for emergence from anesthesia, but does not change the dose-response sensitivity for induction (Kelz et al., 2008). Coherent alpha (8-12 Hz) and delta (1-4 Hz) oscillations develop rapidly and pervasively across medial prefrontal cortex and thalamus at loss of consciousness induced by propofol, and likely mediate the functional disruption of these areas, contributing to the state of unconsciousness (Flores et al., 2017). During emergence, these oscillations dissipate in a sequence distinct from induction, beginning with superficial cortical layers and medial and intralaminar thalamic nuclei, following known cortical and thalamic projection patterns for dopaminergic and cholinergic signaling (Flores et al., 2017). Neuromodulatory activity during emergence could therefore create unique cortical and thalamic circuit states that enable LP responses to sensory stimulation. Given the similarity between LPs under anesthesia and sleep K-complexes, similar mechanisms might also play a role in modulating levels of arousal during sleep. The LPs we observe are qualitatively different from the ongoing slow oscillations that occur during deep anesthesia (Steriade et al., 1993;Breshears et al., 2010;Murphy et al., 2011;Lewis et al., 2012). LPs occur after the ongoing slow oscillation has largely subsided and reflect an isolated cortical DOWN state elicited by auditory stimulation, rather than a rhythmic cortical dynamic. However, the occurrence of LPs increases power in the same low-frequency bands of the spectrum that are occupied by the slow oscillation. Future studies may therefore need to take care that their analyses differentiate between these two distinct states, as increased low-frequency power may indicate isolated LP occurrence and foreshadow awakening, and will be important to distinguish from the slow oscillations of deep anesthesia. While the LPs were strikingly large, they may have been obscured in previous studies due to the brief and transient nature of the state in which they occur. In addition, we observed substantial heterogeneity across patients in terms of the frequency and timing of the LPs. In the intracranial data this heterogeneity may be partially explained by variation in electrode location, the duration and complexity of the surgery, and dosage of clinical medications administered to each patient. In the scalp EEG data, however, drug levels were controlled and no surgery was performed, yet heterogeneity across subjects was still present. This heterogeneity is also consistent with clinical observations, as patients are much more variable in how long they take to emerge than they are in induction. Following anesthetic emergence, patients exhibit variable levels of arousal, with some patients taking hours to return to alertness (Larsen et al., 2000). 
While animal studies have reported stereotyped transitions between states (Hudson et al., 2014), possibly due to increased experimental control and genetic similarity between individual animals, human studies have suggested that individuals may exhibit different trajectories during emergence from anesthesia (Hight et al., 2014), and undergo different transitions between distinct, potentially sleep-like states (Chander et al., 2014). This variability may reflect individual differences in arousal regulatory circuits or even in drug diffusion rates across the brain. It may also be that some patients pass through the transient stage too quickly for it to be identified using our analyses. Another possibility is individual physiological differences, such as receptor density or vascular properties, could modulate the relative rate of drug clearance in cortex and subcortex, and that only some individuals may experience this state. However, since these events were detected in all the intracranial patients we studied, it may be that the transient state occurs in most patients but is more challenging to detect in scalp EEG due to blurring of signals measured at the scalp. In addition, the healthy volunteers received a smaller total amount of propofol than the clinical patients, and may therefore have been more likely to emerge too rapidly to detect this brief state. The precise circuit mechanisms that generate the LP phenomenon are not clear, and will be challenging to identify with certainty using data from human subjects. However, we suggest that the sleep K-complex may share some mechanistic parallels with the LPs observed here. The KC is an isolated cortical DOWN state (Cash et al., 2009) and is likely to also involve thalamic circuits (Jahnke et al., 2012;Mak-McCully et al., 2014). While previous animal studies have identified spontaneous KCs during maintenance of ketamine-xylazine anesthesia (Amzica and Steriade, 1998a), these occurred as part of an ongoing slow oscillation rather than the isolated auditory-evoked events seen here and during N2 sleep. Moreover, those events were not selective to anesthetic emergence, suggesting they represent a different phenomenon. Stimulus-evoked potentials in animal studies have primarily reported stimulus-evoked responses with a faster timecourse than the LPs reported here (Amzica and Steriade, 1998b), perhaps reflecting a different phenomenon during relatively stable states of anesthesia in most animal studies, compared with dynamic changes during awakening. Future animal studies should therefore track the gradual process of emergence to identify the mechanisms of the isolated LPs identified here. One possible mechanism is that increased thalamic activation leads to strong stimulation of the thalamic reticular nucleus (TRN), leading to a thalamic and subsequent cortical suppression. This theory would be consistent with animal studies that have induced slow waves through stimulation of TRN and suppression of thalamocortical neurons (Lewis et al., 2015), and with human imaging studies demonstrating that emergence is associated with increased activity in subcortical arousal structures such as thalamus (Långsjö et al., 2012). Alternatively, it may be that an inhibitory shift in the excitatory/inhibitory balance in cortex leads to a local profound suppression in response to sensory input, generating a local LP that can then spread across cortex or through corticothalamic projections. 
Future studies could explore these theories further through causal manipulations of cortical and thalamic activity during a gradual emergence process. These future investigations could also address some limitations of the current study. As intracranial electrodes are placed solely based on clinical need, we did not obtain whole-brain coverage, and had no thalamic recordings. Animal studies could investigate more systematically the spatial profile of the observed LPs. In addition, due to the nature of our experiment taking place in the operating room, we were constrained in timing and could not record throughout a continuous induction, maintenance, and emergence. In addition, the induction and emergence recordings were not counterbalanced in time due to the ordering of implant and explant procedures. They could potentially exhibit small changes in electrode position and signal-to-noise-ratio. While our data suggest no major difference in recording quality that could explain the striking LPs we observe, and we observe the LPs in scalp EEG as well despite opposite temporal ordering, more subtle phenomena could depend on differences in the recordings across these sessions. Highly controlled volunteer studies, as in our scalp data, will therefore be useful counterparts to any future intracranial investigations of these phenomena. Finally, our patient sample was small due to the rare nature of these recordings, and therefore we could not examine how the heterogeneity of LP dynamics might relate to emergence time or other clinical outcomes. Gathering datasets in larger patient cohorts would be very valuable for investigating how these dynamics can inform patient monitoring and predict functional outcomes. In particular, the LP events could potentially be used to monitor depth of anesthesia or predict when a patient will emerge, or they may be found to relate to emergence-related clinical outcomes, such as delirium. Future clinical studies would be highly beneficial for investigating these questions. In summary, we identified a transient brain state that occurs asymmetrically during emergence from general anesthesia. While deep states of anesthesia have been well characterized and exhibit stereotyped electrophysiological signatures, tracking transitions between states demonstrates the existence of transient and heterogeneous dynamics that occur selectively in the minutes before emergence. This state engages similar sensory-evoked circuit dynamics as in sleep, suggesting the brain may sometimes experience a sleep-like sensory blockade before recovering from general anesthesia. Clinical setting Written informed consent was obtained from all patients and experimental procedures were approved by the Massachusetts General Hospital/Brigham and Women's Hospital Institutional Review Board. The enrolled patients had medically intractable epilepsy and underwent surgery to implant intracranial electrodes for clinical monitoring purposes (Figure 1-figure supplement 1). The location and number of electrodes implanted was determined by clinical criteria without regard to this study. Recordings were performed in the hospital operating room as the patients emerged from propofol general anesthesia. Recordings began after surgery was completed and while the clinical infusion of propofol was still running, and continued throughout the period after the infusion was stopped and patients emerged from anesthesia, until patients had to be disconnected for transport outside of the operating room. 
No seizures were recorded in these data. Patients received the typical clinical regimen of medication throughout the surgery (including paralytics and analgesics), and in most cases the maintenance infusion also included remifentanil. We acquired intracranial recordings from 15 patients during emergence. Data from two patients were excluded due to poor recording quality, and data from one patient was excluded due to failure of the auditory stimulus equipment. A second emergence recording was acquired from one patient with electrodes implanted in different locations, and this recording was treated as another subject in the analysis (total analyzed = 13 sessions, drawn from 12 individuals, five female, mean age 34.5 years, range 21-48 years). In four sessions, patients had only depth electrodes, and in nine sessions they had both subdural grid/strip and depth electrodes. Eight of these patients were also studied during gradual induction of general anesthesia when they returned 1-3 weeks later to undergo electrode removal surgery. In the induction recording, propofol was infused gradually using STANPUMP software with a target plasma concentration rising linearly over 10 min to a maximum of 6 mg/mL (Schnider et al., 1999). Behavioral task -Intracranial recordings Auditory stimuli were presented every 3.5-4.5 s with uniform temporal jittering (11 sessions) or every 6 s (two sessions) using EPrime software and air-tube earphones to avoid stimulus-related artifacts in the electrophysiology data. Stimuli consisted of either a click train with a frequency of 40 Hz in one ear and 84 Hz in the other, lasting 2 s; a non-verbal sound (e.g. door closing, alarm); or a spoken word. Stimulus types were pseudorandomized throughout the presentation. Words and sounds were of neutral or negative affect; these distinctions were not analyzed in detail here. During the induction of anesthesia prior to the start of the surgery, patients listened to the stimuli and were asked to press a button to indicate whether the stimulus was a word. During emergence, stimulus presentation began near the time that the propofol infusion was stopped, and continued until the patient became responsive. The total presentation duration was 20 min, and if the patient had not yet emerged at that time then the presentation was restarted. Only two patients began performing the task at emergence. Due to this behavioral observation, clinical staff also periodically (approximately every~1-2 min) asked subjects to open their eyes. Return of behavioral responsiveness was marked manually using two definitions: the first spontaneous movement observed by research staff (labeled 'First movement'), and the time at which patients began responding to verbal requests to open their eyes or move their hands (labeled 'First response', defined as return of consciousness (ROC)). In 8 of these patients, the same behavioral task was used during induction of anesthesia 1-3 weeks later when patients returned for surgery to remove the intracranial electrodes. The task began 4 min prior to the start of the gradual propofol infusion and continued for 4 min after the target plasma level reached its maximum level. Intracranial electrophysiology data During anesthetic induction and emergence, intracranial recordings were acquired from depth and/ or subdural grid and strip electrodes, with placement selected solely by clinical staff for clinical purposes. Recordings were acquired with an XLTEK acquisition system at a 2000 Hz sampling rate. 
Bad electrodes were manually identified and excluded from further analysis. Depth and strip electrodes were re-referenced to a bipolar montage in which an adjacent contact was subtracted from each channel. Grid electrodes were referenced to a Laplacian montage by subtracting the mean of the immediately neighboring electrodes. Data were detrended, lowpass filtered below 200 Hz, downsampled to 500 Hz, and highpass filtered above 0.16 Hz. Automated event detection The automated detector was designed to conservatively select events, missing some events but also reducing false positives. Since occasional large artifacts interfered with event detection, automatic detection of spontaneous events was restricted to the longest manually identified continuous segment with acceptable recording quality. All other timepoints were excluded from the automatic detection window. This approach was chosen due to the nature of the intracranial recording: we began recording as soon as possible, but clinical interaction with the patient at the beginning and end of the experiment, as well as connecting and disconnecting electrodes, led to very large artifacts at these timepoints whereas we obtained a long, stable recording during the emergence process. The median duration of this segment across patients was 650 s (inter-quartile range: 580-1410 s). This long segment typically still included some periods with noise, which were rejected automatically in further analyses as described below. Data were first filtered between 0.2-4 Hz. All positive and negative peaks with an amplitude of at least 400 mV were identified. The duration of this peak, defined as the amount of time spent over a threshold of 40 mV, was required to be at least 400 ms. Peaks with amplitude greater than 1200 mV were discarded as artifact, and events occurring within 500 ms of a previous event were discarded. All events within a single electrode were required to have the same polarity, selected as whichever polarity was most frequent across all automatically detected events, since the referencing montage allowed potentials to be either negative-or positive-going depending on local polarity and electrode positioning. Event-locked analysis Trials with a range (peak-trough difference) exceeding 1500 mV were discarded as artifact. Event trials were defined as those trials with an automatically detected event occurring within 2 s of stimulus onset. The mean of all event trials was computed for each electrode that had at least five event trials. Because different electrodes had different polarities, the sign for negative-going electrodes was flipped. The median and quartiles were then computed across the pool of all electrode waveforms (restricted to electrodes with at least five event trials) and all 12 patients. Analyses of individual waveforms (e.g. Figure 2c) selected the electrode with the most detected events in each patient. Rise times and fall times were computed on the mean waveform for each selected electrode, by calculating the amount of time it took to rise and fall from a threshold of 200 mV to the peak of the mean event waveform. Bootstrapped 95% confidence intervals were calculated by resampling across subjects with replacement 1000 times, and reporting the 2.5th and 97.5th percentile of the resulting distribution. Spectral analysis Spectrograms were computed using the electrode with the most LPs in each subject. Triggered spectrograms were computed relative to the peak of the LP waveform selected by the automatic detector. 
Spectral analysis was performed using multi-taper estimation (Chronux, http://chronux.org, [Bokil et al., 2010]). The analysis used three tapers and a sliding window of 200 ms duration every 50 ms. Spectrograms were normalized within frequencies to the mean power at that frequency between [À2-1] s prior to the peak. Broadband gamma power was computed by taking the mean power between 40 and 100 Hz, relative to the mean gamma power in the [À2-1] s window. Statistical analyses of gamma power were performed on the mean gamma power in the 300 milliseconds post-peak using the Wilcoxon signed-rank test. Spectra for ongoing spontaneous dynamics ( Figure 3) used six 30 s epochs within a continuous 3 min time window, using 19 tapers. Spectra were downsampled by a factor of 4 for display. Statistical comparison between time windows was performed by a hierarchical bootstrap resampling procedure: (1) resample subjects; (2) resample epochs within subjects; (3) compute the mean spectrum for each time window on the resampled time windows; (4) calculate the difference between the two spectra. This procedure was repeated 1000 times to obtain 1000 bootstrap estimates of the difference in the spectra; differences outside the [2.5,97.5] percentile for more contiguous frequency points than the spectral resolution of the multitaper estimate were labelled as significant and marked in red. One subject was excluded from the post-ROC vs. awake baseline comparison because electrode quality became too poor (s.d. >500 mV) after the patient emerged due to motion artifacts. Shaded error bars in the plot were computed in Chronux using jackknife estimation. Spatial analysis of evoked potentials Because the automatic detector imposes an artificial threshold on amplitude for events, the spatial analysis was performed on the stimulus-evoked potential over all electrodes. This analysis included only trials that were identified as generating an LP on at least one electrode, and excluded any trials with amplitude above 1500 mV as noise. The peak amplitude of the mean evoked potential in each electrode was plotted in color on a 3D reconstruction of the cortical surface generated using Freesurfer (Fischl, 2012) and with grid and strip electrode coordinates registered to the surface of the brain (Dykstra et al., 2012). To categorize the spatial location of electrodes, the nearest anatomical label from the Freesurfer automatic subcortical segmentation or cortical parcellation (Destrieux et al., 2010) was assigned. Electrodes identified as being in white matter and electrodes in regions with fewer than five contacts (e.g., putamen, occipital cortex) were excluded from the spatial analysis. Statistical testing of which of the nine regions had significantly high proportions of electrodes with >5 LPs was performed with a binomial test, comparing each region to the mean across regions, with a Bonferroni correction for multiple comparisons across regions. Displayed grid timecourses are lowpass filtered below 30 Hz and downsampled to 100 Hz for display. Timecourse analysis Sliding window plots over induction and emergence were calculated by averaging all trials within a window of 60 s sliding every 15 s. For z-score analysis (Figure 3c), the peak amplitude of the ERP was normalized to the standard deviation of the 1.5 s pre-stimulus, across each 60 s window, and the plots display the resulting z-scores. Calculations were only included when at least eight stimuli occurred within the window. 
When analyzing mean evoked amplitude across time windows (Figure 3d), a 3 min period for each window was defined, and the mean evoked response was computed. The mean amplitude in the 0.5-1.5 post-stimulus window was then computed for each subject. As before, one subject was excluded from the post-ROC condition because electrodes began to be disconnected and recording quality was not usable. Sleep-intracranial comparison Recordings of natural sleep were obtained for three of the intracranial recording patients during their hospital stay (after the emergence recording and prior to the induction recording). An experienced neurophysiologist (G.P.) scored the sleep data and manually labelled the onset and offset times of a subset of clearly visible KCs in the intracranial recordings for initial validation of the approach. Sleep data was acquired on a clinical system with a sampling rate of either 500 or 512 Hz. To match the propofol recording, the same reference electrodes were used for each electrode as in the emergence dataset, and then all electrodes were filtered between 0.16 Hz and 200 Hz. Any electrodes where the same reference electrodes were not available in both datasets were excluded. For analysis of median peak amplitude in individual events, electrodes with at least four events in each dataset were included. The histogram reflects all detected events on these electrodes, whereas the statistical test drew the same number of events from both the sleep and the propofol datasets for each subject. For bootstrap confidence interval estimation, data across subjects were pooled due to the small number of patients, and the bootstrap drew from datapoints pooled across the three patients. For comparison, within-subject statistics are also presented. To compare the spatial distribution of events across both datasets, event times were selected from a single electrode with at least 20 events in both datasets, and then the peak-triggered waveform across all electrodes was computed using these selected times. The mean value of the peak-triggered waveform between 100 ms pre-peak and 100 ms post-peak was calculated, and this mean event value was then compared across electrodes. Spectra were compared by identifying four 30 s windows of clean recordings with high LP rates in the emergence dataset, and randomly selecting four 30 s consecutive windows of N2 in the sleep dataset. Spectra were computed using Chronux with 19 tapers, downsampled by a factor of 4 for display, and error bars were computed with the jackknife method at p<0.05. Scalp EEG dataset Scalp EEG analysis used data that was previously published Mukamel et al., 2014) with full details provided in those publications. Briefly, healthy volunteers underwent monitoring with 64-channel EEG during a slow infusion of propofol, targeting a stepped increase from 0 to 5 mg/mL plasma concentration over one hour, and then a stepped decrease until the subjects recovered consciousness. Stimuli consisted of click trains (2 s duration), words, or the subject's own name, with stimulus type pseudorandomized throughout the experiment. 80% of the stimuli were click trains, 10% were words, and 10% were names. The LP analysis used a single frontal EEG electrode. For each stimulus presentation, we subtracted the mean and divided by the standard deviation during the 2 s pre-stimulus period. We then computed the maximum stimulus-evoked amplitude during the 1 s following stimulus presentation, and averaged these over 1 min windows.
Quantum Invariants of the Pairing Hamiltonian Quantum invariants of the orbit dependent pairing problem are identified in the limit where the orbits become degenerate. These quantum invariants are simultaneously diagonalized with the help of the Bethe ansatz method and a symmetry in their spectra relating the eigenvalues corresponding to different number of pairs is discussed. These quantum invariants are analogous to the well known rational Gaudin magnet Hamiltonians which play the same role in the reduced pairing case (i.e., orbit independent pairing with non degenerate energy levels). It is pointed out that although the reduced pairing and the degenerate cases are opposite of each other, the Bethe ansatz diagonalization of the invariant operators in both cases are based on the same algebraic structure described by the rational Gaudin algebra. I. INTRODUCTION Strong pair correlations are observed in fermionic many body systems which energetically favor large wave function overlaps. This phenomenon is known as pairing and it plays an important role in our understanding of many body physics (See Ref. [1] for a review). Historically, the physical significance of pairing was first realized with the microscopic theory of superconductivity developed by J. Bardeen, L. N. Cooper and J. R. Schrieffer (BCS) in 1957 [2]. Following the success of the BCS theory, the idea of pairing was carried over to other areas of physics as well. In particular, pairing now plays an essential role in the nuclear shell model as the residual interaction between nucleons and successfully recounts for various properties of atomic nuclei [3,4]. In order to investigate the influence of pairing on nuclear properties, many authors have used exact analytical solutions of nuclear shell model which are available in some simplified cases. For example, in Ref. [5], Kerman considered pairs of nucleons coupled to angular momentum zero occupying a single orbit and introduced the quasi-spin formalism in which these pairs can be treated within suitable representations of the angular momentum algebra. He used this formalism to write down the exact energy eigenstates and to analyze the influence of pairing on the collective vibrations of nuclei. Quasi-spin formalism can also be extended to the case of several orbits in which case the quasi-spin angular momenta corresponding to different orbits commute with each other and the pairing term has the form of a coupling between these angular momenta. This observation establishes a direct link between the fermion pairing models and the interacting spin models (see Refs. [6,7,8] for reviews). The exact diagonalization of the later model was carried out by R. W. Richardson in 1962 in the case of the orbit-independent (i.e., reduced) pairing interaction [9]. Later, it was clarified that the exact solvability of the pairing Hamiltonian in the reduced pairing case can be understood in terms of a set of quantum invariants which commute with one another and also with the Hamiltonian [10,11]. These invariants are called rational Gaudin magnet Hamiltonians since they stem from the work of M. Gaudin who was originally trying to find the largest set mutually commuting operators for a given system of interacting spins. In Ref. [10], Gaudin also showed that the pair creation and annihilation operators which are used in building the simultaneous eigenstates of the rational Gaudin magnet Hamiltonians form an algebra which is today known as the rational Gaudin algebra. 
It is worth mentioning that the rational Gaudin algebra is related to the rational solution of the classical Yang-Baxter equation which appears as an integrability condition in many contexts. As a result, the rational Gaudin magnet operators and the rational Gaudin algebra have found many other applications in physics (see Refs. [6,7,8] for reviews and Refs. [12,13,14,15,16] for some interesting applications). They have also been generalized to include other underlying algebraic structures (i.e., higher rank algebras, super-algebras and deformed algebras) besides the angular momentum algebra. There is an extensive literature on this subject and the interested reader may find the Refs. [17,18,19,20,21,22,23] useful. Although the Richardson-Gaudin solution is successfully used in nuclear physics, the assumption of reduced pairing sometimes proves to be too stringent. In many cases, the effective residual interactions between the nucleons are best described by a pairing force whose strength differs between the orbits 1 . The Hamiltonian is frequently used to describe such an orbit dependent pairing interaction. Here, j denotes the total angular momentum of an orbit and ε j denotes its energy. The overall strength of the pairing term against the kinetic term is measured by the constant |G| which has the dimension of energy whereas the relative pairing strengths are measured by the dimensionless constants c j . Richardson-Gaudin scheme mentioned above applies to the special case of this Hamiltonian in which all c j 's are the same whereas all single particle energy levels ε j are different from one another (the reduced pairing case). The focus of this paper, however, is the opposite case in which all the c j 's are different from one another and all single particle energies ε j are the same (the degenerate case). The problem described by Hamiltonian given in Eq. (1) is exactly solvable in both the reduced pairing case and the degenerate case. As mentioned above, in the reduced pairing case the solution was given by Richardson-Gaudin scheme and the corresponding quantum invariants are the rational Gaudin magnet Hamiltonians. In the degenerate case, the exact energy eigenvalues and eigenstates were obtained in a series of papers by Pan et al [25] and by Balantekin et al [26,27] and the purpose of the present paper is to identify the corresponding quantum invariants in the degenerate case. In addition, it will be shown that the quantum invariants in the degenerate case can be simultaneously diagonalized with the help of the algebraic Bethe ansatz method. An interesting observation regarding the Bethe ansatz diagonalization is that although the reduced pairing and the degenerate cases are opposite of each other, the Bethe ansatz diagonalization of the invariant operators in both cases is connected with the rational Gaudin algebra. The organization of this paper is as follows: Section II is a brief review of the quasispin formalism and it also serves to introduce some notation. In Section III, a short review of the Richardson-Gaudin formalism and the rational Gaudin magnet Hamiltonians is presented. The main results of this paper, i.e., the quantum invariants in the degenerate case and their simultaneous diagonalization with the Bethe ansatz method are presented in Section IV. This section also contains a discussion about a symmetry in the spectra of these quantum invariants relating eigenvalues corresponding to different number of particles. 
In Section V, we consider the rational Gaudin algebra and point out its relationship with the Bethe ansatz diagonalization in both the reduced pairing and the degenerate cases. Section VI summarizes the main conclusions. The details of some of the Bethe ansatz calculations can be found in the Appendix. II. QUASI-SPIN FORMALISM AND THE EXACT SOLUTIONS OF THE PAIRING HAMILTONIAN In the quasi-spin formalism, nucleon pairs coupled to angular momentum zero are created and annihilated at the level j by the operators respectively. Together with the operator they obey the well known angular momentum commutations relations As a result, one has an angular momentum algebra (the so called quasi-spin algebra) for each orbit j such that those angular momenta corresponding to different orbits commute with one another. The pair number operator for the orbit j is given byN It is related to the operator S 0 j given in Eq. (3) by the formula where Ω j is the maximum number of pairs which can occupy the level j. Note that j is always an half integer because of the spin-orbit coupling in the nuclei. As a result, if there are no unpaired particles at the level j, then Also note that the pairing term in the Hamiltonian given in Eq. (1) does not act on the unpaired particles. If there is an unpaired particle at the level j, its effect will be i) to add a constant ε j to the Hamiltonian because of the kinetic term and ii) to reduce the maximum number of pairs which can occupy the level j by one, i.e., to take Ω j to Ω j − 1. But here it will be assumed that there are no unpaired particles in the system. In this case, Eq. (6) implies that i.e., quasi-spin algebra corresponding to the level j is realized in the Ω j /2 representation. Therefore, in addition to the physical angular momentum quantum number j, we also have the quasi-spin quantum number Ω j /2 for each level. The states respectively represent the situations in which i) the level j is not occupied by any pairs and ii) it is maximally occupied by pairs. In the presence of several orbits with angular momenta j 1 , j 2 , . . . , j n , the state represents a shell which contains no pairs whereas the state represents a shell which is fully occupied by pairs. The pairing Hamiltonian given in Eq. (1) can be written in terms of the quasi-spin operators given in Eqs. (2) and (3) Note that the operator j c j S + j in Hamiltonian (12) creates a pair of particles in such a way that c j can be viewed as the probability amplitude that this pair is found at the level j. For this reason the coefficients c j are usually called occupation probability amplitudes and they are normalized as Although an occupation probability amplitude is a complex number in general, the parameters c j can be taken as real without loss of generality. Because if one c j is complex, a unitary transformation can always be performed on the quasi-spin algebra corresponding to the level j to make that c j real. Also note that the Hamiltonian in Eq. (12) contains a constant term j 2ε j Ω j which comes from Eq. (6). This constant term is not dropped because it guarantees that the energy of the empty shell is zero, i.e., Using the commutators given in Eq. (4), one can show that the fully occupied shell |0 is also an eigenstate of the Hamiltonian with the energyĤ Unlike the empty shell |0 and the fully occupied shell |0 , the eigenstates of the pairing Hamiltonian corresponding to a partially occupied valance shell are unknown in the most general case. 
But, as mentioned in the Introduction, exact energies and eigenstates are known in the two opposite cases. Namely, the reduced pairing case characterized by and the degenerate case characterized by These solutions will be reviewed in the next two sections together with the corresponding quantum invariants. But before closing this section, mention must be made of a third case in which exact eigenstates of the pairing Hamiltonian are known. This solution is available in the presence of two orbits with unequal energies and unequal occupation probability amplitudes, i.e., [28]: But this third case will not be considered in this paper. Because the main interest of this paper is the quantum invariants of the pairing Hamiltonian and in the case of a two level system we have only two quantum invariants which are simply the Hamiltonian itself and the total pair number operator. III. REDUCED PAIRING AND THE GAUDIN MAGNET OPERATORS In the reduced pairing case, described by Eq. (16), the pairing Hamiltonian given in Eq. (12) becomeŝ Here d = 1/n is known as the level spacing and its appearance in the Hamiltonian is due to the normalization condition (13). Using a variational technique, Richardson showed in Ref. [9] that the eigenstates of the Hamiltonian given in Eq. (19) containing N pairs of particles are in the form where the pair creation operators J + (ξ) are given by and |0 is the state with no pairs defined in Eq. (10). The values of the parameters ξ 1 , ξ 2 , . . . , ξ N which appear in Eq. (20) are to be determined by solving the system of equations simultaneously for k = 1, 2, . . . , N (see Ref. [9]). These equations generally have several distinct solutions. For each one of these solutions we have an eigenstate in the form of Eq. (20) and the corresponding energy is given by The quantum invariants of the Hamiltonian in Eq. (19) are the rational Gaudin magnet Hamiltonians mentioned in the introduction [10,11]. They are given by 2 where The rational Gaudin magnet Hamiltonians mutually commute with one another and with the Hamiltonian H R , i.e., for every j, j ′ = 1, 2, . . . , n. The Hamiltonian itself is not an independent invariant and it can be written in terms of the operatorsR j aŝ Similarly, the total pair number operator can also be written in terms of the operatorsR j aŝ As a result of Eq. (26), the eigenstates of the Hamiltonian are at the same time simultaneous eigenstates of rational Gaudin magnet operators as well. Let us denote the eigenvalues of the invariantR j corresponding to the eigenstate with N pairs by E (N ) j . In other words, for the empty shell and for the eigenstates containing N pairs described by Eqs. (20)(21)(22). The eigenvalues E and respectively. The pairing Hamiltonian in Eq. (19) is only one of the exactly solvable models which can be built using the rational Gaudin magnet Hamiltonians. Various other linear or nonlinear combinations of rational Gaudin magnet operators can be used to built other useful exactly solvable models (see, for example, Refs. [13,14,16]). IV. INTEGRABILITY IN THE DEGENERATE CASE In the case of several orbits having the same energy but different occupation probability amplitudes, i.e., when the conditions in Eq. (17) are satisfied, the first term in the pairing Hamiltonian given in Eq. (12) becomes a constant which is proportional to the total number of pairs in the shell. Discarding this term, one can write the Hamiltonian asĤ Exact eigenvalues and eigenstates of the Hamiltonian given in Eq. (33) were obtained in Refs. [25,26,27]. 
The purpose of this paper is to introduce the corresponding quantum invariants, i.e., the set of operators which commute with one another and with the Hamiltonian in Eq. (33). The fact that the rational Gaudin magnet operators given in Eqs. (24) mutually commute with one another is independent of the values of the parameters ε j . Naturally, one can try to replace the parameters ε j in the Gaudin operators with some arbitrary functions of c j and try to determine the form of these functions so 7 that the new operators commute with the Hamiltonian in Eq. (33) as well. It turns out, however, that such as course of action does not yield the quantum invariants of the Hamiltonian given in Eq. (33) 3 . In order to find the invariant operators one can consider general number conserving operators in the form where A j , B j D jj ′ and F jj ′ are some arbitrary coefficients. The condition that the above operators commute with one another and with the Hamiltonian in Eq. (33) gives us the allowed values of these coefficients. A straightforward calculation shows that the desired operators are given bŷ These operators mutually commute with one another P j ,P j ′ = 0 (36) for every j and j ′ . They also commute with the Hamiltonian given in Eq. (33) and with the total pair number operator: The HamiltonianĤ D and the total number operatorN are not independent invariants but they are related to the operatorsP j by the formulas As a result of Eqs. (36) and (37), the invariantsP j have the same eigenstates as the pairing Hamiltonian H D given in Eq. (33). These eigenstates were given in Refs. [25,26,27] with the help of the Bethe ansatz method [29]. In what follows, the corresponding eigenvalues of the invariant operatorsP j will be presented. A summary of the results of this Section can be found in Table I. Following Refs. [25,26,27], let us introduce the pair creation and annihilation operators Here, x is a complex variable and S ± j are the quasispin operators introduced in Eq. (2). The Hamiltonian in Eq. (33) itself can be written in terms of these operators aŝ 8 The eigenstates of the HamiltonianĤ D which are also simultaneous eigenstates of the invariantsP j can be written in terms of the pair creation and annihilation operators in Eq. (40). Below, these eigenstates which were obtained in Refs. [25,26,27] will be reviewed in the order of increasing number of pairs and the corresponding eigenvalues of the invariant operatorsP j will be given. Empty shell: The empty shell |0 given in Eq. (10) obeyŝ where E (0) j is given by Eigenstates with N = 1: The eigenstates with one pair of particles fall in two classes. The statê is an eigenstate whereŜ + (0) is obtained by putting x = 0 in the operator given in Eq. (40). This state was first suggested by Talmi in Ref. [30] and was shown to be an eigenstate of a class of Hamiltonians including the Hamiltonian in Eq. (33). In addition to the state in Eq. (44), the stateŜ is also an eigenstate if x is a solution of the Bethe ansatz equation The eigenvalues of the operatorsP j corresponding to the eigenstates described above will be denoted by λ (1) j and µ (1) j , respectively, i.e.,P jŜ + (0)|0 = λ (1) jŜ The eigenvalues λ (1) j and µ (1) j can easily be computed using the commutators given in Eq. (4) together with Eqs. (40) and (46) as follows (see the Appendix) Note that the eigenstate in Eq. (44) is unique whereas the eigenstate in Eq. (45) represents several eigenstates. 
Because in general the Bethe ansatz equation (46) has more than one solutions and for each one of them we have an eigenstate in the form of Eq. (45). As a result, Eq. (48) also represents several eigenvalue-eigenstate equations. Eigenstates for 2 ≤ N ≤ N max /2: The results given above can be generalized to the states corresponding to a shell which is at most half full. Let N max = j Ω j denote the maximum number of pairs which can occupy the shell in consideration. Then for 2 ≤ N ≤ N max /2 the results obtained for one pair generalizes as follows: The stateŜ which has N pairs of particles is an eigenstate if the parameters z k are all different from one another and obey the following system of Bethe ansatz equations for every m = 1, 2, . . . N − 1. In addition, the statê which also has N pairs of particles is an eigenstate if the parameters x k are all different from one another and satisfy the following system of Bethe ansatz equations: for every m = 1, 2, . . . , N . Note that the states given in Eqs. (51) and (53) Eigenstates for N max /2 < N : In order to write down the eigenstates and eigenvalues corresponding to a shell which is more than half full, we introduce the operator Empty shell: One pair of particles in the shell: N = 1 order to obtain the eigenvalues one should first solve a system of Bethe ansatz equations which are nonlinear and coupled to each other, solving them usually proves to be much more convenient then a direct numerical diagonalization method. Exact analytical methods for solving the Bethe ansatz equations also exist in some simplified cases (see, for example, Refs. [16,26,31]). The quantum invariants obtained in this paper are the counterparts of the well known rational Gaudin magnet Hamiltonians which play the same role in the reduced pairing case. It is worth mentioning that since the quantum invariants are mutually commuting operators, they can be used to build various other integrable models besides the ones considered in this paper. We also pointed out that the integrability of the pairing Hamiltonian in both the reduced pairing and the degenerate cases is connected with the rational Gaudin algebra. The generalizations of the rational Gaudin algebra to different underlying algebraic systems (such as higher order Lie algebras, quantum algebras or superalgebras) have been used to study the reduced pairing model and the related rational Gaudin magnet Hamiltonians in more general frameworks. The question then naturally arises whether or not one can do the same generalization for the degenerate pairing model and the related quantum invariants too. For example can the invariant operators of the degenerate pairing given in Eq. (35) be generalized to other algebraic systems and then used to study different integrable many body systems? The answer of this questions goes beyond the scope of this paper and will be considered elsewhere. APPENDIX A: OBTAINING THE EIGENSTATES WITH BETHE ANSATZ METHOD The simultaneous eigenstates and eigenvalues of the degenerate the operators given in Eq. (35) can be obtained using the method of algebraic Bethe ansatz. In this method, one first constructs a Bethe ansatz state [29] which includes some undetermined parameters and then substitutes this state into the eigenvalueeigenstate equationP j |ψ = E j ψ . The requirement that the Bethe ansatz state obeys the eigenvalueeigenstate equation yields a set of equations called the equations of Bethe ansatz, whose solutions determine the values of the parameters in the Bethe ansatz state. 
For example, in order to obtain the eigenstates with one pair of particles, one can start from a generic state in the form where S + (x) is defined in Eq. (40). Using the commutators given in Eq. (4), one can show that the action of the operatorP j on such a state is given bŷ Clearly, if we choose x in such a way that the second term on the right hand side of Eq. (A2) vanishes, i.e., if These results can be easily generalized to a state in the form which has N pairs of particles where N ≤ N max /2. The parameters x 1 , x 2 , . . . , x N are in general complex and they are all different from one another. In fact, by acting on the state given in Eq. (A5) with the operatorsP j one can easily show that if any two of the two parameters x 1 , x 2 , . . . , x N are the same, then the state in Eq. (A5) cannot be an eigenstate. If, on the other hand, the parameters x 1 , x 2 , . . . , x N are all different from one another, then by acting on the state given in Eq. (A5) with the operatorsP j given in Eq. (35), we find If, on the other hand, one of the parameters, say x 1 , is chosen to be zero (we cannot choose more than one x k to be zero since the parameters must be different from one another) then the Bethe ansatz equation (A7) is automatically satisfied for k = 1. In this case, the remaining parameters x 2 , . . . , x N are to be found by solving the N − 1 equations
Dietary Geraniol by Oral or Enema Administration Strongly Reduces Dysbiosis and Systemic Inflammation in Dextran Sulfate Sodium-Treated Mice (Trans)-3,7-Dimethyl-2,6-octadien-1-ol, commonly called geraniol (Ge-OH), is an acyclic monoterpene alcohol with well-known anti-inflammatory, antitumoral, and antimicrobial properties. It is widely used as a preservative in the food industry and as an antimicrobial agent in animal farming. The present study investigated the role of Ge-OH as an anti-inflammatory and anti-dysbiotic agent in the dextran sulfate sodium (DSS)-induced colitis mouse model. Ge-OH was orally administered to C57BL/6 mice at daily doses of 30 and 120 mg kg(−1) body weight, starting 6 days before DSS treatment and ending the day after DSS removal. Furthermore, Ge-OH 120 mg kg(−1) dose body weight was administered via enema during the acute phase of colitis to facilitate its on-site action. The results show that orally or enema-administered Ge-OH is a powerful antimicrobial agent able to prevent colitis-associated dysbiosis and decrease the inflammatory systemic profile of colitic mice. As a whole, Ge-OH strongly improved the clinical signs of colitis and significantly reduced cyclooxygenase-2 (COX-2) expression in colonocytes and in the gut wall. Ge-OH could be a powerful drug for the treatment of intestinal inflammation and dysbiosis. INTRODUCTION More than 90% of the 100 trillion cells in the human body are microbes, most of which reside in the digestive tract and are collectively known as the intestinal microbiota (Yaung et al., 2014). The bacterial flora is extremely dense and diverse and shapes fundamental physiological processes such as digestion and the development of gut-associated lymphoid tissues and systemic immunity. The intestinal microbiota plays a crucial role in maintaining colonic homeostasis, while microbial dysbiosis can contribute to a wide spectrum of disease (Kamada et al., 2013). Inflammatory bowel disease (IBD), which includes Crohn's disease (CD), and ulcerative colitis (UC), is a chronic inflammatory disorder of the intestinal tract associated with abdominal pain, intestinal bleeding, weight loss, and diarrhea (Koloski et al., 2008). The etiology of IBD is unknown but the one dominant hypothesis is that the inflammation results from altered or pathogenic microbiota in a genetically susceptible host. A growing body of literature implicates the abnormal overgrowth or dominance of particular bacterial species in the pathogenesis of IBD. Notably, mouse model studies of IBD have shown protection against the development of IBD in a germ-free environment, corroborating the role of gut flora in the pathogenesis of this spectrum of illnesses (Missaghi et al., 2014). As in humans, the two most abundant bacterial phyla in C57BL6/J mice are the Firmicutes (60-80% of sequences) and the Bacteroidetes (20-40%). Few bacteria are present in the mouse gut soon after birth. The neonate is inoculated with microorganisms by the mother and the environment and the microbiota is fully established when the mouse reaches adulthood at around 8 weeks, even if it is still susceptible to changes in composition (Laukens et al., 2015). In healthy adults, diet changes remain the major player in microbiota dynamics. Essential oil mixtures have been shown to play a significant role in the modulation of animal gut microbiota (Oviedo-Rondón et al., 2006) but their mechanism(s) of action remain incompletely understood (Thompson et al., 2013). 
Essential oils (EO) are volatile natural complex compounds characterized by a strong odor and synthesized by aromatic plants as secondary metabolites (Bakkali et al., 2008). They are highly complex natural mixtures which may contain up to 60 components at widely varying concentrations. In nature, EO play important roles in the protection of plants acting as antibacterial, antiviral, antifungal, and insecticidal agents (Bakkali et al., 2008;Fang et al., 2010). Recently, EO have been used in animal feed to treat infections, manipulate gut fermentation, and improve productivity (Wallace et al., 2010). Geraniol (Ge-OH) is a naturally acyclic monoterpene component of EO extracted from lemongrass, rose, and other aromatic plants. Several studies on the biological activities of Ge-OH have shown it to be a highly active antitumoral, antimicrobial compound, with antioxidant and anti-inflammatory properties (Ahmad et al., 2011;Thapa et al., 2012;Khan et al., 2013). Ge-OH's antimicrobial activities do not seem to have specific cellular targets. Like other EO, Ge-OH is a hydrophobic compound able to bind to the bacterial wall modifying its dynamic organization, with a consequent loss of ions and ATP depletion (Di Pasqua et al., 2006;Turina et al., 2006). In addition to bacterial growth inhibition, especially effective on Gram-positive bacteria (Thapa et al., 2012), Ge-OH also damages bacterial proteins, and lipids (Burt, 2004;Oussalah et al., 2007). Ge-OH effectively modulates the drug resistance of several Gram-negative bacterial species such as E. aerogenes, E. coli, and P. aeruginosa by restoring drug susceptibility in strains overexpressing efflux pumps (Solórzano-Santos and Miranda-Novales, 2012). It is important to emphasize that human pathogenic bacteria are more sensitive to Ge-OH than are commensal species even if the nature of this selectivity remains unsettled (Singh et al., 2012). Ge-OH has antioxidant activities in eukaryotic cells (Khan et al., 2013). By reducing oxidative stress, Ge-OH may prevent drug-induced mitochondrial dysfunction in hepatocytes (Singh et al., 2012). In vivo, it proved able to enhance neurodegeneration in a mice model of Parkinson's disease (Rekha et al., 2013). In vitro and in vivo, Ge-OH inhibits the expression of cyclooxygenase-2 (COX-2; Chaudhary et al., 2013), a key enzyme in inflammation (Strillacci et al., 2010). The anti-inflammatory properties of Ge-OH have been assessed on different animal models and in this context it has been shown that its molecular target is not only COX-2 but also NF-kB (Marcuzzi et al., 2011;Khan et al., 2013;Medicherla et al., 2015). Considering all its activities, Ge-OH seems to be an excellent candidate for the treatment of gut and systemic inflammations and for the control of gut dysbiosis. Medicherla et al. (2015) have already proved that Ge-OH effectively modulates experimental colitis, but the possibility of using this molecule as a therapeutic agent has yet to be demonstrated. Their study did not consider the chemical characteristics of Ge-OH that require specific formulations to be administered. They administered Ge-OH orally diluted in saline, forgetting that Ge-OH is insoluble in aqueous solutions in which it rapidly tends to separate from the water. Moreover, once separated from water, Ge-OH reaches high concentrations at which it could irritate the gut mucosa. Since, this substance rapidly crosses enterocyte monolayers (Heinlein et al., 2014), its site of action and its impact on the microbiota should also be evaluated. 
To determine whether Ge-OH could become a therapeutic option in humans, we administered Ge-OH in appropriate oral or enema formulations to dysbiotic mice and compared its effects with one of the standard therapies currently used to manage gut inflammation in IBD patients. Ge-OH Oral Formulation Ge-OH oral formulation was optimized for the administration route chosen and for a possible transition to use in humans as it has a strong smell and very unpleasant taste. In addition, Ge-OH is completely water insoluble and could irritate the mucosae if administered pure. The oral formulation was then optimized for a slow release of Ge-OH using a patented soy lecithin incapsulation. Natural Ge-OH (analytical grade, >98% pure) and soy lecithin were purchased from Prodasynth (Grasse, France). All the other reagents were purchased from SIGMA-Aldrich (St Louis, MO, USA). The stable suspension was prepared by Cedax Srl (Forlì, Italy) by adding Ge-OH (ρ = 0.899 g/cm 3 ; 17% by weight) to a solution containing sucrose (16%), deionized water (22%) and soy lecithin (25%), and ethanol (20%) as preservative (patent PCT WO 201 1/128597). The suspension was stored at 4 • C and administered by oral gavage to Ge-OHtreated mice (4, 5, or 18 µl of Ge-OH suspension brought to the final volume of 100 µl with Ge-OH-free suspension). A Ge-OH-free suspension containing sucrose (16%), soya lecithin (25%), and ethanol (20%) was administered to the control group. Ge-OH Enema Formulation The Ge-OH formulation for enema administration was prepared using glycerin to increase the viscosity of the solution and thereby facilitate both intracolonic injection and colonic retention. Enema Ge-OH solution was prepared as follows: natural Ge-OH (4% v/v) was added to a solution containing PBS and glycerol (30%v/v). An amount of solution corresponding to 120 mg kg (−1) (body weight, die) was freshly prepared and administered by enema during the acute phase of colitis. A control solution mixed as previously described but without Ge-OH was also prepared and administered to the enema control group. Enema treatments were administered via a 16G venous catheter (diameter 2 mm, length 48 mm; BD Bioscience, Buccinasco, Italy) advanced through the rectum into the colon until the tip was 10 mm proximal to the anus. A venous catheter was applied to a 1 ml syringe and the suspension was gently injected into the rectum. Animals were sedated using tiletamine 10 mg kg (−1) plus xylazine 2.5 mg kg (−1) during enema administration. Hydrocortisone Enema Treatment Enema administrations were prepared as follows: hydrocortisone (0.08% w/v, Sigma) was added to a solution containing PBS and glycerol (30%v/v). An amount of solution corresponding to 2.5 mg kg (−1) (body weight, die) was freshly prepared and administered by enema during the acute phase of colitis. Enema treatments were administered via a 16G venous catheter as previously described. Animal Treatment Sixty-four eight-week-old male C57BL/6 mice were purchased from Charles River Laboratories (Lecco, Italy). Animals were housed in collective cages with a controlled environment containing two mice each, at 22 ± 2 • C and 50% humidity, under a 12-h light/dark cycle. Mice were allowed to acclimate to these conditions for at least 7 days before inclusion in experiments and had free access to food and water throughout the study. The last group, VIII, called DSS+hydrocortisone enema, received four enema administrations of glycerol-PBShydrocortisone on days 19, 21, 23, and 25. 
This group was used as a model to understand how colitis is clinically modulated by a powerful drug. Hydrocortisone was administered by enema at doses of 2.5 mg kg (−1) body weight. The experimental design is schematized in Figure 1. The experiments were carried out in accordance European and Italian guidelines. They were approved by the Institutional Ethical Review Board of the University of Bologna and by the Italian Ministry for Research and were repeated twice. Disease Activity Index (DAI) DAI was calculated by the combined score of weight loss, stool consistency and bleeding, as detailed in Table 1. All parameters were scored from day 1 to day 37. Histological Evaluation of Colitis Mice (n = 2 for each experimental group) were anesthetized using Zoletil-100 [10 mg kg (−1) ; Virbac, Carros, France], and FIGURE 1 | Experimental design of the study. Animal treatment and the collection of feces, blood, and tissue are indicated (dark blue) in the grid. Frontiers in Pharmacology | www.frontiersin.org Xilor [2.5 mg kg (−1) ; Bio98, Milan, Italy] by intramuscular injection and sacrificed by cervical dislocation on day 25 (2 days after the end of DSS treatment, when the maximum DAI score was reached), and day 37, at the end of weight recovery. The colon was excised, rinsed with saline solution, fixed in 4% formalin and embedded in paraffin. Four micrometer sections were stained with hematoxylin-eosin and observed for histological assessment of epithelial damage by a pathologist in a blinded manner. Determination of Plasma Cytokine Levels Blood samples (200 µl) were taken from the tail vein on days 25 and 37 and collected in Eppendorf tubes. Blood was centrifuged at 1000 rpm for 10 min, and plasma was collected and stored at −80 • C until BioPlex analysis. Cytokine levels were determined using a multiplexed mouse bead immunoassay kit (Bio-Rad, CA, USA). The six-plex assays (IL-1β, IL-6, IL-10, IL-17A, IFNγ, TNFα) were performed in 96-well plates following the manufacturer's instructions. Microsphere magnetic beads coated with monoclonal antibodies against the different target analytes were added to the wells. After 30 min incubation, the wells were washed and biotinylated secondary antibodies were added. After incubation for 30 min, beads were washed and then incubated for 10 min with streptavidin-PE conjugated to the fluorescent protein, phycoerythrin (streptavidin/phycoerythrin). After washing, the beads (a minimum of 100 per analyte) were analyzed in the BioPlex 200 instrument (BioRad). Sample concentrations were estimated from the standard curve using a fifth-order polynomial equation and expressed as pg/ml after adjusting for the dilution factor (Bio-Plex Manager software 5.0). The sensitivities of the assay were 3.14 pg/ml (IL-1β), 1.34 pg/ml (IL-6), 1.38 pg/ml (IL-10), 2.38 pg/ml (IL-17), 1.38 pg/ml (IFNγ), and 2.73 pg/ml (TNFα). Samples below the detection limit of the assay were recorded as zero. The intra-assay CV was <14%. Characterization of the Intestinal Microbiota by HTF-Microbi.Array The intestinal mice microbiota was characterized using the fully validated diphylogenetic DNA microarray platform HTF-Microbi.Array. Targeting 33 phylogenetically related groups, this LDR-based universal array covers up to 95% of the mammalian gut microbiota. Gut microbiota analysis was performed on days 18, 25, 29, and 38. 
Total DNA from fecal material was extracted using the QIAamp DNA Stool Mini Kit (Qiagen) according to the modified protocol previously reported (Candela et al., 2010, 2012). The final DNA concentration was determined using a NanoDrop ND-1000 (NanoDrop Technologies). A nearly full-length portion of the 16S rDNA gene was amplified using the universal forward primer 27F and reverse primer 1492R, according to the protocol previously described (Castiglioni et al., 2004). PCR amplifications were performed in a Biometra Thermal Cycler T Gradient (Biometra, Göttingen, Germany). PCR products were purified using the High Pure PCR Cleanup Micro Kit (Roche, Mannheim, Germany), eluted in 30 µl of sterile water, and quantified with the NanoDrop ND-1000. Slide chemical treatment, array production, the LDR protocol, and hybridization conditions were as previously reported (Candela et al., 2012). Briefly, LDR reactions were carried out in a final volume of 20 µl containing 500 fmol of each LDR-UA HTF-Microbi.Array probe, 50 fmol of PCR product, and 25 fmol of the synthetic template (5′-AGCCGCGAACACCACGATCGACCGGCGCGCGCAGCTGCAGCTTGCTCATG-3′). LDR products were hybridized on universal arrays, setting the probe annealing temperature at 60 °C. All arrays were scanned and processed according to the protocol and parameters already described. Fluorescence intensities were normalized on the basis of the synthetic ligation control signal. The relative abundance of each bacterial group was obtained by calculating the relative fluorescence contribution of the corresponding HTF-Microbi.Array probe as a percentage of the total fluorescence.

RNA Extraction and Real-Time PCR

Colon specimens were collected immediately after sacrifice, and total RNA was extracted using TRIzol reagent (Life Technologies, CA, USA) according to the manufacturer's instructions. Extracted RNA samples were treated with DNase I to remove any genomic DNA contamination using the DNA-free kit (Ambion, USA) and reverse-transcribed using the RevertAid First Strand cDNA Synthesis Kit (Fermentas, Canada). COX-2 and β-actin mRNAs were reverse-transcribed using random hexamer primers (Fermentas, Canada). COX-2 and β-actin mRNA levels were analyzed by real-time PCR using SYBR Select Master Mix (Life Technologies, CA, USA) and the StepOnePlus system (Applied Biosystems, CA, USA) according to the manufacturers' instructions. Melting curve data were collected to check PCR specificity. Each cDNA sample was analyzed in triplicate. COX-2 mRNA levels were normalized against β-actin mRNA, and relative expression was calculated using the 2^(−ΔΔCt) formula. COX-2 primer pair: 5′-TTC TCT ACA ACA ACT CCA TCC TC-3′ and 5′-GCA GCC ATT TCC TTC TCT CC-3′ (247 bp product); β-actin primer pair: 5′-ACC AAC TGG GAC GAC ATG GAG-3′ and 5′-GTG GTG GTG AAG CTG TAG CC-3′ (380 bp product).

Data Analysis

Statistical analysis was carried out using GraphPad Prism 6 (GraphPad Software Inc., San Diego, CA, USA). Data are expressed as mean ± SEM of at least three independent determinations. Student's t-test and one-way analysis of variance (ANOVA) followed by Bonferroni's post-hoc test for multiple comparisons were used to assess the statistical significance of differences. Differences were considered statistically significant at P < 0.05. Euclidean distances of the HTF-Microbi.Array relative abundance profiles were used to perform PCoA, and the analysis was carried out using the R packages Made4, Vegan, and Stats (www.cran.org).
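The relative expression formula referenced above (and again in the Figure 6 legend) is, presumably, the standard comparative-Ct (Livak) method; under that assumption it expands, for this experiment, to:

```latex
\Delta C_t = C_t^{\text{COX-2}} - C_t^{\beta\text{-actin}}, \qquad
\Delta\Delta C_t = \Delta C_t^{\text{treated}} - \Delta C_t^{\text{control}}, \qquad
\text{relative expression} = 2^{-\Delta\Delta C_t}
```

With this convention, a ΔΔCt of −1 corresponds to a two-fold increase in COX-2 mRNA relative to controls, and the 1.8-fold increase reported for DSS-treated mice at day 25 corresponds to a ΔΔCt of roughly −0.85.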
Clinical Colitis Activity

The effect of the DSS and DSS-Ge-OH treatments was evaluated using the DAI, calculated as the sum of the weight loss, stool consistency, and bleeding scores (Table 1). All DSS-treated mice started to show mild clinical signs of disease 2 days before the end of the 1.5% DSS treatment (day 21), owing to a simultaneous increase in the stool consistency and bleeding indices (maximum DAI score = 2.3). The most evident clinical signs in each group were recorded between days 25 and 27 (Figure 2), with a maximum DAI score of 9.1 for the DSS group and severe weight loss that peaked between days 25 and 28 (Figure 2A). Ge-OH at 30 mg/kg reduced the DAI score of colitis during the acute phase but did not affect this index during the recovery phase (Figure 2B). At this Ge-OH dose, the DAI score maintained the same trend observed in DSS-treated mice. At the higher oral dose, Ge-OH reduced the DAI score for almost the entire duration of colitis and especially during the recovery phase. Statistical analysis of the data in Figure 2B is provided in Supplementary Table 1. These positive Ge-OH effects were further enhanced when colitic mice were treated with enema-administered Ge-OH, resulting in very little weight loss and a strongly reduced DAI score for the whole duration of colitis.

Inflammatory Cytokine Profile of Colitis

Plasma levels of IL-1β, IL-6, IL-10, IL-17, TNFα, and IFNγ were measured in blood samples from all experimental mouse groups at two time points, one corresponding to the acute phase of colitis (day 25) and one at the end of the recovery phase (day 37). DSS treatment significantly increased (P < 0.05) all the cytokines measured, both at day 25 and at day 37 (Figure 3). At day 25, oral administration of Ge-OH at the lower dose of 30 mg/kg did not modify the inflammatory profile of DSS-treated mice. Oral administration of the higher Ge-OH dose of 120 mg/kg and enema administration of Ge-OH at 120 mg/kg significantly decreased IL-10, IL-17, TNFα, and IFNγ (P < 0.05), but not IL-1β or IL-6. At day 37, when colitis tended to become chronic, Ge-OH-treated mice showed a better inflammatory profile than DSS-treated mice. In particular, the lower dose of oral Ge-OH significantly reduced all the measured cytokines (P < 0.05). The higher oral dose and enema administration of Ge-OH significantly decreased IL-1β, IL-17, IFNγ, and TNFα (P < 0.05), but not IL-6 or IL-10.

Histological Evaluation of Colitis

Histological evaluation of the colon was performed from the colocecal junction to the anus. Overall, tissue damage tended to be limited to the terminal colon and rectum and could be classified as moderate colitis (Figure 4). At day 25 (Figures 4A-C), the colon mucosa of DSS-treated mice showed a diffuse loss of goblet cells, focal crypt abscesses, diffuse hyperemia, moderate cellular infiltration in the mucosa, and focal epithelial erosions. Diffuse hyperemia, mild loss of goblet cells, and mild cellular infiltration, but no crypt abscesses or epithelial erosions, were also present in the mucosa of oral Ge-OH-treated mice at both doses administered [see Supplementary Figure 1 for Ge-OH 30 mg/kg]. The colon of the Ge-OH enema-treated mice was characterized by less mucosal distortion (elongation), moderate loss of epithelium, and low leukocyte infiltration.
After weight recovery (day 37), the colon mucosa of DSS-treated mice showed a diffuse loss of goblet cells, focal crypt abscesses, diffuse hyperemia, and mild cellular infiltration (Figure 4D), while the mucosa of oral Ge-OH-treated mice presented diffuse hyperemia but a milder loss of goblet cells, milder cellular infiltration, and no crypt abscesses at either dose administered (Figures 4E,F). The colon mucosa of the enema Ge-OH-treated mice showed a normal architecture similar to that of healthy controls. In conclusion, histological and clinical improvements were evident in the Ge-OH-treated mice and particularly in the enema-treated animals.

Ge-OH-Induced Microbiota Modifications

Since the Ge-OH-free suspension itself did not induce microbiota alterations, we investigated the impact of Ge-OH treatment on DSS-induced microbiota dysbiosis in mice. Mouse stools were collected on days 18, 25, 29, and 37. Figure 5 shows the phylogenetic structure of the intestinal microbiota characterized using the HTF-Microbi.Array universal platform. DSS treatment prompted profound, progressive, and transient changes in the composition of the mouse microbiota compared with colitis-negative controls (group I), defining a peculiar microbiota trajectory during the induced colitis. In particular, on day 18, after 1 day of DSS treatment, the overall microbiota structure of DSS mice still resembled that of control mice. At day 25, after seven days of DSS, we observed a global temporary restructuring of the intestinal microbiota composition. At day 29, a transitory reduction of Bacteroidetes associated with an increase in Firmicutes was recorded. However, by day 37, DSS-treated mice had recovered a microbiota structure similar to that of healthy controls. While oral Ge-OH treatment at 30 mg/kg exerted only a mild impact on the temporal dynamics of DSS-induced microbiota dysbiosis, oral and enema treatment at a dose of 120 mg/kg resulted in considerable protection against the transient DSS-dependent reduction of Bacteroidetes, favoring a faster recovery of a community profile similar to that of healthy controls. In particular, on day 25, Ge-OH at 120 mg/kg (both enema- and orally administered) triggered an increase in Lactobacillaceae, which reached relative abundances of 11.2% and 9.7%, respectively, notably higher than the corresponding value in control mice. This Ge-OH-dependent high relative abundance of Lactobacillaceae was maintained until day 29, after which Ge-OH-treated mice permanently recovered from the DSS-induced reduction of Bacteroidetes 8 days earlier than the corresponding DSS-treated mice. These effects are certainly related to the antibacterial action of Ge-OH, evidenced by its low minimal inhibitory concentration (MIC) against model bacterial species (see Supplementary Table 2).

Figure 2 | Weight change percentage (A) and disease activity index (DAI) score of colitis (B) in the different experimental mouse groups. The maximum DAI score was reached between days 25 and 27. The maximum weight loss (22%) was recorded between days 22 and 27. Weight recovery ended at day 37. Data are expressed as mean ± SD. One-way ANOVA was performed (for weight changes, only at days 26, 29, and 32) to assess the statistical significance of the differences. *P < 0.05 compared with DSS group mean values. Statistical significance of the DAI score differences (one-way ANOVA) is reported in Supplementary Table 1.

In contrast to what was observed in DSS-treated mice, Ge-OH treatment of healthy mice,
even at the dose of 120 mg/kg administered orally, did not produce the same marked changes in the microbiota. Indeed, the microbiota composition of healthy mice treated with Ge-OH 120 mg/kg showed only a slight increase in the Lactobacillaceae, Bacillaceae, and Bacteroidetes groups (see Supplementary Figure 2).

Figure 3 | Plasma cytokine variations during experimental colitis, measured at days 25 and 37. Cytokines were determined using a six-plex mouse bead immunoassay kit. Levels of IL-1β (A), IL-6 (B), IL-10 (C), IL-17A (D), IFN-γ (E), and TNFα (F) are shown. Data are expressed as mean ± SEM of at least three replicates (n = 9). #P < 0.05 for the comparison between the Ge-OH 30 and Ge-OH 120 groups. *P < 0.05 compared with the DSS group.

Down-Regulation of COX-2 through Ge-OH Treatment

Since COX-2 plays a crucial role in the production of many lipid mediators involved in intestinal inflammation and is one of the major targets of IBD pharmacological therapy, we analyzed COX-2 mRNA expression in colon tissues during DSS-induced colitis (Figure 6). Our data support the previously reported finding that COX-2 mRNA significantly increases in the gut wall of DSS-treated mice (De Fazio et al., 2014). At day 25, we observed a significant increase (1.8-fold, P < 0.05) in COX-2 expression in the gut wall of DSS-treated mice. Ge-OH decreased COX-2 expression in DSS-treated mice, returning it to values comparable to those of the controls.

Inflammatory bowel disease (IBD) comprises a group of chronic inflammatory conditions affecting the gastrointestinal tract. The mucosal immune system of IBD patients has lost the ability to self-regulate and remains chronically activated. IBD is a well-established risk factor for colorectal cancer (CRC) development, with an increasing incidence linked to younger age at IBD diagnosis, longer IBD duration, and more severe intestinal inflammation. Conventional IBD therapies include COX-2 inhibitors (aminosalicylates and their derivatives), corticosteroids, immunomodulatory drugs, antibiotics, and biologic drugs such as the monoclonal antibody against tumor necrosis factor alpha (TNFα), a pivotal pro-inflammatory cytokine able to start and maintain the inflammatory process in the gut. Besides antibiotics, probiotics have also been used in the treatment of ulcerative colitis to counteract dysbiosis (Bibiloni et al., 2005). Since IBD usually relapses, all these therapies require long-term administration. Ge-OH is a non-toxic compound, classified as Generally Recognized As Safe (GRAS) by the US Food and Drug Administration. The European Food Safety Authority (EFSA) hazard assessment for Ge-OH established a Derived No Effect Level (DNEL) of 13.5 mg/kg for humans (general population, hazard via the oral route), corresponding to 100-120 mg/kg in mice. Ge-OH is currently receiving substantial attention for its antitumorigenic, anti-inflammatory, and antimicrobial effects, which have been clearly demonstrated in vitro. Nevertheless, its role as an anti-dysbiotic agent in colon inflammation had never been investigated. Our study adopted a mouse model of DSS-induced moderate to severe colitis to evaluate the antimicrobial and anti-inflammatory therapeutic activity of Ge-OH at doses considered safe. Ge-OH, orally administered at 30 and 120 mg/kg, halved the weight loss of the mice and reduced the disease activity index (DAI) of colitis.
At the histological level, Ge-OH was able to preserve crypt architecture and decrease leukocyte infiltration, with a much more evident effect at the higher dose (whether administered orally or by enema). Moreover, enema-administered Ge-OH strongly improved the signs of colitis, maintaining a lower DAI and preserving colon mucosal integrity. These clinical observations are further supported by a significant reduction of COX-2 mRNA expression in the colonic mucosa of Ge-OH-treated mice. Circulating cytokine levels are indicative of the overall inflammatory status of the animals, with IL-1, IL-6, IL-17, and TNFα playing a key role in the pathogenesis of IBD (Muzes et al., 2012). TNFα is a master cytokine in IBD pathogenesis, and its orchestrating role in colonic inflammation is confirmed by the efficacy of anti-TNFα therapy in IBD patients (Chaparro et al., 2012). The circulating TNFα level correlates with clinical activity in both ulcerative colitis and Crohn's disease (Bibiloni et al., 2005) and increases in the acute phases of DSS colitis (Alex et al., 2009). So, while circulating TNFα and IL-17 levels seem to correlate with the clinical course of DSS colitis, IL-1β and IL-10 mainly correlate with the histological damage that tends to become chronic (Alex et al., 2009; De Fazio et al., 2014). The higher oral dose of Ge-OH significantly reduced circulating TNFα and IL-17 in Ge-OH-treated mice after weight recovery at the end of the experiments. This decrease was equally evident after Ge-OH enema administration. These results are in agreement with those obtained by Medicherla et al. (2015), who found significantly reduced expression of the major pro-inflammatory cytokines in colon specimens (TNF-α, IL-1β, and IL-6), associated with reduced total and nuclear amounts of NF-κB (p65), after oral administration of Ge-OH [50 and 100 mg/kg]. They also identified an antioxidant activity of Ge-OH at the colon level, evaluated as a decrease in a lipid peroxidation marker.

Figure 5 | Temporal dynamics at the family level of the fecal microbial community of dextran sulfate sodium (DSS)-treated mice. The microbiota composition of healthy mice (CTRL), colitic mice (DSS), colitic geraniol orally treated mice [Ge-OH 30 mg/kg, 120 mg/kg], and colitic geraniol enema-treated mice [120 mg/kg] is shown. Other Bacteroidetes and Firmicutes families that are not listed separately have been combined into a single group. The microbiota composition of the mouse groups treated with Ge-OH-free oral suspension or Ge-OH-free enema suspension showed no differences from that of the healthy mouse group.

DSS treatment compromises gut microbiota homeostasis, resulting in a dysbiosis characterized by a transient reduction of dominant mutualistic microbiota components such as Bacteroidetes, confirming previous findings (Nagalingam et al., 2011). Ge-OH oral and enema treatment at 120 mg/kg protects DSS-treated mice against this transient reduction of Bacteroidetes, boosting a faster recovery of a healthy microbiota profile. Interestingly, 120 mg/kg geraniol-treated mice presented a transient increase in the relative abundance of Lactobacillaceae from day 25 to day 29. This raises the question of whether the transient Ge-OH-dependent increase in Lactobacillaceae, heralding the recovery of a healthy profile, is somehow involved in promoting a faster recovery from DSS-associated dysbiosis.

Figure 6 | Geraniol (Ge-OH) modulates cyclooxygenase-2 (COX-2) expression in vivo in colon specimens during the acute phase (day 25).
COX-2 mRNA expression was evaluated by real-time PCR. COX-2 mRNA levels were normalized against β-actin mRNA, and relative expression was calculated using the 2^(−ΔΔCt) formula. The COX-2 overexpression induced by DSS was significantly reduced by Ge-OH treatment. Data are expressed as mean ± SEM of at least three replicates (n = 6). *P < 0.05 compared with the DSS group. #P < 0.05 compared with the CTRL group.

Dysbiosis always includes a decreased bacterial biodiversity (Honda and Littman, 2012). Ge-OH 120 mg/kg treatment was able to increase bacterial biodiversity in DSS-treated mice starting from the 10th day of Ge-OH administration. It is likely that the decreased inflammation we observed in Ge-OH oral- and enema-treated mice during colitis recovery is also due to the healthy microbiota status found in these mice. The central finding of this study in a colitis model is the multi-target effect of Ge-OH treatment, which simultaneously targeted dysbiosis, local and systemic inflammation, and mucosal damage. The decreased activity of COX-2 in colon specimens is a clear demonstration of the anti-inflammatory effect of Ge-OH, which contributes to the decreased mucosal damage. This effect certainly involves the colonic mucosa, even if it is reasonable to assume that in vivo Ge-OH may also target COX-2 expression in immune system cells within the colon wall (Su et al., 2010).

CONCLUSIONS

IBD therapy is based on the use of anti-inflammatory molecules and immunomodulatory agents that act by strongly and nonspecifically inhibiting the inflammatory response, but their long-term use might trigger the onset of severe side effects. The effects of Ge-OH could be of great importance in the treatment of human IBD. Since geraniol's antimicrobial effect does not seem to induce bacterial resistance, a phenomenon commonly observed with conventional antibiotic drugs, it would be very interesting to ascertain whether Ge-OH is able to control dysbiosis and inflammatory status in human IBD patients. In addition, Ge-OH's anti-tumor activities could help reduce the risk of CRC in IBD patients. Thus, this investigation represents a preclinical assessment prior to developing further studies on the effects of oral and enema Ge-OH administration in patients with gut inflammation and/or dysbiosis. Since Ge-OH has its peak therapeutic effect on colitis when administered directly into the colon, it is of great importance to find oral delivery systems able to inhibit intestinal Ge-OH absorption after oral administration. Conversely, Ge-OH without any delivery system may be orally administered to obtain a systemic anti-inflammatory effect or to target other organs, such as the brain.
Understanding the Psychological, Physiological, and Genetic Factors Affecting Precision Pain Medicine: A Narrative Review

Purpose

Precision pain medicine focuses on employing methods to assess each patient individually, identify their risk profile for disproportionate pain and/or the development of chronic pain, and optimize therapeutic strategies to target specific pathological processes underlying chronic pain. This review aims to provide a concise summary of the current body of knowledge regarding psychological, physiological, and genetic determinants of chronic pain related to precision pain medicine.

Methods

Following the Scale for the Assessment of Narrative Review Articles (SANRA) criteria, we employed PubMed/Medline to identify relevant articles using primary database search terms to query articles such as: precision medicine, non-modifiable factors, pain, anesthesiology, quantitative sensory testing, genetics, pain medicine, and psychological.

Results

Precision pain medicine provides an opportunity to identify populations at risk, develop personalized treatment strategies, and reduce side effects and cost through elimination of ineffective treatment strategies. As in other complex chronic health conditions, there are two broad categories that contribute to chronic pain risk: modifiable and non-modifiable patient factors. This review focuses on three primary determinants of health, representing both modifiable and non-modifiable factors, that may contribute to a patient's profile for risk of developing pain and most effective management strategies: psychological, physiological, and genetic factors.

Conclusion

Consideration of these three domains is already being integrated into patient care in other specialties, but by understanding the role they play in development and maintenance of chronic pain, we can begin to implement both precision and personalized treatment regimens.

Introduction

The overarching definition of precision pain medicine is that diagnosis and treatment can be customized to an individual's specific risk profile. 1,2 At its most basic, the ideology is based on using all available patient-level data to target therapies for that individual with regard to prediction, prevention, diagnosis, and treatment of disease, with the aim of improving symptoms and quality of life. By incorporating a given patient's individual profile of biological (molecular disease pathway(s), genetic, proteomic, metabolomic), psychological, and environmental context variables along with comorbid conditions, personal/cultural preferences, and other characterizing data, we should be able to improve efficacy and lessen treatment side effects while decreasing resource waste and improving cost effectiveness. 1 In 2015, the United States launched the Precision Medicine Initiative which committed to fund and support research in the area of precision medicine in order to improve patient care and treatment outcomes. 3 There is considerable overlap, yet subtle differences, between the terms "precision medicine" and "personalized medicine" and historically these terms have been used interchangeably. Personalized medicine is an older term, defined as individualized care that is customized for individual patients based on their characteristics, which may include genetics, disease biomarkers, treatment history, and other factors, but typically is based on a specific patient's symptoms. 4
The goal of precision medicine is to maximize the accuracy by which patients are treated with existing treatment regimens and is informed through an understanding of the interrelation of an individual's profile of characteristics, including genetics, environment, and lifestyle, with specific inclusion of phenotypes and biological markers. These two approaches both explicitly depend on evidence-based medicine by incorporating problem-solving, application of research findings, clinical expertise, and patient preferences, values, and perspectives into the healthcare decision-making process. However, in line with a report from the National Research Council in 2011, 5 we prefer to use the term precision pain medicine as it incorporates stratifying individuals into subgroups using a broader spectrum of patient characteristics (psychological, physiological, and genetic/molecular). This then allows clinicians to recommend treatments with the greatest probability of effectiveness based on these stratifications. Eventually, this approach may lead to the development of new therapeutic strategies that can be tailored to the biological mechanisms at work in a specific patient, thus crossing over into personalized pain management; however, there is still more work to be done. We will focus the present review of precision pain medicine on psychological, physiological, and genetic factors, each of which represents multiple contributing subcategories. The relative contribution of each of these determinants will vary between patients, but it is their cumulative impact that increases or decreases a patient's risk for developing chronic pain and shapes the response to standard treatment strategies. The goal of this review is to provide a concise summary of the current body of knowledge regarding the three determinants of chronic pain related to precision pain medicine. By understanding all three domains and the role they play in the development and maintenance of chronic pain, we can begin to develop and implement both precise and personalized treatment regimens.

Methodology

The Scale for the Assessment of Narrative Review Articles (SANRA) criteria guided this review. 6 We employed PubMed/Medline to identify relevant articles using the primary database search terms (used in combinations as illustrated in Table 1 to query PubMed-indexed articles): precision medicine, non-modifiable factors, pain, anesthesiology, quantitative sensory testing, genetics, pain medicine, psychological, pharmacogenetics/pharmacogenomics, biomarker, and next-generation sequencing.
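The hit counts reported in Table 1 below can, in principle, be reproduced programmatically; the review does not state what tooling was used, so the following is only a sketch assuming Biopython's Entrez interface to PubMed (the e-mail address and example queries are placeholders, and counts will drift as the database grows):

```python
# Sketch: retrieve PubMed record counts for search-term combinations.
# Assumes the Biopython package (Bio.Entrez); not necessarily how the authors worked.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address (placeholder)

def pubmed_count(query: str) -> int:
    """Return the number of PubMed records matching a query string."""
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for query in ("precision medicine AND pain",
              "quantitative sensory testing AND psychological"):
    print(f"{query}: {pubmed_count(query)} records")
```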
Table 1 | Search term combinations and the number of PubMed-indexed articles retrieved

Precision medicine and genetics: 23,097
Precision medicine and genomics: 14,648
Pain medicine and genetics: 58,807
Pain medicine and genomics: 12,128
Precision medicine and pain: 1,557
Anesthesiology and precision medicine: 1,014
Anesthesiology and psychological: 3,343
Pain medicine and psychological: 17,849
Quantitative sensory testing and psychological: 297
Precision medicine and non-modifiable factors: 7
Non-modifiable factors and pain: 50
Precision medicine and non-modifiable factors and pain: 0
Precision pain medicine and quantitative sensory testing: 18
Precision medicine and pain and biomarker: 145
Anesthesiology and pain and biomarker: 1,185
Pain medicine and pharmacogenetics: 1,289

After reviewing the literature, we narrowed our focus to two broad categories of health-determining factors that should be considered in any patient profile of risk: those that are modifiable (eg, those that can be changed; psychological function, physiological/sensory function, lifestyle factors) and those that are non-modifiable (eg, those that cannot be changed; age, sex, race, genetics) (Figure 1). Of these, we have identified three exemplar determinants of health representing both modifiable and non-modifiable factors based on their potential contribution to patients' risk profiles for developing chronic pain and/or their response to pain management strategies. While these examples are not intended to be exhaustive of all variables relevant to a patient profile of risk, a growing body of evidence supports their novel relevance to the practice of precision pain medicine.

Figure 1 | The evolution of precision pain medicine depends on identification of the risk factors and modulating variables that contribute to acute pain burden and the risk for transition to chronic pain. We highlight the contributions of two broad categories of factors, modifiable and non-modifiable, that contribute to risk for transition from acute to chronic pain. The combination of factors may provide insight into a patient's individual profile of risk for transitioning to chronic pain and point to novel pain therapeutic strategies designed to target individual mechanisms of risk. Anesthesia/analgesia can control acute pain (a primary risk factor for the development of chronic pain) and may also be used to treat chronic pain; however, efficacy can be affected by genetic factors and these should be integrated into any precision pain medicine approach. Created with BioRender.com.

Psychological Factors

Psychological factors of pain encompass an expansive category of factors including mood, maladaptive pain coping styles such as pain catastrophizing, poor self-efficacy, kinesiophobia, injustice, and sleep-related impairments. These psychological constructs can be assessed using comprehensive pain phenotyping, which allows the categorization of patients based on a set of characteristics (both subjective and objective) in order to predict risk for developing chronic pain and treatment response. Phenotyping is typically performed by assessing these psychological factors and maladaptive coping styles using a variety of validated patient-reported questionnaires. It is well known that anxiety and depression are two of the strongest predictors of the transition from acute to chronic pain. 7 There is also evidence that high levels of anxiety and stress can reduce a patient's analgesic response to opioids. 7
This has been well studied in rheumatological diseases, where it was found that pain catastrophizing, along with depression, played a larger role in a patient's subjective pain score than did objective radiographic evidence of disease. 7 Phenotyping can be useful for risk stratification of poor perioperative outcomes. Patients with higher pre-surgical anxiety scores have been found to have worse analgesic outcomes following total knee or total hip replacements. 8 One of the well-validated questionnaires recommended to determine the presence of depression and anxiety is the Hospital Anxiety and Depression Scale (HADS). 9 There is evidence that patients with higher HADS scores not only show a poorer response to opioids but also have a greater incidence of opioid misuse. 10,11 Besides mood problems, having a positive or negative affect can influence chronic pain experiences and outcomes. Positive affect refers to a feeling state where pleasant moods and emotions promote positive approach-oriented behaviors and impart a sense of relaxation and contentment. Multiple studies have shown correlations between the presence of, or interventions to promote, a positive affect and improved outcomes for patients with chronic pain. [12][13][14] Multiple well-validated questionnaires are used to assess positive and negative affect, including the Patient-Reported Outcomes Measurement Information System (PROMIS) pain interference questionnaire and the Positive and Negative Affect Schedule. 15 Multiple studies have shown a correlation between negative affect and a poor analgesic response to epidural steroid injections in the treatment of low back pain, which highlights that assessing this construct may help predict outcomes for interventional pain procedures. 16,17 Pain catastrophizing is another phenotypic maladaptive coping trait that has shown correlation with the development of chronic pain and influences response to pain treatments. Pain catastrophizing involves magnification, rumination, and helplessness. There is a large body of evidence that the presence of pain catastrophizing, which can be assessed by the Pain Catastrophizing Scale, plays a pivotal role in musculoskeletal pain. 18 If present, it is a strong pre-surgical predictor of a poor outcome. 19 Catastrophizing has also been shown to limit the response to standard therapies like cortisone, acetaminophen, and tramadol. 20 A recent meta-analysis by Schutze et al indicated the three best treatment tools for the management of pain catastrophizing to be cognitive behavioral therapy, acceptance and commitment therapy, and physical therapy. 21 When examining the correlations between psychological factors and the development and/or maintenance of chronic pain, it is easy to understand why this knowledge would be extremely useful when managing patients. This understanding would make it possible to optimize patients' psychological conditions prior to undergoing surgery and develop an opioid-sparing multi-modal analgesic plan in the preoperative setting. Other maladaptive coping traits that have been associated with negative chronic pain outcomes include kinesiophobia (fear of movement), 22,23 poor self-efficacy, [24][25][26] and injustice. 27,28 However, phenotyping and predictive tools to identify these psychosocial indicators, in addition to interventions aimed at treating pain and modulating these characteristics, have been shown to provide improved outcomes for chronic pain patients. [29][30][31][32][33][34][35][36]
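As an illustration of how such questionnaire-based phenotyping is quantified, the HADS mentioned above yields two 7-item subscale totals (anxiety and depression), with each item scored 0-3; the cut-offs in the sketch below are commonly used conventions and are shown for illustration only, not as clinical guidance:

```python
# Illustrative HADS subscale scoring (anxiety or depression): seven items, each 0-3.
# Cut-offs of >=8 (possible) and >=11 (probable caseness) are commonly used conventions.

def score_hads_subscale(item_scores):
    """Return (total, interpretation) for one 7-item HADS subscale."""
    if len(item_scores) != 7 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("Expected 7 item scores, each between 0 and 3.")
    total = sum(item_scores)
    if total >= 11:
        label = "probable clinically significant symptoms"
    elif total >= 8:
        label = "possible (borderline) symptoms"
    else:
        label = "within the normal range"
    return total, label

print(score_hads_subscale([2, 1, 3, 2, 1, 2, 2]))  # (13, 'probable clinically significant symptoms')
```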
The presence of sleep disturbances is another important psychological determinant of health. It is well known that sleep disturbances and chronic pain frequently coexist. A large percentage of patients with chronic pain experience some form of sleep disorder. This relationship creates a paradoxical problem because, as a patient becomes more fatigued, their pain intensity rises and their ability to reduce pain is suppressed. 37,38 Alsaadi et al found that the probability of developing a sleep disorder increased by 10% for each point increase on the Visual Analog Scale (VAS). 39 As chronic pain and sleep disturbances work in tandem, this relationship should be utilized as a marker of health in order to provide a risk assessment for the development of pain. First, it is important to identify whether the patient expresses any signs or symptoms of sleep disturbances with the use of questionnaires, including the validated Pittsburgh Sleep Quality Index (PSQI) score 40 and the Insomnia Severity Index. 41 Karaman et al discovered that the presence of chronic pain was associated with significantly higher PSQI scores versus those without chronic pain. 42 These scores were also noted to be even higher in males versus females. 42 There is also evidence that patients suffering from sleep deprivation have a better response to the medication pregabalin than to opioid medications like codeine. 43 Having access to this information provides alternative targets like sleep hygiene and may aid in drug selection, such as pregabalin.

Social Factors

The paradigm of chronic pain also includes social factors in addition to the psychological factors described above. Social factors that are routinely studied in relation to chronic pain include: social support, social isolation, satisfaction with social roles, and social responses to pain behaviors. 44 Social support, and even perceived social support, has been positively correlated with better pain outcomes such as pain severity and improved overall functioning. [44][45][46][47] In a recent study of older adults with chronic pain, perceived social support was found to moderate the association between pain intensity and depressive symptoms. 48 In another study, perceived co-worker and supervisor support was predictive of a clinically relevant and functional recovery in army workers with non-acute and non-specific low back pain. 49 Social isolation is another construct that has been found to influence chronic pain and its downstream outcomes. In a study by Leung et al, social isolation was determined to be an important factor not only in the evolution of chronic pain in elderly individuals but was also associated with its onset. 50 It is also important to understand how the nature of other important social interactions, such as those surrounding employment, may influence chronic pain and chronic pain outcomes. Dissatisfaction with co-workers and lack of social support at work are among the predictors of pain-related work disability. 51,52 The social environment can also be utilized for adaptive purposes. Social support in the form of encouragement to complete tasks was negatively associated with pain-related disability. 53 Furthermore, educating and training loved ones and spouses in assisting with pain-related coping skills has been shown to improve functioning and self-efficacy in managing pain symptoms. 54,55
To assess social constructs for patient pain phenotyping, multiple validated tools are available, including the PROMIS Social Health profile, which includes 7 domains: instrumental social support, emotional social support, informational social support, companionship, satisfaction in participation in social roles, social isolation, and self-perceived ability to participate in social roles and activities.

Physiological Factors

Physiological factors include phenotypic indicators (eg, pain intensity, severity, location, and descriptors) as well as functional biomarkers of pain/sensory function assessed using quantitative sensory testing (QST), neuroimaging, and conditioned pain modulation (CPM). A biomarker is any characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic response to a therapeutic intervention. 56 While no single "pain biomarker" has been identified, a panel of measures may allow for a comprehensive assessment of risk for chronic pain development and/or help predict treatment response. 57 QST, neuroimaging, and CPM may serve as biomarkers by offering insights into the neurobiological processing changes that support the transition from acute to chronic pain as well as the efficacy of treatments designed to prevent this transition. Baseline pain has classically been examined by having patients fill out questionnaires at set intervals of time, which relies on recall over a period of days, weeks, and months. Newer and possibly more accurate ways to identify baseline pain variability include patient journaling or diaries. This allows patients to write down or electronically submit changes to their pain score in real time, eliminating the need for recall, which can potentially lead to errors. Extensive effort has been placed in the development of electronic diaries for real-time symptom documentation. While there has been no difference seen in the ease of use between paper and electronic diaries, some studies have indicated patient preference and willingness to continue using electronic diaries over paper. 58 Handheld applications that can be downloaded to a smartphone or tablet can easily be taken with patients wherever they go, whereas they may be less likely to bring a written pain diary with them. The other advantage electronic diaries provide is the ability to present real-time data to the provider in order to make more frequent adjustments and recommendations regarding activity level and/or medication doses/changes. So far, there is no research showing improvement in pain management from the use of electronic pain trackers and their ability to support frequent adjustments to care; however, this should be an area of focus in the future. For mechanistic characterization of pain, questionnaires like the Short Form McGill Pain Questionnaire, 59 the Pain Quality Assessment Scale, 60 and PainDETECT 61 can be useful. These questionnaires allow patients to describe the type of pain they are having. Examples of pain descriptors include burning, heavy, paroxysmal, lightning, and sharp. There is evidence of an improved response to pregabalin versus placebo when pain was described as deep, electrical, or burning. 62 Neuropathic pain as a whole has the potential to be utilized as a phenotypic predictor. After determining that a patient has neuropathic pain, one can assess the resulting sensory abnormality by using the Neuropathic Pain Symptom Inventory (NPSI). 63
Patients with high NPSI scores showed equal responses to duloxetine and pregabalin; however, those with lower NPSI scores had a large variation between the two drugs. 63 QST is a collection of methods designed to measure patient response to various stimuli (eg, mechanical, thermal, cold, pressure) in order to evaluate somatosensory function as well as identify the nature/presence of hyperalgesia and allodynia. 64,65 It is most commonly utilized to evaluate neuropathic pain conditions. The protocol with the best validation is the German Research Network on Neuropathic Pain (DFNS) battery, which can help determine detection and pain thresholds to both mechanical and thermal stimulation in addition to assessing for wind-up pain. 66,67 Following completion of this protocol, patients can be assigned a profile. Previous research in this area has shown that QST profiles can have tremendous overlap between different neuropathic conditions, indicating that the profile is not pain syndrome-specific but patient-specific, thus hopefully allowing for individualized treatment regimens. [68][69][70] This was modeled by Demant et al, who showed improved treatment of neuropathic pain with hyperalgesia (sensory gain) using the sodium channel blocker oxcarbazepine versus minimal benefit in patients with neuropathic pain and sensory loss. 71 A variety of research on the use of QST has been released within the past few years. One study found a correlation between the treatment response to botulinum toxin and thermal sensation in post-herpetic neuralgia, such that patients received greater benefit from botulinum toxin if they were found to have intact thermal sensation on QST prior to treatment. 72 Another study examined patients with pain following spinal cord injury. Results showed an increased response to pregabalin if thermal sensation was intact and minimal benefit when thermal sensation was lost. 73 With these studies in mind, QST could form a critical pillar in the development of treatment regimens. That said, time will need to be addressed as a primary limiting factor for implementation in the clinical setting, as the DFNS QST is a rather lengthy test that averages 1-3 hours per patient. 74 Attempts at simplifying QST batteries for administration in the clinical setting have been reported and have shown that a variety of simple bedside tools (ice cubes, pinprick, cotton swabs) can reliably be used to quantify numerous QST parameters. [75][76][77][78][79] Further research is warranted via large multi-center trials in order to investigate whether bedside QST batteries can predict response to and improve pharmacologically directed therapies. CPM and temporal summation are two other categories of sensory testing that may serve as specific phenotypic markers of centralized pain conditions. CPM describes the body's ability to use one noxious stimulus to inhibit or reduce the response to another. 80 This phenomenon is referred to as diffuse noxious inhibitory control. 81 It is believed to involve opioid, serotonergic, and noradrenergic pathways. 82 The effectiveness of CPM can be tested by applying a noxious stimulus alone and in the presence of sustained stimulation. If CPM is functioning effectively, the pain from the original noxious stimulus should be reduced in the presence of a coexisting tonic stimulus. 83
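Quantification of CPM is usually a simple difference score, although sign conventions and stimulus choices vary between laboratories. A minimal sketch, assuming a 0-100 numeric rating scale and expressing the effect so that positive values indicate inhibition:

```python
# Minimal sketch of a CPM effect score. Conventions differ between laboratories;
# here a positive value means the conditioning stimulus reduced test-stimulus pain
# (efficient endogenous inhibition), and a negative value indicates facilitation.

def cpm_effect(test_alone: float, test_during_conditioning: float) -> float:
    """Pain rating to the test stimulus alone minus the rating to the same test
    stimulus delivered during a tonic conditioning stimulus (0-100 scale assumed)."""
    return test_alone - test_during_conditioning

print(cpm_effect(60, 42))  # 18  -> inhibition present (CPM functioning)
print(cpm_effect(60, 63))  # -3  -> facilitation (poorly functioning CPM)
```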
This was studied by Yarnitsky et al, who found that patients with poorly functioning CPM had a much better response to duloxetine in the treatment of diabetic peripheral neuropathy than those with properly functioning CPM. 84 Temporal summation, which is the increased perception of pain when an identical noxious stimulus is repeated, is also being studied. Patients with chronic pain conditions often exhibit higher levels of temporal summation. 82 While there have been few large multicenter studies conducted on this phenomenon, it is believed that it can be utilized as a predictor of the development of chronic pain following surgery. 85 Neuroimaging is another physiological assessment tool being utilized by researchers to identify neurobiologic mechanisms underlying chronic pain. As neuroimaging has advanced over the years, we have come to understand that pain perception and modulation is an extremely complex pathway involving multiple structures in the central nervous system (CNS). Multiple chronic pain states, including fibromyalgia, chronic low back pain, osteoarthritis (OA), and complex regional pain syndrome (CRPS), have been studied utilizing neuroimaging modalities. The most commonly utilized modality is functional magnetic resonance imaging (fMRI), which can assess the activity and connectivity of different regions of the brain by detecting blood oxygen levels. 86 This can be done while the patient is at rest (resting-state fMRI), performing a task, or with an evoked painful stimulus. Researchers have utilized functional neuroimaging to study changes in connectivity in multiple structures throughout the CNS, including the primary somatosensory cortex, posterior insular cortex, thalamus, amygdala, hippocampus, and basal ganglia. Alterations consistent with increases in pro-nociceptive connectivity and decreases in anti-nociceptive connectivity have been found. [87][88][89][90][91][92][93] Few studies have been performed to investigate how interventions affect neuroimaging signatures pre- and post-intervention, but these and future studies may lead to findings that allow for the use of this modality to influence precision pain medicine. [94][95][96][97][98][99][100] Two other imaging modalities used are proton magnetic resonance spectroscopy (H-MRS) and positron emission tomography (PET). H-MRS can be used to determine changes in pain-regulating neurotransmitters like GABA and glutamate. Clinical studies have shown an increased level of glutamate on H-MRS scans in patients with fibromyalgia compared to healthy controls. 101 PET scans are currently being utilized to evaluate opioid receptor density and binding capacity, which may influence a patient's response to opioid medications. These types of imaging modalities show great promise for influencing precision medicine. 101 This evidence further supports the complexity of chronic pain and the drastic alterations it can cause in the CNS. By utilizing neuroimaging to identify which CNS structures are altered, patients can be further categorized beyond a chronic pain or fibromyalgia diagnosis, and mechanism-based research can be performed in the hope that it may lead to more targeted treatment regimens and appropriate drug dosing. 102,103 Understanding CNS alterations will also help select patients for appropriate clinical trials in order to further advance the future study of precision medicine.
Genetic Factors

Individual differences in the DNA sequence (genetics) and the structure of the genome (epigenetics) are estimated to account for up to 70% of the individual differences in pain sensitivity and susceptibility to chronic pain conditions, [104][105][106][107][108] in addition to affecting the response to pain-relieving treatments (eg, pharmacogenetics). Individual differences in the DNA sequence are now being used in various subspecialties to assess risk for disease, disease progression, and other relevant health outcomes. 109,110 Incorporating genetic and epigenetic analysis into practice provides physicians the opportunity to tailor treatment regimens to specific disease processes, maximize drug efficacy, and minimize unnecessary adverse reactions without trial and error. 109 Single nucleotide polymorphisms (SNPs) are the most common variants and represent differences in the nucleic acid sequence at a given genomic location (ie, alleles) between individuals. The major allele is present in most of the population, and the less common (ie, minor) allele frequency varies but occurs in greater than 1% of the population. 111,112 This type of genetic variation occurs approximately every 1000th nucleotide, so it is estimated that there are roughly 4-5 million SNPs in the human genome contributing to the significant phenotypic variation across the population. Copy number variation (CNV), on the other hand, is a variation in the number of gene copies an individual carries relative to the baseline of two copies, one on each chromosome. While the role of SNPs and CNVs in pain susceptibility and/or painful disease progression remains to be fully understood, there are well-known associations between these variations and treatment response that are already driving precision medicine approaches in other fields, including cardiovascular medicine, 113,114 rheumatology, 115 and oncology. 116 While not explicitly related to precision pain management, genetic analysis is beginning to be applied by anesthesiologists, most often to predict the risk for relatively rare conditions that can develop during anesthesia exposure. One application that has become more common is the use of genetic analysis for prediction and treatment of prolonged paralysis/apnea following succinylcholine administration in patients with genetic variations in BCHE, the gene encoding butyrylcholinesterase (BCHE). More than 60 genetic variations have been identified in BCHE, affecting the quantity and quality (eg, enzymatic effectiveness) of the BCHE produced. 117 Currently, BCHE genetic testing is typically ordered only after a patient or close genetic relative has reported an episode of extended paralysis following succinylcholine. In addition, the Clinical Pharmacogenetics Implementation Consortium (CPIC) has also issued guidelines regarding the identification of "diagnostic mutations" within the CACNA1S (encoding the calcium voltage-gated channel subunit alpha 1 S (CACNA1S)) or RYR1 (encoding ryanodine receptor 1 (RYR1)) genes responsible for the emergence of malignant hyperthermia (MH) following exposure to volatile anesthetics or the depolarizing muscle relaxant succinylcholine. 118 Most often, diagnostic and genetic testing follows an episode of MH, but the identification of individuals susceptible to MH through genetic testing could significantly reduce the relatively high morbidity (35%) 119 and mortality (12%) 120 of an MH episode.
In this way, the technology for detecting genetic variations has become commonplace, but the translational application of this knowledge to predict anesthesia-related outcomes is only beginning to be used to improve drug efficacy and eliminate adverse reactions in the clinical setting. Pharmacogenetics could be applied to decrease opioid use for pain in a number of ways, including assisting in prediction of a patient's opioid analgesic response and individual opioid use disorder risk. Physicians have commonly relied on medications like morphine and oxycodone in the acute/post-surgical setting, but they come with a significant side effect profile and risk of tolerance and abuse. Both SNPs and CNVs are seen in the CYP2D6 gene encoding the hepatic enzyme cytochrome P450 family 2 subfamily D member 6 (CYP2D6), a key enzyme in the metabolism of ~25% of clinically used drugs, including the opioid medications codeine and tramadol. CYP2D6 is responsible for metabolizing these drugs into their biologically active metabolites, a conversion process that is required for the patient to receive optimal analgesic benefit. 121 This combination of SNPs and CNVs results in patient phenotypes ranging from poor drug metabolizers (low enzymatic activity) to ultrarapid metabolizers (very high enzymatic activity), 122 and can greatly impact the analgesic and side effect response to opioid medications. 123 Understanding the opioid metabolic profile allows the physician to avoid medications when they are unlikely to be effective and/or safe. If opioids are deemed to be the optimal choice for pain management, such as in cancer-related pain, application of pharmacogenetics could help determine the most appropriate opioid and dosage based on patient genotype and the associated metabolic phenotype. 124 Understanding the genetic profile of the CYP enzymes also provides opportunities for increasing precision in the use of non-opioid pain medications. Amitriptyline, a commonly utilized medication in pain management, undergoes metabolism by CYP2D6 as well as another member of the cytochrome P450 family, CYP2C19, encoded by the CYP2C19 gene. If a patient is a known poor metabolizer of amitriptyline, based on CYP2D6 and/or CYP2C19 genotype, then their treatment dose only needs to be about 50% of the standard dose. 125,126 However, if they are a rapid or ultrarapid metabolizer, then the recommendation is to avoid the use of amitriptyline or to administer roughly 110% of the standard treatment dose. 121,127 It is not only the analgesic efficacy of a drug that should be considered, but also the risk for adverse effects. More recently, this approach has been used in the specific therapeutic recommendations for avoiding adverse effects resulting from a non-steroidal anti-inflammatory drug (NSAID) by evaluating a patient's risk by CYP2C9 genotype 128 and the half-life of the NSAID prescribed. NSAIDs act by inhibiting COX-1 and/or COX-2, both of which play a role in prostaglandin production. 129 The adverse effects of NSAIDs include gastrointestinal bleeding, cardiovascular complications, and kidney damage, and risk increases with the dose administered and length of exposure. 
[130][131][132] While guidelines vary depending on the half-life of the specific NSAID under consideration, recommendations broadly advise that individuals with normal CYP2C9 enzymatic activity levels (normal metabolizers) can tolerate a typical dosing regimen (prescription or non-prescription), while dosing and duration should be reduced, or the drug avoided entirely, in those with intermediate or poor metabolic phenotypes, respectively. 128 Knowledge of a patient's non-opioid genetic metabolism profile can allow the physician to prescribe an adequate treatment dose while likely circumventing unnecessary medication adverse effects, the fourth leading cause of death in the United States. There are several genes whose SNP genotypes have been associated with differences in pain severity or risk for development of a chronic pain condition. A comprehensive summary of these genes and variations is outside the scope of the current review but has been presented elsewhere. 133 The relationship between pain susceptibility and variations within these genes may help to identify patients who are at risk for disproportionately severe pain or who are most likely to develop chronic pain. These relationships could not only help to identify those patients at the highest risk for pain but also point to specific targets for novel precision pain therapeutic development designed to address the underlying mechanism of risk. Arguably, the most well-defined example in this category of "pain genes" is COMT, encoding the enzyme catechol-O-methyltransferase (COMT), which is responsible for the breakdown of catecholamines. 134,135 COMT SNP genotype is associated with altered sensitivity to painful stimuli as well as the development of chronic pain conditions (eg, fibromyalgia, chronic widespread pain, irritable bowel syndrome, migraine headache) and may contribute to individual differences in morphine analgesic efficacy. 136 COMT genotype is not currently being used to inform precision pain management in the clinical setting but, moving forward, it may help to identify patients at the highest risk of developing chronic pain after interventions like surgery and chemotherapy. Members of the family of voltage-gated sodium channels responsible for action potential generation and propagation within pain-sensitive neurons include SCN8A-SCN11A, encoding Nav1.6, Nav1.7, Nav1.8, and Nav1.9, respectively. Variations within these genes 137 were first implicated in monogenic disorders of altered pain sensitivity (eg, congenital insensitivity to pain, familial episodic pain syndromes, inherited erythromelalgia, and paroxysmal extreme pain disorder), 138,139 but more recently associations have been identified with pain sensitivity in nonpathologic individuals [140][141][142][143] and with the risk of developing chronic pain conditions. 140 While genotyping for this group of genes is not currently being used clinically outside of diagnosis for monogenic disorders, the genetic and functional validation of these channels in human pain has led to the development of selective sodium channel inhibitors to replace traditional local anesthetics. [144][145][146][147] In the future, the selection of sodium channel-selective molecules could be tailored to the procedure as well as the patient's genotype to improve pain outcomes. Pharmacogenetics has, historically, focused on variations within the genomic DNA sequence associated with patient medication response.
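Before turning to epigenetics, the CYP-based dose adjustments summarized above for amitriptyline can be expressed as a simple lookup. The numbers mirror the text (about 50% of the standard dose for poor metabolizers; avoid the drug or use roughly 110% of the standard dose for rapid/ultrarapid metabolizers); the sketch is illustrative only and is not clinical guidance, for which the published CPIC guidelines should be consulted:

```python
# Toy phenotype-to-dose lookup for amitriptyline, mirroring the adjustments
# summarized in the text above. Illustrative only; not clinical guidance.

DOSE_FACTORS = {
    "poor": 0.5,          # ~50% of the standard dose
    "normal": 1.0,        # standard dose
    "rapid": 1.1,         # ~110% of the standard dose, or consider an alternative drug
    "ultrarapid": 1.1,    # ~110% of the standard dose, or consider an alternative drug
}

def amitriptyline_dose_mg(standard_dose_mg: float, metabolizer_phenotype: str):
    """Return an adjusted dose, or None if the phenotype is not covered above."""
    factor = DOSE_FACTORS.get(metabolizer_phenotype)
    return None if factor is None else standard_dose_mg * factor

print(amitriptyline_dose_mg(25, "poor"))        # 12.5
print(amitriptyline_dose_mg(25, "ultrarapid"))  # 27.5 (or switch to another drug)
```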
The related field of epigenetics focuses on alterations in gene expression that are not the result of alterations to the genomic DNA sequence, but still affect patient outcomes through control of gene expression and downstream end product availability. The epigenome encompasses the heritable components of the genome outside of the DNA sequence, which are involved in regulating gene and protein expression. To this end, individual differences have been noted in DNA methylation, histone acetylation, and histone deacetylation. DNA methylation and histone modifications exert critical control over the chromatin structure of the genome [148][149][150][151] to either promote or inhibit gene expression. 151,152 Health-care professionals use differences in gene and protein end-products expression to assess for specific disease states (eg, increased circulating CRP or decreased insulin), but incorporating epigenetics could help to unravel the mechanisms by which altered gene expression occurs and, potentially, shed light on how to harness that underlying process to improve patient health. In largely preclinical research, epigenetic modifications have been implicated in susceptibility to chronic pain and as a therapeutic target to prevent/treat pain. In rat models of both inflammatory and neuropathic pain, expression of histone deacetylase enzymes (HDACs) is positively correlated with the hypersensitivity; a phenomenon that is reversible with HDAC inhibitors including baicalin, valproic acid, and suberanilohydroxamic acid. [152][153][154][155] There have also been correlations made between histone acetylation and opioid receptor expression. Studies using mouse models have shown that neuropathic injury is also associated with histone-4 acetylation, thereby enhancing activity of neuron-restrictive silencer factor (NRSF) and suppressing expression of OPRM1, which is responsible for the production of μ-opioid receptors; however, HDAC inhibition blocked OPRM1 suppression by NRSF. 156 Hypermethylation of DNA CpG islands has been implicated in the incidence and severity of cancer-induced chronic pain via the increased production of endothelin-1, which has pro-nociceptive properties. 157 Importantly, while we have focused primarily on the inherited aspects of epigenetics, the literature suggests methylation and histone modifications are both nonmodifiable (ie, from parental chromosome donation at conception) and sensitive to modification across the lifespan due to environmental or lifestyle factors. Epigenetic modifications may be engaged in the perioperative period and serve as a key component linking acute surgical pain to chronic pain. Elevated levels of glucocorticoids released during the perioperative period secondary to the stress of surgery have the ability to disrupt DNA methylation, releasing key genes from transcriptional repression. This can result in C-fiber dysfunction, increased levels of pain promoting neurotransmitters, and altered responsiveness to morphine. 158,159 While the incorporation of epigenetics, and genetics more broadly, into evidence-based practice shows great promise, future studies are needed to identify the most clinically relevant modifications for pain and analgesia and develop strategies for use in precision diagnostic and treatment algorithms as well as non-opioid targeted therapies. 
Cancer-Related Pain While precision medicine has helped change the landscape of cancer research and treatment, there has been far less application towards the management and treatment of cancer-related pain. Cancer-related pain places significant burdens on a high percentage of patients and, unfortunately, less than half of patients who suffer from pain will obtain adequate relief. 160 Current guidelines for the treatment of cancer-related pain include the World Health Organization analgesic ladder, which begins with non-opioid medications like NSAIDs for mild pain and progresses to opioids ± nonopioids as pain becomes moderate to severe. 161 While this provides a good framework for treating and managing pain, it does not include specific guidance on opioid selection and dosing or interventional options. Pharmacogenetics has the potential to improve guidance in dosing and drug selection. For instance, focusing on SNPs of genes like OPRM1, where it is well known that patients possessing one or more G alleles have decreased transcription of opioid receptors as well as response to opioid binding, may help improve starting doses as well as titration. 162 Additionally, one multicenter cross-sectional study investigated alterations in CYP2D6 genotyping and pain management in cancer patients with oxycodone, but found no difference in pain scores despite showing significant differences of oxycodone metabolites including oxymorphone. 163 While this study did not show a difference in pain scores, there may be a benefit for a drug selection that has not been studied. While half of patients with cancer-related pain have insufficient pain control, 25% continue to suffer from inadequate pain control at death. 164 With suffering so high, it is important to recognize that interventional therapies in addition to medications may be necessary. A patient's pharmacogenetic profile may indicate that they are a poor candidate for medical therapy alone, in which case a referral to a pain specialist may be beneficial for evaluation of nerve blocks, neuromodulation, and intrathecal drug delivery. The impact of the biopsychosocial model of pain has been applied to cancer-related pain, and the present data may be helpful to clinicians providing precision pain medicine care to cancer patients with pain. Among cancer patients, much research has demonstrated the importance of psychosocial factors in the experience of pain. Individuals with cancer experience higher rates of psychosocial distress after their diagnosis and during their cancer treatment, and anxiety and depression have historically been reported as correlated with greater pain severity and poor pain outcomes. 165,166 However, in a recent publication of a large cohort of cancer patients with chronic cancer-related pain (n = 700), it was found that pain catastrophizing and sleep disturbance were consistently associated with elevated pain symptoms. 167 This correlation of increased pain severity and poor pain outcomes has been corroborated by other groups. 168 The impact of social constructs on pain and pain outcomes in cancer pain has been reported in a recent meta-analysis that identified that social support has been found to be associated with less postoperative pain after breast cancer surgery. 169 Furthermore, the presence of a strong social support network is associated with reduced cancer pain symptom burden, improved quality of life, and reduced distress in patients with chronic lymphocytic leukemia, 170 breast cancer, 171,172 and colorectal cancer. 
173 Emerging research is also being performed to identify how physiologic pain processing parameters may correlate with pain outcomes in breast cancer surgery patients. In a study by Schreiber et al, breast cancer surgery patients with reduced pressure pain thresholds and higher pain ratings after pinprick temporal summation were more likely to develop post-mastectomy pain syndrome. 174 Further research in this and other cancer pain populations is warranted to determine if QST is a strong predictive parameter in providing precision pain management to cancer patients. Limitations of the Field Precision pain medicine offers the promise of a novel set of solutions to the problem of chronic pain, through mechanism-focused prevention and individualized risk-focused treatment strategies. Unfortunately, the current state of the science is still focused on identification of the critical factors that make up the patient's profile of risk. Precision medicine is dependent on data, but those data also constitute one of the primary areas of concern in the field of precision healthcare. Large volumes of highly specific patient data must be managed appropriately to protect patient privacy. In addition to protecting patient data, access to precision pain management strategies must not be restricted on the basis of financial means or socioeconomic status. In fact, given that psychological, physiological, and genetic factors could have a differential impact on chronic pain risk based on race, ethnicity, sex, and other socioeconomic factors, the application of new findings to diverse groups must be based on evidence-based medicine and not on the assumption that all groups will benefit equally from precision pain management strategies that work for others. Currently, the integration of pharmacogenetics/genomics, nuanced phenotyping, and neuroimaging requires the availability of significant infrastructure (eg, clinical expertise, equipment, facilities), but as the cost of these resources decreases and education for their application is more effectively integrated into medical training, they should become more widely used to benefit patients and reduce suffering. Another limitation concerns who is managing pain. While the number of pain centers using multimodal assessment and treatment strategies grew in the 20th century, survey data suggest that only 15% of people living with pain have accessed specialty pain management services and more than 50% of pain management is happening in primary care settings. 175 While there has been significant progress made in the assessment and treatment of pain within these centers of expertise, the successful application of precision pain medicine for the masses depends on integration of these approaches into primary care as well as pain medicine. Limitations of the Present Study The goal of the present review was to provide a comprehensive overview of precision medicine as it pertains to the field of chronic pain management. While we focused on many areas that are known to be important in the field of precision medicine, we understand that a limitation of the present review is that not all biomarkers and phenotyping parameters could be included. Additionally, while many chronic noncancer and cancer-related precision pain data have been presented, we recognize that there may be additional research on other pain syndromes that were not included.
Given these limitations and the importance and timeliness of the topic, further narrative, scoping, or systematic reviews should be pursued. Moving Forward Chronic pain continues to be a growing public health problem, requiring significant financial and health-care resources annually while negatively impacting the wellness and quality of life of millions. Identification of patient risk profiles by incorporating genetic and phenotypic data is key to the development of precision pain and analgesic medicine strategies. The evidence is clear that pain is an individualized experience, and personalized and/or precision treatments could improve pain outcomes. At present, however, we are still in the discovery phase, with the goal of moving into evidence-based practice in the coming years. Once we have a clear understanding of the mechanisms that drive pain, we can progress beyond the basic diagnosis and treatment of symptoms to the management of the underlying pathophysiology.
Identification, Analysis and Characterization of Base Units of Bird Vocal Communication: The White Spectacled Bulbul (Pycnonotus xanthopygos) as a Case Study Animal vocal communication is a broad and multi-disciplinary field of research. Studying various aspects of communication can provide key elements for understanding animal behavior, evolution, and cognition. Given the large amount of acoustic data accumulated from automated recorders, for which manual annotation and analysis is impractical, there is a growing need to develop algorithms and automatic methods for analyzing and identifying animal sounds. In this study we developed an automatic detection and analysis system based on audio signal processing algorithms and deep learning that is capable of processing and analyzing large volumes of data without human bias. We selected the White Spectacled Bulbul (Pycnonotus xanthopygos) as our bird model because it has a complex vocal communication system with a large repertoire which is used by both sexes, year-round. It is a common, widespread passerine in Israel, which is relatively easy to locate and record in a broad range of habitats. Like many passerines, the Bulbul’s vocal communication consists of two primary hierarchies of utterances, syllables and words. To extract each of these units’ characteristics, the fundamental frequency contour was modeled using a low degree Legendre polynomial, enabling it to capture the different patterns of variation from different vocalizations, so that each pattern could be effectively expressed using very few coefficients. In addition, a mel-spectrogram was computed for each unit, and several features were extracted both in the time-domain (e.g., zero-crossing rate and energy) and frequency-domain (e.g., spectral centroid and spectral flatness). We applied both linear and non-linear dimensionality reduction algorithms on feature vectors and validated the findings that were obtained manually, namely by listening and examining the spectrograms visually. Using these algorithms, we show that the Bulbul has a complex vocabulary of more than 30 words, that there are multiple syllables that are combined in different words, and that a particular syllable can appear in several words. Using our system, researchers will be able to analyze hundreds of hours of audio recordings, to obtain objective evaluation of repertoires, and to identify different vocal units and distinguish between them, thus gaining a broad perspective on bird vocal communication. 
INTRODUCTION Vocal communication is an essential tool for transferring information. It serves a diverse range of species and is a topic of multi-disciplinary interest. Studying the regularities and contexts of bird vocalizations may provide keys to understanding numerous aspects of bird behavior. While being an essential part of various species' biology, the study of vocal attributes and the inference of the signaling properties remains a major challenge. This is because the information conveyed by vocal communication includes many components and facets that include physical attributes such as amplitude, frequency, rhythm, and intensity, as well as more complex aspects such as syllables, words, phrases and more (Kershenbaum et al., 2016). In addition, audio recordings produce a vast amount of digital data per vocalization. Furthermore, these parameters may be expressed differently, which leads to different patterns and correlations between populations and individuals that can be difficult to identify and even predict. This raises intriguing questions about the meaning of animal sounds (Bruno and Tchernichovski, 2019). According to a study of blue tits, for example, there is a correlation between the call length in males' courtship songs and extrapair paternity. In this case, the call length provides information about the quality of the singer (Kempenaers et al., 1997). Similar patterns were found with respect to rhythm in sparrows (Passerculus sandwichensis) and to variability of calls in warblers (Sylvia communis) (Balsby, 2000;Sung and Handford, 2020). From the calls' characteristics we can reveal information not only at the level of the individual, but also at the level of the species. For example, studies have shown that species with a large repertoire typically have plastic and non-permanent songs, indicative of learning abilities throughout their lifetime in the vocal domain. These species are called open-ended learners. Such large repertoires introduce multiple challenges when aiming to unravel the signaling properties behind the vocalizations. For instance, large repertoires have been found to indicate a high reproductive success in some species, yet, in Botero et al. (2009), it was demonstrated in tropical mockingbird (Mimus gilvus) that the variation between vocal expressions decreased as the bird aged, and the expressions became more consistent. 
In this study system, individuals with more consistent performance tended to achieve higher dominance status and greater reproductive success (Botero et al., 2009). These unexpected patterns demonstrate the diversity and complexity of vocal communication systems. Manual annotation and analysis of bird song is laborious, time-consuming, and prone to subjective bias. Deep learning and algorithms for extracting audio parameters have the potential to overcome these limitations and challenges of reproducibility and of scaling up to large datasets. In recent years, analyzing digital recordings has benefited from the development of reliable automatic algorithms and deep learning, such as available software for syllable recognition and clustering (DeepSqueak, Coffey et al., 2019), an online tool for bird species identification (BirdNET, Kahl et al., 2021) and robust software for animal call detection in large and noisy databases (ORCA-SPOT, Bergler et al., 2019). Still, in many cases researchers rely on subjective naming of calls and on manual division of vocal units. In addition, in many studies the manual analysis is based on a limited amount of data and may miss out patterns which may be revealed only if enough data is automatically processed and analyzed. In a broader scope, the development of advanced automated tools for bio-acoustic analysis can support large-scale research and reveal organisms' vocal communication patterns, may facilitate monitoring of populations, and can be leveraged for management and conservation efforts in natural environments (Righini and Pavan, 2020;Kahl et al., 2021). In this study, using automatic signal processing algorithms and deep learning, we analyzed White Spectacled Bulbul (Pycnonotus xanthopygos) vocalizations. This species is a common, widespread passerine, and was selected as our model since it is characterized by tight social bonds between individuals and a wide repertoire of vocalizations (Shirihai and Svensson, 2018), used year-round by both sexes. We analyzed and characterized 660 base units of the White Spectacled Bulbul from recordings of 14.5 h, to investigate its repertoire and its use of different vocal units. Our analyses show that Bulbul calls are complex vocalizations-words, most of them composed of more than one base unit-syllable. The complexity of the Bulbuls' vocal communication can be revealed by intuitive hearing as well as by inspecting spectrograms, or by a more elaborate analysis. However, here we present a set of quantitative automatic methods that make up a pipeline of automatic detection of Bulbul calls, and an analysis of these vocal units that allows classification into different groups, both by supervised and unsupervised learning. These methods (1) allow objective validation of the robustness of words' and syllables' classifications; (2) carry out automatic identification and classification to pre-defined classes; and (3) provide the basis for a fully automated process of defining the word and syllable repertoires of a species or an individual. Our analyses show that the same syllables are used in different words and in distinct geographic populations. This pattern is very likely to indicate a complex hierarchical structure (Kershenbaum et al., 2016) and that the White Spectacled Bulbul is an open-ended learner vocalizing species. Furthermore, this pattern can imply the existence of a more complicated form of communication. 
The hierarchy of syllables and words provides a basis for investigating syntax questions that are today the focus of widespread interest (Menyhart et al., 2015;Suzuki et al., 2016;Bruno and Tchernichovski, 2019;Searcy and Nowicki, 2019). MATERIALS AND METHODS A block diagram of the processing and analysis stages of different base units of Bulbul's vocalization is depicted in Figure 1. Each of the procedures used in each block is detailed below. Data Set Our dataset was collected from eight SM4 automatic recorders of Wildlife Acoustics (Wildlife Acoustics, 2018) placed at four different locations (Figure 2), two recorders at each. The two recorders were placed several hundreds of meters apart to ensure that more than one individual was recorded in each location. The recordings were taken for a period of 6 months to a year, at dawn, noon, and dusk, for a total of 4 h per day, with each recording lasting from 30 to 60 min. Overall, more than 7,000 h were recorded. Six of the recorders were located in northern Israel-four in the Hula valley (She'ar Yashuv and Agmon Hula) and two in a nearby location on the Naftali Mountain range (Yiftah) that is characterized by different habitat and weather. The last two recorders were installed in the bird observatory in Jerusalem, a distinct population for comparison. Since all the recordings were carried out in natural habitats, they contain many types of background noise including other birds and animals, weather sounds and artificial sounds. We used several methods (bandpass filtering, median clipping, and small object removal) described in section "Word Analysis" to filter out the different noises (Figure 3). Pre-processing The acoustic signal was sampled at 44,100 KHz and filtered using a Band-Pass filter between 1 KHz and 3.5 KHz to eliminate background noise and preserve frequencies relevant to the Bulbul's vocalization. The signal was divided into short segments, either consecutive segments of equal size (0.5-1 s each, similar to the typical bird vocalization duration) with 50% overlap for automatic detection of acoustic events, or of variable size for extracted words or syllables. In both cases, for each segment the discrete short-time Fourier transform (STFT) is calculated using: where x(n) is the acoustic signal, w(n) is a han window used to multiply each frame, where frames of 512 samples (∼12 ms) with a hop size of 128 samples are regularly used. Consequently, a mel spectrogram was computed for each segment. Mel scale is a logarithmic-like scale based on the human auditory system that represents the sound frequencies in a similar way to how we and other animals perceive. Acoustic Feature Analysis for Analyzing Syllables In order to compare between vocalization units, we extract several parameters from each signal. Features were extracted both in the time-domain (e.g., zero-crossing rate and energy) and frequency-domain (e.g., spectral centroid and spectral flatness, MFC coefficients). The following features were used to characterize the spectrum: (a) Spectral Centroid(S c ) measures the center of mass of the spectrum, and is calculated as: where | X(k)| is the magnitude of the kth' bin of DFT, and f(k) is its center frequency. 
is the sound and is computed as the ratio of the geometric mean to the arithmetic mean of the energy spectrum: Klapuri and Davy (2007), which is a weighted standard deviation of the spectrum in a given audio segment: (d) MFB (log mel filter bank) are a set of filters arranged according to a mel-scale, a perceptually based frequency scale that aims to mimic the frequency perception of the human auditory system (Davis and Mermelstein, 1980). MFB is widely used in audio signal processing including bird analysis and music signal processing. It is calculated by using Discrete Fourier transform of each frame and applying overlapping triangular filter banks, where each filter output is a weighted sum of magnitudes of frequency bins within its support. Data reduction is also a benefit of this computation. (e) Mel Frequency Cepstral Coefficients (MFCC) The MFCCs, derived from the MFB by applying a discrete cosine transform are very common features in audio analysis. A total of 13 coefficients are computed for each frame, where the first four are used for the analysis. In Addition, two time-domain parameters were computed: (f) Zero Crossing Rate (ZCR) is defined as the number of times an audio waveform changes its sign within the duration of the signal, and is calculated as: Where K is the signal length. (g) Fundamental frequency f 0 − which is evaluated using the YIN (De Cheveigné and Kawahara, 2002) or the PYIN (Mauch and Dixon, 2014) algorithms. Legendre Polynomials In many passerine species, most of the spectral energy of the vocalization is concentrated around the fundamental frequency (Nowicki, 1987;Podos, 2001) since the avian vocal tract attenuates a greater part of the energy of higher harmonics. It is therefore reasonable to assume that a considerable portion of the information conveyed by bird vocalization may be attributed to the intonation, i.e., the fundamental frequency contour. In order to extract this information quantitatively, we modeled the fundamental frequency contour using a low degree Legendre polynomial, enabling it to capture the different patterns of variation from different vocalizations, so that each pattern could be effectively expressed using only 3-4 coefficients. This analysis may help us characterize and visualize the fundamental frequency patterns of various syllables which were subjectively divided to different groups. The usage of the Legendre polynomials for modeling the fundamental frequency was employed in different applications of speech and audio. For example, it has been used to model pitch contour for synthesizing intonation (Zhou et al., 1984), to describe mathematically the nuclear accents found in English in the English isles and to use it for intonation labeling (Grabe et al., 2007). It was also used for automatic language identification (Lin and Wang, 2005), as well as to detect sarcasm in speech and for analyzing prosody (Rakov and Rosenberg, 2013;Rakov, 2019). The Legendre polynomials is a system of orthogonal polynomials defined as: is the n-th order term. It can also be expanded with the polynomials {1, t, t 2 . . . } using Gram-Schmidt process. According to this definition the first four terms are: Frontiers in Behavioral Neuroscience | www.frontiersin.org Following Grabe et al. (2007), we used the first four polynomials, L 0 , L 1 , L 2 and L 3 which represent the average of the signal, its slope, quadratic trend, and wavelike shape, respectively. The following steps were carried out to fit the Legendre series p (t) toF 0 (t): a. 
A single syllable or vocalization unit is demarcated and excerpted from the acoustic recording. This was carried out using Audacity (Audacity, 2021). b. The sampled signal s (t n ) , n = 0, 1, ..., M − 1 of length M, where t n = nT s are the time samples, is filtered using a bandpass filter between 700 and 3,900 Hz, based on the range of frequencies characteristic of the White Spectacled Bulbul. c. For each sampled syllable or vocalization unit the fundamental frequency contour F 0 (t n ) is estimated with either PYIN (Mauch and Dixon, 2014) or the YIN (De Cheveigné and Kawahara, 2002) algorithms, or by using a simple zero-crossing rate analysis signal z(t n ). In many cases the latter is preferred, since the pitch detector algorithms (YIN and PYIN) which were developed mainly for speech and music signals, may not be robust enough for noisy bioacoustic data. Furthermore, the ZCR computation yields a good estimation of F 0 (t n ). d. A polynomial fit is used after scaling the time axis to be between −1 and 1. The estimated contour,F 0 (t n ), is modeled using an m-th degree Legendre series defined as: where L j (t) is a Legendre polynomial and a j is its corresponding coefficient. The polynomial series is a least square fit to the datâ F 0 (t n ), where the fitting process is carried out by solving an overdetermined set of linear equations of the form: where V (t) is the pseudo Vandermonde matrix of t, a is the vector of coefficients. Word Analysis Bulbuls have complex vocalizations which are described as words that consist of several base units-syllables, and intervals in between. Extraction of features from these complex units is complicated due to their various number and type of syllables, as well as their varying intervals. Therefore, we aimed to create one feature vector that describe the entire vocalization. For this, we used the mel-spectrogram, applied to the raw isolated word signal, with 35 mel filters between a low frequency f L and a high frequency f H . We set f L to 700 Hz and f H to 3,900 Hz, according to the range of frequencies for most Bulbul vocalizations. Consequently, a variation of median clipping (Lasseck, 2014;Fukuzawa et al., 2016) following by a small object removal is applied. These two simple image processing techniques are applied to increase the SNR, since in most of the recordings a high background noise is present: (1) Median clipping-in this technique a binary mask is generated for masking background noise, where for each time-frequency point (i, j), its corresponding spectrogram value S(i, j) is compared to a threshold value which is based on the median of the corresponding row and column of that point. Thus, the median clipped spectrogram S mc (i, j) is obtained by: where F is a multiplication factor set here to be 3.5 and S L is the lowest value in the spectrogram which is set to −80 dB. (2) Small object removal-used to remove small blobs which are probably irrelevant to bird vocalizations and may stem from background noise. This is carried out by converting the median clipped mel-spectrogram to a binary matrix, and for each non-zero entry calculating its immediate non-zero neighbors. Non-zero entries whose number of neighbors is below a pre-defined threshold are zeroed, and a binary mask is obtained. The final spectrogram is then obtained by: Alternatively, the usage of a white top-hat transform (Sonka et al., 2014) was examined with no significant difference. 
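As a concrete illustration of the syllable-level analysis described above (band-pass filtering to the Bulbul's frequency range, F0 estimation with PYIN or a zero-crossing analysis, and a least-squares Legendre fit on a time axis rescaled to between -1 and 1), here is a minimal sketch using librosa and NumPy, two of the libraries reported later in the Methods. The frequency limits and the 8-parameter feature set follow the text; the function names, the use of pyin rather than a ZCR-based estimate, and the mean-aggregation of frame-wise features are assumptions.

```python
import numpy as np
import librosa
from numpy.polynomial import legendre as L

F_LOW, F_HIGH = 700.0, 3_900.0   # Bulbul frequency range used in the text

def legendre_coeffs(f0_contour, degree=3):
    """Least-squares Legendre fit (degree 3 -> 4 coefficients) to an F0 contour.

    The time axis is rescaled to [-1, 1] before fitting, as described in the text.
    """
    f0 = np.asarray(f0_contour, dtype=float)
    f0 = f0[~np.isnan(f0)]                       # drop unvoiced/undetected frames
    if f0.size < degree + 1:                     # not enough voiced frames to fit
        return np.zeros(degree + 1)
    t = np.linspace(-1.0, 1.0, f0.size)
    return L.legfit(t, f0, degree)               # average, slope, quadratic, wave-like

def syllable_features(y, sr):
    """8-dimensional syllable descriptor: length, flatness, centroid, bandwidth,
    plus the four Legendre coefficients of the fundamental frequency contour."""
    f0, _, _ = librosa.pyin(y, fmin=F_LOW, fmax=F_HIGH, sr=sr)
    coeffs = legendre_coeffs(f0)
    return np.array([
        len(y) / sr,                                              # syllable length (s)
        librosa.feature.spectral_flatness(y=y).mean(),
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        librosa.feature.spectral_bandwidth(y=y, sr=sr).mean(),
        *coeffs,
    ])

# Hypothetical usage on a demarcated, band-passed syllable excerpt:
# y, sr = librosa.load("syllable_towy.wav", sr=None)   # file name is illustrative
# vec = syllable_features(y, sr)                        # shape (8,)
```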
An example of a mel-spectrogram before and after these processing operations is depicted in Figure 6. Finally, to compare between feature vectors that represent different words with variable length, we transform the vectors' dimensions to a fixed size by zero padding. Alternatively, the spectrogram was calculated using a fixed number of frames with fixed duration and variable hop length. Synthesis To demonstrate that Legendre polynomial coefficients can extract most of the vocal information from Bulbul calls (at least as perceived by a human), and to use another method to validate the parameters we use, we generated a synthetic vocalization based solely on these coefficients. The synthesis is carried out using the following steps: a. For each input signal with one syllable-a pre-processing is applied, which includes downsampling to a sampling rate of 11,025 Hz, bandpass filtering of the signal using cut-off frequencies (700, 3,900) and demarcation of the syllable boundaries. b. Fundamental frequency contour of the syllable is evaluated, using either pitch detection algorithm or applying a ZCR analysis on the bandpassed signal. The evaluation is carried out using a frame length of 64 samples (5.8 ms) and a step size of 32 samples (2.9 ms). The result of this stage is a vector of consecutive fundamental frequency values. c. A three-degree Legendre polynomial is fitted to the fundamental frequency contour, and four Legendre coefficients are obtained. d. Using the coefficients, a Legendre series is fitted for the time points for which the fundamental frequency contour was evaluated. e. For each frame m, a short sinusoid is produced, with frequency p 0 (t m ) using: x(n) = Re A · e j(2·π·p 0 (t m )·n+ϕ) where A is the signal amplitude, n is the sample index, and ϕ is the phase shift which is corrected for each frame to avoid phase discontinuities. The initial phase is set to ϕ t m = 0 and then for next frames it is set according to the final phase of the former sinusoid: where N is the frame size, T 0 (t m ) is the fundamental period evaluated for the m th frame and T s is the sampling interval. The concatenation of all the frames yields a chirp-like signal, with a fundamental frequency contour according to the evaluated Legendre series. Visualizing by Data Reduction To validate the division into different syllables or words made by listening and observation, which may be subjective, we first tagged 660 vocalizations, containing 1,004 syllables, all collected from nine separate audio files that were recorded at the same location with the same device, each lasting an hour. Further, an algorithm for dimension reduction was utilized, based on the spectral analysis data, to objectively examine the proposed grouping, both for words and syllables. We used PCA (Principal Component Analysis) and t-SNE algorithm (tdistribution Stochastic Neighbor Embedding, Van der Maaten and Hinton, 2008) for dimension reduction and visualization. Both methods perform a mapping from a high dimension to a low dimension of 2-3, so that the proximity or distance between points in the high dimension is maintained in the low dimension. Syllables feature vectors were reduced from 8 dimensions to 2, and words feature vectors from 1,190 dimensions to 2. 
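The word-level representation and the two noise-reduction steps described above (median clipping with a multiplication factor of 3.5, followed by small-object removal based on counting non-zero neighbors) can be sketched as follows. One ambiguity in the description is whether the median comparison is applied to the power mel-spectrogram or to its dB version; the sketch assumes the non-negative power spectrogram and converts to dB (with a floor of roughly -80 dB) afterwards. SciPy's image convolution is used only to count neighbors; the neighbor threshold and function names are assumptions.

```python
import numpy as np
import librosa
from scipy.ndimage import convolve

def word_melspec(y, sr, n_mels=35, fmin=700, fmax=3_900):
    """Power mel-spectrogram of an isolated word (35 mel filters, 0.7-3.9 kHz)."""
    return librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels, fmin=fmin, fmax=fmax)

def median_clip(S, factor=3.5):
    """Keep only cells exceeding `factor` times both their row and column medians."""
    row_med = np.median(S, axis=1, keepdims=True)
    col_med = np.median(S, axis=0, keepdims=True)
    mask = (S > factor * row_med) & (S > factor * col_med)
    return mask

def remove_small_objects(mask, min_neighbors=2):
    """Zero out isolated cells whose number of non-zero 8-neighbors is below a threshold."""
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0                              # do not count the cell itself
    neighbors = convolve(mask.astype(int), kernel, mode="constant")
    return mask & (neighbors >= min_neighbors)

def clean_word_spectrogram(y, sr):
    """Mel-spectrogram -> median clipping -> small-object removal -> log scale."""
    S = word_melspec(y, sr)
    mask = remove_small_objects(median_clip(S))
    return librosa.power_to_db(S * mask, ref=np.max, top_db=80)   # floor ~ -80 dB
```

Flattening the cleaned 35 x 34 spectrogram then yields the 1,190-dimensional word vector used for the dimensionality-reduction analyses described above.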
We expect that syllables or words that were divided by listening into one group should be in the same cluster, whereas feature vectors from sounds classified subjectively as belonging to different groups would be divided by the algorithm into different clusters and would be far apart. While methods based on linear algorithms such as PCA may not yield clear results, the usage of a nonlinear method such as t-SNE, may show the clustering and separation of sounds in a manner similar to their definition on an auditory basis. For analysis and computations, we used Python 3.8 and suitable packages; Librosa (McFee et al., 2015) for audio signal processing, and Scikit-learn (Kramer, 2016) for data analysis and data reduction. Dataset for this analysis and codes are available on GitHub (see Data Availability Statement). Detection of Bulbul Calls The analysis of words and syllables receives as input an audio signal where the relevant vocalization is located. It is therefore necessary to identify and extract the desired call events from long and noisy recordings. This can be done manually; however, the number of call events that can be derived in this way is limited. A machine learning approach should be applied in order to extract thousands of vocalization units for further analysis. We used several Deep Neural Networks (DNN), and in particular Convolutional Neural Networks (CNN) (Goodfellow et al., 2016) to automatically detect the Bulbuls' calls in the recordings. Most of the recordings are between half an hour to 1 h long and contain intense background noise as well as other birds' and other animals' vocalizations (including human speech). Several models were tested for the detection: (a) A CNN with 5 blocks of convolution and max pooling layers, connected to a 90 hidden units fully connected (FCN) layer and an output layer with a total of 1,117,781 trainable parameters; (b) A resnet architecture with 14 convolution layers in resnet blocks, connected to a FCN layer of 90 hidden units with a total of 625,397 trainable parameters; (c) A mini-Xception model (Chollet, 2017(Chollet, , 2021 with 7 convolution layers and a total of 717,249 trainable parameters. The input for all the models was obtained by dividing the acoustic signal into consecutive segments of 1 s each, with an overlap of 50%. For each segment, a log mel-spectrogram was calculated by using frames of 2,048 samples (∼48 ms for fs = 44,100) and hop size of 700 samples. The mel-spectrogram is a matrix of 50 × 60 (number of mel filters x number of time bins), which was pre-processed by median clipping and small object removal for noise reduction. The CNN model (Figure 7) is composed of 5 blocks of convolution: a first block of a convolution layer with 32 3 × 3 kernels following by a max-pooling layer (2 × 2) and a Relu activation function. The following convolution blocks are the same, where the number of kernels doubles at each block. After flattening the output feature map of the final convolution layer, a fully connected layer of 90 units is applied with dropout of 0.5. Finally, the output layer with one unit and a Sigmoid activation function and threshold value of 0.5 yields a binary output of (Bulbul = 1/non-Bulbul = 0). Consecutive segments predicted as Bulbul (1), were merged into one call event for further processing. The training set for the detection included 57 recordings of variable durations, with a total duration of more than 8 h, which were annotated manually. 
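A minimal Keras sketch of the first detection architecture described above: five convolution/max-pooling blocks starting at 32 3x3 kernels and doubling at each block, a 90-unit fully connected layer with dropout of 0.5, and a sigmoid output over 50 x 60 log mel-spectrogram segments. Padding choices and the optimizer are assumptions, so the exact trainable-parameter count will not necessarily match the 1,117,781 reported.

```python
from tensorflow.keras import layers, models

def build_bulbul_cnn(input_shape=(50, 60, 1)):
    """Binary Bulbul/non-Bulbul detector over 1-second log mel-spectrogram segments."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    filters = 32
    for _ in range(5):                    # five conv/max-pool blocks, kernel count doubling
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        model.add(layers.MaxPooling2D((2, 2)))
        filters *= 2
    model.add(layers.Flatten())
    model.add(layers.Dense(90, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(1, activation="sigmoid"))      # Bulbul = 1 / non-Bulbul = 0
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_bulbul_cnn()
# model.fit(train_segments, train_labels, validation_split=0.1, epochs=20)  # hypothetical arrays
```

Consecutive segments predicted as Bulbul can then be merged into single call events, as described in the text.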
The annotations were made by examining the spectrograms and listening to the corresponding sounds, and the start and end times of each identified vocalization were listed ("strong labeling, " Mesaros et al., 2021). This dataset contains several thousands of Bulbul calls, along with other birds, human activity, and many other sounds from various sources. We used 70% of this dataset for training and the remainder for testing. Out of the training data, 10% was randomly selected and served for validation. A segment-based evaluation is applied, and each segment is considered a bird call if at least 30% of it is overlapped with an annotated Bulbul vocalization event. For data augmentation we used five different methods to increase variability of the data, thus improving the robustness of the networks: (a) Adding white noise to each Mel-spectrogram. Convnet Network Performance-Bulbul Event Detection We measured the performance of the DNNs in detecting Bulbul's vocalizations using a test dataset of 3 h with several thousand individual calls, which also contained high background noise from other birds and animals, as well as anthropophonic and geophonic sounds (Righini and Pavan, 2020). The test dataset was pre-processed with the same procedure used for the training dataset, which included MFB calculation, median clipping, and small object removal. The test set was randomly selected from the recordings dataset, and a segment-based evaluation was carried out using a 1 s segment. A correct identification rate of 75% (True Positive Rate, or recall, i.e., the ratio between the number of Bulbul vocalization segments correctly identified, to the total number of segments with Bulbul vocalization in the test recording set) was yielded by the CNN described in section "Detection of Bulbul Calls, " with a relatively low False Positive occurrences of less than one third (27%) of the True detections. In a manual examination of the results, the non-identified calls (false negatives) were usually further from the microphone or very noisy. The Resnet and the mini-exception models yielded similar results. A Wide Repertoire of Distinct Words That Repeat Themselves The White Spectacled Bulbul demonstrated a broad vocabulary of more than 30 distinct words. Over 660 calls were tagged, named, and analyzed, and were manually categorized as 13 different words (see examples in Supplementary Figure 1). Each word was represented by a mel spectrogram of 35 × 34 which was cleaned and filtered as described in section "Word Analysis." Two computational analyses were performed to visualize the 1,190 dimensions of the mel-spectrogram as a two-dimensional map-PCA and t-SNE. Figure 8A shows the PCA result where each dot represents a word, and each color represents one unique naming tag. As shown, the vocalizations that were perceived and categorized as belonging to the same word by a human expert were also mapped to the same region on the 2-D plane of the unsupervised PCA (groups of similar colors). This grouping is further demonstrated using a second unsupervised method: the tSNE analysis ( Figure 8B). The tSNE plot places most of the words (different colors) in well-defined, separate clusters. These results suggest that Bulbuls use distinct words that appear nonrandom as they repeat themselves across different recordings throughout the year. Our manual process of naming words and categorization aligns well with these unsupervised dimensionality reduction analyses. 
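The two unsupervised projections reported above can be reproduced with scikit-learn, which the Methods list among the analysis tools. In this sketch, X is the matrix of flattened, cleaned word mel-spectrograms (number of words x 1,190) and labels holds the manual word tags; the perplexity and other t-SNE settings are assumptions, not values from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def project_words(X, labels, perplexity=30, random_state=0):
    """2-D PCA and t-SNE embeddings of flattened word spectrograms, colored by tag."""
    X = np.asarray(X, dtype=float)
    pca_2d = PCA(n_components=2).fit_transform(X)
    tsne_2d = TSNE(n_components=2, perplexity=perplexity, init="pca",
                   random_state=random_state).fit_transform(X)

    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, emb, title in zip(axes, (pca_2d, tsne_2d), ("PCA", "t-SNE")):
        for tag in sorted(set(labels)):
            idx = [i for i, lab in enumerate(labels) if lab == tag]
            ax.scatter(emb[idx, 0], emb[idx, 1], s=10, label=tag)
        ax.set_title(title)
    axes[1].legend(fontsize="x-small", ncol=2)
    return pca_2d, tsne_2d
```

Words that were tagged identically by ear should appear as compact, well-separated clusters in both panels, which is the visual check described above.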
Different Words Are Composed From the Same Shared Base Units A total of 1004 audio signals containing 22 different syllables were excerpted from the words and manually categorized and tagged with a number by listening and examining the spectrogram (see examples in Supplementary Figure 2). Syllables were represented by an eight-parameter feature vector, which includes-syllable length, spectral flatness, spectral centroid, bandwidth, and four Legendre polynomials coefficients-based on the fundamental frequency contour. The results of both PCA and tSNE are provided in Figures 9, 10A, respectively. Figure 10B shows that using only the Legendre coefficients as parameters is sufficient to describe the variance of the acoustic signal. In Figure 10A, words (denoted with capital letters) are composed of different syllables. The same syllables often appear in different words. This analysis can serve as an effective test or validation for manual assessments; for example, we found that two syllables from different words that clustered together were initially misidentified as different syllables. Later listening and visual inspection of the spectrograms confirmed that they represent a single syllable in different contexts. These results show that there is a collection of distinct syllables that repeat themselves and appear in different words, indicating that different words are constructed from the same shared units of similar and non-random syllables. Classification of Words From a New Dataset The final stage in the automatic pipeline presented above is to classify the segments detected as Bulbul vocalizations by the deep CNN into their corresponding classes. For this purpose, we first applied the trained CNN model presented in section "Detection of Bulbul Calls" to a 3-h long recording dataset. A group of 800 segments were detected as Bulbul calls and demarcated using the model. These were used to construct two test datasets: 1. A dataset of 126 segments consisting only of words recognized as belonging to the predefined repertoire, 2. A dataset containing 200 segments selected randomly from all detected segments. Later examination found that these included both known words (101 segments) and unknown segments that cannot be classified into existing word-categories. These segments include words that the researcher has yet to annotate, a mix of words (when more than one bird sings in unison), fragments of words (initial or final syllables), as well as a few false positives (i.e., not a Bulbul vocalization). Using the dimensionality reduced PCA representation described in section "A Wide Repertoire of Distinct Words That Repeat Themselves" for training, the high-dimensional mel-spectrograms of the test segments were projected into the reduced PCA space to produce a low-dimensional representation of the test data. Consequently, three simple classifiers were used to classify the test segments: A k-nearest neighbor (KNN) classifier with K = 3, a nearest centroid classifier, where the prediction of the test word is set according to the label of the closest centroid among all centroids of the training groups, and a Support Vector Machine (SVM) with a radial basis function kernel. The classification results of applying the classifiers on the first fully annotated test set are summarized in Table 1. As can be seen, using a 10-dimensional representation a very high classification accuracy was obtained, of 95.2, 94.4, and 96.8%, for the nearest centroid, KNN and SVM, respectively. 
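A sketch of the classification step just described: the PCA basis is fitted on the labelled training spectrograms, detected test segments are projected into the same reduced space (for example 10 dimensions), and three simple scikit-learn classifiers are compared. Hyperparameters not stated in the text, such as the SVM regularization constant, are left at library defaults.

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def classify_in_pca_space(X_train, y_train, X_test, y_test, n_components=10):
    """Project onto a PCA basis fitted on training words, then compare three classifiers."""
    pca = PCA(n_components=n_components).fit(X_train)
    Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

    classifiers = {
        "nearest centroid": NearestCentroid(),
        "kNN (k=3)": KNeighborsClassifier(n_neighbors=3),
        "SVM (RBF kernel)": SVC(kernel="rbf"),
    }
    scores = {}
    for name, clf in classifiers.items():
        clf.fit(Z_train, y_train)
        scores[name] = accuracy_score(y_test, clf.predict(Z_test))
    return scores
```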
Even better scores were achieved using a 100 dimensional representation. The same pre-processing was used in the second dataset, in which the detected words were selected randomly. However, to reject the unrelated segments detected erroneously as Bulbul, a threshold value was set, based upon the distances of all training word samples from their respective closest centroid. For each test segment, whenever the nearest centroid distance is higher than the threshold, this instance is discarded. Using this procedure, most of the non-Bulbul segments were rejected, as well as some Bulbul vocalizations. A classification accuracy of 77% was achieved for this dataset. Evidently, when the CNN is used to identify words that were included in the training repertoire, this classification tool can guarantee a fully automated process, with very high recognition rates. When unknown vocalizations are also considered, recognition rates are lower. These can be improved in a number of ways; the researcher may inspect the detected words before classification to remove the irrelevant vocalizations, and high accuracy results could be also achieved by applying simple classification tools. DISCUSSION The field of bio-acoustic research is rapidly expanding, with technological advances facilitating new approaches to fundamental biological questions and new applications in conservation. This includes utilization of deep neural networks in ecological studies for monitoring and processing large datasets of field recordings (Bergler et al., 2019;Dufourq et al., 2021). As in our framework, most of these studies use a convolutional neural network, with an augmentation approach similar to ours. These are typically complemented by pre-processing and postprocessing stages, which are tailored to the specific species, environment and future use of the data. Several frameworks that automate the analysis of animal communication and quantify vocal behavior have been developed. These include studies and software packages such as Sound Analysis Pro (Tchernichovski and Mitra, 2004), which includes automatic segmentation and calculation of acoustic features as well as clustering of vocal units, and DeepSqueak (Coffey et al., 2019), in which a regional CNN (Faster rCNN) and k-means are used to detect and cluster mouse ultrasonic vocalizations. Similar to our syllable analysis approach, these programs extract different acoustic features to characterize and differentiate between vocal units, and use unsupervised methods to visualize and classify the data. In our The analysis includes only 4 Legendre coefficients. In both figures same syllables appears in different words. For example, the purple dots that represent the syllable "towy," were excerpted from two different words (denoted by the letters D and H) and are clustered together. Similarly, the syllable "ti" (dark green), derived from six different words, resides in the same region both in PCA and tSNE analyses. study, however, we used solely Legendre Polynomials to capture the shape of the fundamental frequency contour, demonstrating that very few coefficients are sufficient to effectively express the different patterns of syllable variation. Goffinet et al. (2021) used a variational autoencoder (VAE) to extract features in reduced latent spaces, employed on mouse USV and zebra finch vocalizations, and demonstrated the effectiveness of a latent space representation when compared with handpicked selected features in different vocal analyses. 
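The rejection rule applied to the randomly selected test set, as described above, can be sketched as follows: class centroids are computed in the reduced PCA space, a threshold is derived from the distances of training samples to their nearest centroid, and any detected segment farther than the threshold from every centroid is discarded as unknown. Using the maximum training distance, rather than a percentile, as the threshold is an assumption.

```python
import numpy as np

def fit_centroid_rejector(Z_train, y_train):
    """Return class labels, centroids, and a distance threshold learned from training data."""
    y_train = np.asarray(y_train)
    classes = sorted(set(y_train))
    centroids = np.stack([Z_train[y_train == c].mean(axis=0) for c in classes])
    # Distance of every training sample to its nearest centroid
    d = np.linalg.norm(Z_train[:, None, :] - centroids[None, :, :], axis=2).min(axis=1)
    return classes, centroids, d.max()           # threshold choice is an assumption

def predict_or_reject(Z_test, classes, centroids, threshold):
    """Nearest-centroid label, or None when the segment is farther than the threshold."""
    d = np.linalg.norm(Z_test[:, None, :] - centroids[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return [classes[j] if d[i, j] <= threshold else None
            for i, j in enumerate(nearest)]
```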
This kind of analysis shares the data-driven approach applied in our word analysis process and seems effective for recognition of patterns and characterization of units from the complex and high-dimensional data of vocal communication. However, most of these studies used recordings in artificial environments (did not contain high background noise) or were designed for a specific species, and it is challenging to apply them to non-model passerine in the field. Several significant cross-disciplinary challenges in the study of vocal communication still exist (Prat et al., 2017;Teixeira et al., 2019;Mercado and Perazio, 2021). These methodological The classification accuracy is the ratio between the number of correctly classified words and the total number of words in the test dataset (of 126 words). challenges arise due to the vast amount of digital data produced, the large number of parameters that can be potentially extracted (e.g., frequency, duration, pitch, etc.) and the lack of clear hypotheses regarding the parameters and the signals they convey (Suzuki et al., 2019). A basic conceptual challenge is the categorization of vocal units, and more generally, the definition of the repertoire. Most often, the construction of a repertoire dictionary is an expression of guidelines defined by the researcher. Vocal units can be categorized, for example, by a "hard" division (only the exact call is considered the same word), or a "soft" division (a variety of calls that sound alike are considered the same word). This scheme may or may not express the way that animals perceive or use their repertoire (Kershenbaum et al., 2016;Mercado and Perazio, 2021). Moreover, while human perceptual properties may be fit for such a task, this may cause additional methodological challenges, may add room for inconsistencies and reduce reproducibility. Thus, quantitative validation of vocal categorization may aid in overcoming these challenges. By taking advantage of the benefits of automatic analysis we overcome these challenges in two ways: 1. Processing large amounts of data-Our CNN model which is used for identifying Bulbul sounds is highly efficient since it reduces manual work and processes big datasets. Further, deep learning has the ability to recognize discriminative patterns in a non-trivial way and can consider combinations of multiple variables that the human auditory system may miss. With the right adjustments, this network can be used with noisy recordings taken in the wild, to identify various bird species and perhaps other animals with wide vocal repertoire. As part of a post-processing, the call events can be analyzedboth for audio analysis (e.g., clustering and comparing between calls) and for statistical analysis (e.g., call events per day and daily or seasonal fluctuation of calls). Validation and avoidance of biases-Our automatic analysis for syllables and words can validate our subjective classification and makes it possible to significantly reduce biases that may occur in manual analysis. The categorization process can be done manually by characterizing vocal units in a parametric analysis (like frequency, etc.) into distinct words. But, it is a long and tedious process, made more difficult by large amounts of audio files. Linear and non-linear dimensionality reduction and visualization techniques as well as supervised classification schemes can demonstrate that the manual choices which have been made in early stages are to an extent consistent. 
Further, it can highlight mistakes and reveal new insight about the categorization. As seen in our results, by using PCA, two syllables from different words which were cataloged differently were found to be the same syllable. In addition to validating the manual work, the use of our syllable analysis tool enables us to compare similar syllables in different geographic regions in order to identify minor differences. Furthermore, the word analysis tool allows to compare dialect differences between populations, identify which words are unique and which are shared, and investigate if these differences are correlated with geographical distances or genetic differences. During our fieldwork, cameras were placed next to some of the automatic recorders. Further behavioral research can be conducted by analyzing these thousands of short video clips using other deep learning models. Additionally, since females and males are morphologically similar, sex and age of the recorded individuals are unknowns. This information, and possible sex differences in vocalizations can be obtained by combining audio recordings, corresponding video clips and DNA samples (research in progress). Our study revealed a particularly large vocal repertoire produced by various Bulbul populations. We confirmed through our analysis that the repertoire is derived from a combination of the same basic units which are used to generate new or unique words. This result may be explained in at least two ways. First, is that the Bulbul uses an efficient hierarchical repertoire by maximizing a limited stock of syllables for composing a variety of different words. These signal combinations can be cultural or socially transmitted. Another explanation is that syllables are innate and there may be genetic constraints upon the neural control or physiological mechanisms, limiting the production of syllables, while words, which are made up of syllables, can be invented or learned. The combination of the large repertoire together with a vocal structure of words comprised from syllables may suggest that the Bulbul is an open-ended vocal learning species (Cornez et al., 2017). Our pipeline provides a robust framework that enables us to process large amounts of data with very little manual intervention, and to classify and validate our findings in an unsupervised analysis. Using the pipeline, raw noisy recording can be processed down to the level of a single word or syllable. This facilitates further analysis and research on Bulbul vocal communication, opening the door to investigation of question such as whether the emergence of novel words characterizes isolated populations, whether different Bulbul calls convey a specific message or information and whether the syllable arrangement into words has certain rules that operate over them. The framework's code and documentation are available on GitHub in the following link: https://github.com/BulbulNET? tab=repositories. The code can be utilized for study of Bulbul vocalizations as is and can easily be adapted to analysis of vocalizations of other passerines that share similarities with the Bulbuls' vocalization structure. We would be happy to assist in the incorporation of the code or parts of it into new pipelines that are being developed for such studies with the goal of generating new insights into the complex world of animal acoustic communication. DATA AVAILABILITY STATEMENT The analysis methods and datasets for this study can be found in GitHub, in the following link https://github.com/BulbulNET? 
tab=repositories.
Multidrug-resistant profile and prevalence of extended spectrum β-lactamase and carbapenemase production in fermentative Gram-negative bacilli recovered from patients and specimens referred to National Reference Laboratory, Addis Ababa, Ethiopia Background The emergence of multidrug-resistance (MDR), production of extended-spectrum β-lactamases, and carbapenemase in members of fermentative gram-negative bacilli are a serious threat to public health. Objective The aim of this study was to determine the burden of multi-drug resistance, the production of extended-spectrum β-lactamases (ESBLs), and carbapenemase in fermentative Gram-negative bacilli in Ethiopian Public Health Institute. Materials and methods A cross-sectional study was carried out from December 2017 to June 2018. Different clinical samples were collected, inoculated, and incubated according to standard protocols related to each sample. Bacterial identification was performed by using the VITEKR 2 compact system using the GNR card. Antimicrobial susceptibility testing was carried out by the Kirby-Bauer disc diffusion method. Production of ESBL and carbapenemase were confirmed by combination disc and modified Hodge Test method respectively. Results A total of 238 fermentative Gram-negative bacilli were recovered during the study period, among which E.coli were the predominant isolates followed by K. pneumoniae. The highest percentage of antibiotic resistance was noted against ampicillin (100%) followed by trimethoprim/sulfamethoxazole (81.9%). The isolates showed better sensitivity towards carbapenem drugs. Out of 238 isolates, 94.5% were MDR and of which 8.8% and 0.8% were extensively and pan drug resistant, respectively. Nearly 67% and 2% of isolates were producers of ESBL and carbapenemase, respectively. The isolation rates of MDR, ESBL, and carbapenemase producing stains of the isolates were ≥70% in intensive care unit while the isolation rates in other wards were ≤25%. Conclusions The findings of this study revealed that the burden of MDR and ESBL was high and carbapenemase producing isolates were also identified which is concerning. This situation warrants a consistent surveillance of antimicrobial resistance of fermentative Gram-negative bacilli and implementation of an efficient infection control program. Introduction Fermentative Gram-negative bacilli (FGNB), belonging to the family of Enterobacteriaceae, are an important cause of diseases in humans, among which urinary tract infections, bloodstream infections, hospital-and healthcare-associated pneumonia, and a number of intraabdominal infections are the most important [1,2] Antimicrobial resistance in this group of bacteria has been recognized by the World Health Organization as one of the most significant problems challenging human health [3]. This problem is further compounded by the emergence of multi-drug resistant (MDR), β-lactamase, and carbapenemase-producing bacterial pathogens [1,4]. Acquisition and transferring of antibiotic resistance genes within or via different species of Gram-negative bacteria through mobile plasmids and transposons are reported to be the principal cause of the production of β-lactamases [5,6]. Of particular importance is the production of extended-spectrum β-lactamases (ESBLs) that have the capacity to hydrolyze higher generation cephalosporin and cause resistance to many drugs including the third-generation cephalosporines, such as cefotaxime, ceftriaxone, and ceftazidime [7,8]. 
Extended-spectrum β-lactamases producing FGNB have also been identified to coexist with resistance to other antimicrobial classes [9,10] rendering the most useful drugs ineffective ultimately limiting treatment choices for infections. The main drivers for the development of resistance are antimicrobial selection pressure and the spread of the resistant organisms [11,12]. Widespread and indiscriminate use of a broad-spectrum antimicrobial agent by a physician to treat an infection not only impacts the specific pathogen causing the disease but also kills populations of susceptible organisms that form a part of normal flora. In addition, the widespread use of antimicrobials as growth promoters in agriculture and animal husbandry creates a selective pressure that favours bacteria that are resistant to microbial, which can easily transferred to human through various chains [13,14]. Although carbapenem antibiotics have been used as a last alternative to treat infections caused by multidrug-resistant FGNB. This is due to the fact that carbapenems are β-lactam drugs that are structurally different from penicillins and cephalosporins that have the widest spectrum of activity among the β-lactams with excellent activity against members of the FGNB. However, the activity of these antibiotics has been impaired by the development of drug-resistant strains against potent carbapenems due to extensive exposure of bacteria to antibacterial agents [15,16]. The emergence and global spread of multidrug-resistance amongst bacterial pathogens implicated in causing both nosocomial and community-acquired infections are a major threat to public health everywhere. [17,18]. The problem is far more important in bacterial species belonging to the FGNB because of their ubiquity in the environment and the relative ease of acquisition of plasmids containing genes that encode for ESBLs and other resistance genes that confer resistance to many other classes of antibiotics [7,19,20]. Despite the escalating burden of multidrug resistance (MDR), ESBLs and carbapenemase production in FGNB across the globe, data regarding the prevalence of MDR, ESBLs, and carbapenemase-producing FGNB in Ethiopia are limited. The objective of the present study is to determine the prevalence of MDR, ESBL and carbapenemase-producing FGNB. Their reliable determination plays a vital role in the successful management of infection and implementation of valid therapeutic strategies. Materials and methods This prospective cross-sectional study was conducted at Ethiopian Public Health Institute (EPHI) in Clinical Bacteriology and Mycology Reference Laboratory from December 2017 to June 2018. The EPHI clinical microbiology laboratory is the only national reference and research laboratory where patients from different parts of the country are referred for culture ID and sensitivity tests. The laboratory is accredited by Ethiopian National Accreditation Office (ENAO) as a referral bacteriology laboratory since July 2017. The African Society for Laboratory Medicine (ASLM) awarded this laboratory with a certificate of recognition for achieving ISO accreditation and best practice in Laboratory Medicine after going through ASLM SLIPTA audits at their annual conference conducted in Abuja, Nigeria in 2018. 
Patients referred to the Ethiopian Public Health Institute (EPHI) from Addis Ababa health facilities who were clinically suspected of bacterial infection, had a request form for culture and sensitivity testing completed by a physician, and were willing to participate in the study were enrolled. Different clinical samples were submitted to the laboratory and processed following standard procedures. The collected specimens were inoculated onto appropriate isolation culture media (blood culture broth, blood agar, chocolate agar, and MacConkey agar) and incubated at 35-37 °C according to the standard protocol for each sample type. In cases where a delay in culturing was unavoidable, appropriate transport media were used. All commonly isolated fermentative Gram-negative bacilli recovered from the various clinical specimens during the study period were included; duplicate isolates from the same patient were excluded from the study. Data were collected by the principal investigator using a pre-developed data collection form based on the request paper; the variables collected included socio-demographic characteristics (age and sex), type of specimen, type of health facility from which the patient or specimen was referred, ward location of the patient, and previous antibiotic exposure. Isolates were preliminarily characterized by colony characteristics and Gram-stain reaction. Bacterial identification was performed with the VITEK® 2 compact system using GN® cards, in accordance with the manufacturer's instructions (bioMérieux, France). Test for ESBL production All strains that showed an inhibition zone diameter of less than 27 mm for cefotaxime or less than 22 mm for ceftazidime were subjected to the ESBL confirmatory test. ESBL confirmation was performed by the combination disc method, in which discs of ceftazidime (CAZ) and cefotaxime (CTX), alone and in combination with clavulanic acid (CA, 10 μg), were used [21]. The antibiotic discs were placed onto a Mueller-Hinton agar plate seeded with a suspension of the isolate adjusted to a 0.5 McFarland turbidity standard. An increase of at least 5 mm in the zone of inhibition of the disc combined with clavulanic acid compared with the single disc was considered positive for an ESBL producer [21]. Test for carbapenemase production Bacterial isolates that were resistant to imipenem (IPM, 10 μg), meropenem (MEM, 10 μg), or ertapenem (ERT, 10 μg) based on CLSI breakpoints [21] were subjected to confirmation of carbapenemase production. Confirmation of carbapenemase production in FGNB was conducted by the Modified Hodge Test (MHT): a Mueller-Hinton agar plate was inoculated with a 1:10 dilution of a suspension of overnight-subcultured E. coli ATCC 25922 standardized to 0.5 McFarland and streaked for confluent growth using a swab. A 10 μg ertapenem disk was placed in the center, and each test isolate was streaked from the disk to the edge of the plate. A positive MHT was indicated by a cloverleaf-like indentation of the E. coli ATCC 25922 growing along the test organism's growth streak within the disk diffusion zone [21]. Quality assurance The performance of all media and antibiotics was checked with the recognized standard strains E. coli ATCC 25922 and Pseudomonas aeruginosa ATCC 27853. Standardization of the carbapenemase and ESBL tests was performed using K. pneumoniae ATCC BAA-1705 and ATCC 700603 as the respective positive controls and E. coli ATCC 25922 as the negative control. 
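To make the decision rules above concrete, the following minimal sketch (not part of the original laboratory workflow) encodes the screening cut-offs and the combination-disc confirmation criterion; the zone diameters in the example are hypothetical.

```python
# Minimal sketch of the phenotypic ESBL decision rules described above.
# Zone diameters are in millimetres; the example values are hypothetical.

def esbl_screen_positive(ctx_zone_mm, caz_zone_mm):
    """Screening: CTX zone < 27 mm or CAZ zone < 22 mm triggers confirmation."""
    return ctx_zone_mm < 27 or caz_zone_mm < 22

def esbl_confirmed(single_disc_mm, combo_with_clav_mm):
    """Combination disc: an increase of >= 5 mm with clavulanic acid = ESBL producer."""
    return (combo_with_clav_mm - single_disc_mm) >= 5

if __name__ == "__main__":
    # Hypothetical isolate: CTX 18 mm alone, 26 mm with clavulanic acid, CAZ 20 mm.
    if esbl_screen_positive(ctx_zone_mm=18, caz_zone_mm=20) and \
       esbl_confirmed(single_disc_mm=18, combo_with_clav_mm=26):
        print("Phenotypic ESBL producer")
```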
Data analysis and interpretation The data were collected, cleaned, and analyzed using SPSS version 20. Frequencies and percentages of MDR, carbapenemase-, and ESBL-producing Gram-negative bacteria were calculated. Tables and figures were used for data presentation. Ethics and consent to participate The study was carried out after approval by the Internal Review Board (IRB) of the Department of Medical Laboratory Sciences (DRERC/323/17/MLS), and permission letters were also obtained from the Ethiopian Public Health Institute. Data collection started after informed written consent was obtained from the study subjects; an assent form was completed and signed by parents or guardians for study subjects aged 16 years or younger. All information obtained from the study subjects was coded to maintain confidentiality. Results A total of 947 clinical samples were submitted to the laboratory during the specified time period, and bacterial pathogens were recovered from 306 of them, among which 238 were fermentative Gram-negative bacilli. Of these isolates, 138 were recovered from inpatients and 100 from outpatient departments. Among the 238 isolates, 61.7% (147/238) were isolated from urine and 27.3% (65/238) from blood. E. coli was the dominant isolate, accounting for 60.5% (144/238), and K. pneumoniae was the second most common species, representing 30.3% (72/238) of the total isolates (Table 1). A total of 136 isolates (57.1%) were recovered from patients with a history of exposure to one or two antibiotics before specimen collection; this exposure ranged from first-line antimicrobial agents up to the last treatment option (carbapenems). Of these 136, 67 isolates were recovered from patients empirically treated with two different antibiotics before specimen collection, whereas 69 were recovered from patients empirically treated with a single antibiotic. Of the 69, 21 were isolated from patients empirically treated with CRO, followed by CIP (15) and SXT (6); of the 67, 22 were isolated from patients empirically treated with CRO and VA, followed by Amp and Gn (17) and CRO and Mer (6). The antibiotic resistance profile of the bacterial isolates against 22 antibacterial agents is summarized in Table 2. The highest percentage of antibiotic resistance was noted against ampicillin (100%), followed by trimethoprim/sulfamethoxazole (81.9%), piperacillin (80.3%), and tetracycline (80.3%). The fermentative Gram-negative bacteria showed the lowest resistance towards the carbapenem drugs; the resistance rate to imipenem, meropenem, doripenem, and ertapenem was 1.7% for each. These were followed by amikacin and the piperacillin-tazobactam combination, with overall resistance rates of 7.9% and 26.5%, respectively (Table 2). Out of the 238 fermentative Gram-negative bacilli, 94.5% were MDR, of which 8.8% and 0.8% were XDR and PDR, respectively. Among 144 strains of E. coli, 99.3% were MDR, of which 18.1% were XDR. Similarly, out of 72 isolates of K. pneumoniae, 90.3% were MDR, of which 11.2% and 2.8% were XDR and PDR, respectively. Among the strains of C. freundii and E. cloacae, 83.3% and 75% were MDR, respectively, but none of the strains of these two species were XDR or PDR. Nearly 67% and 2% of the FGNB were producers of ESBL and carbapenemase, respectively. ESBLs were produced by 76.4%, 63.2%, 62.2%, and 50% of K. pneumoniae, E. coli, E. cloacae, and C. freundii, respectively. None of the strains of E. coli, E. 
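To make the resistance categories reproducible, the sketch below classifies an isolate as MDR, XDR, or PDR from per-category susceptibility results, assuming the commonly used Magiorakos-type definitions (MDR: non-susceptible in at least three categories; XDR: susceptible in at most two categories; PDR: non-susceptible in all categories); the category names and the example isolate are illustrative and do not reproduce the study's exact agent panel.

```python
# Illustrative MDR/XDR/PDR classification from per-category resistance calls,
# under Magiorakos-type definitions (assumed here, not quoted from the study).

def classify(resistant_by_category):
    """resistant_by_category: dict mapping antimicrobial category -> True if resistant."""
    total = len(resistant_by_category)
    resistant = sum(resistant_by_category.values())
    susceptible = total - resistant
    if resistant == total:
        return "PDR"
    if resistant >= 3 and susceptible <= 2:
        return "XDR"
    if resistant >= 3:
        return "MDR"
    return "non-MDR"

if __name__ == "__main__":
    isolate = {  # hypothetical E. coli isolate
        "penicillins": True, "cephalosporins": True, "carbapenems": False,
        "aminoglycosides": True, "fluoroquinolones": False,
        "folate pathway inhibitors": True, "tetracyclines": False,
    }
    print(classify(isolate))  # prints "MDR": resistant in 4 of 7 categories
```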
cloacae, or C. freundii produced carbapenemase; carbapenemase was, however, produced by 5.6% of K. pneumoniae strains (Table 3). The prevalence of MDR, ESBL-, and carbapenemase-producing FGNB by ward location is depicted in Fig 1. Most MDR isolates were recovered from patients in the intensive care unit (73%), followed by the medical ward (17.4%). Similarly, the isolation rate of ESBL-producing isolates was higher among patients in the intensive care unit (75.7%) than among those in the medical ward. The recovery rate of carbapenemase-producing isolates was three-fold higher among patients in the intensive care unit than in medical wards. The prevalence of MDR, ESBL, and carbapenemase production in relation to the clinical samples is presented in Fig 2. The prevalence of MDR isolates was greater in urine (62.5%) than in blood (28.4%) specimens. The isolation rate of ESBL-producing isolates was almost two-fold higher in urine (58.5%) than in blood culture (33.3%). There was no difference in the recovery rate of carbapenemase-producing isolates between urine culture and blood culture. The prevalence of MDR, ESBL-, and carbapenemase-producing FGNB among antibiotic-treated and non-treated patients is shown in Fig 3. The prevalence of MDR, ESBL, and carbapenemase production was higher among previously treated patients than among non-treated patients; the ratios of MDR, ESBL, and carbapenemase among treated versus non-treated patients were 58% to 41%, 58.8% to 41.2%, and 70% to 30%, respectively. Discussion In the present study, FGNB were tested against 22 antibacterial agents. The antibiotic resistance of FGNB against the first-line drugs was remarkably high. The resistance of FGNB to β-lactam/β-lactamase inhibitor combinations ranged from 26.5% for piperacillin/tazobactam to 100% for ampicillin, indicating that β-lactams combined with a β-lactamase inhibitor demonstrate better activity. Resistance of FGNB to other first-line drugs such as tetracycline, nitrofurantoin, and trimethoprim/sulfamethoxazole was also high, and the overall resistance rate of FGNB to cephalosporins, including the extended-spectrum β-lactam antibiotics, was above 70%. Similarly, except for amikacin, the antibiotic resistance rates of FGNB against aminoglycosides and fluoroquinolones were above 40%. This finding is consistent with studies conducted in other parts of Ethiopia (Gondar University Teaching Hospital) [23] and in Nepal [24], where the resistance proportions for first-line and other antibiotics were also high. However, our finding disagrees with studies conducted in Dessie, Ethiopia [25], and at an institute of cardiology in Turkey [26], where lower resistance proportions against first-line and higher-generation antibiotics were reported. This variation in antibiotic resistance proportions might be due to geographical differences and differences in study period. Resistance of FGNB to carbapenems, however, was low. Our finding in this regard is in line with earlier reports [23,27]. Our study demonstrated that E. coli, E. cloacae, and C. freundii were 100% susceptible to all carbapenems tested, but 5.6% of the strains of K. pneumoniae were resistant to all the carbapenem antibiotics (doripenem, ertapenem, imipenem, and meropenem). In the present study, out of 238 FGNB, 94.5% were MDR, among which 8.8% were XDR and 0.8% were PDR. 
There are published data on MDR fermentative Gram-negative bacilli in Ethiopia, but few use a proper definition of MDR, and none report XDR or PDR. Nevertheless, the MDR values recorded in the present study do not substantially deviate from earlier studies [23,28,29]. The highest proportion of MDR strains was detected in E. coli (99.3%), of which 18.1% were XDR, followed by K. pneumoniae (90.3%), of which 11.1% were XDR. The larger numbers of MDR and XDR strains in E. coli than in K. pneumoniae in our study could reflect the fact that more E. coli strains were isolated than K. pneumoniae. With regard to XDR and PDR, the current study is supported by the findings of previous studies [30,31]; however, our finding strongly disagrees with the study conducted by Ahmed Hasanin et al. [32] regarding the prevalence of XDR strains of FGNB. In this study, lower proportions of XDR E. coli (18%) and K. pneumoniae (11%) were found, compared with the previous study, in which higher proportions of XDR K. pneumoniae (52%) and E. coli (47%) were reported [32]. The reason for the difference in the prevalence of XDR might be the definition used to classify isolates as XDR and the location of the patients (inpatient or outpatient) from whom the specimens were obtained. Increased use of over-the-counter antibacterial drugs, incomplete courses of therapy, and prolonged therapy for recurrent bacterial diseases are commonly practised in Ethiopia. These practices could be cited as possible factors for the high prevalence of MDR and XDR bacterial species noted in the current study. Multidrug-resistant (MDR) strains of ESBL-producing FGNB are of particular concern. The phenotypic data generated in the current study demonstrate a considerable prevalence of ESBL producers, with 66.8% of FGNB producing ESBL. This is lower than in other studies conducted in Ethiopia and other countries [27,33,34]: an overall ESBL prevalence of 78.6% was reported in Addis Ababa, Ethiopia by Legese et al. [27], 85.8% in Gondar, Ethiopia by Feleke Moges et al. [33], and 79.3% in Tanzania by Manyahi J et al. [34]. The overall prevalence of ESBL production among FGNB in our study (66.8%) is broadly in line with a study conducted in Uganda [35], where 62.0% ESBL was reported. In contrast to this study, many researchers from various parts of Ethiopia have reported a lower prevalence of ESBL-producing FGNB: Dejene et al. reported 57.7% in Addis Ababa [36], Siraj et al. 38.4% in Jimma [37], and Mengistu et al. 23% in Jimma [38]. Other African countries also report a lower prevalence of ESBL than our study: Burkina Faso (58.0%) [39], Ghana (49.3%) [40], and Tanzania (45.2%) [41]. Variation in the prevalence of ESBLs among clinical isolates in different studies might be due to variation in geographic area, study period (ESBL prevalence changes rapidly over time), awareness in the utilization of broad-spectrum antibiotics, infection control systems, target population, sample size, and the method of ESBL detection. In the current study, a higher prevalence of ESBL was documented in K. pneumoniae (76.4%) followed by E. coli (63.2%), in line with studies conducted in Addis Ababa, Ethiopia [36] (K. pneumoniae 78.6%, E. coli 52.2%), Jimma, Ethiopia [37] (K. pneumoniae 70.4%, E. coli 28.2%), and Uganda [35] (K. pneumoniae 72.7%, E. coli 58.1%). However, a higher ESBL prevalence in E. coli than in K. pneumoniae has been reported in other studies [39,42]. 
Our study showed that urine was the most common sample from which ESBL-producing strains were recovered, in line with other studies in which urine was the major source of ESBL producers [35,43]. However, blood has been reported as the major source of ESBL producers by other researchers [36,39]. This could probably be explained by the number of strains isolated from each specimen type. Carbapenemase production in the present study was about 2%, which is much lower than the prevalence reported in studies carried out in Gondar, Ethiopia by Feleke et al. and in Addis Ababa, Ethiopia by Legese et al. [27], who reported higher carbapenemase prevalences of 12% and 16% among FGNB, respectively. The 2% prevalence in this study is comparable with a study conducted in Gondar, Ethiopia by Eshetie et al. [23], in which 2.7% of K. pneumoniae were carbapenemase producers. However, our finding strongly disagrees with studies done in Nigeria [44] and Tanzania [45], where the highest percentages of carbapenemase were documented (39.02% and 35%, respectively). This difference in the prevalence of carbapenemase-producing FGNB between studies might be due to the extensive utilization of carbapenem antibiotics; in particular, ESBL-producing strains of K. pneumoniae that harbour additional resistance mechanisms to other classes of antibiotics, such as the carbapenems, might be responsible for the emergence of carbapenem-resistant strains of K. pneumoniae. The present study demonstrated that the isolation rates of MDR, ESBL-producing, and carbapenemase-producing strains of FGNB were ≥70% in the intensive care unit, while the isolation rates of the same in other wards were ≤25%. Inappropriate and excessive antibiotic use, insufficient infection prevention and control programs, and increased use of invasive medical devices and invasive procedures in intensive care units have been implicated as risk factors for the development of MDR strains and the production of ESBL and carbapenemase [46,47]. Furthermore, more MDR, ESBL-, and carbapenemase-producing strains were documented in antibiotic-treated subjects than in those who were not treated. This is expected, because extensive exposure of bacteria to antibacterial agents is the main factor promoting the emergence and spread of MDR, ESBL-, and carbapenemase-producing bacteria. Conclusion and recommendations We observed a high burden of MDR as well as ESBL among the fermentative Gram-negative bacilli isolated at the study site. Very high resistance was recorded against ampicillin, piperacillin, sulfamethoxazole-trimethoprim, and tetracycline; hence, empirical treatment with these antibiotics is not encouraged. Carbapenem-resistant Klebsiella pneumoniae was also identified in this study, which is a concerning health care issue in Ethiopia. Therefore, routine infection prevention strategies, such as the rational use of antimicrobial agents in both animals and humans, the development of antibiotic stewardship programs for health facilities, and the implementation of strong surveillance of AMR, are needed to prevent and control the spread of antimicrobial-resistant pathogens in health care settings. Limitations of the study The isolates that proved extensively or pan-drug resistant in this study might be susceptible to tigecycline, colistin, or fosfomycin, but these agents were not tested owing to their lack of availability on the local market, and the ESBL and carbapenemase enzymes were characterized only phenotypically. 
In addition, the data used in this study were collected from patients referred from Addis Ababa health facilities, which do not represent the whole of Ethiopia. Therefore, large-scale studies that include the full antibiotic panels, the molecular epidemiology of the ESBL and carbapenemase genes, and sites from different parts of the country are needed to give a true picture of AMR in Ethiopia. Supporting information S1 Raw Data. Demographic data and various risk factors compiled from patients and specimens referred to EPHI during the study period. (XLSX)
Effect of a High Proportion of Rye in Compound Feed for Reduction of Salmonella Typhimurium in Experimentally Infected Young Pigs Public health concerns and the potential for food-borne zoonotic transmission have made Salmonella a subject of surveillance programs in food-producing animals. Forty-two piglets (25 d of age and initially 7.48 kg) were used in a 28 d infection period to evaluate the effects of a high proportion of rye on reducing Salmonella Typhimurium. Piglets were divided into two diet groups: a control diet (wheat 69%) and an experimental diet (rye 69%). After a one-week adaptation period, all piglets were orally infected with Salmonella Typhimurium (10⁷ CFU/mL; 2 mL/pig). Salmonella fecal shedding was evaluated on days 1, 3, 5 and 7 and then weekly after infection. At the end of the experimental period (day 28 after infection), the piglets were euthanized to sample feces, cecal digesta and ileocecal lymph nodes to determine the bacterial counts of Salmonella. The results suggest that the group fed the rye diet showed evidence of reduced Salmonella fecal shedding from day 14 onwards and a decreased number of Salmonella in the cecal digesta. However, the translocation of Salmonella into the ileocecal lymph nodes was not affected. Furthermore, feed intake, weight gain and feed conversion did not differ between the groups (p > 0.05). Introduction Salmonellosis is primarily a zoonosis, and interventions are possible at any stage from farm to fork [1]. Salmonella enterica serovar Typhimurium is strongly associated with pigs, with infections in humans caused by contaminated pork and meat products, causing an impact on public health [2,3]. In Germany, 33% of acquired infections, or 13,529 cases of salmonellosis, were reported in 2018 [4], making salmonellosis the second most commonly reported bacterial gastrointestinal disease in Germany after Campylobacter enteritis [4]. With regard to reducing the number of Salmonella cases in the feed-to-food chain, it has frequently been demonstrated that the use of antimicrobial agents in food animals favors the development of resistance among foodborne pathogens like Salmonella spp. [5]. Since antimicrobial resistance is on the rise, […] Germany Sales GmbH & Co. KG, Weilheim, Germany) during the respective trial period (24.5 ± 0.8 °C), and the lighting program in the stable was set to a twelve-hour day and night rhythm, so that the light was switched on from 07:00 to 19:00. Before beginning the trials, the stable and all materials were disinfected, and tests were carried out to confirm that they were free of Salmonella contamination. To rule out Salmonella contamination of the feed, and thus entry via the feed into the infection stable, the feed batches were examined for Salmonella in advance. To prevent cross-contamination between the individual pens, each pen was equipped with its own utensils (brooms, trays, spatulas). Protective clothing, disposable boot covers and gloves could be changed before entering the corresponding area of the stable. The positions of the groups in the stable were changed between the trials. Diets The composition of the diets is described in Table 1. The piglets were allocated to two groups (control: N = 21, n = 7; experimental: N = 21, n = 7) and fed ad libitum with complete pelleted feed containing either 69% wheat (control group) or 69% rye (experimental group; Table 1), based on the previous results of Wilke [14]. 
Feed chemical composition as well as the particle size distribution of the diets are summarized in Table 2. Diets were analyzed by standard procedures in accordance with the official methods of the Verband Landwirtschaftlicher Untersuchungs- und Forschungsanstalten (VDLUFA) [23]. The standardized methods of the Department of Animal Science, Aarhus University, Denmark, were used to evaluate the NSP and arabinoxylan levels. Feed particle size distribution was assessed by the wet-sieve method of Wolf et al. [24]. Experimental Design The Salmonella Typhimurium strain used in this study was obtained from a field study [25]: S. Typhimurium (antigenic formula 1,4,5,12:i:1,2; phage type DT 193). Three consecutive trials of the infection experiment followed the same investigation scheme (Figure 1). After a one-week adaptation period, all piglets were orally infected (day 0) with 2 mL of a broth containing ~1 × 10⁷ colony-forming units (CFU) of S. Typhimurium per animal, administered directly into the throat using a drencher as described previously [26]. After the infection, the animals were used in a 28-day experiment to determine the effects of the control (69% wheat) or experimental (69% rye) complete pelleted feed, and to determine a possible effect of the diets on the duration and level of Salmonella excretion. Fecal samples of the piglets were examined microbiologically for Salmonella species at defined time points (1, 3, 5, 7, 14, 21 and 28 days post-infection (dpi), as shown in Figure 1) to evaluate the status of the Salmonella infection. At the end of the experimental period (28 dpi), cecal chyme and lymph nodes were tested for Salmonella. During the experimental phase, performance parameters, i.e., feed intake, were recorded on a weekly basis (Figure 1). To prevent cross-contamination between the animals during the infection period, the animals were weighed only before the infection (0 dpi) and again at the end of the experiment (28 dpi; Figure 1). The corresponding parameters, average body weight gain (BWG) and feed conversion ratio (FCR), were determined. In accordance with standardized Salmonella diagnostics, a blood sample was taken from each animal on the day of the experimental infection and at the end of the experiment (Figure 1). Serum was examined serologically for specific antibodies against Salmonella using a Salmonella antibody ELISA (IDEXX Swine Salmonella Antibody Test, IDEXX Europe B.V., Hoofddorp, the Netherlands). The cut-off value was ≥10% optical density (OD). Bacteriological Analyses and Salmonella Detection Samples were taken for bacteriological analysis, and bacterial counts were determined for Salmonella detection. All tests were carried out following the DIN EN ISO 6579-1 and 6579-2 guidelines [27,28]. Briefly, all collected samples were first placed for pre-enrichment in buffered peptone water (BPW; Oxoid Deutschland GmbH, Wesel, Germany) at a ratio of 1:10, based on the volume of the fecal or cecal chyme sample or the mass of the lymph node. For the qualitative analysis, the samples were incubated overnight at 37 °C. Each BPW-inoculated sample was then placed as three drops (100 µL/drop) on modified semisolid Rappaport-Vassiliadis agar (MSRV; Oxoid Deutschland GmbH, Wesel, Germany) and incubated for a further 24 h at 41 °C. After the incubation period, the evaluation was carried out macroscopically. A positive result was indicated by a white-grayish cloudy swarming zone spreading from the drops over the entire agar. 
Suspected Salmonella samples were also spread on two further selective media, xylose lysine deoxycholate agar (XLD; Oxoid Deutschland GmbH, Wesel, Germany) and Brilliance Salmonella agar (Oxoid Deutschland GmbH, Wesel, Germany), so that a confirmed result could be read after a 24 h incubation period. In addition, quantitative Salmonella analysis of the fecal and cecal contents was performed as previously described [28,29]. Owing to the high number of samples, a most probable number (MPN) method was used. One gram of the homogenized fecal or cecal content and 9 mL of BPW were vortexed. The bacterial count of the sample material was determined by serial dilution with BPW in a deep-well block (Sarstedt AG & Co, Nümbrecht, Germany). Quantitative testing of the dilution steps in a microtiter plate (Sarstedt AG & Co, Nümbrecht, Germany) was carried out in triplicate, as already described in detail [26]. After a 24 h incubation at 37 °C, the total volume of each well was transferred to another deep-well block filled with MSRV agar and incubated for 24 h at 41 °C. The results were confirmed by culture on Brilliance Salmonella agar, and the number of bacteria was subsequently calculated with an MPN software program [30]. To confirm that the isolates corresponded to the experimentally used Salmonella strain, Salmonella-like colonies on Brilliance Salmonella agar were finally checked by serotyping. Statistical Analysis The SAS software package version 7.1 (SAS Institute, Cary, NC, USA) was used for the statistical evaluation. The evaluation was carried out in cooperation with the Institute for Biometry, Epidemiology and Information Processing of the University of Veterinary Medicine Hannover, Foundation. Mean values and standard deviations were calculated for the descriptive statistics. Salmonella results and performance parameters, i.e., body weight and feed intake, were analyzed at the level of the individual animal. For comparing the quantitative parameters between the control and experimental groups, a two-way analysis of variance (ANOVA) with group and trial as independent factors was conducted. For analyzing differences in the distribution of qualitative parameters (Salmonella detection in cecal content, etc.), the chi-squared homogeneity test was used. Differences with p < 0.05 were considered significant. Salmonella Prevalence The results of this study demonstrated that at the beginning of the post-infection period (1, 3, 5 and 7 dpi), all the pigs were colonized. There were no significant differences in the mean bacterial counts (log10 ± SD) of Salmonella in the feces between the wheat and rye groups. A peak in Salmonella shedding occurred at day 5 after infection in our study. However, from day 14 after infection onwards, feeding a pelleted diet containing 69% rye was associated with a significantly lower bacterial count of Salmonella in fecal samples than in those taken from the group fed 69% wheat, as shown in Figure 2 and in detail in Table S1 (control and experimental, 14 dpi: 3.30 a ± 0.50 and 2.62 b ± 0.18 (p < 0.001); 21 dpi: 3.11 a ± 0.36 and 2.40 b ± 0.65 (p < 0.001); 28 dpi: 3.02 a ± 0.45 and 2.36 b ± 0.57 (p = 0.001), respectively). 
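As an aside to the MPN quantification described above: the study used a dedicated MPN software program [30], but the idea can be illustrated with Thomas' simple approximation, MPN per gram ≈ P / sqrt(V_neg × V_total). The well counts and sample amounts in the sketch are hypothetical.

```python
import math

# Thomas' simple MPN approximation (illustration only; the study used
# dedicated MPN software [30]). P = number of positive wells, V_neg = grams
# of sample in negative wells, V_total = grams of sample in all wells.

def thomas_mpn(positives_per_dilution, sample_g_per_dilution, replicates=3):
    p = sum(positives_per_dilution)
    v_total = sum(replicates * g for g in sample_g_per_dilution)
    v_neg = sum((replicates - pos) * g
                for pos, g in zip(positives_per_dilution, sample_g_per_dilution))
    return p / math.sqrt(v_neg * v_total)

if __name__ == "__main__":
    # Hypothetical triplicate series: 3, 1 and 0 positive wells at
    # 0.1 g, 0.01 g and 0.001 g of cecal content per well.
    print(f"{thomas_mpn([3, 1, 0], [0.1, 0.01, 0.001]):.1f} MPN/g")
```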
After oral infection with S. Typhimurium, the different diets had a significant influence on the counts of Salmonella in the cecal content (Table 3). The bacterial counts in the cecal content differed significantly between groups (log10 CFU/g; control group: 3.34 a ± 0.50, experimental: 3.08 b ± 0.56; p = 0.038; Table 3). Qualitatively, however, cecal content and ileocecal lymph node samples were Salmonella positive in both groups (control: 100% and 61.9%; experimental: 100% and 66.7%, respectively; Table 3), with no differences between the two groups. Table 3. Number of Salmonella-positive samples from cecal content and ileocecal lymph nodes at dissection after oral infection with S. Typhimurium. Three consecutive experiments were conducted using piglets from different sows. The counts of Salmonella (log10 CFU/g fecal sample) for the different trials and diets are shown in Table 4. The initial statistical evaluations showed a marked effect of the factor trial, with significant effects at 1 dpi (p < 0.001), 7 dpi (p = 0.036), 21 dpi (p = 0.001) and 28 dpi (p < 0.001). An effect of diet appears to be present at days 14 (p < 0.001), 21 (p < 0.001) and 28 (p < 0.001) after infection. In addition, the bacterial count of Salmonella in the cecal content was significantly affected by the diet (p = 0.035). Interestingly, the two-way analysis of variance (ANOVA) showed significant interactions between the factors diet and trial at days 5 and 7 (p = 0.011 and p = 0.041, respectively; Table 4). On the other hand, there was no significant diet-by-trial interaction at day 3 after infection (p = 0.188; Table 4). Serological Test for Specific Antibodies Against Salmonella Optical density (OD)% was used to determine the Salmonella antibody status in blood samples from the piglets. In both groups, all pigs were seronegative on the day before infection. At day 28 after infection, there was no significant difference between the groups in the number of seropositive blood samples (p > 0.05; Table S2). 
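The group-by-trial model described in the statistical analysis can be made explicit as follows. The original evaluation was performed in SAS; the sketch below fits an equivalent two-way ANOVA with a diet × trial interaction on hypothetical data, purely to illustrate the model structure.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per piglet with its diet group,
# trial number and fecal Salmonella count (log10 CFU/g) at one sampling day.
df = pd.DataFrame({
    "diet":  ["wheat"] * 6 + ["rye"] * 6,
    "trial": [1, 1, 2, 2, 3, 3] * 2,
    "log_cfu": [3.4, 3.2, 3.1, 3.5, 3.3, 3.0,
                2.7, 2.5, 2.6, 2.4, 2.8, 2.3],
})

# Two-way ANOVA with diet, trial and their interaction, mirroring the
# "group x trial" model described in the statistical analysis section.
model = ols("log_cfu ~ C(diet) * C(trial)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```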
Animal Performance There were no significant differences in the performance parameters between the groups. During the experimental period, before and after infection, there were no significant differences in body weight (BW), as shown in Figure 3 and in detail in Table 5 and Supplementary Table S3. Furthermore, feed intake was measured individually for the entire trial period. Differences between the groups in mean weekly feed intake (FI) were not statistically significant (control and experimental, mean FI at week 5 (21-28 dpi, in kg): 9.78 a ± 1.52 and 9.40 a ± 1.48; p = 0.136; Figure 3 and Table S3). No significant differences were noted between the groups for BW at each time point (0 dpi (p = 0.085) and 28 dpi (p = 0.980)) or for body weight gain (BWG; Table 5). The average daily weight gain (ADG) was about 639 g in the control group and 629 g in the experimental group (Table 5). Similarly, the feed conversion ratio (FCR = feed requirement in kg per kg BWG) did not differ significantly between the wheat and rye diet groups (1.50 and 1.51, respectively). Discussion In recent years, interest in rye has increased in terms of its good sustainability [31]. However, the use of rye in large amounts in diets for pig production is not typical because it can be associated with ergot poisoning [32]. On the other hand, rye has a high dietary fiber content [17], and the NSP fraction of rye is more fermentable for monogastric animals than that of other cereals [13]. Nowadays, rye varieties have been developed with reduced susceptibility to ergot contamination; therefore, rye can be included in diets for swine [11,33]. The present study shows the results of a feeding concept in young pigs fed a high proportion of rye and its effects on Salmonella prevalence, along with an analysis of its effects on performance. In terms of Salmonella shedding, a significant reduction in the bacterial counts of Salmonella in feces was seen from day 14 onwards in the experimental group when considering the entire infection period. Feeding a diet containing 69% rye to young pigs may thus play a significant role in reducing Salmonella shedding after infection. In addition, the prevalence of Salmonella was found to be significantly lower in the cecal chyme of pigs fed the rye diet. Hence, a reduction in the incidence of Salmonella in the feces of pigs, which is the most likely source of contamination in the pig production chain, may help to reduce the risk of human salmonellosis. To our knowledge, fermented rye has previously been studied to control Salmonella in nursery pigs [34]. Fabà et al. [34] reported that pigs fed with organic acids combined with fermented rye had reduced S. 
Typhimurium shedding compared with pigs fed organic acids plus coated butyrate (3.11 and 3.87 log10 CFU/g, respectively) over the 21 d period post-challenge with 10⁹ CFU/mL. However, the pigs in our study were infected with 10⁷ CFU of S. Typhimurium, so comparison across infection studies is difficult. In this respect, the infection dose used in the present study could provide a better opportunity to determine the effects of diet, in line with other studies [8,35,36]. The S. Typhimurium oral challenge resulted in an effective infection in our study. At the beginning of the trial, all pigs tested Salmonella negative; after oral infection, nearly all infected animals excreted Salmonella until the end of the study. In addition, it is assumed that the challenge dose in practice, i.e., in natural infection, often involves low or moderate numbers of Salmonella compared with our study [36]. However, the scientific literature on the usefulness of a high proportion of rye in dry, pelleted diets for Salmonella reduction in livestock, and specifically in young pigs, is scarce. The reduction in Salmonella shedding in pigs in this study was possibly related to components of rye such as dietary fiber, including NSPs. Comparing the total concentrations of NSPs and arabinoxylans between the groups, the mean concentrations were higher in the rye diet: the concentration of total NSPs was 123 g/kg dry matter (DM) in the wheat diet and 140 g/kg DM in the rye diet, and the mean concentration of arabinoxylans was 63 g/kg DM and 74 g/kg DM in the wheat and rye diets, respectively (Table 2). The concentrations of carbohydrate fractions in wheat and rye were in general agreement with the values of previous studies [17,18]. The NSP content of rye has traditionally been put forward to explain its gut health benefits for pigs, especially the arabinoxylans (8-9%) and fructans (approximately 3%, ranging up to about 6%) [33,37]. Rye has greater concentrations of arabinoxylans and fructans than other cereal grains [38,39], and these can be converted by microorganisms to butyrate in the digestive tract [10,17]. According to Wilke [14], the nutritional-physiological effects have been evaluated: in the group with 69% rye in the diet, significantly higher concentrations of lactate were found in the anterior digestive tract, as well as an increased entry of fermentable organic substances into the large intestine (approximately 35% higher butyrate concentrations) in the ingesta of the cecum and colon, the preferred colonization sites of Salmonella, in comparison with animals given a wheat-based diet [22]. How rye affects Salmonella colonization is unclear. Nevertheless, the antimicrobial effects of lactate and butyrate seem to play an important role against Salmonella [14]. High butyrate concentrations in the hindgut have been found when a high amount of rye is fed to young pigs [14]. The mechanism of action of butyrate in reducing Salmonella has been examined in various studies [7,9,19,40,41]; similar mechanisms possibly play a role in our study. Further investigations are needed to better elucidate the reduction in Salmonella shedding and the other health-promoting mechanisms potentially associated with rye, including its effects on physiological processes in the gastrointestinal tract and the stability of the intestinal microbiota of pigs. 
Less fecal Salmonella excretion indicates less colonization of Salmonella in the intestine and a reduced severity of the infection [42,43]. In addition, in our study, the lymph nodes were used to confirm colonization: translocation of Salmonella from the intestine into the adjacent ileocecal lymph nodes was investigated in all experimental runs. All piglets were Salmonella positive at the time of dissection (in cecal chyme), but positive lymph nodes were found in twenty-seven of the forty-two animals (control: n = 13; experimental: n = 14). Little is known about the role of rye in the translocation of Salmonella in the pig model. However, in previous experiments with poultry, translocation was determined for other pathogens such as Escherichia coli and Clostridium perfringens [44,45]. Where elevated bacterial translocation was observed with a diet rich in NSPs [44], the prolonged transit time of the intestinal contents possibly induced proliferation of pathogenic bacteria. Moreover, Tellez et al. [45] reported that a rye diet negatively influences bacterial translocation and intestinal viscosity in poultry. Nevertheless, viscosity does not pose as large a problem in swine as in poultry [46]. The outcome of the current study showed that the use of compound feed with a higher dietary proportion of rye in young pigs caused no significant differences in performance, i.e., average body weight gain, feed intake and feed conversion ratio, compared with animals fed the generally high amounts of wheat. Similar results were obtained by Grone [15] and Wilke [14], who observed no negative impact on growth performance when young pigs were fed diets containing a high proportion of rye. Previous studies have shown positive effects of feeding rye on performance data [46-48]. Among the beneficial effects of the dietary fiber components in rye, it can provide energy to the pig, mostly via the digestion of starch and the fermentation of fiber in the hindgut. Moreover, the NSPs in rye may promote greater butyrate production and improve intestinal health [33]. Furthermore, scientific studies have evaluated feed digestibility in pigs fed diets containing rye [48]: rye-based diets had higher digestibility coefficients for dry matter, crude protein and gross energy than barley when rye was included at 60% in a young growing pig diet [49]. Thacker et al. [46] suggest that rye may have much more potential for pig performance when administered in pellet form. Furthermore, to improve feed efficiency and average daily gain, NSP-degrading enzymes would be recommended for diets containing high rye levels [47]. Contrary to the present results, a previous study on the use of rye in pig feeding reported that replacing more than 50% of the barley or wheat in a swine diet with rye led to a significant reduction in pig performance [50]. The reason rye is considered less palatable is that it is more bitter than other small grains such as wheat [51]. Alkylresorcinols in rye may be responsible for the unpleasant taste, but reduced levels of alkylresorcinols are found in hybrid rye [11]. Conclusions The results of this study support the view that a high proportion of rye might contribute to reducing Salmonella shedding via feces in young pigs. In addition, overall animal performance was good, and no relevant differences were observed between the rye and wheat diets. 
It can be suggested that up to 70% rye in compound pelleted feed poses no problem for animal health and welfare, while additionally decreasing possible contamination with Salmonella. Another advantage that puts rye in a better light today than in the past is its sustainability: it needs less fertilizer, demands substantially less water and can reduce CO2 emissions in pork production in comparison with wheat [52], and it is an inexpensive raw feed material compared with other cereals [11]. Nonetheless, to our knowledge, this is the first time that high amounts of rye have been used in a Salmonella challenge in a young pig model. Furthermore, our results provide useful information that may prompt further studies, such as a field trial on a commercial pig farm or a large-scale farming study over a longer period, not only in young pigs but in all phases of pig production. However, not all factors can be considered when only a small group of pigs is observed. Therefore, research is needed to better understand the gut and fecal microbiota composition underlying the effects observed when a high proportion of rye is fed to pigs. Supplementary Materials: The following are available online at http://www.mdpi.com/2076-2607/8/11/1629/s1, Table S1: Counts of Salmonella (log10 CFU/g fecal sample) of piglets after an experimental oral infection with Salmonella Typhimurium for different diets. Table S2: Salmonella antibody status in serum of piglets in control and experimental groups. Table S3: Performance data during the entire experimental period.
Nitrogen oxide removal by non-thermal plasma for marine diesel engines The transportation industry plays an important role in the world economy. Diesel engines are still widely used as the main power generators for trucks, heavy machinery and ships. Removal technologies for nitrogen oxides in diesel exhaust are therefore of great concern. In this paper, a gas supply system for simulating marine diesel engine exhaust is set up, and an experimental study on exhaust denitration is carried out using a dielectric barrier discharge (DBD) reactor to generate non-thermal plasma (NTP). The power efficiency and the NTP denitration efficiency for different gas compositions are discussed, the exhaust gas reaction mechanism is analyzed, and the application prospects of NTP in the field of diesel exhaust treatment are explored. The experimental results show that the power efficiency and energy density (ED) increase with the input voltage for this system, and the power efficiency is above 80% when the input voltage is higher than 60 V. The removal efficiency of NO by NTP is close to 100% in the NO/N2 system. For the NO/O2/N2 system, the critical oxygen concentration (COC) increases with the NO concentration, and the O2 concentration plays a decisive role in the denitration performance of the NTP. H2O contributes to the oxidative removal of NO, and NH3 improves the removal efficiency at low ED while slightly reducing the denitration performance at high ED. CO2 has little effect on the NTP denitration performance, but as the ED increases, the amount of CO generated gradually increases. When simulating typical diesel engine exhaust conditions, the removal efficiency in the NO/O2/CO2/H2O/N2 system first increases and then decreases with increasing ED. After adding NH3, the removal efficiency of NOx reaches up to 40.6%. It is necessary to add a reducing gas, or to combine NTP with other post-treatment technologies such as SCR catalysts or wet scrubbing, to further increase the NTP denitration efficiency. Introduction Today, diesel engines are still widely used in the fields of road transport and non-road machinery and are dominant as the main propulsion and power-generation units in the marine sector. Exhaust pollutants such as hydrocarbons (HC), particulate matter (PM), SOx and NOx are inevitably generated during the operation of a diesel engine. These exhaust pollutants not only cause acid rain and corrosion of buildings and soil but also endanger human health, leading to diseases such as cancer and respiratory illness. The SOx and NOx emissions of ocean-going vessels are limited by IMO's MARPOL 73/78 Annex VI [1]. The removal technologies for diesel exhaust pollutants include pre-treatment, in-engine treatment and post-treatment [2,3]. Post-treatment technologies that can remove several diesel exhaust pollutants simultaneously have attracted particular attention. Plasma, the fourth state of matter, was discovered in the mid-18th century. It can generally be classified into high-temperature plasma, thermal equilibrium plasma and non-thermal equilibrium plasma, depending on the temperature of the particles. It is used in a variety of fields such as welding and cutting [4], surface modification of materials [5-7] and the removal of contaminants [8-10]. At present, research on non-thermal equilibrium plasma (NTP) is the most extensive. 
The methods for generating NTP mainly include the electron beam method [9,11,12], the microwave irradiation method [5-7,13,14] and the high-voltage discharge method (including DC, AC and pulsed power) [2,15-17]. The approach of pulsed power combined with a dielectric barrier discharge (DBD) reactor to generate NTP has many advantages, such as higher power efficiency and a uniform, silent discharge [18-21], so it has received more attention. The use of NTP for exhaust post-treatment began in the 1970s and remains a research hotspot; it causes almost no secondary pollution and has good application prospects. There are two ways for NTP to remove NO: one is to form N2 and O2 by reduction, and the other is to form higher-valence oxides, such as NO2 and N2O5, by oxidation [12,22]. Moreover, diesel exhaust contains a large amount of N2 (≈76%) and O2 (≈14%) [23], and because the dissociation energy of O2 (5.2 eV mol⁻¹) is smaller than that of N2 (9.8 eV mol⁻¹) [24], O2 is converted to strongly oxidizing species such as oxygen radicals (·O) and O3. These factors make it less favorable to remove NO by the reduction route. As long as the NO that is difficult to remove from the exhaust gas is converted into other components, whether by reduction to N2 or oxidation to higher-valent nitrogen oxides, the denitration efficiency increases. We therefore mainly focus on the denitration efficiency, and the exact percentage of NO reduced to N2 is not discussed further in this paper. Zhang and Zhou [25,26] used pulsed corona discharge plasma combined with lye absorption for simultaneous desulfurization and denitration. They studied the NO oxidation efficiency and removal efficiency as functions of parameters such as gas flow, discharge current, NO concentration and SO2 concentration, but they did not consider the effect of the O2 concentration on the NOx removal efficiency of NTP. Some studies [24,27] show that the removal efficiency of NOx gradually decreases with increasing O2 concentration, and that there is a so-called critical oxygen concentration (COC) at which the NOx removal efficiency is zero, meaning that the rate of reduction of NOx to N2 and O2 equals the rate of oxidation of N2 to NOx. The critical O2 concentration may also change with the initial NO concentration. Mok et al. [16] used pulsed corona discharge NTP to study the effect of O2 concentration, humidity and peak voltage on the removal efficiency, but the NO concentration was only 210 ppm (1 ppm = 1 μL L⁻¹) in their experiment. Zhao et al. [24] studied the removal efficiency of corona discharge NTP at different O2 concentrations (0-13.6%), but the NO concentration in that study was only 437 ppm, which is much lower than that of typical real marine diesel exhaust. What is more, both H2O and CO2 are inevitable components of diesel exhaust. These components may affect the NTP denitration reactions, yet Zhao [24] and Aritoshi [27] did not consider the effect of H2O on the NTP denitration performance, and most researchers have not considered the effect of CO2. Also, the denitration mechanism of NTP is not the same under different gas compositions and still needs further exploration. 
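For orientation, the oxidation and reduction channels referred to above are commonly summarized by the following elementary steps; this is a sketch of standard NTP chemistry rather than a reaction set reproduced from this paper.

```latex
% Commonly cited NTP denitration channels (illustrative summary only).
\begin{align*}
  \mathrm{e^- + O_2} &\rightarrow \mathrm{e^- + 2\,O^{\bullet}} \\
  \mathrm{e^- + N_2} &\rightarrow \mathrm{e^- + 2\,N^{\bullet}} \\
  \mathrm{O^{\bullet} + O_2 + M} &\rightarrow \mathrm{O_3 + M} \\
  \text{oxidation:}\quad \mathrm{NO + O^{\bullet}} &\rightarrow \mathrm{NO_2},
    \qquad \mathrm{NO + O_3} \rightarrow \mathrm{NO_2 + O_2} \\
  \text{reduction:}\quad \mathrm{NO + N^{\bullet}} &\rightarrow \mathrm{N_2 + O^{\bullet}}
\end{align*}
```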
In addition, because the exhaust gas temperature changes with the diesel engine power, and because engine exhaust gas generally contains SO2 (the concentration varies with the sulfur content of the fuel used; for example, the SO2 concentration in the exhaust gas is about 600 μL L⁻¹ for a large low-speed two-stroke marine diesel engine burning heavy fuel oil with a 3.5% sulfur content [23]), both the temperature and the SO2 will have a certain impact on the NTP denitration efficiency. Chmielewski et al. [12] studied the effects of different temperatures (70 °C and 90 °C) and SO2 concentrations (0-2000 μL L⁻¹) on the NTP denitration efficiency. Their experimental results show that the plasma denitration efficiency increases with the temperature, as it does with the SO2 concentration. We want to investigate the NTP denitration efficiency in the less favorable cases and the application prospects of NTP; therefore, the influence of temperature and SO2 concentration on the NTP denitration efficiency is not considered in this paper. All experiments are carried out at a room temperature of 25 °C, and the SO2 concentration is zero. In summary, in order to study the effects of the energy density and of different initial NO, O2, NH3, H2O and CO2 concentrations on the NTP denitration performance, a simulated diesel exhaust supply system was set up in this work. A coaxial cylindrical DBD reactor was designed and fabricated from quartz glass, and non-thermal plasma was generated by a pulsed power source. A NOx removal mechanism is proposed and analyzed on the basis of the experimental results, and the application prospects of NTP in the field of diesel exhaust treatment are explored. Approach The exhaust composition of a typical large-scale low-speed two-stroke diesel engine is shown in Table 1. NO accounts for more than 90% of the NOx in diesel exhaust, the rest being mainly NO2, and NO is relatively more difficult to remove. Therefore, this paper mainly focuses on the removal of NO. The concentration of NO in diesel exhaust changes with the engine load and generally does not exceed 1500 μL L⁻¹. Therefore, three NO concentrations representing the emissions at different engine loads are studied in this paper: a low NO concentration (500 μL L⁻¹), a medium NO concentration (1000 μL L⁻¹) and a high NO concentration (1500 μL L⁻¹). When NH3 is needed, it is added at an ammonia-to-NOx ratio of 1. The highest O2 concentration in engine exhaust gas is 14%, and in order to study how the critical O2 concentration changes with the initial NO concentration, the selected O2 concentrations are 1%, 5%, 8%, 10% and 14%. H2O can generate a variety of strongly oxidizing species in NTP, including ·HO2, ·OH and H2O2, which have a certain influence on the NTP denitration performance; therefore, three groups of experiments with H2O concentrations of 0%, 3.5% and 5.1% are compared. Because the experimental results show that CO2 has little effect on the NTP denitration performance, CO2 concentrations of 0% and 4.5% are selected for comparative analysis in this paper. Considering the different components in diesel exhaust and the performance of the power source, the gas composition of each experiment progresses from simple to complex. Because N2 accounts for the largest proportion of the exhaust gas, it is used as the carrier gas in our experiments. 
In addition, since O2 accounts for the second largest proportion of the exhaust gas and has a fundamental effect on the NTP denitration performance, the single-component gas experiments in NTP are carried out in both the NO/N2 system and the O2/N2 system. Other gases do not appear to be major players in this respect and, given the length of the article, are not included in this paper. The following research is carried out: (1) The denitration performance of NTP is strongly related to the energy density in the DBD, and the output efficiency of the power source is an important factor for future industrial applications. Therefore, the energy density and power supply efficiency at different input voltages and currents were studied first under a typical diesel exhaust condition, i.e., 1500 μL L⁻¹ NO + 14% O2 + 5.1% H2O + 4.5% CO2 + 76.2% N2. Secondly, to understand the removal mechanism for a single-component NO exhaust and the effect of the NO concentration on the power supply characteristics under a specific idealized condition, we carried out a group of experiments in the NO/N2 system with initial NO concentrations of 500, 1000 and 1500 μL L⁻¹. (2) Since N2 and O2 are the main components of diesel exhaust, O2 is converted in the NTP system into a large number of strongly oxidizing active oxygen species, and N2 is likewise converted into active nitrogen species. These active species recombine to form a certain concentration of NOx, which partially offsets the NOx removal effect of the NTP. Therefore, the oxidative formation of NOx at different energy densities was investigated in the O2/N2 system at O2 concentrations of 1%, 5%, 10% and 14%. (3) Ideally, NO is converted to N2 and O2 through the reduction route, and the mechanism of NO removal by reduction needs further study. Therefore, we carried out these experiments in the NO/N2 system with initial NO concentrations of 500, 1000 and 1500 μL L⁻¹. (4) As mentioned above, there is a critical oxygen concentration (COC) at which the NOx removal efficiency is zero. When the O2 concentration is higher than the COC, the NTP loses its denitration ability as the ED increases, and the NOx concentration may even increase; when the O2 concentration is lower than the COC, the removal efficiency increases with the ED. What is more, the COC is also related to the initial NO concentration. Therefore, we investigated the range of the COC at initial NO concentrations of 500, 1000 and 1500 μL L⁻¹. (5) NH3 is often used as the reducing agent in traditional selective catalytic reduction (SCR) denitration, but the NTP denitration performance and mechanism in the presence of NH3 are not well understood. To this end, the effect of NH3 in the NO/O2/N2 system was studied at O2 concentrations of 1%, 5%, 8%, 10% and 14% and initial NO concentrations of 500, 1000 and 1500 μL L⁻¹. (6) H2O is an inevitable component of diesel exhaust. Under the action of NTP, H2O generates the strongly oxidizing hydroxyl radical (·OH), which further inhibits the reductive removal of NO. When NH3 is added to the system, the NO removal reactions become more complicated. 
In this paper, the NTP denitration performance was studied in the NO/O2/H2O/N2 system with initial NO concentrations of 500 μL L⁻¹, 1000 μL L⁻¹ and 1500 μL L⁻¹, an O2 concentration of 14% and H2O concentrations of 0%, 3.5% and 5.1%, respectively. These results were also compared with those obtained when NH3 was added.
(7) CO2 is the fourth major component of diesel exhaust. At present, most studies ignore the effect of CO2 on NTP denitration performance. However, CO2 may be converted to CO in NTP, and CO has been used as a reducing agent for denitration in some reports.28-31 Therefore, in order to study the possible effects of CO2 on NTP denitration, 4.5% CO2 was added to the NO/O2/H2O/N2 system with initial NO concentrations of 500 μL L⁻¹, 1000 μL L⁻¹ and 1500 μL L⁻¹, an O2 concentration of 14% and an H2O concentration of 5.1%, respectively. NH3 was then added to the NO/O2/H2O/CO2/N2 system to investigate the resulting changes.
(8) Finally, based on the above experimental results, the NTP denitration mechanism for simulated diesel exhaust is proposed, and the application prospect of NTP denitration in the field of marine diesel exhaust is discussed.
Experimental system
The experimental system is shown in Fig. 1. It mainly includes the gas supply unit, the pulsed plasma power unit, the DBD reactor, a flue gas analyzer (Testo 350, Germany) and an exhaust absorption device. The gas supply unit mainly includes gas cylinders, pressure-reducing valves and mass flow controllers (Beijing Sevenstar CS200A). The gases used are high-purity standard gases: NO is supplied as a 10% standard gas with N2 as the carrier gas, as is NH3. H2O is added by the N2 bubbling method using a constant-temperature water bath. The DBD reactor is made of quartz glass; its structure and dimensions are shown in Fig. 2. The left end of the reactor is the inlet, and the right end is the outlet and detection port. The inner diameter of the reactor body is 24 mm, and the outer diameter is 27 mm. A copper rod, 550 mm long and 15 mm in diameter, is inserted in the reactor body as the high-voltage electrode. The surface of the copper rod is machined with a thread of 2 mm pitch and 1 mm depth. The outer surface of the reactor is covered with a 60-mesh copper net connected to the low-voltage electrode of the power source. The discharge gap is 4.5 mm.
Experimental method
The total gas flow is kept constant at 2 L min⁻¹ with N2 as the carrier gas in all experiments. The concentrations of N2, NO, CO2, O2 and NH3 in the DBD reactor are adjusted by the mass flow controllers. H2O is added by the bubbling method, and its concentration is controlled by the temperature of the water bath. Before each group of experiments, the plasma power source is kept off and the initial gas concentrations are adjusted to the required levels according to the flue gas analyzer's measurements. The plasma power source is then turned on to generate NTP in the DBD reactor. The output power of the power source is adjusted via the input voltage and input current, and the concentrations of the gas at the DBD reactor outlet are monitored continuously at different powers. The input voltage of the plasma power source is controlled by a voltage regulator, and the input current is controlled by the frequency adjustment knob of the power source. At a given input voltage, the input current can be adjusted to its maximum value by the frequency knob; this also corresponds to the maximum input power at that voltage, and the frequency of the output sine wave is generally 6-8 kHz.
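From the reactor dimensions and gas flow just described, the discharge gap and an approximate gas residence time can be cross-checked with a few lines of arithmetic. This is only a rough estimate: it assumes the full 550 mm electrode length is active and neglects the volume taken up by the thread.

```python
import math

inner_d_mm, rod_d_mm, length_mm = 24.0, 15.0, 550.0  # reactor body and electrode
flow_l_per_min = 2.0                                  # total gas flow

gap_mm = (inner_d_mm - rod_d_mm) / 2                  # radial discharge gap
annulus_mm2 = math.pi / 4 * (inner_d_mm**2 - rod_d_mm**2)
volume_l = annulus_mm2 * length_mm / 1e6              # 1 L = 1e6 mm^3
residence_s = volume_l / flow_l_per_min * 60

print(f"discharge gap = {gap_mm:.1f} mm")             # 4.5 mm, matching the text
print(f"discharge volume ~ {volume_l:.2f} L, residence time ~ {residence_s:.1f} s")
```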
The initial output of the plasma power source consists of sine waves of a certain frequency, as shown in Fig. 3. These sine waves can be modulated into pulse waves with different duty cycles and pulse frequencies by the pulse modulator, as shown in Fig. 4. The attenuation ratio K of the output voltage Uout is 1000:1 in Fig. 3 and 4, which means the true coordinate unit of Uout is kV (to be consistent with the calculation below, the unit of Uout is still given as V in Fig. 3, 4 and 6). In all experiments, the pulse frequency and the duty cycle are kept at 200 Hz and 50%, respectively.
Data processing
The NOx removal efficiency is calculated from the inlet and outlet NOx concentrations of the DBD reactor recorded by the flue gas analyzer. The input power of the power source is calculated from the input voltage and current, and the output power is calculated from the output waveform recorded by the digital oscilloscope. The power efficiency and the energy density are then calculated. The removal efficiency of NOx is calculated as follows: here ηNOx is the removal efficiency of NOx, %; Cin is the inlet NOx concentration (the sum of the NO and NO2 concentrations), μL L⁻¹; and Cout is the outlet NOx concentration, μL L⁻¹. The input power of the plasma power source is calculated as follows: here Pin is the input power of the plasma power source, W; Uin is the input voltage of the power source, V; and Iin is the input current of the power source, A. The output power of the plasma power source is calculated by the voltage-electric charge Lissajous method.32-34 Its equivalent circuit is shown in Fig. 5. The measurement principle is that a 0.141 μF capacitor CM is connected in series with the low-voltage end of the DBD reactor. The voltage UM across the capacitor CM is measured by the digital oscilloscope, and the electric charge Q of the DBD discharge equals the charge stored on CM. The current flowing through the loop and the discharge power are then obtained from Q and Uout. UM and Uout are measured by the digital oscilloscope. When the input voltage is 100 V and the input current is 0.8 A, the curves of the high-voltage signal Uout and of UM are as shown in Fig. 4, and the waveforms during 0.0150-0.0155 s of Fig. 4 are shown in Fig. 3. The Lissajous figure, with Uout as the X-axis and UM as the Y-axis, is shown in Fig. 6. The area A of the Lissajous figure over a single pulse period is calculated with the integration tool in Origin. The sensitivities of the oscilloscope in the X and Y directions are KX and KY, respectively, and the output power is obtained from the voltage-electric charge Lissajous method. The power efficiency of the NTP is expressed as the ratio of the output power to the input power, and the output power of the plasma power source is taken to be equal to the power loaded into the DBD reactor. The energy density (ED) is used to represent the amount of energy that the DBD reactor imparts to the flue gas: here ED is the energy density of the DBD reactor, kJ L⁻¹; Pout is the output power of the plasma power source, W; and Qexhaust is the volume flow rate of the exhaust gas, L min⁻¹.
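The equation displays in this subsection did not survive text extraction. The expressions below are standard forms consistent with the variable definitions given above; they are a reconstruction, and the authors' exact notation (in particular the handling of the oscilloscope sensitivities KX, KY and the probe attenuation K in the Lissajous expression) may differ slightly.

\[ \eta_{\mathrm{NO}_x} = \frac{C_{\mathrm{in}} - C_{\mathrm{out}}}{C_{\mathrm{in}}} \times 100\%, \qquad P_{\mathrm{in}} = U_{\mathrm{in}} I_{\mathrm{in}} \]
\[ Q = C_M U_M, \qquad i = \frac{\mathrm{d}Q}{\mathrm{d}t} = C_M \frac{\mathrm{d}U_M}{\mathrm{d}t}, \qquad P_{\mathrm{out}} = f\, K\, K_X K_Y\, C_M\, A \]
\[ \eta_P = \frac{P_{\mathrm{out}}}{P_{\mathrm{in}}} \times 100\%, \qquad \mathrm{ED} = \frac{60\, P_{\mathrm{out}}}{1000\, Q_{\mathrm{exhaust}}} \]

Here f is the pulse frequency and A is the area of the Uout-UM Lissajous figure as recorded on the oscilloscope. As a worked example, at Uin = 100 V and Iin = 0.8 A the input power is Pin = 80 W; if the power efficiency is roughly 80%, Pout ≈ 64 W, and at the 2 L min⁻¹ flow used here ED ≈ 60 × 64/(1000 × 2) ≈ 1.9 kJ L⁻¹.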
The changes of ED and power efficiency
Gas breakdown occurs when the voltage between the two electrodes of the DBD reactor exceeds a certain value. The breakdown voltage is closely related to the discharge gap, and it is also related to the gas composition and the ED in the DBD reactor. When the typical marine diesel engine exhaust is simulated, that is, 1500 μL L⁻¹ NO + 14% O2 + 5.1% H2O + 4.5% CO2 + 76.2% N2, the changes of ED and power efficiency with input voltage are as shown in Fig. 7. When the input voltage is low (<20 V), the discharge is extremely unstable and the input current is small, as are the output power and power efficiency. As the input voltage increases (20-60 V), the DBD discharge becomes more and more stable and the input current increases gradually, as do the ED and the power efficiency. When the input voltage is higher than 60 V, the input current and the ED continue to increase with the input voltage, and the power efficiency remains above 80%. As shown in Fig. 8, the power efficiency varies with the input voltage in the NO/N2 system, where N2 is the carrier gas and the NO concentrations are 500 μL L⁻¹, 1000 μL L⁻¹ and 1500 μL L⁻¹, respectively. When the input voltage is lower than 20 V, the power efficiency does not exceed 70%; when the input voltage is higher than 60 V, the power efficiency remains above 80%. Comparison with Fig. 7 shows that, at the same input voltage, the power efficiency is essentially the same for different gas compositions, indicating that the gas composition has little effect on the power efficiency. In order to obtain a high power efficiency, the input voltage should therefore be higher than 60 V, and the same is recommended if the system is used in practice. However, when the system is scaled up, the optimal efficiency zone and the corresponding ED may differ and should be determined in actual tests.
Effect of NTP on single component O2
As mentioned in the introduction, the main components of diesel exhaust are N2 and O2, and they have a great influence on NTP denitration. Therefore, we first study the behaviour of NTP in the N2/O2 system. The possible reactions include at least R1-R18 (refer to the end of this section). N2 and O2 are converted into a variety of active particles in NTP. The results of Herron35 and Fernandez36 show that only the N(2D) and N(4S) active particles participate in the NO generation process; further relevant reactions are discussed by Zhao24 and Herron.37 The changes of the DBD outlet gas concentrations with ED in the O2/N2 system are shown in Fig. 9. Fig. 9(a) and (b) show the trends of NO and NO2 with ED at different O2 concentrations, respectively. From Fig. 9(a), it can be seen that NO is not generated at low ED regardless of the O2 concentration; however, as the O2 concentration and the ED increase, the NO concentration also increases. No NO is generated in the ED range of 0-7.6 kJ L⁻¹ at 1% O2, but when the O2 concentration is 14% and the ED is 7.3 kJ L⁻¹, the NO concentration is as high as 1018 μL L⁻¹. Reactions R7 and R8 may take place in this process. From Fig. 9(b), it can be seen that NO2 is more easily generated at lower ED in the N2/O2 system. When the ED is higher than 2.1 kJ L⁻¹, the NO2 concentration first increases and then decreases with increasing O2 concentration, indicating that excessive O2 inhibits the generation of NO2; an O2 concentration of 5% is the most favourable for NO2 formation. The NO2 concentration reaches 380 μL L⁻¹ when the O2 concentration is 5% and the ED is 0.9 kJ L⁻¹. Based on the above results, reactions R7-R16 may take place under our experimental conditions. For R16, O3 is formed by NTP in the presence of O2.
Because O3 is more strongly oxidizing than O2, NO is oxidized to higher-valence oxides; the possible reactions are mainly R16-R18.39 When the ED is 4 kJ L⁻¹ and the O2 concentration is 14%, the NOx concentration is 750 μL L⁻¹. According to the above experimental results, the NOx concentration generated by NTP in the O2/N2 system gradually increases with the O2 concentration and the ED.
e + N2 → e + N(4S) + N(4S) (R1)
Effect of NTP on single component NO
Fig. 10 shows the denitration performance of NTP in the NO/N2 system. Under this condition, NO is converted to N2 via the reduction route, and the main reactions are R12 and R14.40 It can be seen from Fig. 10(a) that, at the same ED, the removal efficiency decreases slightly with increasing NO concentration; however, NO is almost completely removed once the ED reaches a certain value, and at constant NO concentration the removal efficiency increases with the ED. The removal efficiencies are all above 95% at NO concentrations and EDs of 500 μL L⁻¹ and 0.99 kJ L⁻¹, 1000 μL L⁻¹ and 1.26 kJ L⁻¹, and 1500 μL L⁻¹ and 1.68 kJ L⁻¹, respectively. The main reason is that collisions between molecules become more intense as the ED increases, making R14 more likely to take place; in addition, reactions R1 and R2 produce more ·N active particles, so reaction R12 further increases the removal efficiency. Sun34 obtained similar results at an NO concentration of 2000 μL L⁻¹. It can be seen from Fig. 10(b) that, while NO is removed by the reduction route, a small part of the NO is also oxidized to NO2, which means that reactions R9 and R14 take place, and the higher the initial NO concentration, the more NO2 is generated. However, as the ED increases, the NO2 concentration first increases and then decreases, and once the ED reaches a certain value the NO2 is completely removed, mainly through reactions R10, R11 and R15. Under these experimental conditions, the highest removal efficiency obtained in this paper is 99%. Zhao's results24 show that, as the ED increases, the NOx removal efficiency reaches about 98.5% and cannot be increased further, mainly because the N2O generated by reaction R19 is difficult to convert. When NH3 is added, R20 may also take place, but R21 is more difficult. However, this speculation cannot be confirmed in the present work because the flue gas analyzer used cannot measure the N2O concentration. Based on the above results, NO tends to be reduced to N2 by NTP in the NO/N2 system, and the removal efficiency is higher than 95%.
Effect of O2 on NTP denitration performance
Tokunaga41 and Zhao24 consider that the NOx removal efficiency decreases with increasing O2 concentration, and that there is a critical O2 concentration at which the NOx removal efficiency becomes zero. Tokunaga's results show that the critical O2 concentration is about 3.6% at an initial NO concentration of 500 μL L⁻¹,41 while Zhao's results show a critical O2 concentration of about 2.5% at an initial NO concentration of 350 μL L⁻¹.24 The NO concentration of typical marine diesel engine exhaust is generally much higher than 500 μL L⁻¹, and how the critical O2 concentration changes at higher initial NO concentrations is not clear. On this basis, the NOx removal performance of NTP is studied as the initial NO concentration, the O2 concentration and the ED are varied.
Fig. 11 shows the NTP denitration performance under different NO and O2 conditions. Since the removal efficiency is close to 100% at low energy density when the O2 concentration is zero, experiments at higher energy densities were not performed for that case. It can be seen that, at the same O2 concentration, the higher the NO concentration, the higher the removal efficiency. The reason for this is that, as the NO concentration increases, the probability of NO colliding with high-energy particles in the reaction system is higher, so more NO is removed by conversion to N2. While it might be helpful to use isotope labeling to trace the migration of the nitrogen atoms in NO and confirm this interpretation, this was not possible under our experimental conditions. Secondly, it can be seen that, at constant initial NO concentration, the removal efficiency decreases as the O2 concentration increases. The removal efficiency exceeds 90% at low O2 concentration (1%) when the ED is large enough. When the O2 concentration reaches 14%, the removal efficiency is negative, and it decreases further as the ED increases. The critical O2 concentration gradually increases with the initial NO concentration; the COC range is 5-8% at an initial NO concentration of 500 μL L⁻¹. Tokunaga41 reported a critical O2 concentration of about 3.6% at an initial NO concentration of 500 μL L⁻¹, which does not agree with our result. A possible reason is the different NTP generation methods: Tokunaga used the electron beam method, whereas we use DBD. The average energy of the electrons generated by the electron beam method is reported to be much higher than that of the DBD method,42 so more of the hard-to-excite N2 is converted to ·N by the electron beam method, reactions R1-R9 proceed more easily even at lower O2 concentration, and the chemical reactions are pushed towards NOx generation. The COC ranges are 5-8%, 8-10% and 10-14% for initial NO concentrations of 500 μL L⁻¹, 1000 μL L⁻¹ and 1500 μL L⁻¹, respectively; the COC gradually increases with the initial NO concentration. The removal efficiency increases with the ED when the O2 concentration is below the COC and decreases with the ED when the O2 concentration is above the COC.
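As a rough illustration of how a critical oxygen concentration can be bracketed from data of this kind, the sketch below linearly interpolates the zero crossing of the removal efficiency between the two neighbouring tested O2 levels. The efficiency values used here are hypothetical placeholders, not the measured data; for 1000 μL L⁻¹ NO the text above brackets the COC between 8% and 10%.

```python
def estimate_coc(o2_levels, efficiencies):
    """Estimate the critical O2 concentration (COC), i.e. the O2 level at which
    the NOx removal efficiency crosses zero, by linear interpolation between
    neighbouring tested points.

    o2_levels    -- tested O2 concentrations in vol%, ascending
    efficiencies -- NOx removal efficiencies in % at a fixed energy density
    """
    points = list(zip(o2_levels, efficiencies))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 >= 0 > y1:  # efficiency changes sign between these two O2 levels
            return x0 + (x1 - x0) * y0 / (y0 - y1)
    return None  # no sign change found within the tested range

# Hypothetical efficiencies for 1000 uL/L NO at one energy density
print(estimate_coc([1, 5, 8, 10, 14], [90.0, 40.0, 12.0, -15.0, -45.0]))  # ~8.9
```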
Effect of NH3 on NTP denitration performance
NH3 is often used as the reducing agent for NOx removal in the conventional SCR method. At present, there are few studies on NOx removal by NH3 in an NTP system, and the NOx removal mechanism of NTP + NH3 is even less clear. It is generally believed that the amino radical (·NH2) plays the major role in NOx removal when NH3 is added to the NTP system; the ·NH2 formation reactions are R22-R25 (refer to the end of this section). Fig. 12 shows the NTP denitration performance with NH3 under different NO and O2 conditions. Taking an initial NO concentration of 1000 μL L⁻¹ as an example, comparison with Fig. 11 shows that, when the O2 concentration is 1% and the energy density is greater than 2.25 kJ L⁻¹, the NOx removal efficiency changes from 90% (as shown in Fig. 11(b)) to 80% (as shown in Fig. 12(b)) when NH3 is added. When the O2 concentration is greater than 10%, the removal efficiency remains positive at low ED after NH3 is added, but as the ED increases it gradually decreases and eventually becomes negative. When the O2 concentration is 14% and the ED is 7.8 kJ L⁻¹, the lowest removal efficiency changes from -49.2% (as shown in Fig. 11(b)) to -62.4% (as shown in Fig. 12(b)) after NH3 is added. When the O2 concentration is less than 5%, NO is removed via the reduction route and R13 is assumed to be the main reaction, while the NO2 generated is removed by reaction R11. After the addition of NH3, reactions R22-R28 mainly take place. At low O2 concentration, the addition of NH3 lowers the removal efficiency. This may be because, under these conditions, NOx removal is dominated by collisions in which the high-energy particles generated by NTP break molecular bonds, which then recombine into molecules with larger bond energies, such as N2; after the addition of NH3, the probability of collisions between the high-energy particles and the NO molecules is lowered, so the removal efficiency decreases. When the O2 concentration is greater than 10%, the NTP removal efficiency becomes negative. The reason may be that the O2 concentration is much higher than that of NO, so far more oxygen-derived active particles are generated by collisions between high-energy particles and O2 molecules, making reactions R7 and R8 more likely. Although the NOx can be decreased by the addition of NH3 at low ED, the concentration of oxygen-derived active particles increases with the ED, and the rate of NO generation by oxidation exceeds the rate of removal by NH3 reduction, which eventually leads to a negative removal efficiency.
Fig. 12 Denitration performance of NTP in the NO/NH3/O2/N2 system: (a) 500 μL L⁻¹ NO + 500 μL L⁻¹ NH3 + O2; (b) 1000 μL L⁻¹ NO + 1000 μL L⁻¹ NH3 + O2; (c) 1500 μL L⁻¹ NO + 1500 μL L⁻¹ NH3 + O2.
Secondly, the experimental results show that the NO2 concentration is slightly reduced at all O2 concentrations when NH3 is added. Mizuno43 considers that NO does not react with NH3 at room temperature and that the conversion of NO to NO2 is determined by the concentration of ·O rather than by NH3; in the range from room temperature to 150 °C, NH3 only affects the removal of NO2, not of NO. Based on the findings above, NH3 has little effect on the critical O2 concentration. However, with the other initial conditions held constant, the removal efficiency is greatly improved at low energy density and further reduced at high energy density after NH3 is added. Therefore, an excessively high energy density should be avoided when NH3 is added, as it results in the oxidation of NH3.
Effect of H2O on NTP denitration performance
H2O in the exhaust is converted by NTP into strongly oxidizing active particles such as ·OH and ·HO2. These strongly oxidizing active particles contribute to the oxidative removal of NO; the possible reactions are mainly R29-R33 (M is a third inert body in the reaction system).44 Fig. 13 shows the changes of the outlet NOx concentrations when the inlet gases are 1500 μL L⁻¹ NO + 14% O2 + 5.1% H2O + N2, i.e., H2O is added. Fig. 14 shows the denitration performance with NH3 at different H2O concentrations in the system 1500 μL L⁻¹ NO + 14% O2 + H2O + N2, i.e., with both H2O and NH3 added.
It can be seen from Fig. 13 that H2O has nearly no effect on the NO2 concentration. H2O slightly reduces the NO concentration at low ED and increases it at high ED, which means that H2O increases the removal efficiency at low ED and lowers it at high ED. However, there is no obvious regular pattern between the H2O concentration and the NTP removal efficiency as the ED increases (Fig. 14). The reactions mainly consist of R29-R38. When NH3 is added, the removal efficiency first increases and then decreases with increasing ED, and when both NH3 and H2O are present the NOx removal efficiency is greatly improved. Fig. 15 shows the changes of the NTP removal efficiency at different initial NO concentrations in the system 14% O2 + 5.1% H2O + NO + NH3 + N2. Whether or not NH3 is added, the NOx removal efficiency gradually increases with the initial NO concentration. At constant initial NO concentration, the removal efficiency first increases and then decreases with increasing ED, as already shown in Section 3.5. The addition of NH3 contributes to an increase of the removal efficiency over the whole ED range in the system 1500 μL L⁻¹ NO + 14% O2 + 5.1% H2O + NH3 + N2. In general, H2O increases the NTP removal efficiency at low ED and reduces it at high ED, and when H2O and NH3 coexist the NTP removal efficiency is greatly improved. The reactions occurring in this system therefore mainly include R29-R38 (refer to the end of this section).
Effect of CO2 on NTP denitration performance
CO2 is also an inevitable combustion product in diesel exhaust, and most researchers have not explored its effect on NTP denitration. Therefore, in this section we investigate the impact of CO2 on the NTP system for different initial NO concentrations and in the presence of NH3. The effect of 4.5% CO2 on the DBD outlet gas concentrations in the system 1500 μL L⁻¹ NO + 14% O2 + 5.1% H2O + N2 is shown in Fig. 16(a), and its effect on the NTP removal efficiency at different initial NO concentrations in 14% O2 + 5.1% H2O + N2 + NO is shown in Fig. 16(b). It can be seen from Fig. 16(a) that CO2 has almost no effect on the NO and NO2 concentrations at the DBD outlet. However, when 4.5% CO2 is added, a certain concentration of CO is produced, and the outlet CO concentration gradually increases with the ED; when the ED is 7.64 kJ L⁻¹, the CO concentration reaches 1920 μL L⁻¹. The possible reactions are R39-R43, with R40 the main reaction because the CO2 concentration is much higher than that of CO. It can be seen from Fig. 16(b) that CO2 has essentially no influence on the NTP removal efficiency at any of the initial NO concentrations. The changes of the NTP removal efficiency at different initial NO concentrations when NH3 is added to the system 14% O2 + 5.1% H2O + 4.5% CO2 + 76.2% N2 + NO are shown in Fig. 17. The outlet CO concentration increases with the ED rather than with the initial NO concentration. The NTP removal efficiency increases significantly when NH3 is added; however, comparison with Fig. 15 shows that CO2 has no effect on the NTP removal efficiency in the presence of NH3. In addition, when the DBD reactor was opened after the experiments with CO2, a layer of black carbon was observed on the surface of the high-voltage copper rod. This may be due to the further decomposition of CO, which means that reaction R44 occurred.
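A quick back-of-the-envelope check shows that the measured CO corresponds to only a few percent of the CO2 being converted; since some CO appears to decompose further to carbon (reaction R44), this is, if anything, a lower bound on the CO2 conversion.

```python
co2_ul_per_l = 4.5 / 100 * 1e6   # 4.5 vol% CO2 expressed in uL L^-1
co_ul_per_l = 1920               # CO at the outlet at ED = 7.64 kJ L^-1

conversion_pct = co_ul_per_l / co2_ul_per_l * 100
print(f"CO2-to-CO conversion: {conversion_pct:.1f}%")  # ~4.3%
```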
In summary, CO2 has nearly no effect on the NOx removal efficiency under the different experimental conditions, but after adding CO2, CO is detected at the outlet and black carbon forms on the surface of the high-voltage electrode, with the CO concentration gradually increasing with the ED.
The reaction process in the NTP system can generally be divided into two stages.16,22,40,45 (1) The first stage is the discharge stage, in which gas molecules are mainly bombarded by high-energy electrons; this breaks molecular covalent bonds, converts gas molecules into free radicals and excites some of the dissociated atoms into unstable excited states. The reactions that mainly take place at this stage are R1-R4, R14 and R15, R22, R29, R39 and R44. (2) The second stage is the post-discharge stage, in which the excited-state atoms generated in the first stage collide with gas molecules to generate secondary radicals; these radicals then collide with other particles, leading to quenching or to the generation of new radicals. The reactions at this stage are mainly R5 and R6, R16, R23-R25 and R30-R33. In general, under our experimental conditions, the reaction mechanism of diesel engine exhaust in the NTP system is as shown in Fig. 18. It consists of two main parts: first, gas molecules generate various free radicals under bombardment by high-energy electrons; second, the radicals react with other particles. The reaction process of NOx with free radicals is described on the right of Fig. 18. Other researchers16,22,40,45 have reported similar results. Our experimental results additionally show that CO2 has little effect on the NOx removal efficiency but is converted to CO and even further to black carbon; therefore, the reaction process of CO2 in NTP is listed separately on the left of Fig. 18.
Fig. 17 Effect of CO2 on the NTP removal efficiency in the presence of NH3.
Fig. 18 Reaction mechanism of diesel engine exhaust gas in the NTP system.
Analysis of NTP application prospect
NO can be removed by either oxidation or reduction. When the O2 concentration is low, NO is mainly removed via the reduction route; when the O2 concentration is high, many strongly oxidizing species such as ·OH, ·O and O3 are generated and NO is oxidized to NO2. However, N2 is converted to NOx at high O2 concentration and high energy density, causing the NTP removal efficiency to become negative. Although a higher initial NO concentration is beneficial to the NTP removal efficiency, it is difficult to improve the removal efficiency further using NTP alone because of the high O2 concentration in marine diesel exhaust. In this paper, the highest removal efficiency is only 8.6%, obtained at an energy density of 0.8 kJ L⁻¹ with inlet gases of 1500 μL L⁻¹ NO + 14% O2 + 5.1% H2O + 4.5% CO2 + 76.2% N2. Chmielewski12 simulated diesel engine exhaust conditions for heavy-oil combustion, and the removal efficiency obtained there was also very low. Therefore, in order to achieve high-efficiency NOx removal at high O2 concentration, it is necessary to combine NTP denitration with other methods. Possible approaches include the following: (1) Adding a reducing gas, such as H2,46 NH3 (ref. 11, 21 and 47-49) or hydrocarbons50-52 (including CH4, C2H2, C2H4, C3H6, etc.). In this paper, the highest removal efficiency of 40.6% is obtained at an energy density of 1.65 kJ L⁻¹ when NH3 is added. (2) Combining NTP with denitration catalysts.15,21,49,51,53-56
The catalysts commonly used include molecular sieves, activated carbon and metal oxides. For example, NTP can be combined with a traditional vanadium-based SCR catalyst: because NTP converts NO to NO2 and thereby increases the NO2/NOx ratio, it is beneficial to the SCR reaction rate.57 The removal efficiency in this way is generally above 90%, but a reducing gas must still be added. (3) Combining NTP with wet scrubbing technology.58-60 Wet scrubbing is already a mature technology on ships, and NTP can oxidize NO to NO2, which is more soluble in water. Chmielewski11 obtained a removal efficiency of 49% by combining NTP with wet scrubbing, and Yang61 obtained a removal efficiency of more than 60% by combining electrolytic seawater with wet scrubbing. In general, however, the removal efficiency is still much lower than that of traditional methods such as SCR. A major problem for the industrial application of NTP is its very large energy consumption; the energy utilization rate can be improved by optimizing the structure of the DBD reactor and matching the reactor to the power source.
Conclusions
A non-thermal plasma denitration system based on simulated diesel engine exhaust was set up in this paper. The NTP was generated by a dielectric barrier discharge reactor. The NO removal performance of NTP under different O2, H2O, CO2, NO and NH3 concentrations and energy densities was studied, the reaction mechanism of NOx in the NTP system was proposed, and the application prospect of NTP technology was analyzed. The following conclusions are drawn:
(1) For the experimental system, the power source efficiency gradually increases with the input voltage. When the input voltage is greater than 60 V, the power supply efficiency is maintained above 80%.
(2) The NO concentration increases gradually with the O2 concentration and the ED in the N2/O2 system. When the O2 concentration is 14% and the ED is 7.3 kJ L⁻¹, the NO concentration reaches 1018 μL L⁻¹. The amount of NO2 increases and then stabilizes with the ED; when the O2 concentration is 5% and the ED is 0.9 kJ L⁻¹, the NO2 concentration reaches 380 μL L⁻¹. Therefore, a certain concentration of NOx will be generated when NTP is applied to air.
(3) NTP has a high removal efficiency in the NO/N2 system, and when a low concentration of O2 is present the NTP removal efficiency is still above 90%, with NO mainly removed via the reduction route. NTP has no denitration performance at high O2 concentration; however, NH3 can inhibit the formation of NOx and improve the removal efficiency. At low O2 concentration, the NOx removal efficiency gradually increases with the ED; at high O2 concentration, the removal efficiency becomes negative, and the higher the ED, the more NOx is generated. Therefore, the O2 concentration plays a decisive role in NTP denitration performance, and the critical O2 concentration increases with the initial NO concentration.
(4) Under typical diesel engine exhaust conditions, H2O has little effect on NO2 when NH3 is not added, but it can increase the NO removal efficiency at low ED, while excessive ED causes an increase of NO at the outlet. The highest removal efficiency obtained in the system 1500 μL L⁻¹ NO + 14% O2 + 5.1% H2O + N2 is 4.5%, and the highest removal efficiency in the system 1500 μL L⁻¹ NO + 1500 μL L⁻¹ NH3 + 14% O2 + N2 is 18.9%.
However, when both H2O and NH3 are added, the removal efficiency reaches 40.6% in the system 1500 μL L⁻¹ NO + 1500 μL L⁻¹ NH3 + 5.1% H2O + 14% O2 + N2 because of their synergistic effect. (5) CO2 has nearly no effect on the NOx removal efficiency, but the CO concentration increases gradually with the ED; when the ED is 7.64 kJ L⁻¹, the CO concentration reaches 1920 μL L⁻¹. The reaction process of CO2 has accordingly been added to the reaction mechanism of diesel engine exhaust in the NTP system. (6) Because of the high O2 concentration in marine diesel engine exhaust, further improvement of the NOx removal efficiency requires adding a reducing gas to the NTP reactor, or combining NTP technology with SCR catalysts or other technologies such as wet scrubbing.
Conflicts of interest
There are no conflicts to declare.
‘The Drugs Did For Me What I Couldn’t Do For Myself’: A Qualitative Exploration of the Relationship Between Mental Health and Amphetamine-Type Stimulant (ATS) Use
Substance use and mental ill health constitute a major public health burden, and a key global policy priority is to reduce illicit and other harmful substance use. Amphetamine-type stimulants (ATS) are the second most used class of illicit drugs, and a range of mental health issues have been documented amongst users. This paper explores the relationship between mental health and ATS use through a thematic analysis of qualitative interviews with n = 18 current and former ATS users in England. The findings are presented by trajectory point: (1) initiation of ATS use; (2) continued and increased ATS use; and (3) decreased and remitted ATS use. This work helps to develop understanding of the complex and bi-directional relationship between ATS use and mental health. Many ATS users lead chaotic lives and engage in multiple risk behaviours; however, there is a need to better understand and conceptualise the dynamic interaction between the different individual, social, environmental and cultural factors that determine individuals’ mental health and substance use. There is no ‘one size fits all’ approach to prevention and treatment, and these findings highlight the need for more joined-up, tailored and holistic approaches to intervention development.
Introduction
Substance use and mental ill health constitute a major public health burden amongst the population,1 and globally a key policy priority is to reduce illicit and other harmful substance use.2 Amphetamine-type stimulants (ATS) such as amphetamine, methamphetamine and methylenedioxy-methamphetamine (MDMA/Ecstasy) are the second most commonly used illicit drugs worldwide, with an increase in production and use seen in recent years.3,4 In 2018, lifetime prevalence of ATS use was estimated at 13.5 million (4.1%) for MDMA and 11.9 million (3.6%) for amphetamines amongst 15 to 64 year olds in Europe,5 with an estimated 1 in 11 adults in the UK reporting consumption at some point in their lives.5,6 In England and Wales, the social and economic cost of illicit drug use, including policing, crime and healthcare, is £10.7 billion per year.7 However, the societal impacts reach further, with problematic substance use in our communities leading to increased levels of blood-borne viruses, drug-related deaths and drug-driven crime. Mental ill health, domestic abuse, offending and bereavement are often associated with problematic substance use, impacting upon substance users themselves, their children, and other significant individuals.7 There is evidence of discrepancies in care quality for those with co-occurring mental health and substance use disorders, highlighting the need for more to be done to reduce this inequality and support these individuals.8 ATS have sometimes been viewed as recreational or ‘safe’ drugs, with users reporting perceived positive effects, such as increased sociability, energy, talkativeness and positive mood, whilst underestimating and less frequently reporting the negative social and health effects.9,10 However, ATS use has been identified as a risk factor for poor mental health, and even serious mental illness,11,12 with a range of issues documented amongst users, including depression, anxiety and changes in mood.13
Heavy or prolonged use of some ATS, such as methamphetamine or novel psychoactive substances (NPS), has been noted to impact negatively upon individuals’ physical health, mental health and neurological functioning,14 with some users experiencing transient or persistent psychotic symptoms, agitation and insomnia.11,15-18 Short-term issues may be due to the acute effects of ATS intoxication, whereas prolonged effects may be related to withdrawal.1 The relationship between, and co-occurrence of, mental health issues and substance use is complex and bi-directional in nature, and both may have a common set of underlying and often compounded causes located across individual and structural factors.19-21 It has also been argued that individuals who experience mental ill health are more likely to be or become dependent on substances,22 and similarly, individuals who misuse substances appear to be more likely to develop or suffer from mental health problems.23 Determining whether the mental health issue or the problematic substance use occurred first is further complicated because the relationship between psychological symptoms and substance use is temporal, meaning that many individuals may experience both substance-induced and substance-independent mental health issues throughout the course of their substance use careers.24,25 Despite existing research having established a link between ATS use and mental health issues, little is known about the order of onset and the implications of this for treatment.26,27 The mixed-methods European ‘ATTUNE’ study aimed to address a gap in knowledge about what shapes ATS use across the life-course, how to prevent and treat harmful ATS use, and what influences different trajectories of consumption through individuals’ lives. Prior to this, there was limited understanding of which factors lead to increased ATS use, or of what could help to facilitate decreased use or desistance.28,29 This paper reports on a sub-sample of ATTUNE qualitative data collected from interview participants in the North East of England, and aims specifically to explore individual experiences of, and perspectives on, the relationship between mental health and ATS consumption. This is important, as there has been little investigation of the relationship between ATS use and co-occurring mental health problems,30 the management of which continues to be challenging for clinicians.31 There is not yet an established substitute pharmacological treatment for dependent ATS users.32 Whilst considerable psychological and psychosocial intervention developments have been made in recent years,33,34 treatment services are predominantly based on a medical model, geared towards alcohol and opioid dependencies, which ignores the social and environmental drivers of substance use and characterises people who use substances as requiring external control.35
Methods
Design
Qualitative research is recognised as enabling in-depth analysis of socially situated experiences, and can help to provide insight into otherwise unknown practices, ensuring better-informed public health policy decisions through the identification of optimal opportunities for intervention, prevention and treatment.36 The qualitative phase of the ATTUNE study used in-depth semi-structured interviews with ATS users and non-users to provide an in-depth understanding of the lived experiences of participants and the factors which shape different trajectories of ATS use through the life-course.
Recruitment and sample
Interviews were conducted in the North East of England, a region that has experienced substantial economic decline since the 1980s, with high levels of unemployment, disability and economic inactivity.37,38 To be eligible to participate, individuals had to: have first used, or had the opportunity to use, ATS at least 5 years previously; be over 18 years of age; have lived in the North East of England; and have appropriate verbal and cognitive skills to provide informed consent. Relevant organisations, such as homeless charities, substance use services and probation teams across the North East region, signposted potentially eligible participants to study information. Flyers and posters advertising the study were distributed in local community spaces, including cafes, bars and libraries. The study was also advertised online, through social media and via national academic, policy and practice community networks. ATS use was categorised as current (use within the past 12 months) or former (use over 12 months ago). Users were further categorised according to whether they were: current/former dependent users (positive [⩾4] Severity of Dependence Scale39 [SDS] score); current/former frequent users (at least 10 days’ use, but not SDS positive); or current/former non-frequent users (less than 10 days’ use in a 12-month period).28,40
Data collection
Potential participants were provided with an information leaflet explaining that their participation was confidential and anonymous, and were given the opportunity to ask any questions. If the individual was willing, able and eligible to participate, written informed consent was obtained. The interviews were conducted face-to-face by 1 of 3 members of the research team (MA, LS, WM). A semi-structured interview topic guide was used, with questions and follow-up probes related to family, physical and mental health, and key turning points in drug use trajectories, namely initiation, continuation, decrease, desistance and relapse of ATS use. The interviews were digitally audio-recorded and transcribed verbatim. We continued data collection until there was maximum diversity across the sample in terms of ATS use, age, gender and socioeconomic status,41 and data saturation was believed to have been achieved, as identified through repetition of responses and sufficient data to answer the research questions.42 Participants received a £10 shopping voucher incentive for participating. Interviews lasted between 18 and 106 minutes (mean = 48 minutes).
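The categorisation rules described under ‘Recruitment and sample’ can be summarised as a simple decision procedure; the sketch below is purely illustrative (the field names are hypothetical, and for former users the frequency criterion refers to their period of use, a detail simplified here).

```python
def categorise_ats_user(months_since_last_use, sds_score, days_used_in_12_months):
    """Illustrative categorisation of ATS users following the study's rules:
    current = use within the past 12 months; dependent = SDS score >= 4;
    frequent = at least 10 days' use (not SDS positive); otherwise non-frequent.
    """
    status = "current" if months_since_last_use <= 12 else "former"
    if sds_score >= 4:
        pattern = "dependent"
    elif days_used_in_12_months >= 10:
        pattern = "frequent"
    else:
        pattern = "non-frequent"
    return f"{status} {pattern} user"

print(categorise_ats_user(months_since_last_use=3, sds_score=5,
                          days_used_in_12_months=40))  # "current dependent user"
```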
Analysis
During an initial framework analysis43 of the whole sample of n = 70 interviews, a sub-sample of n = 18 current and former ATS users was identified. The relationship between mental health and ATS consumption was already an identified knowledge gap in this area,30-32,35 so further analysis was conducted by LS using a thematic approach,44 focusing specifically on this issue. It is, however, important to note that this does not mean that the remainder of the whole sample experienced no mental health challenges, just that the focus here was on those for whom these challenges were a recurrent theme and central issue. Data were coded iteratively, using a combination of inductive and deductive approaches, and emergent themes were discussed with AOD and HA. Bronfenbrenner’s ecological theory45 was drawn upon to help better understand and unpack patterns and risk factors associated with co-occurring substance use and mental health. This theory was used to guide the interpretation of data; it states that individuals are shaped not only by individual factors, but to a great extent by the social, economic and physical environments in which they are situated.44 Considering interactions within and between these different ecological systems offers a way to focus on both intrapersonal and environmental factors, and on the dynamic interplay between them in determining behaviours and health outcomes,46,47 which was useful whilst exploring the complex issue of interest. All data were coded and organised using NVivo 11.
Findings
The 18 current and former ATS users interviewed were aged 20 to 45 (mean age = 33.2 years; 56% male, 44% female). The characteristics of the participants are presented in Table 1. The findings are categorised in relation to 3 key turning points in ATS use careers: (1) initiation; (2) continued and/or increased use; and (3) decreased and/or remitted use, exploring the interaction between mental health and ATS use at each turning point. Each category is presented in greater detail below, with verbatim quotations to illustrate the findings.
Initiation of ATS use
Participants discussed the initiation of ATS use with reference to a variety of factors, both as strategies to promote or improve positive mental health and wellbeing, and as functional or preventative strategies to cope with, or prevent further deterioration of, existing mental health issues. For many individuals, a combination of these was evident, and experiencing co-occurring stressors and chaotic life circumstances further compounded their ability to cope. A perceived mental health promoting benefit of ATS was improved alertness and increased energy levels, which participants believed allowed them to function better and to negotiate the demands of their day-to-day lives. Many participants experienced multiple stressors in their lives, and this was one of the main contributing factors in their initiation of ATS use; they normalised their use as a way of managing childcare (particularly mothers) and employment. Some participants also spoke about experiencing low self-esteem and a lack of confidence, therefore turning to ATS use to boost their self-belief and their navigation of social situations. They believed that using ATS improved their self-concept and fortified personality traits which they had previously perceived as fragile, making them feel more extraverted, presenting fewer neurotic symptoms and therefore better positioned to cope in their day-to-day lives. ‘The drugs did for me what I couldn’t do for myself. They made me confident. They made me talkative, especially drugs like Ecstasy’. Participants also reported that ATS provided them with an opportunity to escape problems they faced in their lives which were causing significant distress, when they felt that it was not possible to overcome these issues or find solutions to improve their situation. These concerns included strained relationships with partners and family members, financial worries, as well as issues associated with existing mental ill health. Many participants referred to the impact of particularly traumatic events in their lives, and how ATS use was something they engaged with to cope with the emotional distress they felt. Many of these traumatic events had occurred recently, such as the dissolution of a romantic or intimate relationship, or bereavement following the loss of a partner or family member.
However, participants also referred to the lasting impact of adverse events which had occurred in the past (including adverse childhood experiences), such as traumatic childhood events and historic sexual abuse, and how ATS use lessened the impact of these painful memories on their present-day lives.
Continued and increased ATS use
When participants discussed factors that encouraged them to continue and potentially increase their use of ATS, they referred to the continued management of multiple stressors in their lives, the perception of apparent positive effects on their mental health and wellbeing, and the need to self-manage the deleterious effects of ATS use. Many participants highlighted the perceived positive effects on their mental health as a result of using ATS. These included feeling more confident and less anxious, a perceived sense of better life management, and feeling better equipped to cope with their day-to-day circumstances. These individuals continued using ATS in an attempt to sustain these desirable effects, which were associated with an overall sense of bolstered wellbeing. Several participants also described the ongoing management of their ‘chaotic’ lives, the negotiation of multiple individual, social and environmental stressors and deteriorating mental health as motivation for continued ATS use. As their circumstances became increasingly demanding and pressurised, ATS use became increasingly normalised and a part of their daily routine. Over time, most dependent users could not see a way of pursuing their lives without using ATS to support them, even if their ATS use was contributing to their deteriorating mental health. ‘I was dealing with the death of my children’s dad, and I think because without it I had no energy at all. Looking back now I was probably very severely depressed, so I was kind of using it as a coping mechanism’. A dominant theme was self-management of the ‘comedown’ from ATS intoxication. Participants spoke about experiencing worsening psychological symptoms, including low mood, anxiety, feelings of paranoia, engaging in self-harm behaviours and experiencing thoughts of suicide. This deterioration in mental health led to continued and/or increased ATS use to ‘self-medicate’ these symptoms, even though these issues were often, but not exclusively, their reason for initiating ATS use in the first place. Some participants reported that as their use continued, these symptoms worsened even further, leading them to increase their consumption in both frequency and quantity, establishing a cycle of maladaptive ATS use. ‘On a come down and you’re like, fuck my life. Genuinely, that’s what it’s like and then you feel sorry for yourself again for like two days, three days and then you’re fine. Then before you know it, again, another one’s been ordered’. (21-year-old female, current dependent user) ‘I’ve used various things in between that, but I always went back to amphetamines. It’s the only thing that’s kept us normal. I’m paranoid when I’m not in it, but when I have it, it takes away the paranoia’. (40-year-old male, current dependent user)
Decreased and remitted ATS use
Participants discussed the negative impacts of continued ATS use, such as a decline in overall mental health functioning and negative changes to their personality, as motivating factors in reducing or ceasing their ATS use. Participants also referred to a desire for change and wanting a ‘better life’.
ATS users who had managed to reduce or cease their use spoke about the restoration of ‘normality’ in their lives. Many participants had previously referred to perceived positive impacts of ATS use on their wellbeing, including increased confidence. However, after prolonged use several participants described becoming increasingly aware of negative traits and changes to their personas, including paranoia, which they attributed to their ATS use. These perceived changes in personality often motivated a desire to reduce or cease ATS use. ‘Your personality can flip side, you know, popular, sociable, outgoing person and then you’re going to lose the grip on who you are’. (40-year-old male, former non-frequent user) ‘I changed in the sense of the drugs I was taking. I wasn’t the nice fella. I became very controlling, very paranoid, really insecure’. (38-year-old male, former dependent user) Despite many participants describing an initial alleviation of their negative mental health symptoms from using ATS, after prolonged use these perceived benefits very often became less pronounced, or harder for the user to identify. Those who reported increasing or uncontrolled use became more focussed on their substance use as a way of alleviating symptoms of mental ill health. Many participants described deleterious impacts on their mental health functioning as their use increased in both frequency and quantity, and this was often cited as a motivating factor in reducing or stopping the use of ATS. However, it was often not until participants reached a point of crisis, which resulted in the intervention of support services, that reduced ATS use occurred. ‘I was dead August last year, everything, and no feelings, nothing, soul gone, the lot. You know, it was just black dog’. (40-year-old male, former non-frequent user) ‘I was suicidal. I wanted to die. I didn’t want to live, and I hated myself, all that self-loathing and stuff and self-hatred. I thought everybody hated me’. (38-year-old male, former dependent user) Many participants spoke about a breaking point or hitting ‘rock bottom’ as a primary motivating factor in the decision to modify their ATS use. By this stage, participants were unable to associate their ATS use with any benefits or positive effects, and instead apportioned the blame for their negative mental health state and the poor circumstances of their lives to their ATS use. This motivated a desire for change and wanting a ‘better life’. ‘It got to a point where I thought, “I’ve got to sort my head out,” because I’d gotten myself into a bit of a rut’. (40-year-old male, former dependent user) ‘For me it was 25, which I’m really grateful for that I hit that rock bottom and I realised I’d had enough because I did’. (38-year-old male, former dependent user) However, for many participants change was an incremental process, and relapse was a common occurrence, with overall reduction and desistance from use very much associated with building personal resilience over time. Individual, social and environmental triggers often provided the catalyst for relapse and re-engagement with ATS use, followed by a period of reflection and consolidation of goals by the individual user.
Participants who had successfully been able to reduce their ATS use, or stop altogether, spoke about the restoration of a positive state of mental health and emotional wellbeing, and being able to engage in positive everyday activities which had not been possible amid their most intensive phase of ATS use. Participants were able to re-establish ‘normal’ practices, such as maintaining a regular sleeping pattern, getting up out of bed, getting ready and leaving the house, which they had previously struggled to sustain. This restoration of normality was perceived as a hugely important step forward in participants’ recovery.
Discussion
We found that the initiation of ATS use was often initially viewed by participants as a positive strategy for bolstering their wellbeing and confidence, or as a coping mechanism for managing poor mental health and escaping traumatic or challenging issues. Continuation and increase of ATS use were associated with the continued management of multiple stressors in their lives; positive perceptions of effects on their wellbeing; management of negative side effects; and self-medicating to maintain the perceived mental health benefits. Reducing and ceasing ATS use were associated with a decline in overall mental health functioning; negative personality changes; and a desire for change and wanting a ‘better life’. Participants referred to the lasting impact of events which had occurred in the past, and existing research has suggested that experiences of trauma may be an important risk factor distinguishing individuals who develop substance use issues from those who do not.48,49 This follows existing research stating that age and stage of life may play an important part in determining whether or not individuals are able to use adaptive coping strategies, with users who initiate substance use at an earlier age displaying higher disengagement, lesser use of social support, higher problem avoidance and increased social withdrawal.50,51 Participants discussed a range of short-term and prolonged effects on their mental health, including depression and anxiety disorders, which have previously been documented amongst ATS users and are related both to the acute effects of intoxication and to prolonged effects of withdrawal.1,13 Continued use was perpetuated by the perceived positive influences that using ATS was having on individuals’ lives. However, increased ATS use was also associated with the requirement for greater amounts of ATS to elicit similar effects, or to counter negative side effects and ‘come-downs’. Self-medication was initially perceived as beneficial, and there is existing evidence that some users engage in ATS use as a form of self-medication for existing mental ill health, or to manage symptoms associated with ADHD.52 However, many individuals reported their mental health issues worsening due to prolonged ATS use, and often found these effects to outweigh the positives over time. Participants also referred to adverse changes in their personality with prolonged ATS use, and factors associated with personality type and the presence of certain personality disorders could help explain an increased risk of poor mental health outcomes and substance use dependency.53,54
A desire for a ‘better life’ or ‘normality’ was one of the strongest motivating factors for participants to stop using ATS, and much existing research has focussed on early adulthood as the period when regular and heavy use of substances declines, due to the abundant social role transitions during this period, a process known as ‘maturing out’.55 However, for some users it is only when a point of crisis is reached, which serves as a turning point, that there is an opportunity to break the cycle of substance use and seek support or treatment.56,57 The social identity model of recovery proposes that these turning points can constitute the beginning of a process that allows individuals to construct an identity that supports their transition to recovery.58 However, by this definition there is a risk for some users that a reduction in ATS use may not occur without a ‘crisis’, which could have other far-reaching impacts on their lives. In society, ATS are often perceived as safe and recreational substances,9 which is a major barrier to the prevention and treatment of problematic use and to effective public health messaging, which in recent decades has focussed on harm reduction59 rather than on scare tactics and fear-based messages, which, whilst often drawing criticism, may be effective prevention strategies.60 This, in association with medical models which characterise addiction as a primary chronic disease of neural circuitry,35 focussing on medical solutions to social problems,61,62 and a lack of targeted treatment for problematic ATS use, continues to leave affected individuals at a disadvantage.32 Another concept which may influence help-seeking and the remission of ATS use is ‘flourishing’, which proposes that high levels of both hedonic well-being (life satisfaction, happiness) and eudaimonic well-being (social contribution, purpose in life, personal growth) reduce the risk of mental health and substance use disorders, but only when taking into consideration the impact of life events and social support.63 This is particularly relevant here, as existing research has shown that dependent ATS users experience a greater number of negative life events, and that individuals’ social environment is affected by these negative life events.64 This work helps to develop understanding of the complex and bi-directional relationship between ATS use and mental health,19-21 and importantly highlights that individuals change their use of substances throughout their drug use careers as both antecedents to and consequences of their mental health. The findings from this study highlight that there is no ‘one size fits all’ approach to prevention and treatment; rather than focussing on whether it was the mental health or the substance use issue which occurred first, it is important to focus on users’ individual circumstances and work to address these issues in co-occurrence.24,65
Strengths and limitations
The principal strength of this work is the qualitative nature of the enquiry, which responds to a growing momentum and commitment to include the experiences and views of those whose voices are often overlooked or under-represented.66 These findings help to challenge stereotypes about substance use, develop a deeper understanding of hidden populations and behaviours, and further demonstrate that substance use is shaped by a complex set of individual, social, environmental and cultural factors.29,36
29,36 Whilst the sub-sample was relatively small, it was diverse in terms of age, gender and current or former ATS use status, and recruitment was undertaken via multiple sources, which allowed individuals from a variety of backgrounds to engage with the research and provide rich accounts of their experiences. The ATTUNE study only collected data in the North East of England, an area which has its own cultural dynamics and economic challenges, 37,38,67 a higher rate of drug-related deaths (96.3 per million people) than any other region in England or Wales, 68,69 the highest suicide rate in England, 70 and low levels of ethnic diversity, 71 which may limit the generalisability of the findings to other regional contexts or countries. Whilst we attempted recruitment via organisations representing lesbian, gay, bisexual, transgender, queer/questioning and intersex (LGBTQI) communities, we had no success; additionally, information about participants' sexuality and gender identity was not collected. This is a further limitation of the study, as we know that gay and bisexual men use illicit drugs, including ATS, at higher rates than most other population groups. 72,73 Conclusions Many ATS users lead chaotic lives and engage in multiple risk behaviours; however, they are not a homogenous group, and there is a need to better understand and conceptualise the dynamic interaction between the different individual, social, environmental and cultural factors that determine individuals' mental health and substance use trajectories. [45][46][47] The early identification of issues associated with both mental health and ATS use is universally important, to ensure that they do not perpetuate one another, and these findings highlight the need for more joined-up, tailored and holistic approaches to intervention development. A public health approach could involve preventative work in schools, particularly focussing on misconceptions about the safety of ATS, on the relationship between mental health and substance use, and on reducing stigmatisation. Policymakers must also remain mindful that the prevalence of both ATS use and mental health issues is higher in more deprived areas, and preventative strategies should therefore be targeted at these areas accordingly. Future research should also engage with varied and diverse population groups, and explore in depth individuals' preferences for, and the acceptability of, treatment opportunities, and the barriers and facilitators to accessing treatment.
A case study on learning basic logical competencies when utilising technologies and real-world objects In our technological age, many technologies and real-world objects communicate with each other or partly merge. However, this combination of technologies and real-world objects has not yet found its way into everyday teaching practices in schools to any great extent. To investigate the possibilities of combining technologies and real-world objects in mathematics classes, we conducted an exploratory educational study with 47 students. Analysing students' data using the principles of grounded theory demonstrated that, for the students in our study, (A) using open tasks with multiple solutions, (B) immediate feedback and (C) novelty effects in the learning process are essential for designing mathematics learning environments that combine technologies and real-world objects when learning basic logical operations. Introduction New technologies have had a significant impact on society in recent years. As schools and educational institutions are part of society, new technologies also have a significant impact on teaching and learning in schools, but this often happens some years later than in other fields such as business or science (Samuelsson 2006). Currently, we are experiencing not only the proliferation of digital technologies, but also their combination and communication with everyday objects. Combinations of digital technologies, learning and real-world objects are also slowly finding their way into mathematics lessons. In their studies, Borba et al. (2016) and Pierce and Stacey (2011) summarise that using modern technologies in mathematics learning could help students to mathematise real-world problems. Most combinations of digital technologies, mathematics learning and real-world objects focus on geometry. In our educational case study, we explored how selected technologies and real-world objects could be linked to promote mathematics learning beyond geometry. To investigate how real-world objects and technologies could be linked to enhance mathematics learning beyond geometry, we decided to use Logifaces (real-world objects) and MS Excel (technology) to learn logical operations. Theoretical background Pseudo-realistic problems are tasks which simplify real-world phenomena for teaching and learning purposes. Carreira and Baioa (2018) summarise that pseudo-realistic problems should often trigger mathematics learning. However, pseudo-realistic problems might reduce and simplify real-world situations too much, resulting in students not using their knowledge of the real world when learning. For this reason, Heck (2010) recommends placing real-world problems at the centre of mathematics learning. By treating real-world problems, students can learn in educational settings like real scientists. In this context, learning as real scientists is closely linked to learning by doing. Combining basic logical competencies and technologies In our educational case study, the real-world problem was the matching of Logifaces-stones (see Fig. 1 left). Students' task was to determine whether or not two stones could form a pair such that their joint surface has no steps and forms a smooth surface (see Fig. 1 right).
To investigate whether or not two Logifaces-stones form a pair, students in our study also had to use MS Excel and develop a program to determine whether or not two Logifaces-stones can form a smooth surface. When developing this MS Excel program, students also had to use basic logical operations. We decided to use MS Excel because Tabach et al. (2006) showed in their study that students could achieve considerable cognitive growth in learning mathematics by using MS Excel. Using technologies in our educational case study followed the well-known work of Noss and Hoyles (2010) and Noss et al. (1997), who were already able to illustrate at the end of the last century that educational technologies facilitate turning learning environments into laboratories. Such technology-enhanced lab-like learning environments could enable students to explore mathematical content through experimentation and their own creative approaches (Fig. 1: Logifaces-stones and how to match them). As educational technologies and related pedagogies have been extensively studied and significantly developed over the past 20 years, and using technologies has been simplified since the late 1990s, we assume that nowadays it is easier to develop promising technology-enhanced lab-like learning environments. Mathematical content of our study The mathematical content we aimed for students to discover in a technology-enhanced learning environment in our educational case study was basic logical operations. According to Henderson (2014), it is basic logic and conceptual problem solving that should be considered early in students' educational careers in mathematics teaching. As logical operations and the use of technologies are central to the field of computing and computational thinking, and are thus becoming ever more relevant in our digital age, high-quality teaching on logical operations is a key element for the post-secondary education and professional life of today's secondary school students. In Austria, the country where our educational case study was conducted, and in the German-speaking countries in general, this relevance of basic logic is particularly evident. Here, basic knowledge of logic and logical operations is not only crucial in mathematics, science and computing but is found in many other university curricula and vocational fields that are not per se associated with mathematics. For example, basic logic is a compulsory part of the curriculum in economics (https://friedolin.uni-jena.de/qisserver/rds;jsessionid=D6FF041661EAC277A63205E55219A71A.worker32?state=wtree&search=1&root120192=473737%7C472876%7C472594%7C472149%7C472583&trex=step), in digital communication and marketing (https://online.fhwien.ac.at/fhwienonline/wbLv.wbShowLVDetail?pStpSpNr=230291&pSpracheNr=) or in linguistics (https://hpsg.hu-berlin.de/~stefan/Lehre/S2012/as-logik.html). Since basic logical knowledge is part of both the curricula of mathematics-related degree programmes and many curricula of other programmes, students should develop a logical knowledge framework already in secondary school. As logical foundations are only part of the curriculum in the 9th grade (AHS) in Austria, high-quality and engaging teaching on this topic is of particular importance.
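To make the Logifaces check concrete, the sketch below expresses one possible matching test as a basic logical operation (an AND of equality comparisons), in the spirit of the formulas students built in MS Excel. It is written in Python purely for illustration; modelling a stone by its three vertex heights and requiring equal heights along the shared face are simplifying assumptions, not the exact rule used in the classroom material.

```python
# Minimal sketch (assumed data model): a Logifaces stone is represented by the
# heights of its three vertices, e.g. (1, 2, 3).

def stones_match(stone_a, stone_b, shared_a=(0, 1), shared_b=(0, 1)):
    """Return True if the two stones would form a smooth joint surface.

    shared_a and shared_b give the indices of the vertices that lie on the
    face the two stones would share (an assumption made for illustration).
    """
    # Basic logical operation: AND of two equality comparisons, analogous
    # to an Excel formula such as =AND(A1=B1, A2=B2).
    return (stone_a[shared_a[0]] == stone_b[shared_b[0]]) and (
        stone_a[shared_a[1]] == stone_b[shared_b[1]]
    )

# Example: the heights match along the first two vertices in the first case only.
print(stones_match((1, 2, 3), (1, 2, 1)))  # True
print(stones_match((1, 2, 3), (3, 2, 1)))  # False
```

In a spreadsheet, the same test collapses to a single cell formula built from comparison operators and the AND function, which is precisely the kind of basic logical operation the study targets.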
Using technologies for learning logical foundations Considering technologies in our educational case study on learning logical foundations is supported by several educational studies which have shown that a technology-enhanced learning environment could be a fruitful setting for learning logical foundations. For example, in an educational case study, Kabaca (2013) used technologies to learn the logical operations AND and OR. By combining technologies and logical operations, a holistic learning environment could be developed in which technologies made it possible to investigate whether the solutions developed were correct. Furthermore, Celedón-Pattichis et al. (2013) illustrated in their educational study that logical operations and technologies are a combination that could also inspire underrepresented groups to pursue STEM. In our case, students with a migrant and a low socio-economic background form the group of underrepresented students. For our educational case study, the findings of both Kabaca (2013) and Celedón-Pattichis et al. (2013) are essential: on the one hand, students in our educational experiment should be enabled to use technologies to investigate whether their considerations regarding the matching of Logifaces-stones are correct, i.e. whether two Logifaces-stones form a smooth surface. On the other hand, our educational case study aims to investigate a technology-enhanced learning environment that should motivate many students to work with and learn logical operations. The fact that both mathematically interested and less interested students might need logical operations at a later stage in their educational careers or in their professional life justifies the focus of our educational case study on learning logical operations in a technology-enhanced learning environment using real-world objects. Another argument for combining technologies and explorative learning is that this approach to learning could also promote students' meta-competencies, such as linguistic or social competencies. Clarkson (2003) was able to illustrate in her study that learning logic, and also logical operators, can enhance students' language skills. The fact that students first have to organise their internal logical thinking and then communicate it through language to their classmates or teachers supports the learning of logic and logical operators. To promote this meta-competence in our educational case study, students worked in groups of two or three. Research goal and question of our study Since our educational case study was very limited in time, we could not focus on opportunities for potential learning gains for the students or compare the design of our technological learning environment with other learning environments. Instead, the goal of our educational case study was to investigate how real-world objects and technologies should be linked in mathematics learning to motivate students. This focus on students' motivation when learning basic logical competencies led us to the research question: Which design elements of a learning environment in which real-world objects and the use of technologies are linked are essential for students to learn basic logical competencies? Our educational case study To investigate which design elements are essential for students when learning basic logical competencies in a learning environment in which real-world objects and technologies are linked, we conducted our educational case study in a Vienna secondary school located in the city centre.
In defining basic logical competencies in our study, we have used the definition of mathematical competency by Niss and Højgaard (2019) as a guideline. Consequently, we interpret basic logical competencies as the insightful readiness of a student to react appropriately to a specific type of mathematical-logical challenge in given situations. The participants of our study Since the distance from the school to the students' home is a central factor in the admission of students to schools, it is assumed that the majority of our student participants live close to the city centre and have a high socio-economic background, as the residential area near the centre is rather expensive. The high socio-economic background of the students leads to the assumption that students are familiar with working and learning with technologies in their homes, as opposed to using technologies for leisure without learning support. The socio-economic gradient of the PISA study (OECD 2015) also supports this assumption. Following the socio-economic gradient, there is a positive correlation between economic and social status on the one hand and the competence levels achieved by students in the fields of science and technologies on the other hand. Three groups of learners with a total of 47 students participated in our educational case study. The learners attended the 9th grade and were between 13 and 15 years old. Our educational case study comprised four lessons per group, i.e. two double lessons each. In the teaching units, students formed groups of two or three. Each group of two or three had a Logifaces set with 16 stones and a computer with Internet access. The implementation of our study A young teacher led the units, and a researcher was present in each unit. The researcher observed the learning activities of the students and took notes on the lessons, and at the same time offered help if the students had problems. In this support of the students, the researcher also conducted mini-interviews with the students to discover what was causing problems. After each double lesson, written feedback was collected from all students. When giving written feedback, students were also asked to record their satisfaction or dissatisfaction with the design elements of the lesson. Students were asked to pay particular attention to interaction with classmates and their teachers, task communication, task design and teacher expectations. Methodological framework To identify which design elements could be relevant to students when learning basic logical competencies in a technology-enhanced and real-world object based learning environment, we conducted an educational case study. Using case study principles to reach our research goal Since using case studies has a long tradition in mathematics education research for examining students' solution processes and methods (e.g. Cobb 1986), this research method should also be appropriate for our study. Furthermore, as case studies can be used not only to investigate solution processes and methods in problem solving but also to explore students' emotions when solving problems in mathematics classrooms (Eynde and Hannula 2006), using case study principles should provide valuable results for our research aims. Our study focused on students solving a particular problem (developing an MS Excel program to solve the Logifaces problem), and our research goal was to explore which design elements could be relevant for students in technology-enhanced and real-world object based learning environments.
According to Cohen et al. (2007), case studies require a clearly defined, limited system of real people in real situations experiencing a specified intervention. This limited system of real people in real situations should extend the understanding of concrete ideas and interventions beyond abstract theories. In this study, the limited system was defined as three groups of students throughout four teaching units. The situation to be investigated was defined as the students of these three groups, or more precisely students' needs and requirements concerning a technology-enhanced learning environment based on real-world objects when learning basic logical competencies. The intervention consisted of students learning basic logical competencies using Logifaces-stones and MS Excel. According to the work of Yin (1984), our educational case study can be characterised as an explorative case study. The explorative character of our educational case study arises because our study aims to develop hypotheses regarding design elements of learning environments in which real-world objects and technologies are linked. According to Cohen et al. (2007), among others, participatory observations and post-observation recordings are data collection methods that generally apply to case studies and are specifically appropriate for our case. Participatory observation was selected as the data collection instrument because a researcher was present in all teaching units and also interacted with the students when needed. The interactions of the researcher with the students also resulted in ongoing mini-interviews (Bakker and van Eerde 2015). The mini-interviews always lasted less than 3 minutes and were intended to help clarify why students encountered difficulties. The researcher made observational recordings immediately after the occurrence of any phenomena or after conducting mini-interviews. The data collected during the lessons were supplemented with final written feedback from students after each double lesson. According to Kane and Staiger (2012), collecting supplementary observation data through student feedback should lead to an increase in educational quality. Written feedback was chosen as a data collection tool in order to gather feedback from all students and to make it clear that their feedback could not be traced back to them. By making written feedback untraceable, it could be expected that the honesty of students' feedback was increased. Using grounded theory approaches when collecting and evaluating research data When collecting and evaluating research data, we applied techniques and principles of grounded theory approaches (GTA). In our study, we followed the constructivist interpretation of GTA (Charmaz 2006) and a GTA interpretation according to Strauss and Corbin (Khan 2014). A constructivist interpretation of GTA and a GTA interpretation according to Strauss and Corbin mean, on the one hand, that the previous knowledge of researchers and the current scientific body of knowledge should be included in the development of theories and hypotheses. On the other hand, this interpretation of GTA implies that any hypothesis or theory developed in the course of research depends on the perspectives of the researchers and the cases under investigation. This constructivist interpretation of GTA was particularly relevant to our exploratory educational case study because, on the one hand, the researchers could not be described as neutral, as they sometimes took on participating and supporting roles.
On the other hand, it must be assumed that theories and hypotheses on design elements of technology-enhanced and real-world object based learning environments developed in our exploratory educational case study would have been different if our study had been conducted with other classes, at a different time, or at other schools. According to Cohen et al. (2007), results or hypotheses that depend on the framework conditions of a study are a specific feature of case studies. However, if the conditions and frameworks of the study or case are described in detail, theories or hypotheses developed in a case study can be applied to similar cases, phenomena or situations. Coding techniques of grounded theory approaches In analysing the research data and developing theories and hypotheses, we followed a four-part approach, namely: 1) screening of new data, 2) open coding, 3) axial coding, and 4) selective coding. We followed Ritchie's (2012) approach to initially view the new data. Initially viewing new data means that, in a first step, all researchers read the newly collected raw data. This repeated reading of the raw data was intended to give all researchers an overview of the current status of our educational case study and to allow initial topics to be derived from the raw data. In the next step, the newly collected raw data were transcribed and then coded using QDA software. Our approach to coding is based on the theoretical guidelines and practical applications of Breuer et al. (2009), Charmaz (2006) and Mey and Mruck (2011). Open coding produced open codes at increasing degrees of abstraction (Table 1, columns 1 and 2). The open codes of a higher degree of abstraction were then used for axial coding. By coding the open codes axially, a synthesis of the research data should again be achieved. In axial coding, open codes were grouped around a central open code (phenomenon) according to causes, activities and consequences (see Fig. 2). The open codes in the area of activities were then grouped and used as categories for selective coding. For selective coding, these categories were linked and dependencies were identified. Identifying dependencies allowed us to develop the core categories, i.e. the design elements relevant for students in technology-enhanced and real-world object based learning environments for learning basic logical competencies: (A) using open tasks with multiple solutions, (B) just-in-time feedback and (C) novelty effects in the learning process. Student quotes given in the Results section have been translated from German to English by us. Individual student quotes are accompanied by information on whether the feedback was collected via written feedback [F] or a mini-interview [I]. If feedback was collected via mini-interviews, the composition of the student group in terms of gender is also given; here, g stands for girl and b for boy. Results Examining and analysing students' feedback to explore which design elements could be relevant when learning basic logical competencies in a learning environment where real-world objects and technologies are combined demonstrated that (A) using open tasks with multiple solutions, (B) just-in-time feedback and (C) novelty effects in the learning process are central to the students in our exploratory educational case study. In developing the design elements of the learning environment that are central to students in our study, we did not focus on feedback related to the learning environment per se, i.e. combining technologies, using Logifaces and the intended learning of mathematics.
Instead, we focused on the design elements that are central to students in such a learning environment. As design elements central to students, we qualified those items of student feedback that could be central to students' learning or motivation and that could support or hinder learning processes. Using open tasks with multiple solutions A key design element for students in our exploratory educational case study was that students were able to take their own learning paths in solving the problem, using their own strategies. The students described the associated thinking and experimenting with their strategies or solutions as very motivating. [F] It was good that you were able to work at your own pace and that you could work on your own idea and not just have to meet specifications. [I, b-b-b] The fact that you can or have to think for yourself before working is really good. In this process of learning, students described the task as a brainteaser rather than a classical mathematics task. According to the teacher's feedback, working on this brainteaser activated students much more during lessons than usual. In this context, the teacher mentioned that high involvement could be observed in mathematical high-achievers as well as in mathematical low-achievers. An increase in the activity and motivation of otherwise mathematically below-average students is reflected in the following quotes. [I, b-b] It is really cool that we can learn and do brainteasers at the same time. [F] I liked the fact that we were able to do more tinkering and puzzling around in the classroom than just learning normally in class. When working on open tasks with multiple solutions, it was positive for students that new mathematical or technological knowledge or competencies could be associated with achieving a concrete goal. Students emphasised that it was positive that only those new concepts were learned that could be used immediately when applying a new solution strategy. [I, g-g] We learn exactly just the things we need to solve the problem; we would not have learned these things otherwise, would we? A key design element for students in our exploratory educational case study was that tasks were open-ended and there were several possible solutions. Working on these open tasks was described by students as a brainteaser rather than an everyday mathematics lesson. When processing these brainteasers, it was crucial for the students that only those new concepts were introduced that were needed in the specific case. Just-in-time feedback When experimenting with Logifaces-stones and Excel, it was vital for the students in our study that they received immediate feedback or knew that they could get feedback at any time. Students emphasised that they found it enjoyable that a teacher and another person they could ask were present in all lessons. [F] A good thing about the lessons was that you could always ask the teacher or the other person if your concept is right and what the commands are to realise your concept. In addition to the teacher's feedback, students also pointed out in their feedback that classmates as feedback providers were an essential element of our case study. In connection with classmates as feedback providers, it was emphasised that classmates were consulted both when problems were acute and when feedback on new strategies or ideas for solving the problem was needed. [I, b-b] Working or learning in a group is really great. We only argue from time to time about how we will implement our ideas.
Do not think about it, it is quite normal for us. [F] It was good that in a first step you could immediately ask your seat neighbour if you did not understand something. In addition to feedback on concrete strategies or ideas for solving the mathematical-logical problem of our case study, it was vital to students that the learning goal and the task communication were clear and that questions concerning the learning goal or the task could be asked at any time. [I, g-g-g] It was really good that you explained at the beginning exactly what we should do, and thank you that we can ask you again because we do not know all the details. [F] If the task and the images related to the task had remained visible on the beamer for the whole lesson, that would have helped. But as we could always ask you, it was not that bad. However, not only the teacher or classmates were described by the students as sources of feedback. The fact that Logifaces-stones and Excel could be used to test developed strategies for solving the mathematical-logical problem was also described by the students as a means of feedback. [I, b-b] Experimenting with the stones and the program is cool, and you can immediately check if the program is correct. In summary, feedback is a key design element for students in our exploratory educational study. The decisive aspect for the students in our study was, on the one hand, that they could choose from a variety of feedback options. On the other hand, the students' feedback made it evident that, regardless of the type of feedback, it was important for the students that the desired feedback medium was available immediately when questions or problems arose. Thus, not only just-in-time learning but also just-in-time feedback was a central design element for the students of our exploratory educational study. Novelty effects in the learning process Students' feedback made it clear that what was new or different in our exploratory educational study was a key design element for the students in our study. Students often described the new or different in our study in an undifferentiated way, merely contrasting it with everyday teaching. [I, b-b] Today it is a real change compared to normal lessons; that is exciting. [F] What I liked about working with Logifaces was that it was really creative and different learning than usual. However, much of the students' feedback also related to concrete design elements of our study. An essential aspect for the students was the combination of mathematics learning with the use of Logifaces-stones and technologies. The surprise associated with this combination had a positive effect on the students' motivation, which can be found in the teaching notes as well as in the written feedback from the students: [F] I found working with the building blocks really interesting and I would not have thought at first that the building blocks could have anything to do with mathematics or informatics. In addition to combining mathematics, Logifaces-stones and technologies, new insights into the potential uses of Excel were a design element of our study that was remarkable for students. [I, b-b-b] These lessons reveal just how powerful Excel actually is ... I would not have thought that you can almost do coding with Excel. The feedback revealed that learning in our case study was always exciting and therefore motivating for students when something new or unexpected occurred during the learning process.
This new or unexpected element could refer to the design as a whole or to very specific elements of our exploratory educational case study. Discussion In our explorative educational case study, examining how real-world objects and technologies could be combined in mathematics learning, the analysis of students' feedback indicated that the key design elements in the learning process were (A) using open tasks with multiple solutions, (B) just-in-time feedback, and (C) novelty effects. The importance of using multi-step open tasks to improve the quality of mathematics teaching has already been identified by Carreira and Baioa (2018) and Heck (2010). Following Carreira and Baioa (2018), real-world problems, and therefore problems that are open-ended and allow multiple solutions, should lead to students using school and non-school knowledge and competencies in solving these tasks in mathematics lessons. Heck (2010) emphasises that one of the advantages of using real-world problems is that students learn like scientists when dealing with such problems. Likewise, treating open tasks with multiple solutions could be described as learning and researching like real scientists. The results of our study add to these findings that dealing with real-world or open-ended problems with multiple solutions could improve not only the quality of mathematics teaching but also the motivation of students. The findings of Noss and Hoyles (2010) and Noss et al. (1997) suggest that learning environments could be developed into laboratories by using technologies. In such environments, according to the feedback from students in our exploratory educational study, mathematical content could be explored experimentally and creatively. Furthermore, from students' feedback it could be concluded that this learning environment could increase students' motivation, which, in turn, should have a positive effect on students' learning outcomes. Similar to Tabach et al. (2006), students in our exploratory educational study showed, according to their feedback, considerable cognitive gains in learning logical operations. If students are to achieve significant cognitive gains in open and real-world problem-based learning environments, it was important for the students in our study to receive continuous feedback on their problem-solving strategies and help when the learning process had stalled; feedback on solution strategies involved using technologies and real-world objects. Using technologies and real-world objects as feedback tools means, following the feedback of the students in our study, that the developed Excel program and the Logifaces-stones were used to examine whether the logical operations had been used correctly. Furthermore, it was also essential for the students in our study to get personal feedback. The personal feedback included feedback from the teacher as well as feedback from classmates. This high demand for personal feedback from the students in our study is consistent with the results of Clarkson (2003) that learning logical operators, and the associated communication regarding logical operators, could also improve students' language skills. The importance of novelty effects in the learning process for the students in our study extends the findings of Carreira and Baioa (2018) and Heck (2010) as well as Noss and Hoyles (2010) and Noss et al. (1997).
For students in our study, it was essential to be able to learn with real-world problems like a real scientist (Carreira and Baioa 2018; Heck 2010) and to expand their knowledge experimentally and creatively in laboratory-like learning environments (Noss and Hoyles 2010; Noss et al. 1997). It was equally important to students that new and unexpected insights could be gained during these learning processes. Analysing feedback from students in our exploratory educational study highlighted the importance for students of addressing real-world and open problems in class. In dealing with these real-world and open problems, students wanted to be able to use their own ideas and strategies and to experiment with them. In order to facilitate experimentation with their ideas, it could be fruitful to combine technology-based and real-world object based learning environments. When experimenting with their ideas and using self-developed strategies, the students in our study were motivated by the fact that they did not only gain insights that could equally have been gained in a teacher-centred classroom. The students in our study emphasised that it was central to them that new and unexpected things could be discovered while learning. In order to discover new and unexpected things, it was vital for the students in our study that they had confidence in the learning process. In order to have confidence in the learning process, it was important for the students in our exploratory educational study to be able to receive feedback when needed. This feedback could be either personal or technology based. Conclusion and implications for education To find out which design elements are essential for students to learn basic logical competencies in a learning environment where real-world objects and technologies are linked, we conducted an exploratory educational study. Analysing students' feedback demonstrated that using open tasks with multiple solutions, just-in-time feedback and novelty effects in the learning process were key for the students in our study. In this context, it was interesting to note that those activities that were cognitively most challenging were most often positively mentioned by the students. It was the 'puzzling around' and experimenting that were described by the students as particularly motivating elements of our study. In this puzzling around and experimenting with their solution strategies, it was also crucial for the students in our study that new or unexpected things could be discovered in these challenging processes. According to these results of our exploratory educational study, essential design elements of a productive mathematics learning environment are that students have challenging tasks to solve, in the course of which new and, above all, unexpected things can be discovered. To ensure that the learning process in such environments does not become overstraining, it was crucial for the students in our study that there was a rich repository of feedback possibilities. What was interesting in terms of feedback possibilities was that using technologies, and the associated testing of solutions, was described by the students as feedback. This use of technologies could also be helpful in other mathematics learning settings and could increase students' confidence in mathematics learning, as well as make such settings accessible to a wider and/or different skill set of teachers, for example by shifting part of the feedback task from the teacher to the technology.
Challenging and demanding tasks in combination with real-world objects and technologies, as well as the provision of personal and technological feedback, make it evident that the designers and implementers of such learning settings currently need to be highly qualified. The designer and implementer of such learning settings is usually only one person: the teacher. To be able to use the potential of a mathematics learning environment based on real-world tasks, real-world objects and technologies in the best possible way, highly trained teachers are required. The specific tasks and requirements for such teachers in a learning environment such as the one in our exploratory educational study were not investigated in this study but will be the focus of our next research step. The intention is that this will allow us to identify teacher training requirements and also how, or whether, certain tasks, e.g. feedback, may be flexibly allocated between teacher and technology, making the learning environment accessible to a wider and/or different skill set of teachers. Compliance with ethical standards Conflict of interest Not applicable. Code availability Not applicable. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
X-ray Single Crystal Structure, DFT Calculations and Biological Activity of 2-(3-Methyl-5-(pyridin-2'-yl)-1H-pyrazol-1-yl) Ethanol A pyridylpyrazole bearing a hydroxyethyl substituent group has been synthesized by condensation of (Z)-4-hydroxy-4-(pyridin-2-yl)but-3-en-2-one with 2-hydroxyethylhydrazine. The compound was well characterized and its structure confirmed by single crystal X-ray diffraction. Density functional calculations have been performed using the DFT method with the 6-31G* basis set. The HOMO-LUMO energy gap, binding energies and electron deformation densities are calculated at the DFT (BLYP, PW91, PWC) level. The electrophilic f(−) and nucleophilic f(+) Fukui functions, and also the electrophilic and nucleophilic Parr functions, are well adapted to finding the electrophilic and nucleophilic centers in the molecule. The title compound has been tested for its DPPH radical scavenging activity, which is relevant to aging processes and to anti-inflammatory, anticancer and wound healing activity. The compound is also found to have significant antioxidant activity, probably due to its ability to donate a hydrogen atom to the DPPH radical. Pyrazoles associated with pyridine groups show higher chelation ability [15][16][17][18]. This aptitude is mainly due to the presence of diversified sp2-hybridized nitrogen donors, with the involvement of the geometry and the nature of the ligands. This chelating activity is accompanied by efficient biological activity of these drugs as insecticides [19] and fungicides [20]. On the other hand, compounds with hydroxyl substituents are known to show good pharmaceutical [21,22] and antimicrobial properties [23]. Indeed, the presence of hydroxyl groups on the aromatic ring makes these products antioxidants that can scavenge free radicals. The hydroxyl radical is an extremely reactive oxidizing radical that can react with most biomolecules in its vicinity, including proteins, lipids and DNA, at diffusion-controlled rates. Hydroxyl radicals can be produced in vivo by the homolytic breakage of oxygen-hydrogen bonds in water driven by the continuous exposure to background ionizing radiation. Taken together, all this information suggests that pyridylpyrazoles bearing hydroxyl substituents may be a key moiety in the treatment of diseases related to free-radical damage. However, there are only a few studies concerning pyridylpyrazole compounds. It is therefore interesting to increase the diversity of compounds containing these efficient and versatile ligands. Herein, we report on the synthesis of a pyridylpyrazole derivative with a hydroxyl substituent in the side chain. Its X-ray single crystal structure was determined and DFT calculations are reported. The radical scavenging and antioxidant activities of the compound were also tested. Chemistry The target compound, based on a pyridylpyrazole core with a hydroxyl-functionalised arm, was prepared in two steps (Scheme 1). The first synthesis step is the preparation of the (2Z)-3-hydroxy-1-(pyridin-2-yl)but-2-en-1-one ligand 1 [24]. The reaction was carried out using ethyl 2-pyridinecarboxylate and acetone as the nucleophile under mild Claisen condensation conditions (room temperature, two days), using toluene as solvent and sodium metal as the base. This procedure afforded exclusively the target product in its enol tautomeric form. The second step involves condensation of the ligand 1 with 2-hydroxyethylhydrazine using our previously described method [25]. This reaction affords the desired compound 2 in 54% yield as the major product.
All the structures are in perfect agreement with their spectroscopic and analytical data. Each molecule of the title compound is constituted of two rings (Figure 1): a pyridyl bonded through a 1.472(3) Å C-C junction to a pyrazole substituted by methyl and ethanol groups in the β-position. These two rings are essentially planar, as reflected by the rms deviations of the fitted atoms, which are respectively 0.0075 and 0.0054 Å, and they are nearly coplanar, as indicated by the dihedral angle value of 10.65(2)°. The bond lengths and angles between atoms in each ring are comparable to those found in the literature [26]. Molecules are pairwise linked through N2···H-O hydrogen bonds of 2.855(2) Å length into centrosymmetric dimers (N2···H-O angle 179.0°), as shown in Figure 2. In each dimer, the rings are arranged parallel to the (101) plane. The dimers are also involved with homologous units in weak intermolecular van der Waals interactions of 2.636 Å (O···H2) and 2.709 Å (O···H9b), as represented in Figure 3. In addition to the short N2···O hydrogen bonds that participate in dimer formation, some weaker interactions occur between dimers. Each dimer is further involved in interactions with two neighbours through N2-C7 contacts of 3.746(2) Å (N2···H7-C7 172.4°) and also with four additional molecules through contacts that involve the oxygen atom at the end of the ethyl branch. The distances are respectively 3.327(2) and 3.635(3) Å for the O-C9 and O-C2 contacts, and the angles are 131.7° and 162.4° for O···H9-C9 and O···H2-C2.
On the other hand, each N1 atom participates in an intramolecular interaction with C10 (N1-C10 is 2.919(2) Å and N1···H10-C10 is 114.4°). All these intermolecular interactions that result from molecular packing in the unit cell contribute to the stabilization of the compound in its solid state. DFT Calculations Density functional theory (DFT) has been used to investigate the molecular geometry and the electron distribution. The geometric configurations have been optimized using the program DMol3. Results are obtained at different DFT levels, within the GGA generalized gradient approximation with the BLYP or PW91 functional as well as within the LDA local density approximation with the PWC functional [27][28][29]. In all cases, the DNP double numerical plus polarization basis set has been used. Full geometry optimizations by minimization of the total energy have been carried out, starting from four different conformations of the molecule. While conformers b and c (Figure 4), with the pyridine ring rotated by 90° from its position, revert into the experimental conformation a, the rotamer d (180° rotation of the pyridine ring) retains its initial geometry.
Nevertheless, the calculated energy of rotamer d is higher by ~3 kcal/mol than that of the experimental conformation, indicating a lower stability. This should be related to the intramolecular interaction of van der Waals type between the C10 and N1 atoms that occurs in the experimental conformation. The geometry has been optimized for an individual molecule and for a dimer, considering molecular fragments isolated from the crystal structure as starting models. Similar calculations considering the complete unit cell packing have been performed to describe the crystalline solid state. Bond lengths, bond angles and torsion angles are of interest for the structure analysis. Some selected experimental parameters are given in Table 2 together with the calculated values for the molecule, the dimer and the crystalline state. Their variations give an evaluation of the structural packing constraints on the molecular geometry. Particularly interesting are the respective positions of the pyridine and pyrazole rings within a molecule. We note that these aromatic rings are not strictly coplanar, and the deviation from planarity can be measured by the dihedral angle between their mean planes. Values calculated for the corresponding N1-C5-C6-N3 angle are quite different in the molecule and in the crystal. At the same time, the rather long N1-N3 intramolecular distance is significantly affected by crystal packing. For example, in the GGA PW91 calculation, the dihedral angle is reduced by hydrogen-bonding dimerization from 20.47° to 16.86° and then reaches the experimental value of 11.8° in the solid state, while the N-N distance is notably shortened (from 2.948 to 2.931 Å). It can be concluded that molecular packing in the solid state causes a flattening of the molecule that would allow conjugation effects to be extended. Nevertheless, the experimental C5-C6 bond length of 1.472 Å, slightly longer than the computed distances, is of the order of those observed in similar compounds and does not provide much information about inter-ring conjugation. According to FMO theory, the form and position of the frontier orbitals are relevant for the reactivity of a molecule [30]. Whilst the lowest empty LUMO is highly π*-antibonding at the two rings, the highest filled HOMO displays a π-bonding character mainly localized at the pyrazole ring.
From all calculations (molecule, dimer or solid state) at the different DFT levels of theory, these orbitals are found to be separated by an energy gap of 3.4 eV, which is to be compared with the experimental value of 4.06 eV from UV experiments (absorption peak at 304 nm).
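As a quick consistency check (our arithmetic, using the rounded constant hc ≈ 1240 eV·nm rather than the value the authors may have used), the optical gap follows from the absorption wavelength via the Planck relation:

```latex
E = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV}\cdot\text{nm}}{304\ \text{nm}} \approx 4.08\ \text{eV}
```

which is close to the quoted 4.06 eV and confirms that the calculated Kohn-Sham gap of 3.4 eV underestimates the measured optical gap, a tendency commonly reported for LDA and GGA functionals.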
On the other hand, a particular stacking is observed along the [hkl] direction, with the pyridine ring of a molecule superimposed on the pyrazole ring of a neighbouring molecule and vice versa. In addition to these face-to-face (parallel) π/π interactions, edge-to-face (T-shaped) π/π interactions occur between a pyridine ring and the pyrazole ring of a neighbouring molecule, as represented in Figure 5. The plane-to-plane distance between two neighbouring molecules is about 3.5 Å, but no bonding density can be evidenced in this region. Instead, significant densities are computed at the N2-O atomic pairs involved in hydrogen bonding and very slight densities at the C10-N1 and C2-O atomic pairs involved in intermolecular van der Waals interactions. This can be visualized with the 3D contour of the total density represented in Figure 6 at the 0.15 iso-level. Figure 6. Representation of the isosurface electron density (3D volumic contour) mapped with the deformation density (red indicates electron localisation, blue indicates electron losses, and green/yellow represents values halfway between the two extremes). The deformation density, which is the total density with the density of the isolated atoms subtracted, is mapped on the isosurface of the total density and shows high positive values (red), indicative of electron localization, while low negative values (blue) point out electron losses. Looking at this representation (Figure 6), in addition to high bonding populations at intramolecular bonds and at the intermolecular hydrogen bonds, it is evident that the highest values are calculated at the electronegative atoms and correspond to essentially non-bonding densities. Deformation densities and the molecular electrostatic potential have been shown to be strongly related, and both may be used to discuss the reactivity of the compound [31]. The electrostatic potential calculated over the whole molecule would be a good tool to evaluate the regiochemistry, especially for reactions dominated by electrostatic effects. More informative are the Fukui indices, which are computed from the electronic density and give a measurement of the local reactivity: the higher the Fukui indices, the higher the reactivity. The Fukui functions are defined as derivatives of the electron density with respect to the number of electrons at constant external potential. Then, for an optimized geometry, the changes in the calculated density when adding or removing an electron point out the reactive regions of the molecule. The electrophilic f(−) and nucleophilic f(+) Fukui functions can be condensed to the nuclei by the use of a partitioning scheme of the atomic charge such as Mulliken [32] or Hirshfeld [33].
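For completeness, the standard finite-difference (condensed-to-atoms) forms of these functions can be written as below; here ρ(r) is the electron density, v(r) the external potential, N the number of electrons, and p_k(N) the electron population of atom k obtained from a Mulliken or Hirshfeld partition. This is a textbook restatement rather than an equation quoted from the article.

```latex
f(\mathbf{r}) = \left( \frac{\partial \rho(\mathbf{r})}{\partial N} \right)_{v(\mathbf{r})}, \qquad
f_k^{+} = p_k(N+1) - p_k(N), \qquad
f_k^{-} = p_k(N) - p_k(N-1), \qquad
f_k^{0} = \tfrac{1}{2}\big[ p_k(N+1) - p_k(N-1) \big]
```

A high f_k^− identifies sites prone to electrophilic attack (here the pyrazole N2), a high f_k^+ identifies sites prone to nucleophilic attack, and f_k^0 identifies sites favoured for radical attack, matching the usage in the text.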
Being directly related to electrophilicity and nucleophilicity, high values of these condensed functions reflect the susceptibility of an atom to electrophilic or nucleophilic attack. Several recent studies have proved the relevance of such analyses, which have been successfully used to rationalize regioselectivity [34][35][36][37]. An easy graphical view of the regioselectivity of the molecule is provided in Figure 7 by the representation of the f(−) and f(+) Fukui functions projected onto the molecular electrostatic potential. Atoms with high f(−) values are the most likely to suffer an electrophilic attack, such as the N2 atom of the pyrazole ring, which is also the best place for a radical attack, as indicated by the f(0) function (not represented).

Figure 7. Representation of the f(−) and f(+) Fukui functions projected onto the molecular electrostatic potential; lower values correspond to blue zones, whereas green/yellow represents a potential halfway between the two extremes.

Nevertheless, some recent works devoted to establishing the regioselectivity in polar reactions report that the use of Fukui functions is not the best choice and that the local electrophilicity may constitute an improved alternative for describing reactivity [38]. On the other hand, it has been established [39,40] that the electrophilic and nucleophilic Parr functions are well adapted to finding the electrophilic and nucleophilic centers in a molecule. These functions, respectively Pk+ and Pk−, are considered powerful tools for the study of organic reactivity. They can be obtained from the analysis of the atomic spin densities of the radical anion and the radical cation. Thus, spin-unrestricted calculations have been performed for the molecule (in its optimized neutral geometry) bearing either a +1 or a −1 charge in order to compute the spin densities (difference between the α and β electron densities) at each atom. The atomic spin density spatial distribution can be visualized as an isosurface 3D contour and also by checking the calculated values at each atom. The greatest values of the Mulliken and Hirshfeld atomic spin densities are reported in Table 3 together with the graphic representation at the 0.5 level. The results show that the highest electrophilic area is located around the C5 and N1 atoms, while the highest nucleophilic center is found at the N2 atom. In the present case, the Parr functions corroborate quite well the results predicted on the basis of the Fukui functions.

The nucleophilic character of the N2 pyrazolic atom and its reactivity are justified by the formation of a strong N2···HO (dimer) hydrogen bond. Conversely, atoms characterized by a high Pk+ value are susceptible to undergo a nucleophilic attack; in the molecule, these are mainly located in the pyridine ring (C5, N1, C2, C3). The strength of all the intermolecular interactions involved in the solid-state packing can be evaluated using the binding energy. This energy, associated with the cohesion of the structure, corresponds to the energy needed to dissociate the molecule or the crystal into atoms at infinite separation. The strength of the hydrogen bonds can be evaluated from a comparison of the binding energies calculated for the molecule and for the dimer. Since the binding energy is found to increase from the molecule to the crystal, it can be concluded that the crystal packing is stabilized by the N2-HO hydrogen bonding and by the C10-N1 and C2-O van der Waals interactions.
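As a complement to the sketch above, the condensed Parr functions discussed in this section can be read directly from the computed atomic spin densities of the radical anion (Pk+) and radical cation (Pk−); the sketch below uses placeholder spin densities, not the Mulliken or Hirshfeld values of Table 3.

```python
# Minimal sketch of condensed Parr functions taken as the per-atom spin densities
# (alpha minus beta populations) of the radical anion and radical cation computed
# at the optimized neutral geometry. All values below are placeholders.

def parr_functions(spin_radical_anion, spin_radical_cation):
    """Return {atom: (P_plus, P_minus)}: P+ marks electrophilic centers
    (from the radical anion), P- marks nucleophilic centers (from the radical cation)."""
    return {atom: (spin_radical_anion[atom], spin_radical_cation[atom])
            for atom in spin_radical_anion}

# Hypothetical spin densities for three atoms.
spin_anion  = {"C5": 0.42, "N1": 0.27, "N2": 0.04}   # molecule + 1 electron
spin_cation = {"C5": 0.07, "N1": 0.10, "N2": 0.46}   # molecule - 1 electron

for atom, (p_plus, p_minus) in parr_functions(spin_anion, spin_cation).items():
    print(f"{atom}: P+ = {p_plus:.2f}, P- = {p_minus:.2f}")
```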
DPPH Radical Scavenging Activity

In the present study, the possible radical scavenging activity of the pyridylpyrazole derivative was examined. It is known that compounds that include hydroxy substituents represent a significant source of reducers able to provide an electron or a hydrogen radical to stabilize the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical in solution. In this case, the radical scavenging activity has been studied by measuring the decrease in absorbance in order to assess the capacity of the studied organic compound. All results are expressed for each sample as a % activity relative to the BHA (a mixture of 2-tert-butyl-4-hydroxyanisole and 3-tert-butyl-4-hydroxyanisole) and ascorbic acid references, which are strong DPPH radical scavengers. Table 4 reports the absorbance of DPPH for compound 2, which is characterized by a moderate radical scavenging activity compared to the standards. As shown, at a concentration of 100 ppm the compound exhibits a significant radical scavenging activity of about 73%. This considerable activity should probably be attributed to the high radical-scavenging property of the hydroxyl substituent; it is promising for the treatment of diseases caused by free radicals and provides more information for designing novel drugs. Moreover, the IC50 value in the antioxidant assay was calculated from the plotted curves using regression analysis (Figure 8). The result shows a moderate antioxidant activity (IC50 value 31.8 µg/mL) compared to the ascorbic acid (IC50 value 2.82 µg/mL) and BHA (IC50 value 6.8 µg/mL) standards.

Ferric Reducing Antioxidant Power (FRAP) Assay

The FRAP assay is a simple, inexpensive and reproducible process based on the ability of a sample to reduce the ferric Fe3+ ion to the ferrous Fe2+ ion. Therefore, the ability of a compound to transfer an electron is a significant indicator of its antioxidant potential [41]. Figure 9 shows the optical density (OD) read at 700 nm as a function of the sample concentration, with ascorbic acid taken as the standard. Indeed, the reducing power of both the sample and the standard increases with the concentration.

General Information

All solvents and other chemicals (purity > 99.5%, Aldrich, Saint-Louis, MO, USA) were of analytical grade and used without further purification. An Xcalibur CCD diffractometer (Oxford Diffraction, Abingdon, Oxfordshire, UK) was used to perform the X-ray analysis on a parallelepiped colourless sample. Elemental analyses were performed by the Microanalysis Centre Service (CNRS, Lille, France). Melting points were measured using a Büchi 510 m.p. apparatus (LCAE, Oujda, Morocco). 1H- and 13C-NMR spectra were recorded on an AC 300 spectrometer (CNRST, Bruker, Rabat, Morocco) (300 MHz for 1H and 75.47 MHz for 13C spectra). A JMS DX-300 mass spectrometer (JEOL, Rabat, Morocco) was used for the determination of molecular weights. Infrared (IR) spectra were recorded on a Shimadzu infrared spectrophotometer (LCAE, Oujda, Morocco) using the KBr disc technique.

Synthesis of Compound 2

To a solution of 1-pyridin-2-yl-butane-1,3-dione (1, 1.5 g, 9.2 × 10−3 mol) in absolute ethanol (50 mL), cooled at 0 °C, was slowly added a solution of 2-hydroxyethylhydrazine (0.7 g, 9.2 × 10−3 mol) in absolute ethanol (10 mL). The mixture was stirred at room temperature for 2 h. Then, the solvent was removed under reduced pressure and the obtained residue was purified on silica gel (20% ethanol/80% ether) to give 2 in 54% yield as a brown solid. The solid was recrystallized from methanol to give colourless crystals of 2 suitable for X-ray analysis.

X-ray Crystallographic Analysis

A single crystal of the title compound was selected for X-ray structural analysis and mounted on an Oxford Diffraction XCalibur CCD diffractometer [41] using Mo-Kα radiation (λ = 0.71073 Å). Unit cell dimensions with estimated standard deviations were determined by least squares from the whole reflection data set. A total of 15,317 reflections was collected in the θ range 3.30–26.14°, from which 2114 are independent and 1616 satisfy the intensity criterion I > 2σ(I). The intensities were corrected for Lorentz and polarisation effects. The main crystal data collection and refinement parameters are listed in Table 1. Data reduction was carried out using the Oxford Diffraction CrysAlis Red 171 program [42]. The structure was solved by direct methods in the monoclinic space group C2/c and refined by full-matrix least squares using the SHELXS-97 and SHELXL-97 program packages [43]. The weighting scheme employed was w = 1/[σ²(Fo²) + (0.0882P)² + 0.3970P], where P = (Fo² + 2Fc²)/3. Except for the hydrogen atom of the OH group, which was located in the difference Fourier map and freely refined, the H atoms were included at calculated positions and refined in riding mode. Molecular graphics were drawn with ORTEP-3 for Windows [44] and Mercury 3.8. The complete crystallographic data have been deposited (CCDC-1487543) and can be obtained free of charge from the Cambridge Crystallographic Data Centre via http://www.ccdc.cam.ac.uk/data_request/cif.

DPPH Radical Scavenging Activity

The radical scavenging activity against the stable 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical was examined using the method reported by Moure et al. [45], with the activities of the known antioxidant agents BHA and ascorbic acid measured for comparison. Briefly, 0.1 mL of sample (tested at concentrations ranging from 5 to 100 µg/mL) was added to 1.9 mL of an ethanol solution of the DPPH radical. After 30 min of incubation in the dark at room temperature, the absorbance was read against a blank at 517 nm. The DPPH radical scavenging activity was expressed as a percentage inhibition according to the following formula:

[(A_blank − A_sample)/A_blank] × 100 (1)

where A_sample is the absorbance of the solution containing the sample and A_blank is the absorbance of the DPPH solution. The IC50 values were calculated as the concentration of extract causing a 50% inhibition of the DPPH radical.
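As a worked illustration of Equation (1) and of the IC50 regression step, the sketch below processes a hypothetical set of absorbance readings; the numbers are placeholders, not the measurements behind Table 4 or Figure 8.

```python
# Minimal sketch: percent inhibition from Equation (1) and an IC50 estimate from a
# linear fit of inhibition versus log10(concentration). The absorbance values are
# invented placeholders, not the data reported for compound 2.
import numpy as np

def percent_inhibition(a_blank, a_sample):
    """Equation (1): [(A_blank - A_sample) / A_blank] * 100."""
    return (a_blank - a_sample) / a_blank * 100.0

a_blank = 0.82                                   # absorbance of the DPPH solution alone
concs = np.array([5.0, 25.0, 50.0, 100.0])       # sample concentrations, ug/mL
a_samples = np.array([0.70, 0.52, 0.38, 0.22])   # hypothetical absorbances at 517 nm

inhibition = percent_inhibition(a_blank, a_samples)

# IC50: concentration giving 50% inhibition, from the fitted dose-response line.
slope, intercept = np.polyfit(np.log10(concs), inhibition, 1)
ic50 = 10 ** ((50.0 - intercept) / slope)

print("Inhibition (%):", np.round(inhibition, 1))
print(f"Estimated IC50: {ic50:.1f} ug/mL")
```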
Ferric Reducing Antioxidant Power (FRAP) Assay

Ferric reducing antioxidant power was determined according to the method described by Oyaizu (1986) [46]. This method evidences the antioxidant behavior of reductants as they cause the reduction of the [Fe(CN)6]3− complex into [Fe(CN)6]4−. Briefly, 2.5 mL of aqueous sample (7.8–500 µg/mL) is mixed with 2.5 mL of phosphate buffer solution (0.2 M; pH 6.6) and 2.5 mL of potassium ferricyanide (1.0%), and the mixture is incubated at 50 °C for 30 min. Thereafter, 2.5 mL of trichloroacetic acid (10%) is added, and the whole is centrifuged at 3000 rpm for 10 min. For each concentration, 2.5 mL of the supernatant is mixed with 2.5 mL of distilled water and 0.5 mL of FeCl3 (0.1%). Absorbance is measured at 700 nm using a spectrophotometer. A higher absorbance of the solution indicates a greater reducing power of the compound. For comparison, ascorbic acid was used as the positive control.

Conclusions

Based on the experimental results, a pyridylpyrazole derivative bearing a hydroxyl-substituted arm group has been synthesized and its single-crystal XRD structure determined. Density functional calculations have been performed using the DFT method. The HOMO-LUMO energy gaps, binding energies and molecular electrostatic potential (MEP) were calculated. In the present case, the Parr functions corroborate quite well the results predicted on the basis of the Fukui functions. The title compound has been tested for its DPPH radical scavenging and antioxidant activities. The results show significant activities that are most probably due to the ability of the compound to donate a hydrogen atom to the DPPH radical.
Highly N-doped microporous carbon nanospheres with high energy storage and conversion efficiency

Porous carbon spheres (CSs) have distinct advantages in energy storage and conversion applications. We report the preparation of highly monodisperse N-doped microporous CSs through the carbonization of polystyrene-based polymer spheres and subsequent activation. The N-doped microporous CSs have a remarkably high N-doping content, over 10%, and a high BET surface area of 884.9 m2 g−1. We characterize the synergistic effects of the micropores and N doping on the energy storage performance of a supercapacitor electrode consisting of the CSs and on the performance in an electrocatalytic reaction of a CS counter electrode in a photovoltaic cell. The N-doped microporous CSs exhibit a maximum capacitance of 373 F g−1 at a current density of 0.2 A g−1, a high capacitance retention up to 62% with a 10-fold increase in current density, and excellent stability over 10,000 charge/discharge cycles. A counter electrode consisting of N-doped microporous CSs was found to exhibit superior electrocatalytic behavior to an electrode consisting of conventional Pt nanoparticles. These CSs derived from polymer spheres synthesized by addition polymerization will be new platform materials with high electrochemical performance.

Synthesis of Nitrogen-Doped Microporous Carbon Nanospheres and their Characterization. The synthesis of N-doped microporous CSs from emulsion-polymerized PS spheres is shown schematically in Fig. 1. The PS spheres were synthesized via the emulsifier-free emulsion polymerization of styrene monomers and subsequently crosslinked via Friedel-Crafts alkylation. Crosslinking was required to achieve a high carbon conversion. During the pyrolytic carbonization of the PS spheres, we heat-treated them in the presence of carbamide, resulting in N-doping along with carbonization. It has been reported that carbamide is thermally decomposed and deposited in the form of carbon nitrides, and that the N atoms of the carbon nitrides then thermally diffuse into the carbon lattice at high temperatures (>500 °C) 36. Note that performing N-doping during pyrolytic carbonization favors a high doping content, as will be discussed later. We then carried out the KOH-assisted activation reaction to introduce micropores into the CSs. The high-temperature KOH treatment of the CSs induces the reaction 6KOH + 2C → 2K + 3H2 + 2K2CO3, and at higher temperatures such as 700 °C the reaction K2CO3 + C → K2O + 2CO occurs, in which C is consumed to create micropores inside the carbon matrix 37. The XRD results obtained at various temperatures up to 700 °C indicate that K2CO3 is produced at 600 °C, and that at 700 °C K2O is produced while K2CO3 is reduced (see Figure S1). SEM images of PS spheres, N-doped CSs, and N-doped microporous CSs are shown in Fig. 1. The PS spheres are monodisperse (polydispersity within 5%) and their diameters are approximately 285 nm, as shown in Fig. 1a. The diameters of the N-doped CSs are approximately 250 nm, as shown in Fig. 1b, which corresponds to a shrinkage in size of approximately 10%. This size shrinkage is due to the pyrolytic carbonization of PS. The KOH activation maintains the monodispersity of the N-doped CSs, but the diameter is decreased further to 230 nm, as observed in Fig. 1c. This decrease in size occurs because the activation creates pores while etching the carbon matrix. The SEM (Fig. 1d) and DLS data (Fig. 1e) further confirm that the N-doped microporous CSs have a narrow size distribution with particle sizes around 300 nm. The elemental mapping of the N-doped microporous CSs was performed with TEM; a dark-field image and the C, N, and O mappings are shown in Fig. 2b,c and d, respectively. The N intensity is higher in the center of the sphere, which confirms that the N doping is not limited to the surface but is uniform over the sphere. The carbon microstructures of the CSs were characterized by recording their Raman spectra. The spectra of N-doped CSs and N-doped microporous CSs contain peaks centered near 1350 cm−1 and 1592 cm−1, which are the D and G bands, respectively, as shown in Fig. 2e. The D band arises from the vibrations of carbon atoms with dangling bonds in plane terminations and is thus related to the defects of graphitic carbon, whereas the G band arises from the vibrations of sp2 carbon atoms in the graphitic layer 38. In contrast to the results for bare CSs, the N-doped CS and N-doped microporous CS spectra contain additional shoulders near 1180 cm−1 and 1500 cm−1, which are assigned to sp3-hybridized carbon 39. The presence of these peaks implies that N doping and activation result in the creation of a defective graphitic layer. To quantitatively evaluate the defects, the ratios of the peak intensities of the D and G bands (ID/IG) were compared. The ID/IG ratios of CSs, N-doped CSs, and N-doped microporous CSs are 0.9733, 1.0172, and 1.0413, respectively. An increase in the ID/IG ratio is typically due to an increase in the number of sp2 crystallite boundaries (i.e., a reduction in the sp2 cluster size) and/or an increase in sp3 hybridization, which indicates an increase in the proportion of defects 40,41. We have observed that nitrogen doping increases the microporosity, which is considered to be due to the defective sites (see Figure S2). Thus, N doping and pore generation are accompanied by lattice destruction and/or fragmentation. The graphitic crystallite domain sizes of the N-doped CSs and N-doped microporous CSs were determined by using the Tuinstra-Koenig relationship 42. The chemical compositions of bare CSs, N-doped CSs, and N-doped microporous CSs were characterized with XPS analysis. The XPS survey spectra and the atomic compositions are shown in Fig. 3a. The N contents of N-doped CSs and N-doped microporous CSs are approximately 16 at% and 10 at%, respectively. We optimized the N-doping content and found better electrochemical properties at a nitrogen content of 16 at%, as shown in Figure S2. Note that the N-doped microporous CSs exhibit a remarkably high N-doping content, although the N-doping content is reduced by the removal of unstable C-N bonds during the high-temperature activation 43. Many previous studies that have conducted N doping of various carbon materials such as graphene and carbon nanotubes have reported 4-6% N-doping (see Table S2). In addition, N-doping of CSs derived from phenolic resins has been reported at levels of 2-5%, with few results reaching a high doping level of around 7 at% (see Table S1). Previously, it has been reported that N doping is mediated by oxygenated groups or mostly occurs at defective sites or edges 44,45. PS-derived CSs contain a high oxygen content of 12.2 at%, as observed in Fig. 3a. The CSs have a high defect density and a small domain size (i.e., a high density of edges), as shown by the Raman analysis.
The fact that carbonization is accompanied by the generation of a large amount of oxygenated groups and defects explains the possibility of a high concentration of N-doping. The N-doping configurations of N-doped CSs and N-doped microporous CSs were further characterized by examining their high resolution N1s spectra, which can be deconvoluted into four peaks located in the regions 398.2-398.6 eV, 399.5-399.7 eV, 400.7-400.8 eV, and 402.5-402.6 eV, as shown in Fig. 3b and c. The first two peaks are attributed to pyridinic N (sp2 hybridized with two carbon atoms, N-6) and pyrrolic N (incorporated in a five-membered ring of carbon atoms, N-5) 46,47. The peak in the range of 400.7-400.8 eV is due to quaternary nitrogen (sp3 hybridized with three carbon atoms, N-Q), i.e. N atoms substituted for carbon atoms in a graphene layer. The peak at around 402.5-402.6 eV is attributed to oxidized N (N-X) 48,49. The relative proportions of these N configurations in the N-doped CSs and the N-doped microporous CSs are shown in Fig. 3d. The proportions of N-6 and N-5 in the N-doped CSs are larger than those of N-Q and N-X. The proportions of N-6 and N-5 are much lower in the N-doped microporous CSs than in the N-doped CSs. The large decrease in the levels of N-6 and N-5 during the activation step is probably due to their low binding energy, which means that they are likely to be etched during the high-temperature activation reaction 50.

The BET isotherm of the N-doped microporous CSs was measured to characterize their pore structures. The BET isotherm of bare CSs was also measured for comparison. The isotherm of the N-doped microporous CSs is type I, with a steep increase in adsorption at very low relative pressures followed by a plateau and no apparent hysteresis in the adsorption/desorption cycle, as shown in Fig. 4a; these results indicate the presence of abundant micropores and some mesopores. The specific surface areas and total micropore and mesopore volumes for all samples are presented in Table 1. The specific surface area and the total pore volume of the N-doped microporous CSs are 844.9 m2 g−1 and 0.3383 cm3 g−1, respectively. The BET surface area of the N-doped microporous CSs is approximately 40 times higher than that of bare CSs because of the presence of micropores. The pore size distributions obtained with the Barrett-Joyner-Halenda (BJH) method are also plotted in Fig. 4b: the N-doped microporous CSs contain highly monodisperse micropores with diameters of approximately 4 nm.

Electrochemical properties and supercapacitor applications. The electrochemical properties of the N-doped microporous CSs were confirmed by cyclic voltammetry (CV) measurements using a film in which the CSs were assembled (see Fig. 5a). The CV curve for the N-doped microporous CSs as well as those for bare CSs and N-doped CSs are shown in Fig. 5b. The cyclic voltammogram for the bare CSs has a rectangular shape with a hump near 0.4 V, which is the typical response of carbonaceous materials containing oxygenated groups; the hump corresponds to the reduction of quinone to hydroquinone in acid solution 51. Compared to the bare CSs, the N-doped CSs exhibit higher current densities over the entire potential range, which is due to the enhancement of the electric double layer (EDL) capacitance that results from N-doping. It has previously been reported that N-doping enhances the electrical conductivity of carbon and the EDL capacitance 52. We observe an increase in conductivity upon N doping, as reflected in the voltage drop of the charge/discharge measurements (see Fig. 5c). Meanwhile, compared to CSs, N-doped CSs show higher current densities at electrode potentials less than 0.6 V, as shown in Fig. 5b. The improvement of the current densities around 0.2 V has been reported to be a contribution of Faradaic redox capacitance by N-doping; N-doping, particularly in the N-5 and N-6 configurations, induces pseudocapacitance via the proton exchange reaction 53. As demonstrated in Fig. 3d, N-doped CSs contain relatively high proportions of N-6 and N-5 configurations. Further, when compared to the cyclic voltammogram of N-doped CSs, that of the N-doped microporous CSs has a much larger rectangular-like shape and also a more pronounced hump around 0.5 V. This result implies that the activation reaction enhances both the EDL capacitance and the pseudocapacitance. The activation step obviously improves the specific area of the CSs by creating micropores, thereby increasing the EDL capacitance. In addition, the activation step generates more C=O quinone groups, as observed in Figure S3, thereby improving the current at around 0.5 V corresponding to the reduction reaction of the quinone group. The galvanostatic charge/discharge curves of bare CSs, N-doped CSs, and N-doped microporous CSs are compared in Fig. 5c. The specific capacitance of each sample was obtained from its charge/discharge curve according to the equation C_S = (I × ∆t)/(∆V × m), where C_S is the specific capacitance, I is the discharge current, ∆t is the discharge time, ∆V is the voltage range, and m is the mass of the electrode material 54 (a short numerical illustration is given in the sketch below). The calculated specific capacitance of the N-doped microporous CSs is 363 F g−1 at 0.2 A g−1. Note that this specific capacitance is the highest level among previously reported N-doped carbon materials (see Tables S1 and S2). The specific capacitances of the bare CSs and N-doped CSs were calculated to be 30 F g−1 and 180 F g−1 at 0.2 A g−1, respectively. Thus, N-doping enhances the capacitance by a factor of approximately 6, and the subsequent micropore generation further enhances the capacitance by a factor of 2. Meanwhile, the voltage drop (or internal resistance drop) is decreased by N-doping and subsequent micropore generation; the voltage drops for bare CSs, N-doped CSs, and N-doped microporous CSs are 110 mV, 22 mV, and 12 mV, respectively, as shown in Fig. 5c. This decrease probably arises because activation increases the specific area of the CSs without any substantial decrease in the N-doping content. The specific capacitances were calculated at various current densities in the range 0.2 to 2 A g−1: there is a decrease in the capacitance as the current density increases, as shown in Fig. 5d. The N-doped microporous CSs exhibit a specific capacitance of 373 F g−1 at 0.2 A g−1 and a capacitance retention of 61% when the current density is increased 10-fold, whereas the capacitance retentions of the CSs and N-doped CSs are 23% and 56%, respectively. Note that the capacitance retention of the N-doped microporous CSs is comparable to that of the N-doped CSs and well above that of the bare CSs. It has often been observed that the presence of micropores, particularly those smaller than a few nm in size, impairs the retention at high current densities due to their limited ion transport kinetics 55,56. In our case, the activation reaction creates large micropores with sizes near 4 nm, and moreover the well-defined pore network within the highly monodisperse, sub-micrometer-size CS assembly could facilitate ion transport.
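As a numerical illustration of the specific capacitance formula above, the sketch below uses placeholder values for the current, discharge time, voltage window, and electrode mass rather than the measured data of Fig. 5c.

```python
# Minimal sketch of the gravimetric capacitance from a galvanostatic discharge step,
# Cs = (I * dt) / (dV * m). All input values are illustrative placeholders.

def specific_capacitance(current_a, discharge_time_s, voltage_window_v, mass_g):
    """Return the specific capacitance in F/g."""
    return (current_a * discharge_time_s) / (voltage_window_v * mass_g)

mass_g = 2.0e-3            # electrode material mass: 2 mg (hypothetical)
current_a = 0.2 * mass_g   # current corresponding to a 0.2 A/g current density
discharge_time_s = 1800.0  # discharge time (hypothetical)
voltage_window_v = 1.0     # discharge voltage range (hypothetical)

cs = specific_capacitance(current_a, discharge_time_s, voltage_window_v, mass_g)
print(f"Specific capacitance: {cs:.0f} F/g")
```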
Further, considering that capacitance retention is markedly lower in high-resistance electrodes, the high retention of the N-doped microporous CSs is attributed to their higher conductivity, because their N-doping content remains high even after the activation reaction, as is evident in the voltage drop analysis. The cycle stability of the N-doped microporous CSs was also assessed, as shown in Fig. 5e. The capacitance retention is excellent even after 10,000 cycles, i.e. the capacitance is still 98% of the initial capacitance, which confirms the high performance of the N-doped microporous CSs as a supercapacitor electrode.

Electrocatalytic applications. We tested the N-doped microporous CSs in an electrocatalysis application, i.e. as a counter electrode (CE) in a dye-sensitized solar cell (DSSC). The conventional DSSC is composed of a dye-sensitized TiO2 photoanode, a platinum (Pt) CE, and an electrolyte solution containing a redox couple (I−/I3− redox ion pairs). The CE of the DSSC plays the role of the electrocatalytic regeneration of the redox couple that is oxidized at the photoanode 57, i.e., the CE induces the electrocatalytic reduction of the oxidized redox ion I3− into I−. Pt has been widely used in CEs because of its high electrocatalytic activity in regeneration reactions and its high conductivity, but the high cost and scarcity of Pt limit its practical applications 58. As an alternative, carbon materials such as carbon nanotubes, graphene, and activated carbon are attractive because of their high conductivities, large surface areas, and high corrosion resistance towards redox ions as well as their low cost [59][60][61][62]. Here, the N-doped microporous CS-based CE was prepared by coating the CSs onto a transparent conductive substrate, as shown in Fig. 6a. The CE was assembled with the conventional TiO2 photoanode to fabricate the DSSC (see the inset picture of Fig. 6b). Here, disulfide (S2−/Sx2−) ions were used in the electrolyte instead of the conventional I−/I3− ions. The disulfide redox couple is a promising electrolyte for high-performance DSSCs because it possesses a higher redox potential than the I−/I3− redox couple, is non-corrosive toward dyes, and exhibits negligible visible-light absorption 63,64. The photocurrent density vs applied voltage curve for the DSSC containing an N-doped microporous CS CE is shown in Fig. 6b. The results for a DSSC with a conventional Pt CE are also shown for comparison. In Figure S4, the cyclic voltammograms of the disulfide electrolyte on the two counter electrodes were measured, and the results show a similar redox curve. The photovoltaic parameters of the DSSCs, including Jsc (short-circuit current density), Voc (open-circuit voltage), the fill factor (FF), and the overall conversion efficiency η, obtained as η = Jsc × Voc × FF/(1000 W m−2) (illustrated in the sketch below), are also listed in Table 2. The η value for the DSSC with an N-doped microporous CS CE was found to be 8.621%, whereas that of the DSSC with a conventional Pt CE was 6.884%. Thus, the CS electrode DSSC exhibits a 25% higher η than the Pt electrode DSSC, which is attributed to the higher Jsc and FF of the N-doped microporous CS DSSC. Many previous studies have reported that the η values of various carbon CEs for DSSCs are merely comparable to those of Pt CEs (see Table S3). Thus, the present result for the N-doped microporous CS CE demonstrates its superior electrocatalytic properties.
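As a numerical illustration of the efficiency expression η = Jsc × Voc × FF/(1000 W m−2), the sketch below uses placeholder Jsc, Voc, and FF values rather than the entries of Table 2.

```python
# Minimal sketch of the DSSC conversion efficiency, eta = Jsc * Voc * FF / Pin,
# with Pin = 100 mW/cm^2 (equivalent to 1000 W/m^2). Parameter values are placeholders.

def efficiency_percent(jsc_ma_cm2, voc_v, ff, pin_mw_cm2=100.0):
    """Return eta in percent from Jsc (mA/cm^2), Voc (V), and the fill factor (0-1)."""
    return jsc_ma_cm2 * voc_v * ff / pin_mw_cm2 * 100.0

# Hypothetical parameter sets for a CS-based and a Pt counter electrode.
print(f"CS CE: eta = {efficiency_percent(16.5, 0.72, 0.72):.2f} %")
print(f"Pt CE: eta = {efficiency_percent(14.8, 0.70, 0.66):.2f} %")
```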
The electrochemical impedance spectra (EIS) of the N-doped microporous CS electrode and Pt electrode DSSCs were recorded to characterize their charge transfer processes. Three overlapping semicircles are evident in the Nyquist plots, as shown in Fig. 6c; the semicircles in the high, middle, and low frequency regions correspond to the charge transfer resistance at the electrolyte/CE interface, i.e., the electrocatalytic reaction resistance (R CE), the resistance at the TiO2/dye/electrolyte interface (R ct), and the diffusion impedance of the redox electrolyte (R diff), respectively 65,66. To analyze the Nyquist plots, we used the equivalent circuit shown in the inset of Fig. 6c. Note in particular that the R CE of the CS CE is much smaller than that of the Pt CE (see Table S4). This result indicates that the electrochemical reaction kinetics of the disulfide redox ions on the N-doped microporous CS CE are much enhanced when compared to those on the Pt CE. This high electrocatalytic activity of the CS CE may be attributed to the highly defective graphitic character and the abundant oxygen/nitrogen functional groups of the N-doped microporous CSs 67. The functional groups near the carbon crystal edges are known to be the dominant catalytic active sites for the oxidized disulfide 68,69. Moreover, the hierarchical, well-defined pores of the CS-assembled film may facilitate ion transport, which improves the electrocatalytic reaction. A high R CE decreases the rate of dye regeneration and enhances the charge recombination reaction of the oxidized ions; thus, the higher Jsc of the N-doped microporous CS DSSC could be explained by its lower R CE 68. Further, the R ct of the N-doped microporous CS CE is 15% smaller than that of the Pt CE. We also measured the ohmic series resistance (R S) with a 4-point probe and found that the R S of the CS CE is slightly lower than that of the Pt CE, as shown in Table S5. This low resistance of the CS film is probably due to the high N-doping. The high FF of the N-doped microporous CS DSSC is thus explained by the low internal resistance of the CS film 68.

Discussion

The preparation of microporous, heteroatom-doped CSs is a synergistic strategy for improving their energy storage/conversion efficiency. The heteroatom doping ameliorates their performance in electrochemical and electrocatalytic reactions, and the introduction of micropores maximizes their specific areas and thereby the reaction throughput. Although porous CSs with heteroatom doping have been prepared from phenolic-resin-derived polymer spheres, they have typically been prepared for energy storage applications (i.e., supercapacitors), and CSs with high doping levels have rarely been synthesized. We obtained highly monodisperse N-doped microporous CSs by performing the carbonization of PS-based spheres and a subsequent activation reaction. The N-doping content of the N-doped microporous CSs is above 10% because the doping was performed simultaneously with the carbonization and the polystyrene-derived carbon has a highly defective microstructure. In applications as supercapacitor electrodes, these N-doped microporous CSs were found to provide a maximum capacitance of 373 F g−1 at a current density of 0.2 A g−1; they also exhibit high capacitance retention and excellent cycle performance. When used in the electrocatalytic electrodes of a DSSC, the N-doped microporous CSs were found to exhibit superior electrocatalytic behavior to a conventional Pt electrode. We believe that various polymer spheres synthesized by addition polymerization will be a platform for synthesizing nanocarbon materials with high electrochemical and electrocatalytic performance.

Synthesis of Polystyrene-Derived Carbon Spheres. Monodisperse polystyrene-based polymer spheres were synthesized by performing the emulsifier-free emulsion polymerization of styrene monomer (99.9%, Sigma-Aldrich) in the presence of methyl methacrylate (MMA, 99.9%, Sigma-Aldrich) monomer (approximately 5 wt% with respect to styrene). Styrene and MMA were vigorously mixed in water along with 5 wt% potassium persulfate initiator (Aldrich), and then 40 wt% divinylbenzene (Aldrich) was added. After overnight polymerization, the polymer colloids were washed with water several times and re-dispersed in deionized water. Subsequently, the polystyrene (PS) was further crosslinked via Friedel-Crafts alkylation.

Characterization. The morphologies were determined with a scanning electron microscope (Hitachi, S-4700) and a transmission electron microscope (Carl Zeiss, LIBRA 120, 80 kV). Energy dispersive spectroscopy (EDS) elemental mapping was performed with a transmission electron microscope (JEOL, JEM-2100F, 200 kV). X-ray photoelectron spectroscopy (XPS, ESCALAB 250 XPS) was performed for elemental analysis using an Al Kα X-ray source at a pressure of 1 × 10−10 torr. The Raman spectra were collected using a Horiba Jobin Yvon LabRAM HR equipped with an air-cooled Ar ion laser operated at 541 nm.

Electrochemical characterization. A three-electrode system was used to measure the electrochemical properties. To fabricate the CS working electrode, CSs were dispersed in a solution of Nafion (Sigma-Aldrich) and anhydrous ethanol (Sigma-Aldrich), and this solution was dropped onto a glassy carbon electrode. A platinum wire and an Ag/AgCl (3 M NaCl) electrode were used as the counter and reference electrodes, respectively.

Electrocatalytic characterization. The N-doped microporous CSs were tested by fabricating a counter electrode (CE) for use in a DSSC. The CE was prepared by coating the N-doped microporous CSs onto an FTO substrate. Briefly, CSs were dispersed in a PVdF/N-methyl-2-pyrrolidone solution, and the solution was cast onto an FTO substrate and bladed to create a film with uniform thickness. The thickness of the CS electrode film was approximately 10 μm. A conventional Pt CE was prepared by coating an FTO substrate with a 0.5 mM H2PtCl6 solution in anhydrous ethanol followed by heat treatment at 450 °C for 30 min. To fabricate the photoanode for the DSSC, a nanocrystalline TiO2 suspension (Dyesol, TiO2 Paste DSL 18NR-T) was screen printed, followed by heat treatment at 500 °C for 15 min. The TiO2 anode was sensitized by immersion in a dye solution (0.5 M ruthenium-535-bis-TBA ethanol solution, Solaronix, N719) for 18 h at room temperature. Finally, the electrolyte solution was injected into the gap in the CE and photoanode assembly. The electrolyte solution was prepared by dissolving polysulfide with tetramethylammonium sulfide in a mixture of acetonitrile (Aldrich) and valeronitrile (Aldrich) in an 8.5:1.5 (v/v) ratio. The polysulfide with tetramethylammonium sulfide was synthesized as reported elsewhere 70.
Heat Transfer on Micro and Nanostructured Rough Surfaces Synthesized by Plasma: The review summarizes recent experimental results of studying heat transfer on rough surfaces synthesized by plasma. The plasma-surface interaction leads to the stochastic clustering of the surface roughness with a high specific area, breaking the symmetry of the virgin surface of the initial crystalline materials. Such a surface is qualitatively different from the ordinary Brownian surface. The micro- and nanostructured surface consists of pores, craters, and nanofibers of size from tens of nanometers to tens of microns, which can provide new heat transfer properties related to a violation of the symmetry of the initial materials. In recent years, new results have been obtained in the study of heat transfer during phase change on plasma-modified surfaces in relation to energy, chemical, and cryogenic technologies. The objective of the review is to describe the specific structure of refractory metals after high-temperature plasma irradiation and the potential application of plasma processing of materials in order to create heat exchange surfaces that provide a significant intensification of two-phase heat transfer. Refractory metals with such a highly porous rough surface can be used as plasma-facing components for operation under extreme heat and plasma loads in thermonuclear and nuclear reactors, as catalysts for hydrogen production, as well as in biotechnology and biomedical applications.

Introduction

In fusion plasma devices, high-temperature plasma produces intense erosion of plasma-facing materials, evaporation, redeposition of eroded materials, and a surface structure reformation breaking the symmetry of the virgin surface of the initial crystalline materials. Several such multiscale effects lead to a specific surface clustering. As a result of agglomeration under extremely high thermal loads and the collective effects of plasma and material flows, a unique stochastic topography and hierarchy of material granularity (self-similarity) are formed on the surface at scales from nanometers to millimeters [1][2][3]. The problem relates to the growth of materials with complex structures, which are neither crystals nor amorphous bodies in the classical sense. The topology of such surfaces is strictly different from any other clustering of materials produced in non-plasma devices or from solidification observed earlier. The role of nanoscales in such processes under plasma influence is important in the dendritic growth of various structures and in the aggregation-based growth of branched structures or hierarchical granularity of fractal topology. The shape and hierarchical structure of such a surface can be classified within the framework of fractal geometry. Such structures, called fractals, are known in nature (for example, the structure of trees and corals). Such materials with unique roughness and porosity are attractive for use in modern cooling and thermal stabilization components in devices at high thermal loads of 1-10 MW/m2. The use of materials with a rough surface makes it possible to reduce the heat exchange area in order to increase the efficiency of equipment in energy and chemical technologies and in electronic components.
Recently, new results have been obtained using modern technologies for surface modification in order to improve the characteristics of heat transfer during phase transitions, for which the change in the characteristic properties of the surface is paramount: the structure of the roughness, the porosity, and the wettability. Currently, there is a competitive selection of technologies that allow obtaining the maximum intensification of heat transfer, and the possibilities of surface modification by plasma treatment are of practical interest. Plasma irradiation of the surface allows changing the entire set of influencing parameters. The motivation for the review is the need to analyze the latest research results on two-phase heat transfer on rough surfaces modified by plasma, to compare the results obtained, and to emphasize the choice of a method of surface modification by plasma that has the potential for practical application. In this review, we focus on the above-mentioned effects of rough surfaces synthesized by plasma related to a violation of the symmetry of the initial materials, especially on nanostructured, highly porous and fuzz-like surface growth under high-temperature plasma irradiation. We review the recent experimental results of studying heat transfer on rough surfaces synthesized by plasma. The comprehensive focus is on the findings of a significant intensification of two-phase heat transfer on such materials. In the latter part, the pool boiling heat transfer enhancement on surfaces treated by plasma is discussed.

The Plasma-Surface Interaction

High-temperature plasma in thermonuclear fusion devices with magnetic plasma confinement (tokamaks, linear devices, and others) has complex nonlinear properties with self-organization [4]. The properties of magnetized plasma in fusion devices differ from the properties of low-temperature plasma. Strong plasma turbulence in such devices causes degradation of the magnetic plasma confinement and leads to enhanced plasma transport across the confining magnetic field [5,6]. As a result of this process, hot plasma enters the plasma-facing walls of the device chamber. Such plasma fluxes, with powerful heat loads of the order of 0.1 to 10 MW/m2 on average (and pulse loads of up to 1-2 GW/m2) [7][8][9], lead to strong erosion and degradation of the material surfaces of the wall facing the plasma. The eroded material enters the near-wall plasma (edge plasma), affecting the plasma-surface interaction. Thus, the plasma-surface system acquires the properties of a system near criticality with the properties of self-organization. The properties of near-wall plasma and turbulence in fusion devices have been studied in detail (see recent reviews [5,6] and references therein). In the edge plasma, instability of the drift-dissipative type leads to strong fluctuations in plasma density and electric fields. In the near-wall plasma, charged particles (ions and electrons) move in turbulent electric fields generated by drift-dissipative (electrostatic) turbulence. The amplitude of the electric field is typically from ~1 to ~50 V/cm, and the fluctuation frequency range is from ~1 to ~1000 kHz. The wavelength is from ~1 to ~50 mm [5,6]. Under such conditions, charged particles move in turbulent eddies with a velocity from ~0.1 to ~1 km/s. In fusion devices, near-wall plasma turbulence is characterized by superdiffusion, intermittency, and non-Gaussian statistics [10][11][12][13][14].
A feature of high-temperature plasma in fusion devices is a distribution function of turbulent pulsations with so-called non-Gaussian statistics, which characterizes intermittency. Numerous experimental measurements have demonstrated that fluctuations in electric field and density in the near-wall plasma of fusion devices have non-Gaussian statistics. This property leads to the formation of flight trajectories of ions and electrons during diffusion: the trajectories of plasma particles in turbulent electric fields are not Brownian motion (classical diffusion) but stochastic Levy-type motion with a predominant contribution of flight trajectories. When such flows are deposited on the material surface, conditions arise for the growth of inhomogeneous structures on the surface facing the plasma. As a result of surface agglomeration under extremely high thermal loads and the collective effects, a unique stochastic topography and hierarchy of material granularity (self-similarity) on scales from nanometers are formed on the plasma-facing surface.

Stochastic Clustering of the Surface Roughness

The material irradiated with high-temperature plasma in fusion devices has an inhomogeneous structure deviating from the trivial stochastic granularity (of the Brownian surface), Figure 1. It exhibits high porosity and a high specific area. Stochastic clustering of the surface under the action of random forces generated by the near-wall plasma leads to the growth of materials with a complex structure that are neither crystals nor amorphous solids considered by classical solid-state theory. The granularity of materials irradiated with plasma is observed from nanoscales to macroscales ([1][2][3]; see Figures 1 and 2). It is known that nanostructures (for example, nanocrystallites), due to their mobility and adaptability in a disordered solid, provide scale invariance of the distribution of stress fields at the microscopic and mesoscopic levels, which leads to the scale invariance (hierarchical self-similarity) of the structure [1][2][3]. The shape and hierarchical structure of such structures can be classified within the framework of fractal geometry. Such structures, called fractals, are known in nature (for example, the structure of trees, corals, etc.). The growth of such structures is regulated by the universal instability of the growth of interface layers (see, e.g., [15,16]). The growth of the stochastic structure of materials during deposition from the volume to the surface, or the interface dynamics, is widely reviewed in the literature (see, e.g., [3]). In vapor deposition, molecular beam epitaxy, etc. [16], fractal surface growth is observed in the deposition process, where the agglomerated particle dynamics on large spatio-temporal scales are regulated by several driving and damping growth mechanisms (elementary processes). In nuclear fusion devices, the material surfaces are modified under a high-temperature plasma load. The problem of fractal growth is treated by nonlinear equations (e.g., the Kardar-Parisi-Zhang equation [17,18]) describing the effects of competing elementary processes. In order to describe irregular structures observed in solids and agglomerates of various scales, kinetic models based on the Smoluchowski kinetic equation [19,20] are used. The theoretical treatment (based on the Smoluchowski equation, etc.) has shown that the scale invariance is influenced by the statistics of the agglomerating particle dynamics. In order to describe the stochastic aggregation process, the standard theoretical model based on the Smoluchowski kinetic equation is used (see [19,20]), considering the interaction of two particles (or clusters) with masses m1 and m2 forming a new particle (cluster) with mass m = m1 + m2. It is considered that large particles (clusters) do not decay. The equation for the concentration N(m, t) takes the standard Smoluchowski form with a source and a sink (see [20]):

∂N(m, t)/∂t = (Λ/2) ∫0^m K(m1, m − m1) N(m1, t) N(m − m1, t) dm1 − Λ N(m, t) ∫0^∞ K(m, m1) N(m1, t) dm1 + J0 δ(m − m0) − J δ(m − M). (1)

In (1), the last two terms are the source (incoming particles of mass m0 with the flux J0) and the sink (removal of particles of mass M with a flux J). The kernel K(m1, m2) and the factor Λ regulate the rate of interaction of the clusters (particles). In the literature, homogeneous kernels with self-similarity characteristics of the form

K(m1, m2) ∝ m1^μ m2^ν + m1^ν m2^μ (2)

are considered (see [20]). The indexes μ and ν are related to the Hurst exponents found in experiments. Typical Hurst exponents are in the range of 0.55-0.9. The redistribution of mass between clusters during the agglomeration (the sticking/decay of clusters of different sizes) is analogous to the energy transfer in the turbulence cascade of fluid flow. A formal analogy between the equation for the nonlinear fragmentation-aggregation process and the kinetic equation describing 3-wave turbulence is discussed (see, e.g., [20]), resulting in the power-law spectrum considered in the Kolmogorov-Zakharov approach [21][22][23]. The kinetic equation with the kernel (2) can be treated by using the theory of A.N. Kolmogorov [5,21] to describe the distribution of clusters over the scales observed in experiments. In order to simplify the problem, it is necessary to use experimental data on the self-similarity scaling of the stochastic surface relief. It is important to use the scaling exponents and the fractal dimensions observed in experiments. The formation of cluster fractality is associated with a universal cascade mechanism for the formation of fracture centers at a high degree of nonequilibrium (high density of absorbed energy) in the system, when acoustic unloading of the irradiated object does not provide relaxation.

Cauliflower-like Surfaces

After irradiation with high-temperature plasma in fusion devices, the materials acquire a stochastic surface structure. Such experiments were carried out on tokamaks [1][2][3][26][27][28][29][30][31][32][33], the QSPA powerful plasma accelerator facility [2,3,34], and linear plasma devices used for testing and treatment of refractory materials [35][36][37][38]. Experiments in such devices provide a powerful plasma-thermal load on the material components of the wall facing the plasma. Plasma energy loading on the wall material in fusion devices ranges from 0.1 to 10 MW/m2 in quasi-stationary discharges. In tokamak discharges with instabilities (ELMs of 0.1-1 ms duration), such a load on the material can reach 1 GW/m2 or more. Such a load is inhomogeneous in space and unstable in time. Instabilities of the near-surface plasma can lead to localized pulsed jets and bursts, which lead to pulsed local overheating of the material on a scale from 1 micrometer to several centimeters. Under the influence of such loads, the materials of the wall components facing the plasma are eroded and melted, leading to the formation of an irregular stochastic surface.
Inhomogeneous stochastic clustering of the surface was found for materials with different chemical compositions and initial crystal structures (tungsten, molybdenum, titanium, carbon materials, stainless steel, and other metals) after the powerful high-temperature plasma loads in fusion devices. Clustering of materials irradiated with high-temperature plasma differs qualitatively from the trivial roughness of the Brownian surface and clustering under other conditions, which is shown by the comparative analyses with molybdenum irradiated with magnetron plasma and a steel casting surface with a typical trivial roughness formed during solidification after melting [2,3]. The difference in surface clustering under plasma irradiation in a fusion device occurs due to the movement of the material during surface clustering under the influence of stochastic electromagnetic fields formed by near-surface plasma [5,10,14], which provide long-range correlations and conditions for the growth of agglomerates with a self-similar structure [2,3]. The multiple effects of plasma and surface growth instabilities lead to the mechanism of fractal growth on scales from several tens of nanometers to hundreds of micrometers (see [1][2][3]), the dominant factor of which is not the physical and chemical characteristics of the virgin materials, but the collective effects of stochastic clustering. For tungsten irradiated with several plasma pulses in the QSPA plasma facility of a plasma beam with a diameter of ~10 cm, a structure with the self-similarity of the granularity structure is growing ( Figure 1). The stochastic topography of this surface with a different granularity of a unique hierarchical granularity of the "cauliflower" type is observed over the scales ranging from nanometers to micrometers ( Figure 1). X-ray and metallographic studies of tungsten samples [3] have shown that the stochastic structure of tungsten with a dendrite structure, significantly different from the structure of virgin polycrystalline tungsten, is formed in a surface layer with a thickness from ~100 to ~400 microns. When titanium is irradiated with plasma streams, a nanostructured surface with a hierarchical structure is formed in the PLM installation, Figure 2, which demonstrates the structures in the range from ~100 nm to ~10 microns. Chemical analysis of the elemental composition of the surface of this sample revealed titanium and nitrogen (which joined the surface upon contact with the atmosphere during the period after extraction from the PLM) on the surface of the samples. The surfaces with cauliflower-like nano-and microstructures were found on other materials-Carbon [31,32], beryllium [1], molybdenum, and lithium [39] after plasma irradiation in fusion devices. Such experimental observations indicate a universal mechanism of growth of a nanostructured surface of the "cauliflower" type. The example of the relief shown in Figure 3 demonstrates a profile with a change in heights in the range from ~100 nm to ~5 microns. In order to characterize such a stochastic surface profile, a probability distribution function (PDF) is used, constructed in the form of a histogram of height values (see example in Figure 4). The PDFs of the relief heights ( Figure 4) of materials irradiated with plasma in fusion devices have "heavy" tails typically and are not described by the Gaussian (normal) law. 
Such PDFs significantly deviate from the Gaussian law and cannot be fitted by other known laws of probability theory, e.g., the Cauchy-Lorentz law (see the analysis in [3]). The statistical scale invariance (self-similarity) of the surface topology is described by the scaling of the structure function, the multifractal spectra, and the Hurst exponent [1,41]. The Hurst exponents for the stochastic surface reliefs of titanium, tungsten, lithium, carbon, and beryllium are from 0.55 to 0.9 [1]. Such values of H > 0.5 indicate persistent behavior (a trend). The fractal dimension df of the surface is related to the Hurst exponent H as H = 3 − df, see [3]. This corresponds to irregular stochastic clustering with hierarchical granularity (fractality), e.g., the cauliflower-like shape of the surface structure (see above). Hurst exponents from 0.55 to 0.9 correspond to a surface fractal dimension df = 2.1-2.45. The property of statistical inhomogeneity is characterized by multifractality indices [41] as well. The multifractality index for the relief profiles of the samples from fusion devices is in the range of 0.5-1.2, illustrating the deviation of their structural complexity from the trivial stochasticity of a Brownian surface (for a Brownian surface, this index is equal to 0). Quantitative characteristics of the statistical heterogeneity of the structure of materials from fusion devices, including the multifractality, are typical of multifractal objects and processes in nature (see [42]). The growth of fuzz in fusion devices was found on tungsten, molybdenum, titanium, noble metals, and other refractory metals [36]. An example of fuzz on titanium is shown in Figure 7. The structure is similar to the fuzz layers on tungsten irradiated in the same device. The universality of the "fuzz" growth mechanism is explained by the model of [56], which considers the evolution of excited adatoms over a surface under a high-energy helium flux. In support of this theory, the fiber density of the "fuzz" layer was recently observed to depend on the plasma load intensity: both a dense "fuzz"-type structure and a sparse "fuzz"-type structure were observed, see Figure 6 [47].

High-Porous Micro-Structured Surface
The structures described above are formed under special conditions of plasma load during long plasma exposure. Under the action of additional factors of thermal or beam loading, highly porous surface structures can form, see the example in Figure 8, with pore diameters and pore depths from ~100 nm to ~10 microns. Such highly porous structures can be formed by combined sequential treatment: beam loading of the material followed by irradiation with high-temperature plasma in a plasma device. Such combined processing of refractory materials leads to the formation of a highly porous surface with pores of micro- and nanometer diameter. Figure 8 shows an example of such a material: VM-P ITER-grade tungsten was treated using a combination of 40 MW/m² electron beam thermal cycling and a subsequent stationary plasma load of up to 2 MW/m² in the PLM plasma device. These combined treatments led to erosion, melting, and cracking of the material under the electron beam and to the subsequent growth of a nanostructured fuzz layer on the surface under the plasma load. Post-mortem scanning electron microscopy and X-ray analysis have shown pores and irregular structures smaller than 100 nm on the stochastic nanostructured surface.
A fuzz-type structure of high porosity is observed. The loading conditions were as follows: electron beam loads near the melting point to obtain a corrugated surface, and subsequent plasma irradiation to grow fuzz layers on the corrugated surface. As a result, a highly porous surface with a unique nanostructure is formed. Another nanostructured surface formed under plasma load is the multi-cone type structure, see Figure 9. Steel samples irradiated with plasma in the PLM-M device demonstrate cone growth on the surface. During helium plasma irradiation, the steel surface temperature was ~450 °C. The plasma load in the stationary helium discharge in the PLM-M produced the growth of conical nanostructures with cone sizes in the range from 20 to 500 nanometers.

Advantages of Practical Applications
The relevance of the technology for the production of highly porous refractory materials is discussed in the literature in relation to the problem of catalysts [72][73][74][75][76][77][78]. The materials with a highly porous nanostructure described above, and the method of their preparation by plasma irradiation, have potential for practical application as catalysts in hydrogen production. Highly porous titanium surfaces are candidates for bone implants, improving implant fixation and being well adapted to human bone, see [79,80]. The practical application of fuzz-covered surfaces is associated with the change in their physical properties. Such materials can potentially be used in plasma and beam facilities as first-wall components interacting with plasma or beams. The surface area of nanostructured "fuzz" is 20-30 times larger than that of a flat surface [57][58][59]. From the point of view of the interaction of such a surface with the plasma flow, the sputtering rate decreases by about an order of magnitude [58][59][60], and the reflection of particles decreases, which leads to an increase in the power transfer coefficient [60]. In thermonuclear plasma devices, arc ignition on the nanostructure is expected in response to high-power plasma pulse loads such as edge-localized modes [68][69][70]. This process has both a disadvantage (enhanced erosion) and an advantage due to the possible vapor plasma shielding effect. The potential of porous and fuzzy nanostructured materials is also associated with a change in their electron emission properties. The field electron emission and the field enhancement factor increase [63], and the thermal conductivity decreases significantly [64]. Secondary electron emission decreases by about 50%. Such materials absorb almost all photons in the spectrum from ultraviolet to near infrared (>99%) [65], and the optical emissivity is increased [66,67].

Heat Transfer on Surfaces Synthesized by Plasma
Currently, the development of many technologies depends on the possibility of stable cooling and thermal stabilization of equipment components at high (up to 10 MW/m²) and ultra-high (more than 10 MW/m²) heat loads. This is due both to the high heat fluxes accompanying the processes (nuclear fusion devices, rocket nozzles, laser mirrors, etc.) and to the need to reduce the heat exchange area in order to increase the efficiency of equipment used in the energy and chemical technologies and in the most rapidly developing technology of electronic components.
Indeed, hundreds of laboratories around the world are engaged in the development of improved cooling methods, and this issue has been among the most relevant in the heat transfer community over the past decade. Numerous new results have been obtained, primarily using modern surface modification technologies, aimed at improving heat transfer characteristics. Since the removal of high and ultra-high heat fluxes at reasonable parameters is possible only with phase changes, the works considered below were performed under conditions of boiling of heat carriers. A distinctive feature of two-phase heat transfer, compared with single-phase heat transfer, is that the improvement in heat transfer is not proportional to the coefficient of development (increase) of the surface area and that many factors and physicochemical properties influence the results obtained. As an example, Figure 10 shows data on the relative increase in the critical heat flux (CHF, the maximum heat flux density during nucleate boiling) as a function of the coefficient of surface development (increased specific area), obtained for pool boiling of saturated water in some of the most cited experiments on modified surfaces [81]. As can be seen, the maximum increase is explained not by an increase in the heat exchange surface area but precisely by the influence of changes in the characteristic properties of the surface. Currently, there is a competitive selection among technologies that provide the maximum intensification of heat transfer, and interest in the possibilities of surface modification by plasma treatment is of practical importance. It should be noted that the achievement of high CHF values is easiest when boiling a subcooled liquid on unmodified surfaces [87].

Methods for Enhancement of Boiling Heat Transfer
At the turn of this century, new technical possibilities appeared, primarily related to surface modification. The use of so-called nanofluids and nanomaterials, femtosecond laser exposure, and plasma and ion processing has made it possible to obtain a significant number of new results and has caused a surge in relevant research. Traditionally, most of the studies have been carried out under pool boiling conditions in order to establish the main factors influencing this process and to find its general patterns. The results are then transferred to flow boiling and to boiling in evaporation channels. The following methods are used (see [88]): (1) influence on internal mechanisms (increase in the number of evaporation centers, increase in the inflow of liquid into the microlayer evaporation zone, regulation of wettability, etc.); (2) increase/development of the heat exchange surface area; (3) suppression of the least efficient processes during boiling so as to ensure the removal of vapor from the wall (including boiling in a highly subcooled liquid, alternation of zones with different wettability, etc.). The methods listed above are often combined during implementation: for example, the heat exchange surface area can be increased several times while artificial vaporization centers are created and the inflow of liquid into the evaporation zone is increased. Heat transfer during boiling is accompanied by the nucleation of a bubble on the wall, followed by the growth of its volume due to the evaporation of the liquid in contact with the wall and by the entrainment of the vapor phase. The main mechanism in nucleate boiling is precisely evaporation in the microlayer near the contact zone with the wall.
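For orientation on the absolute scale of the CHF values whose relative enhancement is discussed above, the short sketch below uses Zuber's classical hydrodynamic correlation for saturated pool boiling on a plain surface. The correlation and the water property values are standard textbook material, not data from the cited experiments; the enhancement factors discussed in the following paragraphs are reported relative to unmodified-surface baselines of roughly this magnitude.

```python
# Zuber's hydrodynamic estimate of CHF for saturated pool boiling on a flat surface:
#   q_CHF = C * h_fg * rho_v * [sigma * g * (rho_l - rho_v) / rho_v**2] ** 0.25,  C ~ 0.131
# Approximate properties of saturated water at 1 atm (textbook values).
h_fg = 2.257e6   # latent heat of vaporization, J/kg
rho_l = 958.0    # liquid density, kg/m^3
rho_v = 0.598    # vapor density, kg/m^3
sigma = 0.0589   # surface tension, N/m
g = 9.81         # gravitational acceleration, m/s^2

q_chf = 0.131 * h_fg * rho_v * (sigma * g * (rho_l - rho_v) / rho_v**2) ** 0.25
print(f"Zuber CHF estimate, water at 1 atm: {q_chf / 1e6:.2f} MW/m^2")  # roughly 1.1 MW/m^2
```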
Over the past two decades, numerous studies have been carried out to find improved conditions for the enhancement of boiling heat transfer on modified surfaces. The number of published articles on this topic is measured in the thousands, and it is impossible to provide even a brief overview of this research here. Several dozen (!) detailed reviews of the problem have already been published, see, e.g., [66][67][68][69][70][71], and the appearance of a "review of reviews" is no longer a fantasy. A large number of results have been obtained in laboratory experiments on surfaces approximately 1 cm by 1 cm in size, demonstrating a significant intensification of heat transfer and CHF during boiling, as a rule, of water in a large volume at atmospheric pressure. The most advanced and high-tech surface modification technologies (femtosecond laser ablation, microelectronics technologies, etc.) are used. With the use of surface modification, intensification of heat transfer and CHF of up to 4-5 times in comparison with the unmodified surface has been achieved. There are far fewer known studies devoted to the intensification of CHF during boiling in round tubes with a modified surface. The problems of thermal stabilization of current and future microelectronic technology impose restrictions on the flow parameters and the type of coolant; here, the required CHF can be achieved only by intensifying two-phase transfer processes. The surface modification methods and materials used are systematized in [89]. They are as follows: electric discharge machining (EDM), mechanical machining, wire EDM, laser machining, end milling, rolling, polishing, selective laser melting, sintering, orthogonal ploughing/extrusion, wire cutting, anodization, photolithography, dry etching, and chemical etching were used as methods to fabricate microgrooves, pin-fin arrays, and tunnel-structured surfaces. The main conclusions from numerous works and reviews [88][89][90][91][92][93][94] regarding the conditions of surface modification under which the greatest intensification of heat transfer and CHF has been achieved are the following: the existence of a multiscale structure of the modified surface, combining nano-, micro-, and meso-"roughness"; the porosity of the structure, providing the action of capillary forces; and the application of modification technologies that provide zones with contrasting physical and chemical properties (wettability, thermal conductivity). It is obvious that all these conditions can be realized using the plasma exposure technologies presented above. Due to the large number of influencing factors, it is very difficult to predict the surface structure that gives the maximum effect, and experimental verification of the intensification of heat transfer and CHF is necessary. In addition, for a number of highly porous surfaces, an improvement in heat transfer was noted due to the retention of vapor and gas in the pores, but a deterioration of the CHF was observed due to the rapid "steaming" of the surface.

Modification of the Heat Transfer Surface by Plasma
Plasma modification of surfaces is proposed in a large number of technologies. Currently, technologies using low-temperature plasma have been developed for various applications (disinfection, nanotechnology, food production, improvement of the properties of metals, plastics, membranes, etc.) [95][96][97][98][99][100]. The low-temperature plasma technique is used for material processing and modification at temperatures below 200 °C [96].
Low-temperature plasma can modify the properties of the material surface while maintaining the material structure, which is especially important for temperature-sensitive materials. The working gases Ar, N2, O2, H2, and air are used in low-temperature plasma facilities; no harmful substances such as acids, alkalis, or organic solvents are used. Moreover, the effect on the material surface properties can be efficiently tuned by the plasma irradiation conditions, such as power, treatment duration, and working gas. For the problem of heat transfer intensification, only a few examples of the use of low-temperature plasma are known [100][101][102][103][104]. The use of low-temperature plasma to modify a heat exchange surface makes it possible to change the physicochemical properties of the surface, primarily the wettability, which has a significant effect on the intensification of boiling and condensation; the change in the physicochemical properties of the surface is due to oxidation. Coating with an atmospheric-pressure plasma was used to modify the surface of a copper heating block [105]. Gaseous nitrogen flows between the upper grounded electrode and the lower electrode, which is connected to a high-voltage power supply and covered with a dielectric layer. The plasma spray, with a temperature of 25-30 °C, is formed and blown into the volume; thus, thermal damage to the treated materials is avoided. The static contact angle of the modified surface is 18°, a reduction of around 60° compared to that of the untreated original copper surface (80°). The thickness of the oxide layer on the copper surface reaches approximately 3 µm at a plasma treatment time of 400 s. The CHF value improves by 18% after plasma treatment of the heating surface; the reason for the increase in CHF is related to the change in the wettability of the heating surface after plasma treatment. In [101], the intensification of dropwise condensation was studied. In order to enhance condensation in the dropwise mode, thin coatings (<100 nm) with low surface energy and small contact angle hysteresis are used. Ultra-thin (<5 nm) silane self-assembled monolayers (SAMs) have been studied to reveal their effect on dropwise condensation, owing to their minimal thermal resistance. Such thin coatings decompose within an hour when water vapor condenses, and after the destruction of the coating, the condensation of water vapor passes into an inefficient film regime with low heat transfer. In [101], the quality and durability of a silane SAM under conditions of water vapor condensation on a copper surface were improved, in comparison with silane coatings on metal surfaces, by using oxygen plasma treatment. The resulting silane SAM has a low contact angle hysteresis (≈20°), which ensures efficient dropwise condensation of water for >360 h with no destruction or degradation of the coating. Moreover, over a long period of time, an increase in heat transfer by 5-7 times compared to film condensation was demonstrated. After condensation, the silane SAM is hypothesized to decompose due to the reduction and subsequent dissolution of the copper oxide at the oligomer-substrate interface. SiO2-like hydrophilic and polymer-like hydrophobic SiOxCyHz films [103] were deposited by the plasma method from the vapor phase onto copper surfaces at atmospheric pressure in the presence/absence of O2. Pool boiling experiments were performed on the treated surfaces under atmospheric saturation conditions.
The effects of the surface modifications on the CHF, the onset of nucleate boiling (ONB), and the heat transfer coefficient (HTC) were investigated. It was found that hydrophilic films deposited on the surfaces lead to a significant increase in CHF and HTC compared to untreated surfaces. A reduction of up to 53% in the ONB was registered for hydrophobic films deposited on copper surfaces. Summarizing the experiments, the CHF values of the bare, hydrophilic, and hydrophobic copper surfaces are 1539.9, 1798.8, and 593.1 kW/m², respectively. The results show that for hydrophilic surfaces the CHF threshold is increased by 16.8%, while for hydrophobic surfaces the CHF is reduced by about 62% compared to the bare surface. Changes in surface morphology and contact angles due to plasma treatment led to corresponding changes in bubble ejection and in CHF. The hydrophilic coating delays the CHF due to small contact angles, which is an advantage of the hydrophilic coating. High-temperature plasma is used for surface modification much less frequently than low-temperature plasma, and for heat transfer under phase-transition conditions only a few works are known. Heat transfer during pool boiling on capillary-porous coatings was studied for water and liquid nitrogen as heat carriers at atmospheric pressure [103]. Unique capillary-porous coatings of various thicknesses (400-1390 µm) with high porosity (up to 60%) were obtained by plasma spraying. At low heat fluxes, the capillary-porous coatings cause a significant increase in heat transfer: up to 4 times when boiling liquid nitrogen and up to 3.5 times when boiling water. High-speed video showed that the mechanisms of heat transfer enhancement differ significantly depending on the properties of the liquid and the morphology of the coatings. The study of a capillary-porous coating obtained by plasma spraying in [104] revealed the effect of structured coatings on cryogenic quenching by a falling liquid nitrogen film. Experiments on cryogenic quenching were carried out on a vertical copper plate with a bare surface and on surfaces with different orientations of the coating protrusions. The dynamics of the quenching front and the heat transfer in the transient process determine the quenching behavior. The heat transfer characteristics during quenching were measured for the various surfaces, and experimental cooling thermograms and visualizations were analyzed. The results have shown that the thermal properties of the coating and the geometry of the protrusions on the solid surface affect the cooling rate. The capillary-porous coating significantly affects the quenching dynamics, reducing the total quenching time by more than three times. As a result, the structured capillary-porous coating provides a reduction in the total mass of cryogenic liquid and the time required for the quenching process. Modification by high-temperature plasma makes it possible to obtain unique surface structures (see Section 1), which opens new opportunities for significant intensification of two-phase heat transfer. This was demonstrated recently in experiments with a highly porous surface modified by high-temperature plasma in the PLM plasma device [81,106]. The study of pool boiling on the surface treated by PLM plasma resulted in new achievements in HTC. In these experiments, an iron (stainless steel) sample was processed in PLM by exposing the surface to helium plasma for 3 h at a surface temperature of 850 °C and a thermal load of 180 kW/m².
The modified surface has an almost regular structure of cones with an angle of 20°, a height of up to 12 µm, and a base diameter of 3 µm (Figure 11a). The surface of the cones has a porous structure (Figure 11b), and a highly porous structure with a characteristic pore size of 100 nm is present at the bases of the cones. For this surface, an increase in the HTC of up to 40% was obtained compared to the unmodified surface; the comparison results are shown in Figure 12 [81]. A slight decrease in the CHF was registered for this surface, which is typical for materials with a highly porous structure.

Discussion and Advantages of Practical Applications
Plasma treatment of materials is an appropriate method for producing a rough surface with a high specific area. By controlling the plasma load on the material, which provides erosion, redeposition, and resolidification after surface melting, it is possible to produce surfaces with different roughness topologies. In studies of the intensification of two-phase heat transfer, until recently, mainly low-temperature plasma was used to obtain a surface with a rough and developed structure; obtaining sufficient quantities of materials treated with hot plasma in thermonuclear devices was a limiting factor. Studies of recently obtained samples treated with hot plasma in thermonuclear devices have demonstrated their effectiveness as surfaces for heat transfer intensification. Currently, materials with a rough surface treated by high-temperature plasma can be produced in sufficient quantities in the recently constructed linear plasma devices [35][36][37][38],[59]. Surface treatment with low-temperature plasma changes the physical and chemical properties mainly of a thin surface layer of several tens of nanometers. In contrast, irradiation with hot plasma makes it possible to obtain a developed rough surface relief to a depth of up to hundreds of microns or more. An advantage of such a relief is also the presence of peaks of various scales on the surface and the self-similarity of the topology, which presumably has a positive effect on heat exchange. A change in the contact angle of the surface provides an increase of up to seven times in the heat transfer coefficient during condensation [101], which, in terms of the degree of influence on the process, exceeds the known results obtained using reliefs produced by non-plasma methods. In this case, unlike many other methods, long-term preservation of the working properties of the surface is ensured according to the data of [101]. Plasma exposure for surface modification is possible in facilities such as plasma torches [103,104,107] and in special plasma facilities, for example [81,105]. The advantages of the practical application of plasma-treated surfaces [81,103,104,107], in comparison with surface modification by other methods (see reviews [88][89][90][91][92][93][94][108][109][110]), are the high rates of intensification of heat transfer and CHF achieved (up to four times) in comparison with the unmodified surface. Surface treatment by high-temperature plasma makes it possible to form a surface structure that is resistant to degradation, especially when using refractory metals in special heat exchange devices. In order to choose a specific surface modification technology for practical use, the following issues should be taken into account: the feasibility of implementing the technology, the surface material, and the operating temperature range.
Conclusions
The use of low-temperature plasma exposure makes it possible to form stable hydrophobic coatings, which opens up great opportunities for developing highly efficient condensers. The use of high-temperature plasma makes it possible to obtain unique rough surfaces that are potentially in demand for the intensification of heat transfer during phase changes. Experiments in fusion plasma devices provide a powerful plasma-thermal load of 0.1 to 10 MW/m² on the plasma-facing materials. The treatment of metals with high-temperature plasma in fusion devices leads to the growth of nanostructured surfaces, breaking the symmetry of the virgin surface of the initial crystalline material. Cauliflower-like surfaces, "fuzz"-type structures, and highly porous surfaces grow under plasma load in fusion devices. The specific features of such surfaces, such as the 20-50 nanometer fibers of the fuzz structure and the fractal structure of cauliflower-like surfaces, are unique and cannot be grown under conditions other than high-temperature plasma treatment. Such nanostructured surfaces grow on tungsten, molybdenum, titanium, steel, and other metals. Refractory metals with a highly porous rough surface after high-temperature plasma treatment can be used as components for operation under extreme thermal and plasma loads in thermonuclear and nuclear reactors, as catalysts for hydrogen production, as well as in biotechnology and biomedical applications. The advantages of the practical application of plasma-treated surfaces are the high rates of intensification of heat transfer and CHF achieved (up to four times) in comparison with the unmodified surface. Surface treatment by high-temperature plasma makes it possible to form a surface structure that is resistant to degradation, especially when using refractory metals in special heat exchange devices. In order to choose a specific surface modification technology for practical use, the following issues should be taken into account: the feasibility of implementing the technology, the surface material, and the operating temperature range. Further studies of heat transfer on surfaces produced by high-temperature plasma are needed to achieve significant intensification of two-phase heat transfer.

Conflicts of Interest: The authors declare no conflict of interest.
Combined Effects of Genetic Variants of the PTEN, AKT1, MDM2 and p53 Genes on the Risk of Nasopharyngeal Carcinoma

Phosphatase and tensin homolog (PTEN), v-akt murine thymoma viral oncogene homolog 1 (AKT1), mouse double minute 2 (MDM2) and p53 play important roles in the development of cancer. We examined whether single nucleotide polymorphisms (SNPs) in the PTEN, AKT1, MDM2 and p53 genes were related to the risk and severity of nasopharyngeal carcinoma (NPC) in the Chinese population. Seven SNPs [p53 rs1042522, PTEN rs11202592, AKT1 SNP1-5 (rs3803300, rs1130214, rs3730358, rs1130233 and rs2494732)] were genotyped in 593 NPC cases and 480 controls by PCR direct sequencing or PCR-RFLP analysis. Multivariate logistic regression analysis was used to calculate adjusted odds ratios (ORs) and 95% confidence intervals (CIs). None of the polymorphisms alone was associated with the risk or severity of NPC. However, haplotype analyses indicated that a two-SNP core haplotype (SNP4-5, AA) in AKT1 was associated with a significantly increased susceptibility to NPC (adjusted OR = 3.87, 95% CI = 1.96–7.65; P < 0.001). Furthermore, there was a significantly increased risk of NPC associated with the combined risk genotypes (i.e., p53 rs1042522 Arg/Pro + Pro/Pro, MDM2 rs2279244 G/T + G/G, PTEN rs11202592 C/C, AKT1 rs1130233 A/A). Compared with the low-risk group (0–2 combined risk genotypes), the high-risk group (3–4 combined risk genotypes) was associated with a significantly increased susceptibility to NPC (adjusted OR = 1.67, 95% CI = 1.12–2.50; P = 0.012). Our results suggest that genetic variants in the PTEN, AKT1, MDM2 and p53 tumor suppressor-oncoprotein network may play roles in mediating the susceptibility to NPC in Chinese populations.

Introduction
Nasopharyngeal carcinoma (NPC) is a rare malignancy in most parts of the world, but it occurs at relatively high rates in some geographic regions and among certain ethnic groups. According to global cancer statistics from the International Agency for Research on Cancer, there were over 84,000 incident cases of NPC and 51,600 deaths in 2008, with 80% of the cases located in Asia [1,2]. The disease is a major public health challenge in southeast China, where it accounts for 20% of all cancers [3,4]. Over the years, numerous studies have revealed that NPC is a complex disease caused by the interaction of Epstein-Barr virus (EBV) infection, environmental factors and host genetic factors in a multi-step process of carcinogenesis [5]. Currently available data on the origin of NPC suggest that genetic alterations of tumor suppressor genes and oncogenes may be important in NPC carcinogenesis. The phosphatase and tensin homolog (PTEN), v-akt murine thymoma viral oncogene homolog 1 (AKT1), mouse double minute 2 (MDM2) and p53 tumor suppressor-oncoprotein network plays a crucial role in regulating a number of cellular processes such as cell growth, apoptosis, survival and cell cycle, which ultimately contributes to cancer development and progression [6][7][8]. The p53 tumor suppressor protein plays a central role in the prevention of tumor development, and tumorigenesis is accelerated when p53 activity is inhibited [9,10]. Recent observations demonstrated that AKT1 phosphorylates and stabilizes MDM2, the principal negative regulator of p53, resulting in the downregulation of p53 activity [11]. On the other hand, AKT1 is negatively regulated by PTEN, a p53 response gene that is inactivated in a variety of cancers [12].
Thus, two known tumor suppressor proteins (p53 and PTEN) and two oncoproteins (MDM2 and AKT1) are networked to balance cell survival and apoptosis [7]. The gain and loss of function of the oncogenic and tumor suppressor components of this network have been extensively described in a variety of cancers [7,13] including NPC [14][15][16][17]. Because of the importance of this tumor suppressor-oncoprotein network in cancer development and progression, we hypothesized that genetic variations within this network may lead to the deregulation of proliferation or cell death and subsequently affect cancer risk. Several single nucleotide polymorphisms (SNPs) in the PTEN, AKT1, MDM2 and p53 genes have been well characterized. The p53 gene has a single base change of G to C at codon 72 in exon 4, known as the p53 Arg72Pro polymorphism (rs1042522), which causes an amino acid change from arginine to proline. The p53 Pro72 allele is weaker than the Arg72 allele in inducing apoptosis and suppressing cellular transformation, but it appears to be better at initiating senescence and cell cycle arrest [18][19][20]. Of the identified MDM2 variants, the SNP309 polymorphism (rs2279244) is a T to G change at nucleotide 309 in the first intron. Compared with the T allele, the G allele has been shown to result in the increased expression of MDM2 RNA and protein and the subsequent downregulation of the p53 pathway [21,22]. Recent studies indicate that several specific combinations of SNPs in the AKT1 gene have been associated with variable AKT1 expression and p53-dependent apoptosis [23]. In addition, a polymorphism located in the 5′ untranslated region of the PTEN gene, C-9G (rs11202592), was shown to result in enhanced PTEN expression, which subsequently led to a reduced insulin-induced phosphorylation of AKT [24]. Taken together, these data indicate that the PTEN, AKT1, MDM2 and p53 tumor suppressor-oncoprotein network is genetically heterogeneous, a feature that can lead to a wide variation in the p53 response and may ultimately influence cancer risk [25]. We have previously reported that the MDM2 SNP309 polymorphism is associated with increased susceptibility to NPC and with advanced lymph node metastasis [26]. Given the role of the PTEN, AKT1, MDM2 and p53 tumor suppressor-oncoprotein network in the regulation of cell survival and apoptosis, we hypothesize that the combined genetic variants in this tumor suppressor-oncoprotein network could collectively modify the risk of NPC, and that these combined risk genotypes could serve as susceptibility markers for identifying high-risk subgroups of patients who might benefit from personalized prevention and treatment. Therefore, in this study, we investigated the combined effects of genetic variants in the PTEN, AKT1, MDM2 and p53 genes on the risk and disease severity of NPC in the Chinese population.

Ethics statement
The study was performed with the approval of the Ethical Committee of Beijing Institute of Radiation Medicine and conducted according to the principles expressed in the Declaration of Helsinki. Written informed consent was obtained from all the participants before inclusion in the study.

Study subjects
This case-control study consisted of 593 patients with NPC and 480 controls that have been described previously [26]. All subjects were unrelated ethnic Chinese and were enrolled from Nanning city and its surrounding regions between September 2003 and July 2005.
The diagnosis of cases, the inclusion and exclusion criteria for cases and controls, the definition of smokers and drinkers, and the tumor staging were described previously [26]. At the time of recruitment, personal information including demographic factors, medical history, tobacco and alcohol use, and family history of NPC was collected via a structured questionnaire.

Genotype analysis
Genomic DNA from peripheral blood was isolated by using standard phenol/chloroform protocols. The p53 Arg72Pro polymorphism (rs1042522) was genotyped using polymerase chain reaction (PCR) direct sequencing. Polymorphisms in the PTEN (C-9G, rs11202592) and AKT1 (SNP1, rs3803300; SNP2, rs1130214; SNP3, rs3730358; SNP4, rs1130233; and SNP5, rs2494732) genes were genotyped using PCR-based restriction fragment length polymorphism (RFLP) analysis. The primers and the restriction enzymes used in the study are listed in Table 1. PCR conditions were identical to those used for the SNP discovery described previously [27]. Genotyping was performed by staff blinded to the subjects' case/control statuses. The accuracy of the genotyping data for each polymorphism obtained from PCR-RFLP analyses was validated by direct DNA sequencing of a 15% masked, random sample of cases and controls.

Statistical analysis
The genotype and allele frequencies for the polymorphisms were determined by gene counting. The fit to Hardy-Weinberg equilibrium was tested using the random-permutation procedure implemented in the Arlequin package (available at http://lgb.unige.ch/arlequin/). Associations between polymorphisms and the risk of NPC were estimated by logistic regression analyses and adjusted for confounding factors (including age, sex, smoking and drinking status, smoking level, and nationality). Odds ratios (ORs) and 95% confidence intervals (CIs) were used to measure the strength of the association. The potential modification effect of the polymorphisms on NPC risk was assessed for the above confounding factors by the addition of interaction terms in the logistic model and by separate analyses of subgroups of subjects stratified by these factors. A homogeneity test was used to compare the differences in ORs between strata. In view of the multiple comparisons, the correction factor n (m-1) (n loci with m alleles each) was applied to correct the significance level. These analyses were performed using SPSS software (version 11.5; SPSS Inc.). The pairwise linkage disequilibrium (LD) calculation (Lewontin's D′ and r²) and haplotype block construction were performed using the program HaploView 4.2 [28]. Haplotypes based on the polymorphisms in the AKT1 gene were inferred using PHASE 2.1 software (available at http://www.stat.washington.edu/stephens/). Haplotype frequencies of the cases and controls were compared using χ² tests. The haplo.glm program (available at http://rss.acs.unt.edu/Rdoc/library/haplo.stats/html/haplo.glm.html) was then used to calculate adjusted ORs for each haplotype, and the number of simulations for empirical P values was set at 1000.

Individual polymorphisms and the risk of NPC
The genotyping results of the seven polymorphisms are presented in Table 2. The observed genotype frequencies for the seven polymorphisms were in Hardy-Weinberg equilibrium (all P > 0.05, data not shown). The genotype frequencies of all seven SNPs among patients were not significantly different from those among the controls.
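As an illustration of the elementary calculations behind such genotype comparisons, the sketch below implements a Hardy-Weinberg goodness-of-fit test and an unadjusted odds ratio with a 95% confidence interval. This is an independent sketch, not the authors' SPSS/Arlequin/haplo.glm workflow (which also adjusts for covariates), and the genotype counts shown are made-up placeholders rather than study data.

```python
import math
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium (1 df)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # frequency of allele A
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) * (1 - p)]
    stat = sum((o - e) ** 2 / e for o, e in zip((n_AA, n_Aa, n_aa), expected))
    return stat, chi2.sf(stat, df=1)

def odds_ratio(a, b, c, d):
    """Unadjusted OR with 95% CI from a 2x2 table:
    a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = (math.exp(math.log(or_) + s * 1.96 * se) for s in (-1, 1))
    return or_, lo, hi

# Placeholder counts for illustration only (not the study's genotype data).
print(hwe_chi2(n_AA=210, n_Aa=230, n_aa=60))
print(odds_ratio(a=150, b=443, c=90, d=390))
```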
Further, on the basis of logistic regression analysis with adjustment for age, sex, smoking and drinking status, smoking level, and nationality, we found no association with the risk of NPC for these polymorphisms. The associations between the seven polymorphisms and the risk of NPC were further examined with stratification by age, sex, smoking and drinking status, smoking level and nationality. Again, no significant association was found (data not shown). The effect of the seven polymorphisms on the severity of NPC, as measured by the tumor-node-metastasis (TNM) staging system, was also assessed. However, the distributions of genotypes of these polymorphisms were not significantly different among the subgroups with different clinical stage, or different T, N and M classification of the cancer (data not shown).

Haplotypes and risk of NPC
The pairwise disequilibrium measures (D′ and r²) of the five AKT1 polymorphisms were calculated. Figure 1b and 1c show that two polymorphisms, SNP4 and SNP5, were in strong LD. We next performed haplotype analysis to derive haplotypes specifically correlated with NPC. We identified several multi-SNP haplotype systems (Figure 1d). Further multilocus analysis identified a two-SNP haplotype (SNP4-5, AA) that was associated with a significantly increased susceptibility to NPC (adjusted OR = 3.87, 95% CI = 1.96-7.65; P < 0.001) (Figure 1e). Three other multi-SNP haplotypes (SNP1-5, AGCAA; SNP3-5, CAA; SNP2-5, GCAA) also showed a significant association with NPC risk; they all share a common core that extends from SNP4 to SNP5 (Figure 1e). The effect of the haplotypes on the severity of NPC was also assessed. However, the distributions of haplotype frequencies were not significantly different among the subgroups with different clinical stages, or different T, N and M classification of the cancer (data not shown).

Combined effects of the genetic variants on risk of NPC
Considering that each of these polymorphisms appeared to have a weak effect on NPC risk, we next investigated the combined effects of three functional polymorphisms (MDM2 SNP309, p53 Arg72Pro and PTEN C-9G) and two associated polymorphisms (AKT1 SNP4 and SNP5) on NPC risk. Because the AKT1 SNP4 and SNP5 were in strong LD, we only included AKT1 SNP4 in the analysis. Specifically, in the study subjects who had data available for all four polymorphisms, we categorized all risk genotypes of each polymorphism (i.e., p53 rs1042522 Arg/Pro + Pro/Pro, MDM2 rs2279244 G/T + G/G, PTEN rs11202592 C/C, AKT1 rs1130233 A/A) into a new variable according to the number of risk genotypes carried by an individual. When we combined the risk genotypes of the four polymorphisms together, we found that the risk for NPC increased significantly as the number of the combined risk genotypes increased (P trend = 0.019). We then categorized the patients into two groups: (i) the low-risk group (0-2 combined risk genotypes) and (ii) the high-risk group (3-4 combined risk genotypes). The frequencies of the combined risk genotypes among the cases were significantly different from those among the controls (P = 0.005). Furthermore, using the low-risk group as the reference group, the high-risk group was significantly associated with an increased susceptibility to NPC (adjusted OR = 1.67, 95% CI = 1.12-2.50; P = 0.012; Table 3). The association remained significant even after correction for multiple comparisons (P = 0.048).
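For completeness, the pairwise LD measures referred to above (Lewontin's D′ and r²) can be obtained from phased haplotype and allele frequencies as in the sketch below. The input frequencies are arbitrary illustrative values, not the study's AKT1 estimates.

```python
def ld_measures(p_AB, p_A, p_B):
    """Lewontin's D' and r^2 for two biallelic loci, given the frequency of
    haplotype AB (p_AB) and the allele frequencies p_A and p_B."""
    D = p_AB - p_A * p_B
    if D >= 0:
        D_max = min(p_A * (1 - p_B), (1 - p_A) * p_B)
    else:
        D_max = min(p_A * p_B, (1 - p_A) * (1 - p_B))
    D_prime = D / D_max if D_max > 0 else 0.0
    r2 = D ** 2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
    return D_prime, r2

# Arbitrary illustrative frequencies (strong LD), not the study's AKT1 estimates.
print(ld_measures(p_AB=0.38, p_A=0.40, p_B=0.45))
```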
We also evaluated the association between the combined risk genotypes and the risk of NPC stratified by age, sex, smoking and drinking status, smoking level, and nationality (Table 4). Although the susceptibility to NPC seemed to be more pronounced in subjects who were male, older (>47 years), nonsmokers and those of Han nationality, these differences could be attributed to chance (all P > 0.05, test for homogeneity), indicating that these potential confounding factors had no modification effect on the risk of NPC. Furthermore, we evaluated the effect of the combined risk genotypes on the severity of NPC. However, the distributions of frequencies of combined risk genotypes were not significantly different among the subgroups with different clinical stage or different T, N and M classification of the cancer (data not shown).

Discussion
The PTEN, AKT1, MDM2 and p53 tumor suppressor-oncoprotein network plays an important role in the development of cancers. Polymorphisms within this network may affect the corresponding protein expression or function and, thus, potentially affect the risk of developing various cancers. However, the role of genetic variations of this tumor suppressor-oncoprotein network in NPC is not yet fully understood. In this study, we found that genetic variants of the PTEN, AKT1, MDM2 and p53 genes jointly influence the susceptibility to NPC. These results suggest that genetic variations within the PTEN, AKT1, MDM2 and p53 network can be used as biomarkers to identify high-risk subgroups of patients who might benefit from personalized prevention and treatment. The genetic associations observed in this study are biologically plausible. P53, MDM2, PTEN and AKT1 each have a role in carcinogenesis and tumor progression. The deregulation of these four genes has been detected in a broad range of human malignancies including NPC [13][14][15][16][17]. Furthermore, p53, MDM2, PTEN and AKT1 can interact with each other to balance cell survival and apoptosis. The major role of MDM2 is to interact directly with p53 to block p53-mediated transactivation and apoptosis. AKT1 is an antiapoptotic protein kinase, and one of its substrates is the MDM2 protein. The phosphorylation of MDM2 by AKT1 leads to the stabilization of MDM2 and also promotes the movement of MDM2 into the nucleus where it can act to downregulate p53 activity [11]. On the other hand, the major function of PTEN relies on its AKT1 inhibitory activity, and the loss of PTEN function results in increased AKT1 activation [12]. In addition, PTEN can directly inhibit the movement of MDM2 into the nucleus, thereby protecting p53 from survival signals emanating from growth factor receptors [29]. Therefore, these numerous interactions support the biological plausibility that the combination of variants of the PTEN, AKT1, MDM2 and p53 network could result in more comprehensive and accurate estimates of risk for NPC than can be obtained from a single variant. Another finding in the present study was that a two-SNP core haplotype in the AKT1 gene, SNP4-5 AA, was significantly associated with increased NPC risk. AKT1 is a central node in cell signaling that plays an important role in tumorigenesis. AKT1 has been reported to be constitutively activated in NPC, enhancing cell survival by blocking the induction of apoptosis [15]. Haplotypes in the AKT1 gene were recently reported to be associated with higher levels of AKT1 protein expression and with resistance to p53-dependent apoptosis [23,30].
One study also reported an association between an AKT1 polymorphism and cancer metastasis [31]. Additionally, AKT1 polymorphisms were found to predict treatment response and clinical outcome in patients with esophageal and non-small cell lung cancer [32,33]. Collectively, these observations indicate that our finding of an association between the AKT1 haplotype and the risk of NPC may be biologically plausible. The molecular mechanism by which the AKT1 SNP4-5 AA haplotype confers a risk of developing NPC is unknown. It has not been shown that either SNP4 (a silent change at amino acid 242 in exon 11) or SNP5 (located in intron 13) represents a functional SNP with an ability to change either the expression or the activity of AKT1. Rather, these two SNPs may only be markers for this region, and a unique variant capturing the effect of both SNP4 and SNP5 remains to be discovered. In addition, Emamian et al. reported that a core risk haplotype TC, extending from AKT1 SNP2 to SNP3, was associated with lower AKT1 protein levels in EBV-transformed lymphocytes in Americans of Northern European descent [30]. However, Harris et al. demonstrated that B cells harboring the major SNP3-4 haplotype at AKT1 expressed higher levels of AKT1 and are relatively resistant to p53-dependent apoptosis compared to cells with the minor haplotype in Caucasians [23]. The inconsistency between these findings may be due to differences in the LD between a functional SNP and a marker SNP in different populations. Indeed, the LD patterns and allele frequencies in this region vary among racial populations. Thus, there may be "race-specific" differences in the contribution of polymorphisms to AKT1 expression and, consequently, to cancer risk. However, additional studies are needed to clarify this possibility. Polymorphisms in the PTEN, AKT1, MDM2 and p53 network have been individually used to search for susceptibility alleles of different cancers, but the results are inconsistent. The inconsistent results of these studies may be attributed to different molecular mechanisms of carcinogenesis among cancers, small sample sizes, marginal statistical significance and different ethnicities of the study populations. Additionally, a minor effect of a single variant on cancer risk could also cause the inconsistent results. Several studies have reported a potential interaction between the MDM2 SNP309 and the p53 Arg72Pro polymorphisms for breast and endometrial cancer, gastric cardia adenocarcinoma, and hepatitis B virus-related hepatocellular carcinoma [34][35][36][37]. Interactions were observed between the p53 Arg72Pro and PTEN polymorphisms with regard to the risk of esophageal squamous cell carcinoma [38]. This was also the case in the present study, in which the haplotype and combined analyses confirmed the effects of multiple SNPs on NPC risk. Our results, together with those of the earlier studies, highlight the need for combined analysis of the effects of genetic variants on cancer risk. In reviewing the results of this study, one must also keep several potential limitations in mind. First, as a hospital-based study, our cases were enrolled from hospitals and the controls were selected from the community population. Consequently, inherent selection bias might have occurred. To overcome this limitation, we matched cases and controls for their age and residential area. Moreover, any inadequacy in matching was controlled in the data analyses with further adjustment and stratification.
Second, considering that our study population included only a small number of patients in the low-risk genotype group, our initial findings should be investigated in additional studies with larger sample sizes. Third, in this study, we selected variants from four genes that encode the core functional components of this tumor suppressor-oncoprotein network. However, this network is complex, and further studies that investigate other genes in this network are warranted to fully clarify the role of this important tumor suppressor-oncoprotein network in the genetic etiology of cancers. In summary, to our knowledge, this report is the first to describe the association between the combined effects of genetic variants of the PTEN, AKT1, MDM2 and p53 tumor suppressor-oncoprotein network and the risk of NPC. If confirmed by other studies, the contribution of genetic factors to the pathogenesis of NPC presented here may have implications for the screening and treatment of this disorder.
Is evolution Darwinian or/and Lamarckian?

Background
The year 2009 is the 200th anniversary of the publication of Jean-Baptiste Lamarck's Philosophie Zoologique and the 150th anniversary of Charles Darwin's On the Origin of Species. Lamarck believed that evolution is driven primarily by non-randomly acquired, beneficial phenotypic changes, in particular, those directly caused by the use of organs, which Lamarck believed to be inheritable. In contrast, Darwin assigned a greater importance to random, undirected change that provided material for natural selection.

The concept
The classic Lamarckian scheme appears untenable owing to the non-existence of mechanisms for direct reverse engineering of adaptive phenotypic characters acquired by an individual during its life span into the genome. However, various evolutionary phenomena that came to the fore in the last few years seem to fit a more broadly interpreted (quasi)Lamarckian paradigm. The prokaryotic CRISPR-Cas system of defense against mobile elements seems to function via a bona fide Lamarckian mechanism, namely, by integrating small segments of viral or plasmid DNA into specific loci in the host prokaryote genome and then utilizing the respective transcripts to destroy the cognate mobile element DNA (or RNA). A similar principle seems to be employed in the piRNA branch of RNA interference, which is involved in defense against transposable elements in the animal germ line. Horizontal gene transfer (HGT), a dominant evolutionary process, at least in prokaryotes, appears to be a form of (quasi)Lamarckian inheritance. The rate of HGT and the nature of acquired genes depend on the environment of the recipient organism and, in some cases, the transferred genes confer a selective advantage for growth in that environment, meeting the Lamarckian criteria. Various forms of stress-induced mutagenesis are tightly regulated and comprise a universal adaptive response to environmental stress in cellular life forms. Stress-induced mutagenesis can be construed as a quasi-Lamarckian phenomenon because the induced genomic changes, although random, are triggered by environmental factors and are beneficial to the organism.

Conclusion
Both Darwinian and Lamarckian modalities of evolution appear to be important, and reflect different aspects of the interaction between populations and the environment.

Reviewers
This article was reviewed by Juergen Brosius, Valerian Dolja, and Martijn Huynen. For complete reports, see the Reviewers' reports section.

Background
The celebrations of Darwin's 200-year jubilee and the 150th anniversary of On the Origin of Species [1] in 2009, to a large extent, overshadowed another anniversary: Jean-Baptiste Lamarck's magnum opus, Philosophie Zoologique [2], was published in 1809, the year of Darwin's birth [3]. Arguably, Lamarck's book was the first published manifesto of biological evolution, as fittingly acknowledged by Darwin himself in the later editions of the Origin [4][5][6]. Lamarck's concept of evolution was limited in scope: in particular, he did not believe in extinction of species but rather thought that species are gradually transformed into other species via phyletic modification. Lamarck also believed in the innate tendency of organisms to progress toward perfection down the succession of generations.
In line with this idea, Lamarck speculated on an extremely simple and straightforward mechanism of evolutionary change whereby the use of a particular organ would lead to its gradual functional improvement that would be passed through generations (the example of the giraffe's neck is, probably, one of the most notorious "just so stories" in the history of biology). Later, a generalization of Lamarck's hypothetical mechanism became known as inheritance of acquired characters (characteristics) (IAC), to emphasize a key aspect of this mechanism, namely, the direct feedback between phenotypic changes and (what is now known as) the genotype (genome). However, it should be stressed that the phrase "inheritance of acquired characters" is substantially imprecise in that Lamarck and his followers were very particular that adaptive (beneficial, useful), not just any, acquired traits were inherited. Furthermore, inheritance of acquired characters certainly is not Lamarck's original idea; rather, it appears to have been "folk wisdom" in Lamarck's day [7]. Hereinafter we use the acronym IAC with this implicit understanding. As already mentioned, Darwin was well aware of Lamarck's work and generously acknowledged Lamarck's contribution in the chapter on his scientific forerunners that he included in the Origin starting with the 3rd edition [4]. Darwin's own views on IAC markedly evolved. In the first edition of the Origin, he allowed IAC as a relatively unimportant mechanism of evolutionary change that was viewed as subsidiary to random, undirected variation. However, in the subsequent editions, Darwin viewed IAC as being progressively more consequential, apparently in the face of the (in)famous Jenkin's nightmare of blending inheritance [8], which Darwin was unable to refute with a plausible mechanism of heredity. Even in Darwin's day, many scientists considered his giving in to Lamarckian inheritance a sign of weakness and a mistake. In the 1880s, the renowned German biologist August Weismann, in the context of his theory of germ plasm and the germline-soma barrier, set out to directly falsify IAC in a series of experiments that became as famous as Lamarck's giraffe [9]. Almost needless to say, cutting off the tails of Weismann's experimental mice not only failed to produce any tail-less pups but did not result in any shortening of the tail of the progeny whatsoever. Weismann's experiments delivered a serious blow to the public perception of IAC although, technically, they may be considered irrelevant to Lamarck's concept, which, as already mentioned, insisted on the inheritance of beneficial changes, primarily those caused by the use of organs, not senseless mutilation (which was generally known to have no effect on progeny long before Weismann, for instance, in the case of human circumcision, although claims to the contrary were common enough in Weismann's day and were the direct incentive for his experiments). Lamarck's ideas survived Weismann's experiments and more, perhaps owing to the notion of the innate trend toward progress as a driving force of evolution, which was attractive to various kinds of thinkers (and many individuals who hardly met that classification). Be that as it may, the fate of "Lamarckism" was arguably far worse than a quiet demise under the tails of Weismann's mice.
Inspired by ideas of progress in biological evolution, the flamboyant Viennese researcher and popularizer of science Paul Kammerer, at the beginning of the 20th century, embarked on a two-decade-long quest to prove IAC [10][11][12][13][14]. Kammerer's work included mostly experiments with amphibians that changed their color patterns and breeding habits depending on environmental factors such as temperature and humidity. Strikingly, Kammerer insisted that the induced changes he observed were fully inheritable. Kammerer's experiments drew criticism due to his sloppy documentation and suspicious, apparently doctored, drawings and photographs. Kammerer defended his conclusions energetically, but in 1923 his career came to an end after the famous geneticist William Bateson found that Kammerer's showcase midwife toad, which supposedly acquired black mating pads, a trait that was passed to the progeny, had actually been injected with black ink. Kammerer killed himself within two years of this disgraceful revelation. Whether or not Kammerer was a fraud in the worst sense of the word remains unclear; it is thought that he might have used ink to "augment" a color change that he actually observed, a scientific practice that was not approved of even then, let alone now, but a far cry from flagrant cheating. Kammerer's findings might find their explanation in hidden variation among his animals that, unbeknownst to him, became subject to selection [11] or, alternatively, in epigenetic inheritance [12][13][14]. Under the most charitable of explanations, Kammerer ran a seriously sloppy operation, even if he unknowingly stumbled upon important phenomena. Regardless of the specifics, the widely publicized "affaire Kammerer" hardly improved the reputation of Lamarckian inheritance. The worst for Lamarck was yet to come. In a cruel irony, Kammerer was warmly welcomed by the Bolshevik leaders of the Soviet Union and nearly ended up moving his laboratory to that country. Despite the striking successes of Russian genetics in the 1920s (suffice it to recall the names of Chetverikov and Vavilov), the party leaders cherished the ideas of fast, planned, no-nonsense improvement of nature, including human nature. So, when the general situation in the country gravitated toward mass terror and hunger around 1930, a suitable team was found, under the leadership of the agronomist Trofim Lysenko. Lysenko and his henchmen were not scientists at all, not by any stretch, but utterly shameless criminals who exploited the abnormal situation in the country to amass in their hands extraordinary power over the Soviet scientific establishment and beyond. Lamarckian inheritance, which the Lysenkoists, not without a certain perverse cleverness (to the modern reader, with a distinctly Orwellian tint), touted as a "true Darwinian" mechanism of evolution, was the keystone of their "theory". They took Lamarck's idea to grotesque extremes by claiming, for instance, that cuckoos repeatedly emerged de novo from the eggs of small birds as a particularly remarkable adaptation. In his later years, after he fell from power, Lysenko retained an experimental facility where he reportedly fed cows butter and chocolate in an attempt to produce a breed stably giving high-fat milk. Mostly, the Lysenkoist "science of true Darwinism" was not even fraudulent, because its adepts often did not bother to fake any "experiments" but simply told their ideologically inspired tales.
This could have been comical if not for the fact that many dissenters literally paid with their lives, and almost all research in biology in the Soviet Union was hampered for decades. There is no reason to discuss Lysenko any further here; detailed accounts have been published [15][16][17], and the proceedings of the infamous 1948 session of the Soviet Agricultural Academy, where genetics was officially banished, remain a fascinating even if harrowing read [18]. What concerns me here is that, quite understandably, the unfortunate saga of Lysenko made the very idea of a Lamarckian mechanism actually operating during evolution repulsive and unacceptable to most biologists. The IAC itself remains, effectively, a derogatory phrase and is presented as a grave error in judgment even in otherwise admiring accounts of Lamarck's work [3].

However, an objective look at several routes of emergence and fixation of evolutionary change that surfaced in the genomic era reveals mechanisms that appear suspiciously Lamarckian or at least quasi-Lamarckian. In this article, we discuss these classes of genomic changes and arrive at the conclusion that some mechanisms of evolution that meet all Lamarckian criteria do exist whereas, in many other instances, there is no sharp distinction between "Lamarckian" and "Darwinian" scenarios, with the two representing different aspects of the interaction between organisms and their environment that shapes evolution. Throughout this discussion, we stick to actual changes occurring in genomes, leaving aside the separate, fascinating subject of epigenetic inheritance.

The Lamarckian mode of evolution, its distinction from the Darwinian mode and the criteria for the identification of Lamarckian inheritance

Before turning to the wide range of phenomena that seem to display all or some features of the mechanism of evolution proposed by Lamarck, it is of course necessary to define the Lamarckian paradigm and the criteria an evolutionary process must satisfy to be considered Lamarckian. In doing so, we deliberately do not dwell on the differences between Lamarck's original views and the numerous subsequent (mis)representations, but rather try to distill the essence of what is commonly known as IAC and the Lamarckian mode of evolution.

Lamarck's concept of heredity, which is also one of the two cornerstones of his evolutionary synthesis, stands on two principles that he promoted to the status of fundamental laws in Philosophie Zoologique and other texts: 1) the use and disuse of organs, and 2) the inheritance of acquired characters. Lamarck directly linked the 'use and disuse' clause to effects of the environment on the "habits" of an organism and, through the said habits, on the "shape and nature" of body parts; and, of course, he considered these environment-effected adaptive changes to be heritable. Lamarck wrote: "...nature shows us in innumerable...instances the power of environment over habit and of habit over the shape, arrangement and proportions of the parts of animals" [2]. Thus, Lamarck's idea of heredity is based on the threefold causal chain: environment-habit-form. Lamarck insisted on the essentiality of change in habits as an intermediate between the environment and (heritable) change of organismal form: "Whatever the environment may do, it does not work any direct modification whatever in the shape and organization of animals.
But great alterations in the environment of animals lead to great alterations in their needs, and these alterations in their needs necessarily lead to others in their activities. Now if the new needs become permanent, the animals then adopt new habits that last as long as the needs that evoked them". Lamarck was not original in his belief in IAC, which appears to have been the folk wisdom of his day. However, he was both more specific than others in spelling out the above causal chain and, more importantly, he made this scheme the foundation of the far more original concept of evolution [7].

The second foundation of Lamarck's evolutionary synthesis was his belief in the innate tendency toward increasing organizational complexity (or, simply, progress) that, in Lamarck's view, shaped biological evolution along with the IAC. Although Lamarck often used the phrase "pouvoir de la vie" to denote this fundamental tendency, his idea was completely materialistic, even mechanistic, as he attributed the trend toward progress to the motion of fluids in the animal body, which would carve channels and cavities in soft tissues and gradually lead to the evolution of increasing organizational complexity. For good measure, to explain why simply organized life forms persisted despite the progressive character of evolution, Lamarck maintained that spontaneous generation was a constant source of primitive organisms. The ideas of spontaneous generation and the innate tendency toward progress, especially with its naïve mechanistic underpinning, are hopelessly obsolete. Whether or not there is an overall trend toward increasing complexity over the course of the evolution of life remains a legitimate subject of debate [19][20][21][22], but of course, even those researchers who advocate the existence of such a trend would not characterize it as an "innate tendency". In what follows, we address the much more relevant and interesting problem of the IAC and its contribution to the evolutionary process.

In terms compatible with modern genetics, Lamarck's scheme entails that 1) environmental factors cause genomic (heritable) changes, 2) the induced changes (mutations) are targeted to a specific gene(s), and 3) the induced changes provide adaptation to the original causative factor (Figure 1). Obviously, the adaptive reaction to a specific environmental factor has to be mediated by a molecular mechanism that channels the genomic change. The distinction from the Darwinian route of evolution is straightforward: in the latter, the environment is not the causative agency but merely a selective force that may promote fixation of those random changes that are adaptive under the given conditions (Figure 1). The Darwinian scheme is simpler and less demanding than the Lamarckian one in that no specialized mechanisms are required to direct the change to the relevant genomic locus (or loci) and restrict it to the specific modifications (mutations) providing the requisite adaptation. Indeed, it is the difficulty of discovering, or even conceiving of, mechanisms of directed adaptive change in genomes that has for decades relegated the Lamarckian scheme to the trash heap of history. In the rest of this article, we discuss recent studies of several phenomena that seem to call for a resurrection of the Lamarckian scenario of evolution.
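To make the contrast between the two modalities concrete, the toy simulation below (a deliberately minimal Python sketch; the binary-genome representation, all parameter values, and all function names are illustrative assumptions introduced for this example, not a model taken from the literature) implements the two schemes side by side: in the "Darwinian" routine, variation is random and undirected and the environment acts only through selection, whereas in the "Lamarckian" routine the environmental cue itself writes an adaptive change into the specific locus.

```python
import random

GENOME_LEN = 16          # toy binary genome
TARGET_LOCUS = 5         # locus whose '1' state confers adaptation to the cue
random.seed(0)

def random_mutation(genome, mu=0.0005):
    """Darwinian variation: undirected; any locus may flip, cue or no cue."""
    return [b ^ 1 if random.random() < mu else b for b in genome]

def directed_change(genome, cue_present, induction=0.3):
    """Lamarckian variation: the cue itself triggers a change targeted to the
    relevant locus, and the change is adaptive with respect to that cue."""
    if cue_present and random.random() < induction:
        genome = genome.copy()
        genome[TARGET_LOCUS] = 1
    return genome

def select(population, cue_present):
    """Selection is identical in both modalities: under the cue, only genomes
    carrying the adaptive state at the target locus reproduce."""
    if not cue_present:
        return population
    survivors = [g for g in population if g[TARGET_LOCUS] == 1]
    return survivors or population[:1]   # avoid extinction in the toy model

def generations_to_adapt(mode, max_gen=500, pop_size=100):
    """Number of generations until the whole population carries the adaptive
    allele; the environmental challenge is present throughout."""
    pop = [[0] * GENOME_LEN for _ in range(pop_size)]
    for gen in range(1, max_gen + 1):
        if mode == "darwinian":
            pop = [random_mutation(g) for g in pop]
        else:  # "lamarckian"
            pop = [directed_change(g, cue_present=True) for g in pop]
        pop = select(pop, cue_present=True)
        # survivors repopulate to constant size (the change is inherited)
        pop = [random.choice(pop).copy() for _ in range(pop_size)]
        if all(g[TARGET_LOCUS] == 1 for g in pop):
            return gen
    return max_gen

if __name__ == "__main__":
    for mode in ("darwinian", "lamarckian"):
        print(mode, "generations to adapt:", generations_to_adapt(mode))
```

In this caricature, the Lamarckian population adapts almost immediately because the cue channels the change to the right locus, while the Darwinian population must wait for the relevant random mutation to arise before selection can act; the point is only to illustrate why the Lamarckian scheme demands a dedicated channeling mechanism.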
Of course, despite the substantial mechanistic differences, the Lamarckian and Darwinian schemes are similar in that both are essentially adaptive in the final outcome and in that regard are radically different from random drift (which may be denoted the "Wrightian modality of evolution", after Sewall Wright, the originator of the key concept of random genetic drift [23]) (Figure 1).

Lamarckian and quasi-Lamarckian phenomena

The CRISPR-Cas system of antivirus immunity in prokaryotes: the showcase for a genuine Lamarckian mechanism

A recently discovered novel system of antiphage defense in archaea and bacteria seems to function via a straightforward Lamarckian mechanism. The system is known as CRISPR-Cas, where CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats and Cas for CRISPR-associated genes (sometimes referred to as CASS or simply the CRISPR system) [24][25][26]. The CRISPR are interspersed in the sense that they contain short unique inserts (spacers) embedded between the palindromic repeats. Archaeal and bacterial genomes contain cassettes of multiple CRISPR units, in some cases more than one cassette per genome. Although CRISPR were recognized over 20 years ago, even before the first complete bacterial genome was sequenced, only much later was it realized that CRISPR cassettes are always adjacent in genomes to a group of cas genes that are predicted to encode various enzymes involved in nucleic acid metabolism, including several nucleases, a helicase, and a polymerase [27][28][29]. Serendipitously, it was discovered that some of the inserts in CRISPR cassettes are identical to fragments of bacteriophage and plasmid genes [30,31], so the hypothesis was formulated that the CRISPR-Cas system utilized the phage-derived sequences as guide molecules to destroy phage mRNAs, analogously to eukaryotic RNA interference (RNAi) [32]. Although most of the mechanistic details remain to be uncovered, the principal propositions of this hypothesis have been validated: the presence of an insert precisely complementary to a region of a phage genome is essential for resistance [33]; the guide RNAs form complexes with multiple Cas proteins and are employed to abrogate the infection [34][35][36]; and new inserts conferring resistance to cognate phages can be acquired [37,38]. An important modification to the original proposal is that, in the systems so far explored, the cleaved target is the phage DNA itself rather than mRNA [39].

The mechanism of heredity and genome evolution embodied in the CRISPR-Cas system seems to be bona fide Lamarckian (Figure 2):
- an environmental cue (mobile element) is employed to directly modify the genome;
- the resulting modification (a unique, element-specific insert) directly affects the same cue that caused the modification;
- the modification is clearly adaptive and is inherited by the progeny of the cell that encountered the mobile element.

Figure 2. The mechanism of CASS: a bona fide Lamarckian system.

A peculiarity of the CASS-mediated heredity is that it appears to be extremely short-lived: even closely related bacterial and archaeal genomes do not carry the same inserts, the implication being that, as soon as a bacterium or archaeon ceases to encounter a particular bacteriophage, the cognate insert rapidly deteriorates (indeed, the inserts can hardly be evolutionarily stable in the absence of strong selective pressure because a single mutation renders them useless) [32,37,38].
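The spacer-based logic summarized above can be captured in a short sketch (a toy Python illustration under strong simplifying assumptions: exact string matching stands in for crRNA-guided recognition, and all function and variable names are invented for this example, not taken from any CRISPR software or from the source).

```python
import random
random.seed(1)

ALPHABET = "ACGT"
SPACER_LEN = 8

def random_sequence(n):
    return "".join(random.choice(ALPHABET) for _ in range(n))

def acquire_spacer(crispr_array, invader_genome):
    """Adaptation step: copy a short fragment (protospacer) of the invading
    phage/plasmid genome into the host CRISPR array. The genomic change is
    caused by, and directed against, the same environmental agent."""
    start = random.randrange(len(invader_genome) - SPACER_LEN)
    spacer = invader_genome[start:start + SPACER_LEN]
    return [spacer] + crispr_array          # new spacers are added at one end

def is_immune(crispr_array, invader_genome):
    """Interference step: infection is aborted if any stored spacer exactly
    matches the invader; a single mismatch (mutation) abolishes immunity,
    which is why spacers decay quickly once the phage is no longer around."""
    return any(spacer in invader_genome for spacer in crispr_array)

# A naive host meets a phage, survives (by luck in this toy model), acquires
# a spacer, and its descendants inherit resistance to that same phage.
phage = random_sequence(200)
host_array = []
print("immune before acquisition:", is_immune(host_array, phage))
host_array = acquire_spacer(host_array, phage)
print("immune after acquisition:", is_immune(host_array, phage))
print("immune to an unrelated phage:", is_immune(host_array, random_sequence(200)))
```

The sketch shows, in miniature, why the system satisfies the Lamarckian criteria: the heritable change is triggered by the environmental agent, is targeted (it consists of a piece of that agent), and is adaptive specifically against it.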
Nevertheless, the Lamarckian scenario seems undeniable in the case of CASS: adaptive evolution of organisms occurs directly in response to an environmental factor, the result being specific adaptation (resistance) to that particular factor [32].

Figure 1. Lamarckian, Darwinian, and Wrightian modalities of evolution.

Other potential Lamarckian systems functioning on the CASS principle

It is instructive to compare the hereditary and evolutionary features of the CASS with those of eukaryotic RNA interference (RNAi), more specifically the siRNA and piRNA pathways, and of immune systems, the two systems in eukaryotes that, at least in general terms, are functionally analogous to CASS. Neither of these systems seems to utilize a straightforward Lamarckian mechanism. Nevertheless, both can be considered to display certain "Lamarckian-like" features. The siRNA system (a distinct branch of RNAi) definitely "learns" from an external agent (a virus) by generating siRNAs complementary to viral genes [40][41][42], a process that could be related, at least metaphorically, to Lamarck's "change of habits". Moreover, there is a degree of memory in the system because in many organisms siRNAs are amplified, and the resistance to the cognate virus can persist for several generations [43,44]. Such persistence of siRNA is one of the manifestations of increasingly recognized RNA-mediated inheritance, sometimes called paramutation [45,46]. The key difference from CASS is that (as far as is currently known) siRNAs are not incorporated into the genome, so Lamarckian-type epigenetic inheritance, but not bona fide genetic inheritance, seems to be involved. However, even that distinction becomes questionable in the case of transposon-derived piRNAs, which form rapidly proliferating clusters that provide defense against transposable elements in the germ lines of all animals [47,48]. In the case of piRNA, as with CRISPR-Cas, fragments of mobile element genomes are integrated into the host genome, where they rapidly proliferate, apparently under the pressure of selection for effective defense [48]. All the criteria for the IAC and the Lamarckian mode of evolution seem to be met by this system. It seems particularly remarkable that the sequestered germline, a crucial animal innovation that seems to hamper some forms of Lamarckian inheritance, such as those associated with HGT, itself evolved a specific version of IAC. Notably, recent findings in both plants and arthropods, although preliminary, indicate that these eukaryotes integrate virus-specific DNA into their genomes and might employ these integrated sequences to produce siRNAs that confer immunity to cognate viruses [49,50]. If corroborated by more detailed research, these mechanisms will be fully analogous to CRISPR-Cas and decidedly Lamarckian.

Horizontal gene transfer: a major Lamarckian component

Arguably, the most fundamental novelty brought about by comparative genomics in the last decade is the demonstration of the ubiquity and high frequency of horizontal gene transfer (HGT) among prokaryotes, and a considerable level of HGT in unicellular eukaryotes as well [51][52][53][54][55][56]. Prokaryotes readily obtain DNA from the environment, with phages and plasmids serving as vehicles, but in many cases also directly, through the transformation pathway [57].
The absorbed DNA often integrates into prokaryotic chromosomes and can be fixed in a population if the transferred genetic material confers even a slight selective advantage on the recipient, or even neutrally [58]. The HGT phenomenon has an obvious Lamarckian aspect to it: DNA is acquired from the environment, and, naturally, the likelihood of acquiring a gene that is abundant in the given habitat is much greater than the likelihood of receiving a rare gene. The second component of the Lamarckian scheme, the direct adaptive value of the acquired character, is not manifest in all fixed HGT events but is relevant and common enough. Perhaps the most straightforward and familiar case in point is the evolution of antibiotic resistance. When a sensitive prokaryote enters an environment where an antibiotic is present, the only chance for the newcomer to survive is to acquire a resistance gene(s) by HGT, typically via a plasmid [59]. This common (and, of course, practically extremely important) phenomenon seems to be a clear case of Lamarckian inheritance. Indeed, a trait, in this case the activity of the transferred gene that mediates antibiotic resistance, is acquired under the direct influence of the environment and is clearly advantageous, even essential, in this particular niche. More generally, any instance of HGT when the acquired gene provides an advantage to the recipient, in terms of reproduction in the given environment (that is specifically conducive to the transfer of the gene in question), seems to meet the Lamarckian criteria. Recent comparative genomic studies indicate that HGT is the principal mode of bacterial adaptation to the environment through the extension of metabolic and signaling networks that integrate new, horizontally acquired genes and hence incorporate new capabilities within pre-existing frameworks [60][61][62]. Quantitatively, in prokaryotes, HGT appears to be a far more important route of adaptation than gene duplication [62,63].

A provocative indication that HGT might be an adaptive phenomenon is the recent discovery of Gene Transfer Agents (GTAs). The GTAs are derivatives of defective bacteriophages that pack a variety of bacterial genes and transfer them within bacterial and archaeal populations [64,65]. The properties of GTAs remain to be investigated in detail, but it seems to be a distinct possibility that these agents are dedicated vehicles of HGT that evolved under the selective pressure to enhance gene transfer. Should that be the case, one would have to conclude that HGT itself is, in part, an adaptive phenomenon.

Stress-induced mutagenesis and activation of mobile elements: quasi-Lamarckian phenomena

Darwin emphasized the evolutionary importance of genuinely random, undirected variation, whereas the Lamarckian modality of evolution is centered on directed variation that is specifically caused by environmental factors. Real evolution seems to defy such oppositions. A crucial case in point is the complex of diverse phenomena that collectively can be denoted stress-induced mutagenesis [66,67], one major facet of which is the activation of mobile elements. In her classic experiments, McClintock demonstrated activation of "gene jumping" in plants under stress and the importance of this stress-induced mobility of distinct "controlling elements" for the emergence of resistance phenotypes [68,69].
The later, also famous and controversial, experiment of Cairns and coworkers on reversion of mutations in the lac operon induced by lactose brought the Lamarckian mechanism of evolution to the fore in a dramatic fashion [70,71]. Cairns et al. showed strong enhancement of frameshift reversion in the lac operon in the presence of lactose and boldly speculated that the classical Lamarckian mechanism of evolution was responsible for the observed effect, i.e., that lactose directly and specifically caused mutations in the lac operon. Subsequent, more thorough investigations, including the work of Cairns and Foster, showed that this was not the case: stress such as starvation was shown to induce mutations, but not in specific loci [72][73][74][75][76][77]. Crucially, the mutations underlying the reversion of the lac− phenotype and other similar phenotypes have been shown to be strictly stress-induced (lac− cells plated on a medium with lactose as the only carbon source experience starvation stress), rather than emerging from a pool of pre-existing rare, spontaneous mutations [78][79][80].

Actually, stress-induced mutagenesis, specifically the mutagenic SOS repair pathway in E. coli, was discovered long before the experiments of Cairns. Moreover, Radman [81] and Echols [82] independently came up with the seminal idea that this mutagenic form of repair could actually be an adaptive, anti-stress response mechanism rather than a malfunctioning of the repair systems. Two decades of subsequent research seem to have proved this striking conjecture beyond reasonable doubt. The adaptive character of error-prone DNA repair is supported by several lines of strong evidence. The activities of the SOS pathway and the other mutagenic repair mechanisms are elaborately regulated, in particular through the switch from high-fidelity to error-prone double-strand break repair effected by the dedicated σ-factor RpoS, apparently to produce the optimal mutation rate [83]. Mutations produced by error-prone repair processes, although not targeted to specific genes, are not randomly scattered in the genome either. On the contrary, these mutations are clustered around double-stranded breaks, a phenomenon that is thought to have evolved as a distinct adaptation that allows coordinated evolvability of clustered, functionally linked genes (a central feature of genome architecture in prokaryotes) in rare cells where beneficial mutations emerge, while limiting the damage to other parts of the genome [67,83-86]. More recently, stress-induced mutagenesis, in particular retrotransposon mobilization, was demonstrated also in yeast and in animals [87][88][89], suggesting that this mechanism of adaptive evolvability is general across the entire range of cellular life forms [67]. Stress-induced mutagenesis is a rule among bacteria rather than an exception: among hundreds of investigated natural isolates of E. coli, more than 80% showed induced mutagenesis in aged colonies, and the excess of stress-induced mutations over constitutive ones varied by several orders of magnitude [90].

Strikingly, it appears that stress-induced genome instability is also central to the progression of cancer in animals [82]. Tumors evolve under conditions of perpetual hypoxic stress, which induces extensive genome rearrangement and mutation [91,92]. These stress-induced changes comprise the basis for the survival of mutants that are capable of uncontrolled growth in spite of the stress.
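The regulatory logic shared by these systems, from the bacterial SOS response to hypoxic tumors, can be caricatured in a few lines of Python (a toy sketch with invented parameter values, not a model of any specific repair pathway): mutagenesis is switched on by stress and remains untargeted, yet the resulting mutations cluster near a double-strand break rather than being scattered uniformly, so a nearby resistance locus is hit far more often under stress than during normal growth.

```python
import random
random.seed(2)

GENOME_LEN = 100
DSB_SITE = 40            # position of a double-strand break under stress
RESISTANCE_LOCUS = 43    # stress-resistance locus close to the break

def mutation_rate(stress, basal=0.001, induced=0.02):
    """Mutagenesis is regulated, not constant: error-prone repair is switched
    on under stress, raising the genome-wide mutation rate."""
    return induced if stress else basal

def mutate(genome, stress):
    """Mutations are untargeted (any locus can change), but under stress they
    cluster around the double-strand break instead of being spread evenly."""
    rate = mutation_rate(stress)
    new = genome.copy()
    for i in range(GENOME_LEN):
        local_rate = rate * 10 if stress and abs(i - DSB_SITE) <= 5 else rate
        if random.random() < local_rate:
            new[i] ^= 1
    return new

def fraction_hit(pop_size=2000, stress=True):
    """Fraction of cells that, after one round of repair-associated mutagenesis,
    carry a change at the nearby stress-resistance locus."""
    hits = 0
    for _ in range(pop_size):
        cell = mutate([0] * GENOME_LEN, stress)
        hits += cell[RESISTANCE_LOCUS]
    return hits / pop_size

print("resistance locus hit without stress:", fraction_hit(stress=False))
print("resistance locus hit under stress:  ", fraction_hit(stress=True))
```

The sketch is only meant to show why such a mechanism is quasi-Lamarckian rather than Lamarckian: the environment triggers and shapes the distribution of mutations, but it does not specify which mutation occurs.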
Despite the differences in the actual mechanisms of mutagenic repair and its regulation, malignant tumors in animals are conceptually not so different from bacterial populations evolving under stress [67]. Adaptive evolution resulting from stress-induced mutagenesis is not exactly Lamarckian because the stress does not cause mutations directly and specifically in genes conferring stress resistance. Instead, organisms have evolved mechanisms that, in response to stress, induce non-specific mutagenesis which, however, appears to be fine-tuned in such a way as to minimize the damage from deleterious mutations in those rare genomes that carry a beneficial mutation. This type of mechanism is best defined as quasi-Lamarckian. Indeed, in the case of stress-induced mutagenesis: i) mutations are triggered by environmental conditions; ii) the induced mutations lead to adaptation to the stress factor(s) that triggered mutagenesis; iii) mutagenic repair is subject to elaborate regulation, which leaves no reasonable doubt regarding the adaptive nature of this process. Remarkably, there is a direct link between the Lamarckian aspects of stress-induced mutagenesis and HGT via the phenomenon of antibiotic-induced HGT of resistance determinants [93,94]. More specifically, many antibiotics induce the SOS response which, in turn, leads to the mobilization of integrating conjugative elements (ICEs) that serve as vehicles for the antibiotic resistance genes. Here we observe an apparent convergence of different mechanisms of genome change in the Lamarckian modality.

The dissolution of a conflict: the continuum of Darwinian and Lamarckian mechanisms of evolution

In the preceding sections, we discussed a considerable variety of phenomena, some of which seem to strictly meet the Lamarckian criteria whereas others qualify as quasi-Lamarckian (Table 1). The crucial difference between "Darwinian" and "Lamarckian" mechanisms of evolution is that the former emphasizes random, undirected variation whereas the latter is based on variation directly caused by an environmental cue and resulting in a specific response to that cue (Figure 1). Neither Lamarck nor Darwin was aware of the mechanisms of emergence and fixation of heritable variation. Therefore, it was relatively easy for Lamarck to entertain the idea that phenotypic variation directly translates into heritable (what we now consider genetic or genomic) changes. We now realize that the strict Lamarckian scenario is extremely demanding in that a molecular mechanism must exist for the effect of a phenotypic change to be channeled into the corresponding modification of the genome (mutation). There seems to be no general mechanisms for such reverse genome engineering and it is not unreasonable to surmise that genomes are actually protected from this type of mutation. The "central dogma of molecular biology", which states that there is no information flow from protein to nucleic acids [95], is a partial embodiment of this situation. However, in principle, the backward flow of specific information from the phenotype, or the environment viewed as extended phenotype, to the genome is not impossible, owing to the wide spread of reverse transcription and DNA transposition. Highly sophisticated mechanisms are required for this bona fide Lamarckian scenario to work, and in two remarkable cases, the CASS and the piRNA system, such mechanisms have been discovered.
Although the existence of other bona fide Lamarckian systems, beyond the CASS and the piRNA, is imaginable and even likely, as suggested, for instance, by the discovery of virus-specific sequences potentially conferring resistance to the cognate viruses in plant and animal genomes [49,50], these mechanisms hardly constitute the mainstream of genome evolution. In contrast, the mechanisms that we denoted in the preceding sections as quasi-Lamarckian are ubiquitous. Conceptually, these mechanisms seem to be no less remarkable, and no less sophisticated, than the genuine Lamarckian scenario, because the quasi-Lamarckian processes translate mutations that, in and by themselves, are random into specific, adaptive responses to environmental cues. The theme of powerful, often adverse effects of the environment on organisms seems to be common to the different facets of the Lamarckian mode of evolution described here, be it the case of the CASS system or stress-induced mutagenesis. This association is most likely not spurious: it stands to reason that strong signals from the environment trigger (quasi)Lamarckian processes whereas relatively weak signals ("business as usual") are conducive to the Darwinian modality of evolution (Figure 3).

Figure 3. Environment, stress and the Lamarckian and Darwinian modalities of evolution.

In a recent discussion of the evolutionary significance of HGT [96], Poole suggested that the Lamarckian aspect of HGT, which was invoked by Goldenfeld and Woese [56] as the dominant modality of the earliest stages of life evolution, becomes illusory when "a gene's view" of evolution [97] is adopted. Indeed, it appears that the Lamarckian modality is associated primarily, if not exclusively, with the organismal level of complexity, and does not apply to the most fundamental level of evolution which indeed involves genes, independently evolving portions of genes (e.g. those encoding distinct protein domains) and mobile elements [98]. In that sense, Lamarckian evolution may be considered an "emergent phenomenon", perhaps not surprisingly, considering the need for complex mechanisms of integration of new material into the genome to realize the Lamarckian scheme.

In our opinion, the view of directed and undirected variation and their places in evolution presented here defuses the long-standing tension between the Darwinian and Lamarckian scenarios. Indeed, evolution is a continuum of processes, from genuinely random to those that are exquisitely orchestrated to ensure a specific response to a particular challenge. The critical realization suggested by many recent advances referred to in this article is that genomic variation is a far more complex phenomenon than previously imagined and is regulated at multiple levels to provide adaptive reactions to changes in the environment. The distinction between Lamarckian and Darwinian mechanisms of evolution could potentially be considered one of only historical, semantic or philosophical interest. However, the radical reappraisal of the nature of genomic variation and the realization that much of this variation is adaptive, thus apparently eliminating the conflict between the Lamarckian and Darwinian scenarios, is a veritable, although underappreciated, paradigm shift in modern biology.
Conclusion

A close examination of a variety of widespread processes that contribute to the generation of genomic variation shows that evolution does not rely entirely on stochastic mutation. Instead, the generation of variation is often controlled via elaborate molecular machinery that instigates adaptive responses to environmental challenges of various degrees of specificity. Thus, genome evolution appears to span the entire spectrum of scenarios, from the purely Darwinian, based on random variation, to the bona fide Lamarckian, where a specific mechanism of response to a cue is fixed in an evolving population through a distinct modification of the genome. In a broad sense, all these routes of genomic variation reflect the interaction between the evolving population and the environment, in which the active role belongs either to selection alone (pure Darwinian scenario) or to directed variation that itself may become the target of selection (Lamarckian scenario).

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

EVK conceived of the article and wrote the original draft; YIW modified the manuscript and designed and prepared the figures; both authors read, edited and approved the final text.

Reviewer 1: Juergen Brosius, University of Muenster

This is a timely, captivating and clear presentation of yet another and highly significant testimony to the fact that in nature, we rarely encounter clear boundaries. Figure 1 is a centerpiece of the article, as it clearly pinpoints the salient differences between Lamarckian, Darwinian and also neutral evolution, but at the same time it illustrates their great similarities. Key in the Lamarckian mode are the mutation-directing mechanisms. Although acquired traits can be passed on to the next generation in the case of a greatly reduced Weismann barrier, as would have been the case in an RNA world where genotype and phenotype were almost indistinguishable on the same ribonucleic acid molecule [99], the directional component was almost certainly absent. While commenting on Kammerer in the Background section, the authors might include that very recently A. Vargas has revisited Paul Kammerer's controversial midwife toad experiments. He comes to the conclusion that there might be substance to Kammerer's observations based upon what we have learned in the meantime about patterns of epigenetic inheritance [12]; see also commentaries by Wagner and Pennisi [13,14].

Authors' response: We modified the text accordingly and cited these publications; the pointer to this recent re-analysis of Kammerer's work is greatly appreciated.

It is also worth noting that memes [97] and cultural evolution in general obey the laws of both Darwinian and Lamarckian evolution [100]. Recently, it was proposed that the human lineage is on the verge of several major evolutionary transitions [101], one of these being a capability very close to Lamarckism, with the potential to direct acquired knowledge on phenotype/genotype relationships into our germ line, the tools of Genetic Engineering and Molecular Medicine representing the mutation-directing mechanisms [99,102]. Hence, I would recommend qualifying the sentence on page 20, "There seems to be no general mechanisms for such reverse genome engineering and it is not unreasonable to surmise that genomes are actually protected from this type of mutation", with "up to now".
Authors' response: These are interesting possibilities, but we are of the opinion that, when and if realized, these artificial methods of introducing directed changes into genomes will be qualitatively distinct from naturally evolved mechanisms. Accordingly, we did not modify the text of the article, in the belief that the reader is adequately served by this comment.

However, we do not need to wait for this to fully develop. The authors recognized that a mechanism that evolved in Archaea and Bacteria long ago, capturing invader nucleic acids (e.g., from viruses and plasmids) and using them in antisense against the invading genetic material, is very close to the definition of Lamarckism. This mechanism is well described in the article. Although there are a number of original papers and reviews on the subject, no one seems to have recognized the significance of the findings with respect to Lamarckism (but see ref. 24 and Acknowledgements). Concerning definitions regarding "adaptation to the original causative factor" or the "adaptive reaction", at least initially this is not always the case: strictly speaking, the CRISPR system is an exaptation. For example, the viral sequences did not evolve for their function in the host; instead, the host co-opts them, subsequent to integration, for RNA-based antivirus immunity. Perhaps one way out would be the use of the term "aptation", which comprises exaptation and adaptation, as suggested by Gould and Vrba [103].

Authors' response: We think this is a very subtle although, perhaps, valid semantic point. Again, the interested reader will be alerted by the comment.

Horizontal gene transfer (HGT), which was rampant in the RNA world [99], I would not rank too highly with respect to Lamarckism. The CRISPR system is a much more impressive example. With respect to HGT, once more I only see a continuum, with HGT on one end and sex among members of the same species on the other. HGT is just limit-, border-, or barrierless sex, acquiring different genes instead of different alleles [99]. Obviously, I do not quite agree with the view that the "Lamarckian modality is associated primarily, if not exclusively, with the organismal level of complexity, and does not apply to the most fundamental level of evolution which indeed involves genes, independently evolving portions of genes (e.g. those encoding distinct protein domains) and mobile elements [98]" because of the inseparability of genotype and phenotype in the RNA world [99]. However, I agree with the authors to consider Lamarckism as largely an "emergent phenomenon" (but see the CRISPR system) in our lineage (see memes and other evolutionary transitions discussed above). Stress-induced mutations, whether point mutations and small indels arising from SOS repair, or large indels in the form of mobile genetic elements, constitute a crude machinery at best, and hardly a directed one. Despite a preference for TTAAAA during RNA-mediated retroposition in placental mammals [104], insertions can happen at almost any locus and hardly can be considered specific. At a later point, the authors put this in the right perspective. I hope misguided individuals do not stop reading before they reach these important paragraphs. Giving an outlook on the future of our species, we might expect a sharp increase in mutations and retroposition, due to the self-inflicted stress by feedback from our environment. Once more, one can only agree with Stephen Jay Gould:
"... our deepest puzzles and most fascinating inquiries often fall into a no-man's land not clearly commanded by either party" [7].

Reviewer 2: Valerian Dolja, Oregon State University

I follow the recent series of Eugene Koonin's conceptual papers pretty closely, and I must admit that this latest one is a surprising twist. When we were taught Biology, the work of Lamarck appeared to be a fine example of a feasible, coherent, and even likable theory that, however, had no experimental support whatsoever. By and large, this perception did not change in the last four decades of our direct engagement in biology research. Enter the discovery of the CRISPR system, based initially on bioinformatic analysis of prokaryotic genomes by Koonin's team, and then confirmed experimentally in several labs, again with Koonin's direct involvement. Even though it is in its infancy (e.g., it is not known how phages respond to this defense; they either have CRISPR suppressors or are busy evolving those), CRISPR has already emerged as a truly Lamarckian phenomenon, complete with a mechanism for insertion of the acquired phage DNA fragments into bacterial genomes. With the addition of the piRNA facet of the RNAi system and other, 'quasi-Lamarckian' phenomena such as HGT (particularly when mediated by GTAs), inheritance of environmental DNA becomes a major player in, at least, the evolution of prokaryotes.

However, one can still ask how relevant this partial vindication of Lamarck is to the contemporary, mechanism-based understanding of biological evolution. One argument is that, willy-nilly, Lamarck and Darwin based their concepts solely on observational natural philosophy rather than on investigation of underlying molecular mechanisms. It seems that the latter beats the former; suffice it to say that the Mendel laws are a trivial consequence of the DNA replication mechanisms. In a sense, it does not matter so much, Darwinian or Lamarckian, when it is understood how evolution operates at the molecular, organismal, and population levels. This having been said, I still believe that the effort of reviving Lamarck's ideas should be applauded, for at least the following reasons. Firstly, it enriches the conceptual framework of modern evolutionary theory by providing a novel insight into the complexity of relationships between genomes and environment, and by showing several amazing examples of how the latter can directly or indirectly change the former. It is also fitting that Koonin, who co-fathered the discovery of the CRISPR system, has also brought this now molecular mechanism-based Lamarckism back into the fold of evolutionary biology. Secondly, it shows how even seemingly opposing theories can be combined to complement each other. This pluralistic approach appears to be a strong and continuing trend in Koonin's work, be it the introns-early vs. introns-late or the TOL vs. FOL debates. Thirdly, it emphasizes the need for, and the benefits of, continuously rethinking and reinterpreting the history of science. The significance of the latter issue is hard to overestimate given the dramatic personal story of Kammerer (recently recapitulated in Science) that intertwined with the darkest days of Russian biology under Stalin and Lysenko. In conclusion, I think that the Koonin and Wolf essay will be very instructive for the broad audience of students of evolution and their opponents alike. It seamlessly integrates the literary, historical, philosophical, and mechanistic approaches. It also helps a lot that the paper is very engaging, impossible to put aside before finishing.
Authors' response: We appreciate the constructive comments and would like to emphasize that the primary goal of this paper is indeed not a reappraisal of the role of Jean-Baptiste Lamarck in the history of evolutionary biology. To engage in such an undertaking, one needs to be a professional historian of science, which we certainly are not, and, of course, to be able to read Lamarck's oeuvre in the original, which, most unfortunately, we cannot do (at least, not without a long-term, sustained effort). Rather, this paper focuses on the increasing realization that environmental factors are involved in evolutionarily relevant genomic change more directly and actively than perceived within the Modern Synthesis of Evolutionary Biology. This emerging new aspect of evolution necessarily brings to mind Lamarck, but we do not propound a revival of the actual ideas of Philosophie Zoologique.

Reviewer 3: Martijn Huynen, Radboud University

Koonin and Wolf have written an interesting and provocative study on the Lamarckian aspects of some non-random genetic changes. In commenting on this paper I will try not to run into semantic issues about what is really Lamarckian. Some newly discovered systems like the CAS system can, also in my view, clearly be regarded as Lamarckian, and I applaud the authors for carefully making their case. To regard Horizontal Gene Transfer (HGT) as Lamarckian, one would, however, have to show that a substantial fraction of HGT is indeed adaptive. I do not think we have data to substantiate that. One could of course argue that species living in the same environment share the same needs, like adaptation to high temperatures, and thus the transfer of Reverse Gyrase from Archaea to Bacteria could be regarded as Lamarckian. I doubt, however, that a reasonable fraction of the total number of genes that get transferred will have adaptive value. It may be tempting to think so, but we simply have no data to separate the effects of the process of HGT from the process + the effect of selection. I would therefore not agree that "any instance of HGT when the acquired gene provides an advantage to the recipient, in terms of reproduction in the given environment (that is specifically conducive to the transfer of the gene in question), seems to meet the Lamarckian criteria", because there will be many non-adaptive HGTs, just as there are many non-adaptive mutations.

Authors' response: We do not claim that all or most of HGT is adaptive or Lamarckian but only that there is a substantial Lamarckian component to it. The quoted sentence says nothing about the frequency of adaptive HGT, so we maintain that it is valid. Further, one has to clearly distinguish between the occurrence of HGT and its fixation in the population. Of course, the huge majority of occurring HGT is non-adaptive, but that does not necessarily apply to the fixed transfers.

Similarly, I do not think that there is evidence to support that the stress-induced changes in tumors are adaptive in themselves, even though some of them could indeed be selected, and I do not know of any evidence to support that "the induced mutations lead to adaptation to the stress factor(s) that triggered mutagenesis".

Authors' response: It is important to emphasize that, unlike the case of CRISPR and the adaptive component of HGT, which we view as bona fide Lamarckian, we denote stress-induced mutagenesis, including that occurring in tumors, a quasi-Lamarckian phenomenon (Table 1).
So we do not posit that induced mutations are adaptive "in themselves" but rather that some of them, often only a small fraction, are. However, all these mutations are directly induced by environmental stress factors, and those that are adaptive, even if a small minority, are most consequential for evolution.

Finally, I, at least, am not convinced that "much of this variation is adaptive". But this study did get me to think about it, and as such I think this manuscript provides valuable new insights and thoughts about the possible continuum between Darwinian and Lamarckian evolution.
Human Primary Astrocytes Differently Respond to Pro- and Anti-Inflammatory Stimuli

For a long time, astrocytes were considered a passive brain cell population. However, recently, many studies have shown that their role in the central nervous system (CNS) is more active. Previously, it was stated that there are two main functional phenotypes of astrocytes; however, it is now clear that there is rather a broad spectrum of these phenotypes. The major goal of this study was to evaluate the production of some inflammatory chemokines and neurotrophic factors by primary human astrocytes after pro- or anti-inflammatory stimulation. We observed that only astrocytes induced by the inflammatory mediators TNF-α/IL-1a/C1q produced the CXCL10, CCL1, and CXCL13 chemokines. Unstimulated astrocytes and those cultured with anti-inflammatory cytokines (IL-4, IL-10, or TGF-β1) did not produce these chemokines. Interestingly, astrocytes cultured in proinflammatory conditions significantly decreased the release of the neurotrophic factor PDGF-A, as compared to unstimulated astrocytes. However, in response to the anti-inflammatory cytokine TGF-β1, astrocytes significantly increased PDGF-A production compared to the medium alone. The production of another studied neurotrophic factor, BDNF, was not influenced by pro- or anti-inflammatory stimulation. The secretory response was accompanied by changes in HLA-DR, CD83, and GFAP expression. Our study confirms that astrocytes differentially respond to pro- and anti-inflammatory stimuli, especially to the inflammatory cytokines TNF-α, IL-1a, and C1q, suggesting their role in leukocyte recruitment.

Introduction

Astrocytes are the major glial cell population of the central nervous system (CNS), defined by their stellate morphology and expression of the glial fibrillary acidic protein (GFAP) [1]. These cells possess a high rate of metabolic activity and a wide range of functions that are crucial to maintaining a balanced brain microenvironment. Astrocytes are classically divided into three major types based on their morphology and spatial organization: protoplasmic astrocytes in gray matter, fibrous astrocytes in white matter, and radial astrocytes surrounding the ventricles. Astrocytes, long considered mainly as trophic and mechanical support for neurons, have progressively gained more attention as their role in CNS pathologies has become more obvious. They actively regulate synaptic transmission via the release and clearance of neurotransmitters and the regulation of the extracellular ion concentration [2,3]. They contribute to the formation and maintenance of the blood-brain barrier (BBB) [4], which separates the peripheral blood circulation from the CNS [5]. Moreover, astrocytes secrete various neurotrophic factors to regulate synaptogenesis, neuronal differentiation, and survival [6,7]. It is suggested that they may also act as antigen-presenting cells, regulating immune responses within the CNS [8]. Additionally, they may mediate the transmigration of leukocytes into the CNS.

Analysis of Surface Receptor Expression and Intracellular GFAP Level with Flow Cytometry

For flow cytometry analysis, astrocytes were harvested from 6-well plates after 48 h of stimulation with the investigated cytokines. Cells were collected on ice with cell dissociation solution (Sigma, St. Louis, MO, USA) and washed with cold PBS/1% FBS.
For multicolor staining of surface receptors, a panel of the following fluorochrome-labeled antibodies was applied: anti-CD80 APC mouse IgG1 κ, anti-CD83 PE mouse IgG1 κ, anti-CD86 Alexa Fluor 488 mouse IgG2b κ, and anti-HLA-DR Alexa Fluor 700 mouse IgG2a κ (all antibodies from Biolegend, San Diego, CA, USA). After incubation with the fluorescent antibodies (30 min, 4 °C, in the dark), cells were washed two times with cold PBS/1% FBS, fixed with formalin (20 min, 4 °C), washed two times with cold PBS/1% FBS, and frozen in FBS containing 10% DMSO (dimethyl sulfoxide) for further analysis. Flow cytometry measurements were conducted on an LSR II flow cytometer (Becton Dickinson, Franklin Lakes, NJ, USA). For GFAP staining, cells were fixed with formalin (20 min, 4 °C), permeabilized with Perm/Wash solution (BD Pharmingen), and stained with anti-GFAP PE antibody (BD Pharmingen). For all staining procedures, samples with antibody isotype controls were also utilized.

Statistical Analysis

Statistical analysis was carried out using Statistica 13.1 software (TIBCO Software Inc., Houston, TX, USA). The normality of distribution was checked with the Shapiro-Wilk test. Variables with a normal distribution were analyzed with a parametric one-way ANOVA test followed by the post hoc Tukey's honest significant difference test. Variables with a non-normal distribution were analyzed with the non-parametric Mann-Whitney U test. Results from flow cytometry measurements were analyzed with the Wilcoxon signed-rank test. Statistical differences were considered significant for p values < 0.05.

Proinflammatory Environment Results in Elevated GFAP Level in Astrocytes

To confirm that the analyzed cells were astrocytes, we measured the intracellular level of the astrocyte marker, the GFAP protein, by flow cytometry. All analyzed cells showed a high intracellular GFAP protein level (Figure 1A). GFAP increased in astrocytes exposed to the microglial proinflammatory cytokine mixture (TNF-α/IL-1a/C1q) compared to unstimulated cells (p = 0.02) (Figure 1B) [16]. We did not notice any significant changes in GFAP level in cells stimulated with the other cytokines. Cells cultured in medium alone were used as a control.

CD83 and HLA-DR Molecules Are Upregulated in Astrocytes Exposed to TNF-α/IL-1a/C1q Cytokines

Analysis of surface receptor expression revealed low density and minimal changes in CD80 and CD86 molecules on cells exposed to the analyzed cytokines. Interestingly, we noticed a high level of CD83 on the surfaces of the analyzed cells, and its expression, measured as the median fluorescence intensity, significantly increased (p = 0.03) on cells exposed to TNF-α/IL-1a/C1q compared to cells cultured in medium alone.
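As a practical illustration of the analysis pipeline described in the Statistical Analysis subsection above, the sketch below (Python with SciPy and statsmodels; the arrays contain made-up placeholder values and invented group labels, not the study's measurements) runs a Shapiro-Wilk normality check, then chooses between one-way ANOVA with a post hoc Tukey test and the non-parametric Mann-Whitney U test, and uses the Wilcoxon signed-rank test for paired flow cytometry comparisons.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder data: one array of measurements per culture condition.
groups = {
    "medium":        np.array([2900.0, 3100.0, 3250.0, 3050.0]),
    "TNF/IL-1a/C1q": np.array([1800.0, 1650.0, 1900.0, 1750.0]),
    "TGF-b1":        np.array([4100.0, 4350.0, 4200.0, 4500.0]),
}

# 1) Normality of distribution within each group (Shapiro-Wilk).
normal = all(stats.shapiro(values).pvalue >= 0.05 for values in groups.values())

if normal:
    # 2a) Parametric path: one-way ANOVA followed by post hoc Tukey's HSD.
    f_stat, p_anova = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
    endog = np.concatenate(list(groups.values()))
    labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
    print(pairwise_tukeyhsd(endog, labels, alpha=0.05))
else:
    # 2b) Non-parametric path: pairwise Mann-Whitney U test.
    u, p_mw = stats.mannwhitneyu(groups["medium"], groups["TNF/IL-1a/C1q"],
                                 alternative="two-sided")
    print(f"Mann-Whitney U: U = {u:.1f}, p = {p_mw:.4f}")

# 3) Paired comparison for flow cytometry readouts (e.g., CD83 median
#    fluorescence intensity on the same donors' cells, stimulated vs
#    unstimulated): Wilcoxon signed-rank test.
mfi_medium     = np.array([210.0, 195.0, 230.0, 205.0])
mfi_stimulated = np.array([260.0, 240.0, 290.0, 255.0])
w, p_wilcoxon = stats.wilcoxon(mfi_medium, mfi_stimulated)
print(f"Wilcoxon: W = {w:.1f}, p = {p_wilcoxon:.4f}")
```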
Proinflammatory Stimuli Induced Dramatic Chemokine Release in Astrocytic Cultures

We observed strong CCL1 (p = 0.00001), CXCL1 (p = 0.0000001), CXCL10 (p = 0.000001), and CXCL13 (p = 0.00001) chemokine production in astrocyte cultures, as well as the induction of proinflammatory IL-1β (p = 0.02), in response to the TNF-α/IL-1a/C1q cytokine cocktail in cell culture supernatants. These chemokines were not detected in supernatants from unstimulated cells or astrocytes cultured with the anti-inflammatory cytokines IL-4, IL-10, or TGF-β1 (Figure 3).

Figure 3. CCL1, CXCL1, CXCL10, CXCL13, and IL-1β production in human astrocyte cultures. Results from at least 2 separate experiments conducted on primary astrocytes collected from 4 human donors. Cells were cultured on 48-well plates for 6 days in proinflammatory conditions (TNF-α/IL-1a/C1q), anti-inflammatory conditions (IL-4, IL-10, or TGF-β1), and in non-stimulatory conditions (culture medium). Data shown as mean chemokine concentration ± SD; normality was checked with the Shapiro-Wilk test, groups were compared with the Mann-Whitney U test, and differences were considered significant for p values < 0.05.

Various Cytokine Environments Differently Regulate PDGF-A Expression in Astrocytes

Unstimulated astrocytes cultured for 6 days in astrocyte growth medium spontaneously produced high amounts of PDGF-A (mean 3109 ± 500 pg/mL). The addition of IL-4 or IL-10 to the culture medium did not affect the PDGF-A level in the collected supernatants. Cells cultured in proinflammatory conditions showed significantly decreased PDGF-A release, as compared to unstimulated astrocytes (p = 0.0004). However, in response to TGF-β1, astrocytes showed significantly increased PDGF-A production compared to medium alone (p < 0.003) and compared to proinflammatory conditions (p = 0.0017) (Figure 4A).
Figure 4. PDGF-AA (A) and BDNF (B) production in human astrocyte cultures. Cells were cultured on 48-well plates for 6 days in proinflammatory conditions (TNF-α/IL-1a/C1q), anti-inflammatory conditions (IL-4, IL-10, or TGF-β1), and in non-stimulatory conditions (culture medium). Data shown as mean PDGF-AA (A) or BDNF (B) concentration ± SD. For the PDGF-AA results, statistical analysis was performed with a parametric one-way ANOVA test followed by a post hoc Tukey's test; analysis of the BDNF results was carried out with the non-parametric Kruskal-Wallis test. Normal distribution within groups was checked with the Shapiro-Wilk test; p values < 0.05 were considered significant.

BDNF production in astrocyte cultures was also detected; however, we did not observe any impact of the stimulatory conditions on BDNF levels (Figure 4B). GDNF production was detected only in the cell culture of one donor, and β-NGF was not detectable in 6-day cultures with the applied ELISA kits (data not shown).

Discussion

Several studies have indicated the existence of the neuroimmune system in the CNS and its role in CNS functioning, homeostasis, and pathology [19,20]. The main cellular components of this system are glial cells (astrocytes and microglia), which initiate communication with other cell types via the production of various signaling molecules. Astrocytes are the most numerous glial cell type in the CNS [21]. They play various roles in the CNS, as regulators of the physiological state and responders to various pathological conditions, such as injury or infection [22]. The CNS is considered an immune-privileged region due to the presence of the BBB, which limits access to peripheral antibodies and leukocytes [23]. Astrocytes, as part of the BBB, are amongst the first CNS-resident cells that come into contact with blood-derived leukocytes entering the brain during neuroinflammation [4]. Moreover, the anatomical localization of astrocytic endfeet enables them to react to various soluble factors in the meningeal space [24,25]. The reciprocal interaction between activated peripheral immune cells and astrocytes impacts the active migration of leukocytes into the CNS [26].

Astrocytes secrete two main chemokines controlling the recruitment of perivascular leukocytes into the CNS [27]. One of these is CXCL10, which is known as a potent chemoattractant for Th1 cells, NK cells, and monocytes/macrophages [28,29]. CXCL10 is induced locally in the CNS in diverse pathologic states, e.g., Alzheimer's disease [30] and multiple sclerosis (MS) [31]. An increase in mRNA encoding CXCL10 in experimental autoimmune encephalomyelitis (EAE)-affected mouse brains under an inflammatory state was related to an increase in GFAP expression and astrogliosis [32]. Studies by Oh and others established that CXCL10 gene expression by astrocytes is quite dynamic and can be regulated by a variety of factors, e.g., IL-1β, TNF, and LPS [33,34].
Our results add to this group additional factors such as IL-1a and C1q (Figure 3C), similar to what was observed by Liddelow et al. [16]. In our studies, only astrocytes stimulated with the proinflammatory cytokine cocktail TNF-α/IL-1a/C1q were able to secrete CXCL10. There was no detectable CXCL10 production in unstimulated astrocytes or in those stimulated to induce an alternative protective phenotype (Figure 3C). This result is in agreement with previous reports on CXCL10 expression in the rodent CNS. Significant elevation of this chemokine has been observed in diverse neuropathologies, including inflammatory diseases such as EAE, contusion injury, cerebral ischemia, and neurotoxicant-induced neurodegeneration [35][36][37][38][39]. It has been observed that astrocyte-produced CXCL10 regulates not only leukocyte accumulation but also microglial migration toward an injury site, through a CXCR3-mediated mechanism [27,35,40-43]. This CXCL10-induced microglial movement has been linked to efficient myelin debris clearance in a cuprizone-induced demyelination model [44]. Moreover, in an in vitro model of myelination, astrocytes with high CXCL10 expression were unable to promote this process [45]. The addition of this chemokine to normal myelinating cultures leads to reduced myelination of axons. This observation overall points to CXCL10's function in oligodendrocyte maturation and axonal wrapping [45]. In our study, astrocytes with the proinflammatory phenotype secreted this chemokine at a high level, suggesting a possibly important role of these cells in the above-described processes (Figure 3C).

CCL1 is a chemokine that induces chemotaxis and plays an important role in the regulation of apoptosis [46]. This chemokine exerts its effects via the CCR8 receptor [47], whose constitutive expression has been shown in monocytes and macrophages, Th2 and Treg lymphocytes [48][49][50], NK cells, and immature B cells. It has been reported that the CCL1/CCR8 pathway is associated with phagocytic macrophages and activated microglia in active lesions in MS, and the level of CCL1 directly correlates with demyelinating activity. High expression of CCL1 and CCR8 in the CNS during EAE suggests that CCL1 plays an important role in the neuroinflammation process [50,51]. In addition, in in vivo studies, CCL1 increased the number of GFAP-positive astrocytes and Iba-1-positive microglia [52]. Moreover, it has been shown, in an in vivo model of ischemia, that CCL1 produced by astrocytes and oligodendrocytes attracts Treg cells to the ischemic brain [53]. Reactive microglia exert direct neurotoxic effects, but they also participate in neuronal injury indirectly. Increased secretion of cytokines and chemokines, e.g., CCL1, CCL20, IL-1β, IL-6, and TNF-α, is a hallmark of the proinflammatory activation phenotype of microglia. Some of these chemokines and cytokines (e.g., IL-1α, TNF-α, and complement C1q) are able to induce the proinflammatory phenotype of astrocytes [54]. We observed that stimulation of astrocytes with the mixture of IL-1α, TNF-α, and complement C1q induced CCL1 production by these cells (Figure 3A).

CXCL13 is constitutively expressed in lymphoid organs and has been shown to be a key chemokine in lymphocyte recruitment and compartmentalization. CXCL13 exerts its effect via the CXCR5 receptor [55]. The function of CXCL13 is only partially defined and mainly related to B cell chemoattraction to the CNS [56].
However, CXCL13's effects are not limited to the development and support of lymphoid tissues; it is also involved in chronic inflammation through the formation of tertiary lymphoid structures (TLS) [57]. CXCL13 is not expressed in the CNS under physiological conditions, but its expression is high in the brain and spinal cord under pathological conditions, such as autoimmune demyelination, primary CNS lymphoma, and Lyme neuroborreliosis (LNB) [58][59][60]. The suggested sources of CXCL13 in the CNS are monocytes in LNB, macrophages infiltrating lesions and perivascular stromal cells in primary CNS lymphoma, and microglial cells or meningeal TLS in MS [56,61,62]. The fact that astrocytes are able to produce CXCL13 upon activation by proinflammatory cytokines has not been reported before. In vitro studies have shown that CXCL13 is produced by monocytes and, at a much higher level, by macrophages. Expression of CXCL13 (both mRNA and protein) was induced by TNF-α and IL-1β but inhibited by IL-4 and IFN-γ [56]. In our study, CXCL13 production by astrocytes was induced only by incubation with the mixture of TNF-α, IL-1α, and C1q (Figure 3D). Various trophic factors released by astrocytes impact neuronal survival and plasticity after brain injury. These factors play important roles in pathological conditions, where they trophically support damaged neurons and oligodendrocytes, and some of them activate progenitor cells [11]. Moreover, growth factors also act on astrocytes in an autocrine/paracrine manner, thus contributing to a feed-forward amplification loop that starts and sustains reactive astrogliosis [63,64]. One of the most studied trophic factors is platelet-derived growth factor (PDGF). The PDGF family comprises four chains (A to D) that assemble into four homodimers and one known heterodimeric form (AB), giving five dimeric proteins. PDGF molecules are ubiquitous in the mammalian brain, where they are involved in the regulation of neuronal system development, while in the adult brain, PDGF family members are implicated in numerous cellular activities. PDGF-A regulates oligodendrocyte precursor cell (OPC) development, proliferation, and survival, determining the number of oligodendrocytes in the developing [65] and adult brain [66]. PDGF-A also regulates the proliferation and branching of astrocytes [67]. In our experimental conditions, astrocytes cultured in medium without any stimulus spontaneously produced PDGF-A. The anti-inflammatory cytokines IL-10 and IL-4 did not alter PDGF-A production in astrocyte cultures compared with the quiescent state. TGF-β1 strongly enhanced PDGF-A production in astrocytic cultures, whereas cells cultured in proinflammatory conditions showed lower production of this growth factor (Figure 4A). Our observation is partially consistent with the results described by Silberstein et al. [68], who reported detectable mRNA for PDGF-A in untreated cells; in their hands, however, its production in culture did not increase in response to TGF-β1 stimulation. Moreover, the authors also described an increased PDGF response to TNF-α, which is a strong proinflammatory agent. In their research, however, they used astrocyte-enriched, but not pure, astrocyte cultures. The production of PDGF-A by astrocytes and its increase in response to TGF-β1 may be especially significant for functional recovery in inflammatory demyelinating disorders such as multiple sclerosis.
PDGF-A-expressing astrocytes are able to stimulate myelin renewal through PDGF-A-dependent activation of oligodendrocytes and their precursors [69]. In experiments utilizing transgenic mice expressing the human PDGF-A gene under the control of a specific promoter, remyelination in a cuprizone-induced MS model was associated with an increased density of oligodendrocyte progenitor cells and a reduced apoptosis ratio compared with control animals [70]. During the course of MS, demyelination is accompanied by the development of inflammation in the CNS. Strong proinflammatory conditions provided by activated microglia induce a neurotoxic astrocyte phenotype through TNF-α/IL-1α/C1q-dependent signaling [16], which, in turn, according to our results, may lead to a decrease in PDGF-A secretion (Figure 4A). This results in demyelination and less efficient remyelination, especially during the chronic inflammatory reaction characteristic of MS. Brain-derived neurotrophic factor (BDNF) plays an important role in neuronal survival, synaptic plasticity, and long-term potentiation (LTP); thus, its alterations are correlated with cognitive impairments [71,72]. It is also important for dendrite outgrowth and spine number, as observed in an in vitro study [73]. BDNF is an important factor for OPCs: it promotes their proliferation and differentiation into mature oligodendrocytes [74]. It may also impact the differentiation of neural stem/progenitor cells into oligodendrocyte lineage cells [75]. Astrocytes are able to express and release BDNF, as well as recycle and store this neurotrophin for use in an activity-dependent manner [76,77]. Moreover, BDNF promotes astrocyte proliferation and survival through the truncated form of its receptor, tropomyosin receptor kinase B (TrkB), located on these cells, pointing to the existence of a feed-forward regulatory loop [78]. Astrocytes are known to express BDNF following injury in vivo [79]. It has been shown that astrocyte-derived BDNF may be a source of trophic support that can reverse deficits present following demyelination [80]. It was also observed that, during endogenous recovery from ischemic injury of white matter, astrocytes support the maturation of OPCs by secreting BDNF [81]. Additionally, transgenic mice with downregulated expression of BDNF in GFAP-positive astrocytes that were subjected to ischemic injury exhibited a lower number of newly generated oligodendrocytes and larger white matter damage [81]. However, in our experimental conditions, there was no significant change in BDNF levels (Figure 4B). This may be due to the stimulatory factors used in our research. Glial cell line-derived neurotrophic factor (GDNF) is an important regulator of neuronal growth and differentiation, and its expression is elevated during brain development [82,83]. In the healthy developed brain, neurons are the major source of GDNF; however, during inflammation caused by infection or brain injury, astrocytes, as well as microglial cells, participate in GDNF production [84][85][86][87][88][89]. Elevated production of GDNF in the brain supports the renewal of injured tissues [90]. Through GDNF secretion, astrocytes are able to abolish microglial activation by zymosan A, as shown in midbrain astrocyte cultures collected from four-day-old Wistar rat pups [91]. In our research, we did not observe the production of GDNF by astrocytes in either one- or six-day cultures (except in one donor; data not shown).
In our stimulation model, we used a limited number of cytokines; moreover, we did not use any danger signals, which may be necessary, as tissue damage induces GDNF production. Brambilla et al. reported a role of TNF-α-TNFR1 signaling in the control of GDNF synthesis in spinal cord astrocytes of SOD1 mice [92]. In our study, TNF-α, together with IL-1α and C1q, was not able to induce GDNF release at a detectable level. This might be explained by the functional diversity of astrocyte subtypes, resulting in the absence of the relevant form of the TNF-α receptor (TNFR2 instead of TNFR1), as well as by the participation of other cells and processes in providing signals to astrocytes after TNF-α injection. Surprisingly, we did not observe nerve growth factor (NGF) production in astrocyte cultures in either one- or six-day stimulations at the ELISA detection level (data not shown), although astrocytes are considered major producers of this neurotrophic factor [93]. Metoda et al. described the role of histamine, together with IL-1β, in the induction of NGF production in cortical astrocytes from rats [94]. Another inducer of NGF is IFN-β, which was able to induce a 40-fold increase in NGF mRNA [95]. The lack of NGF production in our study may be the result of the detection assay or, more likely, of the stimulation conditions. Our results suggest that astrocytes do not produce NGF in response to the examined pro- and anti-inflammatory factors.
Conclusions
In our study, we observed differences in the secretory activity of astrocytes after stimulation with selected factors. There were differences not only between the modes of action of pro- and anti-inflammatory molecules but also within these groups. This indicates the complexity of the regulation of astrocytes' functional phenotypes. Moreover, to the best of our knowledge, this is the first study reporting the production of CXCL13 by astrocytes stimulated with proinflammatory factors. As astrocytes may contribute to both pathological and repair processes, it is enormously important to deepen our knowledge of the regulation of their functions, especially in CNS pathologies.
Theory and Practice of VR/AR in K-12 Science Education—A Systematic Review: Effective teaching of science requires not only a broad spectrum of knowledge, but also the ability to attract students' attention and stimulate their interest in learning. Since the beginning of the 21st century, VR/AR have been increasingly used in education to promote student learning and improve motivation. This paper presents the results of a systematic review of 61 empirical studies that used VR/AR to improve K-12 science teaching or learning. Major findings include that there has been a growing number of research projects on VR/AR integration in K-12 science education, but studies pinpointed the technical affordances rather than the deep integration of AR/VR with science subject content. Also, while inquiry-based learning was most frequently adopted in the reviewed studies, students were mainly guided to acquire scientific knowledge, instead of cultivating more advanced cognitive skills, such as critical thinking. Moreover, more low-end technologies were used than high-end ones, demanding more affordable yet advanced solutions. Finally, the use of theoretical frameworks was not only diverse but also inconsistent, indicating a need to ground VR/AR-based science instruction upon solid theoretical paradigms that cater to this particular context.
Augmented Reality/Virtual Reality (AR/VR) Applications and Beliefs
Science education for primary and secondary school students is facing a variety of challenges nowadays. On the one hand, scientific knowledge often contains a large number of abstract and complex concepts [1], which are difficult for children and adolescents to internalize, even with the help of words and 2D images [2,3]. For example, food digestion has been documented as an essential topic in many countries' primary school science curricula [4][5][6], but without vivid animation, it can be overwhelming for students to form an accurate understanding through imagination alone. On the other hand, implementing real scientific experiments is often bounded by practical constraints, such as a lack of materials, the high cost of necessary equipment, safety risks, or geographical distance [7]. To tackle the above challenges, researchers have resorted to computing technologies, which have been suggested to play a crucial role in student learning [8], comprehension of science concepts, and the development of scientific reasoning skills [9,10]. This is especially true for Generation Z, who were born in the digital era and have had technologies permeate virtually every aspect of their lives [11]. The way Gen Z processes information requires educators not only to teach with basic technologies, but to capitalize on the full potential of e-learning 4.0 [12], which is more personalized, data-based, and gamified [13]. For instance, instead of viewing pictures of digestive organs, students may use Google Cardboard to view food digestion in action, and see clearly how food is processed in each organ with the naked eye. Among all advanced computing technologies, virtual reality (VR) and augmented reality (AR) are increasingly capturing educators' and learners' attention. In particular, VR is defined as a real-time graphical simulation in which the user interacts with the system.
Table 1. A comparative analysis of related reviews.
Table 1 compares related reviews along the following dimensions: reference, covered years, research topics, technology type, grade level, and analyzed dimensions. For example, the review in [28] covered 2004-2011 and addressed science learning with AR, with the grade level not specified. The search string required ( ... in subject terms) AND ("science education" or "science teaching" or "science learning" in abstract) AND ("primary school" or "elementary school" or "primary education" or "high school" or "k-12" in abstract). To ensure both quality and accuracy, only peer-reviewed journal papers with full text available were included. This paper establishes the following inclusion and exclusion criteria (Table 2) and reviews each paper to determine whether it is eligible for analysis.
Inclusion criteria: students used VR/AR devices to learn; participants were primary, secondary, or high school students; the subject was science; empirical studies; written in English.
Exclusion criteria: VR/AR treatment not used as an independent variable; preschool children, special education, college students, teachers, and other adult learners; non-science subjects; literature reviews, commentaries, or meta-analyses; written in other languages.
On this basis, the researchers performed the PRISMA review process (Figure 1), including identification, screening, qualification, and analysis. After several rounds of screening, 61 papers meeting the standard were eventually retained (listed in Appendix A) and labeled ID1-ID61 sequentially.
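As an illustration only, the eligibility rules in Table 2 could be expressed as a simple screening filter; the field names and example records below are hypothetical and are not taken from the reviewed corpus.

# Illustrative sketch of applying the Table 2 criteria during screening.
# Field names and example records are hypothetical.
from dataclasses import dataclass

K12_LEVELS = {"primary school", "secondary school", "high school"}

@dataclass
class Record:
    uses_vr_ar_as_treatment: bool   # VR/AR used as an independent variable
    participants: str               # e.g., "primary school", "college"
    subject: str                    # e.g., "science", "history"
    study_type: str                 # "empirical", "review", "commentary", ...
    language: str                   # publication language

def is_eligible(r: Record) -> bool:
    """Return True if a record satisfies all inclusion criteria."""
    return (
        r.uses_vr_ar_as_treatment
        and r.participants in K12_LEVELS
        and r.subject == "science"
        and r.study_type == "empirical"
        and r.language == "English"
    )

# Example: screen a list of candidate records down to the analysis set
candidates = [
    Record(True, "primary school", "science", "empirical", "English"),
    Record(True, "college", "science", "empirical", "English"),
]
included = [r for r in candidates if is_eligible(r)]
print(len(included))  # -> 1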
Coding Scheme
To better understand these studies, seven types of coding scheme were either adapted or developed as follows. (1) Codes for bibliometric analysis. In reference to Zou et al. [34], the bibliometric information may be categorized by publication year, journal, involved discipline, and grade. (2) Codes for theories. Zydney and Warner propose that there are three theoretical types, namely grounded theoretical foundations, cited theoretical foundations, and theoretical foundations not provided [35]. (3) Codes for learning activities. Based on Luo's approach, learning activities can be analyzed from the perspective of learning mode, such as collaborative learning, inquiry-based learning, receptive learning, and so on [36]. (4) Codes for research design. Luo also categorizes research design along six aspects, including the type of research, research method, number of experiments, study length, data collection methods, and data analysis methods [36]. (5) Codes for VR/AR technologies and devices. According to Sun et al. and Chen [19,37], AR/VR technologies may be divided into four types, namely immersive VR, desktop VR, image-based (or tag-based) AR, and location-based AR. Meanwhile, Hwang et al. propose that AR/VR devices refer to the hardware equipment these technologies rely on, such as tablet computers, cameras, desktop computers, smart phones, etc. [38]. (6) Codes for content focus. In reference to Li and Tsai's classification of cognitive goals, we coded the science learning content into six dimensions: scientific knowledge/concept, scientific reading, scientific process, problem solving, scientific thinking, and scientific literacy [39]. It should be noted that scientific literacy is a comprehensive index, which includes the connotation of the first five indicators. (7) Codes for outcomes. Drawing upon Bloom's classification system [40], we coded learning outcomes as one of the following: cognition, affection, and behavior. Meanwhile, from the perspective of effectiveness, the papers were also classified as having a positive, negative, or mixed effect. A positive effect means that the research results confirmed the research hypothesis; a negative effect means that the research hypothesis was refuted; and a mixed effect refers to having a positive effect on some of the variables and a negative or no effect on others.
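To make the coding scheme and the effectiveness labels concrete, the following sketch shows one way they could be operationalized; the category values are abbreviated from the text, and the labeling function is only an illustration of the rule stated above, not the authors' coding instrument.

# Illustrative data structure for the seven coding dimensions (abbreviated)
CODING_SCHEME = {
    "bibliometrics":     ["year", "journal", "discipline", "grade"],
    "theory":            ["grounded", "cited", "not provided"],
    "learning_activity": ["inquiry-based", "receptive", "problem-based",
                          "game-based", "collaborative"],
    "research_design":   ["type", "method", "n_experiments", "length",
                          "data_collection", "data_analysis"],
    "technology":        ["immersive VR", "desktop VR",
                          "image/marker-based AR", "location-based AR"],
    "content_focus":     ["knowledge/concept", "reading", "process",
                          "problem solving", "thinking", "literacy"],
    "outcome":           ["cognition", "affection", "behavior"],
}

def classify_effect(variable_results: dict) -> str:
    """Label a study 'positive' if every measured variable supported the
    hypothesis, 'negative' if none did, and 'mixed' otherwise."""
    supported = list(variable_results.values())
    if all(supported):
        return "positive"
    if not any(supported):
        return "negative"
    return "mixed"

# Hypothetical example: achievement improved, motivation did not
print(classify_effect({"achievement": True, "motivation": False}))  # -> mixed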
Research Trends
The distribution of publications per year is shown in Figure 2. The number of papers published per year remained relatively stable from 2002 to 2018, with no more than four papers in any year. However, it surged to eight in 2019 and 16 in 2020, indicating that more scholars have been paying attention to this field.
The journal distribution is shown in Figure 3. The journals with the most publications were the Journal of Science Education and Technology (9), Computers and Education (7), and the British Journal of Educational Technology (6). Other, less frequently represented journals include the Journal of Educational Technology and Society, the International Journal of Computer-Supported Collaborative Learning, Interactive Learning Environments, and so on.
The cross-distribution by scientific discipline and level of education is shown in Table 3. The subjects were unevenly distributed across disciplines, with most focusing on Physics (23) and Biology (13). As for the participants, 50% were primary school students, 30.6% were junior students, and 19.4% were high school students.
Table 3. Cross-distribution of discipline and level of education (primary | junior | high school | total).
Astronomy: 1 | 1 | 0 | 2
Biology: 10 | 2 | 1 | 13
Chemistry: 0 | 1 | 1 | 2
Environmental Science: 1 | 1 | 2 | 4
Geography: 6 | 3 | 2 | 11
Medical Science: 0 | 2 | 3 | 5
Physics: 13 | 7 | 3 | 23
Physiology: 2 | 2 | 0 | 4
STEM: 1 | 0 | 0 | 1
Science: 2 | 3 | 2 | 7
Total: 36 | 22 | 14 | 72
Note: Some studies involved multiple levels of education or disciplines, so the total number is more than 67.
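The Table 3 tallies and the reported level shares can be reproduced with a few lines of pandas; the counts below are copied from the table, while the script itself is only an illustration, not part of the original analysis.

# Reproducing the Table 3 totals and level shares with pandas (illustration only)
import pandas as pd

table3 = pd.DataFrame(
    {
        "primary": [1, 10, 0, 1, 6, 0, 13, 2, 1, 2],
        "junior":  [1, 2, 1, 1, 3, 2, 7, 2, 0, 3],
        "high":    [0, 1, 1, 2, 2, 3, 3, 0, 0, 2],
    },
    index=["Astronomy", "Biology", "Chemistry", "Environmental Science",
           "Geography", "Medical Science", "Physics", "Physiology",
           "STEM", "Science"],
)
table3["total"] = table3.sum(axis=1)                    # row totals per discipline
level_share = table3[["primary", "junior", "high"]].sum() / table3["total"].sum()
print(table3.sort_values("total", ascending=False).head())
print((level_share * 100).round(1))  # approx. 50 / 30.6 / 19.4, as reported in the text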
Theories
With reference to Zydney and Warner, theories may be coded as one of three types: grounded theoretical foundations, cited theoretical foundations, and theoretical foundations not provided [35].
Grounded Theoretical Foundations
Grounded theoretical foundations refer to the explicit proposal to carry out research under the guidance of a certain theory. Among the 61 papers, 21 (34.4%) clearly indicated the theories they used, as shown in Appendix B. These theories cover a wide range of fields, including pedagogy, psychology, and learning science. This demonstrates that VR/AR research has integrated the latest developments in contemporary pedagogy, psychology, and learning science. Meanwhile, it also shows that a solid understanding of theoretical paradigms is perceived as critical for effective VR/AR instructional design.
Cited Theoretical Foundations
Among the 61 papers, 11 (18%) cited theories to analyze the research results. These theories were not directly applied to the design of VR/AR learning activities. Among the cited theories, constructivism was most frequently used (i.e., ID14, ID22, ID28, ID23, ID42, ID55), indicating that learners' active role and centrality were underlined in these studies. The second most cited theory was Mayer's cognitive theory of multimedia learning. Three papers (ID22, ID9, ID56) cited the contiguity principle of the theory to demonstrate how learning materials designed according to the principle could effectively reduce learners' cognitive load and improve learning performance. The other cited theories include cognitive load theory (ID22, ID56), cooperative learning theory (ID23, ID55), game-based learning theory (ID2), and so on.
Theoretical Foundations Not Provided
Thirty papers (49.2%) did not cite any theory to inform their learning or research design, but did mention certain terms closely related to particular theories. For example, Gnidovec et al. (ID36) studied 13- and 14-year-old students' technology acceptance of AR, which is a construct from the Technology Acceptance Model [41].
Learning Activities
The distribution of learning activities is shown in Appendix C. Among all the learning activities, inquiry-based learning was used the most (34 papers), followed by receptive learning (12 papers), problem-based learning (8 papers), game-based learning (6 papers), and collaborative learning (5 papers). It should be noted that, in experimental research, only the activities of the treatment group were counted, because control-group activities were often not described explicitly. The research using inquiry-based learning enabled learners to understand scientific concepts or phenomena through the operation of, and interaction with, virtual objects with the support of VR/AR. For example, Squire and Jan (ID2) required students to learn about polychlorinated biphenyls and mercury by exploring the cause of death of Ivan in VR games [42]. Sun et al. (ID6) built a VR model to simulate the movement of the Sun, the Moon, and the Earth [19]. Papers that adopted receptive learning used VR/AR to present virtual objects, so that learners could observe scientific objects or phenomena in an intuitive way. For instance, Shim et al. (ID1) developed a VR system called VBRS simulating the iris and pupil of the human eye, through which students could see flowers of various shapes when they shifted between multiple viewpoints by pressing the number keys on the keyboard [43]. Three papers integrated collaborative learning while also adopting inquiry-based learning; that is to say, learners inquired about certain objects or phenomena in collaborative ways. For instance, Chiang et al.
(ID10) used location-based AR to assist students' investigation of the ecological environment of the pond near their school [44], and Fidan and Tuncel (ID23) developed an AR-based application that used sound and animation to create an inspiring atmosphere [1]. There was one paper each on flipped learning, topic-based learning, and design-based learning, as shown in Appendix C.
Research Designs
The research methods were examined in terms of six aspects, and the statistical results are shown in Table 4. First, the number of experimental studies (47 papers) was far greater than that of investigation studies (14 papers). Second, the majority of studies employed a quantitative design (31 papers) or a mixed-methods design (26 papers). Third, most studies (24 papers) used VR/AR for teaching within 0-3 hours, as compared to over three hours, and 25 papers reported teaching with AR/VR for only one class session. Furthermore, questionnaires (44 papers) and knowledge tests (32 papers) were the major data collection methods. Finally, the t-test was the most frequently adopted statistical measure (34 papers).
Technologies and Devices
In terms of technologies, four types of VR/AR were identified (see Figure 4), including immersive VR, desktop VR, image- or marker-based AR, and location-based AR [28]. The immersive VR system surrounds the user with a 360-degree virtual environment; the desktop VR system is displayed to the user on a conventional computer monitor, where a 3-D perspective display technology projects 3-D objects onto the 2-D plane of the computer screen [19]. Specifically, seven of the 61 papers (ID6, ID16, ID27, ID43, ID48, ID51, ID56) used immersive VR; 14 papers (ID3, ID20, ID4, ID1, ID5, ID24, ID41, ID42, ID49, ID52, ID55, ID59, ID60, ID61) used desktop VR; seven papers (ID9, ID10, ID21, ID30, ID46, ID53, ID54) used location-based AR; 31 papers used image- or marker-based AR; and two papers (ID14, ID18) used two kinds of VR or AR at the same time. As for the devices or hardware equipment, they may be categorized as shown in Figure 5. Tablet PCs (24 papers), desktop PCs (18), and smart phones (14) were the most frequently used devices, whereas devices like the puzzle set were least employed.
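Since the independent-samples t-test was the most frequently reported statistical measure in the Research Designs summary above, a minimal, self-contained illustration is given here; the knowledge-test scores are invented placeholders and do not come from any reviewed study.

# Welch's independent-samples t-test on hypothetical post-test knowledge scores
from scipy import stats

vr_ar_group = [78, 85, 90, 72, 88, 81, 79, 93]   # hypothetical treatment scores
control     = [70, 75, 82, 68, 77, 73, 80, 71]   # hypothetical control scores

t_stat, p_val = stats.ttest_ind(vr_ar_group, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")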
Content Focus
The first type of content was scientific knowledge/concepts, which was also the most frequently targeted. Specifically, in 47 out of 61 papers, researchers used VR/AR technology to help learners understand scientific knowledge and concepts. For example, Wrzesien (ID5) used an immersive, interactive virtual water world called E-Junior to let learners play the role of Mediterranean residents or fish in the sea, participating in daily activities of the Mediterranean and learning the concepts and knowledge of marine ecology through exploration in the virtual world [20]. The second type of content was science reading; there was one paper on AR technology that supported scientific reading. In the research of Lai et al. (ID25), students used mobile devices equipped with an AR science learning system to scan their textbooks, and the relevant pictures would immediately and dynamically appear above them [45]. The experimental results showed that, compared with the traditional multimedia science learning method, the treatment significantly improved the students' academic performance and learning motivation, and also significantly reduced their perception of extraneous cognitive load in the learning process. The third type of content was the scientific process, and two papers (ID15, ID61) focused on this. For example, Hsu et al. (ID15) used AR technology to build a surgical simulator to train students in performing laparoscopic surgery and cardiac catheterization [46]. They found that students had positive perceptions of, and a high level of participation in, the AR courses and simulators, and that their interest in learning greatly increased. The fourth type was problem solving (9 papers). For example, Kyza and Georgiou (ID21) used an AR application called TraceReaders, which allowed learners to write location-based AR applications for outdoor survey learning [47]. Three papers (ID2, ID37, ID44) embodied the fifth type of content, scientific thinking. For example, Chang et al. (ID37), with the support of mobile AR, aided students in contemplating the dilemma of building nuclear power plants versus using coal-fired power plants in virtual cities [48]. It was found that students' prior knowledge and beliefs had a certain impact on their ability to participate in learning and reasoning. Finally, there were also two papers (ID34, ID50) focusing on acquiring scientific literacy. Scientific literacy is the comprehensive embodiment of scientific knowledge, scientific thinking, and scientific ability [49]. Wahyu et al. (ID34) found that mobile AR-assisted STEM learning improved students' scientific literacy significantly more than traditional learning methods [49].
Outcomes
The learning outcomes of the 61 papers were classified according to Bloom's classification of instructional objectives [40]. As shown in Figure 6, 46 papers set cognitive goals, and six of them reported mixed effects; 40 papers set affective goals, and five of them reported mixed effects.
Three papers aimed to improve behaviors, and all of them reported positive effects. However, some studies (6 papers: ID3, ID5, ID33, ID38, ID49, ID52) concluded that VR/AR technology was no more effective than non-VR/AR technology in improving students' performance. For example, Chen et al. (ID3) developed an Earth VR motion system to help students understand the changes of day and night and the four seasons caused by the Earth's rotation. The researchers conducted a pre-test and a post-test on the students, and the scores of most post-test items were higher than those of the pre-test. However, for the questions about the rotation of the Earth, the students' post-test score was significantly lower than the pre-test score because the system did not provide sufficient information about the Earth's rotation [14]. Similarly, Wrzesien et al. (ID5) concluded that there was no significant difference in academic performance between the experimental group using VR technology and the control group using traditional learning methods [20]. A possible cause could be that the attraction of the virtual environment diverted students' attention; they were more interested in operating virtual objects than in the scientific concepts themselves. Wang (ID33) found that students who used e-Book learning materials had higher scores than students who used AR learning materials, although the difference was not statistically significant [51]. It may be inferred that well-designed AR content can limit students' thinking, because some students preferred studying directly based on the guidance of the AR content as soon as they received the materials, and completed the tasks without thinking. E-books do not provide very detailed demonstration information, but the text information guided by graphics makes learners think first and then work. Chen (ID38) compared a game-based method with AR-supported learning and found no significant difference between the two methods in improving academic performance [37]. This may be because several methods used in the experiment provided immediate reflection prompts when students submitted wrong answers, while the AR condition did not give full play to its advantages in multimedia learning. In addition, some studies reported negative results in dimensions such as motivation (1 paper: ID32), technology acceptance (1 paper: ID46), self-efficacy (1 paper: ID48), satisfaction (1 paper: ID55), and expectation (1 paper: ID56). This could be because the use of VR/AR was too complex to operate appropriately and effectively, or because insufficient information was provided, which could have resulted in learning difficulty. For example, Lu et al. (ID32) found that the experimental group using AR had lower learning motivation than the control group without AR.
The authors believed that the main reason was that learners were unfamiliar with the materials and equipment, which posed certain learning challenges [55]. Lo et al. (ID46) found that the perceived usefulness of AR was correlated with age; that is, older students tended to think that the AR applications were not very useful. The authors hypothesized that the older the students were, the harder it was for them to follow the teacher's instructions, or the more difficult it was for them to learn [56]. Shin (ID55) found that learners did not enjoy the experience of desktop VR because it did not generate a strong sense of immersion [57].
Behavioral Goals
Three papers (ID11, ID54, ID57) focused on realizing behavioral goals. These studies reached the consistent conclusion that the use of VR/AR could improve students' learning behavior. For example, Yoon and Wang (ID11) compared the time spent interacting with devices and the degree of team cooperation between AR users and non-AR users. The former's time interacting with devices was significantly higher than that of the latter, while team cooperation showed the opposite pattern. This indicates that AR devices improved participation in learning, but also affected cooperation between teams to some degree [58].
Trends in the Integration of VR/AR in K-12 Science Education
First of all, there is a growing number of studies on the integration of VR/AR in K-12 science education, indicating researchers' and practitioners' interest in using VR/AR to enhance science learning. For instance, 20 out of 60 papers were published in the last two years. Despite this, the majority of studies were published in generic educational technology journals, such as Computers and Education and Educational Technology and Society, which accounted for 85% of all reviewed papers. In contrast, only a few domain-specific science education journals (i.e., the Journal of Science Education and Technology) published such studies. This may be because, for most K-12 science teachers, VR/AR is an emerging technology that seems novel and inaccessible, and its effects on students are still ambiguous, without conclusive findings or universal instructional design models [59,60]. Therefore, future research should pay more attention to the exemplary integration of VR/AR into the teaching of specific science topics, foster deep integration, and enumerate the particular effects of VR/AR applications on students' learning outcomes, so that science teachers become more receptive to VR/AR use. Secondly, the theories involved appeared very diverse. On the one hand, this diversity demonstrates VR/AR's capacity to accommodate a multitude of theories; on the other hand, it also indicates the lack of an over-arching theoretical paradigm that could guide AR/VR-based science instructional design. Such a paradigm would not be possible without the collaborative effort of learning scientists, science teaching experts, instructional designers, and VR/AR specialists; the absence of any of these stakeholders may lead to an ineffective design framework. It should also be noted that 45% of the reviewed papers did not cite any theory, which could lead to unsubstantiated interpretation of the obtained results. Thirdly, inquiry-based learning was the most frequently adopted learning model (87.5%) among the reviewed studies, which is consistent with previous findings that inquiry-based learning is one of the most commonly used learning models [28][29][30][31].
Nevertheless, this learning model was not always aligned with the measured learning outcomes in the reviewed studies. That is, although students indeed used VR/AR devices, teachers did not necessarily capitalize on the benefits of inquiry-based learning beyond providing students with immersive or lifelike experiences. Previous studies have shown that inquiry-based learning without sufficient guidance is not significantly better than traditional textbook teaching [61]. Thus, it must be cautioned that there is a fine line between inquiry-based learning and simply asking students to explore or view a VR/AR object or environment. For example, Salmi et al. (ID19) developed a mobile AR application that enabled students to explore the different reactions between a number of atoms and molecules, within which students only needed to interact with the AR system to view the structures of atoms and molecules; this could hardly be deemed inquiry-based learning [62]. Fourthly, in terms of research methods, there were more quantitative studies (50.8%) than qualitative or mixed-method studies (42.6%), and more experimental designs (77%) than investigation designs (23%). The emphasis on experimental studies could be because experimental studies are more welcomed than investigative studies in nearly all academic journals, owing to their more advanced statistical analyses and illustrations. Meanwhile, experimental studies help teachers make more immediate and precise adjustments to their existing science teaching, such as integrating a certain VR/AR software package or device. On the other hand, investigation studies are more suitable for understanding students' perceptions, attitudes, or satisfaction toward generic VR/AR technologies, the results of which may not be directly applicable to specific instructional design or adaptation. Last but not least, a variety of VR/AR technologies were employed, such as location-based AR, image- or marker-based AR, immersive VR, and desktop VR, but the proportion of advanced VR/AR technologies was very low. This is in direct contrast to Pellas, Dengel and Christopoulos's finding that 60% of the studies used high-end immersive devices, while nearly 30% used low-end solutions [36]. One major reason could be that school teachers were unlikely to purchase higher-end technologies for an experiment's sake without the school's financial support. Moreover, considering K-12 students' cognitive ability and psycho-motor skills, it is not only appropriate but also safe for them to use less advanced and less expensive devices, so as to avoid the risks of under-utilization or damage. In other words, to increase the diffusion of AR/VR use in K-12 science education, there is a need to develop more affordable and portable devices that can be easily operated, so that both science teachers and students can utilize them effectively and efficiently. Also, given that only four papers (ID8, ID11, ID17, ID34, accounting for 10%) focused on learning with AR/VR in informal environments, it may be suggested that VR/AR technologies that can be easily transported from one place to another be developed, so that students can learn with such technologies seamlessly in and out of class. For instance, students who were instructed to observe planets with VR/AR devices in class may continue to learn this topic at home by using both VR/AR technologies and their personal telescope.
Issues in the Integration of VR/AR in K-12 Science Education
Despite its apparent advantages, VR/AR also has its limitations and issues. The first type of issue reflected in previous studies is technical: either the inherent limitations of VR/AR technologies or the associated technological glitches, such as a lack of mobility and inconvenience of use, especially for immersive VR. For example, HMDs, trackers, and other VR-related utilities like the Cave Automatic Virtual Environment could often cause such difficulties [14]. The second type of issue is pedagogical. Teachers who use VR/AR to teach science may have problems using it effectively and efficiently, including identifying the most suitable resources, designing the most appropriate activities, or conducting the most precise assessments. For instance, VR/AR has been reported to be distracting and visually overloading. Wrzesien and Raya (ID5) found that there was no significant difference between the results of the experimental group using virtual devices and the control group without virtual devices; learners easily got lost in the virtual environment, and a lack of sufficient learning information was the main reason for this phenomenon [20]. Teachers thus are obligated to sift through various VR/AR resources and identify those that are age-appropriate, visually comfortable, and mentally congruent. Also, as Charsky and Ressler (2011) point out, a lack of teaching methods and objectives can make students confused and discouraged, and even increase their cognitive overload and reduce their learning motivation [63]. Some studies noted the limitations of VR/AR technology and sought to overcome them with supplementary activities. For example, Yoon et al. (ID8) used knowledge prompts, a bank of peer ideas, work in collaborative groups, instructions for generating consensus, and student response forms for recording shared understanding [64]. These scaffolds could promote collaboration within the peer groups by encouraging students to discuss their observations and reflections on their experience. Another pedagogical issue lies in the comprehensive and accurate evaluation of student learning outcomes. For instance, students' cognitive and affective outcomes were mainly measured, whereas behavioral change was less emphasized. The third type of issue can be categorized as social. For instance, the price of VR/AR devices is considered a social issue, rather than a technical one, because the price is not solely determined by technical complexity or sophistication, but also by its relative novelty among other technologies as well as the income level of its targeted consumers. Meanwhile, whether teachers can integrate VR/AR into science teaching depends greatly upon social perceptions of such technologies, as well as their school support, both of which constitute the context for our topic. For instance, according to Chih et al., not all schools were willing to pay a high price for virtual display devices and real-world devices [14]. There are also several research issues. In terms of research length, about 62% of the studies observed usage for less than 10 h. Under such circumstances, confounding factors like the novelty effect could hardly be eliminated.
Also, while a multitude of variables were examined, including scientific reading, scientific process, scientific problem solving, scientific literacy, and so on, most studies still focused on low-level cognition assessed through knowledge tests; high-level thinking ability has not received adequate attention. According to Bloom's goal classification, memory, understanding, and application correspond to low-order thinking abilities, whereas analysis, evaluation, and creation belong to high-order thinking abilities [65]. Academic research shows that an "infusion" mode is usually used to cultivate high-level thinking ability in science learning; that is, the learning of thinking skills is integrated with the learning of the science curriculum. In this mode, students are fully involved in thinking practice, focusing on the learning process and the understanding of meaning. After solving certain challenging problems, high-level thinking skills can be developed [65]. However, this emphasis on higher-order thinking has been absent in most of the studies reviewed in this paper. This is consistent with previous research showing that the application of VR/AR in science education mainly focuses on the understanding of scientific concepts and phenomena [28,29]. For example, 85% of the studies focused on students' mastery of scientific knowledge or concepts, without mentioning critical thinking, social reasoning ability, innovation tendency, or other high-level thinking abilities. Moreover, the data analysis methods relied mostly on the t-test (55.7%), which is insufficient for analyzing more complex relationships or phenomena.
Implications and Recommendations
Based on the issues identified above, we offer the following suggestions for both the theoretical advancement and the practical improvement of VR/AR integration in K-12 science education. For science teachers, it is paramount to be familiar with both psychological and pedagogical theories, so that VR/AR-based activities can effectively and efficiently promote students' learning interest as well as achievement. They should also be very selective in choosing the most appropriate and authoritative VR/AR apps or resources, so as not only to meet students' learning needs but also to avoid foreseeable technical glitches. When designing learning activities, it is essential for teachers to target more advanced skills, such as critical thinking, in order to cultivate students' inquiry-based mindset. Moreover, with knowledge of trending VR/AR practices, teachers may embrace more learning models, such as collaborative learning and project-based learning, in their science instruction. Researchers, on the other hand, are encouraged to conduct more mixed-method studies, which offer a comprehensive and profound understanding of students' experiences and changes in cognition, affect, and behavioral skills. They may also include teachers as research participants, instead of focusing on students only, so that barriers in teachers' intention or proficiency regarding VR/AR integration can be identified and addressed at an early stage. When possible, studies that last longer and have repeated trials are strongly recommended. Longer interventions with repeated evaluation could help solidify the benefits of VR/AR-based science instruction and boost teachers' confidence in its exemplary uses.
Finally, technical experts and software engineers may be prompted to develop more affordable, portable, and personalized subject-specific VR/AR technologies, and to program more science-related immersive VR/AR environments that cater to different grade levels' needs. For instance, lower-grade students may use AR/VR to gain new experiences and direct observation, while higher-grade students may use it to foster the ability to analyze, evaluate, and even create.
Limitations
The current research also has its limitations. For example, the review was very selective, meaning that we intentionally chose journal articles from renowned databases only, to ensure quality, rather than also including conference papers or theses. Another limitation is that citation and reference network analyses were not included, in order to keep the paper focused and tightly scoped. Future research aiming to conduct a more comprehensive review could enlarge the scope and utilize knowledge-mapping software to illustrate trends and research hot spots with sophisticated displays.
Conclusions
VR/AR is advantageous in K-12 science education [18]. The purpose of this paper was to examine the theoretical and practical trends and issues in existing research on VR/AR applications in K-12 science education between 2000 and 2021, including the publication data, adopted theories, research methods, and technical infrastructure. It was found that there has been a growing number of research projects on VR/AR integration in K-12 science education, but studies pinpointed technical issues rather than the deep integration of AR/VR with science subject content. Also, while inquiry-based learning was most frequently adopted in the reviewed studies, students were mainly guided to acquire scientific knowledge, instead of cultivating more advanced cognitive skills, such as critical thinking. Moreover, more low-end technologies were used than high-end ones, demanding more affordable yet advanced solutions. In terms of research methods, quantitative studies with students as the sole subjects were mainly conducted, calling for more mixed-method studies targeting both teachers and students. Finally, the use of theoretical frameworks was not only diverse but also inconsistent, indicating a need to ground VR/AR-based science instruction upon solid theoretical paradigms that cater to this particular domain.
Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available because they were written mostly in Chinese.
Conflicts of Interest: The authors declare no conflict of interest.
Appendix A
Reviewed studies (ID [reference] | title | theoretical foundation | learning activity):
An AR-based mobile learning system to improve students' learning achievements and motivations in natural science inquiry activities | Cited Mayer's multimedia design theory and used the language related to inquiry learning theory but did not cite it | Inquiry-based learning
ID10 [44] | Students' online interactive patterns in AR-based inquiry activities | Grounded on knowledge construction theory | Inquiry-based learning and collaborative learning
ID11 [58] | Making the invisible visible in science museums through AR devices | Not provided | Inquiry-based learning
ID12 [68] | Employing Augmented-Reality-Embedded instruction to disperse the imparities of individual differences in earth science learning | Used the language related to learning style theory but did not cite it | Inquiry-based learning
ID13 [69] | Constructing liminal blends in a collaborative augmented-reality learning environment | Grounded on distributed cognition theory | Inquiry-based learning and collaborative learning
ID14 [70] | Enhancing learning and engagement through embodied interaction within a mixed reality simulation | Grounded on embodied learning theory; cited constructivism theory; used the language related to learning attitude theory, self-efficacy theory, and learning participation theory but did not cite them | Inquiry-based learning
ID15 [46] | Impact of AR lessons on students' STEM interest | Used the language related to learning motivation theory but did not cite it | Inquiry-based learning
ID16 [71] | An augmented-reality-based concept map to support mobile learning for science | Used the language related to learning motivation theory and learning attitude theory but did not cite them | Inquiry-based learning and receptive learning
ID17 [72] | How AR enables conceptual understanding of challenging science content | Not provided | Receptive learning
ID18 [45] | The influences of the 2-D image-based AR and VR on student learning | Grounded on cognitive load theory; used the language related to technology acceptance but did not cite it | Inquiry-based learning
Impacts of an AR-based flipped learning guiding approach on students' scientific project performance and perceptions | Used the language related to critical thinking theory, group self-efficacy theory, learning motivation theory, and psychological load theory but did not cite them | Flipped learning
ID21 [47] | Scaffolding AR inquiry learning: The design and investigation of the TraceReaders location-based AR platform | Grounded on the theory of experiential learning and used the language related to the theory of inquiry learning but did not cite it | Inquiry-based learning
ID22 [73] | Impacts of integrating the repertory grid into an AR-based learning design on students' learning achievements, cognitive load and degree of satisfaction | Grounded on situated learning theory; cited constructivism theory, cognitive load theory, and the cognitive theory of multimedia learning | Receptive learning
ID23 [1] | Integrating AR into problem based learning: The effects on learning achievement and attitude in physics education | Grounded on situational learning theory; cited constructivism theory, cooperative learning theory, self-guidance theory, and situational learning theory; used the language related to learning attitude theory but did not cite it | Problem-based learning
ID24 [74] | Applying VR technology to geoscience classrooms | Not provided | Problem-based learning
ID25 [45] | An AR-based learning approach to enhancing students' science reading performances from the perspective of the cognitive load theory | Grounded on the cognitive theory of multimedia learning and cognitive load theory; used the language related to learning motivation theory but did not cite it | Problem-based learning
ID26 [21] | A usability and acceptance evaluation of the use of AR for learning atoms and molecules reaction by primary school female students in Palestine | Not provided | Receptive learning
ID27 [75] | The effect of the AR applications in science class on students' cognitive and affective learning | Used the language related to learning motivation theory, learning interest theory, and meaningful learning theory but did not cite them | Receptive learning
The effect of using VR in 6th grade science course the cell topic on students' academic achievements and attitudes towards the course | Cited Piaget's learning theory and used the language related to learning motivation theory but did not cite it | Receptive learning
ID29 [76] | The effect of AR technology on middle school students' achievements and attitudes towards science education | Used the language related to learning motivation theory but did not cite it | Topic-based learning
ID30 [54] | Integration of the peer assessment approach with a VR design system for learning earth science | Used the language related to learning motivation theory, critical thinking theory, creative ability theory, and cognitive load theory but did not cite it | Design-based learning
ID31 [77] | Students' motivational beliefs and strategies, perceived immersion and attitudes towards science learning with immersive VR: A partial least squares analysis | Used the language related to motivation theory, self-regulation theory, and learning attitude theory but did not cite them | Inquiry-based learning
ID32 [55] | Evaluation of AR embedded physical puzzle game on students' learning achievement and motivation on elementary natural science | Used the language related to learning motivation theory but did not cite it | Game-based inquiry learning
ID33 [51] | Integrating games, e-Books and AR techniques to support project-based science learning | Used the language related to learning motivation theory but did not cite it | Inquiry-based learning
ID34 [49] | The effectiveness of mobile AR assisted stem-based learning on scientific literacy and students' achievement | Not provided | Inquiry-based learning
ID35 [78] | Using AR to teach fifth grade students about electrical circuits | Used the language related to learning attitude theory but did not cite it | Receptive learning
ID36 [41] | Using AR and the Structure-Behavior-Function Model to teach lower secondary school students about the human circulatory system | Used the language related to technology acceptance but did not cite it | Receptive learning
ID37 [48] | Students' context-specific epistemic justifications, prior knowledge, engagement, and socioscientific reasoning in a mobile AR learning environment | Used the language related to situational cognition theory and learning engagement theory but did not cite them | Inquiry-based learning
ID38 [37] | Impacts of AR and a digital game on students' science learning with reflection prompts in multimedia learning | Used the language related to situational learning theory but did not cite it | Inquiry-based learning
ID39 [79] | Use of mixed reality applications in teaching of science | Used the language related to learning motivation theory and learning attitude theory but did not cite them | Receptive learning
ID40 [50] | Perceived learning in VR and animation-based learning environments: A case of the understanding our body topic | Used the language related to constructing knowledge and so on but did not cite it | Receptive learning
The impact of internet virtual physics laboratory instruction on the achievement in physics, science process skills and computer attitudes of 10th-grade students | Grounded on cognitive and social constructivism theory | Problem-based learning
Appendix B
Table A2. List of grounded theoretical foundations cited in reviewed studies (theory | application scenarios).
Multiple intelligences theory | Students were asked to explore in the virtual environment, so their multiple senses were stimulated, and their ability to establish intellectual and emotional connections with their own world was enhanced (ID4). The researchers attempted to stimulate primary school students' musical, bodily-kinesthetic, spatial, interpersonal, and intrapersonal intelligence in a VR environment (ID5).
The theory of leisure | A Serious Virtual World was constructed with VR, which enabled primary school students to find their potential and skills in a leisure environment, compare themselves with other players in the game, and learn in a cooperative game (ID5).
Knowledge construction theory | Learning scaffolds, such as knowledge prompts and a peer-thinking database, were designed to support grade 6-8 students' knowledge construction in an AR environment (ID8). A location-based mobile AR system was developed to help learners construct knowledge through discussing problems and sharing knowledge.
Multimedia learning theory | An AR-based science learning system was developed based on the contiguity principle of multimedia learning theory, which was used by students to interact with textbooks (ID25). Based on the interactivity principle, students set up the AR experiment and observed the results (ID35). According to the multimedia learning theory, an AR game was designed to test its learning efficiency (ID38).
The theory of experiential learning | Students collected virtual elements to mimic a real experience (ID5). The principles of experience continuum and interaction from experiential learning theory were used to design primary school students' learning activities of visiting outdoor spaces, motivate them to learn, and exert a positive impact on their cognitive and emotional outcomes (ID21). The researchers developed an AR system that allowed the learner's body to move freely in a multimodal learning environment to enhance embodied learning (ID58).
Situated learning theory | An AR-based learning system called Mindtool was designed, which enabled fourth-grade students to explore concepts or solve problems (ID22). An AR environment was used to create heuristic problem situations, so that students aged 12 to 14 could learn through PBL (ID23). A health education board game applying AR was developed; this game included eight topics, such as health check, hospital, and ambulance, helping students learn health knowledge in a realistic situational environment (ID47).
Theory of immersion | A research model to understand the learning perception of immersion was proposed, which tested the learning characteristics and evaluated the immersion variables through the individual's motivational beliefs and strategies (ID31).
Technology acceptance theory | TAM theory was used to study users' adoption patterns from the perspective of perceived usefulness and perceived ease of use, and a blueprint for the research to be explored was constructed (ID46).
Theory of inquiry learning | A VR system named Multi-User Virtual Environments was developed to enable multiple simultaneous participants to enact collaborative learning activities of various types (ID49).
Piaget's cognitive theory | Research questions were put forward according to Piaget's cognitive theory, and the Inventory of Piaget's Developmental Tasks was used in the study for learners to complete (ID51).
Theory of collaborative learning | The learning activity was designed according to the theory of collaborative learning, including three parts: (1) a new mixed-reality learning scenario, (2) a student participation framework, and (3) a curriculum (ID57).
Other theories | Lin et al. (ID47) used five theories to design their AR health education board game. In addition to the situational learning theory mentioned above, the other four theories are scaffolding theory, dual-coding theory, over-learning theory, and competition-based learning theory. In their AR health education board game, users needed to use the developed app to scan the question card on the inspection report. Guidance and correct answers were provided on the back of the question card (scaffolding theory). Pictures and text were added to the question card as study aids (dual-coding theory). To answer the questions correctly, users needed to practice repeatedly (over-learning theory), and a competition mechanism was used by the game to enhance learners' motivation (ID47).
Appendix C
Table A3. List of learning activities in reviewed studies.
Inquirybased learning ID2, ID3, ID4, ID5, ID6, ID7, D8, ID9, ID10, ID11, ID12, ID13, ID14, ID15, ID16, ID18, ID21, ID31, ID32, ID33, ID34, ID37, ID38, ID41, ID42, ID43, ID46, ID48, ID49, ID51, ID52, ID56, ID57, ID58 Learners interacted with virtual environment or virtual objects created by VR/AR, and learned scientific knowledge and scientific concepts or phenomena by exploring. ID1, ID16, ID17, D19, ID22, ID26, ID27, ID28, ID35, ID36, ID39, ID40 VR/AR could help learners better understand scientific concepts and phenomena by visualizing invisible things, simplifying complex things, concretizing abstract things, and combining real-world learning objectives with digital content. Problem-based learning ID23, ID24, ID25, ID41, ID44, ID50, ID60, ID61 Researchers used the environment created by VR/AR as the basis for raising problems and the source of materials for solving problems. Game-based learning ID45, ID47, ID48, ID53, ID54, ID59 Learning activities were carried out in the form of games. Learners used scientific knowledge to solve problems through interaction with the environment or other learners. The main types of games are story game (ID45), health education board game (ID47), role playing games (ID48 and ID59), and collaborative role playing game (ID53 and ID54). Flipped learning ID20 Learners used AR-based flipped learning system, to watch videos in advance, finishing homework, and discussing in class. Topic-based learning ID29, ID55 The researchers developed an AR-based activity manual with 32 learning activities. In the experimental group, teachers used these activity manuals for theme teaching, and students completed learning activities according to the content of the manual. (ID29) The learning content was organized according to different topics, which indicated learning subjects of earth science education. (ID55) Design based learning ID30 Researchers developed a peer assessment approach and incorporated it into VR design activities, in which students designed their own VR projects to raise environmental awareness and cultivate earth science knowledge.
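For readers who want to work with the coding in Table A3 programmatically, the following is a minimal sketch that tallies studies per learning-activity category. The dictionary reproduces only two of the shorter categories from the table as an example; the data structure itself is an illustrative assumption, not part of the review's methodology.

```python
# Minimal sketch: tally reviewed studies per learning-activity category.
# Only two of the shorter categories from Table A3 are reproduced here; the
# structure is illustrative, not part of the review itself.
from collections import OrderedDict

activity_to_ids = OrderedDict([
    ("Game-based learning", ["ID45", "ID47", "ID48", "ID53", "ID54", "ID59"]),
    ("Design-based learning", ["ID30"]),
])

for activity, ids in activity_to_ids.items():
    print(f"{activity}: {len(ids)} studies ({', '.join(ids)})")
```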
Legal Aspects of Intellectual Property Rights in Accreditation Instruments of Study Program Performance Reports
The purpose of this study is to analyze the legal aspects of intellectual property rights (IPR) related to Study Program Performance Reports (LKPS) as one of the aspects of assessment in the study program accreditation instrument stipulated in PERBAN-PT No. 2 of 2019. For the outcomes of research and community service (PKM), the LKPS requires a letter of determination in the form of a decree or certificate issued by the Ministry of Law and Human Rights. The research method is normative legal research using a legislative and an economic approach. The results show that each type of IPR has its own characteristics, so LKPS assessments related to intellectual property cannot treat all types alike. For copyright, the LKPS instrument does not need a determination by the Ministry of Law and Human Rights. For patents, not all registered (granted) patents that have been determined by the Ministry of Law and Human Rights can be used in the assessment; aspects of cost and commercialization must also be considered. For industrial design, the terminology must refer to "industrial design", not "industrial product design". The recommendations of the study are: first, for copyright, the instrument should define definitively which scope, if any, requires a letter of determination from the Ministry of Law and Human Rights; second, patent appraisal should not be limited to the determination but should also cover cost and commercialization; third, the terminology of industrial design and the scope of its assessment should be made more explicit. In general, the study recommends that the LKPS instrument be reviewed (reformulated).
INTRODUCTION
The preamble of the 1945 Constitution of the Republic of Indonesia clearly states that one of its objectives is "to educate the life of the nation". The effort to realize this goal as a great nation is to put education first [1], as education is a fundamental indicator of the progress of a country. It is believed that education empowers human resources, which leads to great ideas, thoughts, innovations, and creativity in various fields of science and technology; education should therefore receive serious attention from the government. Article 31 paragraph (3) of the 1945 Constitution of the Republic of Indonesia states that "the government strives and establishes a national education system, which enhances faith, piety, and noble character in order to educate the nation's life, as it is regulated by law". The national education system applied in Indonesia consists of primary education, secondary education, and higher education. Higher education, as the extension of secondary education, covers several levels, namely diploma, bachelor, master, specialist, and doctoral programs, and is conducted by universities, institutes, colleges, and polytechnics. The consideration of Law Number 12 of 2012 on Higher Education, point (c), states that to improve the competitiveness of the nation in facing globalization in all sectors, higher education is needed to develop science and technology as well as to produce intellectuals, scientists, and/or professionals who are cultured, creative, tolerant, democratic, and of strong character, and who have the courage to defend the truth for the benefit of the nation.
The government's strategy to improve the national education system requires the implementation of national education standards. In line with consideration point (c) of the Law of Higher Education, the government has the authority to establish the national standard for higher education (SNPT), covering areas such as education, research, and community service. At a more technical level, the government has issued the provisions of the SNPT in the Ministerial Regulation of Research, Technology and Higher Education Number 44 of 2015 on National Higher Education Standards, as amended by Ministerial Regulation of Research, Technology and Higher Education Number 50 of 2018 (Permenristek SNPT). The SNPT as stipulated in the Permenristek SNPT sets out the provisions related to the requirement of quality assurance. The quality assurance system consists of an internal quality assurance system (SPMI) and an external quality assurance system (SPME). SPMI is carried out by the internal quality assurance unit, while SPME is carried out by a special agency outside the university (external quality assurance), which is commonly known as accreditation. As stated in the socialization of the 2011 Accreditation, it is "… a process of external quality review used by higher education to scrutinize colleges, universities and higher education programs for quality assurance and quality improvement". Accreditation can be interpreted as a guarantee and improvement of the quality of study programs or higher education institutions. Both SPMI and SPME are essential parts of the quality assurance system supporting the achievement of a study program at a university. Accreditation is an activity to measure conformity with the SNPT as determined in the Law of Higher Education and other relevant laws and regulations; the government has issued a policy under which accreditation is conducted by the National Accreditation Board of Higher Education (BAN-PT) assisted by the Independent Accreditation Institution (LAM). One of the authorities possessed by BAN-PT is to determine the accreditation instruments. The provision of research/PKM outputs in the form of IPR, as evidenced by a decree from the authorized ministries, has become one of the assessment benchmarks for study program accreditation, so study programs strive to meet this requirement in order to gain accreditation scores. One of the strategies carried out by study programs is to register the IPR arising from research/PKM outputs with the Directorate General of Intellectual Property (DJKI), Kemenkumham. There is nothing wrong with registering intellectual property in order to obtain IPR (legal protection). It should be noted, however, that IPRs come in several types with characteristics that are not the same as each other. Other aspects of concern are economic (effectiveness and efficiency), since IPR is not necessarily and automatically obtained at the time of registration, and certain types of IPR require fees that are considerably high. Related research was conducted by Melany, focusing on database modeling for the Study Program Performance Report (LKPS) information system based on the Study Program Accreditation Instrument (IAPS 4.0) [2]. Based on the results of that research, the relational database design facilitates the management, provision, and maintenance of accreditation data and can be part of the development of a study program performance report information system.
Other research has been carried out by Layang Sardana, whose focus is the legal protection of intellectual property rights over research results produced by lecturers [3]. These results can take the form of works in the fields of technology, science, art, and literature. The law must be able to protect intellectual work so that it can develop the creative power of the community, which ultimately leads to successful protection of intellectual property rights. I Made Dwi Ardiada has also conducted similar research, focusing on an intellectual property management information system using the Symfony Framework [4]. Based on the results of that study, the Symfony Framework is one of the best frameworks for complex enterprise-level applications and can quickly and efficiently enrich an institution's information systems. The focus of the present research, by contrast, is on the legal aspects of intellectual property rights in the LKPS as an accreditation instrument. Based on this background, the research problem is how the legal and economic aspects of IPR relate to the accreditation of study programs as determined by Kemenkumham.
METHODS
The type of research used by the authors in this study is juridical-normative research. The research approach is a statutory and conceptual approach. The juridical-normative research method is research whose objects are statutory regulations and library materials.
Education, Higher Education, and the National Standard of Higher Education in Indonesia
Education is an essential element of human life: from the time in the womb to old age, people experience the process of education. Education is a light that guides humans in determining the direction, purpose, and meaning of life [5]. Law Number 20 of 2003 on the National Education System states that education is a conscious and planned effort to create a learning atmosphere and process so that learners actively develop their potential to have religious and spiritual strength, self-control, personality, intelligence, noble character, and the skills needed by themselves, the community, the nation, and the country. The Ministry of Research, Technology and Higher Education [6], as the representation of the government in the area of higher education, has a vision and mission, one of which is to produce quality human resources. Regarding the quality of higher education, Kemristekdikti has established internal and external supervisory bodies to guarantee quality in institutions. One of the important parameters for measuring the capacity of a quality culture is accreditation, both at the institutional (university) level and at the study program level. Institutional accreditation becomes the benchmark for the public to assess whether a university or its study programs are in accordance with the national standard of higher education (SNPT) as stipulated by the government. In addition, as a current trend, companies and government institutions have included a minimum "B" institutional accreditation as one of the prerequisites for employee recruitment.
[6] Based on data from the Ministry of Research, Technology and Higher Education, currently there is still a high disparity of education quality as reflected in some sources, that only 50 universities of 4,472 universities in Indonesia have the institutional accreditation of "A", and only 12% of the overall study programs are "A"-accredited from a total of 20,254 study programs that have been accredited. At present, the accreditation of the institutions and the study programs and institutions, whether the accreditation is carried out by the National Accreditation Board of Higher Education (BAN-PT) or other international accreditation such as ABET or ASIIN, is the only reliable instrument to measure the quality of higher education. Based on some of the background factors, BAN-PT has authority to improve the accreditation instruments in order to improve the quality towards a better direction. Intellectual Property (IP) and Intellectual Property Rights (IPR) As humans are given with intelligence by the Almighty God, who distinguishes from other creatures, have potentials to produce a variety of intellectual works that may have economic value. The intellectual property can be in the form of creation, invention, and design in the area of science, art, literature and technology which are commonly known as intellectual property (IP). The terminology of IP does not automatically become intellectual property rights (IPR) as these are exclusive rights granted by the State to the creators, inventors, or designers to prohibit and close other people or other parties from using, publishing, or copying the intellectual property. Public in general are relatively mistakenly using the terms intellectual property (IP) and intellectual property rights (IPR). The society relatively often uses the brand and the rights to their brand without paying attention to the context being discussed, even though this might lead into considerable implications from some aspects such as legal protection on intellectual property. In order to change the status of IP to IPR should refer to two systems. First, a declarative system (first to use system), which means that IP becomes IPR (can be protected by law) when the IP is created in reality (automatic protection). Any registration or recording process is merely limited to administration, and it does not become a basis given the inherent the rights of an IP. Second is constitutive system (first to file into system), which means that IP becomes IPR when it is registered and obtains a decree from the Kemenkumham in the form of a certificate. The use of these two terminologies should be used properly, so that any mistakes in the use of the terms shall not occur, especially for the legal academics. Some rationales that become the bases why IP is considered as an asset that has economic value should be protected. First, the natural rights which are properly received by the creator, inventor, or designer to spend or use their time, energy, thoughts, and even costs to create the IP. Second, with the protection of IP, especially in the field of invention, it is believed to increase the inventors' passion to conduct further research. Third, the protection to IP, in the end, may attract healthy business competition climate. In the case of Indonesia, there are many IPs that do not necessarily have IPR. 
The fact is that Indonesia (formerly Nusantara / the East Indies), as an eastern country, focuses more on communal rights over intellectual works than on the individual rights that underlie the principles of IPR. IPR was neither popularly known nor familiar in the Indonesian community, as it contradicts customary law, so Indonesia only became acquainted with the idea of privileged rights over intellectual property when the colonial system was applied by the Dutch. Indonesia began to fully implement the concept of IPR after becoming a member of the Agreement Establishing the World Trade Organization, which covers the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs Agreement), through Law Number 7 of 1994. The implication of this agreement is that Indonesia must provide full protection of intellectual property by enacting legislation that refers to the TRIPs Agreement. In general, IPR can be classified into two types, namely copyright and industrial rights. The classification is based on the Berne Convention for the Protection of Literary and Artistic Works and the Paris Convention for the Protection of Industrial Property. Copyright covers creativity in science, art, and literature. Terms related to copyright are regulated in Law Number 28 of 2014 on Copyright (UUHC). Copyright carries both economic and moral rights, which not all other types of IPR possess. On this basis, the rights inherent in the creator should be respected and not used by others without permission. Industrial rights include patents, trademarks, industrial designs, trade secrets, and integrated circuit layout designs. Patents are property rights in the area of technology that help humans carry out their activities more easily; historically, patents began to apply in the Industrial Revolution era, which was marked by the shift from human power to engine power. Patent arrangements in Indonesia are stipulated in Law Number 13 of 2016 on Patents (Patent Law). Based on the Patent Law, legal protection is granted to inventions based on the following criteria [7]:
a. The invention should be in the area of technology;
b. The technology that is invented must be problem-solving in nature;
c. The invention must be new, meaning it has never been published in written form or verbally and has never been demonstrated;
d. The invention should contain inventive steps, which means the invention cannot be predicted beforehand; and
e. The invention to be patented can be applied in the industrial sector, so that if the invention is in the form of a product, the product can be mass-produced using a certain technology.
Industrial design can be defined as a creation of the shape, configuration, or composition of lines or colors, or lines and colors, or a combination thereof, in three-dimensional or two-dimensional form, that gives an aesthetic impression, can be realized in a three-dimensional or two-dimensional pattern, and can be used to produce a product, industrial commodity goods, or handicrafts (Law of Industrial Design, 2000). In relation to IPR protection, industrial design is intended to enhance design development and, at the same time, provide balanced economic rights over design works [8]. For Indonesia, a country with very many micro, small, and medium enterprise (MSME) actors, industrial design is an asset for companies in relation to the products they produce. Trademarks were not initially associated with IPR because trademarks concern business more than human intellect, as patents and copyrights do. In the end, however, various international agreements made trademarks an inseparable part of IPR. Historically, the trademark is one of the important aspects for the sustainability of a business, as it serves as a distinguishing feature [9] between the names of the products a party produces and the products of other parties. Law Number 20 of 2016 on Trademarks and Geographical Indications regulates the scope, duration, and violations of trademarks. It states that a trademark is a sign that can be displayed graphically in the form of images, logos, names, words, letters, numbers, arrangements of colors, in 2 (two) dimensions and/or 3 (three) dimensions, sounds, holograms, or combinations of 2 (two) or more of these elements, used to distinguish goods and/or services produced by a person or a legal entity in the activity of trading goods and/or services (Law of Trademarks and Geographical Indications). Therefore, the requirements for trademark protection are a sign, distinguishing power, and use in the activities of trading goods and/or services. A trade secret, as regulated in Law Number 30 of 2000 on Trade Secrets (UURD), is one form of expensive investment, in addition to other forms of investment, that should be guarded against all parties so that it is not misused for the benefit of others through dishonest competition [10]. Trade secrets, also known as "know-how", are IPR related to information in the field of business or technology that has economic value and whose confidentiality must be maintained.
Legal Aspects of IPR in LKPS Instruments
The quality of education is one of the most fundamental aspects of the progress of a nation. Education is a priority for a country to produce great human resources. When Japan was defeated by the Allies, the first people sought were not police officers, soldiers, or doctors, but teachers. This brief story shows how Japan considers the importance of education for the civilization of a nation; Japan is currently one of the developed countries in the world, with a variety of technologies exported to countries throughout the world.
As a developing country, Indonesia has set out one of its objectives in the preamble of the 1945 Constitution of the Republic of Indonesia, namely "to educate the life of the nation", and the realization of this objective has been supported by a budget provision for education of at least 20% (twenty percent) of the state budget (APBN). Establishing good education, however, is not merely a matter of guaranteeing the budget, but also of applying the education system properly, proportionally, and responsibly throughout the country. A solid and established national education system is believed to guarantee the fulfillment of people's needs for quality human resources [11]. The national education system, as stipulated in the Law of the National Education System, consists of several levels, namely primary education, secondary education (junior and senior high school), and higher education. Higher education is expected to play a strategic role in realizing the goals of the country, as it is the place where many intellectuals invent and develop ideas and thoughts. For this, higher education should have standardized and measurable quality through accreditation mechanisms. All tertiary institutions and study programs are required to be accredited regularly, every five years, by the national accreditation agencies, e.g., BAN-PT or LAM. In relation to the quality of higher education, one of the indicators is the research output of lecturers. The academic community of an institution, in conducting research and community service (PKM), has produced considerable outputs, one of which is IP that has economic value. The term IP can be interpreted simply as the result of human thought, using the intellect, in the form of creations, inventions, and designs in the fields of science, art, literature, and/or technology. The term IPR, in turn, refers to the exclusive rights granted by the state to the creators, inventors, or designers of the intellectual works produced. Each type of IPR has distinctive characteristics, presented in detail in Table 2.
Table 2. Types and characteristics of IPR
Patents:
- Intellectual property in the field of technology produced by inventors for their inventions;
- The scope of patents includes ordinary patents and simple patents;
- The invention must have novelty, contain inventive steps, and be applicable in the industrial world;
- The patent protection system is constitutive;
- The protection period is 20 years for an ordinary patent and 10 years for a simple patent; it cannot be extended, after which the invention becomes public domain (public property); and
- An annual fee must be paid to the Directorate General of Intellectual Property (DJKI), Kemenkumham, to maintain protection of the invention.
Copyright:
- Intellectual property in the fields of science, art, and literature;
- The scope of protected works includes creations in the fields of science, art, and literature;
- The requirement for a work to be protected is the originality of the work;
- The copyright protection system is declarative; protection is automatic when a work is manifested in a tangible form; and
- The legal protection for copyright lasts for the creator's lifetime plus 70 years.
Industrial design:
- Intellectual property for creation and innovation in the field of design;
- The scope covers both two-dimensional and three-dimensional designs, including shapes, configurations, or compositions of lines or colors, or lines and colors, or a combination thereof;
- The requirements for protection are that the design is a creation, gives an aesthetic impression, and can be used to produce a product, industrial commodity goods, or handicrafts;
- The industrial design protection system is constitutive; and
- The duration of industrial design protection is 10 (ten) years.
Trademarks:
- Intellectual property related to the mark used in the activities of trading goods and/or services;
- The scope covers images, logos, names, words, letters, numbers, arrangements of colors, in two-dimensional and/or three-dimensional form, sounds, holograms, or a combination of 2 (two) or more of these elements;
- The requirements for a trademark are a sign, distinctiveness, and use in the trading of goods and/or services;
- The trademark protection system is constitutive; and
- The duration of trademark protection is 10 years, and it can be extended.
Trade secrets:
- Intellectual property in the field of information on technology or business used in trading activities;
- The scope includes production methods, processing methods, sales methods, or other information in the area of technology ("know-how") and/or business;
- The requirements for a trade secret are that the information is confidential, has economic value, and is kept confidential;
- The protection system for trade secrets is in principle declarative, as long as the information is kept confidential; and
- The protection period for a trade secret is unlimited as long as the information remains confidential.
Integrated Circuit Layout Design (DTLST):
- IPR in the area of technology, in particular semiconductor materials, to produce electronic functions;
- A DTLST can be granted protection if it has originality and authenticity, being an independent work of the designer and not something that is commonplace among designers;
- The protection system for DTLST is constitutive; and
- The DTLST protection period is 10 years, and it cannot be extended.
Source: Processed data by author
Based on Table 2 and in relation to the LKPS instruments under PerBAN-PT Number 2 of 2019, which require proof of a decree from the Ministry of Law and Human Rights or other authorized ministry, the analysis can be limited to certain types of intellectual property; further review of the formulation of the instruments is needed, covering copyrights, patents, industrial designs, and trade secrets. Copyright is one type of intellectual property with a distinctive characteristic, namely that rights over a work are automatically protected when the work is produced in a tangible form, and any registration or recording with the DJKI of Kemenkumham is only administrative evidence that is relevant in case of later disputes. It would be problematic if the LKPS instruments require the academic community (lecturers and students) to show proof of determination (i.e.
a decree) issued by the Ministry of Law and Human Rights and to have all research and PKM outputs administratively recorded with the DJKI of Kemenkumham as part of the completeness of the instruments. Referring to Table 2, research results and PKM outputs related to copyright have a wide scope, whether they fall within science, art, or literature (Article 1 paragraph (3), Law of Copyright). This broad scope under the Law of Copyright does not rule out the possibility that research results and PKM outputs can be categorized as copyright works, while the formulation in the LKPS instruments has not been stated definitively, even though it could be interpreted to cover all of these criteria. As a consequence, at the implementation level, this may lead to ambiguity and differing perceptions among the assessors of BAN-PT. One example of a research result or PKM outcome is an article published in a reputable national or international journal. Since publication in a journal goes through an editorial and review process, including plagiarism checks (to measure the level of similarity with previous articles), which essentially establishes the originality of the creation before it is finally published, the submitted or published article can be regarded as a creation that automatically obtains legal protection. The legal framework related to copyright protection needs to be adapted to the times [12]. As another example, if members of the academic community conduct research and write a book based on it, the book published by a publisher automatically obtains copyright protection without having to be registered with the DJKI, Kemenkumham. On this rationale, the LKPS instrument should not require written proof or a decree from the authorized ministries, as this may lead to illogical competition among study programs in multiplying the registration of particular creations with the DJKI of Kemenkumham merely to fulfill the LKPS instruments, without considering the substance. In addition, as registration also requires a certain amount of payment, it may become a financial burden for study programs and the academic community if the instruments are actually implemented in this way. Regarding the legal document issued by Kemenkumham as proof for copyright, it is recommended that there be no need to prove a work by a decree, as the Law of Copyright stipulates that copyright protection is gained automatically. Secondly, if legal proof issued by the Ministry of Law and Human Rights is still required, further review of the LKPS instrument is needed to provide definitive criteria on which scope of copyright is covered; otherwise, there would be different interpretations among assessors. Patents are intellectual property rights in the field of technology, covering both processes and products. The provisions for the protection of a patent under the Patent Law follow a constitutive system (first to file), meaning that whoever registers first will hold the patent, in contrast to the system formerly used in the United States, which relied on first to invent for patent protection [13]. The implementation of an LKPS instrument that requires legal proof, i.e., a decree issued by Kemenkumham, might push study programs to register research and PKM outputs in order to obtain decrees from Kemenkumham to pursue accreditation, without considering other important aspects, e.g.
whether the research and PKM meet the patent requirements, whether the inventions and research outcomes of the academic community provide useful value, whether the invention can be commercialized for industry, etc. These considerations should inform the pursuit of a decree from Kemenkumham, as it also involves costs. There are recurring payments for a patent, not only the initial payment at registration, but also annual fees for as long as patent protection is maintained. If a registered patent does not meet the mentioned requirements, for example it does not provide use value and/or cannot be commercialized, it becomes a financial burden for either the inventor or the study program. Regarding patents in the LKPS instrument, not all patents that have obtained a determination from Kemenkumham, in other words have been registered (granted), can be assessed under the LKPS instruments, because of the fees, the long registration process, and the protection period. It is recommended that the LKPS instruments define definitively what is meant by a patent: not merely a registered patent, but a patent that has already been commercialized or licensed (as a process of technology transfer) to third parties; otherwise, the patent becomes a financial burden on the institution's finances if it cannot be commercialized. The institution is required to pay the annual fee during the protection period as the patent holder, since for an invention produced by an inventor (e.g., a member of the academic community) in an official relationship with a government agency, the patent holder is the government agency concerned together with the inventor, unless stated otherwise in an agreement (Law of Patent). Industrial design is the creation of the shape, configuration, or composition of lines or colors, or lines and colors, or a combination thereof, in three-dimensional or two-dimensional form, containing aesthetic values, which can be realized in either three-dimensional or two-dimensional patterns and used to produce products, industrial commodity goods, or handicrafts, as stated in the Law of Industrial Design (UUDI). The UUDI has some weaknesses, as noted by Andrieansjah, related to the notion of aesthetic impression [14] and also to the assessment of novelty or originality. The first thing that needs to be reviewed is the use of the terminology "industrial product design" in the LKPS instrument, which is clearly not in accordance with the UUDI, which only uses "industrial design". The use of different terminology may lead to different meanings: industrial design as specified in the general provisions of Article 1 paragraph (1) of the UUDI covers not only products but also industrial commodities such as goods and/or handicrafts. In other words, the terminology "industrial product design" seems to narrow the meaning determined in the UUDI. In addition, the research and PKM outcomes produced by the academic community have the potential to be granted industrial design protection as long as they meet the requirements, namely that the design is a new creation, gives an aesthetic impression, and can be used to produce a product, industrial commodity goods, or handicrafts. The requirements should not be limited to those specified in the UUDI; the industrial design should also have already been used and implemented in producing products, commodities, or handicrafts, so that registration is not made merely to administratively complete the LKPS instrument.
Finally, the industrial design produced should not be intended merely to fulfill the administrative requirements of the instrument, but should have actual benefits in terms of being applied in the industrial world. Trademarks and trade secrets, in the context of research and PKM outputs, are likely more difficult to apply than the previous types of IPR. If a research result is associated with the "sign" element of a trademark, it will more likely lean towards design than trademark, especially when examined against the requirements of trademark protection, namely a mark, distinguishing features, and use in the trading of goods and services. Both research and PKM outputs find it quite difficult to fulfill these three cumulative requirements, mainly the requirement of use in the trading of goods and services; the same applies to trade secrets. The recommendations for the LKPS instruments related to industrial design are: 1) the terminology "industrial product design" should be reformulated as "industrial design", since the addition of one word can narrow the real meaning in terms of legal standing; 2) an additional requirement should be added, particularly that the design is actually applied in the industrial world; otherwise, study programs such as Product Design or Visual Communication Design are likely to submit large numbers of designs to the Ministry of Law and Human Rights to obtain determination letters without considering the substantive aspect of application in the industrial world. In terms of the objectives of law, as presented in the modern theory of Gustav Radbruch covering justice, certainty, and benefit, requiring research and PKM outcomes on copyrights, patents and industrial designs to carry a determination letter from the authorized ministries leans mostly towards the goal of legal certainty [15]. These objectives, however, cannot all be achieved simultaneously; there is rather a tug of war between one goal and another. Moreover, legal certainty should not be considered the ultimate goal; rather, the benefit for the institutions, designers, creators, or inventors should be. The recommendations are summarized in the following table:
Table 3. Recommendations for LKPS Instruments (types of IPR and proposed instrument formulations)
1. Copyrights:
- There is no need to prove a creation with a decree issued by the Ministry of Law and Human Rights because, based on the Law of Copyright, automatic protection applies when a creation is produced.
- If legal proof issued by the authorized ministries is still required, the LKPS instruments should state definitively which scope of copyright is being assessed; otherwise, there could be different interpretations among the assessors of the accreditation body.
2. Patents:
- Not all patents that have been registered (granted) can be assessed. This is mainly related to financial aspects such as the initial fee and the annual fee for patent protection. The LKPS instruments need to define definitively that the patents being assessed are those that have been commercialized or licensed to third parties, so that they do not burden the institution's finances.
3. Industrial Design:
- The use of the term "industrial product design" should be adjusted to the Law of Industrial Design, namely "industrial design". This change is intended not to narrow the meaning of design to products only, because the Law of Industrial Design also includes industrial commodity goods and handicrafts. This shows the importance of synchronization between the legislation and the LKPS instruments, even if only for a term.
- The LKPS instruments should not only require the decree issued by the authorized ministries as specified in the Law of Industrial Design (an administrative consideration only); instead, the required industrial designs should be those that have been used and implemented in producing products, commodities, or handicrafts. Thus, the industrial design produced is not merely a way to fulfill the administrative requirements, but actually provides benefits in terms of being applied in the industrial world.
Source: Processed data by author
CONCLUSION
Based on the elaboration above, there are several conclusions related to the LKPS instrument as regulated in PERBAN-PT Number 2 of 2019, under which research results and PKM outputs must be proven with a legal determination from the Ministry of Law and Human Rights. First, in terms of copyright, there is no need for the instruments to require research outcomes to have a letter of determination from Kemenkumham; in addition, the LKPS instruments do not provide definitive information on the scope of the copyrights for research and PKM outputs. Second, in the case of patents, not all registered (granted) patents that have been determined by Kemenkumham can be assessed; aspects such as the financial burden and the significance of commercialization must be considered. Third, in terms of industrial design, the LKPS instrument should definitively state "industrial design" instead of "industrial product design"; in addition to the letter of determination by Kemenkumham, the assessment should ensure that the industrial design has been applied in the industrial world. There are several recommendations based on the results of the study. First, in terms of copyright, BAN-PT needs to define definitively which scope, if any, requires a decree from Kemenkumham. Second, in the case of patents, the assessment should not be limited to registered patents, but should also consider other aspects, such as the financial aspect and the significance of commercialization. Third, in terms of industrial design, the terminology and the scope of the assessment should be more clearly emphasized.
Congenital Deletion of Nedd4-2 in Lung Epithelial Cells Causes Progressive Alveolitis and Pulmonary Fibrosis in Neonatal Mice Recent studies found that expression of NEDD4-2 is reduced in lung tissue from patients with idiopathic pulmonary fibrosis (IPF) and that the conditional deletion of Nedd4-2 in lung epithelial cells causes IPF-like disease in adult mice via multiple defects, including dysregulation of the epithelial Na+ channel (ENaC), TGFβ signaling and the biosynthesis of surfactant protein-C proprotein (proSP-C). However, knowledge of the impact of congenital deletion of Nedd4-2 on the lung phenotype remains limited. In this study, we therefore determined the effects of congenital deletion of Nedd4-2 in the lung epithelial cells of neonatal doxycycline-induced triple transgenic Nedd4-2fl/fl/CCSP-rtTA2S-M2/LC1 mice, with a focus on clinical phenotype, survival, lung morphology, inflammation markers in BAL, mucin expression, ENaC function and proSP-C trafficking. We found that the congenital deletion of Nedd4-2 caused a rapidly progressive lung disease in neonatal mice that shares key features with interstitial lung diseases in children (chILD), including hypoxemia, growth failure, sterile pneumonitis, fibrotic lung remodeling and high mortality. The congenital deletion of Nedd4-2 in lung epithelial cells caused increased expression of Muc5b and mucus plugging of distal airways, increased ENaC activity and proSP-C mistrafficking. This model of congenital deletion of Nedd4-2 may support studies of the pathogenesis and preclinical development of therapies for chILD. Introduction Nedd4-2 is an E3 ubiquitin-protein ligase that participates in the posttranscriptional regulation of several proteins including ENaC, Smad2/3 and proSP-C, which play key roles in multiple cellular processes such as epithelial ion and fluid transport, TGFβ signaling and surfactant biogenesis that are essential for epithelial homeostasis and lung health [1][2][3][4][5][6][7][8]. In a previous study, we found that NEDD4-2 is reduced in the lung tissue of patients with idiopathic pulmonary fibrosis (IPF) [9]. Further, we demonstrated that the conditional deletion of Nedd4-2 in lung epithelial cells by doxycycline induction of adult Nedd4-2 fl/fl /CCSP-rtTA2 S -M2/LC1 mice, hereafter referred to as conditional Nedd4-2 −/− mice, causes a chronic progressive, restrictive lung disease that shares key features with IPF in patients including signature lesions such as radiological and histological honeycombing and fibroblast foci [9]. These studies also identified the dysregulation of (i) ENaC, leading to airway surface liquid depletion and reduced mucociliary clearance; (ii) proSP-C biogenesis and (iii) TGFβ/Smad signaling, promoting fibrotic remodeling as epithelial defects and potential mechanisms triggering IPF-like disease in adult conditional Nedd4-2 −/− mice [9]. Compared to the detailed characterization of the functional consequences and resulting pulmonary phenotype produced by the conditional deletion of Nedd4-2 in the lung epithelial cells of adult mice [9], current knowledge on the impact of the congenital deletion of Nedd4-2 on the lung phenotype in neonatal mice remains limited. A mouse line with constitutive systemic deletion of Nedd4-2 demonstrated that the majority of mice lacking Nedd4-2 died during or shortly after birth and that survivors developed substantial neutrophilic inflammation in the lungs at the age of 3 weeks [10]. 
Subsequent studies in mice with constitutive lung-specific deletion of Nedd4-2 using a "leaky" Nedd4-2fl/fl/Sftpc-rtTA/Cre triple transgenic system under the control of the surfactant protein C (Sftpc) promoter showed massive neutrophilic inflammation, aspects of cystic fibrosis-like lung disease and premature death 3-4 weeks after birth [11]. However, the lung phenotype of neonatal Nedd4-2fl/fl/CCSP-rtTA2S-M2/LC1 mice, facilitating "tight" deletion of Nedd4-2 in alveolar type 2 (AT2) cells as well as club cells of the conducting airways under control of the club cell 10 kDa secretory protein (CCSP) [12] promoter, has not been studied. The aim of the present study was therefore to determine the effects of congenital deletion of Nedd4-2 in lung epithelial cells of neonatal Nedd4-2fl/fl/CCSP-rtTA2S-M2/LC1 mice, hereafter referred to as congenital Nedd4-2−/− mice. Using physiologic, histopathologic, inflammatory and microbiological endpoints, we focused on the clinical phenotype including survival, lung morphology, inflammation markers in BAL, mucin (Muc5b and Muc5ac) expression in whole lung and airway mucus content, ENaC-mediated Na+ transport in freshly excised tissues of the conducting airways and proSP-C trafficking in AT2 cells to provide a comprehensive characterization of the lung phenotype of congenital Nedd4-2−/− mice, and to elucidate the impact of the epithelial defects identified in adult conditional Nedd4-2−/− mice in the neonatal lung. The results of this study validate a new mouse model that shares key aspects of interstitial lung diseases in children (chILD), and thus offers new opportunities for studies of the pathogenesis and therapy of these childhood lung diseases with high unmet need [13].
Congenital Deletion of Nedd4-2 in Lung Epithelial Cells Causes Severe Hypoxemia, Failure to Thrive and Early Mortality in Neonatal Mice
To determine the effect of the congenital deletion of Nedd4-2 in epithelial cells of the neonatal mouse lung, we crossed mice carrying Nedd4-2 flanked by loxP sites (Nedd4-2fl/fl) with CCSP-rtTA2S-M2/LC1 mice to enable tight doxycycline-dependent Cre expression for the targeted deletion of Nedd4-2 in club cells of the conducting airways and AT2 cells of the lung [9,12]. Dams were continuously fed with doxycycline from the first day of mating to obtain triple transgenic congenital Nedd4-2−/− mice. At 10 days after birth, before the onset of clinical signs of lung disease, body weight did not differ between congenital Nedd4-2−/− mice (5.4 ± 0.08 g) and littermate controls (5.3 ± 0.11 g). Around 3 weeks after birth, congenital Nedd4-2−/− mice showed clinical symptoms of respiratory distress with severe hypoxemia (Figure 1a), weight loss (Figure 1b) and ~95% mortality within 4 weeks after birth (Figure 1c).
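To illustrate how a survival readout like the one in Figure 1c can be summarized, the following is a minimal sketch of a Kaplan-Meier analysis with a log-rank comparison; the lifelines package, the group sizes and the survival times are assumptions for illustration and are not data from the study.

```python
# Minimal sketch of a Kaplan-Meier survival comparison, assuming hypothetical
# follow-up times (days) and event indicators (1 = death, 0 = censored).
# The lifelines package and all numbers below are illustrative assumptions,
# not data from the study.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

ko_days     = [18, 20, 21, 22, 23, 24, 25, 26, 27, 28]  # congenital Nedd4-2-/- (hypothetical)
ko_events   = [1] * 9 + [0]                              # one survivor censored at day 28
ctrl_days   = [28] * 10                                  # littermate controls (hypothetical)
ctrl_events = [0] * 10                                   # all alive at end of observation

kmf = KaplanMeierFitter()
kmf.fit(ko_days, event_observed=ko_events, label="congenital Nedd4-2-/-")
ax = kmf.plot_survival_function()
kmf.fit(ctrl_days, event_observed=ctrl_events, label="littermate controls")
kmf.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two survival curves.
result = logrank_test(ko_days, ctrl_days,
                      event_observed_A=ko_events, event_observed_B=ctrl_events)
print(result.p_value)
```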
Congenital Deletion of Nedd4-2 in Lung Epithelial Cells Causes Alveolar Inflammation and Fibrosis in Neonatal Mice
Microscopically, hematoxylin and eosin (H&E) stained lung sections from 10-day-old congenital Nedd4-2−/− mice did not show abnormalities compared to littermate controls (Figure 2a), whereas lung sections from 3-week-old congenital Nedd4-2−/− mice displayed patchy inflammatory infiltrates, especially in the periphery of the lung (Figure 2b). These same regions also showed evidence of epithelial hyperplasia and alveolitis, with large foamy macrophages and granulocytes infiltrating the alveolar airspaces in the affected areas (Figure 2b). Masson-Goldner trichrome staining of lung sections of 3-week-old congenital Nedd4-2−/− mice showed substantial collagen deposition in affected lung regions (Figure 2c). The use of multiple control lines established that the observed phenotype was not caused by off-target effects of rtTA, Cre recombinase or doxycycline and that the expression system was tight in the absence of doxycycline (Figure A1, Appendix A).
Development of Pneumonitis in Congenital Nedd4-2−/− Mice
BAL studies demonstrated that the histological pneumonitis observed in congenital Nedd4-2−/− mice was accompanied by a dynamic polycellular inflammatory cell influx, as well as a mixed proinflammatory cytokine response. Assaying the BAL of 10-day-old and 3-week-old congenital Nedd4-2−/− mice revealed that the congenital deletion of Nedd4-2 produced an early (10 days) increase in the number of macrophages that demonstrated morphologic features of activation, including irregular shape, vacuolized cytoplasm and increased size, which was accompanied by increased activity of matrix metalloproteinase 12 (Mmp12) on the cell surface (Figure 3a-d), as previously described in Scnn1b-Tg mice with muco-obstructive lung disease [14,15]. Inflammatory parameters further increased by the age of 3 weeks, with elevated numbers of neutrophils and eosinophils (Figure 3e).
Congenital Deletion of Nedd4-2 in Lung Epithelial Cells Causes Mucus Plugging and Epithelial Necrosis in Distal and Terminal Airways in Neonatal Mice
Previous studies in adult mice with conditional deletion of Nedd4-2 identified epithelial remodeling of the distal airways with increased numbers of mucin-producing goblet cells, expression of Muc5b and impaired mucociliary clearance as key features of IPF-like lung disease in this model [9].
We therefore determined expression of the secreted mucins Muc5b and Muc5ac and mucus content in the lungs of congenital Nedd4-2 −/− mice. Transcript levels of Muc5b and Muc5ac were increased in the lungs of 3-week-old congenital Nedd4-2 −/− mice compared to controls (Figure 5a,b). Alcian blue-periodic acid-Schiff (AB-PAS) staining of lung sections showed goblet cell metaplasia and mucus plugging in the distal and terminal airways of 3-week-old congenital Nedd4-2 −/− mice, especially in regions with a high grade of inflammation and fibrosis (Figure 5c), but not in age-matched littermate controls. Previous studies in Scnn1b-Tg mice and patients with muco-obstructive lung disease demonstrated that airway mucus plugging, probably via local hypoxia in the airway lumen, led to hypoxic degeneration and necrosis of airway epithelial cells [16][17][18][19]. Similarly, we found increased numbers of degenerative cells in the mucus-obstructed distal and terminal airways, especially in inflamed and fibrotic lung regions of 3-week-old congenital Nedd4-2 −/− mice (Figure 5d,e).

Increased ENaC Activity in Freshly Excised Airway Tissues of Congenital Nedd4-2 −/− Mice

Nedd4-2 was shown to regulate cell surface expression of ENaC [4,20], and our previous studies demonstrated that a lack of Nedd4-2 caused increased ENaC function, which led to airway surface liquid depletion and impaired mucociliary clearance in adult conditional Nedd4-2 −/− mice [9]. To investigate the effects of the congenital deletion of Nedd4-2 on ENaC activity, we performed bioelectric Ussing chamber experiments in freshly excised tracheal tissue from 10-day-old neonatal mice. At postnatal day 10, i.e., prior to detectable histological changes, the ENaC-mediated amiloride-sensitive short circuit current (I SC ) was significantly increased in congenital Nedd4-2 −/− mice compared to littermate controls (Figure 6a,b), supporting a role of increased ENaC activity in the pathophysiology of the observed phenotype.
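For orientation, the amiloride-sensitive short circuit current reported here is conventionally quantified as the drop in I SC upon apical amiloride addition; written out explicitly (as a generic definition, not a formula taken from the Methods of this paper),
\[ \Delta I_{SC}^{\mathrm{amiloride}} = I_{SC}^{\mathrm{baseline}} - I_{SC}^{\mathrm{post\text{-}amiloride}}, \]
so that a larger amiloride-sensitive component reflects greater ENaC-mediated Na + absorption across the tissue.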
proSP-C Is Mistrafficked in Lung Epithelial Cells of Congenital Nedd4-2 −/− Mice

Nedd4-2 was also shown to play a role in the posttranslational regulation of SP-C expressed in AT2 cells, and previous studies found mutations in the SFTPC gene in association with the development of ILD both in children (chILD) and in familial IPF in adults [21][22][23][24][25][26]. In our previous studies, we found that a lack of Nedd4-2 causes mistrafficking of proSP-C, but that this defect did not play a dominant role in determining the IPF-like lung phenotype produced by the conditional deletion of Nedd4-2 in adult mice [9]. To determine the impact of proSP-C mistrafficking due to lack of Nedd4-2 in the neonatal lung, we performed biochemical studies and investigated the effect of the genetic deletion of Sftpc in congenital Nedd4-2 −/− mice. Using double label fluorescence immunohistochemistry for proSP-C and Lamp-1, we found that, in 3-week-old neonatal control mice, proSP-C was predominantly localized in Lamp-1 positive lamellar bodies. In the lungs of 3-week-old congenital Nedd4-2 −/− mice, similar to our findings in adult conditional Nedd4-2 −/− mice [9], a significant proportion of proSP-C shifted to Lamp-1 negative cytosolic compartments (Figure 7a). The mistrafficking of proSP-C was accompanied by marked changes in its posttranslational processing. Western blots of proSP-C from lung homogenates of 3-week-old mice revealed a 21-22 kDa proSP-C doublet in control mice while, in congenital Nedd4-2 −/− mice, the primary translation product doublet shifted to a single band, accompanied by the appearance of a new intermediate around 16-17 kDa (Figure 7b).
In BAL, Western blotting revealed a reduction in mature SP-C in 3-week-old congenital Nedd4-2 −/− mice compared to littermate controls (Figure 7c). Despite a major impact on SP-C biosynthesis, other components of the surfactant system, such as surfactant protein B and D (SP-B and SP-D), were largely unaffected (Figure 7b). Despite in vivo confirmation of the previously described role for NEDD4-2 in SFTPC biosynthesis [1,2], and similar to our previous studies in adult conditional Nedd4-2 −/− mice [9], we found that proSP-C mistrafficking alone was insufficient to drive the abnormal lung phenotype found in neonatal mice with the congenital deletion of Nedd4-2. When Nedd4-2 fl/fl /CCSP-rtTA2 S -M2/LC1 mice were crossed with Sftpc-deficient (Sftpc −/− ) mice and induced in utero with doxycycline, the genetic deletion of Sftpc in quadruple transgenic mice had no effect on survival (Figure 7d), the number of BAL macrophages (Figure 7e), neutrophils (Figure 7f) or eosinophils (Figure 7g), or on structural lung disease (data not shown) compared to triple transgenic congenital Nedd4-2 −/− mice. These data are consistent with our previous results in adult conditional Nedd4-2 −/− mice, and imply that congenital Nedd4-2 deficiency imparts a toxic effect that is not attributable to a single protein but is more likely caused by pleiotropic effects on AT2 cell homeostasis.

Discussion

This study demonstrates that the congenital deletion of Nedd4-2 in lung epithelial cells causes a spontaneous and rapidly progressive lung disease in neonatal mice that shares key clinical and histopathological features of interstitial lung diseases in children (chILD), and thereby extends recent reports on the E3 ubiquitin ligase NEDD4-2 in the pathogenesis of ILD [9]. These features include respiratory distress, hypoxemia, growth failure, sterile alveolitis, patchy fibrotic remodeling of the alveolar airspaces and high neonatal mortality (Figures 1-4) [27,28]. Similar to conditional deletion in adult mice [9], we found that the congenital deletion of Nedd4-2 results in increased expression of the mucins Muc5b and Muc5ac and remodeling of the distal airways including goblet cell metaplasia in congenital Nedd4-2 −/− mice (Figure 5). In addition, epithelial defects previously reported in adult conditional Nedd4-2 −/− mice, such as increased ENaC-mediated Na + /fluid transport and abnormal proSP-C trafficking, were confirmed in the lungs of neonatal congenital Nedd4-2 −/− mice (Figures 6 and 7) [9]. Taken together, these results demonstrate that Nedd4-2 in lung epithelial cells plays an important role in normal lung development, provide additional evidence for its importance in lung health, and establish a mouse model of chILD, a spectrum of lung diseases in children with high unmet need. Besides the important similarities of pulmonary phenotypes caused by the congenital vs.
the conditional deletion of Nedd4-2 in the murine lung, including restrictive lung disease with patchy fibrotic remodeling of distal airspaces due to dysregulated Smad2/3 signaling, leading to increased levels of TGFβ, remodeling of distal airways with goblet cell metaplasia and increased expression of Muc5b, as well as high pulmonary mortality (Figures 1, 2 and 5) [9,29], our study also revealed some striking age-dependent differences. First, the onset and progression of ILD was substantially accelerated in congenital vs. conditional Nedd4-2 −/− mice, as evidenced by the time point of mortality, which occurred within ~4 weeks after birth in most neonatal congenital Nedd4-2 −/− mice compared to ~4 months after conditional deletion of Nedd4-2 in adult mice (Figure 1) [9]. Second, alveolitis with inflammatory cell infiltrates, including morphologically activated "foamy" macrophages, neutrophils and eosinophils associated with elevated pro-inflammatory cytokines such as IL-1β, KC and IL-13 in BAL, was substantially more prominent in neonatal congenital Nedd4-2 −/− mice compared to conditional Nedd4-2 −/− mice (Figure 3) [9]. Third, histopathologic studies of the lungs of congenital Nedd4-2 −/− mice revealed mucus plugging of the distal airways that was associated with hypoxic epithelial necrosis (Figure 5), a phenotype that was previously reported in neonatal Scnn1b-Tg mice with muco-obstructive lung disease [16][17][18][19], but not observed in adult conditional Nedd4-2 −/− mice [9]. Based on these findings, we studied the role of proSP-C trafficking in AT2 cells and ENaC-mediated Na + transport across freshly excised airway tissues of neonatal congenital Nedd4-2 −/− mice, i.e., epithelial cell functions that we previously found to be abnormal in adult conditional Nedd4-2 −/− mice, as a potential explanation for these age-dependent differences in lung phenotypes. Using a variety of techniques, our data provide evidence of defective proSP-C trafficking, maturation and secretion in this neonatal model (Figure 7), which parallels findings we reported in adult conditional Nedd4-2 −/− mice [9]. However, similar to adult conditional Nedd4-2 −/− mice, the genetic deletion of Sftpc was insufficient to rescue the lung disease phenotype in congenital Nedd4-2 −/− mice (Figure 7) [9]. Thus, neither misprocessed proSP-C nor the loss of mature SP-C in surfactant is sufficient to drive the ILD phenotype or to explain the age-dependent differences observed in neonatal congenital vs. adult conditional Nedd4-2 −/− mice. Similar to previous studies in adult conditional Nedd4-2 −/− mice [9], we show that congenital deletion of Nedd4-2 produces increased ENaC activity in airway epithelial cells of neonatal mice (Figure 6). In adult conditional Nedd4-2 −/− mice, we demonstrated that increased ENaC-mediated Na + /fluid absorption across airway epithelia, as previously shown in patients with cystic fibrosis and Scnn1b-Tg mice [6,[30][31][32][33], results in airway surface liquid depletion and impaired mucociliary clearance [9]. As mucociliary clearance is an important innate defense mechanism of the lung, and retention of inhaled irritants and pathogens leads to repeated micro-injury and chronic inflammation, our data support mucociliary dysfunction as an important disease mechanism triggering ILD in both congenital and adult conditional Nedd4-2 −/− mice [9,32,34,35].
Of note, this concept is consistent with studies in Muc5b-overexpressing mice that exhibit impaired mucociliary clearance and develop more severe bleomycin-induced pulmonary fibrosis [36]. The importance of dysregulated ENaC activity in the pathogenesis of ILD in congenital Nedd4-2 −/− mice is also supported by the observation that this epithelial ion transport defect was already present in 10-day-old mice with normal lung morphology, i.e., prior to the onset of histological signs of ILD (Figure 2), as well as by previous studies in Nedd4-2 fl/fl /Sftpc-rtTA/Cre mice with the constitutive deletion of Nedd4-2 under control of the SP-C promoter [11] and in mice with the constitutive overexpression of the α and β subunits of ENaC in the lung [37]. In both models, increased ENaC activity in the distal lung was associated with severe pulmonary inflammation, mucus obstruction of distal airways and high neonatal mortality [11,37]. Taken together, these data support increased ENaC activity leading to airway/alveolar surface liquid depletion and mucociliary dysfunction in distal airways as a key pathogenetic mechanism of ILD in congenital Nedd4-2 −/− mice. Interestingly, a previous study in fetal distal lung epithelial cells of wild-type rats found that male sex is associated with reduced ENaC-mediated Na + transport [38]. Our study included all newborns from each litter, resulting in a balanced distribution of male and female neonates that enabled an exploratory analysis of potential gender differences. Similar to previous studies in rat lung epithelia [38], we observed a ~30% reduction in ENaC-mediated Na + absorption in male vs. female mice in both the control group and the congenital Nedd4-2 −/− group (data not shown). However, this gender difference in ENaC function did not reach statistical significance based on the number of mice available for our study. Similarly, other pulmonary phenotypes of neonatal congenital Nedd4-2 −/− mice including hypoxemia, growth failure, pulmonary inflammation, mucin expression, epithelial cell necrosis, abnormal proSP-C trafficking and mortality did not differ between male and female mice. However, our study was not powered to detect gender differences, and future studies are necessary to determine the potential role of gender differences in ENaC-mediated Na + absorption in the pathogenesis of lung disease in congenital Nedd4-2 −/− mice. Several factors may explain the age-specific differences in pulmonary phenotypes produced by deletion of Nedd4-2 in neonatal vs. adult mice. First, the accelerated onset and increased severity of pulmonary inflammation observed in congenital Nedd4-2 −/− mice may be explained by an increased susceptibility of the neonatal lung to the retention of inhaled irritants, as previously shown for cigarette smoke exposure in Scnn1b-Tg mice with muco-obstructive lung disease [39]. Second, in congenital Nedd4-2 −/− mice, we found that increased ENaC activity leading to mucociliary dysfunction, probably due to the smaller diameter of neonatal vs. adult airways, is associated with mucus plugging and hypoxic epithelial cell necrosis of the distal airways (Figure 5), whereas this phenotype was not observed in conditional Nedd4-2 −/− mice [9].
As hypoxic epithelial cell necrosis in mucus-obstructed airways has been identified as a strong trigger of sterile inflammation via activation of the pro-inflammatory IL-1 signaling pathway in the absence of bacterial infection in Scnn1b-Tg mice and in patients with muco-obstructive lung diseases such as cystic fibrosis and chronic obstructive pulmonary disease [16,[40][41][42], this mechanism may also contribute to the more severe inflammatory phenotype caused by the congenital deletion of Nedd4-2 in the neonatal lung. Finally, the differences in the onset and progression of ILD in congenital vs. conditional Nedd4-2 −/− mice may be explained by age-dependent differences in the temporal and spatial activity of the CCSP promoter observed in previous studies of the CCSP-rtTA2 S -M2 activator line that was used for inducible lung-specific deletion of Nedd4-2 [9,12]. These studies demonstrated a broader expression of the reverse tetracycline transactivator rtTA2 S -M2 in AT2 cells as well as club cells throughout the conducting airways of the neonatal lung, whereas in adult mice rtTA2 S -M2 expression was more restricted to AT2 cells and club cells of the distal airways [12]. In addition, a previous study demonstrated age-dependent activity of the CCSP promoter, with the highest levels around birth and decreasing activity in older mice [43]. These temporal and spatial differences are expected to result in a faster and more widespread deletion of Nedd4-2 in the neonatal vs. adult lung that may aggravate increased ENaC activity and mucociliary dysfunction, increased pro-fibrotic TGFβ signaling and potentially other pathogenic processes induced by Nedd4-2 deficiency [1][2][3][8][9][10][11][44][45][46][47][48][49]; therefore, they might also contribute to the more rapid onset and progression of ILD in congenital vs. conditional Nedd4-2 −/− mice. Previous studies demonstrated that systemic deletion of Nedd4-2 leads to perinatal lethality in mice, and loss-of-function variants of NEDD4-2 have not been described in humans [10]. In our study, targeted in utero deletion of Nedd4-2 in lung epithelial cells did not cause perinatal morbidity or mortality, as evidenced by a distribution of genotypes consistent with the expected Mendelian ratios, normal development and weight gain, as well as a lack of respiratory symptoms in the first 10 days of life (Figures 2, 3 and A2). However, our data demonstrate that the congenital deletion of Nedd4-2 in the lung leads to an early onset and rapid progression of ILD beyond the perinatal period (Figures 1-3). In our previous study, we found that NEDD4-2 protein and transcript levels were reduced in lung tissue biopsies from IPF patients, supporting the role of NEDD4-2 dysfunction in human ILD [9]. Based on these findings in adult IPF patients, we speculate that NEDD4-2 deficiency may also be implicated in the pathogenesis of chILD. However, future studies are necessary to test this hypothesis and to determine mechanisms of lung-specific NEDD4-2 deficiency that may be caused, e.g., by transcriptional, post-transcriptional or epigenetic regulation of NEDD4-2 in the lung. In summary, our results demonstrate that the congenital deletion of Nedd4-2 in lung epithelial cells causes severe ILD in neonatal mice that shares key features with interstitial lung diseases in children (chILD), including respiratory distress, hypoxemia, growth failure, sterile alveolitis, progressive fibrotic remodeling of the lung parenchyma and high mortality.
These data further substantiate an important role of Nedd4-2 in normal lung development and lung health, and establish a mouse model of chILD that may serve as a useful tool for studies of the complex in vivo pathogenesis, the identification of biomarkers and therapeutic targets, as well as preclinical evaluations of novel therapeutic strategies that are urgently needed to improve the clinical outcome of patients with chILD [13].

Experimental Animals

All animal studies were approved by the animal welfare authority responsible for the University of Heidelberg (Regierungspräsidium Karlsruhe, Karlsruhe, Germany). Mice for congenital deletion of Nedd4-2 in lung epithelial cells were generated as previously described [9]. In brief, mice carrying Nedd4-2 fl/fl [11] were intercrossed with CCSP-rtTA2 S -M2 line 38 (CCSP-rtTA2 S -M2) [12] and LC1 mice [50,51]. All three lines were on a C57BL6/N background. Sftpc −/− mice [52] were obtained on a 129S6 background. Mice were housed in a specific pathogen-free animal facility and had free access to food and water. For prenatal induction, dams were treated continuously with doxycycline from the first day of mating, and mice were studied at 10 days and 3 weeks of age. All newborn mice of a litter were included in our study, irrespective of gender and genotype, yielding a balanced gender distribution in the control and congenital Nedd4-2 −/− groups. Details on the genotype distribution are provided in Figure A2, Appendix A.

Measurement of Inflammatory Markers in BAL

BAL was performed and differential cell counts and macrophage sizes were determined as previously described [17]. Concentrations of KC (CXCL-1) and IL-13 were measured in cell-free BAL supernatant and IL-1β was measured in total lung homogenates by ELISA (R&D Systems, Minneapolis, MN, USA) according to the manufacturer's instructions. Mmp12 activity on the surface of BAL macrophages was assessed by a Förster resonance energy transfer (FRET)-based activity assay as previously described [14]. In brief, BAL cells were incubated for 10 min at room temperature with the membrane-anchored FRET reporter Laree1 (1 µM). Cells were diluted with PBS to a volume of 200 µL and centrifuged on slides by cytospin. Membrane-bound Mmp12 activity was measured by confocal microscopy. Images were acquired on a Leica SP8 confocal microscope with an HC PL APO CS2 63× 1.3 oil objective (Leica Microsystems, Wetzlar, Germany). The donor/acceptor ratio was calculated using the open-source image analysis software Fiji version 1.46r [53,54].

Histology and Morphometry

Right lungs were inflated with 4% buffered formalin at a fixative pressure of 25 cm. Noninflated left lungs were immersion fixed. Lungs were paraffin embedded, sectioned at 5 µm and stained with H&E, Masson-Goldner-Trichrome and AB-PAS. Images were captured with a NanoZoomer S60 Slidescanner (Hamamatsu, Hamamatsu City, Japan) at a magnification of 40×. Airway regions were determined from proximal-to-distal distances and airway branching, as determined by longitudinal sections of lung lobes at the level of the main axial airway, as previously described [55]. Degenerative cells were identified by morphologic criteria such as swollen cells with vacuolized cytoplasm and pyknotic nuclei in H&E stained lung sections. Numeric cell densities were determined using NDP.view2 software version 2.7.52 (Hamamatsu, Hamamatsu City, Japan), as previously described [16].
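As an illustration of the donor/acceptor ratio calculation described in the BAL subsection above (the published analysis was performed in Fiji), the following Python sketch computes a mean per-mask intensity ratio from two matched channel images; the synthetic images, the mask and the background value are placeholders, not data or code from this study.

    import numpy as np

    def donor_acceptor_ratio(donor, acceptor, mask, background=0.0):
        """Mean donor/acceptor intensity ratio within a cell mask.

        Illustrative only: a higher ratio is read out here as more cleavage of a
        membrane-anchored FRET reporter, i.e. higher surface Mmp12 activity.
        """
        donor = donor.astype(float) - background
        acceptor = acceptor.astype(float) - background
        valid = mask & (acceptor > 0)          # exclude pixels that would divide by zero
        return float(np.mean(donor[valid] / acceptor[valid]))

    # synthetic placeholder images (the real analysis used confocal images)
    rng = np.random.default_rng(0)
    donor = rng.normal(100, 5, size=(64, 64))
    acceptor = rng.normal(50, 5, size=(64, 64))
    mask = np.ones_like(donor, dtype=bool)     # placeholder "cell" mask
    print(donor_acceptor_ratio(donor, acceptor, mask, background=10.0))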
Pulse Oximetry

Oxygen saturation of 3-week-old mice was determined using a noninvasive pulse oximeter for laboratory animals (MouseOx ® Plus, Starr Life Science, Oakmont, PA, USA) and measured with a thigh clip sensor, as previously described [9]. Percent oxygen saturation was measured after stabilization of heart rate and breathing frequency.

Immunofluorescence Microscopy

Lung sections were evaluated for proSP-C using a primary polyclonal anti-NproSP-C antibody and Alexa Fluor 488 conjugated goat anti-rabbit IgG (Jackson ImmunoResearch, 111-545-062, West Grove, PA, USA), as described previously [24]. Confocal images were acquired using the 488 nm laser line package of an Olympus Fluoview confocal system attached to an Olympus IX81 microscope (60× oil objective).

Electrogenic Ion Transport Measurements

Mice were deeply anesthetized via intraperitoneal injection of a combination of ketamine and xylazine (120 mg/kg and 16 mg/kg, respectively) and killed by exsanguination. Airway tissues were dissected using a stereomicroscope as previously described [60,61] and immediately mounted into perfused micro-Ussing chambers. Experiments were performed at 37 °C under open-circuit conditions, and the amiloride-sensitive ENaC-mediated short circuit current (I SC ) was determined as previously described [61].

mRNA Expression Analysis

Lungs from mice were stored at 4 °C in RNAlater (Applied Biosystems, Darmstadt, Germany). Total RNA was extracted using Trizol reagent (Invitrogen, Karlsruhe, Germany) according to the manufacturer's instructions. cDNA was obtained by reverse transcription of 1 µg of total RNA with Superscript III RT (Invitrogen, Karlsruhe, Germany). To analyze mRNA expression of mucins, quantitative real-time PCR was performed on an Applied Biosystems 7500 Real Time PCR System using TaqMan universal PCR master mix and the following inventoried TaqMan gene expression assays for Muc5b (Accession No. NM_028801.2; Taqman ID Mm00466391_m1) and Muc5ac (Accession No. NM_010844.1; Taqman ID Mm01276718_m1) (Applied Biosystems, Darmstadt, Germany) according to the manufacturer's instructions. Relative fold changes of target gene expression were determined by normalization to expression of the reference gene Actb (Accession No. NM_007393.1; Taqman ID Mm00607939_s1) [17,62].

Microbiology Studies

BAL was performed in 3-week-old mice under sterile conditions. Mice were deeply anesthetized via intraperitoneal injection of a combination of ketamine and xylazine (120 mg/kg and 16 mg/kg, respectively) and killed by exsanguination. A cannula was inserted into the trachea and whole lungs were lavaged 3 times with 300 µL PBS. The recovered BAL fluid was plated on Columbia blood agar (Becton Dickinson, Heidelberg, Germany), chocolate agar, MacConkey agar, prereduced Schaedler agar and kanamycin-vancomycin blood agar plates (bioMérieux, Nürtingen, Germany). After 48 h of incubation at 37 °C, colony forming units were counted and classified by MALDI-TOF mass spectrometry (Bruker Daltonik, Bremen, Germany). In addition, 16S rRNA PCR was performed to detect non-culturable bacterial species [63].
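Referring to the mRNA Expression Analysis subsection above, relative fold changes normalized to Actb are commonly obtained with the 2^(−ΔΔCt) method; the short Python sketch below assumes this standard calculation (it is not stated explicitly in the methods) and uses invented Ct values purely for illustration.

    def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
        """Relative expression by the 2^(-ddCt) method, normalized to a reference gene."""
        d_ct_sample = ct_target_sample - ct_ref_sample   # e.g. Muc5b vs. Actb, Nedd4-2-/- lung
        d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl         # e.g. Muc5b vs. Actb, control lung
        dd_ct = d_ct_sample - d_ct_ctrl
        return 2.0 ** (-dd_ct)

    # invented example Ct values: a lower target Ct in the knockout indicates higher expression
    print(fold_change(ct_target_sample=22.0, ct_ref_sample=18.0,
                      ct_target_ctrl=25.0, ct_ref_ctrl=18.0))   # -> 8.0-fold increase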
Statistical Analysis

All data are shown as mean ± S.E.M. Data were analyzed with GraphPad Prism version 7 (GraphPad Software Inc, La Jolla, CA, USA). Distribution of data was assessed with the Shapiro-Wilk test for normality. For comparison of two groups, an unpaired two-tailed t-test or the Mann-Whitney test was used, as appropriate. Comparison of more than two groups with normally distributed data was performed with one-way ANOVA followed by Tukey's post hoc test. Genotype frequency was analyzed by χ² test. Survival was compared using the log-rank test. A p value < 0.05 was considered to indicate statistical significance.
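The two-group testing strategy described above (normality check, then a parametric or nonparametric test) can be sketched in Python as follows; the 0.05 threshold for the Shapiro-Wilk step and the example data are assumptions of this illustration, not values taken from the study.

    import numpy as np
    from scipy import stats

    def compare_two_groups(a, b, alpha_normality=0.05):
        """Unpaired t-test if both groups pass Shapiro-Wilk, otherwise Mann-Whitney."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        _, p_norm_a = stats.shapiro(a)
        _, p_norm_b = stats.shapiro(b)
        if p_norm_a > alpha_normality and p_norm_b > alpha_normality:
            test_name = "unpaired two-tailed t-test"
            _, p = stats.ttest_ind(a, b)
        else:
            test_name = "Mann-Whitney test"
            _, p = stats.mannwhitneyu(a, b, alternative="two-sided")
        return test_name, float(p)

    # placeholder data, e.g. BAL neutrophil counts in controls vs. congenital Nedd4-2-/- mice
    name, p = compare_two_groups([1, 2, 1, 3, 2], [8, 12, 9, 15, 11])
    print(name, "p =", p)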
Leptons, quarks, and their antiparticles from a phase-space perspective

It is argued that antiparticles may be interpreted in macroscopic terms without explicitly using the concept of time and its reversal. The appropriate framework is that of nonrelativistic phase space. It is recalled that a quantum version of this approach leads also, alongside the appearance of antiparticles, to the emergence of 'internal' quantum numbers identifiable with weak isospin, weak hypercharge and colour, and to the derivation of the Gell-Mann-Nishijima relation, while simultaneously offering a preonless interpretation of the Harari-Shupe rishon model. Furthermore, it is shown that - under the assumption of the additivity of canonical momenta - the approach entails the emergence of string-like structures resembling mesons and baryons, thus providing a different starting point for the discussion of quark unobservability.

Charge conjugation and time

In the Standard Model (SM), elementary particles are grouped into multiplets of various symmetry groups such as SU(2), SU(3), etc. Particles and antiparticles then belong to complex conjugate representations (e.g. when coloured quarks are assigned to representation 3 of SU(3) C , the antiquarks are assigned to 3*). With the standard particle theory formulated on the background of classical space and time, the concepts of complex conjugation and time reversal are closely related. Accordingly, Stückelberg and Feynman interpreted the antiparticles as particles 'moving backwards in time'. The spacetime-based description of Reality provided by the Standard Model is very successful. Yet, there are many questions that go beyond what the SM was devised to answer. These include: why the physical quantities that enter into the SM as parameters have their specific values (masses of elementary particles or mixing angles between particle generations), what is the origin of various internal quantum numbers, or how to describe both elementary particles and gravity in a single framework. The latter issue suggests that the standard particle theory should give way to an approach in which spacetime no longer serves as a background but becomes a dynamical structure, in line with general relativity ideas. In fact, many physicists have argued that we need an approach in which macroscopic classical time and space emerge - in the limit of a large interconnected structure - out of a simple timeless and alocal quantum level [2,3]. The description of this level is still expected to be complex, as quantum descriptions are supposed to be [4]. Can one then interpret classically the expected presence of particles and antiparticles without explicitly using the concept of macroscopic time? In order to deal with this question, we find it appropriate to recall first how the classical concept of background time was originally introduced.

Time and change

In classical approaches with background time, position is viewed as a function of time x(t), while its change δx is thought of as occurring in time, 'due' to its increase: δx = (dx/dt) δt. However, a deeper insight undermines this concept of background time. Rather, it views time as an effective parameter that somehow parametrizes change, as stated by Ernst Mach (see motto).
In [5] Barbour described how energy conservation (in an isolated system) serves as a key ingredient that leads to a definition of an increment of ephemeris time through observed changes (Eq. (1)), where δx i are measured changes in the positions of astronomical bodies, E is the total (and fixed) energy and V is the gravitational potential of all interacting bodies of the system. Thus, the (astronomical) time is defined by change, not vice versa. A more neutral expression of the relation between an increment of time δt and changes of position δx i is that the two are correlated. It is then up to us to decide which of the two alternative formulations to choose: change in (background) time, or time defined from (observed) change. Now, in full analogy with Eq. (1), it is natural to consider an analogous time-defining relation in which P is the total (and conserved) momentum of the system. For the sake of our discussion, we restrict the above equation to the one-particle case and observe that, again, one may view this relation in two ways: 1) either with momentum p calculated from the change of position δx in a given increment of (background) time δt, or 2) with time increment δt calculated from given p and δx. The latter standpoint - in which momentum is not calculated from a change in position but assumed as independent of position - is familiar from the Hamiltonian formalism, in which momenta and positions are treated as independent variables. Thus, the idea of time induced by change should be expressible in the language of phase space, in which the macroscopic arena is not 3- but 6-dimensional. Furthermore, with independent p and x one may consider various independent transformations of p and x, for example, study the symmetries of the time-defining equation Eq. (3). Then, standard 3D reflection corresponds to (p, x) → (−p, −x) (and leaves time untouched), while the operation (p, x) → (p, −x) leads to time reflection: t → −t. Moving now to the quantum description, we observe that the canonical commutation relations naturally involve the imaginary unit (we use units in which ħ = 1): [x k , p l ] = i δ kl (Eq. (4)). In spite of containing i, Eq. (4) does not involve time explicitly. Consequently, since charge and complex conjugations are related, it should be possible to give an interpretation to the particle-antiparticle degree of freedom using the phase-space concepts of positions and momenta alone, i.e. without referring to the concept of explicit time. Charge conjugation should then be seen not only as connected to time reversal, but also as just one of several possible transformations in phase space.

Noncommuting phase space

In the standard picture with background spacetime there is a connection between the properties of the background (e.g. under 3D rotation) and the existence of the 'spatial' quantum numbers (e.g. spin). Therefore, if one is willing to enlarge the arena to that of phase space, with independent position and momentum coordinates, one might expect the appearance of additional quantum numbers [6]. From the point of view of the standard (3D space + time) formalism, such quantum numbers would necessarily appear 'internal'.
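Returning to the time-defining relation discussed in the 'Time and change' subsection, its one-particle form can be written schematically as (the explicit expression below is an assumption of this illustration, since the display equations are not reproduced in this excerpt)
\[ \delta t = \frac{m\,\delta x}{p}, \]
so that the reflection (p, x) → (−p, −x) leaves δt unchanged, whereas (p, x) → (p, −x) sends δt → −δt, i.e. it acts as time reflection, in agreement with the statements above.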
Born reciprocity

The issue of a possible relation between particle properties (such as quantum numbers or masses) and the concept of phase space was of high concern already to Max Born. In his 1949 paper [7] he discussed the difference between the concepts of position and momentum for elementary particles and noted that the notion of mass appears in the relation p 2 = m 2 , while x 2 , the corresponding invariant in coordinate space (with x 2 of atomic dimensions), does not seem to enter in a similar relation. At the same time, Born stressed that various laws of nature, such as Hamilton's equations, are invariant under the 'reciprocity' transformation x → p, p → −x. Noting that the relation p 2 = m 2 is not invariant under this transformation, he concluded: 'This lack of symmetry seems to me very strange and rather improbable'. The simplest phase-space generalization of the 3D (nonrelativistic) concepts of rotation and reflection is obtained with a symmetric treatment of the two O(3) invariants, x 2 and p 2 , via their addition, i.e. by considering the combination p 2 + x 2 (Eq. (7)); obviously, this procedure requires the introduction of a new fundamental constant of Nature of dimension [momentum/distance]. Eq. (7) is invariant under O(6) transformations and under Born reciprocity in particular. We may now treat x and p as operators and require their commutators to be form invariant. The original O(6) symmetry is then reduced to U(1) ⊗ SU(3). The appearance of the symmetry group present in the Standard Model leads us to ask whether phase-space symmetries could possibly lie at the roots of SM symmetries. As shown in [6] and also argued below, this seems quite possible.

Table 1. Decomposition of eigenvalues of Y into eigenvalues of its components.

Dirac linearization

Our present theories are standardly divided into classical and quantum ones. Yet, as stressed by Finkelstein, the latter should be more appropriately regarded as belonging to a mixed, classical-quantum type. In Finkelstein's view, the classical (c) and classical-quantum (cq) theories should give way to a purely quantum (q) approach in which infinities would not appear and the concept of infinite and infinitely divisible background space would no longer exist [3]. This view is clearly in line with the arguments for a background-independent approach to quantum gravity. The Dirac linearization prescription may be thought of as a procedure that has led us from the classical description of Nature to part of its quantum description. Indeed, the linearization of p 2 leads to the appearance of Pauli matrices, which describe spin at the quantum level. It is therefore of great interest to apply Dirac's idea to the phase-space invariant of Eq. (7). Using anticommuting matrices A k and B k (k = 1, 2, 3) (for more details, see [6]), one finds that the square of the linearized combination A · p + B · x consists of two terms. The first term on the r.h.s., R = p 2 + x 2 , appears thanks to the anticommutation properties of A k and B l . The other term, R σ , appears because x k and p k do not commute. These two terms sum up to a total R tot = R + R σ . When viewed from Finkelstein's perspective, the invariant then connects the cq-level of phase space (noncommuting positions x and momenta p) with the presumably purely q-level structure: the Clifford algebra built from the matrices A and B.

Gell-Mann-Nishijima relation

Just as R is quantized, so is R σ . Thus, we have to find the eigenvalues of R σ . For better correspondence with the standard definitions of internal quantum numbers, we introduce the operator Y, built out of components Y k with the help of B 7 , the 7-th anticommuting element of the Clifford algebra. Since the Y k commute among themselves, they may be simultaneously diagonalized. The eigenvalues of Y k (k = 1, 2, 3) are ±1/3. The resulting pattern of possible eigenvalues of Y is shown in Table 1.
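To make the pattern behind Table 1 explicit, and assuming (as the caption suggests) that Y is simply the sum of its three components, the possible eigenvalues follow from adding three values ±1/3:
\[ Y = Y_1 + Y_2 + Y_3, \qquad Y_k = \pm\tfrac{1}{3} \;\;\Longrightarrow\;\; Y \in \{-1,\,-\tfrac{1}{3},\,+\tfrac{1}{3},\,+1\}, \]
with Y = −1 arising in a single way (all three components equal to −1/3) and Y = +1/3 arising in three distinct ways (exactly one component equal to −1/3), matching the single lepton and the three colour states of a quark discussed below.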
Table 2. Rishon structure of leptons and quarks with I 3 = +1/2.

In [6] a conjecture was put forward that the electric charge Q is proportional to the operator R tot B 7 , evaluated for the lowest level of R, which yields Q = I 3 + Y/2, where R lowest = (p 2 + x 2 ) lowest = 3 and I 3 = B 7 /2. The above equation is known under the name of the Gell-Mann-Nishijima relation (with I 3 , of eigenvalues ±1/2, known as weak isospin, and Y, of eigenvalues −1 and +1/3, known as weak hypercharge) and is considered to be a law of nature. It summarizes the pattern of charges of all eight leptons and quarks from a single SM generation. In the phase-space approach it is derived as a consequence of phase-space symmetries.

Harari-Shupe rishons

As shown in [6], the pattern in which the weak hypercharge Y is built out of 'partial hypercharges' Y k corresponds exactly to the pattern in which electric charges are built in the Harari-Shupe (HS) model of quarks and leptons [8]. The HS approach describes the structure of a SM generation with the help of a composite model: it builds all eight fermions of a single generation from two spin-1/2 'rishons' V and T of charges 0 and +1/3. The proposed structure of leptons and quarks is shown in Table 2. Our phase-space approach not only reproduces exactly the successful part of the rishon structure, but it also removes all the main shortcomings of the HS model. In particular, the approach is preonless, i.e. the phase-space 'rishons' are components of charge (hypercharge) only, with no interpretation in terms of spin-1/2 subparticles. Thus, there is no problem of rishon confinement. Consequently, our leptons and quarks are viewed as pointlike, in perfect agreement with experimental knowledge. In summary, the phase-space approach explains the origin of the observed symmetries without introducing any subparticles. One might object that the nontrivial combination of spatial and internal symmetries is forbidden by the Coleman-Mandula no-go theorem [9]. Yet, this theorem is neatly evaded by our construction: the theorem works at the S-matrix level, while quarks are to be confined, as we expect and as will be argued later on. Furthermore, no additional dimensions (standardly understood) have actually been added in our framework. The only change was a shift in the conceptual point of view: instead of a picture based on 3D space and time, we decided to view the world in terms of a picture based on the 6D arena of canonically conjugated positions and momenta.
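For reference, the charge pattern encoded by the Gell-Mann-Nishijima relation can be checked with the standard assignments (the explicit numbers below are textbook values, quoted here only as a cross-check, not taken from the paper's tables):
\[ Q = I_3 + \tfrac{Y}{2}: \qquad \nu:\ \tfrac{1}{2} - \tfrac{1}{2} = 0, \qquad e:\ -\tfrac{1}{2} - \tfrac{1}{2} = -1, \qquad u:\ \tfrac{1}{2} + \tfrac{1}{6} = +\tfrac{2}{3}, \qquad d:\ -\tfrac{1}{2} + \tfrac{1}{6} = -\tfrac{1}{3}. \]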
Compositeness and additivity

The linearized phase-space approach suggests that the Clifford algebra of nonrelativistic phase space occupies an important place in our description of leptons and quarks (see also [10]). Thus, one would like to use it also in a description of composite systems, in particular in the description of hadrons as composite systems of quarks. Unfortunately, it is not clear how to achieve this goal. Yet, the q-level construction in question must be related to the c-level description of Nature and, consequently, should be interpretable at the purely classical level. Indeed, as Niels Bohr said: 'However far the phenomena transcend the scope of classical physical explanation, the account of all evidence must be expressed in classical terms'. In fact, Eq. (9) provides the required connection through which transformations between leptons, quarks, and their antiparticles are related to those in phase space. Now, in any description of composite systems, be it at the classical or the quantum level, an important ingredient is provided by the tacitly assumed concept of additivity. Indeed, additivity is assumed both at the quark level (e.g. additivity of spins or flavour quantum numbers) and at the classical level (e.g. additivity of momenta). The question then appears: can this property of the additivity of momenta at the classical level be somehow used to infer the properties of such q-level objects as our quarks when these are viewed from the macroscopic classical perspective? The relevant basic macroscopic observation is that for any direction in 3D space, and completely regardless of what happens in the remaining two directions, we have the additivity of the physical momenta of any number of ordinary particles or antiparticles, quite irrespectively of their internal quantum numbers (e.g. P z = Σ i p iz ). (By ordinary particles we mean those that can be observed individually, such as leptons and hadrons, but not quarks.) Note that the positions of ordinary particles are not additive in such a simple way, since a composite object is best described in terms of its center-of-mass coordinates: X z = Σ i m i x iz / Σ i m i . In the following, we shall present the connections between lepton-quark and phase-space transformations and discuss their implications for the concept of the additivity of momenta.

Charge conjugation

In a quantum description the transition from particles to antiparticles is effected by complex conjugation. Consider now a system of particles in which some are being transformed into antiparticles. We want to find the related transformation in phase space. We note that in order to preserve the principle of the additivity of physical momenta, one is not allowed to change the momenta of any of the particles being transformed. The invariance of [x k , p l ] = iδ kl then requires that, under i → −i, the momenta are left unchanged while the positions are reflected: p k → p k , x k → −x k . Using the invariance of Eq. (9), the ensuing transformations of A k and B k , and the definition of Q, I 3 , and Y, one can check [11] that in this way we are indeed led from particles to antiparticles. Thus, in the phase-space picture, with time regarded as secondary, the antiparticles are related to particles via i → −i combined with the reflection of position space (with the time reflection t → −t induced via the invariance of Eq. (3)).

Isospin reversal

Similarly, one checks [11] that isospin reversal I 3 → −I 3 corresponds to the transformation A k → A k , B k → −B k , and i → i, and, consequently, to p k → p k and x k → −x k . Yet, this is not the same as charge conjugation, since in this case, with i → +i, the momentum-position commutation relations do not stay invariant. In summary, the phase-space representations of both the particle-antiparticle transformation and of the isospin reversal may be (and have been) chosen in a way that does not affect the momenta of any of the particles, thus preserving their additivity.
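As a consistency check of the charge-conjugation choice quoted above (momenta unchanged, positions reflected, i conjugated), one can verify directly that the canonical commutation relation keeps its form; in symbols,
\[ [x_k, p_l] = i\,\delta_{kl} \;\;\longrightarrow\;\; [-x_k, p_l] = -\,i\,\delta_{kl} = (-i)\,\delta_{kl}, \]
which is the original relation written with the conjugated imaginary unit; the isospin reversal, which reflects x while leaving i untouched, fails this test, exactly as stated above.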
Lepton-to-quark transformations

Transformations from the lepton to the quark sector (in particular, the change Y = −1 → Y = +1/3) require the use of SO(6) rotations going outside those generated by the familiar 1 + 8 generators of U(1) × SU(3). The remaining six of the fifteen SO(6) generators form two SU(3) triplets, of which only one actually leads to transformations of some A k into B m , while keeping i and I 3 fixed [6]. Under these transformations, A is changed into some A Qn , and B into some B Qn , with n = 1, 2, 3 being the colour index [11]. For the transformed elements A Qn (and similarly for B Qn ) one can then choose a (not unique) representation (Eq. (13)), with the quark canonical momenta P Qn (and likewise the quark canonical positions X Qn ) obtained from the condition of the invariance of Eq. (9) (Eq. (14)). The above forms of A Q and P Q remain unchanged for quarks of opposite isospin. For the antiquarks, the relevant forms are given in Eqs. (15) and (16). Again, these forms are independent of isospin. As expected, the difference between quarks and antiquarks is represented by a change in sign in front of the physical positions entering into the definitions of the quark (antiquark) canonical momenta. If the additivity of the canonical momenta of quarks is a proper generalization of the additivity of physical momenta for leptons and other individually observable particles, then - on account of the relative (positive and negative) signs between the position components in (14) and (16) - this additivity (separately in each of the relevant phase-space directions) leads to translationally invariant string-like expressions for quark-antiquark and three-quark systems (i.e. expressions involving differences of the physical positions of the quark and the antiquark in a quark-antiquark system, and analogous combinations for q 1 q 2 q 3 ), but not for qq or qqqq systems. Additivity of canonical momenta therefore leads to the formation of 'mesons' and 'baryons' only. In principle, the chain of arguments leading to Eqs. (13) and (14) could involve ordinary reflections, e.g. (A 1 , −B 3 , +B 2 ) → (A 1 , +B 3 , +B 2 ), before putting the latter expression and its cyclic counterparts together into a matrix form similar to (13) ('grouping'). The corresponding phase-space counterparts would then look somewhat different, i.e. (p 1 , −x 3 , +x 2 ) → (p 1 , +x 3 , +x 2 ), etc. The translational invariance of three-quark systems could then not be achieved by simply adding the appropriate canonical momenta of different quarks, because all physical position coordinates would enter an analog of (14) with positive signs. The point, however, is that one may choose the ordering of the operations of grouping and reflection in such a way that - by the simple procedure of addition - translational invariance can be achieved at all. Furthermore, it seems nontrivial that this requires the collaboration of the phase-space representatives of quarks of three different colours. After all, the latter were originally defined - in a way seemingly independent of the concept of additivity - via a diagonalization procedure performed at the Clifford algebra level. In other words, the q-level structure of coloured quark charges corresponds to a specific picture in the macroscopic arena. According to this picture, individual quarks are not observable at the classical level since their individual canonical momenta are not translationally invariant. On the other hand, translational invariance may be restored via the collaboration of quarks of different colours. When viewed from our classical point of view, the resulting composite systems possess standard particle-like properties, while at the same time exhibiting internal string-like features. Thus, quark unobservability is supposed to be connected to the very emergence and nature of space and time. Speaking more precisely, quarks are supposed to be unobservable because space and phase space are most probably just convenient classical abstractions, into the descriptive corset of which we try to force various pieces of Reality. The often-used argument that 'space is standard' at distances a few orders of magnitude smaller than the proton's size is not sufficiently sound.
After all, the existence of long-distance nonlocal quantum correlations indicates that - at least for some purposes - our classical spacetime concepts (into which we try to force our descriptions of elementary particles) are inadequate at much larger scales. In fact, statements about an 'unchanged nature of space' at a distance of 10 −18 m or so follow from the success of the Standard Model (a cq-level theory) and are strictly valid only within the description it provides, not outside of it (e.g. not in a q-level theory in which space is to be an emergent concept only). The above discussion suggests that the phase-space approach has the capability of describing the phenomenon of quark unobservability in a way seemingly different from the SM flux-tube picture of confinement. In fact, however, the phase-space approach does not have to be in conflict with the latter picture, just as the Faraday picture of 'real' fundamental field lines is not in conflict with the Maxwell concept of fields. Rather, we regard the idea of the linearized phase space as offering a possible q-level starting point. Our discussion is based on the invariance of Eq. (9), which provides a link between the q- and cq- (c-) levels of description. Obviously, however, Eq. (9) does not specify how the macroscopic background phase space actually emerges from the underlying quantum level. Hence, at present there is no way to compare our ideas directly with the standard, background-dependent, QCD-based picture of confinement.

Conclusions

The linearized phase-space approach differs markedly from the standard frameworks, in particular from the SU(5)-based unifications. A brief comparison of some differences between the two schemes is given in Table 3. In the author's opinion, the SU(5) approach lacks a solid philosophical background. On the other hand, the phase-space approach, although overly simplistic, satisfies an important philosophical condition: the necessity to connect the q-level description of elementary particles to the c-level description of the macroscopic world. One can hear the echoes of this condition in the words of Roger Penrose, who stated in [12]: 'I do not believe that a real understanding of the nature of elementary particles can ever be achieved without a simultaneous deeper understanding of the nature of spacetime.' The phase-space approach provides a possible theoretical explanation of the structure of a single generation of the Standard Model. It gives us a tentative (pre)geometric interpretation of the origin of the Gell-Mann-Nishijima relation. It reproduces the structure of the Harari-Shupe preon model without actually introducing any preons at all, in line with the standard pointlike description of leptons and quarks. The picture offered by the Clifford algebra of nonrelativistic phase space need not be regarded as 'the' q-level approach. Rather, it should be thought of as a possible deeper layer of description only. However, it has encouraging features, which - as I believe - will show up at the classical level if derived 'in an emergent way' from any suitable q-level description. Let me therefore end by paraphrasing Penrose's opinion: I do not believe that a deeper understanding of elementary particles can be achieved without further studies of the proposed link between the elementary particles themselves and the properties and symmetries of nonrelativistic phase space. This work has been partially supported by the Polish Ministry of Science and Higher Education research project No N N202 248135.
Nonlocal Infrared Modifications of Gravity. A Review

We review an approach developed in the last few years by our group in which GR is modified in the infrared, at an effective level, by nonlocal terms associated to a mass scale. We begin by recalling the notion of quantum effective action and its associated nonlocalities, illustrating some of their features with the anomaly-induced effective actions in $D=2$ and $D=4$. We examine conceptual issues of nonlocal theories such as causality, degrees of freedom and ghosts, stressing the importance of the fact that these nonlocalities only emerge at the effective level. We discuss a particular class of nonlocal theories where the nonlocal operator is associated to a mass scale, and we show that they perform very well in the comparison with cosmological observations, to the extent that they fit CMB, supernovae, BAO and structure formation data at a level fully competitive with $\Lambda$CDM, with the same number of free parameters. We explore some extensions of these 'minimal' models, and we finally discuss some directions of investigation for deriving the required effective nonlocality from a fundamental local QFT.

Introduction

I am very glad to contribute to this Volume in honor of Prof. Padmanabhan (Paddy, to his friends), on the occasion of his 60th birthday. I will take this opportunity to give a self-contained account of the work done in the last few years by our group in Geneva, on nonlocal modifications of gravity. Our motivation comes from cosmology. In particular, the observation of the accelerated expansion of the Universe [1,2] has revealed the existence of dark energy (DE). The simplest explanation for dark energy is provided by a cosmological constant. Indeed, ΛCDM has gradually established itself as the cosmological paradigm, since it accurately fits all cosmological data, with a limited set of parameters. From a theoretical point of view, however, the model is not fully satisfying, because a cosmological constant is not technically natural from the point of view of its stability under radiative corrections. Independently of such theoretical 'prejudices', the really crucial fact is that, with the present and forthcoming cosmological data, alternatives to ΛCDM are testable, and it is therefore worthwhile to explore them. At the fundamental level QFT is local, and in our approach we will not depart from this basic principle. However, both in classical and in quantum field theory, at an effective level nonlocal terms are unavoidably generated. Classically, this happens when one integrates out some degree of freedom to obtain an effective dynamics for the remaining degrees of freedom. Consider for instance a system with two degrees of freedom φ and ψ, described classically by two coupled equations of the generic form □φ = j(ψ) and □ψ = f(φ). The first equation is solved by φ = □⁻¹ j(ψ). This solution can then be re-injected into the equation for the remaining degree of freedom ψ, leading to a nonlocal equation involving only ψ. In QFT, nonlocalities appear in the quantum effective action, as we will review below. The appearance of nonlocal terms involving inverse powers of the d'Alembertian is potentially interesting from a cosmological point of view, since we expect that the □⁻¹ operator becomes relevant in the infrared (IR).
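Spelling out the two-field example above, eliminating φ produces an equation for ψ alone that is manifestly nonlocal,
\[ \Box\phi = j(\psi), \quad \Box\psi = f(\phi) \;\;\Longrightarrow\;\; \Box\psi = f\!\left(\Box^{-1} j(\psi)\right), \]
where □⁻¹ stands for an inverse of the d'Alembertian (i.e. a Green's function), whose precise definition is part of the causality discussion taken up later in the review.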
This review is organized as follows. In Sect. 2 we recall the notion of quantum effective action, in particular in gravity, and we discuss the associated nonlocalities. In Sect. 3 we examine two particularly important nonlocal quantum effective actions, the anomaly-induced effective actions in D = 2 (i.e. the Polyakov quantum effective action) and in D = 4. In Sect. 4 we introduce a class of nonlocal theories in which the nonlocality is associated to a mass scale. In Sect. 5, building also on the experience gained in Sect. 3 with the anomaly-induced effective actions, we discuss conceptual issues of nonlocal theories, such as causality and degrees of freedom, emphasizing the importance of dealing with them as quantum effective actions derived from a fundamental local QFT. In Sect. 6 we discuss how nonlocal theories can be formally put in a local form, and we examine the conceptual subtleties associated to the localization procedure concerning the actual propagating degrees of freedom of the theory. The cosmological consequences of these nonlocal models are studied in Sect. 7.1 at the level of background evolution, while in Sect. 7.2 we study the cosmological perturbations and in Sect. 7.3 we present the results of a full Bayesian parameter estimation and the comparison with observational data and with ΛCDM. In Sect. 7.4 we discuss further possible extensions of the 'minimal models', and their phenomenology. As we will see, these nonlocal models turn out to be phenomenologically very successful. The next step will then be understanding how these nonlocalities emerge. Possible directions of investigation for deriving the required nonlocality from a fundamental theory are briefly explored in Sect. 8, although this part is still largely work in progress.

Nonlocality and quantum effective actions

At the quantum level nonlocalities are generated when massless or light particles run into quantum loops. The effect of loop corrections can be summarized into a quantum effective action which, used at tree level, takes into account the effect of quantum loops. The quantum effective action is a nonlocal object. For instance in QED, if we are interested in amplitudes where only photons appear in the external legs, we can integrate out the electron. The corresponding quantum effective action Γ QED is given by Eq. (1). To quadratic order in the electromagnetic field this gives an expression governed by a form factor 1/e 2 (□) (Eq. (2)), where the form factor, to one-loop order and in the MS scheme [4], is given in Eq. (3). Here µ is the renormalization scale and e(µ) is the renormalized charge at the scale µ. In the limit |□/m 2 e | ≫ 1, i.e. when the electron is light with respect to the relevant energy scale, the form factor 1/e 2 (□) becomes
1/e 2 (□) ≃ 1/e 2 (µ) − β 0 log(−□/µ 2 ),   (4)
where β 0 = 1/(12π 2 ). The logarithm of the d'Alembertian, log(−□/µ 2 ), is a genuinely nonlocal operator. Thus, in this case the nonlocality of the effective action is just the running of the coupling constant, expressed in coordinate space. In the opposite limit |□/m 2 e | ≪ 1 the form factor (3) becomes local (Eq. (6)). Observe that the corresponding beta function, which is obtained by taking the derivative with respect to log µ, is independent of the fermion mass, so in particular in a theory with several fermions even the heavy fermions would contribute to the beta function, and would not decouple. Actually, this is a pathology of the MS subtraction scheme, and is related to the fact that, when m 2 e is large, eq. (6) develops large logarithms log(m 2 e /µ 2 ), so in this scheme perturbation theory breaks down for particles heavy with respect to the relevant energy scales.
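In momentum space the form factor reconstructed above is just the familiar one-loop running of the electromagnetic coupling; schematically (with conventions that are assumptions of this sketch rather than a quotation of the original equations),
\[ \frac{1}{e^2(\Box)} \;\longrightarrow\; \frac{1}{e^2(k^2)} \simeq \frac{1}{e^2(\mu)} - \beta_0 \log\frac{k^2}{\mu^2}, \qquad \beta_0 = \frac{1}{12\pi^2}. \]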
To study the limit |2/m 2 e | 1 it can be more convenient to use a mass-dependent subtraction scheme, such as subtracting from a divergent graph its value at an Euclidean momentum p 2 = −µ 2 . Then, in the limit |2/m 2 e | 1, so the contribution of a fermion with mass m e to the beta function is suppressed by a factor |2/m 2 e |, so the decoupling of heavy particles is explicit [5]. 1 Thus, using a mass-dependent subtraction scheme, the effect of a heavy fermion with mass m e , at quadratic order in the fields, is to produce the local higher-derivative operator F µν 2F µν , suppressed by a factor 1/m 2 e . Adding to this also the terms of order F 4 µν gives the well-known local Euler-Heisenberg effective action (see e.g. [6] for the explicit computation), valid in the limit |2/m 2 e | 1, To sum up, nonlocalities emerge in the quantum effective action when we integrate out a particle which is light compared to the relevant energy scale. In contrast, heavy particles give local contributions which, if computed in a mass-dependent subtraction scheme, are encoded in higher-dimension local operators suppressed by inverse powers of the particle mass. The quantum effective action is a particularly useful tool in gravity, where the integration over matter fields gives the quantum effective action for the metric (see e.g. [7][8][9][10] for pedagogical introductions). Let us denote collectively all matter fields as φ , and the fundamental matter action by S m [g µν , φ ]. Then the quantum effective action Γ is given by where S EH is the Einstein-Hilbert action. 2 The effective quantum action Γ determines the dynamics of the metric, including the backreaction from quantum loops of matter fields. Even if the fundamental action S m [g µν , φ ] is local, again the quantum effective action for gravity is unavoidably nonlocal. Its nonlocal part describes the running of coupling constants, as in eq. (2), and other effects such as particle production in the external gravitational field. The matter energy-momentum tensor T µν is given by the variation of the fundamental action, according to the standard GR expression T µν = (2/ √ −g)δ S m /δ g µν . In contrast, the variation of the effective quantum action gives the vacuum expectation value of the energy-momentum tensor, More precisely, the in-out expectation value 0 out |T µν |0 in is obtained when the path-integral in eq. (9) is the standard Feynman path-integral, while using the Schwinger-Keldish path integral gives the in-in expectation value 0 in |T µν |0 in . This point will be important for the discussion of the causality of the effective nonlocal theory, and we will get back to it in Sect. 5.1. In principle, in eq. (9) one could expand g µν = η µν + h µν and compute perturbatively in h µν . A much more powerful and explicitly covariant computational method is based on the heat-kernel technique (see e.g. [9] for review), combined with an expansion in powers of the curvature. In this way Barvinsky and Vilkovisky [11,12] have developed a formalisms that allows one to compute, in a covariant manner, the gravitational effective action as an expansion in powers of the curvature, including the nonlocal terms. The resulting quantum effective action, up to terms quadratic in the curvature, has the form where m Pl is the reduced Planck mass, m 2 Pl = 1/(8πG), C µνρσ is the Weyl tensor, and we used as a basis for the quadratic term R 2 , C µνρσ C µνρσ and the Gauss-Bonnet term, that we have not written explicitly. Just as in eq. 
(4), in the case of loops of massless particles the form factors k R (2) and k W (2) only contain logarithmic terms plus finite parts, i.e. k R,W (2) = c R,W log(2/µ 2 ), where now 2 is the generally-covariant d'Alembertian, µ is the renormalization point, and c R , c W are known coefficients that depend on the number of matter species and on their spin. The form factors generated by loops of a massive particles are more complicated. For instance, for a massive scalar field with mass m s and action the form factors k R (−2/m 2 s ) and k W (−2/m 2 s ) in eq. (11) were computed in [13,14] in closed form, for (2/m 2 s ) generic, in a mass-dependent subtraction scheme where the decoupling of heavy particles is explicit. After subtracting the divergent part, the result is whereξ = ξ − (1/6), and In the limit |2/m 2 s | 1 (i.e. in the limit in which the particle is very light compared to the typical energy or curvature scales), eq. (14) has the expansion and similarly for k W . This result has also been re-obtained with effective field theory techniques [15][16][17]. Similar results can also be obtained for different spins, so in the end the coefficients α, β , γ, δ depend on the number and type of massive particles. The result further simplifies for a massless conformally-invariant scalar field. Taking the limit m s → 0, ξ → 1/6 in eq. (11) one finds that the terms involving log m 2 s cancel and the form factor k R (2) becomes local, k R = −1/1080, while k W (2) → −(1/60) log(−2/µ 2 ). Similar results, with different coefficients, are obtained from massless vectors and spinor fields. So, for conformal matter, the oneloop effective action has the form where c 1 , c 2 are known coefficients that depends on the number and type of conformal matter fields, and we have stressed that the computation leading to eq. (17) has been performed only up to terms quadratic in the curvature. In contrast, when the particle is heavy compared to the relevant energy or curvature scales, i.e. in the limit −2/m 2 s 1, the form factors in eqs. (13) and (14) become local, Again, this expresses the fact that particles which are massive compared to the relevant energy scale decouple, leaving a local contribution to the effective ac-tion proportional to higher derivatives, and suppressed by inverse powers of the mass. This decoupling is explicit in the mass-dependent subtraction scheme used in refs. [13,14]. The anomaly-induced effective action In a theory with massless, conformally-coupled matter fields, in D = 2 space-time dimensions, the quantum effective action can be computed exactly, at all perturbative orders, by integrating the conformal anomaly. In D = 4 one can obtain in this way, again exactly, the part of the quantum effective action that depends on the conformal mode of the metric. These examples of quantum effective actions for the gravitational field will be relevant for us when we discuss how the nonlocal models that we will propose can emerge from a fundamental local theory. They also provide an explicit example of the fact that effective quantum actions must be treated differently from fundamental QFT, otherwise one might be fooled into believing that they contain, e.g., ghostlike degrees of freedom, when in fact the fundamental theories from which they are derived are perfectly healthy. We will then devote this section to recalling basic facts on the anomaly-induced effective action, both in D = 2 and in D = 4 (see e.g. [7-10, 18, 19] for reviews). 
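For orientation, the structure of the conformal-matter result referred to as eq. (17) can be summarized schematically as
\[
\Gamma_{\rm one\text{-}loop}\Big|_{\rm conformal} = \int d^4x\,\sqrt{-g}\,\Big[\,c_1\,R^2 \;+\; c_2\,C_{\mu\nu\rho\sigma}\,\log\!\big(-\Box/\mu^2\big)\,C^{\mu\nu\rho\sigma}\Big] \;+\;\ldots\,,
\]
valid up to terms quadratic in the curvature; the precise values and signs of $c_1$ and $c_2$ depend on the number and type of conformal fields and on conventions, so this should be read as a sketch of the structure rather than as the equation itself. The point to retain is that for conformal matter the $R^2$ term is local (the form factor $k_R$ degenerates to a pure number, e.g. $-1/1080$ for a conformally coupled scalar), while the Weyl-squared term keeps the $\log(-\Box/\mu^2)$ form factor that encodes the running of the corresponding coupling.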
The anomaly-induced effective action in D = 2 Consider 2D gravity coupled to N s conformally-coupled massless scalars [i.e. m s = 0 and ξ = 1/6 in eq. (12)] and N f massless Dirac fermions. We take these fields to be free, apart from their interaction with gravity. For conformal matter fields, classically the trace T a a of the energy-momentum tensor vanishes [in D = 2 we use a = 0, 1 as Lorentz indices, and signature η ab = (−, +)]. However, at the quantum level the vacuum expectation value of T a a is non-zero, and is given by where N = N s + N f . Equation (19) is the trace anomaly. The crucial point about this result is that, even if it can be obtained with a one-loop computation, it is actually exact. 3 No contribution to the trace anomaly comes from higher loops. We can now find the effective action that reproduces the trace anomaly, by integrating eq. (10). We write whereḡ ab is a fixed reference metric. The corresponding Ricci scalar is where the overbars denotes the quantities computed with the metricḡ ab . In D = 2, eq. (10) gives Therefore where T a a = g ab T ab . In D = 2, without loss of generality, locally we can always write the metric as g ab = e 2σ η ab , i.e. we can choseḡ ab = η ab . In this case, from eq. (21), where 2 η is the flat-space d'Alembertian, 2 η = η ab ∂ a ∂ b . Then, inserting eq. (19) into eq. (23) and using √ −g = e 2σ , we get This can be integrated to obtain We see that, in general, the trace anomaly determines the effective action only modulo a term Γ [0] independent of the conformal mode. However, in the special case D = 2, when σ = 0 we can choose the coordinates so that, locally, g ab = η ab . Thus, all curvature invariants vanish when σ = 0, and therefore Γ [0] = 0. Therefore, in D = 2 the trace anomaly determines exactly the quantum effective action, at all perturbative orders! Finally, we can rewrite this effective action in a generally-covariant but non-local form observing that 2 g = e −2σ 2 η , where 2 g is the d'Alembertian computed with the full metric g ab = e 2σ η ab . Then, from eq. (24), R = −22 g σ , which can be inverted to give σ = −(1/2)2 −1 g R, so that This is the Polyakov quantum effective action. The remarkable fact about this effective quantum action is that, even if it has been obtained from the one-loop computation of the trace anomaly, it is the exact quantum effective action, to all perturbative orders. In the above derivation we have studied matter fields in a fixed gravitational background. We now add the dynamics for the metric itself, i.e. we consider 2D gravity, including also a cosmological constant λ , coupled to N massless matter fields, where S m is the the action describing N = N S + N F conformally-coupled massless scalar and massless Dirac fermion fields. In 2D the Einstein-Hilbert term is a topological invariant and, once we integrate out the massless matter field, all the gravitational dynamics comes from the anomaly-induced effective action. The contribution of the N matter fields is given by the Polyakov effective action (27). Diff invariance fixes locally g ab = e 2σḡ ab , whereḡ ab is a reference metric. In a theory with dynamical gravity, where in the path integral we also integrate over g ab , this is now a gauge fixing condition, and the corresponding reparametrization ghosts give a contribution −26 to be added to N, while the conformal factor σ gives a contribution +1 [20][21][22]. 
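Before following the conformal mode further, it may help to retrace compactly how eq. (27) was obtained (a sketch, with the conventions of Sect. 2; the normalization of the anomaly coefficient is fixed by consistency with the value $c=-N/(96\pi)$ used later):
\[
\langle T^a_{\ a}\rangle=\frac{N}{24\pi}\,R\,,\qquad
g_{ab}=e^{2\sigma}\eta_{ab}\ \Rightarrow\ \sqrt{-g}=e^{2\sigma}\,,\quad R=-2e^{-2\sigma}\Box_\eta\sigma\,,
\]
\[
\frac{\delta\Gamma}{\delta\sigma(x)}=\sqrt{-g}\,\langle T^a_{\ a}\rangle=-\frac{N}{12\pi}\,\Box_\eta\sigma
\quad\Rightarrow\quad
\Gamma[\sigma]=-\frac{N}{24\pi}\int d^2x\;\sigma\,\Box_\eta\sigma\,,
\]
and, using $\sigma=-\tfrac12\,\Box_g^{-1}R$ together with $\Box_\eta=e^{2\sigma}\Box_g$,
\[
\Gamma=-\frac{N}{96\pi}\int d^2x\,\sqrt{-g}\;R\,\Box_g^{-1}R\,,
\]
which is the Polyakov form of eq. (27). In the case of dynamical 2D gravity discussed just above, the matter contribution $N$ is replaced by $N-26+1=N-25$ once the reparametrization ghosts and the conformal mode are included.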
Then, after dropping the topologically-invariant Einstein-Hilbert term, the exact quantum effective action of 2D gravity reads with an overall factor in the nonlocal term proportional to (N − 25). 4 Using eq. (21) and dropping a σ -independent term √ −ḡ R −1 R we see that, in terms of the conformal mode, eq. (29) becomes local, which is the action of Liouville field theory. Equation (30) also allows us to illustrate an issue that will emerge later, in the context of the nonlocal model that we will propose. If we try to read the spectrum of the quantum theory from eq. (30), treating it as if it were the fundamental action of a QFT, we would conclude that, for N = 25, there is one dynamical degree of freedom, σ . Recalling that our signature is η ab = (−, +), we would also conclude that for N > 25 this degree of freedom is a ghost and for N < 25 it has a normal kinetic term. However, this conclusion is wrong. Equation (30) is the quantum effective action of a fundamental theory which is just 2D gravity coupled to N healthy fields, in which there is no ghost in the spectrum of the fundamental theory. If we perform the quantization of the fundamental theory in the conformal gauge (20), the fields involved are the matter fields, the reparametrization ghosts, and the only surviving component of the metric once we have fixed the conformal gauge, i.e. the conformal factor σ . Each of them has its own creation and annihilation operators, which generate the full Hilbert space of the theory. However, as always in theories with a local invariance (in this case diff invariance) the physical Hilbert space is a subset of the full Hilbert space. The condition on physical states can be obtained requiring that the amplitude f |i between an initial state |i and a final state | f is invariant under a change of gauge fixing (see e.g. chap. 4 of [23] for a discussion in the context of bosonic string theory). From this it follows that two states |s and |s are physical if and only if s |T ab tot |s = 0 , where T ab tot is the sum of the energy-momentum tensors of matter, ghosts and σ . This condition (or, more, precisely, the condition that physical states must by BRST invariant) eliminates from the physical spectrum both the states associated with the reparametrization ghosts, and the states generated by the creation operators of the conformal mode, as explicitly proven in [24]. Of course, the physical-state condition (31) is the analogous of the physical-state condition in the Gupta-Bleuler quantization of electrodynamics, which again eliminates from the physical spectrum the would-be ghost states associated to A 0 . What we learn from this example is that, if we start from a theory such as (30), e.g. to explore its cosmological consequences, there is a huge difference between the situation in which we take it to be a fundamental QFT, and the situation in which we consider it as the quantum effective action of some underlying fundamental theory. In the former case, in the theory (30) we would treat σ as a scalar field living in 2D, and the theory would have one degree of freedom, which is a ghost for N > 25 and a healthy scalar for N < 25, while for N = 25 there would be no dynamics at all. In contrast, when eq. (30) is treated as the effective quantum action derived from the fundamental QFT theory (28), the interpretation is completely different. The field σ is not just a scalar field living in 2D, but the component of the 2D metric that remains after gauge fixing. 
The physical spectrum of the fundamental theory is given by the quanta of the N healthy matter fields, which are no longer visible in (30) because they have been integrated out. There is no ghost, independently of the value of N, and there are no physical quanta associated to σ , because they are removed by the physical-state condition associated to the diff invariance of the underlying fundamental theory. As a final remark, observe that the fact that no physical quanta are associated to σ does not mean that the field σ itself has no physical effects. The situation is again the same as in electrodynamics, where there are no physical quanta associated to A 0 , but still the interaction mediated by A 0 generates the Coulomb potential between static charges. In other words, the quanta associated to σ (or to A 0 in QED) cannot appear in the external lines of Feynman diagram, since there are no physical states associated to them, but do appear in the internal lines. The anomaly-induced effective action in D = 4 Let us now follow the same strategy in D = 4 space-time dimensions, again for massless conformally-coupled matter fields. As we will see, in this case we will not be able to compute the quantum effective action exactly, but still we will be able to obtain valuable non-perturbative information from the trace anomaly. In D = 4 the trace anomaly is where C 2 is the square of the Weyl tensor, E the Gauss-Bonnet term, and it is convenient to use as independent combinations [E − (2/3)2R] and 2R, rather than E and 2R. The coefficients b 1 , b 2 , b 3 are known constants that depend on the number of massless conformally-coupled scalars, massless fermions and massless vector fields. Once again, the anomaly receives contribution only at one loop order, so eq. (33) is exact. Let us now write again A crucial difference compared to the 2D case is that in D = 4 diff invariance no longer allows us to setḡ µν = η µν . Equation (23) still holds, so the anomaly-induced effective action satisfies We have added the subscript 'anom' to stress that this is the part of the effective action which is obtained from the anomaly. The total quantum effective action is obtained adding Γ anom to the classical Einstein-Hilbert term. To integrate eq. (35) we first of all observe that the 2R term can be obtained from the variation of a local R 2 term, To integrate the other terms we observe that where the overbars denotes the quantities computed with the metricḡ µν , and ∆ 4 is the Paneitz operator Thus, we get where Γ anom [ḡ µν ] is an undetermined integration 'constant', i.e. a term independent of σ , equal to Γ anom [g µν ] evaluated at σ = 0. We will discuss below the possible covariantizations of the term in the second line. First, we can rewrite everything in terms of σ andḡ µν using Then Once again, the trace anomaly allowed us to determine exactly the dependence of the action on the conformal mode σ . However, we cannot determine in this way the σindependent part of the effective action, Γ anom [ḡ µν ]. This is an important difference compared to the D = 2 case, where we could show that Γ anom [ḡ ab ] = 0 using the fact that locally we can always choose g ab = η ab . 
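Spelled out, the structure described around eq. (33) is
\[
\langle T^\mu_{\ \mu}\rangle \;=\; b_1\,C^2 \;+\; b_2\Big(E-\tfrac{2}{3}\,\Box R\Big) \;+\; b_3\,\Box R\,,
\]
where $C^2=C_{\mu\nu\rho\sigma}C^{\mu\nu\rho\sigma}$ and $E$ is the Gauss-Bonnet combination (the values of $b_1$, $b_2$, $b_3$ depend on the number of conformal scalars, fermions and vectors, and are not needed here). The reason for trading $E$ for the combination $E-\tfrac23\Box R$ is essentially the one expressed by eq. (37): it is this combination whose behavior under $g_{\mu\nu}=e^{2\sigma}\bar g_{\mu\nu}$ involves the Paneitz operator $\Delta_4$, which is what makes the $\sigma$-dependence of $\Gamma_{\rm anom}$ integrable in closed form.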
In the end, the effective action must be a function ofḡ µν and σ only in the combination g µν = e 2σḡ µν , so the σindependent term Γ anom [ḡ µν ] is just the conformally-invariant part of the effective action, Γ c [g µν ], which by definition satisfies It is interesting to compare the anomaly-induced effective action (42) with the conformal limit of the explicit one-loop computation given in eq. (17). First of all, the anomaly-induced effective action has a local R 2 term, coming both from the explicit b 3 R 2 term and from the term (−2/3)b 2 σ 2R, corresponding to the two terms proportional to 2R in eq. (35). The value of its overall coefficient −[b 3 − (2/3)b 2 ]/12, obtained from the trace anomaly as a function of the number of conformal massless scalar, massless spinor and massless vector fields, agrees with the coefficient c 1 obtained from the one-loop computation, as it should. Consider now the Weyl-square term in eq. (17). Recall that eq. (17) is valid only up to second order in the curvature. Thus, strictly speaking, in the term C µνρσ log(−2/µ 2 )C µνρσ , the 2 operator is the flat-space d'Alembertian. If one would compute to higher orders in the curvature, this term should naturally become a covariant d'Alembertian acting on a tensor such as C µνρσ . The covariantization of the log(2) operator acting on such a tensor is a non-trivial problem, see the discussion in [25,26]. In any case we expect that, at least in the simple case of g µν = e 2σḡ µν with σ constant, we will have 2 g = e −2σ 2ḡ, just as for the scalar d'Alembertian. Then, The second term on the right-hand side, once multiplied by √ −g, is independent of σ and therefore belongs to Γ c [ḡ µν ]. On the other hand, the term proportional to √ −g σC 2 = √ −ḡ σC 2 is just the term proportional to b 1 in eq. (42). Once again, one can check that the numerical value of the coefficient from the explicit one-loop computation and from the trace anomaly agree. We see that the anomaly-induced effective action and the explicit one-loop computation give complementary information. The anomaly-induced effective action misses all terms independent of σ , such as the term proportional to C µνρσ log(− /µ 2 )C µνρσ that gives the logarithmic running of the coupling constant associated to C 2 . However, the terms that depend on the conformal mode are obtained exactly, without any restriction to quadratic order in the curvature. One can now look for a covariantization of eq. (40), in which everything is written in terms of g µν = e 2σḡ µν . In general, the covariantization of an expression is not unique. A possible covariantization is given by the Riegert action [27] Just as for the Polyakov action, even if the anomaly-induced action is local when written in terms of the conformal factor, it becomes nonlocal when written in terms of curvature tensors. In this covariantization, as we have seen, the log 2 form factor in eq. (17) is not really visible since the term C µνρσ log(− /µ 2 )C µνρσ is hidden in Γ conf [g µν ]. Alternative ways of covariantizing the log 2 operator are discussed in [25,26]. In any case, in the approximation in which one is interested only in the dynamics of the conformal mode one can use the effective action in the form (42), simply dropping the σ -independent term Γ [ḡ µν ], independently of the covariantization chosen. Once again, if one uses eq. (42) as if it were a fundamental QFT, one would reach the conclusion that this theory contains a ghost. 
This would be an unavoidable consequence of the presence of the four-derivative term $\sigma\Delta_4\sigma$ in eq. (42) which, expanding over flat space and after integrations by parts, is simply $(\Box\sigma)^2$. As a fundamental QFT, the theory defined by eq. (42) would then be hopelessly sick. In contrast, we have seen that eq. (42) is the quantum effective action derived from a fundamental and healthy quantum theory, with no ghost. One could still wonder whether the appearance of a four-derivative term $\sigma\Delta_4\sigma$ signals the fact that a new ghost-like state emerges in the theory because of quantum fluctuations. To answer this question one can quantize the theory (42) and see which states survive the physical-state condition, analogous to eq. (31) in $D=2$, which reflects the diff-invariance of the underlying fundamental theory. This analysis has been carried out in [28], and it was found that, once one imposes the physical-state condition, there is no local propagating degree of freedom associated to $\sigma$. Rather, we remain with an infinite tower of discrete states, one for each level, all with positive norm. In the limit $Q^2/(4\pi)^2 \equiv -2b_2 \to \infty$, these states have the form $\int d^4x\,\sqrt{-g}\,R^n\,|0\rangle$.

Nonlocality and mass terms

In this section we introduce a class of nonlocal theories where the nonlocality is associated to a mass term. In Sect. 5, using also the experience gained with the study of the anomaly-induced effective action, we will discuss some conceptual issues (such as causality and ghosts) in these theories. A different class of nonlocal models, which do not feature an explicit mass scale, has been introduced in [29,30] and reviewed in [31]. In this review we will rather focus on the nonlocal models where the nonlocal terms are associated to a mass scale.

Nonlocal terms and massive gauge theories

A simple and instructive example of how a nonlocal term can appear in the description of a massive gauge theory is given by massive electrodynamics. Consider the Proca action with an external conserved current $j_\mu$, eq. (46). The equations of motion obtained from (46) are given in eq. (47). Acting with $\partial_\nu$ on both sides and using $\partial_\nu j^\nu = 0$, eq. (47) gives eq. (48): if $m_\gamma \neq 0$, we get the condition $\partial_\nu A^\nu = 0$ dynamically, as a consequence of the equation of motion, and we have eliminated one degree of freedom. Making use of eq. (48), eq. (47) becomes eq. (49). Equations (48) and (49) together describe the three degrees of freedom of a massive photon. In this formulation locality is manifest, while the U(1) gauge invariance of the massless theory is lost, because of the non-gauge-invariant term $m_\gamma^2 A_\mu A^\mu$ in the Lagrangian. However, as shown in [32], this theory can be rewritten in a gauge-invariant but nonlocal form. Consider in fact the equation of motion (50) or, rewriting it in terms of $A_\mu$, eq. (51). Equation (50) is clearly gauge invariant. We can therefore choose the gauge $\partial_\mu A^\mu = 0$. As we see more easily from eq. (51), in this gauge the nonlocal term vanishes, and eq. (51) reduces to the local equation $(\Box - m_\gamma^2)A_\nu = j_\nu$. Thus, we end up with the same equations as in Proca theory, $(\Box - m_\gamma^2)A_\mu = j_\mu$ and $\partial_\mu A^\mu = 0$. Note however that they were obtained in a different way: in the Proca theory there is no gauge invariance to be fixed, and eq. (48) comes out dynamically, as a consequence of the equations of motion, while in the theory (50) there is a gauge invariance, and $\partial_\mu A^\mu = 0$ can be imposed as a gauge condition.
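Since eqs. (46)-(52) are referred to repeatedly but not displayed here, it may be useful to reconstruct them schematically. With one common choice of sign conventions (the one that reproduces the quoted equation $(\Box-m_\gamma^2)A_\nu=j_\nu$),
\[
S_{\rm Proca}=\int d^4x\,\Big[-\tfrac14 F_{\mu\nu}F^{\mu\nu}-\tfrac12 m_\gamma^2 A_\mu A^\mu-j_\mu A^\mu\Big]
\ \Rightarrow\ \ \partial_\mu F^{\mu\nu}-m_\gamma^2 A^\nu=j^\nu\,.
\]
Acting with $\partial_\nu$ and using $\partial_\nu j^\nu=0$ gives $m_\gamma^2\,\partial_\nu A^\nu=0$, so $\partial_\nu A^\nu=0$ for $m_\gamma\neq 0$, and then $(\Box-m_\gamma^2)A^\nu=j^\nu$. The gauge-invariant but nonlocal reformulation is
\[
\Big(1-\frac{m_\gamma^2}{\Box}\Big)\partial_\mu F^{\mu\nu}=j^\nu\,,\qquad
S=\int d^4x\,\Big[-\tfrac14 F_{\mu\nu}\Big(1-\frac{m_\gamma^2}{\Box}\Big)F^{\mu\nu}-j_\mu A^\mu\Big]\,,
\]
which in the gauge $\partial_\mu A^\mu=0$ reduces to the Proca equations above. The overall signs and the coupling to the source should be checked against the original equations; only the structure matters for the argument that follows.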
In any case, since the equations of motions are finally the same, we see that the theory defined by (50) is classical equivalent to the theory defined by eq. (46). Observe also that eq. (50) can be formally obtained by taking the variation of the nonlocal action (apart from a subtlety in the variation of 2 −1 , that we will discuss in Sect. 5.1). 5 Thus, eq. (52) provides an alternative description of a massive photon which is explicitly gauge invariant, at the price of nonlocality. In this case, however, the nonlocality is only apparent, since we see from eq. (51) that the nonlocal term can be removed with a suitable gauge choice. In the following we will study similar theories, in which however the nonlocality cannot be simply gauged away. An interesting aspect of the nonlocal reformulation of massive electrodynamics is that it also allows us to generate the mass term dynamically, through a nonvanishing gauge-invariant condensate F µν 2 −1 F µν = 0. In the U(1) theory we do not expect non-perturbative effects described by vacuum condensates. However, 5 The equivalence of the two theories can also be directly proved using the "Stückelberg trick": one introduces a scalar field ϕ and replaces A µ → A µ + (1/m γ )∂ µ ϕ in the action. The equation of motion of this new action S[A µ , ϕ], obtained performing the variation with respect to ϕ, is 2ϕ + m γ ∂ µ A µ = 0, which can be formally solved by ϕ(x) = −m γ 2 −1 (∂ µ A µ ). Inserting this expression for ϕ into S[A µ , ϕ] one gets eq. (52), see [32]. these considerations can be generalized to non-abelian gauge theories. Indeed, in pure Yang-Mills theory the introduction in the action of a nonlocal term (where D ab µ = δ ab ∂ µ − g f abc A c µ is the covariant derivative and m is a mass scale) correctly reproduces the results on the non-perturbative gluon propagator in the IR, obtained from operator product expansions and lattice QCD [33][34][35]. In this case this term is generated in the IR dynamically by the strong interactions. In other words, because of non-perturbative effects in the IR, at large distances we have which amounts to dynamically generating a mass term for the gluons. Effective nonlocal modifications of GR We next apply a similar strategy to GR. We will begin with a purely phenomenological approach, trying to construct potentially interesting IR modifications of GR by playing with nonlocal operators such as m 2 /2, and exploring different possibilities. When one tries to construct an infrared modification of GR, usually the aims that one has in mind is the construction of a fundamental QFT (possibly valid up to a UV cutoff, beyond which it needs a suitable UV completion). In that case a crucial requirement is the absence of ghosts, at least up to the cutoff of the UV completion, as in the dRGT theory of massive gravity [36][37][38], or in ghost-free bigravity [39]. In the following we will instead take a different path, and present these models as effective nonlocal modification of GR, such as a quantum effective action. This change of perspective, from a fundamental action to an effective quantum action, is important since (as we already saw for the anomaly-induced effective action, and as we will see in Sect. 6 for the nonlocal theories that we will propose) the presence of an apparent ghost in the effective quantum action does not imply that a ghost is truly present in the physical spectrum of the theory. Similarly, we will see in Sect. 
5.1 that the issue of causality is different for a nonlocal fundamental QFT and a nonlocal quantum effective action. A nonlinear completion of the degravitation model. As a first example we consider the theory defined by the effective nonlocal equation of motion where 2 is the fully covariant d'Alembertian. Equation (55) is the most straightforward generalization of eq. (50) to GR. This model was proposed in [40] to introduce the degravitation idea. Indeed, at least performing naively the inversion of the non-local operator, eq. (55) can be rewritten as G µν = 8πG [2/(2 − m 2 )]T µν . Therefore the low-momentum modes of T µν , with |k 2 | m 2 , are filtered out and in particular a constant term in T µν , such as that due to a cosmological constant, does not contribute. 6 The degravitation idea is very interesting, but eq. (55) has the problem that the energy-momentum tensor is no longer automatically conserved, since in curved space the covariant derivatives ∇ µ do not commute, so [∇ µ , 2] = 0 and therefore also [∇ µ , 2 −1 ] = 0. Therefore the Bianchi identity ∇ µ G µν = 0 no longer ensures ∇ µ T µν = 0. In [41] it was however observed that it is possible to cure this problem, by making use of the fact that any symmetric tensor S µν can be decomposed as where S T µν is the transverse part of S µν , i.e. it satisfies ∇ µ S T µν = 0. Such a decomposition can be performed in a generic curved space-time [42,43]. The extraction of the transverse part of a tensor is itself a nonlocal operation, which is the reason why it never appears in the equations of motions of a local field theory. 7 Here however we are already admitting nonlocalities, so we can make use of this operation. Then, in [41] (following a similar treatment in the context of nonlocal massive gravity in [44]) it was proposed to modify eq. (55) into so that energy-momentum conservation ∇ µ T µν = 0 is automatically ensured. This model can be considered as a nonlinear completion of the original degravitation idea. Furthermore, eq. (58) still admits a degravitating solution [41]. Indeed, consider a modification of eq. (58) of the form 6 Observe however that the inversion of the nonlocal operator is more subtle. Indeed, by definition, Rather, applying 2 to both sides and using The same holds for the inversion of (2 − m 2 ). Thus, more precisely, the inversion of eq. (55) is In any case, a constant vacuum energy term T µν = −ρ vac η µν does not contribute, because of the 2 operator acting on T µν , while S µν only has modes with k 2 = −m 2 , so it cannot contribute to a constant vacuum energy. 7 In flat space ∇ µ → ∂ µ and, applying to both sides of eq. (56) ∂ µ and ∂ µ ∂ ν we find that In a generic curved spacetime there is no such a simple formula, because [∇ µ , ∇ ν ] = 0, but we will see in Sect. 6 how to deal, in practice, with the extraction of the transverse part. with µ is a regularization parameter to be eventually sent to zero. If we set T µν = −ρ vac g µν , eq. (59) admits a de Sitter solution G µν = −Λ g µν with Λ = 8πG [µ 2 /(m 2 + µ 2 )] ρ vac . In the limit µ → 0 we get Λ → 0, so the vacuum energy has been completely degravitated. However, the cosmological evolution of this model induced by the remaining cosmological fluid, such as radiation or nonrelativistic matter, turns out to be unstable, already at the background level [45,46]. We will see in Sect. 7 how such an instability emerges. In any case, this means that the model (58) is not phenomenologically viable. The RT and RR models. 
The first phenomenologically successful nonlocal model of this type was then proposed in [45], where it was noticed that the instability is specific to the form of the 2 −1 operator on a tensor such as R µν or G µν , and does not appear when 2 −1 is applied to a scalar, such as the Ricci scalar R. Thus, in [45] it was proposed a model based on the nonlocal equation where the factor 1/3 is a useful normalization for the mass parameter m. We will discuss its phenomenological consequences in Sect. 7. We will denote it as the "RT" model, where R stands for the Ricci scalar and T for the extraction of the transverse part. A closed form for the action corresponding to eq. (60) is currently not known. This model is however closely related to another nonlocal model, proposed in [47], and defined by the effective action Again, we will see that this model is phenomenologically viable, and we will refer to it as the RR model. The RT and RR models are related by the fact that, if we compute the equations of motion from eq. (61) and we linearize them over Minkowski space, we find the same equations of motion obtained by linearizing eq. (60). However, at the full nonlinear level, or linearizing over a background different from Minkowski, the two models are different. We have seen above that nonlocal terms of this sort may be related to a mass for some degree of freedom. One might then ask whether this is the case also for the RR and RT models. In fact, the answer is quite interesting: the nonlocal terms in eqs. (60) or (61) correspond to a mass term for the conformal mode of the metric [48,49]. Indeed, consider the conformal mode σ (x), defined choosing a fixed fiducial metricḡ µν and writing g µν (x) = e 2σ (x)ḡ µν (x). Let us restrict the dynamics to the conformal mode, and choose for simplicity a flat fiducial metricḡ µν = η µν . The Ricci scalar computed from the metric g µν = e 2σ (x) η µν is then Therefore, to linear order in σ , R = −62σ + O(σ 2 ) and (upon integration by parts) Thus, the R2 −2 R terms gives a nonlocal but diff-invariant mass term for the conformal mode, plus higher-order interaction terms (which are nonlocal even in σ ) which are required to reconstruct a diff-invariant quantity. The same is true for the nonlocal term in the RT model, since the RR and RT models coincide when linearized over Minkowski space. How not to deal with effective nonlocal theories In this section we discuss some conceptual aspects of general nonlocal theories, that involve some subtleties. The bottomline is that quantum field theory must be played according to its rules and, as we have already seen in Sect. 3 with the explicit example of the anomaly-induced effective action, the rules for quantum effective actions are different from the rules for the fundamental action of a QFT. Causality We begin by examining causality in nonlocal theories (we follow the discussion in app. A of [50]; see also [29][30][31][51][52][53][54] for related discussions). In a fundamental QFT with a nonlocal action, the standard variational principle produces acausal equations of motion. Consider for instance a nonlocal term dx φ 2 −1 φ in the action of a scalar field φ , where 2 −1 is defined with respect to some Green's function G(x; x ). Then Thus, the variation symmetrizes the Green's function. However, the retarded Green's function is not symmetric; rather, G ret (x ; x) = G adv (x; x ), and therefore it cannot be obtained from such a variation. 
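Explicitly, writing $(\Box^{-1}\phi)(x)=\int d^4x'\,G(x;x')\,\phi(x')$ for some Green's function $G$, the variation referred to above is the one-line computation
\[
\frac{\delta}{\delta\phi(x)}\int d^4y\,d^4z\;\phi(y)\,G(y;z)\,\phi(z)
=\int d^4z\,\big[G(x;z)+G(z;x)\big]\,\phi(z)\,,
\]
so only the symmetric combination $\tfrac12\,[G(x;z)+G(z;x)]$ survives. Since $G_{\rm ret}(x;z)\neq G_{\rm ret}(z;x)$ (rather, the transpose of the retarded Green's function is the advanced one), a retarded prescription can never emerge from a naive variational principle applied to a nonlocal fundamental action.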
In a fundamental action, nonlocality implies the loss of causality, already at the classical level (unless, as in eq. (51), we have a gauge symmetry that allows us to gauge away the nonlocal term in the equations of motion). However, quantum effective actions are in general nonlocal, as in eq. (2), (27) or (45). Of course, this does not mean that they describe acausal physics. These nonlocal effective actions are just a way to express, with an action that can be used at tree level, the result of a quantum computation in fundamental theories which are local and causal. Therefore, it is clear that their nonlocality has nothing to do with acausality. Simply, to reach the correct conclusions one must play QFT according to its rules. The variation of the quantum effective action does not give the classical equations of motion of the field. Rather, it provides the time evolution, or equivalently the equations of motion, obeyed by the vacuum expectation values of the corresponding operators, as in eq. (10). These equations of motion are obtained in a different way depending on whether we consider the in-in or the in-out matrix elements. The in-out expectation values are obtained using the Feynman path integral in eq. (9), and are indeed acausal. Of course, there is nothing wrong with it. The in-out matrix element are not observable quantities, but just auxiliary objects which enter in intermediate steps in the computation of scattering amplitudes, and the Feynman propagator, which is acausal, enters everywhere in QFT computations. The physical quantities, which can be interpreted as physical observables, are instead the in-in expectation values. For instance, 0 in |ĝ µν |0 in can be interpreted as a semiclassical metric, while 0 out |ĝ µν |0 in is not even a real quantity. The equations of motion of the in-in expectation values are obtained from the Schwinger-Keldysh path integral, which automatically provides nonlocal but causal equations [55,56]. In practice, the equations of motion obtained from the Schwinger-Keldysh path integral turn out to be the same that one would obtain by treating formally the 2 −1 operator in the variation, without specifying the Green's function, and replacing in the end 2 −1 → 2 −1 ret in the equations of motion (see e.g. [9]). 8 Thus nonlocal actions, interpreted as quantum effective actions, provide causal evolution equations for the in-in matrix elements. Degrees of freedom and ghosts Another subtle issue concerns the number of degrees of freedom described by a nonlocal theory such as (61). Let us at first treat it as we would do for a fundamental action. We write g µν = η µν + h µν and expand the quantum effective action to quadratic order over flat space. 9 The corresponding flat-space action is [47] Γ (2) where where now 2 is the flat-space d'Alembertian. We then add the usual gauge fixing term of linearized massless gravity, plus terms proportional to k µ k ν , k ρ k σ and k µ k ν k ρ k σ , that give zero when contracted with a conserved energy-momentum tensor. The term in the second line in eq. (67) gives an extra contribution toT This term apparently describes the exchange of a healthy massless scalar plus a ghostlike massive scalar. The presence of a ghost in the spectrum of the quantum theory would be fatal to the consistency of the model. However, once again, this conclusion comes from a confusion between the concepts of fundamental action and quantum effective action. 
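The statement that the extra term describes the exchange of a healthy massless scalar plus a ghost-like massive scalar is, at bottom, a partial-fraction identity: a scalar form factor with the structure $1/[k^2(k^2-m^2)]$ necessarily splits into a massless and a massive pole with residues of opposite sign. The exact tensor structure and coefficients are those of eqs. (67)-(68), which are not reproduced here; the following snippet only checks the scalar identity.

```python
import sympy as sp

k2, m = sp.symbols('k2 m', positive=True)

# Scalar structure behind the extra term in the propagator: 1/(k^2 (k^2 - m^2)).
expr = 1 / (k2 * (k2 - m**2))

# Partial fractions in k^2: 1/(m^2 (k^2 - m^2)) - 1/(m^2 k^2),
# i.e. a massive pole and a massless pole with opposite-sign residues.
print(sp.apart(expr, k2))
print(sp.simplify(sp.apart(expr, k2) - expr))  # -> 0
```

Whether the 'wrong-sign' pole actually corresponds to a state in the spectrum is precisely the question addressed in the rest of this section.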
To begin, let us observe that it is important to distinguish between the effect of a ghost in the classical theory and its effect in the quantum theory. Let us consider first the classical theory. At linear order, the interaction between the metric perturbation and an external conserved energy-momentum tensor $T_{\mu\nu}$ is given by eq. (69), where $h_{\mu\nu}$ is the solution of the equations of motion derived from eq. (65). Solving them explicitly and inserting the solution for $h_{\mu\nu}$ in eq. (69), one finds an expression for $S_{\rm int}$, proportional to $16\pi G$, with $\Delta_{\mu\nu\rho\sigma}(k)$ given by eq. (67) [45]. The quantity $\Delta_{\mu\nu\rho\sigma}(k)$ therefore plays the role of the propagator in the classical theory [and differs by a factor of $-i$ from the quantity usually called the propagator in the quantum theory, $D_{\mu\nu\rho\sigma}(k) = -i\Delta_{\mu\nu\rho\sigma}(k)$]. A 'wrong' sign in the term proportional to $1/(k^2 - m^2)$ in eq. (67) might then result in a classical instability. Whether this is acceptable or not must be studied on a case-by-case basis. For instance, taking $m = O(H_0)$, as we will do below, the instability will only develop on cosmological timescales. Therefore, it must be studied in the context of a FRW cosmology, where it will also compete with the damping due to Hubble friction. Whether or not this results in a viable cosmological evolution, both at the level of the background evolution and of the cosmological perturbations, can only be deduced from an explicit quantitative study of the solutions of these cosmological equations. We will indeed see in Sect. 7 that the cosmological evolution obtained from this model is perfectly satisfying. A different issue is the presence of a ghost in the spectrum of the quantum theory. After quantization a ghost carries negative energy, and it induces vacuum decay through the associated production of ghosts and normal particles, which would be fatal to the consistency of the theory. However, here we must be aware of the fact that the spectrum of the quantum theory can be read from the free part of the fundamental action of the theory. To apply blindly the same procedure to the quantum effective action is simply wrong. We have already seen this in Sect. 3 for the anomaly-induced effective action: the action (30) with $N > 25$, or the action (42), naively seem to have a ghost, but they are in fact perfectly healthy quantum effective actions, derived from fundamental QFTs that have no ghost. As another example of the sort of nonsense that one obtains by trying to read the spectrum of the quantum theory from the quantum effective action $\Gamma$, consider the one-loop effective action of QED, eq. (2). If we proceed blindly and quantize it as if it were a fundamental action, we would add to eq. (2) a gauge-fixing term $\mathcal{L}_{\rm gf} = -(1/2)(\partial_\mu A^\mu)^2$ and invert the resulting quadratic form. We would then obtain, for the propagator in the $m_e \to 0$ limit, the expression (71), plus terms proportional to $k_\mu k_\nu$ that cancel when contracted with a conserved current $j^\mu$ (see footnote 10). Using the identities that express $\log k^2$ as an integral over simple poles of the form $1/(k^2 + m^2)$, we see that the "propagator" (71) has the standard pole of the electromagnetic field, proportional to $-i\eta_{\mu\nu}/k^2$ with a positive coefficient, plus a continuous set of ghost-like poles proportional to $+i\eta_{\mu\nu}/(k^2 + m^2)$, with $m$ an integration variable. We would then conclude that QED has a continuous spectrum of ghosts! Of course this is nonsense, and it is just an artifact of having applied to the quantum effective action a procedure that only makes sense for the fundamental action of a QFT. In fact, the proper interpretation of eq.
(71) is that log(k 2 /µ 2 ) develops an imaginary part for k 2 < 0 (e.g. for k 0 = 0, k = 0, i.e. for a spatially uniform but time-varying electromagnetic field). This is due to the fact that, in the limit m e → 0 in which we are working (or, more generally, for −k 2 > 4m 2 e ), in such an external electromagnetic field there is a rate of creation of electron-positron pairs, and the imaginary part of the effective action describes the rate of pair creation [6]. These general considerations show that the spectrum of the theory cannot be read naively from the quantum effective action. Thus, in particular, from the presence of a 'ghost-like' pole obtained from the effective quantum action (65), one cannot jump to the conclusion that the underlying fundamental theory has a ghost. In the next section we will be more specific, and try to understand the origin of this 'wrongsign' pole in the RR and RT theories. Localization of nonlocal theories Nonlocal models can be formally written in a local form introducing auxiliary fields, as discussed in similar contexts in [30,52,[59][60][61][62][63]. This reformulation is quite useful both for the numerical study of the equations of motion, and for understanding exactly why the ghosts-like poles in eq. (67) do not correspond to states in the spectrum of the quantum theory. It is useful to first illustrate the argument for the Polyakov effective action, for which we know that it is the effective quantum action of a perfectly healthy fundamental theory. Localization of the Polyakov action. In D = 2 the Polyakov action becomes local when written in terms of the conformal factor. Let us however introduce a different localization procedure, that can be generalized to 4D. We start from eq. (27), where we used the notation c = −N/(96π). We now introduce an auxiliary field U defined by U = −2 −1 R. At the level of the action, this can be implemented by introducing a Lagrange multiplier ξ , and writing The variation with respect to ξ gives so it enforces U = −2 −1 R, while the variation with respect to U gives 2ξ = cR and therefore ξ = c2 −1 R = −cU. This is an algebraic equation that can be put back in the action so that, after an integration by parts, Γ can be rewritten as [19] The theories defined by eqs. (74) and (77) are classically equivalent. As a check, one can compute the energy-momentum tensor from eq. (77), and verify that its classical trace is given by T = 4c2U = −4cR. So eq. (77), used as a classical action, correctly reproduces the quantum trace anomaly (19) [19]. We can further manipulate the action (77) writing g ab = e 2σ η ab . Using eq. (24) and introducing a new field ϕ from U = 2(ϕ + σ ) to diagonalize the action, we get Taken litteraly, this action seems to suggest that in the theory there are two dynamical fields, ϕ and σ . For c > 0, ϕ would be a ghost and σ a healthy field, and viceversa if c < 0 (in the Polyakov action (74) c = −N/(96π) < 0, but exactly the same computation could be performed with the action (29), where c = −(N − 25)/(96π) can take both signs). Of course, we know that this conclusion is wrong, since we know exactly the spectrum of the quantum theory at the fundamental level, which is made uniquely by the quanta of the conformal matter fields. As we mentioned, even taking into account the anomaly-induced effective action, still σ has no quanta in the physical spectrum, since they are eliminated by the physical-state condition [24]. 
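In formulas, the localization steps just described can be retraced as follows (a sketch consistent with the statements above; eq. (77) itself is not displayed in the text):
\[
\Gamma=c\int d^2x\,\sqrt{-g}\;R\,\Box^{-1}R\,,\qquad c=-\frac{N}{96\pi}\,,
\]
\[
\Gamma[g;U,\xi]=\int d^2x\,\sqrt{-g}\,\big[-c\,U R+\xi\,(\Box U+R)\big]\,:\qquad
\frac{\delta\Gamma}{\delta\xi}=0\ \Rightarrow\ \Box U=-R\,,\qquad
\frac{\delta\Gamma}{\delta U}=0\ \Rightarrow\ \Box\xi=c R\ \Rightarrow\ \xi=-c\,U\,,
\]
and, substituting $\xi=-cU$ back and integrating by parts,
\[
\Gamma=\int d^2x\,\sqrt{-g}\;c\,\big[\nabla_\mu U\,\nabla^\mu U-2\,U R\big]\,.
\]
The trace of the energy-momentum tensor computed from this local form is $T=4c\,\Box U=-4cR=(N/24\pi)R$, which is the consistency check mentioned above.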
As for the auxiliary field $\varphi$, or equivalently U, there is no trace of its quanta in the physical spectrum. U is an artificial field, introduced by the localization procedure, and there are no quanta associated with it. This can also be understood purely classically, using the fact that, in $D=2$, the Polyakov action becomes local when written in terms of the conformal factor. Therefore, the classical evolution of the model is fully determined once we give the initial conditions on $\sigma$, i.e. $\sigma(t_i,\mathbf{x})$ and $\dot\sigma(t_i,\mathbf{x})$ at an initial time. Thus, once we localize the theory by introducing U, the initial conditions on U are not arbitrary. Rather, they are uniquely fixed by the condition that the classical evolution, in the formulation obtained from eq. (77), must be equivalent to that in the original theory (27). In other words, U is not the most general solution of eq. (76), which would be given by a particular solution of the inhomogeneous equation plus the most general solution of the associated homogeneous equation $\Box U = 0$. Rather, it is just one specific solution, with given boundary conditions, such as $U=0$ when $R=0$ in eq. (76). Thus, if we are for instance in flat space, there are no arbitrary plane waves associated to U, whose coefficients $a_{\mathbf k}$ and $a^*_{\mathbf k}$ would be promoted to creation and annihilation operators in the quantum theory. In this sense, the situation is different from that of the conformal mode $\sigma$: the conformal mode, at the quantum level, is a quantum field with its own creation and annihilation operators, but the corresponding quantum states do not survive the imposition of the physical-state condition, and therefore do not belong to the physical Hilbert space. The U field, instead, is a classical auxiliary field, and it does not even have creation and annihilation operators associated with it.

Localization of the RR theory.

We next consider the RR model. To put the theory in a local form we introduce two auxiliary fields U and S, defined in eq. (79) by $U = -\Box^{-1}R$ and $S = -\Box^{-1}U$. This can be implemented at the Lagrangian level by introducing two Lagrange multipliers $\xi_1$, $\xi_2$, and rewriting eq. (61) in the local form (80). The equations of motion obtained by varying this action with respect to $h_{\mu\nu}$ are given in eqs. (81)-(82), in terms of a tensor $K_{\mu\nu}$ constructed from the metric and the auxiliary fields. At the same time, the definitions (79) imply that U and S satisfy eq. (83), i.e. $\Box U = -R$ and $\Box S = -U$. Using the equations of motion we can check explicitly that $\nabla^\mu K_{\mu\nu} = 0$, as it should be, since the equations of motion have been derived from a diff-invariant action. Linearizing eq. (81) over flat space we get eq. (84). Let us restrict to the scalar sector, which is the most interesting for our purposes. We proceed as in GR, and use the diff-invariance of the nonlocal theory to fix the Newtonian gauge, in which the scalar perturbations of the metric are described by the two potentials $\Phi$ and $\Psi$; we also write the energy-momentum tensor in the scalar sector in the standard form. A straightforward generalization of the standard computation performed in GR (see e.g. [64]) gives four independent equations for the four scalar variables $\Phi$, $\Psi$, U and S. For the Bardeen variables $\Phi$ and $\Psi$ we get Poisson-like equations, eqs. (88) and (89) [47] (see footnote 11). Thus, just as in GR, $\Phi$ and $\Psi$ remain non-radiative degrees of freedom, with a dynamics governed by a Poisson equation rather than by a Klein-Gordon equation. This should be contrasted with what happens when one linearizes massive gravity with a Fierz-Pauli mass term. In that case $\Phi$ becomes a radiative field that satisfies $(\Box - m^2)\Phi = 0$ [64,66,67], and the corresponding jump in the number of radiative degrees of freedom of the linearized theory is just the vDVZ discontinuity.
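For reference, the action (61) and its localized version (80) are commonly written as follows (a reconstruction based on the definitions just given; the relative normalization of the Lagrange multiplier terms is a matter of convention, and the coefficient $m^2/6$ is the one usually quoted for this model):
\[
\Gamma_{\rm RR}=\frac{m_{\rm Pl}^2}{2}\int d^4x\,\sqrt{-g}\,\Big[R-\frac{m^2}{6}\,R\,\frac{1}{\Box^{2}}\,R\Big]\,,
\]
and, introducing $U=-\Box^{-1}R$ and $S=-\Box^{-1}U=\Box^{-2}R$ through the two multipliers,
\[
\Gamma_{\rm RR}=\frac{m_{\rm Pl}^2}{2}\int d^4x\,\sqrt{-g}\,\Big[R-\frac{m^2}{6}\,R\,S\Big]
+\int d^4x\,\sqrt{-g}\,\big[\xi_1\,(\Box U+R)+\xi_2\,(\Box S+U)\big]\,.
\]
Varying with respect to $\xi_1$ and $\xi_2$ enforces $\Box U=-R$ and $\Box S=-U$, while the variation with respect to $h_{\mu\nu}$ produces the modified Einstein equations referred to in eqs. (81)-(82).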
Furthermore, in local massive gravity with a mass term that does not satisfy the Fierz-Pauli tuning, a term $(\Box\Phi)^2$ also appears in the Lagrangian [64], signaling the presence of a dynamical ghost. To linearize eq. (82) we first observe that, taking the trace of eq. (84), we get eq. (90), which involves the linearized Ricci scalar. From eq. (66) we obtain eq. (92), and therefore eq. (90) can also be rewritten in a suggestive form. Equation (92) also implies that, to linear order, U is determined by the metric perturbation, so that eq. (90) can be rewritten accordingly. Inserting this into eq. (82) we finally get eq. (96), where, in all the linearized equations, $\Box = -\partial_0^2 + \nabla^2$ is the flat-space d'Alembertian. Similarly, the linearized equation for S is just given by eq. (83), again with the flat-space d'Alembertian. Thus, in the end, in the scalar sector we have two fields $\Phi$ and $\Psi$ which obey eqs. (88) and (89) and are therefore non-radiative, just as in GR. Furthermore, we have two fields U and S that satisfy Klein-Gordon equations with sources. In particular, U satisfies the massive KG equation (96), so it is clearly the field responsible for the ghost-like $1/(k^2 - m^2)$ pole in eq. (68), while S satisfies a massless KG equation with a source, and is the field responsible for the healthy $1/k^2$ pole in eq. (68). This analysis shows that the potential source of problems is not one of the physical fields $\Phi$ and $\Psi$, but rather the auxiliary field U. However, at this point the solution of the potential problem becomes clear (see in particular the discussions in [30,52,61,63] for different nonlocal models, and in [45,47,54] for the RR and RT models), and it is in fact completely analogous to the situation that we have found for the Polyakov effective action. In general, an equation such as $\Box U = -R$ is solved by a particular solution of the inhomogeneous equation plus the general solution $U_{\rm hom}$ of the homogeneous equation $\Box U_{\rm hom} = 0$, which in flat space is a superposition of plane waves with coefficients $a_{\mathbf k}$, $a^*_{\mathbf k}$. These coefficients are fixed by the definition of $\Box^{-1}$ (e.g. at the value $a_{\mathbf k} = a^*_{\mathbf k} = 0$ if the definition of $\Box^{-1}$ is such that $U_{\rm hom} = 0$). They are not free parameters of the theory, and at the quantum level it makes no sense to promote them to annihilation and creation operators. There is no quantum degree of freedom associated to them. To conclude this section, it is interesting to observe that the need to impose boundary conditions on some classical fields, in order to recover the correct Hilbert space at the quantum level, is not specific to nonlocal effective actions. Indeed, GR itself can be formulated in such a way that it requires the imposition of similar conditions [54,64]. To see this, let us consider GR linearized over flat space. To quadratic order, adding to the Einstein-Hilbert action the interaction term with a conserved energy-momentum tensor, we have the action (98). We decompose the metric as in eq. (99), where $h^{\rm TT}_{\mu\nu}$ is transverse and traceless. Thus, the 10 components of $h_{\mu\nu}$ are split into the 5 components of the TT tensor $h^{\rm TT}_{\mu\nu}$, the four components of $\varepsilon_\mu$, and the scalar s. Under a linearized diffeomorphism $h_{\mu\nu} \to h_{\mu\nu} - (\partial_\mu\xi_\nu + \partial_\nu\xi_\mu)$, the four-vector $\varepsilon_\mu$ transforms as $\varepsilon_\mu \to \varepsilon_\mu - \xi_\mu$, while $h^{\rm TT}_{\mu\nu}$ and s are gauge invariant. We similarly decompose $T_{\mu\nu}$. Plugging eq. (99) into eq. (98), $\varepsilon_\mu$ cancels (as is obvious from the fact that eq. (98) is invariant under linearized diffeomorphisms and $\varepsilon_\mu$ is a pure gauge mode), and we get a quadratic action for $h^{\rm TT}_{\mu\nu}$ and s alone. The corresponding equations of motion are given in eq. (101). This result seems to suggest that in ordinary massless GR we have six propagating degrees of freedom: the five components of the transverse-traceless tensor $h^{\rm TT}_{\mu\nu}$, plus the scalar s. Note that $h^{\rm TT}_{\mu\nu}$ and s are gauge invariant, so they cannot be gauged away. Furthermore, from eq. (101) the scalar s seems to be a ghost!
Of course, we know that in GR only the two components with helicities ±2 are true propagating degrees of freedom. In fact, the resolution of this apparent puzzle is that the variables h TT µν and s are nonlocal functions of the original metric. Indeed, inverting eq. (99), one finds where P µν is the nonlocal operator (66). Observe that the nonlocality is not just in space but also in time. Therefore, giving initial conditions on a given time slice for the metric is not the same as providing the initial conditions on h TT µν and s, and the proper counting of dynamical degrees of freedom gets mixed up. If we want to study GR in terms of the variables h TT µν and s, which are nonlocal functions of the original variables h µν , we can do it, but we have to be careful that the number of independent initial conditions that we impose to evolve the system must remains the same as in the standard Hamiltonian formulation of GR. This means in particular that the initial conditions on s and on the components of h TT µν with helicities 0, ±1 cannot be freely chosen, and in particular the solution of the homogeneous equations 2s = 0 associated to the equation 2s = (κ/4)T is not arbitrary. It is fixed, e.g. by the condition that s = 0 when T = 0. Just as for the auxiliary field U discussed above, there are no quanta associated to s (nor to the components of h TT µν with helicities 0, ±1), just as in the standard 3 + 1 decomposition of the metric there are no quanta associated to the Bardeen potentials Φ and Ψ . The similarity between the absence of quanta for the field U in the localization procedure of the RR model, and the absence of quanta for s in GR, is in fact more than an analogy. Comparing eqs. (94) and (103) we see that, at the level of the linearized theory, U reduces just to s in the m = 0 limit. The boundary condition that eliminates the quanta of U in the RR theory therefore just reduces to the boundary condition that eliminates the quanta of s in GR. The bottomline of this discussion is that the 'wrong-sign' pole in eq. (68) is not due to a ghost in the quantum spectrum of the underlying fundamental theory. It is simply due to an auxiliary field that enters the dynamics at the classical level, but has no associated quanta in the physical spectrum of the theory. A different question is whether this auxiliary field might induce instabilities in the classical evolution. Since we will take m of order of the Hubble parameter today, H 0 , any such instability would only develop on cosmological timescale, so it must be studied on a FRW background, which we will do in the next section. The above analysis was performed for the RR model. For the RT model the details of the localization procedure are technically different [45,68]. In that case we define again U = −2 −1 R, and we also introduce S µν = −Ug µν = g µν 2 −1 R. We then compute S T µν using eq. (56). Thus, eq. (60) is localized in terms of an auxiliary scalar field U and the auxiliary four-vector field S µ that enters through eq. (56), obeying the coupled system where the latter equation is obtained by taking the divergence of eq. (56). We see that, at the full nonlinear level, the RT model is different from the RR model. However, linearizing over flat space they become the same. In fact in this case, using eq. (92), to linear order we have In flat space the extraction of the transverse part can be easily performed using eq. (57), without the need of introducing auxiliary fields. This gives, again to linear order, S T µν = −P µν P ρσ h ρσ . 
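Equation (57), which is invoked here, is just the flat-space inversion of the decomposition (56). Writing the decomposition with one common normalization of the vector part, $S_{\mu\nu}=S^{\rm T}_{\mu\nu}+\tfrac12(\partial_\mu S_\nu+\partial_\nu S_\mu)$ with $\partial^\mu S^{\rm T}_{\mu\nu}=0$, and taking one and two divergences,
\[
\partial^\mu\partial^\nu S_{\mu\nu}=\Box\,\partial^\mu S_\mu
\ \Rightarrow\
\partial^\mu S_\mu=\Box^{-1}\partial^\mu\partial^\nu S_{\mu\nu}\,,\qquad
S_\nu=\Box^{-1}\Big[\,2\,\partial^\mu S_{\mu\nu}-\Box^{-1}\partial_\nu\,\partial^\rho\partial^\sigma S_{\rho\sigma}\Big]\,,
\]
so that $S^{\rm T}_{\mu\nu}=S_{\mu\nu}-\tfrac12(\partial_\mu S_\nu+\partial_\nu S_\mu)$ is expressed explicitly, though nonlocally, in terms of $S_{\mu\nu}$ itself. This makes concrete the earlier remark that extracting the transverse part is itself a nonlocal operation, and it is what produces the projectors $P_{\mu\nu}$ in the linearized expression quoted just above.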
Using the fact that, to linear order, G µν = −(1/2)E µν,ρσ h ρσ , we see that the linearization of eq. (60) over flat space gives the same equation as eq. (84). Thus, the RR and RT model coincide at linear order over flat space, but not on a general background (nor at linear order over a non-trivial background, such as FRW). It should also be stressed that the RR and RT models are not theories of massive gravity. The graviton remains massless in these theories. Observe also, from eq. (67), that when we linearize over flat space the limit m → 0 of the propagator is smooth, and there is no vDVZ discontinuity, contrary to what happens in massive gravity. The continuity with GR has also been explicitly verified for the Schwarzschild solution [68]. 12 Cosmological consequences We can now explore the cosmological consequences of the RT and RR models, as well as of some of their extensions that we will present below, beginning with the background evolution, and then moving to cosmological perturbation theory and to the comparison with cosmological data. Background evolution and self-acceleration We begin with the background evolution (we closely follow the original discussions in [45,46] for the RT model and [47] for the RR model). It is convenient to use the localization procedure discussed in Sect. 6, so we deal with a set of coupled differential equations, rather than with the original integro-differential equations. The RT model Let us begin with the RT model. In FRW, at the level of background evolution, for symmetry reasons the spatial component S i of the auxiliary field S µ vanish, and the only variables are U(t) and S 0 (t), together with the scale factor a(t). Eqs. (105)-(107) then become We supplement these equations with the initial conditions at some time t * deep in the radiation dominated (RD) phase. We will come back below to how the results depend on this choice. Observe that we do not include a cosmological constant term. Indeed, our aim is first of all to see if the nonlocal term produces a self-accelerated solution, without the need of a cosmological constant. It is convenient to pass to dimensionless variables, using x ≡ ln a(t) instead of t to parametrize the temporal evolution. We denote d f /dx = f , and we define Y = U −Ṡ 0 , h = H/H 0 , Ω i (t) = ρ i (t)/ρ c (t) (where i labels radiation, matter and dark energy), and Ω i ≡ Ω i (t 0 ), where t 0 is the present value of cosmic time. Then the Friedmann equation reads where γ ≡ m 2 /(9H 2 0 ). This shows that there is an effective DE density where ρ 0 = 3H 2 0 /(8πG). We can trade S 0 for Y , and rearrange the equations so that U and Y satisfy the coupled system of equations The result of the numerical integration is shown in Fig. 1. In terms of the variable x = ln a, radiation-matter equilibrium is at x = x eq −8.1, while the present epoch corresponds to x = 0. From the left panel of Fig. 1 we see that the effective DE vanishes in RD. This is a consequence of the fact that, in RD, R = 0, together with our choice of boundary conditions U(t * ) =U(t * ) = 0 at some initial value t * deep in RD. As a consequence, 2 −1 R remains zero in an exact RD phase, and only begins to grow when it starts to feel the effect of non-relativistic matter. The evolution of the auxiliary field U = −2 −1 R is shown in the left panel of Fig. 2. We see however that, as we enter in the matter-dominated (MD) phase, the effective DE density start to grow, until it eventually dominates, as we see from the right panel of Fig. 1. 
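The qualitative behavior just described, with $U=-\Box^{-1}R$ frozen during radiation domination and switching on around matter-radiation equality, is easy to reproduce. For a homogeneous field in flat FRW one has $\Box U=-(\ddot U+3H\dot U)$ and $R=6H^2(2+\zeta)$ with $\zeta\equiv h'/h$, so $\Box U=-R$ becomes $U''+(3+\zeta)U'=6(2+\zeta)$ in the variable $x=\ln a$, which is the equation whose analytic solution in phases of constant $\zeta$ is used below. The sketch integrates this single equation on a fixed matter-plus-radiation background, i.e. it neglects the backreaction of the nonlocal term on $h(x)$; it is only meant to illustrate the switching-on of U, not to reproduce the full evolution shown in the figures, and the density parameters are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega_M, Omega_R = 0.32, 8.0e-5            # illustrative values

def h2(x):
    """Fixed background h^2(x) = H^2/H_0^2, neglecting the dark-energy term gamma*Y."""
    return Omega_M * np.exp(-3.0 * x) + Omega_R * np.exp(-4.0 * x)

def zeta(x, eps=1.0e-4):
    """zeta = h'/h = d(ln h)/dx, computed numerically."""
    return 0.5 * (np.log(h2(x + eps)) - np.log(h2(x - eps))) / (2.0 * eps)

def rhs(x, y):
    U, Up = y
    z = zeta(x)
    # Box U = -R in FRW, written in x = ln a:  U'' + (3 + zeta) U' = 6 (2 + zeta)
    return [Up, 6.0 * (2.0 + z) - (3.0 + z) * Up]

x_in = -15.0                                # start deep in radiation domination
sol = solve_ivp(rhs, (x_in, 0.0), [0.0, 0.0], dense_output=True, rtol=1e-8, atol=1e-10)

for x in [-12.0, -8.1, -4.0, -2.0, 0.0]:    # x ~ -8.1 is roughly matter-radiation equality
    print(f"x = {x:6.1f}   U = {sol.sol(x)[0]:8.3f}")
```

In pure radiation domination $\zeta=-2$ and the source vanishes, so U stays at zero, exactly as stated above; once matter starts to dominate, the source switches on and U grows roughly linearly in $x$.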
The numerical value of Ω_DE today can be fixed at any desired value by choosing the parameter m of the nonlocal model (just as in ΛCDM one can choose Ω_Λ by fixing the value of the cosmological constant). In Fig. 1, m has been chosen so that, today, Ω_DE ≃ 0.68, i.e. Ω_M ≃ 0.32. This is obtained by setting γ ≃ 0.050, which corresponds to m ≃ 0.67H_0. Of course, the exact value of Ω_M, and therefore of m, will eventually be fixed by Bayesian parameter estimation within the model itself, as we will discuss below. We also define, as usual, the effective equation-of-state parameter of dark energy, w_DE, from the conservation equation ρ̇_DE + 3H(1 + w_DE)ρ_DE = 0. Once m is fixed so as to obtain the required value of Ω_M, ρ_DE(x) is fixed, and therefore we get a pure prediction for the evolution of w_DE with time. The right panel of Fig. 2 shows the result, plotted as a function of redshift z. We observe that w_DE(z) is on the phantom side, i.e. w_DE(z) < −1. This is a general consequence of eq. (118), together with the fact that, in the RT model, ρ_DE > 0, ρ̇_DE > 0, and H > 0, so (1 + w_DE) must be negative. Near the present epoch we can compare the numerical evolution with the widely used fitting function [71,72] w_DE(a) ≃ w_0 + w_a(1 − a) (where a = e^x), and we get w_0 ≃ −1.04, w_a ≃ −0.02. These results are quite interesting, because they show that, at the level of background evolution, the nonlocal term generates an effective DE, which produces a self-accelerating solution with w_DE close to −1. It is interesting to observe that, in terms of the field U = −□^{-1}R, eq. (60) can be replaced by the system of eqs. (120)–(121). We now observe that, under a shift U(x) → U(x) + u_0, where u_0 is a constant, eq. (121) is unchanged, while (u_0 g_μν)^T = u_0 g_μν, since ∇^μ g_μν = 0. Then eq. (120) acquires a term proportional to u_0 g_μν, so in principle one could choose u_0 so as to cancel any vacuum energy term in T_μν. In particular, given that m ∼ H_0, one can cancel a constant positive vacuum energy T_00 = ρ_vac = O(m_Pl^4) by choosing a negative value of u_0 such that −u_0 = O(m_Pl²/H_0²) ∼ 10^{120} (vice versa, choosing a positive value of u_0 amounts to introducing a positive cosmological constant). This observation is interesting, but unfortunately by itself it is not a solution of the cosmological constant problem. We are simply trading the large value of the vacuum energy for a large value of the shift parameter in the transformation U(x) → U(x) + u_0, and the question is now why the shifted field should have an initial condition U(t_*) = 0, or anyhow U(t_*) = O(1), rather than an astronomically large initial value. The next point to be discussed is how the cosmological background evolution depends on the choice of initial conditions (112). To this purpose, let us consider first eq. (116). In any given epoch, such as RD, MD, or e.g. an earlier inflationary de Sitter (dS) phase, the parameter ζ ≡ h′/h has an approximately constant value ζ_0, with ζ_0 = 0 in dS, ζ_0 = −2 in RD and ζ_0 = −3/2 in MD. In the approximation of constant ζ, eq. (116) can be integrated analytically, and has the solution [45] U(x) = [6(2 + ζ_0)/(3 + ζ_0)] x + u_0 + u_1 e^{−(3+ζ_0)x}, where the coefficients u_0, u_1 parametrize the general solution of the homogeneous equation U″ + (3 + ζ_0)U′ = 0. The constant u_0 corresponds to the reintroduction of a cosmological constant, as we have seen above. We will come back to its effect in Sect. 7.4. The other solution of the homogeneous equation, proportional to e^{−(3+ζ_0)x}, is instead a decaying mode, in all cosmological phases.
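As a quick cross-check of the constant-ζ solution just quoted, one can integrate the same equation at fixed ζ_0 and verify that the late-time slope of U approaches the coefficient 6(2 + ζ_0)/(3 + ζ_0) of the particular solution (a sketch under the same assumed equation as above):

```python
# Constant-zeta check: integrate U'' + (3 + z0) U' = 6 (2 + z0) and compare the
# late-time slope U' with the analytic value 6(2 + z0)/(3 + z0).
from scipy.integrate import solve_ivp

for era, z0 in [("dS", 0.0), ("RD", -2.0), ("MD", -1.5)]:
    rhs = lambda x, y, z0=z0: [y[1], -(3 + z0)*y[1] + 6*(2 + z0)]
    sol = solve_ivp(rhs, [0.0, 20.0], [0.0, 0.0], rtol=1e-9, atol=1e-12)
    slope_num = sol.y[1, -1]                     # U'(x) at the end of the run
    slope_ana = 6*(2 + z0)/(3 + z0)
    print(f"{era}: U' -> {slope_num:.3f} (numerical) vs {slope_ana:.3f} (analytic)")
# In dS the slope is 4, which is the origin of the value U(x_f) ~ 4*Delta_N quoted
# later; in RD the source vanishes and U stays constant.
```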
Thus, the solution with initial conditions U(t * ) =U(t * ) = 0 has a marginally stable direction, corresponding to the possibility of reintroducing a cosmological constant, and a stable direction, i.e. is an attractor in the u 1 direction. Perturbing the initial conditions is equivalent to introducing a non-vanishing value of u 0 and u 1 . We see that the introduction of u 0 will in general lead to differences in the cosmological evolution, which we will explore below, while u 1 corresponds to an irrelevant direction. In any case, it is reassuring that there is no growing mode in the solution of the homogeneous equation. Consider now eq. (115). Plugging eq. (123) into eq. (115) and solving for Y (x) we get [45] Y where In particular, in dS there is a growing mode with α + = (−3 + √ 21)/2 0.79. In RD both modes are decaying, and the mode that decays more slowly is the one with α + = (−5 + √ 13)/2 −0.70 while in MD again both modes are decaying, and α + = (−9 + √ 57)/4 −0.36. Thus, if we start the evolution in RD, in the space {u 0 , u 1 , a 1 , a 2 } that parametrizes the initial conditions of the auxiliary fields, there is one marginally stable direction and three stable directions. However, if we start from an early inflationary era, there is a growing mode corresponding to the a 1 direction. Then Y will grow during dS (exponentially in x, so as a power of the scale factor), but will then decrease again during RD and MD. We will study the resulting evolution in Sect. 7.4, where we will see that even in this case a potentially viable background evolution emerges. In any case, it is important that in RD and MD there is no growing mode, otherwise the evolution would necessarily eventually lead us far from an acceptable FRW solution. This is indeed what happens in the model (58), where the homogeneous solutions associated to an auxiliary field are unstable both in RD and in MD (see app. A of [46]), and is the reason why we have discarded that model. The RR model Similar results are obtained for the RR model. Specializing to a FRW background, and using the dimensionless field W (t) = H 2 (t)S(t) instead of S(t), eqs. (80)-(83) become where again γ = m 2 /(9H 2 0 ), ζ = h /h and From this form of the equations we see that there is again an effective dark energy density, given by ρ DE = ρ 0 γY . To actually perform the numerical integration of these equations, and also to study the perturbations, it can be more convenient to use a variable V (t) = H 2 0 S(t) instead of W (t) = H 2 (t)S(t). Then eqs. (126)-(128) are replaced by In eqs. (131) where Ω (x) = Ω M e −3x + Ω R e −4x . Then eqs. (131) and (132), with h 2 given by eq. (130) and ζ given by eq. (133), provide a closed set of second order equations for V and U, whose numerical integration is straightforward. The result of the numerical integration is shown in Fig. 3. Similarly to eq. (112) for the RT model, we set initial conditions U = U = V = V = 0 at some initial time x in deep in RD (we will see in Sect. 7.4.1 how the results depend on this choice). In this case we get w 0 −1.14, w a 0.08 [47], so the RR model differs more from Λ CDM, compared to the RT model, at the level of background evolution. In the RR model, to obtain for instance a value Ω M = 0.32, i.e. Ω DE = 0.68, we must fix m 0.28H 0 . The dependence on the initial conditions can be studied as before. The equation for U is the same as in the RT model, so the homogeneous solution for U is again u 0 + u 1 e −(3+ζ 0 )x . 
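The exponents quoted above are simple closed-form numbers; a one-line arithmetic check (expressions taken directly from the text):

```python
# Check of the exponents alpha_+ quoted for the Y modes in de Sitter (zeta0 = 0),
# RD (zeta0 = -2) and MD (zeta0 = -3/2).
from math import sqrt

alpha_dS = (-3 + sqrt(21)) / 2    # de Sitter: growing mode
alpha_RD = (-5 + sqrt(13)) / 2    # radiation domination: slowest-decaying mode
alpha_MD = (-9 + sqrt(57)) / 4    # matter domination: slowest-decaying mode
print(f"dS: {alpha_dS:.2f}, RD: {alpha_RD:.2f}, MD: {alpha_MD:.2f}")
# -> dS: 0.79, RD: -0.70, MD: -0.36, as quoted in the text
```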
The homogeneous equation for V is the same as that for U, so similarly the homogeneous solution for V is v_0 + v_1 e^{−(3+ζ_0)x}. In the early Universe we have −2 ≤ ζ_0 ≤ 0 and all these terms are either constant or exponentially decreasing, which means that the solutions for both U and V are stable in MD and RD, as well as in a previous inflationary stage. From this point of view the RR model differs from the RT model which, as we have seen, has a growing mode during a dS phase. Note also that the constant u_0 no longer has the simple interpretation of a cosmological constant term since, contrary to eq. (107), eq. (83) is not invariant under U → U + u_0.

Cosmological perturbations

In order to assess the viability of these models, the next step is the study of their cosmological perturbations. This has been done in [65]. Let us consider first the scalar perturbations. We work in the Newtonian gauge, and write the metric perturbations as in eq. (134), in terms of the Bardeen potentials Φ and Ψ. We then add the perturbations of the auxiliary fields (see below), linearize the equations of motion, and go to momentum space. We denote by k the comoving momenta, and define κ ≡ k/k_eq, where k_eq = a_eq H_eq is the wavenumber of the mode that enters the horizon at matter–radiation equilibrium. To illustrate our numerical results, we use as reference values κ = 0.1, 1, 5. The mode with κ = 5 entered the horizon already during RD, while the mode with κ = 1 re-entered at matter–radiation equality. In contrast, the mode with κ = 0.1 was outside the horizon during RD and most of MD, and re-entered at z ≃ 1.5. Overall, these three values of k illustrate well the k dependence of the results, covering the range of k relevant to the regime of linear structure formation. We summarize here the results for the RT and RR models, referring the reader to [65] for details and for the (rather long) explicit expression of the perturbation equations.

RT model. In the RT model we expand the auxiliary fields into their background values plus perturbations δU and δS_μ. In FRW the background value of S_i vanishes because there is no preferred spatial direction, but of course its perturbation δS_i is a dynamical variable. As with any vector, we can decompose it into a transverse and a longitudinal part. Since we restrict here to scalar perturbations, we only retain the longitudinal part δS, and write δS_i = ∂_i(δS). Thus, in this model the scalar perturbations are given by Ψ, Φ, δU, δS_0 and δS, see also [68,73]. Fig. 4 shows the time evolution of the Fourier modes of the Bardeen variable Ψ_k for our three reference values of κ (blue solid line) and compares it with the corresponding result in ΛCDM (purple dashed line). As customary, we actually plot k^{3/2}Ψ_k, whose square gives the variance of the field per unit logarithmic interval of momentum; the ensemble average entering this relation is over the initial conditions, which we take to be the standard adiabatic initial conditions. Note also that, if we start the evolution with real initial conditions on Ψ_k, it remains real along the evolution. We see from Fig. 4 that, up to the present time x = 0, the evolution of the perturbations is well behaved, and very close to that of ΛCDM, even if in the cosmological future the perturbations will enter the nonlinear regime much earlier than for ΛCDM. In particular, the perturbations of the 'would-be' ghost field U are small up to the present time, with k^{3/2}δU_k ∼ 10^{-4}. Observe that in the cosmological future the perturbations become nonlinear, both for Ψ_k and for δU_k, with the nonlinearity kicking in earlier for the lower-momentum modes.
This can be understood as follows. Any classical instability possibly induced by the nonlocal term will only develop on a timescale t such that mt is (much) larger than one. However, we have seen that, to reproduce the typical observed value of Ω_M, m is of order H_0, and in fact numerically smaller (m ≃ 0.67H_0 for the RT model and m ≃ 0.28H_0 for the RR model; see Sect. 7.3 for accurate Bayesian parameter estimation). Thus, instabilities induced by the nonlocal term, if present, only develop on a timescale of order a few times H_0^{-1} or longer, and therefore in the cosmological future. Besides following the cosmological evolution of the fundamental perturbation variables, such as Ψ_k(x) (recall that x ≡ ln a(t) is our time-evolution variable, not to be confused with a spatial variable!), the behavior of the perturbations can also be conveniently described by some indicators of the deviations from ΛCDM. Two useful quantities are the functions µ(x; k) ≡ Ψ/Ψ_GR and Σ(x; k) ≡ (Φ − Ψ)/(Φ − Ψ)_GR, where the subscript 'GR' denotes the same quantities computed in GR, assuming a ΛCDM model with the same value of Ω_M as the modified gravity model. The advantage of using Ψ and Φ − Ψ as independent combinations is that the former enters the equations of motion of non-relativistic particles, while the latter determines light propagation. The numerical results for the RT model are shown in the upper panels of Fig. 5. We see that, in this model, the deviations from ΛCDM are very tiny, of order 1% at most, over the relevant wavenumbers and redshifts. In forecasts for experiments, µ(x; k) is often approximated as a function independent of k, with a power-like dependence on the scale factor, of the form µ(a) ≃ 1 + µ_s a^s (eq. (140)). For the RT model we find that the scale-independent approximation is good, in the range of momenta relevant for structure formation, but the functional form (140) only catches the gross features of the a-dependence. The lower panel of Fig. 5 compares the function µ(a, k) computed numerically for κ = 5 with the function (140), setting µ_s = 0.012 and s = 0.8. Another useful indicator of deviations from GR is the effective Newton's constant, defined so that the Poisson equation for the Bardeen variable Φ takes the same form as in GR, with Newton's constant G replaced by a function G_eff(x; k). In the RT model, for modes inside the horizon, the explicit expression is given in [65,73], in terms of the variable k̂ = k/(aH). This again tells us that, for the RT model, deviations from ΛCDM in structure formation are quite tiny. We will see in more detail in Sect. 7.3 how the predictions of the model compare with those of ΛCDM for CMB, SNe, BAO and structure formation data.

RR model. In the RR model, in the study of perturbations we find it convenient to use U and V = H_0²S (rather than W = H²(t)S). In the scalar sector we expand the metric as in eq. (134) and the auxiliary fields as background values plus perturbations δU(t, x) and δV(t, x). Thus, in this model the scalar perturbations are described by Ψ, Φ, δU and δV. The results for the evolution of Ψ are shown in Fig. 6. We see that again the perturbations are well behaved, and very close to ΛCDM. Compared to the RT model, the deviations from ΛCDM are somewhat larger up to the present epoch. However, contrary to the RT model, they also stay relatively close to ΛCDM even in the cosmological future. The functions µ and Σ are shown as functions of the redshift in the upper panels of Fig. 7, for our three reference values of the wavenumber.
At a redshift such as z = 0.5, typical for the comparison with structure formation data, they are of order 5%, so again larger than in the RT model. For the RR model µ, as a function of the scale factor, is well reproduced by eq. (140), with see the lower panel of Fig. 7. By comparison, the forecast for EUCLID on the error σ (µ s ) over the parameter µ s , for fixed cosmological parameters, is σ (µ s ) = 0.0046 for s = 1 and σ (µ s ) = 0.014 for s = 3 [74]. Thus (barring the effect of degeneracies with other cosmological parameters), we expect that the accuracy of EUCLID should be sufficient to test the prediction for µ s from the RR model, and possibly also for the RT model. Finally, the effective Newton's constant in the RR model, for sub-horizon scales, is given by Thus in the sub-horizon limit, G eff (x; k) becomes independent of k. However, contrary to the RT model, it retains a time dependence. The behavior of G eff as a function of the redshift is shown in the lower right panel of Fig. 7. Nonlinear structure formation has also been studied, for the RR model, using N-body simulations [70]. The result is that, in the high-mass tail of the distribution, massive dark matter haloes are slightly more abundant, by about 10% at M ∼ 10 14 M /h 0 . The halo density profile is also spatially more concentrated, by about 8% over a range of masses. 16 Tensor perturbations have also been studied in [69,76], for both the RR and RT models, and again their evolution is well behaved, and very close to that in Λ CDM. Bayesian parameter estimation and comparison with Λ CDM The results of the previous sections show that the RR and RT nonlocal models give a viable cosmology at the background level, with an accelerated expansion obtained without the need of a cosmological constant. Furthermore, their cosmological perturbations are well-behaved and in the right ballpark for being consistent with the data, while still sufficiently different from Λ CDM to raise the hope that the models might be distinguishable with present or near-future observations. We can therefore go one step forward, and implement the cosmological perturbations in a Boltzmann code, and perform Bayesian parameter estimation. We can then compute the rele-vant chi-squares or Bayes factor, to see if these models can 'defy' Λ CDM, from the point of view of fitting the data. We should stress that this is a level of comparison with the data, and with Λ CDM, that none of the other infrared modifications of GR widely studied in the last few years has ever reached. The relevant analysis has been performed in [75], using the Planck 2013 data then available, together with supernovae and BAO data, and updated and extended in [69], using the Planck 2015 data. In particular, in [69] we tested the nonlocal models against the Planck 2015 TT, TE, EE and lensing data from Cosmic Microwave Background (CMB), isotropic and anisotropic Baryonic Acoustic Oscillations (BAO) data, JLA supernovae, H 0 measurements and growth rate data, implementing the perturbation equations in a modified CLASS [77] code. As independent cosmological parameters we take the Hubble parameter today H 0 = 100h km s −1 Mpc −1 , the physical baryon and cold dark matter density fractions today ω b = Ω b h 2 and ω c = Ω c h 2 , respectively, the amplitude A s and the spectral tilt n s of primordial scalar perturbations and the reionization optical depth τ re , so we have a 6-dimensional parameter space. 
For the neutrino masses we use the same values as in the Planck 2015 baseline analysis [78], i.e. two massless neutrinos and one massive neutrino, with Σ_ν m_ν = 0.06 eV, and we fix the effective number of neutrino species to N_eff = 3.046. Observe that, in the spatially flat case that we consider, in ΛCDM the dark energy density fraction Ω_Λ can be taken as a derived parameter, fixed in terms of the other parameters by the flatness condition. Similarly, in the nonlocal models m² can be taken as a derived parameter, fixed again by the flatness condition. Thus, not only do the nonlocal models have the same number of parameters as ΛCDM, but the independent parameters can be chosen so that they are exactly the same in the nonlocal models and in ΛCDM. The results are shown in Table 1.

Table 1. Parameter tables for ΛCDM and the nonlocal models. Besides the six parameters that we have chosen as our fundamental parameters, we also give the values of the derived quantities z_re (the redshift of reionization) and σ_8 (the variance of the linear matter power spectrum in a radius of 8 Mpc today). For the RR and RT models, among the derived parameters, we also give γ = m²/(9H_0²). From [69].

In the left table we combine the Planck CMB data with JLA supernovae and with a rather complete set of BAO data, described in [69]. In the right table we also add a relatively large value of H_0, of the type suggested by local measurements. The most recent analysis of local measurements, which appeared after [69] was finished, gives H_0 = 73.02 ± 1.79 [79]. In the last row we give the difference in χ² with respect to the model that has the lowest χ². Let us recall that, according to the standard Akaike or Bayesian information criteria, in the comparison between two models with the same number of parameters, a difference |Δχ²| ≤ 2 implies statistical equivalence between the two models compared, while |Δχ²| ≳ 2 suggests "weak evidence", and |Δχ²| ≳ 6 indicates "strong evidence". Thus, for the case BAO+Planck+JLA, ΛCDM and the RT model are statistically equivalent, while the RR model is on the border of being strongly disfavored. Among the various parameters, a particularly interesting result concerns H_0, which in the nonlocal models is predicted to be higher than in ΛCDM. Thus, adding a high prior on H_0, of the type suggested by local measurements, goes in the direction of favoring the nonlocal models, as we see from the right table. In this case ΛCDM and the RT model are still statistically equivalent, although now with a slight preference for the RT model, while the RR model becomes only slightly disfavored with respect to the RT model, χ²_RR − χ²_RT ≃ 2.8, and statistically equivalent to ΛCDM. From the values in the Table, in the case BAO+Planck+JLA, we find, for the total matter fraction Ω_M = (ω_c + ω_b)/h_0², the mean values Ω_M = {0.308, 0.300, 0.288} for ΛCDM, the RT and the RR models, respectively, and h_0²Ω_M = {0.141, 0.142, 0.143}, which is practically constant over the three models. Using BAO+Planck+JLA+(H_0 = 73.8) these numbers change little, and become Ω_M = {0.305, 0.298, 0.286} for ΛCDM, the RT and the RR model; see [69] for full details and for plots of the one- and two-dimensional likelihoods. In particular, the left panel of Fig. 8 shows the two-dimensional likelihood in the (Ω_M, σ_8) plane. We see that the nonlocal models predict slightly higher values of σ_8 and slightly lower values of Ω_M.
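The quoted means already encode the point about H_0; a back-of-the-envelope sketch (rounded mean values only, no posterior information):

```python
# Infer the mean reduced Hubble parameter for each model from the quoted mean values
# of Omega_M and h0^2 * Omega_M (BAO+Planck+JLA case).
models  = ["LCDM", "RT", "RR"]
Omega_M = [0.308, 0.300, 0.288]
h2Om    = [0.141, 0.142, 0.143]

for name, om, h2om in zip(models, Omega_M, h2Om):
    h0 = (h2om / om) ** 0.5
    print(f"{name:4s}: h0 ~ {h0:.3f}  ->  H0 ~ {100.0*h0:.1f} km/s/Mpc")
# The nonlocal models come out with a higher H0 than LCDM, which is why a high
# local prior on H0 shifts the comparison in their favor.
```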
The fit to the CMB temperature power spectrum, obtained with the data in Table 1, is shown in the right panel of Fig. 8. 18 Extensions of the minimal models The RR and RT models, as discussed above, are a sort of 'minimal models', that allow us to begin to explore, in a simple and predictive setting, the effect of nonlocal terms. However, even if the general philosophy of the approach should turn out to be correct, it is quite possible that the actual model that describes Nature will be more complicated. A richer phenomenology can indeed be obtained with some well-motivated extensions of these models, as we discuss in this section. Effect of a previous inflationary era The minimal models studied above are characterized by the fact that the initial conditions for the auxiliary fields and their derivatives are set to zero during RD. As we have discussed in Sect. 6, the choice of initial conditions on the auxiliary fields is part of the definition of the model, and different initial conditions define different nonlocal models. In principle, the correct prescription should come from the fundamental theory. We now consider the effect of more general initial conditions, 18 We should also stress that the analysis in [69,75] has been performed using, for the sum of the neutrino masses, the value of the Planck baseline analysis [78], ∑ ν m ν = 0.06 eV, which is the smallest value consistent with neutrino oscillations. Increasing the neutrino masses lowers H 0 . In Λ CDM this would increase the tension with local measurements, which is the main reason for choosing them in this way in the Planck baseline analysis. However, we have seen that the nonlocal models, and particularly the RR model, predict higher values of H 0 , so they can accommodate larger neutrino masses without entering in tension with local measurements. A larger prior on neutrino masses would therefore favor the nonlocal models over Λ CDM. This possibility is currently being investigated [80]. in particular of the type that could be naturally generated by a previous phase of inflation. 19 RT model. We consider first the effect of u 0 in the RT model [46]. From eq. (123) we see that the most general initial condition of U amounts to a generic choice of the parameters u 0 and u 1 , at some given initial time. The parameter u 1 is associated to a decaying mode, so the solution obtained with a nonzero value of u 1 is quickly attracted toward that with u 1 = 0. However, u 0 is a constant mode. We have seen in eq. (122) that, in the RT model, the introduction of u 0 corresponds to adding back a cosmological constant term. From eq. (122) we find that the corresponding value of the energy fraction associated to a cosmological constant, Ω Λ , is given by Ω Λ = γu 0 . In the case u 0 = 0, for the RT model, γ 5 × 10 −2 , see Table 1. Then the effect of a non-vanishing u 0 will be small as long as |u 0 | 20. However larger values of u 0 can be naturally generated by a previous inflationary era. Indeed, we see from eq. (123) that in a deSitter-like inflationary phase, where ζ 0 0, if we start the evolution at an initial time t i at beginning an inflationary era and set U(t i ) =U(t i ) = 0, we get, during inflation where x i = x(t i ). At the end of inflation, x = x f , we therefore have where ∆ N = x f − x i 1. Consider next the auxiliary field Y (x). If we choose the initial conditions at the beginning of inflation so that the growing mode is not excited, i.e. a 1 = 0 in eq. (124), at the end of inflation we also have Y (x f ) 4∆ N. 
These values for U(x_f) and Y(x_f) can be taken as initial conditions for the subsequent evolution during RD. The corresponding results were shown in [46]. This choice of a_1 is, however, a form of tuning of the initial conditions on Y. Here we consider the most generic situation, in which a_1 ≠ 0. In this case, during inflation Y will grow to a value of order exp{0.79ΔN}, where ΔN is the number of efolds and α_+ ≃ 0.79 in a de Sitter-like inflation. It will then decrease as exp{−0.70x} during the subsequent RD phase, see eq. (125). Despite the growth during inflation (exponential in x, so power-like in the scale factor a), the DE density associated to Y, ρ_DE = γYρ_0, is still totally negligible in the inflationary phase, because ρ_0 = O(meV^4) is utterly negligible compared to the energy density during inflation. Thus, this growth of Y does not affect the dynamics at the inflationary epoch, nor in the subsequent RD era. Nevertheless, this large initial value at the end of inflation can produce a different behavior of Y near the present epoch, when the effective DE term γY(x) becomes important. 20 To be more quantitative, let us recall that, if inflation takes place at a scale M ≡ (ρ_infl)^{1/4}, the minimum number of efolds required to solve the flatness and horizon problems is given by eq. (148). The inflationary scale M can range from a maximum value of order O(10^16) GeV (otherwise, for larger values, the effect of GWs produced during inflation would have already been detected in the CMB temperature anisotropies) to a minimum value around 1 TeV, in order not to spoil the predictions of the standard big-bang scenario. Assuming instantaneous reheating, the value of the scale factor a_* at which inflation ends and RD begins is given by ρ_infl = ρ_{R,0}/a_*^4, where ρ_{R,0} is the present value of the radiation energy density, and as usual we have set the present value a_0 = 1. Plugging in the numerical values, for x_* = ln a_* we find x_* ≃ −65.9 + ln(10^16 GeV/M). Recall also that RD ends and MD starts at x = x_eq ≃ −8.1. Thus, assuming that the number of efolds ΔN is the minimum necessary to solve the horizon and flatness problems, during RD (i.e. for x_* < x < x_eq) we have the behavior given in eq. (150), where we used the fact that, during RD, Y(x) ∝ e^{−0.70x}, see eq. (125).

19 We are assuming here that the effective nonlocal theory given by the RR or RT model is still valid at the large energy scales corresponding to primordial inflation. Whether this is the case can only be ascertained once one has understood the mechanism that generates these nonlocal effective theories from a fundamental theory.

20 Two caveats are however necessary here. First, as already mentioned, we are assuming that the nonlocal models are valid in the early inflationary phase. Second, we are assuming that the large value of Y generated during inflation is still preserved by reheating. During reheating the energy density of the inflaton field is transferred to the radiation field. Since γY is just the DE energy density, it is in principle possible that the energy density associated to Y is also transferred to the radiation field, just as the inflaton energy density is. In this case the evolution could resume at the beginning of RD with a small initial value of Y. Since, during RD, Y only has decaying modes, the solution would then be quickly attracted back to that obtained by setting Y(x_*) = 0 at some x_* in RD.
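The relation for x_* quoted above is easy to evaluate; a short sketch (natural logarithms, as implied by x = ln a):

```python
# Evaluate x_* ~ -65.9 + ln(1e16 GeV / M) for a few inflationary scales M and compare
# with matter-radiation equality at x_eq ~ -8.1.
from math import log

x_eq = -8.1
for M_GeV in (1e16, 1e10, 1e3):
    x_star = -65.9 + log(1e16 / M_GeV)
    print(f"M = {M_GeV:8.1e} GeV:  x_* ~ {x_star:6.1f},  RD spans {x_eq - x_star:5.1f} e-folds")
# A lower inflationary scale ends inflation at a larger (less negative) x_*, leaving
# fewer RD e-folds over which the mode Y ~ exp(-0.70 x) can decay.
```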
In Fig. 9 we show the result for ρ_DE and w_DE obtained by starting the evolution from a value x_in = −15 deep in RD, setting as initial conditions U(x_in) = 4ΔN, U′(x_in) = 0, and with Y(x_in) as determined by eq. (150). We show the result for three different values of the inflationary scale M, and also show again, as a reference curve, the result for the minimal RT model. We see that the results, already for the background evolution, are quantitatively different from the minimal case. Comparing with the observational limits on w_DE(z) from Fig. 5 of the Planck DE paper [81], we see that the predictions of these non-minimal nonlocal models for w_DE(z) are still consistent with the observational bounds, so even these models are observationally viable, at least at the level of background evolution. Observe that now, in the past, w_DE(z) is no longer phantom, since ρ_DE(x) = γY(x) now starts from a large initial value and, at the beginning, it decreases. Then, w_DE(z) crosses the phantom divide at a finite redshift. RR model. If we start the evolution at an initial time x_i at the beginning of an inflationary era with initial conditions V(x_i) = V′(x_i) = 0, we find that, at the end of inflation, V(x_f) ≃ 2(ΔN)²/(3h²_dS). This value is totally negligible, since even for an inflationary scale as small as M = 1 TeV, h²_dS ∼ 10^15. Thus, as initial conditions for the subsequent evolution in RD, we can take U(x_in) = 4ΔN and V(x_in) = 0, at a value x_in deep in RD. Of course, one could take an initial value V(x_in) = O(1), but this would not really affect the result. The point is that, for V, inflation does not generate a very large value at the beginning of RD. The result is shown in Fig. 10, where, again, we express ΔN in terms of the inflationary scale using eq. (148). We see that the RR model with a large initial value of u_0 gets closer and closer to ΛCDM, and correspondingly w_DE(z = 0) gets closer to −1. Observe also that in the cosmological future ρ_DE(x) continues to grow, although slowly, see the lower panel in Fig. 10. From the point of view of the comparison with observations, a sensible strategy is therefore to start from the minimal RR model, since it predicts the largest deviations from ΛCDM and therefore can be more easily falsified (or verified). Indeed, already the next generation of experiments, such as EUCLID, should be able to discriminate clearly the minimal RR model from ΛCDM. However, one must keep in mind that the non-minimal model with a large value of u_0 is at least as well motivated physically as the 'minimal' model, but more difficult to distinguish from ΛCDM. The RR model with a large value of u_0 is also conceptually interesting because it gives an example of a dynamical DE model that effectively generates a dark energy that, at least up to the present epoch, behaves almost like a cosmological constant, without however relying on a vacuum energy term, and therefore without suffering from the lack of technical naturalness associated with vacuum energy. Observe that these nonlocal models do not solve the coincidence problem, since in any case we must choose m of order H_0, just as in ΛCDM we must choose the cosmological constant Λ of order H_0². However, depending on the physical origin of the nonlocal term, the mass parameter m might not suffer from the problem of large radiative corrections that renders the cosmological constant technically unnatural.
Observe also that, just as in ΛCDM, the inflationary sector is a priori distinct from the sector that provides acceleration at the present epoch. Thus, one can in principle supplement the nonlocal models with any inflationary sector at high energy, adding an inflaton field with the desired inflaton potential, just as one does for ΛCDM. However, in these nonlocal models, and particularly in the RR model, there is a very natural choice, which is to connect them to Starobinsky inflation: in a model where a nonlocal term proportional to R□^{-2}R is already present, it is quite natural to also admit a local R² term. As first suggested in [48,50], one can then consider a model containing both terms, where M_S ≃ 10^13 GeV is the mass scale of the Starobinsky model and Λ_S^4 = M_S²m². As discussed in [50], at early times the nonlocal term is irrelevant and we recover the standard inflationary evolution, while at late times the local R² term becomes irrelevant and we recover the evolution of the nonlocal models, although with initial conditions on the auxiliary fields determined by the inflationary evolution. A general study of the effect of the initial conditions on the auxiliary fields in the RR model has recently been performed in [82]. In particular, it has been observed that there is a critical value ū_0 ≃ −14.82 + 0.67 log γ. For initial conditions u_0 > ū_0 the evolution is of the type that we have discussed above (denoted as 'path A' in [82]). For u_0 < ū_0 a qualitatively different solution ('path B') appears. On this second branch, after the RD and MD epochs, there is again a DE-dominated era, where however w_DE gets close to −1 while still remaining in the non-phantom region w_DE > −1 (and, in the cosmological future, it asymptotically approaches an unusual phase with w_DE = 1/3, Ω_DE → −∞ and Ω_M → +∞, see Fig. 4 of [82]). In Fig. 11 we show the evolution in the recent epoch for such a solution, for three different values u_0 = −30, −60, −100. As we see from eq. (129), the DE density in this case starts in RD from a non-vanishing value ρ_DE(x_in)/ρ_0 = (γ/4)u_0². For instance, for u_0 = −60, requiring Ω_M = 0.3 fixes γ ≃ 0.00157, so ρ_DE(x_in)/ρ_0 ≃ 1.4. It then decreases smoothly up to the present epoch, where ρ_DE(x = 0)/ρ_0 ≃ 0.7, resulting in a non-phantom behavior for w_DE(z). 21 For sufficiently large values of −u_0, this second branch is still cosmologically viable (while we see from the figure that, e.g., u_0 = −30 gives a value of w_DE(0) too far from −1 to be observationally viable), and it has been compared to JLA supernovae in [82]. Observe, however, that a previous inflationary phase would rather generate the initial conditions corresponding to 'path A' solutions.
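Before moving on, a quick numerical check of the 'path B' numbers quoted above (using only the relation and values given in the text):

```python
# Check of the 'path B' numbers quoted above for the RR model: the initial effective
# DE density is rho_DE(x_in)/rho_0 = (gamma/4) * u0^2 (values quoted for u0 = -60).
u0, gamma = -60.0, 0.00157
print(f"rho_DE(x_in)/rho_0 = {0.25 * gamma * u0**2:.2f}")   # ~1.4, as quoted
# The evolution then decreases smoothly to rho_DE(0)/rho_0 ~ 0.7 at the present epoch.
```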
Exploring the landscape of nonlocal models

The study of nonlocal infrared modifications of GR is a relatively recent research direction, and one needs some orientation as to which nonlocal models might be viable and which are not. At the present stage, the main reason for exploring variants of the models presented is not just to come up with one more nonlocal model that fits the data. Indeed, with the RT and RR models, both in their minimal and non-minimal forms discussed above, we already have a fair number of models to test against the data. Rather, our main motivation at present is that identifying the features of the nonlocal models that are viable might shed light on the underlying mechanism that generates their specific form of nonlocality from a fundamental local theory. A first useful hint comes from the fact, remarked in Sect. 4.2, that at the level of models defined by equations of motion such as eqs. (58) or (60), models where □^{-1} acts on a tensor such as G_μν or R_μν are not cosmologically viable, while models involving □^{-1}R, such as the RT model, are viable. A similar analysis can be performed for models defined directly at the level of the action. At quadratic order in the curvature, a basis for the curvature-squared terms is given by R_{μνρσ}², R_{μν}² and R². Actually, for cosmological applications it is convenient to trade the Riemann tensor R_{μνρσ} for the Weyl tensor C_{μνρσ}. A natural generalization of the nonlocal action (61) is then obtained by including the three terms μ_1 R□^{-2}R, μ_2 C_{μνρσ}□^{-2}C^{μνρσ} and μ_3 R_{μν}□^{-2}R^{μν}, where μ_1, μ_2 and μ_3 are parameters with dimension of squared mass. This extended model has been studied in [76], where it has been found that the term R_{μν}□^{-2}R^{μν} is ruled out, since it gives instabilities in the cosmological evolution already at the background level. The Weyl-squared term instead does not contribute to the background evolution, since the Weyl tensor vanishes in FRW, and it also has well-behaved scalar perturbations. However, its tensor perturbations are unstable [76], which again rules out this term. These results indicate that models in which the nonlocality involves □^{-1} applied to the Ricci scalar, such as the RR and RT models, play a special role. This is particularly interesting since, as we saw in eq. (63), a term R□^{-2}R has a specific physical meaning, i.e. it corresponds to a diff-invariant mass term for the conformal mode. The same holds for the RT model, since at linearized order over Minkowski it is the same as the RR model. This provides an interesting direction of investigation for understanding the physical origin of these nonlocal models, which we will pursue further in Sect. 8. One can then further explore the landscape of nonlocal models, focusing on extensions of the RR model. Indeed, already the RT model can be considered as a nonlinear extension of the RR model, since the two models become the same when linearized over Minkowski. An action for the RT model would probably include further nonlinear terms besides R□^{-2}R, such as higher powers of the curvature associated with higher powers of □^{-1}. We have seen in Sect. 7.3 that the RT model appears to be the one that best fits the data, so it might be interesting to explore other physically motivated nonlinear extensions of the RR model. In particular, in [50] we have explored two possibilities that could be a sign of an underlying conformal symmetry, and that we briefly discuss next.

21 In the RT model the situation is different. Indeed, in [46] it was found that cosmological solutions such that, today, ρ_DE(x = 0)/ρ_0 is positive and equal to, say, 0.7, only exist for u_0 larger than a critical value ū_0 ≃ −12. Thus, again, 'path A' solutions exist only for u_0 larger than a critical value, but below this critical value there are no viable 'path B' solutions. The reason can be traced to the fact that in the RT model a non-vanishing initial value of u_0 corresponds to ρ_DE(x_in)/ρ_0 = γu_0, linear in u_0, while in the RR model it corresponds to ρ_DE(x_in)/ρ_0 = (γ/4)u_0². Thus, a negative value of u_0 in the RT model implies a negative initial value of ρ_DE(x_in)/ρ_0, resulting in a qualitatively different evolution. In particular, for u_0 negative and sufficiently large in absolute value, it becomes impossible to obtain ρ_DE(x = 0)/ρ_0 positive and equal to 0.7 by the present epoch.
The Δ_4 model. A first option is to consider the model whose effective quantum action is built with the Paneitz operator Δ_4 of eq. (39). This operator depends only on the conformal structure of the metric, and we have seen that it appears in the nonlocal anomaly-induced effective action in four dimensions. In FRW the model can again be localized using two auxiliary fields U and V, so that the full system of equations [50] consists of a modified Friedmann equation of the form h²(x) = [Ω(x) + (γ/4)U²]/(1 + γF), where F is a combination of U, V and their derivatives whose explicit form is given in [50], together with U″ + (5 + ζ)U′ + (6 + 2ζ)U = 6(2 + ζ) and a corresponding equation for V, and where as usual Ω(x) = Ω_M e^{−3x} + Ω_R e^{−4x}. The effective DE density can then be read off from ρ_DE(x)/ρ_0 = h²(x) − Ω(x). In the 'minimal' model, with initial conditions U(x_in) = U′(x_in) = V(x_in) = V′(x_in) = 0 at some value x_in deep in RD, we find that the evolution leads to w_DE(z = 0) ≃ −1.36, too far away from −1 to be consistent with the observations. Also, contrary to the RR model, there is no constant homogeneous solution for U in RD and MD, because of the presence of a term proportional to U in eq. (157). Rather, the homogeneous solutions are U = e^{α_± x}, with α_+ = −2 and α_− = −(3 + ζ_0), which are both negative in all three eras, and indeed whenever ζ_0 > −3, which is always the case in the early Universe. Therefore, there is no 'non-minimal' model in this case: no large value of U or V is generated during inflation, and in any case even a large initial value at the end of inflation would decrease exponentially in RD, quickly approaching the solution of the minimal model. Therefore, this model is not cosmologically viable.

The conformal RR model. Another natural modification related to conformal symmetry is to replace the □ operator in the RR model (or in the RT model) by the 'conformal d'Alembertian' [−□ + (1/6)R] [50], which again depends only on the conformal structure of space-time. We will call this the 'conformal RR model'. More generally, one can also study the model Γ^ξ_RR obtained by replacing □ with □ − ξR in the RR action [83], with ξ generic, although only ξ = 1/6 is related to conformal invariance. Its study is a straightforward repetition of the analysis for the RR model. We can localize it by introducing two fields U = (−□ + ξR)^{-1}R and S = (−□ + ξR)^{-1}U, and then eqs. (130)–(132) are replaced by a similar system in which the equation for U becomes U″ + (3 + ζ)U′ + 6ξ(2 + ζ)U = 6(2 + ζ), where again ζ ≡ h′/h. This model has some novel features compared to the ξ = 0 case [83]. Indeed, as we see from Fig. 12, the DE density goes asymptotically to a constant, and correspondingly the Hubble parameter also becomes constant, so the evolution approaches that of ΛCDM. This can also be easily understood analytically, observing that in a regime of constant (and non-vanishing) R, the operator (−□ + ξR)^{-1} acting on R reduces to (ξR)^{-1}. Then the nonlocal term in the action (159) reduces to a cosmological constant Λ = m²/(12ξ²), leading to a de Sitter era with H² = Λ/3 = m²/(6ξ)², i.e. H = m/(6ξ). Similarly, from eq. (161) we see that, asymptotically, U → 1/ξ. Note that this solution only exists for ξ ≠ 0. In particular, for the conformal RR model we have ξ = 1/6, so asymptotically H → m and h → 3γ^{1/2}, in full agreement with the numerical result in Fig. 12. As we see from the bottom panel in Fig. 12, for the physically more relevant case ξ = 1/6, w_DE(z) is very close to −1 for all redshifts of interest. Therefore, similarly to the non-minimal RR model discussed in Sect. 7.4.1, the conformal RR model is phenomenologically viable but more difficult to distinguish from ΛCDM, compared to the minimal RR model with ξ = 0.
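As a small consistency check of the asymptotic relations just quoted (the value of γ used below is purely illustrative, not a fit):

```python
# Asymptotic de Sitter limit of the xi-extended RR model, as quoted in the text:
# for constant R the nonlocal term acts as Lambda = m^2/(12 xi^2), so H = m/(6 xi)
# and U -> 1/xi; for the conformal case xi = 1/6 this gives H = m, i.e.
# h = H/H0 = 3*sqrt(gamma), since gamma = m^2/(9 H0^2).
from math import sqrt

def asymptotic_h_and_U(xi, gamma):
    m_over_H0 = 3.0 * sqrt(gamma)        # from gamma = m^2 / (9 H0^2)
    return m_over_H0 / (6.0 * xi), 1.0 / xi

gamma = 0.009                            # illustrative value only
h_inf, U_inf = asymptotic_h_and_U(1.0/6.0, gamma)
print(f"xi = 1/6:  h -> {h_inf:.3f}  (3*sqrt(gamma) = {3*sqrt(gamma):.3f}),  U -> {U_inf:.1f}")
```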
Toward a fundamental understanding

The next question is how one could hope to derive the required form of the nonlocalities from a fundamental local QFT. This is still largely work in progress, and we just mention here some relevant considerations, following refs. [48,49].

Perturbative quantum loops

The first idea that might come to mind is whether perturbative loop corrections can generate the required nonlocality. We have indeed seen that, among several other terms, the expansion in eq. (16) also produces a term of the form μ⁴R□^{-2}R, where μ is the mass of the relevant matter field (scalar, fermion or vector) running in the loops. One could then try to argue [84] that the previous terms in the expansion, such as R log(−□/μ²)R or μ²R□^{-1}R, do not produce self-acceleration in the present cosmological epoch, and just retain the μ⁴R□^{-2}R term in the hope of effectively reproducing the RR model. Unfortunately, it is easy to see that this idea does not work. Indeed, as we have seen in detail in Sect. 2, to obtain a nonlocal contribution we must be in the regime in which the particle is light with respect to the relevant scale, |□/μ²| ≳ 1. In the cosmological context the typical curvature scale is given by the Hubble parameter, so at a given time t a particle of mass μ gives a nonlocal contribution only if μ² ≲ H²(t). In the opposite limit μ² ≫ H²(t) it rather gives the local contribution (18). Thus, to produce a nonlocal contribution at the present cosmological epoch, we need μ² ≲ H_0². Then, retaining only the Einstein–Hilbert term and the μ⁴R□^{-2}R term, we get an effective action of the form (163), apart from a coefficient δ = O(1) that we have reabsorbed into μ⁴. Comparing with eq. (61), we see that we indeed get the RR model, but with a value of the mass scale m given by eq. (164), m ∼ μ²/m_Pl. Since μ ≲ H_0, for m we get the ridiculously small value m ≲ H_0(H_0/m_Pl) ∼ 10^{-60}H_0. To obtain a value of m of order H_0 we should rather use in eq. (164) a value μ ∼ (H_0 m_Pl)^{1/2}, which is of the order of the meV (such as a neutrino!). However, in this case μ ≫ H_0, and for such a particle at the present epoch we are in the regime (18), where the form factors are local. Therefore we cannot obtain the RR model with a value m ∼ H_0, as would be required for an interesting cosmological model. The essence of the problem is that, with perturbative loop corrections, the term R□^{-2}R in eq. (163) is unavoidably suppressed, with respect to the Einstein–Hilbert term, by a factor proportional to 1/m_Pl². 22

Dynamical mass generation for the conformal mode

The above considerations suggest looking for some non-perturbative mechanism that might dynamically generate the mass scale m [48]. An interesting hint, which follows from the exploration of the landscape of nonlocal models presented above, is that the models that are phenomenologically viable are only those, such as the RR and RT models, that have an interpretation in terms of a mass term for the conformal mode, as we saw in eq. (63). Thus, a mechanism that dynamically generates a mass for the conformal mode would automatically give the RR model, or one of its nonlinear extensions such as the RT model or the conformal RR model. Dynamical mass generation requires non-perturbative physics but, once non-perturbative effects are at work, it emerges as a very natural phenomenon, as we know from experience with several two-dimensional models, as well as from QCD.
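As an aside, the orders of magnitude in this estimate are easy to check numerically; the sketch below assumes the scaling m ∼ μ²/m_Pl stated above and uses standard approximate values of H_0 and the (reduced) Planck mass, which are not taken from the text:

```python
# Order-of-magnitude check of the perturbative-loop estimate m ~ mu^2 / m_Pl.
H0  = 1.4e-33     # eV, Hubble parameter today (approximate standard value)
mPl = 2.4e27      # eV, reduced Planck mass (approximate; the non-reduced value
                  # changes nothing at the order-of-magnitude level)

# (i) a particle light enough to give a nonlocal term today, mu <~ H0:
print(f"mu ~ H0        ->  m ~ {H0**2 / mPl:.1e} eV  ~ {H0 / mPl:.0e} * H0")
# (ii) the particle mass that would be needed to get m ~ H0:
print(f"m  ~ H0 needs  ->  mu ~ {(H0 * mPl) ** 0.5:.1e} eV  (meV scale, but then mu >> H0)")
```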
As we discussed, an effective mass term for the gluon, given by the gauge-invariant but nonlocal expression (53), is naturally generated in QCD. The question is therefore whether some sector of gravity can become non-perturbative in the IR, in particular in spacetimes of cosmological relevance such as de Sitter. Indeed, it is well known that in de Sitter space large IR fluctuations can develop. This is true already in the purely gravitational sector, since the graviton propagator grows without bound at large distances, and in fact the fastest-growing term comes from the conformal mode [85], although the whole subject of IR effects in de Sitter is quite controversial (see e.g. [86] for a recent discussion and references). Another promising direction for obtaining strong IR effects is given by the quantum dynamics of the conformal factor, which includes the effect of the anomaly-induced effective action. Indeed, the term σΔ_4σ in eq. (42) can induce long-range correlations, and possibly a phase transition reminiscent of the BKT phase transition in two dimensions [87]. Further work is needed to put this picture on firmer ground.

22 It has been pointed out in [82] that such a small value of m² could be compensated using a non-minimal model with a large value of |u_0|. This would however lead to a model indistinguishable from ΛCDM. Furthermore, with m/H_0 ∼ 10^{-60}, the required value of u_0 would be huge. For instance, in the RT model Ω_Λ = γu_0. Since γ ∼ (m/H_0)², this would require u_0 ∼ 10^{120}. In the RR model, where the effective DE is quadratic in u_0, this would still require u_0 ∼ 10^{60}. Observe that one should also tune the matter content so that the term μ⁴/□² in k_W(−□/μ²) vanishes, since we have seen that this term induces unacceptable instabilities in the tensor sector.
Predictors of residual force enhancement in voluntary contractions of elbow flexors

Highlights
• Residual force enhancement (RFE) occurs in voluntary elbow flexor contractions.
• RFE effects on electromyographic activity and torque depend on operating muscle length.
• RFE contributes to flattening of the torque–angle relationship of elbow flexors.
• RFE in the elbow flexor muscles is achieved primarily by an increase in neuromuscular efficiency.

Introduction

The capacity of a muscle to produce force is known to depend on the history of contraction. 1–3 Contraction histories that lead to an increase in force compared to the force predicted by the force–length (FxL) and force–velocity relationships have been of special interest to the scientific community. If we stretch an activated muscle and then hold it at a constant length, its isometric force, even after achieving a steady state, will exceed the force obtained if the muscle had been taken to that same length passively and then activated. This difference in isometric force production as a result of a previous active stretch is called residual force enhancement (RFE) and has been observed in in vitro/in situ muscle preparations ranging from the sarcomere to the muscle-tendon unit level. 4–7 Depending on the experimental conditions, the magnitude of RFE can vary from no force enhancement to an increase of 400%. 8 Despite the general acceptance of RFE as an important muscle property, its role in human movement and the underlying mechanisms that are responsible for its occurrence remain a matter of debate. 9,10 Human movements comprise a wide range of muscle contraction velocities, and eccentric contractions are an essential part of many everyday functional tasks. 11,12 The occurrence of RFE in vivo has been confirmed in most previous studies. 10,13–16 However, the observed increase in force output is generally less "dramatic" in vivo than it is in situ or in vitro, and results are less consistent than those described for isolated or in situ muscle preparations. For the special case of human voluntary contractions, the greatest mean value of RFE reported in the literature is approximately 16%. 10,13,17,18 To our knowledge, with the exception of some studies on the thumb adductors in the hand 18–20 and one recent investigation on RFE and bilateral force deficit in human elbow flexors, 21 no information is available regarding the role of RFE in upper limb muscles. Flexor muscles in the upper limb typically do not bear body weight but nevertheless are frequently exposed to eccentric contractions when carrying objects and weights. 22 In comparison to lower limb muscles, contractions of upper limb muscles usually have little tendon strain that would affect the relative length changes between fascicles and entire muscle-tendon units during everyday movements. 23–26 Considering that history-dependent properties are thought to be related to changes in the contractile element length, it may be that RFE is more pronounced in upper limb than in lower limb muscles. Contraction of the elbow flexors often involves large changes in muscle length. 23,27 In addition, the operating range of the elbow flexors is often found to include the ascending, the plateau, and the descending region of the FxL relationship, a feature that is not commonly observed in other muscles of humans.
28–30 This wide excursion of the elbow flexor muscles, with sarcomeres reaching lengths beyond 3.2 µm, 31 provides a unique opportunity for analyzing RFE in the different regions of the FxL relationship during voluntary contractions. Although it has been suggested that RFE is greatest on the descending limb of the FxL relationship in isolated fiber and muscle preparations, 2,32,33 the dependence of RFE on the regions of the FxL relationship has not been systematically analyzed for voluntary contractions. One important factor to keep in mind when analyzing RFE in human muscles is the complex neuromuscular control involved in voluntary force production. Maximal voluntary activation is harder to achieve for eccentric than for concentric and isometric contractions. 34 Maximal work/torque achieved during voluntary eccentric contractions is only a fraction of what a muscle could do if a neural regulatory mechanism did not limit the recruitment and/or discharge of motor units during eccentric contractions. 34,35 Since force enhancement mechanisms are thought to take place during the stretch and to depend on the activation and effort level, 19,36 the difficulty in reaching a truly maximal eccentric force may limit RFE in voluntary contractions. In addition, it has been suggested that activation (or its in vivo proxy, the electromyogram) seems to depend on the history of contraction. Oskouei and Herzog 19,36 and Jones et al. 37 showed that the activation required to exert a given submaximal force with the thumb adductor muscle is less if the contraction is preceded by active lengthening. In addition, Joumaa and Herzog 38 found that the metabolic energy cost of force production (ATP consumption per unit of force) was reduced after active stretch in skinned fibers of rabbit psoas muscle. It may be possible that the role of RFE in human voluntary contractions is mostly related to a reduction in metabolic energy cost rather than an increase in maximum force output. In this study, we aimed to test whether RFE occurs in voluntary contractions of the human elbow flexors and to examine if RFE depends on the region of the FxL relationship and the stretch amplitude. RFE was quantified by analyzing the maximum torque-generating potential on the ascending, plateau, and descending regions of the FxL relationship, and by measuring the corresponding electromyographic activity (EMG) and neuromuscular efficiency (NME) of the biceps brachii muscle for purely isometric reference contractions and for isometric contractions preceded by an active stretch ("enhanced contractions"). In addition, the dependence of RFE on the individual capacity for producing (negative) work during stretch was evaluated. We expected that RFE in the elbow flexors would manifest itself as (i) an increase in torque-generating potential and/or (ii) an increase in the NME of torque production. In addition, we expected RFE to be (i) greatest on the descending limb of the FxL relationship, (ii) greater for long compared to short stretches, and (iii) positively related to the subjects' relative capacity to produce (negative) work during stretch.

Subjects

Sixteen subjects (8 males and 8 females) participated in this study. All subjects gave free, written, informed consent, and all procedures were approved by the Human Research Ethics Board of the Federal University of Santa Catarina.
The following inclusion criteria were observed: (i) age between 18 and 35 years; (ii) active in strength training for at least the past 6 months; and (iii) in good general health and having no pain, injuries, or surgeries in the shoulder, elbow, or wrist. Mean § standard deviation (SD) age, height, and weight were 26 § 5 years, 170 § 9 cm, and 69 § 6 kg, respectively. Instruments Elbow flexor torques were measured using a Biodex Multi-Joint System 4 isokinetic dynamometer (System 4 Pro; Biodex Medical Systems, Shirley, NY, USA). Subjects were seated with the back and legs supported and the hip and knee joint at 80 and 90 of flexion, respectively. The dynamometer was oriented at 30 to the chair in the transverse plane. Position and height of the dynamometer and chair were adjusted such that the elbow flexion axis (center of the trochlea and capitulum) was aligned with the axis of the dynamometer arm. The shoulder was positioned at 30 of flexion and 30 of abduction using a goniometer (Goniometer G-20; Arktus, Santa Tereza do Oeste, PR, Brazil). Active submaximal isokinetic elbow flexions were performed to verify that the dynamometer and elbow joint axes remained aligned throughout the entire range of motion. In case of noticeable misalignment, subjects were repositioned until proper alignment throughout the entire active range was achieved. Straps across the thorax, waist, and thigh were used to stabilize subjects. Support to the elbow was provided at the distal part of the humerus using the appropriate accessory provided by the Biodex system. No straps were used around the arm. A tape was placed on the padding of the support accessory marking the edge of the olecranon. This position was maintained throughout all trials. Full extension was defined as 0 . The forearm was kept in the supinated position by adjusting the grip lever accordingly. Surface EMG of the biceps brachii was recorded using a Miotool 400 EMG system (Miotec Equipamentos Biom edicos Ltda., Porto Alegre, RS, Brazil). Bipolar electrodes (interelectrode spacing 20 mm) were placed at one-third along a line from the cubital fossa to the medial acromion, and a ground electrode was placed on the mid-third of the clavicula. Before placing the electrodes, the skin was shaved and cleaned with alcohol. Skin impedance was checked (Multimeter Fluke 115, Everett, WA, USA) and pronounced acceptable if it was less than 5 kV. If it was greater than 5 kV, the electrodes were removed and skin preparation continued. The software Miotec Suite 1.0 (Miotec Equipamentos Biom edicos Ltda., Porto Alegre) was used for acquiring synchronized data of EMG, torque, and position at 1000 Hz. NME was defined as the ratio between torque and EMG for each contraction. Weight and height of the participants were measured using a scale and a stadiometer (Soehnle Professional; Soehnle, Backnang, Germany). Procedures After being fully informed about the experimental protocol, subjects took part in a standardized warm-up, which consisted of 15 submaximal concentric elbow flexion and extension repetitions at 120 /s from 40 to 140 . Following warm-up, the elbow angle of maximal isometric flexor force was identified. Subjects were asked to perform 2 maximum isometric voluntary contractions (MIVCs) at 80 , 90 , and 100 of elbow flexion. The force-generating potential at each angle was then compared for the three angles by accounting for the modeled changes in moment arm at these positions. 
If the estimated force was greatest at either 80° or 100°, another 2 contractions were performed at 70° or 110°, respectively, and this procedure was repeated until the position of maximal force was uniquely identified. This position was then defined as the plateau of the FxL relationship (α_plateau). From this position, an angle corresponding to the ascending limb (α_plateau + 30°) and an angle corresponding to the descending limb (α_plateau − 40°) of the FxL relationship were identified. At each of the three angles (α_plateau − 40°, α_plateau, α_plateau + 30°), maximum voluntary contractions were performed to achieve the aims of this study. These sets of maximal contractions included (i) a purely isometric contraction, (ii) 2 maximum isometric contractions preceded by a "long" active stretch of 40°, (iii) 2 maximum isometric contractions preceded by a "short" active stretch of 20°, and (iv) a final isometric contraction. The order of the stretch amplitudes (ii and iii of the series) and of the 3 elbow angles corresponding to the different regions of the FxL relationship was randomized. For the ascending limb testing, only the short active stretch could be performed because most subjects could not reach the initial angle needed for the long stretch ((α_plateau + 30°) + 40°). For the isometric reference contractions, subjects were instructed to exert maximum contractions for a period of 5 s. For the stretch-isometric test contractions, subjects were instructed to perform a maximum isometric contraction for 2 s at the initial length, followed by the active stretch, and then by an isometric contraction of 3 s. All stretches were performed at 90°/s. Verbal encouragement and visual feedback of the elbow flexor torque were provided to subjects during all contractions. To minimize fatigue, a rest period of at least 2 min between contractions was strictly enforced and was extended upon a subject's request. All contractions were repeated once, and the contraction with the higher torque was used for analysis. The difference between the initial (i) and final (iv) MIVC was less than 10% for all subjects. Elbow angle, torque, and EMG were exported to MATLAB (R2013a; MathWorks, Natick, MA, USA), and a processing routine was used to analyze the data. Torque data were low-pass filtered using a fourth-order, recursive Butterworth filter with a cut-off frequency of 10 Hz and were normalized to the MIVC torque at the plateau of the FxL relationship (1 s window around the peak torque). Passive torques after all contractions were also measured (1 s window, 1 s after deactivation), and the passive torques from the purely isometric reference contractions were subtracted from the corresponding stretch-isometric test contractions. EMG data were band-pass filtered at 20–500 Hz and rectified. A linear envelope EMG was then calculated using a low-pass, recursive Butterworth filter at 6 Hz. EMG data were normalized to the corresponding MIVC EMG at each angle. Mean normalized EMG, torque, and NME were determined for the isometric reference contractions and for the isometric contractions preceded by the long and the short stretch at each FxL region. Each set of comparisons (long stretch vs. isometric; short stretch vs. isometric) was synchronized, and a 500 ms window starting 1 s after the end of the stretch was analyzed. Onset of contractions was identified when the elbow flexor torque exceeded the baseline noise by 3 SDs.
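The torque and EMG processing steps above can be illustrated with a minimal Python sketch. This is not the authors' MATLAB routine: the array names, the zero-phase (forward-backward) filter application, and the baseline window used for onset detection are assumptions made for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sampling rate in Hz, as reported for the acquisition system


def lowpass(x, fc, fs=FS, order=4):
    # Fourth-order recursive Butterworth low-pass; zero-phase application (filtfilt) is an assumption.
    b, a = butter(order, fc / (fs / 2), btype="low")
    return filtfilt(b, a, x)


def bandpass(x, lo, hi, fs=FS, order=4):
    nyq = fs / 2
    hi = min(hi, 0.99 * nyq)  # 500 Hz equals the Nyquist frequency at 1 kHz, so cap just below it
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, x)


def process_trial(torque_raw, emg_raw, mivc_torque, mivc_emg):
    """Return mean normalized torque, mean normalized EMG envelope, and NME for one analysis window."""
    torque = lowpass(torque_raw, fc=10.0)                              # 10-Hz low-pass on torque
    emg_env = lowpass(np.abs(bandpass(emg_raw, 20.0, 500.0)), fc=6.0)  # rectify + 6-Hz linear envelope
    torque_n = np.mean(torque) / mivc_torque                           # normalize to plateau MIVC torque
    emg_n = np.mean(emg_env) / mivc_emg                                # normalize to angle-specific MIVC EMG
    return torque_n, emg_n, torque_n / emg_n                           # NME = normalized torque / normalized EMG


def contraction_onset(torque, baseline_samples=500):
    """First sample where torque exceeds the baseline mean by 3 SDs (baseline length is assumed)."""
    base = torque[:baseline_samples]
    above = torque > base.mean() + 3.0 * base.std()
    return int(np.argmax(above)) if above.any() else None
```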
For comparisons across stretch amplitudes and the different regions of the FxL relationship, all primary outcomes were normalized and expressed as percentage differences from the reference contractions. The (negative) mechanical work during stretch was calculated as the torque–angular displacement integral, normalized by the maximum torque (MIVC). This normalization was performed to account for strength differences among subjects, and the resulting unit of work is MIVC·° (MIVC represents the peak torque produced at the plateau of the FxL relationship).

Data analysis
SPSS Version 23.0 software (IBM Corp., Armonk, NY, USA) was used to analyze the data. Mean ± SD values across subjects were calculated for normalized torque, EMG, and NME. Normality of the data was tested using the Shapiro–Wilk test. A 2-factor repeated measures analysis of variance in a linear mixed model approach was used to analyze the effects of stretch (no stretch (isometric reference contraction), short stretch, and long stretch) and the potential interaction between stretch and the regions of the FxL relationship (ascending, plateau, and descending regions) for each outcome measure independently (torque, passive torque, EMG, NME). Significant interactions or main effects were followed up with multiple comparisons between conditions using Bonferroni corrections. Following this initial analysis, a second 2-factor repeated measures analysis of variance was used to analyze the differences in the percentage changes of EMG, torque, and NME from the isometric reference conditions between stretch amplitudes (short, long) and the regions of the FxL relationship (ascending, plateau, and descending regions). The relationship between the normalized work performed during active stretching and the percentage changes in EMG and torque that could be associated with an increase in NME was tested for each stretch amplitude and each FxL region using a one-tailed Pearson correlation. An α level of 0.05 was used for all statistical tests.

Results
Descriptive statistics for torque, EMG, and NME across conditions are shown in Table 1. A significant interaction between the effect of active stretch and the FxL curve region was observed for torque (F(3, 71.764) = 4.290, p = 0.008; Table 1). The elbow flexor torque was significantly decreased following active stretching compared to the isometric reference contraction at the plateau (but not at the ascending and descending limbs) of the FxL relationship (F(2, 33.700) = 9.052, p = 0.001) for both the short (p = 0.008) and the long stretch magnitude (p = 0.002; Fig. 1). There was no significant interaction between active stretch and region of the FxL relationship for EMG or NME (F(3, 49.701) = 1.067, p = 0.372 and F(3, 51.937) = 0.371, p = 0.745, respectively). There was a significant reduction in EMG (F(2, 48.666) = 13.965, p < 0.001) and an increase in NME (F(2, 48.469) = 8.585, p = 0.001) for contractions preceded by an active stretch compared to the isometric reference contractions. These differences were found for every stretch amplitude and for all regions of the FxL relationship (Table 1). There was no significant main effect of stretch on passive torque, regardless of the region of the FxL relationship or the amplitude of the stretch (p = 0.889).
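As a companion to the Data analysis description above, the sketch below shows one way to compute the normalized (negative) stretch work and the one-tailed Pearson correlation in Python. The per-subject numbers are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import pearsonr


def normalized_stretch_work(torque_nm, angle_deg, mivc_nm):
    """(Negative) work during the active stretch, expressed in MIVC·° units.

    torque_nm and angle_deg sample the stretch phase only; the work is the
    torque-angular displacement integral divided by the plateau MIVC torque."""
    work_nm_deg = trapezoid(torque_nm, angle_deg)  # N·m·°; negative when the joint extends against flexor torque
    return work_nm_deg / mivc_nm


# Hypothetical per-subject values: normalized stretch work (MIVC·°) and
# percentage change in torque relative to the isometric reference.
work = np.array([-15.2, -22.4, -18.9, -25.1, -12.7])
torque_change_pct = np.array([-14.0, -6.5, -10.2, -3.8, -16.9])

r, p_two_sided = pearsonr(work, torque_change_pct)
p_one_sided = p_two_sided / 2  # one-tailed test, assuming the observed sign matches the hypothesized direction
print(f"r = {r:.3f}, one-tailed p = {p_one_sided:.3f}")
```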
There was a significant main effect of FxL region on the percentage change in torque (F(2, 48.205) = 29.905, p < 0.001) and the percentage change in EMG (F(2, 45.478) = 4.394, p = 0.018) between the contractions preceded by active stretch and the isometric reference contractions (Fig. 2A, B). There was no difference between stretch amplitudes for the percentage changes in torque (F(1, 57.820) = 0.286, p = 0.595) or EMG (F(1, 57.524) = 0.123, p = 0.727). The percentage increase in NME observed for all contractions preceded by an active stretch was not statistically different across stretch amplitudes (F(1, 50.930) = 1.054, p = 0.309) or regions of the FxL curve (F(2, 45.656) = 2.230, p = 0.119). NME increased on average by 19% for contractions preceded by a stretch compared to the corresponding isometric reference values (Fig. 2C).

Table 1. Mean ± SD values of elbow flexor isometric torques, EMGs (biceps brachii), and NME (ratio between torque and EMG) for purely isometric contractions (reference) and for contractions preceded by a long (40°) and a short (20°) active stretch at the ascending limb, the plateau, and the descending limb of the FxL relationship.

The normalized work performed during the stretch was −18.3 ± 7.9 MIVC·°, −17.2 ± 2.8 MIVC·°, and −19.6 ± 3.4 MIVC·° for the short active stretches performed at the ascending limb, the plateau, and the descending limb of the FxL relationship, respectively. For the long stretches, the normalized work performed was −35.6 ± 7.8 MIVC·° and −35.4 ± 6.6 MIVC·° for the plateau and the descending regions of the FxL relationship. With the exception of contractions on the ascending limb, a normal distribution of the work performed by the different subjects during the stretch was observed. There was a significant moderate correlation between the normalized negative work and the percentage change in torque from the isometric reference for the short (r = −0.544, p = 0.015) and the long (r = −0.491, p = 0.027) stretch amplitudes at the plateau of the FxL relationship (Fig. 3). Work was not significantly correlated with changes in EMG or NME.

Discussion
The primary aim of this study was to determine the role of RFE produced by different stretch magnitudes on the torque-generating potential and NME of the elbow flexors at the plateau, ascending, and descending limbs of the FxL relationship. In general, torque, EMG, and NME following active stretches differed from the values observed for the purely isometric reference contractions. Although the detailed effects of active stretch on torque and EMG differed between regions of the FxL relationship, NME increased in a similar manner at all muscle lengths. There was substantial interindividual variability in the torque-generating potential in response to active stretching, which was partly accounted for by differences in (negative) work capacity between subjects. RFE has been observed in isolated muscle preparations on the ascending limb, plateau, and descending limb of the length–tension relationship. 1,7,8,40 We did not find enhancement in maximum torque-generating potential at any muscle length or stretch amplitude. Instead, we observed a reduction in torque potential following active stretching at the plateau of the FxL relationship, whereas torque potential was unaffected on the ascending or descending limbs.
Owing to the limited number of studies on in vivo RFE, and the different experimental conditions (amplitude of stretch, final muscle length, and muscle analyzed), comparisons across studies with regard to a possible dependence of RFE on the regions of the FxL relationship cannot be drawn easily. 10 Previous studies on RFE at different muscle lengths opted to control for joint angles, neglecting the region of the FxL relationship on which the muscles might be working. 33,41,42 Considering that the FxL relationship may shift with respect to the joint angle across individuals, 43 and may shift depending on the level of activation, 44,45 conclusions about the dependence of RFE on the FxL relationship are limited. Following active stretching, the torque–angle curve was flatter than the corresponding curve for the purely isometric reference contractions (Fig. 1). Previous studies on human knee flexor and extensor muscles also revealed a joint angle dependence of voluntary contraction-induced RFE. For example, Shim and Garner 42 found RFE for the knee extensors at 100° of knee flexion (long length) but not at 40°. They also reported significant RFE for the knee flexors at 10° of knee flexion (long length) but not at 70°. Similarly, Power et al. 41 found greater RFE at a long (100° of knee flexion) compared to a short muscle length (60° of knee flexion). It seems, therefore, that the flattening of the torque–angle curve for muscles in the enhanced state is a result observed in human muscles other than those investigated here. There was no increase in torque-generating potential after stretch in the elbow flexors in our study. At first glance, this might be interpreted as a lack of RFE in voluntary elbow flexor contractions. However, there was a mean increase in NME of 19% for the isometric contractions preceded by stretch, suggesting that some mechanism enabled greater force production for a given activation cost. 19,37,38 Isometric force and EMG are not always related in a linear manner, and the greatest NME for the elbow flexors appears to occur at about 50% of MIVC. 46 Therefore, an increased NME might be partly accounted for by this nonlinearity. The approximately 20% decrease in EMG observed at the plateau corresponds to a decrease in torque of approximately 15%, if we adopt the force–EMG relationship for submaximal contractions by Doheny et al. 46 This reduction is slightly greater than the 11% observed experimentally in our study. On the ascending and descending limbs of the FxL relationship, a small but nonsignificant increase in torque-generating potential was observed for contractions preceded by stretch despite a significant reduction in EMG. This is a clear indication of an increase in NME for these 2 regions, independent of any nonlinearity that may exist between force and EMG. Since NME in the enhanced state was similar across all regions of the FxL relationship, our initial hypothesis of a greater RFE on the descending limb compared to the other regions of the FxL relationship is not supported. Rather, our results indicate that RFE may contribute to a more equal torque capacity of the elbow flexor muscles across their working range than is possible for purely isometric contractions not preceded by stretch, and that forces can be produced with less activation, and thus less metabolic cost, than otherwise possible.
This finding suggests a variety of possible functional roles for voluntary muscle contractions following active muscle stretching beyond the generally accepted notion of an increased force potential. It is generally accepted that RFE increases with the amplitude of muscle stretching in isolated muscle preparations. 1,47 Bullimore et al. 3 showed that RFE increased with stretch amplitude only when the extra stretch occurred on the descending limb of the FxL relationship. In our study, the comparisons between stretch amplitudes (40° and 20°) were always within a given region of the FxL relationship. Interestingly, however, the effect of stretch amplitude on EMG and on torque did not depend on the FxL region (Fig. 2). Lee and Herzog 18 reported that the effect of stretch amplitude on RFE differed between the voluntarily and the electrically activated adductor pollicis muscle. Specifically, for the electrically stimulated muscle, they found the expected increase in RFE with increasing stretch magnitude. However, for the voluntary contractions, RFE increased from the smallest to the intermediate stretch amplitude, but then decreased from the intermediate to the greatest stretch amplitude, leaving the RFE for the shortest and longest stretches the same. Therefore, there is precedent in human voluntary contractions for stretch amplitude not affecting the amount of RFE or NME. Despite the lack of a statistically significant difference between the 20° and 40° stretches for torque, EMG, or NME in our study, visual inspection of the percentage changes in torque and EMG between the 2 stretch amplitudes shows a trend that deserves attention. Although an apparently greater reduction in EMG was observed after the long stretch than after the short stretch, isometric torque after the long stretch tended to be higher than the isometric torque after the short stretch. This apparently paradoxical effect led to a 6% higher NME for the long compared to the short stretch condition. Although not statistically significant, this increase in NME with increased stretch magnitude may provide some interesting functional advantages. RFE during human in vivo voluntary contractions is not as consistent as that observed in isolated muscle preparations. We found a high intersubject variability for all outcome measures. Variable results have also been found by others, who sometimes group people into responders (i.e., individuals who responded to the active stretch with a significant increase in torque) and nonresponders (i.e., individuals who did not present any residual force/torque enhancement after active stretch). 10,19,37,48,49 Although grouping subjects according to outcome should be done with caution (since a purely random result would give some positive and some negative torque changes), the strategy used in these studies highlights the need for a better understanding of the individual characteristics that may favor RFE during voluntary contractions. In our study, the differences in elbow torques between the isometric contractions following active stretch and the purely isometric reference contractions at the plateau region of the FxL relationship were shown to depend on an individual's ability to produce work during the stretch. Mechanical work during muscle stretch has been suggested as a possible predictor of RFE in isolated extensor digitorum longus and soleus muscles from mice. 50
However, in contrast to this previous study, in which work was changed by changing the magnitude of stretch, we found that a significant linear relationship between work and torque change exists for a given stretch magnitude. Subjects who produced greater relative eccentric work had a smaller loss in torque after stretch. Paternoster et al. 51 found RFE for multijoint leg extensions in 8 of the 16 subjects in their sample and observed that these 8 subjects produced peak forces during stretch that exceeded the force exerted in the isometric reference contraction. However, 2 of the nonresponders also had forces during stretch that exceeded the isometric reference value, and there was no correlation between the peak force during stretch and RFE. We conclude from the results of our study that RFE contributes to a greater NME of the elbow flexors on the plateau and the ascending and descending limbs of the FxL relationship. The increase in NME occurs without a significant increase in maximal torque-generating potential but is primarily caused by a reduction in EMG for similar torques, thus resulting in a greater torque/EMG ratio. Furthermore, the torque and EMG changes that ultimately result in the enhancement of NME differ in a characteristic manner based on the region of the FxL relationship. Our results suggest that (i) RFE contributes to "flattening" the elbow flexor torque–angle relationship, favoring torque production at lengths where the purely isometric torques are substantially reduced, and (ii) RFE contributes to a reduction in the energy cost of torque production during isometric contractions over the entire operating range. This study has limitations that need to be kept in mind when interpreting our results. First, maximum voluntary torque production is complex in nature and depends on multiple factors, such as motivation and familiarization. All subjects in our study were accustomed to performing elbow flexor contractions against high resistance, because only subjects with a minimum of 6 months' experience in strength training were included. In addition, instructions were given consistently across trials by two researchers, and any fatigue- or motivation-related effect was controlled for by repeating the reference contraction from the beginning of testing at the end of each series of dynamic contractions. Second, FxL regions were identified for all elbow flexors as a group, whereas different muscles within an agonist group may present different FxL curves. Finally, the EMG of the biceps brachii muscle was used in our study as a measure of elbow flexor activation. The biceps brachii muscle has been shown to contribute most to the elbow flexor torque 52 and was found to represent well the EMG of the remaining elbow flexor muscles during isometric contractions (e.g., Doheny et al. 46). We acknowledge that EMG is merely a proxy for activation, but one that is generally accepted in human studies and one that probably works well for the steady-state situations analyzed here.
Molecular variation among virulent and avirulent strains of the quarantine nematode Bursaphelenchus xylophilus

Bursaphelenchus xylophilus is an emerging pathogenic nematode that is responsible for a devastating epidemic of pine wilt disease worldwide, causing severe ecological damage and economic losses to forestry. Two forms of this nematode have been reported, i.e., forms with strong and weak virulence, commonly referred to as virulent and avirulent strains. However, the pathogenicity-related genes of B. xylophilus are not sufficiently characterized. In this study, to find pathogenesis-related genes, we re-sequenced and compared the genomes of two virulent and two avirulent populations. We identified genes affected by genomic variation, and functional annotation of those genes indicated that some of them might play potential roles in pathogenesis. The performed analysis showed that both avirulent populations differed from the virulent ones by 1576 genes with high-impact variants. Demonstration of genetic differences between virulent and avirulent strains will provide effective methods to distinguish these two nematode virulence forms at the molecular level. The reported results provide basic information that can facilitate development of a better diagnosis for B. xylophilus isolates/strains presenting different levels of virulence and a better understanding of the molecular mechanisms involved in the development of pine wilt disease (PWD). Electronic supplementary material: The online version of this article (https://doi.org/10.1007/s00438-020-01739-w) contains supplementary material, which is available to authorized users.

Introduction
Pine wilt disease (PWD) is one of the most serious global tree diseases affecting coniferous forests around the world. It is caused by the pine wood nematode (PWN), Bursaphelenchus xylophilus (Nematoda: Aphelenchoididae) (Mota and Vieira 2008; Jones et al. 2013). This nematode is listed as a major plant quarantine organism by most countries in the world (Evans et al. 1996; Futai 2013). B. xylophilus is native to North America, where it causes only limited damage to native pines, although non-native species are profoundly affected (Jones et al. 2013). PWN was introduced into Japan during the early twentieth century and subsequently spread to other East Asian (China, Taiwan, South Korea) and European (Portugal, Spain) countries (Mota et al. 1999; Robertson et al. 2011; Futai 2013; Ding et al. 2016; Filipiak et al. 2017). The nematodes are transmitted to healthy trees by longhorn beetles of the genus Monochamus during maturation feeding of the insect (Togashi and Shigesada 2006). Once B. xylophilus enters the pine tree, it migrates through the resin canals, destructively feeding on the parenchymal cells (Shinya et al. 2013). In many regions worldwide, pine is among the most important tree genera for the forest industry, and the rapid spread of this disease has become a major problem (Mota and Vieira 2008; Vicente et al. 2012; Espada et al. 2016). Every year thousands of trees displaying symptoms of PWD are felled and removed in the affected areas. B. xylophilus causes the death of host trees in less than one year after infection under suitable environmental conditions, and the risk of this problem is likely to increase due to climate change (Pereira et al. 2013; Qiu et al. 2013).
Earlier research provided evidence that B. xylophilus could effectively compete for the host tree and vector insects with the native species B. mucronatus, and reduce or displace it from its original locations (Futai 1980; Cheng et al. 2009). However, in our in vitro and in vivo research, conducted in plate cultures of Botryotinia fuckeliana and in pine seedlings, respectively, a range of variation in the results of such competition was observed among particular isolates of these two competing nematodes (Tomalak and Filipiak 2013). Kiyohara and Bolla (1990) reported great variation in the virulence level of B. xylophilus collected throughout Japan. Mortality of pine seedlings ranged from 0 to 100%, and the virulence of nematode isolates from a single stand of pines also varied significantly. With respect to pathogenicity, two major groups of strains, i.e., virulent and avirulent, have been reported in B. xylophilus (Bolla and Boschert 1993; Akiba et al. 2012). To date, many studies have reported that the virulence level of B. xylophilus is associated with its reproductive potential and population growth ability (Kiyohara and Bolla 1990; Aikawa et al. 2003; Wang et al. 2005; Aikawa and Kikuchi 2007; Shinya et al. 2012; Qiu et al. 2013; Filipiak 2015). The generation time of virulent isolates cultured on B. fuckeliana at 25 °C was shorter than that of avirulent isolates, and their rate of population increase was faster than in the avirulent isolates (Wang et al. 2005). In the in vivo situation, when virulent and avirulent isolates were separately inoculated into Pinus thunbergii seedlings, the nematode density of the virulent population increased with time after inoculation, while the avirulent isolate never reproduced (Kiyohara and Bolla 1990; Aikawa and Kikuchi 2007). Thus, the results of the earlier studies suggest that virulence is closely associated with the nematode reproductive potential, irrespective of in vitro or in vivo conditions. In PWD, the mechanism of pathogenicity is complicated and involves many factors, including host pines, nematodes, beetles, fungi, bacteria, environmental conditions, and other factors (Shinya et al. 2013). A draft genome sequence of B. xylophilus was reported in 2011 (Kikuchi et al. 2011). Its availability should facilitate additional studies on B. xylophilus pathogenicity. To date, however, only a few transcriptome-type studies (Espada et al. 2016; Li et al. 2019, 2020; Hu et al. 2020) and only one genome-wide analysis (Palomares-Rius et al. 2015) have been reported. The main objective of this study was to reveal genetic differences related to the pathogenicity of B. xylophilus. It was achieved by resequencing four B. xylophilus strains followed by comparison and analysis of the complete nuclear genome sequences of two virulent (BxPt67OL and BxMad24C) and two avirulent (C14-5 and OKD-1) populations of this nematode.

Nematode strains
In the reported study, two virulent and two avirulent strains of B. xylophilus were examined. The origin and ecological features of the nematode strains are summarized in Table 1. The virulence of the strains had been demonstrated by previous inoculation tests on pine seedlings (Aikawa et al. 2003; Filipiak 2015). Prior to the examination, all strains were cultured on B. fuckeliana/potato dextrose agar at 25 °C for ca. 2 weeks. Propagated nematodes were stored in 20 μl of distilled H2O at −20 °C until being used for subsequent DNA extraction.
Genomic DNA preparation and sequencing
Genomic DNA of nematodes was isolated with a QIAamp DNA Micro Kit (Qiagen, Hilden, Germany) according to the protocol provided by the manufacturer. DNA concentration and purity were measured using a NanoDrop spectrophotometer (Thermo Fisher Scientific Inc., MA, USA). 50-100 ng of DNA was used to construct 350-bp libraries, using a TruSeq Nano DNA Library Prep Kit (Illumina, San Diego, USA) with the standard protocol. Libraries were sequenced on an Illumina HiSeq X Ten, according to the manufacturer's recommended protocol, to produce 150-bp paired-end reads (Genomed, Warsaw, Poland).

Data analysis
The quality of the obtained raw sequencing data was first assessed with FastQC (Andrews 2010; https://www.bioinformatics.babraham.ac.uk/projects/fastqc). Subsequently, adapters and low-quality bases were removed with Trimmomatic in paired-end mode (Bolger et al. 2014).

Variant effect analysis
Variant consequences for genes were evaluated with the Ensembl Variant Effect Predictor (VEP) (McLaren et al. 2016). Then, genes with at least one high-impact variant that were present in at least one virulent population but absent from both avirulent populations were selected. Similarly, genes with high-impact variants present in at least one avirulent population but absent from the virulent ones were selected for further analysis. The calculations for each set of genes were made using in-house Linux tools and Python scripts. This approach allowed for the identification of gene sets, which were further examined with Gene Ontology enrichment analysis.

GO-enrichment analysis
The WormBase database was used to determine the nematode gene annotations related to their functions (Gene Ontology Consortium 2004; https://geneontology.org/docs/ontology-documentation). Subsequently, using the topGO package from the R environment (https://bioconductor.org/packages/release/bioc/html/topGO.html), GO-enrichment analyses were performed for both of the above gene sets. The topGO package is designed to facilitate semi-automated enrichment analysis for Gene Ontology (GO) terms. Each GO category (molecular function, MF; cellular component, CC; biological process, BP) is tested independently. GO enrichment analysis allows testing for the over-representation of GO terms. In our study, we used two methods, i.e., classic and elim, to identify the over-representation of GO terms (Alexa and Rahnenfuhrer 2018). The classic method analyzes GO identifiers in a standard way, in isolation from their hierarchical structure, and therefore often reports more general GO terms as statistically significant. The elim method was designed to be more conservative than the classic method, so the p values returned by the elim method are expected to be lower bounded by the p values returned by the classic method. Moreover, it tries to take into account the hierarchy of GO terms and focus on the most specific ones (Alexa et al. 2006; Alexa and Rahnenfuhrer 2018). In our analysis, we focused on statistically significant GO terms, indicated by p values below the standard significance level alpha = 0.05 and with more than 5 significant genes for both the Fisher classic and Fisher elim methods (except for several GO terms with ≤ 5 significant genes but substantially lower p values (p < 0.01), which are also shown in Table 3). Genes characterized by those GO terms demonstrated a higher incidence in the analyzed gene set than would be expected at random.
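A minimal Python sketch of the two selection and testing steps described above is shown below. It is not the authors' in-house pipeline: the file names, the assumption that VEP was run with tab-delimited output containing "Gene" and "IMPACT" columns, and the contingency-table counts in the Fisher test are all illustrative.

```python
import csv
from scipy.stats import fisher_exact


def high_impact_genes(vep_tsv):
    """Collect genes carrying at least one HIGH-impact variant in one population."""
    genes = set()
    with open(vep_tsv, newline="") as fh:
        # skip "##" metadata lines; keep the header row for DictReader
        rows = csv.DictReader((ln for ln in fh if not ln.startswith("##")), delimiter="\t")
        for row in rows:
            if row.get("IMPACT") == "HIGH":
                genes.add(row["Gene"])
    return genes


virulent = {p: high_impact_genes(f"{p}_vep.tsv") for p in ("BxPt67OL", "BxMad24C")}
avirulent = {p: high_impact_genes(f"{p}_vep.tsv") for p in ("C14-5", "OKD-1")}

# Genes hit in at least one virulent population but in neither avirulent population, and vice versa.
virulent_only = set.union(*virulent.values()) - set.union(*avirulent.values())
avirulent_only = set.union(*avirulent.values()) - set.union(*virulent.values())
print(len(virulent_only), "virulent-specific genes;", len(avirulent_only), "avirulent-specific genes")

# Classic Fisher over-representation test for a single GO term (illustrative counts):
# rows = gene in the selected set / not in the set; columns = annotated with the term / not annotated.
table = [[12, 1564], [88, 8774]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```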
Sequencing
In order to investigate the genetic differences between virulent and avirulent B. xylophilus populations, four representative strains with different pathogenic traits were resequenced. After sequencing and filtering of low-quality bases, ca. 37 million read pairs were obtained for each of the virulent populations (BxPt67OL and BxMad24C), and ca. 37 million and ca. 35 million read pairs for the avirulent populations (C14-5 and OKD-1, respectively) (Table 2). The obtained sequences were deposited in the NCBI Sequence Read Archive (SRA) under accession number PRJNA630377. After mapping to the B. xylophilus reference genome, ca. 3.7 Gb and ca. 9.9 Gb of paired-end data were obtained for the virulent populations BxPt67OL and BxMad24C, respectively. For the avirulent populations C14-5 and OKD-1, ca. 6.6 Gb and ca. 1.6 Gb of paired-end data were obtained, respectively. Overall, 32.70% and 87.29% of the reads of the virulent populations (BxPt67OL and BxMad24C, respectively) and 57.67% and 15.01% of the reads of the avirulent populations (C14-5 and OKD-1, respectively) were successfully aligned to the B. xylophilus genome. The variable levels of mapping of the examined populations to the B. xylophilus reference genome were due to the presence of random bacterial sequences in the extracted DNA. Unmapped reads were analysed with the Kraken program (Wood and Salzberg 2014). The performed analysis showed that the virulent populations BxPt67OL and BxMad24C contained 34.85% and 10.46%, while the avirulent populations C14-5 and OKD-1 contained 27.93% and 56.53%, of reads assigned to bacteria, respectively. Unfortunately, the share of bacterial data resulted in less available data derived from the nematodes and, consequently, lower coverage, especially for the population OKD-1.

Sequence variants
The GATK HaplotypeCaller program determined the sequence variants based on the paired-end reads mapped to the B. xylophilus reference genome. The genomes of B. xylophilus were found to be highly variable compared to the reference Ka4C1 strain; 1,161,206 and 1,049,303 variant positions were detected for the virulent BxPt67OL and BxMad24C populations, and 2,156,336 and 1,926,667 for the avirulent C14-5 and OKD-1 populations, respectively (Table 2). The results of the analysis showed that the avirulent populations had 1.8 times more variants than the virulent ones. The B. xylophilus reference genome corresponds to a virulent population; therefore, the detection of fewer variants for the virulent populations is as expected. The Ensembl Variant Effect Predictor (VEP) program revealed the presence of variants with significant consequences for protein function (i.e., high-impact variants). Significant differences were found between the virulent and avirulent populations. While the virulent BxPt67OL and BxMad24C populations had 5142 and 4102 high-impact variants compared to the Ka4C1 strain, the avirulent C14-5 and OKD-1 populations had 8220 and 9090 high-impact variants, respectively. Moreover, 2324 (BxPt67OL) and 1838 (BxMad24C) genes with high-impact variants were identified in the virulent populations, and 3578 (C14-5) and 3831 (OKD-1) genes were identified in the avirulent populations, respectively (Table 2; Table ESM1). Both analysed virulent populations differed from the reference strain by 469 genes with high-impact variants, while both avirulent populations differed by as many as 1576 genes (Table ESM1).

GO-enrichment analysis
For the above genes, functional annotation was performed with topGO against the WormBase database to assign them to B.
xylophilus molecular functions, biological processes, and cellular components. Currently, 17,704 B. xylophilus genes are annotated in WormBase (WS277 release). In the presently reported research, nematode gene annotations were determined for 10,438 genes (Table ESM2). For the avirulent populations, the GO analysis distinguished 6 biological processes, 19 molecular functions, and 1 cellular component, while for the virulent populations, it distinguished 1 biological process, 3 molecular functions, and 1 cellular component (Table 3). For each GO category, the genes associated with the molecular functions, biological processes, and cellular components were also identified and are presented in Tables ESM3, ESM4, ESM5, ESM6, ESM7 and ESM8. Among the biological processes, proteolysis, protein retention in ER lumen, sodium ion transmembrane transport, sulfate transmembrane transport, NAD biosynthetic process, and DNA conformation change were significantly enriched in the avirulent populations, whereas only the oxidation-reduction process was enriched in the virulent populations. For the avirulent populations, the most highly represented Gene Ontology (GO) terms in the molecular function category were related to endopeptidases (aspartic-type endopeptidase activity, metalloendopeptidase activity, calcium-dependent cysteine-type endopeptidase activity, peptidase activity, and peptidase activity acting on L-amino acid peptides) and other peptidases (serine-type peptidase activity, carboxypeptidase activity). The other molecular functions belonged to sodium channel activity, ER retention sequence binding, lipase activity, secondary active sulfate transmembrane transporter activity, ATPase activity coupled to movement of substances and coupled to transmembrane movement of substances, hydrolase activity acting on acid anhydrides and catalyzing transmembrane movement of substances, primary active transmembrane transporter activity, P-P-bond-hydrolysis-driven transmembrane transporter activity, phospholipase activity, flavin adenine dinucleotide binding, and serine hydrolase activity, whereas for the virulent populations oxidoreductase activity, heme binding, and cofactor binding were recorded. For the cellular component category, only integral component of membrane was significantly enriched in the avirulent populations, and in the virulent populations only membrane was found (Table 3).

Discussion
The virulence mechanisms of B. xylophilus are complicated and involve many factors. It has been suggested that B. xylophilus may use different genes or pathways to overcome the pine antinematodal response (Sommer and Streit 2011; Santos et al. 2012; Figueiredo et al. 2013). One of the approaches to elucidating these mechanisms is whole-genome sequencing. In our study, four representative populations of B. xylophilus, two each of the virulent and avirulent phenotypes, originating from Japan and Portugal, were selected. All those populations presented different phenotypic or ecological traits, and they had already been used in several previous studies (Aikawa et al. 2003; Filipiak 2015). The examined populations were closely related, and they shared similar characteristics within each virulence category, including faster development throughout the life cycle in the virulent populations (BxPt67OL and BxMad24C) and slower development in the avirulent populations (C14-5 and OKD-1). Our genome-wide analysis revealed that the virulent populations of B.
xylophilus have a high level of genome variation (BxPt67OL, 1.56% and BxMad24C, 1.41% compared to the reference genome). The differences were larger when we compared the avirulent populations to the reference genome (C14-5, 2.91% and OKD-1, 2.60%). The obtained data are in line with the genomic inter-strain differences reported by Palomares-Rius et al. (2015). The level of diversity in the B. xylophilus genome was thus high, but a similar situation has also been observed in other hyper-diverse organisms, including nematodes (e.g., Caenorhabditis brenneri or C. remanei) (Cutter et al. 2013; Palomares-Rius et al. 2015; Ding et al. 2016). Previous studies indicated that organisms with a large population size, short generation time, and small body size were more likely to be hyper-diverse (Cutter et al. 2013). B. xylophilus presents all the aforementioned characteristics. It is also possible to find some sequence polymorphisms among different B. xylophilus isolates originating from a local area (Ding et al. 2016). Our genome analysis also revealed the presence of bacterial sequences in the sequencing data. It has previously been confirmed that B. xylophilus is associated with a range of bacterial species which might form an important component of the infection process (Vicente et al. 2012; Espada et al. 2016). A number of random bacteria are also frequently present in nematode laboratory cultures. However, in this study we did not focus on this issue. In the presently reported study, the Gene Ontology (GO) analysis identified 6 enriched terms involved in biological processes, 19 in molecular functions, and 1 in cellular components for the avirulent populations. In contrast, for the virulent populations, it identified 1 enriched term involved in biological processes, 3 in molecular functions, and 1 in cellular components. All these enriched terms were statistically significant (p < 0.05). The conducted study revealed that the examined avirulent populations contained more high-impact variants, and all GO categories (i.e., biological process, molecular function, and cellular component) were significantly more enriched, especially those involved in molecular functions (Table 3). Earlier research confirmed that effective antioxidant ability is of critical importance in establishing the infection (Shinya et al. 2013; Vicente et al. 2015). Our study revealed that oxidoreductase activity (GO:0016491) and the oxidation-reduction process (GO:0055114) were significantly enriched in the virulent populations (Table 3). A high representation of these two terms may indicate a higher oxidative stress tolerance in the virulent populations. In the early stages of invasion, B. xylophilus has to overcome host defense mechanisms, such as strong oxidative stress. It was previously confirmed that, according to functional annotation, some of the cell wall degradation-related genes were significantly upregulated. The oxidoreductase and hydrolase genes are considered to be key factors that allow B. xylophilus to invade its host (Qiu et al. 2013). Only successful, virulent nematodes are able to tolerate the plant defenses and further migrate and proliferate inside the host tree. However, it was also shown that different virulent B. xylophilus populations exhibited different tolerance to oxidative stress, which is a crucial component of host defense mechanisms (Vicente et al. 2015; Ding et al. 2016). Previous research suggested that catalases of highly virulent B.
xylophilus were crucial for the nematode's survival under prolonged exposure to oxidative stress in vitro (Vicente et al. 2015). Moreover, recent studies indicated that twelve antioxidant proteins were identified in the secretome of B. xylophilus. The secreted antioxidant enzymes would play an important role in B. xylophilus self-protection from oxygen free radicals in the pine tree (Kikuchi et al. 2011; Shinya et al. 2013; Ding et al. 2016). To ensure successful infestation, B. xylophilus needs to break through the pine host defense system. Reactive oxygen species (ROS) are considered to be the first line of defense in plants. Moreover, suppression subtractive hybridization has revealed that the pathogenesis-related genes and cell wall-related genes induced by reactive oxygen species are crucial in the defense against PWN infestation (Nose and Shiraishi 2011; Hirao et al. 2012). Breaking down the ROS defense also facilitates ongoing and persistent infestations by weakening the resistance of host plants. ROS oxidize DNA, proteins, and lipids, which causes damage to organelles and inhibits cell functions in plant attackers. Among ROS, hydrogen peroxide (H2O2) in particular is an important factor for the regulation of host-nematode interactions and partly governs the success or failure of the disease. Many reports suggest that nematode surface coat proteins play various crucial roles in host-parasite interactions, including regulation of microbes' adhesion to the nematode's body surface, lubrication, elicitation of the host defense responses, and modulation to help counter host defense responses (Spiegel and McClure 1995; Gravato-Nobre and Evans 1998; Shinya et al. 2010). B. xylophilus also produces surface coat proteins to help protect itself from ROS (Shinya et al. 2010, 2013). The identified surface coat proteins contained a potential regulator of ROS production and ROS scavengers. These functions seem to be essential for establishing the infection and causing the sudden death of host pine trees by PWD (Shinya et al. 2010). Other studies revealed that, after inoculation of virulent populations of B. xylophilus into resistant Japanese black pine (P. thunbergii), more frequent accumulation of phenolic compounds around the cortex resin canals was observed. It was suggested that this accumulation was a very effective defense against infection because it restricted migration of B. xylophilus (Ishida et al. 1993). Hirao et al. (2012) assessed the difference in expressed sequence tag (EST) transcript diversity of activated defense genes and the differences in the timing and magnitude of expression of these genes between resistant and susceptible P. thunbergii trees following PWN inoculation. In susceptible trees, pathogenesis-related genes and antimicrobial-related genes were rapidly induced to high levels within 1 day post-inoculation. In contrast, a moderate defense response mediated by pathogenesis-related protein expression, followed by significant upregulation of cell wall-related genes induced by ROS, was a very effective defense against PWN infection (Hirao et al. 2012). In similar research, Liu et al. (2017) studied gene expression profiles of resistant and susceptible masson pines (P. massoniana) after inoculation with B. xylophilus. The resistant and susceptible phenotypes had different defense mechanisms in response to B. xylophilus. Detailed gene expression analysis suggested that terpenoids were prominent defense compounds against this nematode.
Moreover, the higher activity of ROS-scavenging enzymes was effective in inhibiting the death of resistant masson pines after PWN inoculation (Liu et al. 2017). An earlier study revealed that genomic variants that introduce frameshift or stop codon mutations (i.e., high-impact variants) could have serious effects on protein structures and functions (Palomares-Rius et al. 2015). High levels of possible loss of function may be related to proteolysis, which includes metalloendopeptidase activity, aspartic-type endopeptidase activity, and cysteine-type endopeptidase inhibitor activity (Palomares-Rius et al. 2015). In our research, metalloendopeptidase activity (GO:0004222) and aspartic-type endopeptidase activity (GO:0004190) were also over-represented in the avirulent populations. It was confirmed that enrichment of the GO term aspartic-type endopeptidase activity as a down-regulated function led to low feeding activity (Palomares-Rius et al. 2015; Tanaka et al. 2019). Moreover, another specific activity, hydrolase activity (GO:0016820), was also over-represented in possible loss-of-function variation. The fact that such variants are enriched in these genes may suggest that regions with such expansions are also subject to changes in terms of point mutations or small insertions and deletions (Palomares-Rius et al. 2015). It is known that peptidase families have a diverse range of biological roles in nematodes, such as moulting, development, food digestion, and parasitism. Also in B. xylophilus, each gene in the expanded peptidase family has a distinct role. This is a clear example of gene family evolution by gene duplication and functional divergence (Tanaka et al. 2019). Nematode peptidases, which hydrolyse polypeptides or proteins, participate in a wide range of molecular, biological, and cellular processes, such as digestion of host proteins, moulting, and embryonic development of the egg (Tanaka et al. 2019). The genome sequence of B. xylophilus revealed a large number of predicted peptidase genes (808 peptidase genes), representing the highest gene number among characterised nematode genomes. They include aspartic (106), metallo (230), cysteine (142), serine (170), threonine (13), unknown_32 (8), and unknown_69 (136) genes (Kikuchi et al. 2011; Tanaka et al. 2019). Moreover, according to Shinya et al. (2013), the GO analysis clearly showed an expansion of peptidases in the secretome of B. xylophilus. In particular, a large number of cysteine and aspartic peptidases were detected. Based on these results and the previous study, one plausible explanation for the phenotypic differences between the virulent (high virulence and fast life cycle) and avirulent (low virulence and slow life cycle) populations is likely to be a lack of activity of effectors or digestive proteases in the latter. This could lead them to display low ingestion of nutrients and provoke a delay in development. Moreover, the effects of unique variations in specific genes could also be important in explaining the different ecological traits of the avirulent populations (Palomares-Rius et al. 2015). Our findings showed that the level of diversity in the B. xylophilus genome is high and comparable with that in other hyper-diverse organisms. We identified genes affected by genomic variation, and functional annotation of those genes indicated that some of them might have potential roles in pathogenesis. This comparative genome study with geographically distant B.
xylophilus populations can facilitate understanding of the complex evolutionary and epidemic history of this pathogen. We believe that demonstrating genetic differences between virulent and avirulent populations will provide effective methods to distinguish these two nematode virulence forms at the molecular level. We hope that the presented data will facilitate a better understanding of the molecular mechanisms of pine wilt disease and the diagnosis of this nematode species. In turn, this may help to develop effective strategies for the control of B. xylophilus. However, further research is needed to determine the specific roles of these genes in pathogenesis.
Shifts in Foliage Biomass and Its Vertical Distribution in Response to Operational Nitrogen Fertilization of Douglas-Fir in Western Oregon

Nitrogen (N) fertilization is a commonly applied silvicultural treatment in intensively managed coast Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco var. menziesii) plantations. Field trials were established in a randomized complete block design by Stimson Lumber Company (Gaston, Oregon) to test the economic viability of N fertilization on their ownership and to better understand Douglas-fir growth responses. The 23 stands comprising the trials were Douglas-fir dominated, had a total age of 16–24 years, had been precommercially thinned, and had a density of 386–1021 trees ha−1. Fertilizer was applied aerially at a rate of 224 kg N ha−1 as urea during the 2009–2010 dormant season. In the dormant season of 2016–2017, seven growing seasons following application, 40 trees were felled and measured with the objective of assessing crown attributes and aboveground allometrics. Branch-level foliage mass equations were developed from 267 subsampled branches and were applied to the 40 felled sample trees, on which the basal diameter and height of all live branches were measured, allowing estimation of both the total amount of foliage and its vertical distribution. A right-truncated Weibull distribution was fitted to the data, with the truncation point specified as the base of the live tree crown. The resulting tree-level parameter estimates were modeled as functions of tree-level variables. Stand-level factors not explicitly measured were captured through the use of linear and nonlinear mixed-effects models with random stand effects. Fertilization resulted in more total crown foliage mass in the middle crown-third and caused a downward shift in the vertical distribution of foliage, with implications for feedback responses in crown development and photosynthetic capacity. Defining the morphological responses of Douglas-fir crowns to nitrogen fertilization provides a framework for studying influences on stand dynamics and should ultimately facilitate improved site-specific predictions of stem-volume growth.

Introduction
Net primary production (NPP) of forest stands can be estimated through quantification of ecophysiological processes, including mechanisms that influence the quantity and photosynthetic efficiency of foliage (e.g., [1]). NPP is simply the difference between gains accrued through net photosynthesis and losses to construction and maintenance respiration, with the net difference measurable as dry plant matter [2]. When attempting to quantify growth responses to spatially varying environmental conditions, changing climate, or alternative silvicultural regimes, identifying the effects on fundamental ecophysiological mechanisms can provide unique insights, particularly if these mechanisms can be integrated with empirical relationships into "hybrid" growth models [3]. If environmental variables relevant to fundamental ecophysiological processes can be measured, estimated, or forecast in a cost-effective manner for operational stands, prediction of tree growth under a range of alternative management activities and environmental conditions should be enhanced [4,5].
Under this scenario, any anthropogenic manipulation of resource availability (e.g., through thinning or fertilization) could be accounted for at a mechanistic level to predict growth responses, facilitating more accurate predictions where spatial or temporal variation in environmental conditions can be adequately characterized or predicted. Reliable quantification of foliage amount and its vertical distribution is a key component in the hybrid modeling approach because these attributes are driving factors for the amount of intercepted photosynthetically active radiation and are closely linked to growth distribution and changes in allometric relationships [6,7]. Silvicultural treatments such as thinning or fertilization have been shown to influence foliage production [8–10], foliage distribution [11–13], and related gross crown dimensions [9–14]. Quantifying effects of silvicultural treatments on foliage quantity and distribution therefore incorporates important feedback mechanisms between silvicultural treatments, growth responses, crown and other morphological changes, and associated ecophysiological processes. Recognition of these mechanisms has motivated efforts to model the vertical distribution of foliage by fitting continuous probability distributions to foliage mass (or area) binned by vertical crown segment [11]. Several distributions have been explored, including the Weibull [12,13,15–21], normal [22], generalized logistic [23], and beta (β) [11,24–26]. The β-distribution offers the advantage of extreme flexibility over a domain formed by the closed interval [0, 1] [11]. Foliage distributions can vary among trees by level of species shade-tolerance [27,28] and by factors such as social position and stand density, particularly in even-aged stands [11,13,20]. Incorporating the influence of silvicultural treatments on foliage distribution into growth models quantifies the recognized link between foliage distribution and the vertical distribution of stem increment on individual trees [29,30]. If quantified with sufficient accuracy, representation of this mechanism in growth models can refine predicted responses of stem form and total stem volume. Models of vertical foliage distribution on individual trees can also facilitate simulation of canopy processes, including net photosynthesis over various time scales and net primary production (NPP) on annual cycles. These production measures can supplement measured or predicted site indices in hybrid growth models (e.g., [31,32]). Empirical models have traditionally been limited to conventional inventory data, making them highly dependent on a static index of site productivity [7]. In theory, potential response to silvicultural treatments such as fertilization should be dependent on resource availability as determined by soil and climatic variables. Availability of resources other than nutrients, in particular water, often dominates long-term forest productivity [33], controls interannual variability in forest productivity [34], and even affects the ability of trees to respond to fertilization [35,36]. The importance of water limits the efficacy of nutrient availability alone for predicting the growth response to fertilization. Improvements in characterizing site quality through soil attributes and seasonal weather patterns, and the ecophysiological responses to these conditions, are therefore potentially advantageous for predicting growth responses to silvicultural manipulations.
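To make the distribution-fitting idea concrete, the sketch below fits a beta density (one of the candidate distributions mentioned above, not the right-truncated Weibull ultimately used in this study) to foliage mass binned by relative depth into the crown. The bin masses are hypothetical values invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

# Relative depth into the crown: 0 = tree tip, 1 = crown base; ten equal segments.
edges = np.linspace(0.0, 1.0, 11)
mass = np.array([0.2, 0.6, 1.1, 1.6, 2.0, 2.1, 1.9, 1.5, 1.0, 0.5])  # kg of foliage per segment (illustrative)


def neg_log_lik(params):
    a, b = np.exp(params)                            # log-parameterization keeps both shape parameters positive
    seg_prob = np.diff(beta.cdf(edges, a, b))        # probability mass assigned to each crown segment
    return -np.sum(mass * np.log(seg_prob + 1e-12))  # grouped-data (binned) log-likelihood, weighted by foliage mass


fit = minimize(neg_log_lik, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
mode = (a_hat - 1.0) / (a_hat + b_hat - 2.0)         # relative depth of peak foliage density (valid for a, b > 1)
print(f"alpha = {a_hat:.2f}, beta = {b_hat:.2f}, modal relative depth = {mode:.2f}")
```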
Commercial application of nitrogen (N) fertilizer is a common silvicultural tool for increasing volume production of coast Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco var. menziesii) plantations. Regional fertilization trials have demonstrated that N is the most limiting nutrient to growth in the Pacific Northwest [37–41], but growth response is extremely variable among different stands, sites [40], and geographic regions. This reality has compelled researchers to identify physiologically relevant site and stand attributes that may indicate site capacity for potential growth response. These site and stand attributes have included slope, elevation, forest floor carbon to nitrogen ratio (C:N), and relative stand density [42]. Water availability has also been shown to influence the magnitude of growth response to fertilization (e.g., [35,43,44]), probably due to coupling of water and nutrient uptake, as well as the constraint imposed by water availability on the carrying capacity of the site for leaf area [45,46]. Transpiration rates and carbon assimilation by forests are influenced by foliage amount and its vertical distribution [47–49]. The following crown responses to fertilization are therefore relevant: (1) an increase in total foliage quantity [8,9]; (2) a shift in the vertical distribution of foliage [24,26,50,51]; and (3) an increase in water use efficiency, defined as stem growth per unit of transpired water [44]. Accurate quantification of the relationship between foliage dynamics and growth responses to fertilization should ultimately lead to reliable identification of responding sites. The core of this predictive capacity should be the magnitude and duration of growth responses in Douglas-fir plantations [26]. The goal of this study was to examine the influence of nitrogen fertilization on foliage quantity and its vertical distribution on individual Douglas-fir trees, in part to facilitate future hypothesis tests regarding effects of these foliage attributes on corresponding changes in stem form on fertilized and unfertilized control trees [52]. The specific objectives of this analysis were to: (1) develop branch-level prediction equations for total branch foliage mass and test for differences between fertilized and unfertilized control branches; (2) develop tree-level prediction equations for total foliage mass and test for differences between fertilized and unfertilized control trees; and (3) develop models for describing the relative vertical distribution of foliage within individual Douglas-fir tree crowns and test for fertilization effects on vertical foliage distribution. This study did not allow examination of differences among crown classes or site types, but tree-level covariates and tree or stand random effects were introduced to account for these sources of variation and isolate the effects of nitrogen fertilization. More broadly, this study aimed to augment the extensive, historical fertilization studies in the Pacific Northwest implemented by the Nutrition Project of the Stand Management Cooperative (SMC) and its predecessor, the Regional Forest Nutrition Research Project (RFNRP), specifically to further the understanding of fundamental physiological mechanisms driving growth response of intensively managed Douglas-fir plantations to nitrogen fertilization.
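The branch-level prediction equations with random stand effects described in these objectives can be illustrated with a small Python sketch using statsmodels. The model form (log foliage mass as a linear function of log branch diameter, relative depth into crown, and a fertilization indicator, with a random stand intercept) and the synthetic data are assumptions for illustration, not the equations fitted in this study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 267  # number of subsampled branches, as in the study; all values below are simulated

df = pd.DataFrame({
    "stand": rng.integers(1, 11, n),         # 10 sampled stands
    "fert": rng.integers(0, 2, n),           # 1 = fertilized, 0 = control
    "bd_mm": rng.uniform(5, 40, n),          # branch basal diameter (mm)
    "rel_depth": rng.uniform(0.05, 1.0, n),  # relative depth into crown (0 = tip, 1 = crown base)
})
# Simulated foliage mass with a power-law dependence on branch diameter plus noise.
df["foliage_g"] = np.exp(1.0 + 2.2 * np.log(df["bd_mm"]) - 0.4 * df["rel_depth"]
                         + 0.1 * df["fert"] + rng.normal(0, 0.3, n))
df["log_foliage"] = np.log(df["foliage_g"])
df["log_bd"] = np.log(df["bd_mm"])

# Linear mixed-effects model: fixed effects for branch size, position, and fertilization;
# a random intercept for stand absorbs unmeasured stand-level factors.
model = smf.mixedlm("log_foliage ~ log_bd + rel_depth + fert", data=df, groups=df["stand"])
result = model.fit()
print(result.summary())
```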
Study Area

In 2009, Stimson Lumber Company (SLC) installed a fertilization field trial to test the economic viability of accelerating growth by operational application of nitrogen (N) fertilizer. SLC distributed the installations across their ownership in the northern Coast Range of western Oregon during the dormant season of 2009-2010 (Figure 1). Implementation of the field trial conformed to a randomized complete block design with twenty-three operational stands serving as blocks. Stands were selected for this study based on relative uniformity (in terms of stand composition and structure) to minimize experimental error, and on geographic distribution to represent the geographic range of SLC ownership. Stands selected for this study met the following criteria: (1) Douglas-fir comprised ≥90% of the stand basal area; (2) all stands were precommercially thinned to a residual density of 565-865 trees per hectare; (3) age from seed germination ranged from 15 to 25 years; (4) top height growth indicated a 50-year site index from 34 to 37 m [53]; and (5) access was sufficient for expedient remeasurement of permanent plots.

Each stand was divided into two parts, or experimental units, with fertilization randomly assigned to one (hereafter referred to as the "fertilized" treatment) and no fertilization assigned to the other (hereafter referred to as the "control" treatment). Operational terrain largely dictated partitioning of stands (blocks) into the two experimental units. The fertilized experimental units were designed to be sufficiently large to facilitate further splitting into smaller experimental units that could receive a later second application of fertilizer; the broader objective of testing multiple fertilizer applications was outside the scope of the analysis presented here. Nitrogen was applied aerially as pelletized urea (46% N) at a rate of 224 kg N ha−1 to all experimental units designated for fertilization during the 2009-2010 dormant season. Application was completed when weather conditions were cool (i.e., <21 °C) and when precipitation was expected within a day or two to reduce the risk of volatilization. The expected precipitation events also maximized the probability that urea prills caught in tree crowns would reach the ground via wind and rain. A total of 562 hectares were fertilized in this study, with fertilized and control experimental units ranging in size from 4 to 44 hectares and from 2 to 16 hectares, respectively.

Data Collection

Permanent plots were established within each experimental unit in the 2009-2010 dormant season (just before fertilization), including three plots in the control experimental units and three or six plots in the fertilized experimental units. Plots were circular and covered 0.04 ha (11.35 m radius). All trees with diameter at breast height (D) > 5.0 cm within the plots were numbered and measured for D (nearest 0.25 cm), total height (H; nearest 0.03 m), and height-to-crown base (HCB; height to the lowest live branch; nearest 0.03 m). Any visible damage or deformity was recorded along with the measurements. Trees within the plots were remeasured during the 2016-2017 dormant season, providing growth responses for the first seven-year period following application. During the 2016-2017 dormant season, two trees from each experimental unit were felled in each of ten stands randomly selected from the twenty-three constituting the fertilization trial, yielding a 40-tree sample of felled trees.
One felled sample tree had D equal to the quadratic mean diameter (QMD) for the pooled plot data in a given experimental unit, and one felled sample tree had D equal to the 90th percentile of the diameter distribution for the pooled plot data in a given experimental unit (DBH90). The latter tree was selected to represent a combination of the top height or site tree component of the stand and future crop trees at rotation age. All felled sample trees were free from visually obvious damage (i.e., bear or porcupine scarring, broken top, fork) and were located in an area with local stand structure (e.g., density) consistent with that on the permanent plots in the same experimental unit. Selected trees were visible from permanent plot boundaries (i.e., within approximately 15 m). Felled sample tree measurements were consistent with the standing tree measurements in the permanent plots, so included D (nearest 0.25 cm), H (nearest 0.03 m), and HCB (nearest 0.03 m) (Table 1). Live crown length (CL; nearest 0.03 m) was calculated as the distance between tree tip and crown base (H − HCB), and crown ratio (CR; %) was calculated as the ratio of CL to H, expressed as a percentage (i.e., 100 × (CL/H)). Height to point-of-insertion (BH; nearest 0.01 m) and basal diameter (BD; nearest 0.1 mm) of every live branch from the base of the live crown to the tree tip were measured. We defined basal branch diameter as the diameter at a distance from the bole approximately equal to one branch diameter, to avoid basal bulges that vary in length proportional to branch diameter. Basal diameters of some branches that were damaged during tree felling were estimated based on the approximate taper of intact branches. Each tree crown was divided vertically into thirds of equal length. Branches were numbered consecutively in each crown third, starting with 1 at the base of the third.
Three branches were collected randomly from each of the crown thirds, using a random number generator to identify each candidate branch. The sampling protocol required that two branches have a basal diameter >15 mm and one branch a basal diameter >5 mm but <15 mm. Additionally, the total branch length (BL; nearest 1 cm) and distance from tree bole to first live foliage (LLF; nearest 1 cm) on the branch were recorded. Branches were transported to the lab, foliage was separated from branchwood by age class (maximum of six annual cohorts), and each age class was placed in a separate container for drying. After drying for at least 48 hours at 70 °C, each foliage age class was weighed (nearest 0.1 g), providing an oven-dry foliage mass for each sample branch by needle age class. To assess the seven-year responses of foliage production to fertilization, total branch foliage mass (BFM) was expressed as the summed mass of all age classes (Figure 2). Branch height ranged from 2.88 to 24.42 m, basal diameter ranged from 5.0 to 45.6 mm, total branch length ranged from 18 to 430 cm, distance from bole to first live foliage on a branch ranged from 0 to 175 cm, and foliage mass ranged from 3.2 to 750.2 g (Table 2).

Branch-Level Foliage Mass

Based on model forms used in previous studies, numerous linear, log-transformed [55], and weighted and unweighted nonlinear [11,20,25] models were tested to develop a branch-level equation (Equation (1)) for predicting total foliage mass (g or kg) from branch-level variables, where relHACB was the relative height above crown base (1.1 − (DINC/CL)), IFert was an indicator variable for branches on fertilized trees (1 if fertilized; 0 otherwise), BFM was total foliage mass, BD was basal branch diameter, LLF was distance from tree bole to closest live foliage on the branch, BL was total branch length, DINC was absolute depth into crown (tree height − branch height), and RDINC was relative depth into crown ([tree height − branch height]/crown length). A value of 1.1 was used as a surrogate for maximum relHACB to ensure a nonzero value for the lowest live branch (DINC = CL), thereby avoiding computational problems in the log-transformed and nonlinear regressions [55]. Model errors were assumed additive, random, normal, and independent, with variances proportional to a power of branch basal diameter (BD) to be determined from the data. To account for site/stand effects previously observed in the relationship described by Equation (1) (e.g., [11]), stand-level random effects were included in preliminary modeling and assessed by trial and error [12,13,25,27]. Preliminary models also tested for differences in foliage mass between treatments by including an indicator variable for fertilized tree branches (i.e., IFert); however, no significant differences were found. Models were fitted in R using the base lm function for linear models and using the lme or nlme functions within the nlme package for linear and nonlinear mixed-effects models [56,57]. All model parameter estimates were tested for significant difference from zero at α = 0.05. Final model selection was based on the distribution of the residuals, biological relationships of the various predictors, and the following goodness-of-fit criteria considering the alternative weights (BD−m; m = 0, 1, ..., 4): likelihood ratio tests, Akaike's Information Criterion (AIC; [58]), generalized Rg² [59], and unweighted and weighted root mean square error (RMSE and wRMSE, respectively).
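As a minimal sketch of the data preparation described above, the derived branch-level covariates (DINC, RDINC, and relHACB) can be computed directly from the field measurements. The data frame and column names below (branches, H, HCB, branch_ht) are hypothetical placeholders, not objects from the original analysis.

```r
# Minimal sketch (hypothetical column names): derive the branch-level
# covariates defined in the text from the felled-tree field measurements.
library(dplyr)

branches <- branches %>%
  mutate(
    CL      = H - HCB,           # live crown length (m)
    CR      = 100 * CL / H,      # crown ratio (%)
    DINC    = H - branch_ht,     # absolute depth into crown (m)
    RDINC   = DINC / CL,         # relative depth into crown (0 = tip, 1 = crown base)
    relHACB = 1.1 - DINC / CL    # relative height above crown base (1.1 surrogate maximum)
  )
```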
The latter three criteria were defined as follows:

$$R_g^2 = 1 - \frac{\sum_{i=1}^{N}\left(Y_i - \hat{Y}_i\right)^2}{\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)^2}$$

$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(Y_i - \hat{Y}_i\right)^2}$$

$$wRMSE = \sqrt{\sum_{i=1}^{N} w_i\left(Y_i - \hat{Y}_i\right)^2}$$

where Y_i was the measured foliage mass (g) of a given branch i (i = 1, 2, ..., N; N = 267); Ŷ_i was the model-predicted foliage mass for branch i; Ȳ was the average mass of all foliage sample branches; and w_i were the normalized weights. Normalized weights were estimated as the weight for each observation estimated from the variance function divided by the sum of weights for all observations. For example, the normalized weights for a power variance function using branch basal diameter (BD) as the covariate were computed as

$$w_i = \frac{\hat{w}_i}{\sum_{j=1}^{N} \hat{w}_j}, \qquad \hat{w}_i = BD_i^{-2t}$$

where ŵ_i was the weight estimated for branch i from the power variance function of basal diameter (BD_i), and t was the power variance coefficient optimized from the data.

Total Crown Foliage Mass

Using the best performing branch-level foliage mass equation, foliage mass was predicted for all live branches on the 40 felled sample trees and summed for an estimate of total crown foliage mass (TFM; kg). Equation (5) was developed for estimating total crown foliage mass from tree-level predictors by testing various weighted and unweighted linear and nonlinear models. Model forms were based on previous studies [11–13,60,61]. To test the hypothesis that, seven years after application, fertilization increased total crown foliage mass on trees with otherwise identical diameter, height, and crown dimensions, an indicator variable (i.e., IFert) was included for fertilized trees. In the general formulation of the model (Equation (5)) with all potential predictor variables, B was tree basal area (m²); R was crown ratio above breast height (CL/(H − 1.37)); HMC was the height to the middle of the crown (m; HCB + CL/2); RHMC was relative height to middle crown (HMC/H); IFert was an indicator variable for fertilized trees (1 if fertilized; 0 otherwise); and D, H, CL, and CR were as described above. The parameters on which random stand effects or tree size (i.e., QMD or DBH90) effects were tested were selected by an iterative process between residual plots and biological expectation. Models were fit in R using the base lm function for linear models and using the lme or nlme functions within the nlme package for linear and nonlinear mixed-effects models [56,57]. All model parameter estimates were considered significantly different from zero if p ≤ 0.05. Significance of the indicator variable (IFert) would suggest a significant fertilization effect, assuming least squares parameter estimates are minimum variance and unbiased [62,63].

Vertical Distribution of Foliage

The vertical distribution of foliage mass was estimated empirically for each of the 40 fully measured felled trees by applying the selected branch-level foliage mass equation to each live branch on each felled tree. All foliage on a given branch was assigned to the height of branch attachment. Several alternative probability density functions were fitted in two different ways to the resulting distribution of foliage between the tip of each tree and the live crown base (HCB). In the first approach, branch foliage mass estimates were assigned to the height of branch insertion into the tree bole and to the corresponding depth into live crown (DINC). In the second approach, branch foliage mass estimates were grouped into DINC bins of equal length.
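The second (binned) approach described above can be sketched in a few lines: predicted branch foliage mass is assigned to the depth of branch insertion and summed within equal-length DINC bins per tree (20 bins, the resolution noted in the next paragraph). The data frame and column names (branches, tree_id, FM_pred) are hypothetical placeholders.

```r
# Minimal sketch (hypothetical column names): empirical vertical foliage
# distribution obtained by summing predicted branch foliage mass within
# 20 equal-length DINC bins per tree crown.
library(dplyr)

n_bins <- 20

foliage_profile <- branches %>%
  group_by(tree_id) %>%
  mutate(
    bin = cut(DINC,
              breaks = seq(0, max(CL), length.out = n_bins + 1),
              include.lowest = TRUE)
  ) %>%
  group_by(tree_id, bin) %>%
  summarise(bin_foliage_kg = sum(FM_pred) / 1000, .groups = "drop")
```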
The probability density functions (PDFs) fitted to the data included the following: (1) a right-truncated Weibull distribution (RTW); (2) an untruncated Weibull distribution (UTW); (3) a Johnson's SB distribution (JSB); and (4) a beta distribution (BET) (see Equations (6)-(8) below). The minima of the distributions were assumed zero (i.e., tree tip), and the maximum values (Johnson and beta) or truncation values (Weibull) were assumed equal to HCB. Preliminary analyses of different combinations of PDF and vertical binning resolution indicated that 20 bins were sufficient, if not preferable. Summing foliage mass within a fixed segment of crown length reduced the noise introduced by inter-whorl branches and variations in branch size within annual shoots of the main stem [12,13,25,64]. The foliage height assumption described for estimating empirical distributions does not account for the branch angle of origin [11] or branch angle of termination. Significant curvature in the primary branch axis, particularly in older branches, often causes these two angles to differ substantially. In addition, young secondary branches off the primary branch are arranged in a generally circular distribution around the branch axis, and likewise for higher order branches, all of which can also be somewhat pendant. In short, it would be very difficult to correct simultaneously for branch angle of origin and branch angle of termination to improve the accuracy of empirical estimates of foliage height; similarly, correcting for the variation in spatial arrangement of higher order branches would be even more difficult. Maguire and Bennett [11] and Xu and Harrington [20] argued that adjusting for branch angle would offer only a slight benefit because estimating the effects of branch angle, branch curvature, and the orientation of higher order branches simultaneously would be very complicated. The net improvement in accuracy is likely to remain low relative to assuming that all foliage on a given branch is held at the level of branch attachment to the main stem. Any bias from this assumption would most likely have the effect of underestimating the height of some foliage, with bias increasing with height on the stem and perhaps reversing in larger trees near the live crown base. Procedures for obtaining maximum likelihood estimates were as described in Weiskittel et al. [12] and Nelson et al. [27], using an expectation/maximization (EM) algorithm modified from Robinson [65] and obtaining initial values for the algorithm using moment-based estimators. In the fitted forms of the PDFs (Equations (6)-(8)), X represented absolute or relative depth into crown (DINC or RDINC); f3(X), f4(X), and f5(X) were relative density of foliage mass per m of crown length for RTW and per unit relative crown length for the JSB and BET distributions; β, η, and Ψ were the Weibull shape, scale, and truncation parameters (Equation (6)); τ and ω were the two JSB shape parameters (Equation (7)); a and b were the two beta shape parameters (Equation (8)); and Γ(x) was the gamma function. Performance of both DINC and RDINC was compared as alternative variables for representing vertical position of foliage in preliminary fitting of the RTW and JSB distributions. Scaling to relative depth into crown (RDINC) facilitated comparison across varying crown lengths [12,13,25,27], as well as fitting of BET, whose domain is [0, 1].
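For reference, standard textbook parameterizations of the three named distributions are given below, with parameter symbols matched to those defined in the text (β, η, and Ψ for the right-truncated Weibull on depth into crown X; τ and ω for the Johnson SB and a and b for the beta, both on relative depth X in [0, 1]). These are offered as a sketch under those assumptions; the exact forms used in Equations (6)-(8) of the original analysis may differ in detail.

$$f_{RTW}(X) = \frac{\dfrac{\beta}{\eta}\left(\dfrac{X}{\eta}\right)^{\beta-1}\exp\!\left[-\left(\dfrac{X}{\eta}\right)^{\beta}\right]}{1-\exp\!\left[-\left(\dfrac{\Psi}{\eta}\right)^{\beta}\right]}, \qquad 0 \le X \le \Psi$$

$$f_{JSB}(X) = \frac{\omega}{\sqrt{2\pi}\,X(1-X)}\exp\!\left\{-\tfrac{1}{2}\left[\tau+\omega\ln\!\left(\frac{X}{1-X}\right)\right]^{2}\right\}, \qquad 0 < X < 1$$

$$f_{BET}(X) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\,X^{a-1}(1-X)^{b-1}, \qquad 0 \le X \le 1$$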
However, the absolute scale performed better and was more biologically interpretable, so it was retained in final model selection (with the exception of BET). After predicting foliage mass (g) from each PDF fitted to each tree, RMSE and mean absolute bias (MAB) were computed as the criteria for selecting the best-fitting PDF. After identifying the best-fitting PDF, equations were developed to predict maximum likelihood estimates of the tree-level PDF parameters from tree-level variables. To test for a fertilization effect on vertical distribution of foliage (seven years after application), a fertilization indicator variable was added to the models. A series of linear and nonlinear fixed-effects models and linear and nonlinear mixed-effects models were evaluated, starting with model forms from previous studies [11–13,25,27]. Inclusion of random effects and procedures for final model selection followed those for the branch-level and total crown foliage mass equations. All distributions and parameter prediction models were fitted in R [57]. Maximum likelihood estimates of distribution parameters and estimated distributions of foliage density and foliage mass were computed with R functions developed by John Kershaw (University of New Brunswick, Fredericton; [66]) and previously applied by Weiskittel et al. [12]. Tree-level parameter prediction models were fitted using the base lm function for linear models and using the lme or nlme functions within the nlme package for linear and nonlinear mixed-effects models [56,57]. All model parameter estimates were tested for statistical significance at α = 0.05. Models for estimating each parameter of the fitted distributions were selected and fitted individually. However, because the set of parameters for a given PDF are contemporaneously correlated, estimates from individual equation fits are inconsistent and inefficient. Therefore, after final models were selected for predicting each of the parameters for each PDF, the models were refitted as a system of equations using iterative three-stage least squares (3SLS; [67]). In the presence of contemporaneous correlations among parameter estimates for a PDF, three-stage least squares leads to consistent and asymptotically more efficient estimates. The 3SLS systems were fitted using the systemfit package in R [57,68], following procedures from Schmidt [69] to obtain 3SLS estimates. Once the three-stage least squares estimates were obtained, performance was evaluated by comparing the RMSE from the system fit (3SLS) to the RMSE from ordinary least squares (OLS). Goodness of fit for the system was evaluated by McElroy's RM² [70]. Statistical significance (i.e., α = 0.05) of the fertilization indicator variable in one or more of the equations of the system would indicate a significant difference in one or more PDF parameter estimates between fertilized and control trees, suggesting a significant fertilization effect on relative vertical distribution of foliage.

Branch-Level Foliage Mass

The best performing model was based on the form presented by Garber and Maguire [25] for three conifer species in central Oregon and by Maguire and Bennett [11] for coastal Douglas-fir. This model form had the highest log likelihood score and Rg², and the lowest AIC and RMSE, relative to alternative models. The final model (Equation (9)) included a random stand effect on the first relative depth into crown (RDINC) term and was weighted using a power variance function of BD to correct for heteroscedasticity.
In the final model (Equation (9)), FMijk was the observed foliage mass (g) for branch i (i = 1, 2, ..., nj) on tree j (j = 1, 2, 3, 4) in stand k (k = 1, 2, ..., 10); BDijk was basal diameter (mm) for the sample branch; RDINCijk was relative depth into the tree crown of the sample branch; α0, α1, α2, and α3 were parameters to be estimated from the data; δ3,k was a random effect of the kth stand; and ε1,ijk was the random error term for the ith branch on the jth tree in the kth stand. Random effects δ3,k and random errors ε1,ijk were assumed to follow a multivariate normal distribution in which δ3 was the 10 × 1 vector of random stand effects; ε1 was the 267 × 1 vector of random branch errors; 0δ3 was the mean vector of random stand effects; 0ε1 was the mean vector of random errors; σδ3²I was the variance-covariance matrix for the random stand effects; and σε1²Σ was the block-diagonal variance-covariance matrix for the random errors, with the block diagonals allowing for potential correlations between branch mass observations within a tree. Zero covariance was assumed between random stand effects and random branch errors.

Branch diameter and relative depth into crown imposed highly significant fixed effects on branch foliage mass, and the random stand effect was also significant (Table 3). Fertilization was not detected to have any marginal effect on the amount of foliage held on a branch of given basal diameter at a given relative depth into crown, apparently because any increase in foliage mass was proportional to branch diameter.

Table 3. Parameters, estimates, standard errors, and p-values for the final branch-level foliage mass model (Equation (9)).

The final model was applied to estimate total branch foliage mass from measured RDINC and BD of all live branches on the 40 felled sample trees. The model indicated that for a given BD, branch foliage mass peaked approximately halfway between crown base and tree tip (Figure 3).

Figure 3. Predicted branch foliage mass (Equation (9)) for all live branches measured from felled sample trees. RDINC = 1 at crown base and RDINC = 0 at tree tip.
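Because Equation (9) itself is not reproduced here, the sketch below assumes a simplified power-exponential functional form purely for illustration. Its purpose is only to show how the two features described above, a stand-level random effect on an RDINC coefficient and a power-of-BD variance function, are specified with the nlme package; the data frame, starting values, and functional form are hypothetical.

```r
# Illustrative only: a simplified stand-in for Equation (9).
# The stand-level random effect enters on the first RDINC coefficient,
# and residual variance is modeled as a power of branch basal diameter.
library(nlme)

fit_branch <- nlme(
  FM ~ a0 * BD^a1 * exp(a2 * RDINC + a3 * RDINC^2),
  data    = branches,
  fixed   = a0 + a1 + a2 + a3 ~ 1,
  random  = a2 ~ 1 | stand,
  weights = varPower(form = ~ BD),
  start   = c(a0 = 0.05, a1 = 2.0, a2 = 1.0, a3 = -2.0)
)
summary(fit_branch)  # fixed effects, variance-function exponent, stand-level SD
```

A fertilization indicator could be screened in the same framework by adding, for example, an IFert term to the fixed part of one coefficient, mirroring the preliminary tests described in the methods.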
Total Crown Foliage Mass

The best performing model for predicting total tree foliage mass was based on forms tested by Maguire and Bennett [11] and Williams et al. [13]. The final model (Equation (10)) was a nonlinear mixed-effects model that included a nested random effect of experimental unit within a given stand. In the final model, TFMjkl was the total foliage mass (kg) for tree j in experimental unit l in stand k (j = 1, 2; l = 1, 2; k = 1, 2, ..., 10); Bjkl was tree basal area (m²); Rjkl was crown ratio above breast height (CL/(H − 1.37)); HMCjkl was height to middle of crown length (m); Hjkl was tree height (m); IFert was a fertilization indicator variable (1 if fertilized; 0 otherwise); β0, β1, β2, β3, and β4 were parameters to be estimated from the data; δ1,kl was the random effect of the lth experimental unit in the kth stand; and ε2,jkl was the random tree error for the jth tree in the lth experimental unit in the kth stand. Random effects δ1,kl and random tree errors ε2,jkl were assumed to follow a multivariate normal distribution in which δ1 was a 20 × 1 vector of random effects for the lth experimental unit in the kth stand; ε2 was a 39 × 1 vector of random tree errors; 0δ1 was the mean vector of random experimental unit effects; 0ε2 was the mean vector of random tree errors; σδ1²Λ was the variance-covariance matrix for the random experimental unit effects; and σε2²Σ was the block-diagonal variance-covariance matrix for random errors, with the block diagonals allowing for potential correlations between foliage mass of trees within an experimental unit. Zero covariance was assumed between experimental unit random effects and tree random errors.

Equation (10) had the highest Rg² and log likelihood score and the lowest AIC and RMSE of all candidate models. One tree was excluded from model fitting because it exerted unusually strong influence on model behavior and appeared as an unrepresentative outlier, perhaps due to an asymmetric crown or measurement errors. All tree-level covariates imposed significant effects on total tree foliage mass (all p < 0.05; Table 4).

Table 4. Parameters, estimates, standard errors, and p-values for the final tree-level foliage mass model (Equation (10)).

Vertical Distribution of Foliage

Based on the root mean square error (RMSE) and mean absolute bias (MAB), performance was similar between the truncated and untruncated Weibull distributions, regardless of whether parameters were estimated from individual branch data or data aggregated into vertical segments of constant length. In contrast, the beta and Johnson distributions performed better with data aggregated by live crown segment, but the higher variances on parameter estimates from all distributions using aggregated data suggested that aggregation could have masked patterns in finer-scale tree-to-tree variability that have been observed in other studies (e.g., [26]). Overall, the best performing PDF was a right-truncated Weibull fitted to unaggregated data, as measured by RMSE, compared to beta and Johnson distributions fitted to the same data (Table 5). Therefore, the right-truncated Weibull was selected as the final model for characterizing vertical distribution of foliage on individual Douglas-fir trees.
To test for potential differences in vertical foliage distribution between fertilized and unfertilized Douglas-fir trees in the SLC study, equations were developed for predicting tree-level parameter estimates obtained from the right-truncated Weibull distributions fitted to the data. Tree-level dimensions were screened in both transformed and untransformed forms, and an indicator variable for fertilization was also tested in various components. Preliminary models suggested that the most influential variables included diameter at breast height (D), a surrogate for stem taper and/or relative crown size (e.g., D/H), total crown foliage mass (TFM), and a direct measure of crown size (e.g., crown length, height-to-crown base, crown ratio). Because total crown foliage mass is predicted (i.e., Equation (10)) and subsequently used as a predictor, the possibility of correlation among errors has to be accounted for. This becomes particularly important when applying this system to an independent tree or tree-list. Therefore, predicted total crown foliage mass (TFM̂) from the final model (Equation (10)) was used instead of observed TFM, in which case TFM̂ can be viewed as a transformation of exogenous tree-level variables. In the final equations (Equations (11) and (12)), η̂j was the expectation/maximization (EM) prediction of the scale parameter in Equation (6) and β̂j was the EM-prediction of the shape parameter in Equation (6) for tree j; TFM̂j was the predicted total crown foliage mass (kg) from Equation (10); ε3 was the residual error for prediction of estimated η̂j; ε4 was the residual error around the predictions of estimated β̂j; and all other variables were described previously. Prior to refitting as a system of equations, the residual plots of the individual equations were examined, and no evidence of increasing or decreasing variance across the range of either right-truncated Weibull distribution parameter was found. The iterative three-stage least squares (3SLS) estimates of all tree-level parameters were significantly different from zero at α = 0.05 (Table 6). The system explained 91.6% and 41.7% of the original variation in the scale and shape parameter estimates, respectively, and the McElroy RM² [70] indicated that the system accounted for 85.6% of the combined variation in estimates of the two right-truncated Weibull parameters. The significance of the parameter estimate on the fertilization indicator (κ3; p-value = 0.024) suggested a significant effect of fertilization on the vertical distribution of foliage.

Table 6. Parameters, estimates, standard errors, p-values, and fit statistics for the models predicting EM-estimates (see text) of tree-level, right-truncated Weibull parameters η (Equation (11)) and β (Equation (12)) fitted by iterative three-stage least squares (3SLS).

To compare the foliage distributions between the control and fertilized treatments, total crown foliage mass and Weibull distribution parameters were estimated based on the average size of the 40 felled sample trees in the study. Relative foliage mass density was then estimated from the right-truncated Weibull PDF (Equation (6)) and plotted on relative depth into crown (RDINC; Figure 4). The distribution of relative foliage mass density peaked at approximately mid-crown, and the peak location was nearly identical between the control and fertilized treatments (0.55 and 0.56, respectively).
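Because the exact right-hand sides of Equations (11) and (12) are not reproduced here, the predictors in the sketch below (predicted TFM, D/H, and the fertilization indicator in the shape equation) are hypothetical placeholders drawn from the candidate variables listed above. The sketch only illustrates how a two-equation parameter-prediction system of this kind can be fitted by 3SLS with the systemfit package, as described in the methods.

```r
# Illustrative 3SLS fit of a two-equation system predicting the EM-estimated
# right-truncated Weibull scale (eta_hat) and shape (beta_hat) parameters.
# Predictor choices are hypothetical placeholders, not Equations (11)-(12).
library(systemfit)

eq_scale <- eta_hat  ~ TFM_pred + D_over_H
eq_shape <- beta_hat ~ TFM_pred + D_over_H + I_Fert

sys_fit <- systemfit(
  list(scale = eq_scale, shape = eq_shape),
  method = "3SLS",
  inst   = ~ D + H + HCB + CR + I_Fert,  # instruments: exogenous tree-level variables
  data   = trees
)
summary(sys_fit)  # per-equation coefficients and system fit statistics
```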
Fertilization induced a slight decrease in relative foliage mass density in the upper crown compared to unfertilized control trees. The final model for total crown foliage mass (Equation (10)) predicted approximately 10.35% more foliar mass (23.30 kg on fertilized trees versus 21.11 kg on unfertilized controls). This additional foliage mass apparently accumulated at mid-crown. The fertilization effect on the shape (skew) parameter β appeared to drive the decreases in relative foliage mass density observed near tree tip and near crown base and the slight downward shift in the mode of the foliage distribution. The increase in total crown foliage mass induced by fertilization drove the relative foliage mass density increases observed at mid-crown. These results confirm that nitrogen fertilization increases total crown foliage mass and shifts the relative vertical distribution of foliage mass on individual Douglas-fir trees.

To examine further the vertical distribution of foliage, the cumulative form of the right-truncated Weibull distribution (CDF) was estimated by dividing the crown into 100 segments of depth into crown, determining the cumulative proportion of foliage mass at a given depth, and multiplying the respective cumulative proportion by the total crown foliage mass. As with relative foliage mass density per unit vertical segment (Figure 4), a direct treatment effect was evident in the cumulative foliage mass for the fertilized and unfertilized tree of average size (Figure 5). Based on the CDF, the increase in foliage mass on fertilized relative to control trees first became apparent just above mid-crown and continued to accumulate to within approximately 15% of crown base. Very slight decreases apparent in the top third of the crown were probably not statistically or biologically significant.
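The cumulative computation described above can be written compactly. Assuming the standard right-truncated Weibull parameterization sketched earlier (depth into crown X, truncation at Ψ = HCB), the cumulative foliage mass down to depth X is the truncated CDF scaled by total crown foliage mass; this is an illustrative formalization, not an equation from the original paper.

$$F_{RTW}(X) = \frac{1-\exp\!\left[-\left(X/\eta\right)^{\beta}\right]}{1-\exp\!\left[-\left(\Psi/\eta\right)^{\beta}\right]}, \qquad \text{cumulative foliage mass}(X) = TFM \times F_{RTW}(X), \qquad 0 \le X \le \Psi$$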
Branch-Level Foliage Mass

The first objective of this study was addressed by developing a robust model for estimating branch foliage mass in the target population and testing for fertilization effects on total branch foliage mass. The weighted, nonlinear mixed-effects model utilizing branch- and tree-level variables proved most effective among alternatives for predicting foliage mass at the branch level. This model form has previously been found the best among alternatives explored for Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) [11], as well as for other conifer species (e.g., grand fir (Abies grandis (Dougl. ex D. Don) Lindl.), lodgepole pine (Pinus contorta Dougl. ex Loud.), and ponderosa pine (Pinus ponderosa Dougl. ex Laws.) [25]; balsam fir (Abies balsamea (L.) Mill.), northern white-cedar (Thuja occidentalis (L.)), eastern hemlock (Tsuga canadensis (L.) Carr.), eastern white pine (Pinus strobus (L.)), and red spruce (Picea rubens (Sarg.)) [12]). As with this and previous models, branch foliage mass increased with increasing branch diameter; however, as branches near crown base start losing foliage due to shading, relative depth into crown becomes an important variable for modeling the decline in foliage mass as branches of large diameter become relegated to a lower canopy position near crown base. This decline in foliage mass is slower in more shade-intolerant species [13,25,71], but photosynthetically active radiation (PAR) eventually falls below the light compensation point and branches can no longer survive. As a result, branch foliage mass increases with branch diameter while simultaneously decreasing with depth into crown. The net effect is an increase in branch foliage mass with increasing distance from tree tip until reaching a peak at mid-crown, below which branch foliage mass decreases toward crown base. This pattern has been consistently observed in other studies and demonstrates the importance of including both branch diameter and some measure of position in crown as predictors [11,12,18,25,31,55,72,73]. The random stand effect interacted with relative depth into crown but was consistent with observed differences among sites in the foliage mass supported by branches of a given diameter and crown depth (e.g., [11]). Although the stands within this study were selected in part based on their similar structure and silvicultural history, the degree to which small differences in age, site quality, stand density, and other potential factors affect the foliage mass carried by branches of similar size and crown position is poorly understood. No significant fertilization effects on foliage mass of branches of a given size were apparent based on either the final model or extensive preliminary model fitting.
The second phase of the first objective therefore resulted in failure to reject the implied null hypothesis that branches on fertilized trees do not hold greater amounts of foliage biomass after accounting for their size and crown position. Fertilization has been demonstrated to increase branch size [9], probably to an extent that is commensurate with any increase in foliage mass and increase in tree size, and to have no effect on the number of whorl or inter-whorl branches per unit length of main stem [74]; hence, the model accounted for the increase in foliage mass of fertilized trees predominantly through the increase in branch diameters. The height within the crown where fertilization increased foliage mass was evident when plotting branch foliage mass on relative depth into crown (Figure 2). It is unclear from the analysis presented whether increases in branch size were correlated with foliated branch length, branch width, or branch length, any of which may indicate finer-scale mechanisms explaining the greater foliage mass on branches with larger diameters. Additional branch-level terms within models have the potential to improve model performance and subsequently account for these differences; however, including some of these variables may also exacerbate potential adverse consequences of multicollinearity [55,63]. Increases in branch diameter were most likely correlated with increases in branch length, and depth into crown should account for the decline in foliated branch length with increasing shading toward the crown interior and at crown base. Because stand density and water relations typically affect the latter, the significant stand-level random effect presumably accounted for some of these relationships.

Total Crown Foliage Mass

The nonlinear mixed-effects model for estimating total tree foliage mass on both fertilized and unfertilized trees addressed the second objective of this analysis. At the tree level, the effect of fertilization proved significant in a final model that performed best among many alternatives for estimating the total foliage mass of the Douglas-fir trees in the study. Fertilization increased total tree foliage mass through the increases in foliage production and branch size observed in previous studies [8,9]. The model incorporated various transformations of tree-level variables previously used by Maguire and Bennett [11] and Williams et al. [13]. Among the strongest predictors were a combination of tree basal area (B), crown ratio above breast height (R), height to mid-crown (HMC), and the ratio of tree basal area to relative height to mid-crown. The product of crown ratio and tree basal area can be viewed as a surrogate for basal area at crown base, with CR serving as a crude taper model for basal area (B). The ratio of tree basal area to relative height to mid-crown served as a similar measure, and the combination agreed with previous observations that crown mass increased with crown length and decreased with increasing height [11–13,61,75]. Crown size for a given DBH is highly variable among stands of varying density and top height, suggesting that these supplemental predictors were highly beneficial [11]. Previous research has shown that diameter at crown base [76], sapwood area at breast height [77], sapwood area at crown base [78], and gross crown dimensions [79] can all serve as strong predictors of total crown foliage mass [11].
The combination of transformed DBH and crown variables used in the final tree-level model in this study likely represented the functional effect of these other direct measures, and perhaps with less measurement error. The second objective of this study implied testing the null hypothesis that fertilization did not affect the amount of foliage held by individual trees, after accounting for diameter, height, and crown size of the tree. The final tree foliage model firmly rejected this null hypothesis. For trees of equal initial size, fertilization resulted in approximately 2.2 kg (~10%) more foliar mass seven years following application. Growth responses to fertilization further indicated that indirect growth responses were driven by positive feedbacks through the associated increase in foliage mass. Although crown lengths on fertilized trees were slightly longer than those on control trees in this study (average crown length: 13.64 m on fertilized vs 13.34 m on control trees), this was offset by a slightly higher crown base (average height to crown base: 7.41 m on fertilized vs 7.14 m on control trees). The fertilization effect through the height-to-crown base term may be attributable to differences in diameter at crown base (e.g., [75]), as the largest increases in inside-bark stem diameter as a result of fertilization have been shown to be at or near this point [52]. Several studies have concluded that short-term (2-5-year) growth response to fertilization was attributable to increased photosynthetic efficiency [9] and/or increased water use efficiency [44]. In contrast, the primary factor in the sustained long-term volume growth responses to fertilization was greater foliar area [8,9,35,44,80]. This increase in foliar area has also been found contingent upon initial foliage amount and light limitations at the time of fertilization [8,9,81]. Similar conditions prevailed in this study, where the largest increases in foliage mass were observed at mid-crown, where favorable light conditions may still exist among the largest, most competitive branches. The individual-tree foliage and corresponding growth responses to fertilization summed to significant growth responses at the stand level as well, due to increases in stand-level foliage mass and faster accumulation of growing stock [52]. The significant interaction between fertilization and height-to-crown base (HCB) may have been of particular importance in this study, given that the positive effect of fertilization on foliage mass increased with increasing initial height-to-crown base. This positive interaction suggested a greater response by increasingly dominant trees, a seven-year direct response to fertilization not captured by tree growth responses in diameter, height, or crown length. The growth response linked to crown base was also consistent with the influence of fertilization on stem form [52], as mentioned previously. The increase in total crown foliage mass following fertilization aligns with observations from previous studies, as fertilization has been demonstrated to increase production of foliage, increase needle and branch size, increase longevity of lower branches, and subsequently increase live crown length [8,9].

Vertical Distribution of Foliage

The third objective of this analysis was addressed by first estimating the empirical distribution of foliage mass over relative depth into crown using the branch-level foliage equations.
Different probability density functions were then fitted to these empirical distributions for each tree, and prediction equations for the parameters estimated in the selected probability density function (PDF) were developed and tested for any fertilization effects. As has been found in previous studies, the Weibull distribution was the best performing distribution for characterizing the vertical distribution of foliage [12,13,15–21]. Right-truncation at crown base provided a more biologically appropriate model than the unmodified Weibull distribution with a domain of [0, ∞). The fact that at least some live foliage remains at crown base probably gave an edge to the right-truncated Weibull distribution over the beta distribution. In the final model, the vertical distribution of foliage depended on DBH, total crown foliage mass, a surrogate for stem form (i.e., H/D), and crown-related variables. As in other studies, crown size and foliar mass have consistently emerged as strong predictors of the parameters controlling the shape of the distribution [11–13]. The combination of tree basal area and the relative position of mid-crown was an effective index of foliage distribution. Meeting the third objective also required testing the null hypothesis that fertilization had no effect on the vertical distribution of foliage mass. In fact, fertilization imposed a significant effect on vertical foliage distribution. The distributional patterns in response to fertilization revealed that the largest increases in foliage mass occurred near mid-crown, where large established branches were still receiving significant amounts of light but could benefit from more. The decrease in foliage mass compared to the control near the upper portion of the stem (Figure 3) could potentially be the result of increased height growth in response to fertilization. Also, the pattern of diminished foliage mass near crown base is driven by increased shading in response to foliage production at higher levels in the crown as the tree grows in height. The acceleration in foliage production stimulated by fertilization almost certainly exacerbated this foliage loss. Also, differences in foliage distribution between control and fertilized trees entailed not only increases in total crown foliage mass, but also increases in the shape (skew) parameter β, suggesting that the mode of the distribution shifted slightly downward with a longer tail toward the tip of the tree. There was no significant effect of fertilization on the scale (kurtosis) parameter. Based on the distribution of branch-level foliage (Figure 2), the relative height of the peak in the distribution is similar between treatments, but the differences in maxima indicate the increase in total crown foliage mass after fertilization. These patterns are best observed by examining the vertical distribution of foliage on an absolute scale for the average size of the twenty felled fertilized trees and the average size of the twenty felled unfertilized trees (Figure 6; Table 1).
Figure 6. Effect of nitrogen fertilization on vertical foliage distribution over depth into crown (DINC; m) as modeled with a right-truncated Weibull distribution (control = solid line; fertilized = dashed line). Vertical foliage distribution is represented as absolute foliage mass (kg). Distributions were standardized to the average tree size by treatment in the study (see Table 1). RDINC = 1 at crown base and RDINC = 0 at tree tip.

Water availability, vapor pressure deficit, and other crown microclimatic factors may be contributing to the responses observed in foliage distributions on these fertilized trees. Gravitational water potentials are more negative [82] and vapor pressure deficits are likely to be higher in the upper canopy, which may limit foliage production. A higher severity of this effect is expected for the more dominant trees and may partially explain the lower foliage mass per unit relative crown length in the upper crown of fertilized trees, despite more than adequate light availability. The largest increases in foliage mass observed at mid-crown may represent a balance between minimizing water loss and maximizing light capture, contingent upon the degree of crown shading [13]. Factors forcing this balance may be particularly strong in the middle and eastern parts of the Oregon Coast Range, where summer water availability is routinely limiting. Determining soil water availability may therefore be critical in determining seasonal and diurnal patterns in stomatal conductance and associated capacity for response to fertilization [35]. Douglas-fir foliage distributions have exhibited an upward shift with increasing crown competition [11]. The results of this study suggest that fertilization may shift the relative distribution of foliage mass in the opposite direction, toward the midpoint of the live crown length. Trees sampled in this study ranged from the middle (i.e., QMD) to upper end (i.e., DBH90) of the diameter distributions, limiting the opportunity to assess effects across the full range of current tree-crown social positions.
This sampling strategy may limit short-term stand-level inference but does focus on the most likely crop trees at rotation age. In many other studies of foliage distribution, sampling often targets the most vigorous trees in the stand [12]. Factors such as stand age [83], stand density [20], and tree relative height [11] influence vertical foliage distribution [12]. Future studies could include trees from the full mid-rotation range of diameters and crown positions to clarify silvicultural options and strengthen inferences on early stand-level responses to silvicultural treatments. However, the major gains in wood production and value recovery will depend on trees near and above the mid-rotation quadratic mean diameter. The described data will facilitate improvements in models for simulating combinations of fertilization with other silvicultural treatments such as thinning. Quantifying fertilization effects on the entire stand canopy and on internal stand dynamics will advance our understanding of both direct and indirect effects of this silvicultural practice, particularly if better quantification of growing stock accumulation emerges and leads to improvements in estimating indirect effects of increased basal area, cambial surface area, crown surface area, and foliage area or mass. Changes to many measures of crown size influence dynamic factors such as potential height growth, crown recession, and probability of mortality, as well as ecophysiological mechanisms. Crown responses therefore play a central role in refining growth and yield models (e.g., ORGANON [84]). Understanding the implications of shifts in foliage distribution in response to fertilization is just one step toward obtaining more precise and reliable predictions of tree growth and stand development over time.

Conclusions

Light capture and the subsequent process of converting fixed carbon to stem-wood require accurate quantification of complex canopy structures, including the spatial distribution of foliage mass and area [85–88]. Our understanding of growth efficiency, commonly defined as stem volume growth per unit leaf area, has improved in recent years [71], and this measure can be used to assess the relationship between tree growth, stand structure, and stand productivity [88–90]. In even-aged stands with a relatively simple stand structure, growth efficiency decreases with increasing total leaf area [71,88,91]. This measure of tree productivity has the potential to become more useful and insightful if some of its variability within and between stands can be predicted from the quantity and distribution of foliage area on subject trees. Pressler's hypothesis states that cross-sectional stem increment at any point along the stem is proportional to the quantity of foliage above that point (as cited in [92]). Given the predicted cumulative distribution of foliage (Figure 4), the cross-sectional increment would be expected to be larger in the lower two-thirds of a given tree crown, and smaller from that point to the tree tip. Putney [52] developed a variable-exponent taper model based on the same destructively sampled trees in this study, finding that the largest growth response of inside-bark diameter to fertilization occurred at mid-stem, and that a slight decrease in stem diameter growth response occurred near the tree tip. The results presented in this study provide evidence for the "cause" of this "effect."
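One way to formalize the link between Pressler's hypothesis and the fitted foliage distribution, offered here as an illustration rather than as an equation from the paper, is to combine the cumulative right-truncated Weibull form sketched earlier with the hypothesis of proportionality:

$$\Delta A(X) \;\propto\; \text{foliage above depth } X \;=\; TFM \times F_{RTW}(X), \qquad 0 \le X \le \Psi$$

where ΔA(X) is the cross-sectional area increment at depth X within the crown. Below the crown base the cumulative foliage is constant at TFM, so under this formalization the hypothesis implies an approximately constant area increment along the branch-free bole, consistent with the taper responses reported by Putney [52].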
Given successful quantification of cumulative foliage distribution predictable from silvicultural treatments and their effects on tree-level variables, a unique opportunity presents itself for refining periodic and cumulative growth predictions by supplementing empirical predictors with more physiologically direct mechanisms. Furthermore, because the foliage analyses presented in this study are directly relevant to photosynthetic capacity and net primary productivity (NPP), an opportunity exists to explore methods of incorporating these responses and relationships into growth and yield models. Hybrid models with empirical and mechanistic elements that combine statistical- and ecophysiological-based approaches offer the potential advantage of increased flexibility and simplicity over purely statistical or process-based models. A working hypothesis is that hybrid models can mechanistically represent silvicultural influences through relatively few, wisely selected physiological principles [93]. These models typically derive stand-level productivity predictions directly from physiological processes to supplement or replace covariates in statistical models (e.g., estimating net photosynthesis to supplement site index; [31,32]). Hybrid modeling has been called the future of forest growth modeling [94] and has been successfully utilized to reduce the mean square error (MSE) of specific outputs in numerous growth models [7,32,95–98]. However, improvements to these models are only achievable through successful identification and quantification of key mechanistic processes, so that the quantity and quality of data required remain operationally feasible. Developing the framework and methodology for quantification of mechanisms is therefore crucial to furthering the development of these models. In this study, fertilization resulted in increased crown foliar mass of individual trees, concentrated at mid-crown (Figure 6). The decrease in foliage near the upper crown and near crown base resulted from growth reallocation to the middle third of the crown and increased shading of the lower crown by the increased foliar mass above. The model presented in this study demonstrated that light availability will be crucial for maximizing foliage production, particularly in the first 3-4 years following fertilization [9], and suggests that available soil water and concurrent stand density regulation are essential. While not novel, the stand selection criterion in this study that required precommercially thinned stands corroborates the notion that stand density must be optimal prior to fertilization (i.e., well below thresholds where density-induced competition may occur, where fertilized trees do not get adequate light, or where the stand has already reached maximum foliage mass carrying capacity). Prior to fertilization, tree crowns must be maintained in a vigorous and well-formed condition, despite inevitable natural disturbances, to ensure maximal capacity for response. Prime candidates for fertilization should be as free as possible from disease and storm damage, exhibit optimal or near-optimal pretreatment density, and possess healthy crowns.
Two-step Conversion of Acetic Acid to Bioethanol by Ethyl Esterification and Catalytic Hydrogenolysis

Introduction

Biomass, especially lignocellulose, is well known as an abundant renewable energy resource. The use of lignocellulose for biofuel production has been highlighted over the past two decades 1),2). Conventionally, bioethanol production from lignocellulose proceeds through a multi-step process, including saccharification, alcohol fermentation and distillation 3). In the saccharification step, acid-catalyzed or enzymatic hydrolysis is usually applied to obtain fermentable saccharides 3),4). However, not only fermentable saccharides, such as glucose, but also unfermentable products, such as pentoses and various saccharide-derived and lignin-derived products, are obtained simultaneously. Besides, carbon dioxide (CO2) is emitted as a by-product during the fermentation by yeast. Therefore, using all of the compounds produced in the saccharification step while reducing CO2 emission is a promising way to establish efficient and environmentally friendly bioethanol production. Our research group has proposed a highly efficient process of bioethanol production from lignocellulose, as shown in Fig. 1 5). First, a two-step, semi-flow, hot-compressed water treatment is applied to produce various water-soluble products from lignocellulose 6)~10). In the next step, the obtained products are anaerobically fermented to acetic acid 11). Then, acetic acid is esterified into ethyl acetate, followed by catalytic hydrogenolysis to produce ethanol. This study focused on the esterification and hydrogenolysis steps, which involve the following reactions:

CH3COOH + C2H5OH → CH3COOC2H5 + H2O (esterification) (1)
CH3COOC2H5 + 2H2 → 2C2H5OH (hydrogenolysis) (2)

The net result can be described as follows:

CH3COOH + 2H2 → C2H5OH + H2O (3)

Therefore, one mole of ethanol is produced from one mole of acetic acid even though ethanol is used as a reactant.
Although we have also developed direct hydrogenolysis of acetic acid to ethanol using a Lewis acid catalyst 12),13), this two-step method is still a good candidate for ethanol production owing to the fast reaction rate of Eq. (2). In industry, this two-step reaction is applied to produce alcohols from carboxylic acids to overcome the low reactivity of the acids toward direct hydrogenation 18), and the most common esterification method uses acid catalysts 18),19). However, to avoid the use of acid catalysts and to develop a more environmentally benign technology, we studied a catalyst-free process with supercritical ethanol (critical temperature and pressure 243 °C/6.4 MPa). We have already demonstrated that the esterification reaction of fatty acids proceeds without catalyst in supercritical methanol 20)~23). For the hydrogenolysis of organic esters, copper (Cu) metal has been widely used 24)~32), and various catalysts are known, such as Cu-Zn 24)~26) and Cu-Cr 24),27)~29). Although significant efforts have been devoted to developing hydrogenolysis catalysts 24)~32), the hydrogenolysis of ethyl acetate into ethanol has not been fully elucidated. Besides, the separation of ethyl acetate and ethanol by distillation is difficult because their boiling points are almost the same. Therefore, complete hydrogenolysis to ethanol would be more critical for bioethanol production. In this study, ethyl esterification of acetic acid was first investigated in supercritical or subcritical ethanol to obtain ethyl acetate. The effects of temperature and the molar ratio of the reactants on the yield of ethyl acetate were examined. Subsequently, the catalytic activities of Cu-Zn- and Cu-Cr-type catalysts for hydrogenolysis of ethyl acetate were evaluated with a flow-type reactor. The effects of temperature and hydrogen pressure were also studied. Based on the obtained results, appropriate reaction conditions for esterification and hydrogenolysis are discussed in order to establish an actual bioethanol production process.

1. Esterification of Acetic Acid into Ethyl Acetate

Acetic acid (extra pure reagent (EP), 99 %, Nacalai Tesque, Inc., Kyoto, Japan) was treated with ethanol (specially prepared reagent (SP), 99.5 %, Nacalai Tesque) in its supercritical or subcritical state to produce ethyl acetate by using the flow-type reactor shown in Fig. 2(a). Ethanol and acetic acid were supplied by high-pressure pumps into a coiled tubular reactor, which was made from Hastelloy HC-276 steel (outer diameter, 3.2 mm; inner diameter, 1.2 mm; length, 84 m). The reactor was placed in a salt bath heated at designated temperatures. After being cooled by a cooling jacket, the product was collected in glass bottles. The pressure inside the reactor was controlled at 20 MPa by a back-pressure regulator. The residence time (reaction time, t) was calculated by dividing the inner volume V of the reactor (95 mL) by the total volumetric flow-rate of the reaction mixture under the reaction conditions, i.e., t = V/F, where F is the combined flow-rate of acetic acid and ethanol. The densities required for this conversion were estimated by using a steady-state process simulator, PRO/II ver. 9.1 (Schneider Electric, Rueil-Malmaison, France), with the non-random two-liquid (NRTL) model.
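A small numerical sketch of the residence-time calculation described above. The reactor volume (95 mL) is from the text; the pump flow-rates and the ambient and reaction-condition densities are placeholder values invented for illustration (in the study the densities came from PRO/II with the NRTL model), so the printed number is illustrative only.

def residence_time_min(pump_flows_ml_min, ambient_densities, reactor_density,
                       reactor_volume_ml=95.0):
    """Residence time t = V / F, with F the total volumetric flow of the
    acetic acid-ethanol mixture re-expressed at reactor temperature and pressure."""
    # Convert each pump's volumetric flow (metered at ambient density) to mass flow,
    # then back to volumetric flow using the mixture density at reactor conditions.
    mass_flow = sum(q * rho for q, rho in zip(pump_flows_ml_min, ambient_densities))
    volumetric_flow_reactor = mass_flow / reactor_density
    return reactor_volume_ml / volumetric_flow_reactor

# Placeholder inputs: pump flows of 0.4 and 1.6 mL/min, ambient densities of about
# 1.05 and 0.79 g/mL, and an assumed mixture density of 0.55 g/mL at reaction conditions.
print(residence_time_min((0.4, 1.6), (1.05, 0.79), 0.55))  # ~31 min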
The product was diluted with methanol (SP, 99.8 %, Nacalai Tesque) and analyzed by high-performance liquid chromatography (HPLC) with an LC-10A system (Shimadzu Corp., Kyoto, Japan) under the following conditions: column, Cadenza CD-C18 (250 × 4.6 mm, Imtakt Corp., Kyoto, Japan); flow-rate, 1.0 mL/min; eluent, methanol; detector, UV at 205 nm wavelength; oven temperature, 40 °C. Based on the HPLC chart, the yield of ethyl acetate was determined in mol% based on the fed acetic acid.

Hydrogenolysis of Ethyl Acetate into Ethanol

A flow-type experimental set-up, as illustrated in Fig. 2(b), was used for the vapor-phase catalytic hydrogenolysis of ethyl acetate. The reactor was made from Incoloy NCF800 steel (inner diameter, 13.8 mm; length, 300 mm) with an inner tube (outer diameter, 3.2 mm), in which a thermocouple was inserted. A vaporizer filled with glass beads (diameter, 2 mm) was placed at the top of the reactor. A Cu-type catalyst was placed in the middle of the reactor (bed height: 10 mm), and the remaining space was filled with glass wool. As the catalysts, Cu-Zn (N211, Cu/Zn 48/44, w/w), Cu-Cr-Mn (N202E, Cu/Cr/Mn 38/37/2, w/w) and Cu-Cr-Ba-Si (N201H, Cu/Cr/Ba/Si 38/36/11/8, w/w) were purchased from Nikki Chemical Co., Ltd., Kawasaki, Japan, and used as powders (30-100 mesh). Before each experiment, the catalyst was activated under N2 flow (250 mL/min, 0.1 MPa) and then H2 flow (250 mL/min, 2.0 MPa) at 250 °C for 2 h each. The reactor was heated with a cylindrical electric furnace. The gas flow-rates were controlled with mass flow controllers, and the pressure was maintained with a back-pressure regulator. The N2 (99.99 %) and H2 (99.9 %) gases were purchased from Imamura Sanso K.K., Ohtsu, Japan. During the H2 treatment described above, water generation was confirmed and was completed in a time sufficiently shorter than 2 h for all catalysts; the reduction treatment of all catalysts was therefore considered sufficient. After the activation, the H2 flow-rate, temperature and pressure were adjusted to the designated values. Ethyl acetate (EP, 99 %, Nacalai Tesque) was then injected into the vaporizer with a high-pressure pump at designated flow-rates, and the resulting vapor, mixed with H2, was introduced into the reactor. Since the bulk volume of the catalyst was only approximately 1.4 mL, the residence time of ethyl acetate in the catalyst region was very short. After passing through the reactor, the liquid products were recovered in a cold trap at 20 °C, and the gaseous products were collected in a gas bag.

1. Esterification of Acetic Acid into Ethyl Acetate

The effect of temperature on esterification was first investigated. Figure 3 shows the yield of ethyl acetate when acetic acid was treated with ethanol at various temperatures under a pressure of 20 MPa (acetic acid : ethanol = 1 : 5 mol/mol). It was confirmed that the esterification reaction of acetic acid to ethyl acetate proceeded without catalyst in supercritical or subcritical ethanol. Higher temperatures resulted in faster reaction rates for ethyl acetate formation. In addition, the yield of ethyl acetate tended to increase quickly in the early stage of the reaction, but the formation rate became slow when the reaction time was prolonged. At the given reaction temperatures (190-310 °C), there was no side reaction because only acetic acid and ethyl acetate were found in the HPLC analysis (i.e., yield of ethyl acetate = conversion (consumption) of acetic acid, mol%).
The yield of ethyl acetate was relatively low (63.5 mol%) at 190 °C, even after treatment for 48 min. The yield improved as the temperature was increased, and seemed to reach equilibrium at about 70 mol% at 250 °C. When the temperature was further increased to 270, 290 and 310 °C, the yield of ethyl acetate increased and equilibrated at about 80 mol%, although the yields differed slightly among these temperatures. Therefore, the reaction was considered to reach equilibrium at about 80 mol% under the given molar ratio of acetic acid to ethanol (1 : 5) and reaction temperatures around 300 °C. In terms of energy consumption, lower reaction temperatures are preferred; therefore, 270 °C is sufficient for ethyl esterification of acetic acid. With regard to pressure, lower pressures are more desirable for safety reasons. However, low pressure reduces the density of the reaction mixture and shortens the residence time in the reactor, so a longer reactor becomes necessary. Thus, although this study was conducted at 20 MPa, the optimum reaction pressure will depend on the situation. As noted above, since the yield of ethyl acetate is related to the molar ratio of acetic acid and ethanol, the effect of the molar ratio was investigated at 270 °C/20 MPa, as shown in Fig. 4. A high yield is expected if the ethanol ratio is high, because the forward reaction becomes dominant while the reverse reaction of ethyl acetate to regenerate acetic acid is suppressed. When the ethanol ratio was increased from 1 : 1 to 1 : 10, the yield of ethyl ester improved to about 90 mol%. However, as the ethanol ratio was further increased (1 : 20, 1 : 30, and 1 : 50), the esterification reaction proceeded very slowly and the ester yield remained low. These behaviors can be explained by an autocatalytic effect of acetic acid. Because acetic acid can act as an acid catalyst, it enhances the esterification reaction. However, a large amount of ethanol dilutes acetic acid, weakens its autocatalytic effect, and leads to a slow reaction. This also explains the fast reaction rate in the early stage of the reaction, when the concentration of acetic acid is high. A similar phenomenon has been reported in our previous study on methyl esterification of fatty acids in supercritical methanol 23). In this study, the maximum ethyl acetate yield of 89 mol% was achieved under the conditions of 270 °C/20 MPa for 60 min and a molar ratio of 1 : 10 (acetic acid/ethanol). However, a lower ethanol ratio may be desirable. More applicable operating conditions will be discussed later, along with the conditions for hydrogenolysis.

2. Hydrogenolysis of Ethyl Acetate into Ethanol

Figure 5 shows the catalytic activities of several Cu-type catalysts, evaluated as the ethanol yield after the hydrogenolysis of ethyl acetate at 200-350 °C/2.0 MPa. The molar ratio of ethyl acetate to H2 was 1 : 8. Since one mole of ethyl acetate requires two moles of H2 for hydrogenolysis, as shown in Eq. (2), this ratio corresponds to four times the required amount. None of the catalysts used in this study was very active at 200 °C, while the ethanol yield increased with increasing reaction temperature. Among these catalysts, only Cu-Zn exhibited high activity at 250 °C, whereas Cu-Cr-Mn and Cu-Cr-Ba-Si showed high activity at 300 °C.
There was almost no side reaction for any of the catalysts at temperatures between 200 and 300 °C (i.e., yield of ethanol = conversion of ethyl acetate, mol%), because only trace amounts of by-products were found in GC analysis. The ethanol yield continued to increase even at 350 °C for Cu-Zn, but the yield decreased for the other two catalysts. In the cases of Cu-Cr-Mn and Cu-Cr-Ba-Si, the raw material (ethyl acetate) did not remain in the reaction mixture at 350 °C; therefore, the decrease in ethanol yield was due to side reactions other than hydrogenolysis. Indeed, unidentified by-products were found to some extent in GC analysis. On the other hand, no side reaction was observed for Cu-Zn even at 350 °C. The reaction conditions were further evaluated for the Cu-Zn catalyst in the temperature range of 180-270 °C, as shown in Fig. 6. The catalytic hydrogenolysis became effective at around 200 °C, and the reactivity increased linearly with increasing reaction temperature. In the micro-GC analysis, ethane and a trace amount of methane were detected as gas products at 240 °C (0.15 wt% based on the fed ethyl acetate), and they increased with the reaction temperature, but their combined yield was only 0.89 wt% even at 270 °C. Therefore, it was suggested that the hydrogenolysis of ethyl acetate to ethanol with Cu-Zn proceeded quite selectively in the temperature range of 210-270 °C. In the following experiments, the selectivity of hydrogenolysis was also high except for a very small amount of gaseous products, and thus the ethanol yield was almost equal to the conversion of ethyl acetate. The effect of reaction pressure on the hydrogenolysis with Cu-Zn was investigated, as shown in Fig. 7. The ethanol yield improved as the pressure was increased. When the hydrogenolysis was conducted at 4.0 MPa, the ethanol yield was about 1.7 times higher than that at 1.0 MPa. This might be owing to the increase in residence time in the reactor, because the density of the ethyl acetate-H2 vapor mixture increases when the pressure is increased. Moreover, the increase in density will enhance the reaction between ethyl acetate and H2. Figure 8 shows the effect of the molar ratio of H2 to ethyl acetate on the ethanol yield. By using a large excess amount of H2 (ethyl acetate : H2 = 1 : 32), ethyl acetate was converted to ethanol almost quantitatively (98.7 mol% yield). Such complete hydrogenolysis would be essential for bioethanol production because of the separation problem described earlier. Although an excess amount of H2 is required for the hydrogenolysis, the H2 recovered after the reaction can be reused, because it still has high purity given that the amount of gaseous by-products is quite small in this reaction. The above experiments were conducted using pure ethyl acetate as a reactant to reveal its fundamental reactivity; the effects of adding ethanol or acetic acid to the feed were then examined. The effect of acetic acid was more evident than that of ethanol: the ethanol yield was reduced by half when 10 mol% acetic acid was added. These results indicate that acetic acid should be converted to ethyl acetate in high yield in the esterification stage to achieve efficient hydrogenolysis.

Discussion to Establish the Actual Process

Although the Cu-Zn catalyst exhibited excellent reactivity and selectivity in the hydrogenolysis of ethyl acetate to ethanol, it was found that unreacted ethanol and acetic acid should be reduced as much as possible in the esterification stage before the subsequent hydrogenolysis.
In the esterification reaction of acetic acid, the yield of ethyl acetate was improved by using a large excess amount of ethanol. However, when the ethanol ratio was increased, the rate of the esterification reaction became slower because of the dilution effect, and more ethanol remained after the reaction. Besides, a large amount of ethanol will increase the energy consumption of the process. Therefore, the two-step reaction method proposed in this study still has challenges, especially in the esterification step. An appropriate design would be to employ a low ethanol ratio to acetic acid (e.g., 1 : 1) in the esterification reaction, which makes the reaction rate high owing to the autocatalytic effect of acetic acid. The produced water could then be removed after a certain treatment time and the esterification reaction repeated. For example, the pervaporation technique is available to remove water from the reaction mixture 33), and the water removal will shift the reaction toward ethyl acetate formation. Ideally, it is better to remove water simultaneously with the esterification reaction; for that purpose, reactive distillation may also be a candidate 34). In this way, if high-purity ethyl acetate can be obtained from acetic acid, the subsequent hydrogenolysis can be carried out efficiently, and bioethanol production from acetic acid will be established.

Conclusion

A two-step reaction for bioethanol production from acetic acid was proposed: ethyl esterification of acetic acid to ethyl acetate, followed by hydrogenolysis to ethanol. The catalyst-free esterification of acetic acid proceeded well in subcritical and supercritical ethanol, and an autocatalytic effect of acetic acid was found. Therefore, a low ethanol ratio to acetic acid led to a fast reaction rate but caused a low ethyl acetate yield owing to the reverse reaction of ethyl acetate with water. Although an appropriate reaction condition was found to be 270 °C/20 MPa, water removal during the esterification reaction, for example by pervaporation or reactive distillation, will be necessary to achieve a high ethyl acetate yield without a large excess amount of ethanol. As for the hydrogenolysis, the Cu-Zn catalyst exhibited excellent reactivity and selectivity in the temperature range of 210-270 °C, where only very small amounts of ethane and methane were formed as by-products. With an excess amount of H2, ethyl acetate was converted entirely to ethanol. However, if unreacted acetic acid or ethanol was included in the reactant, the ethanol yield deteriorated; an excellent conversion in the esterification stage is therefore also essential for the hydrogenolysis. These findings will contribute to developing an efficient and practical process for converting acetic acid to ethanol by this two-step method, even though some challenges remain. This work is also part of establishing the highly efficient bioethanol production from lignocellulose through acetic acid fermentation developed by our research group.
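As a rough consistency check on the esterification equilibrium discussed above, the sketch below backs out an apparent equilibrium constant from the reported ~80 mol% plateau at a 1:5 acetic acid:ethanol feed and then predicts the equilibrium yield at other feed ratios. It assumes ideal-mixture behavior and treats the reported plateau as a true equilibrium value, so it is an illustration rather than a re-analysis of the data.

import math

def equilibrium_conversion(K, r):
    """Equilibrium conversion x of acetic acid for AcOH + EtOH <=> EtOAc + H2O,
    starting from 1 mol acid and r mol ethanol (ideal-mixture assumption).
    Solves (1 - K) x^2 + K (1 + r) x - K r = 0 for the physical root."""
    a, b, c = 1.0 - K, K * (1.0 + r), -K * r
    if abs(a) < 1e-12:                      # K == 1 degenerates to a linear equation
        return -c / b
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Apparent K inferred from the reported ~80 mol% equilibrium yield at a 1:5 feed.
K = 0.80**2 / ((1.0 - 0.80) * (5.0 - 0.80))          # ~0.76

for r in (1.0, 5.0, 10.0):
    x = equilibrium_conversion(K, r)
    print(f"acid:ethanol = 1:{r:g} -> equilibrium yield ~{x:.0%}")
# Prints roughly 47 %, 80 % and 89 %; the last value is consistent with the
# reported 89 mol% maximum at 1:10, while a 1:1 feed is limited by the reverse reaction.

The same relation also shows why removing the product water pushes the conversion up at a fixed ethanol ratio, which is the motivation for the pervaporation and reactive-distillation options mentioned above.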
Prolyl-tRNAPro in the A-site of SecM-arrested Ribosomes Inhibits the Recruitment of Transfer-messenger RNA*

Translational pausing can lead to cleavage of the A-site codon and facilitate recruitment of the transfer-messenger RNA (tmRNA) (SsrA) quality control system to distressed ribosomes. We asked whether aminoacyl-tRNA binding site (A-site) mRNA cleavage occurs during regulatory translational pausing using the Escherichia coli SecM-mediated ribosome arrest as a model. We find that SecM ribosome arrest does not elicit efficient A-site cleavage, but instead allows degradation of downstream mRNA to the 3′-edge of the arrested ribosome. Characterization of SecM-arrested ribosomes shows the nascent peptide is covalently linked via glycine 165 to tRNA3Gly in the peptidyl-tRNA binding site, and prolyl-tRNA2Pro is bound to the A-site. Although A-site-cleaved mRNAs were not detected, tmRNA-mediated ssrA tagging after SecM glycine 165 was observed. This tmRNA activity results from sequestration of prolyl-tRNA2Pro on overexpressed SecM-arrested ribosomes, which produces a second population of stalled ribosomes with unoccupied A-sites. Indeed, compensatory overexpression of tRNA2Pro readily inhibits ssrA tagging after glycine 165, but has no effect on the duration of SecM ribosome arrest. We conclude that, under physiological conditions, the architecture of SecM-arrested ribosomes allows regulated translational pausing without interference from A-site cleavage or tmRNA activities. Moreover, it seems likely that A-site mRNA cleavage is generally avoided or inhibited during regulated ribosome pauses.

A-site mRNA cleavage is a novel RNase activity that acts on A-site codons within paused ribosomes. Ehrenberg, Gerdes and their colleagues (1) first demonstrated that Escherichia coli RelE protein causes cleavage of A-site mRNA in vitro. Subsequently, A-site cleavage was also shown to occur at stop codons during inefficient translation termination in cells that lack RelE and related proteins (2,3). The latter finding indicates that another unknown A-site nuclease also exists in E. coli. Indeed, it is possible the ribosome itself catalyzes A-site cleavage. The molecular requirements for A-site cleavage are incompletely understood, but an unoccupied ribosome A-site appears to be important for both RelE-dependent and RelE-independent nuclease activity (1,2). A-site nuclease activity truncates mRNAs and produces stalled ribosomes that are unable to continue standard translation. In bacteria, ribosomes stalled at the 3′ termini of such truncated messages are "rescued" by the tmRNA quality control system.
tmRNA is a specialized RNA that acts first as a tRNA to bind the A-site of stalled ribosomes, and then as an mRNA to direct the addition of the ssrA peptide degradation tag to the C terminus of the nascent polypeptide (4,5). As a result of tmRNA activity, incompletely synthesized proteins are targeted for proteolysis and stalled ribosomes undergo normal translation termination and recycling (5). In this manner, A-site mRNA cleavage and tmRNA work together as a translational quality control system that responds to paused and stalled ribosomes. Although a paused ribosome can be a manifestation of translational difficulty, translational pausing is also used to control and regulate protein synthesis. In many instances, the newly synthesized nascent peptide inhibits either translation elongation or termination (6,7). A recently described example is the SecM-mediated ribosome arrest, which controls expression of SecA protein from the secM-secA mRNA in E. coli (8). The SecM nascent peptide interacts with the ribosome exit channel to elicit a site-specific ribosome arrest (9). The SecM-stalled ribosome is postulated to disrupt a downstream mRNA secondary structure that sequesters the secA ribosome binding site (9,10). Thus, efficient initiation of secA translation depends upon ribosome pausing at the upstream secM open reading frame (11). SecM-mediated ribosome pausing is regulated in turn by the activity of SecA protein. SecM is secreted co-translationally by the general Sec machinery, which is powered in part by the SecA ATPase (12). It is thought that the mechanical pulling force exerted by SecA on the SecM nascent chain during secretion alleviates the ribosome arrest and allows translation to continue (13,14). This intriguing regulatory circuit allows the cell to monitor protein secretion activity via ribosome pausing and adjust SecA synthesis accordingly. One outstanding question is how A-site cleavage and tmRNA activities affect regulatory translational pauses such as the SecM-mediated ribosome arrest. If all paused ribosomes are subject to A-site cleavage, then this nuclease activity would be expected to interfere with SecA regulation. The experiments presented in this paper demonstrate that A-site mRNA cleavage and the tmRNA quality control system do not significantly affect SecM-mediated ribosome arrest. Two recent reports have demonstrated that the A-site of SecM-arrested ribosomes is filled with tRNA (15,16). The cryo-EM structure from Frank and colleagues (15) shows that ~40% of SecM-arrested ribosomes contain a fully accommodated A-site tRNA. Ito and colleagues (16) have recently analyzed SecM-arrested ribosomes prepared by in vitro translation and concluded that the A-site tRNA is a prolyl-tRNAPro. In our analysis of SecM-arrested ribosomes in vivo, we also find that the P- and A-sites of the SecM-arrested ribosome are occupied with peptidyl- and aminoacyl-tRNAs, respectively. Additionally, we show that the occupied A-site prevents tmRNA recruitment during ribosome arrest and may also inhibit A-site mRNA cleavage. Thus, regulation by SecM ribosome arrest is able to operate efficiently in the presence of quality control systems that alleviate ribosome stalling.

EXPERIMENTAL PROCEDURES

Bacterial Strains and Plasmids - Table 1 lists the bacterial strains and plasmids used in this study. All bacterial strains were derivatives of E. coli strain X90 (17). Strain CH12 (X90(DE3)) was generated using the Novagen (DE3) lysogen kit according to the manufacturer's instructions.
Strain CH2198 (X90 ssrA(his6)(DE3)) was obtained by introducing the ssrA(His6) allele (18) of tmRNA into the ssrA chromosomal locus using the phage λ Red recombination method with minor modifications (17,19). The same method was used to delete the rna (encoding RNase I), rnb (encoding RNase II), and pnp (encoding PNPase) genes. The rnr::kan disruption and the strain expressing truncated RNase E have been described previously (2,20). All gene disruptions and deletions were introduced into strain CH113 by phage P1-mediated transduction. The kanamycin resistance cassette was removed from strain CH113 Δrnb::kan using FLP recombinase as described (19), allowing construction of the Δrnb rnr::kan double mutant. Lac− strains of X90 and X90 ssrA::cat were obtained by curing the strain of the F′ episome as described (17). The details of all strain constructions are available upon request.

mRNA Expression and RNA Analysis - E. coli strains were grown overnight at 37 °C in LB medium supplemented with the appropriate antibiotics (150 µg/ml of ampicillin, 25 µg/ml of tetracycline, or 50 µg/ml of kanamycin). The next day, cells were resuspended at an optical density at 600 nm (A600) of 0.05 in 15 ml of fresh medium and grown at 37 °C with aeration. Once cultures reached an A600 of ~0.3, mRNA expression was induced with isopropyl β-D-thiogalactopyranoside (1.5 mM). After further incubation for 30 min, 15 ml of ice-cold methanol was added to the cultures, the cells collected by centrifugation, and the cell pellets frozen at −80 °C. Total RNA was extracted from cell pellets using 1.0 ml of a solution containing 0.6 M ammonium isothiocyanate, 2 M guanidinium isothiocyanate, 0.1 M sodium acetate (pH 4.0), 5% glycerol, 40% phenol. The disrupted cell suspension was extracted with 0.2 ml of chloroform, the aqueous phase removed and added to an equal volume of isopropyl alcohol to precipitate total RNA. RNA pellets were washed once with ice-cold 75% ethanol and dissolved in either 10 mM Tris-HCl (pH 7.5), 1 mM EDTA or 10 mM sodium acetate (pH 5.2), 1 mM EDTA.

SecM Expression and Protein Analysis - Strains were cultured as described above for RNA analysis. Protein extraction and Western blot analyses were conducted as described (17). Anti-His6 polyclonal antibodies were obtained from Santa Cruz Biochemical. Monoclonal antibodies specific for E. coli β-galactosidase and the FLAG M2 epitope were obtained from Sigma. SsrA(His6)-tagged SecM proteins were purified by Ni2+-NTA agarose (Qiagen) affinity chromatography as described (17,18). Ni2+-NTA purified protein was further purified by reverse phase HPLC. N-terminal gas-phase sequencing was performed on a Porton 2020 protein sequencer (Beckman-Coulter) with a dedicated in-line HPLC (model 2090) for separation of phenylthiohydantoin derivatives. Molecular masses were determined by liquid chromatography mass spectrometry. Samples were applied to a Zorbax 300SB-C18 reverse phase column in aqueous 0.1% formic acid and proteins eluted using a linear gradient of acetonitrile using an Agilent 1100 LC nano-system. Eluted proteins were infused into a Waters Q-Tof II mass spectrometer for ionization. β-Galactosidase assays were conducted essentially as described (23). Strains expressing secA′::lacZ translational fusions were inoculated at an A600 of 0.05 in LB medium and grown at 37 °C with aeration to an A600 of 0.3-0.6. β-Galactosidase activity for each construct was measured from 5 to 8 independent cultures and reported as mean ± S.D.
Cell Extract Fractionation - Strains CH12 Δrna::kan and CH113 Δrna::kan containing plasmid pSecM′ or pSecM′(P166A) were grown in 1 liter of LB media at 37 °C with aeration in Fernbach flasks. At an A600 of ~0.6, SecM′ expression was induced by the addition of isopropyl β-D-thiogalactopyranoside to 1.5 mM and cultures incubated for 1 h at 37 °C with aeration. Cultures were harvested over ice, the cells were collected by centrifugation and washed once with cold, high-Mg2+ S30 buffer (60 mM potassium acetate, 30 mM magnesium acetate, 0.2 mM EDTA, 10 mM Tris acetate (pH 7.0)). Washed bacterial pellets were resuspended in 10 ml of cold high-Mg2+ S30 buffer and the cells were broken by one passage through a French press at 12,000 p.s.i. Cell lysates were cleared by centrifugation at 30,000 × g for 15 min at 4 °C, and the supernatants layered onto cushions of cold high-Mg2+ S30 buffer containing 1.1 M sucrose in ultracentrifuge tubes (Beckman number 344057). Samples were centrifuged in a Beckman-Coulter Optima ultracentrifuge at 45,000 × g for 1 h at 4 °C using an MLS-50 rotor. Total RNA was extracted from the high-speed supernatants and pellets for analysis as described above.

RESULTS

SecM Ribosome Arrest Leads to mRNA Cleavage - To determine whether SecM-mediated ribosome arrest leads to A-site cleavage, we generated plasmids to express mRNA encoding SecM and the first 62 residues of SecA (Fig. 1A). Three SecM variants were used throughout this work: (i) FLAG-SecM, which is the wild-type protein fused to an N-terminal FLAG epitope tag; (ii) FLAG-(Δss)SecM, which lacks the secretion signal sequence (Δss = deleted signal sequence; residues 1-37); and (iii) FLAG-(Δss-P166A)SecM, which lacks the signal sequence and has alanine in place of proline 166. Deletion of the SecM signal sequence prevents its secretion and leads to a profound ribosome arrest, whereas the P166A variant completely abrogates arrest (9,14). The FLAG sequence was added to facilitate analysis of SecM proteins by Western blot. However, secretion of FLAG-SecM protein resulted in the removal of the FLAG epitope along with the signal sequence (see below). Each SecM protein was expressed in wild-type cells (tmRNA+) and cells that lack tmRNA (ΔtmRNA), and the corresponding messages examined by Northern blot analysis using a probe specific for the ribosome binding site upstream of secM. In addition to the full-length mRNAs, truncated flag-secMA′ and flag-(Δss)secMA′ messages were also detected (Fig. 1B). The truncated mRNAs did not hybridize to a probe specific for the downstream secA sequence (data not shown). No truncated flag-(Δss-P166A)secMA′ mRNA was apparent, suggesting that ribosome arrest was required for mRNA cleavage. Interestingly, steady state levels of truncated flag-secMA′ and flag-(Δss)secMA′ mRNAs were similar in wild-type and ΔtmRNA cells (Fig. 1B). This finding was noteworthy because tmRNA activity usually promotes rapid degradation of truncated mRNAs, including those produced by A-site mRNA cleavage (2,3,24). Moreover, the truncated mRNAs appeared to be somewhat larger than in vitro transcripts that terminate in the codon for glutamine 167, a position that is adjacent to the A-site of the arrested ribosome (Fig. 1, A and B) (16). The 3′ ends of the truncated messages were mapped more precisely using S1 nuclease protection analysis. The termini were somewhat heterogeneous but strong cleavage was detected inside and adjacent to the secM stop codon (Fig. 1C).
[Figure 1 legend] A, secMA′ mRNA variants are shown with the FLAG epitope, signal sequence, and oligonucleotide probe binding sites indicated. SecM residues glycine 165 to threonine 170 and the encoding mRNA sequence are shown, as is the complementary sequence of the S1 nuclease probe and the 3′ terminus of the truncated in vitro transcripts used. The position of the P166A alteration is indicated by (Ala). Arrows indicate the positions of HphI and Sau96I restriction endonuclease cleavages in the S1 probe used to generate gel migration standards. B, Northern blot of secMA′ mRNAs purified from tmRNA+ and ΔtmRNA cells. The positions of full-length and truncated flag-(Δss)secMA′ mRNA are indicated. Both in vitro transcripts were truncated after the second nucleotide of the glutamine 167 codon. C, S1 nuclease protection map of truncated secMA′ mRNAs. Cleavages were detected in the secM stop codon and at positions 1-4 nucleotides downstream. No S1 protection was detected with RNA purified from a strain that had not been induced with isopropyl β-D-thiogalactopyranoside (IPTG). Truncated and full-length transcripts were produced by in vitro transcription and analyzed by S1 nuclease protection. The HphI and Sau96I oligonucleotide standards were generated by annealing the 3′-labeled S1 probe to a complementary DNA oligonucleotide followed by digestion with the appropriate endonucleases.

No cleavages were detected in the codon for glutamine 167, which would have produced an S1 protection pattern similar to that observed with the truncated control in vitro transcript (Fig. 1C, truncated lane). As suggested by the Northern analysis described above, mRNA cleavage occurred ~13 to 19 nucleotides downstream of the predicted A-site codon during SecM ribosome arrest.

3′ → 5′ Exonucleases Generate Truncated secM mRNA during Ribosome Arrest - Two models account for ribosome arrest-dependent cleavage at the secM stop codon: (i) A-site cleavage due to inefficient translation termination as originally described in Refs. 2 and 3, or (ii) exonucleolytic trimming of downstream mRNA to the 3′ margin of the arrested ribosome. To differentiate between these possibilities, we fused secM codons 150-166 in-frame between two thioredoxin genes (trxA). The encoded FLAG-TrxA-SecM′-TrxA fusion protein contained the minimal SecM peptide motif (150FSTPVWISQAQGIRAGP166) sufficient for ribosome arrest (9). However, in contrast to the wild-type secM gene, the flag-trxA-secM′-trxA stop codon is positioned several hundred nucleotides downstream of the predicted ribosome arrest site (21). Northern analysis of flag-trxA-secM′-trxA mRNA also showed ribosome arrest-dependent truncated messages (data not shown), and S1 nuclease protection analysis detected two prominent cleavage sites at 13 and 19 nucleotides downstream of the proline 166 codon (wild-type SecM numbering) (Fig. 2B, wild-type lane). The cleavages were the same distance from the proline 166 codon as was observed with flag-secMA′ and flag-(Δss)secMA′ mRNAs (Figs. 1C and 2A). Although the cleavage patterns were not strictly identical between truncated messages, the secM stop codon was clearly not required for mRNA cleavage. We reasoned that if ribosome arrest-dependent mRNA cleavage was due to exonuclease activity, then cleavage could be modulated by deletion of known 3′ → 5′ exoribonucleases. Fig. 2B shows the effects of specific exoribonuclease deletions on mRNA cleavage using the flag-trxA-secM′-trxA message.
Deletion of RNase R leads to an increase in the +19 cleavage product and a decrease in the +13 cleavage product compared with wild-type (Fig. 2B). Similarly, removing polynucleotide phosphorylase (PNPase) activity also led to increased levels of the +19 product (Fig. 2B). In contrast, there was a slight decrease in the +19 product in ΔRNase II cells (Fig. 2B). The RNase R/RNase II double deletion strain exhibited less cleavage at both sites, whereas deletion of the C-terminal domain of RNase E had little effect on cleavage (Fig. 2B). Although RNase E is an endoribonuclease, the C-terminal domain is required for the organization of the degradosome, a multienzyme complex that contains PNPase and is important for the degradation of many mRNAs in E. coli (25,26). In general, the accumulation of specific cleavage products was dependent upon exoribonuclease activities.

[Figure 2 legend, in part] A, ... as in Fig. 1C. Numerical position is reported with respect to the codon corresponding to SecM proline 166, where position +1 is the first nucleotide of the codon corresponding to SecM glutamine 167. Downward arrows labeled DdeI, EcoRI, and Sau96I indicate mRNA cleavage sites corresponding to the migration positions of S1 oligonucleotide probe standards. B, S1 nuclease protection analysis of flag-trxA-secM′-trxA mRNA purified from cells lacking 3′ → 5′ exoribonucleases. Gene deletions and disruptions were constructed as described under "Experimental Procedures." Positions +13 and +19 downstream of the codon corresponding to SecM proline 166 are indicated. The DdeI, EcoRI, and Sau96I oligonucleotide standards were generated by annealing the labeled S1 probe to a complementary DNA oligonucleotide followed by digestion with the appropriate endonucleases. IPTG, isopropyl β-D-thiogalactopyranoside.

The SecM Nascent Peptide Is Linked to tRNAGly during Ribosome Arrest - The accumulation of truncated secM messages in tmRNA+ cells and the involvement of exoribonucleases in mRNA cleavage are inconsistent with what is known about A-site cleavage. Moreover, the SecM-induced ribosome arrest occurs at the codon for proline 166 (9,16), a position that is 13-15 nucleotides upstream of the stop codon (Fig. 1A). We sought to confirm the position of SecM-stalled ribosomes using a mini-gene that encodes SecM residues glutamine 149-glutamine 167 directly downstream of the FLAG epitope. Additionally, the flag-secM′ mini-gene was synonymously recoded to change the codon for proline 153 from CCC to CCG, and the codon for glycine 161 from GGC to GGA. Northern analysis using a probe specific for the ribosome binding site of flag-secM′ detected truncated mRNA, and this cleavage appeared to depend upon ribosome stalling because truncated mRNA was not observed with the P166A variant (Fig. 3, RBS probe blot). The position of the arrested ribosome was determined by identifying the nascent peptidyl-tRNA by Northern blot analysis. Induction of FLAG-SecM′ synthesis led to a shift in the electrophoretic mobility of tRNA3Gly but not that of tRNA2Pro (Fig. 3, glyV and proL probe blots). The tRNA3Gly mobility shift was not observed when the FLAG-(P166A)SecM′ variant was expressed (Fig. 3, glyV probe blot). The tRNA3Gly mobility shift was not seen when RNA samples were incubated at pH 8.9 for 1 h at 37 °C to deacylate tRNAs (data not shown) (22). The arrested ribosome could be positioned unambiguously because the recoded mini-gene contained only one codon (GGC of glycine 165) that is decoded by tRNA3Gly.
Therefore, during SecM-mediated ribosome arrest, the nascent peptide is covalently linked to tRNA3Gly via glycine 165 and the codon for proline 166 is positioned in the A-site.

SecM-arrested Ribosomes Contain Prolyl-tRNAPro in the A-site - Elegant studies have shown that SecM ribosome arrest is prevented if proline residues are replaced with the imino acid analog, azetidine-2-carboxylic acid (14). Based on this finding, it has been reasonably assumed that proline 166 is incorporated into the SecM nascent peptide during ribosome arrest (8,9,14). However, recent work from Ito and colleagues (16), as well as our analysis, indicates that ribosome arrest occurs prior to proline 166 addition. One model that is consistent with all available data postulates that prolyl-tRNAPro occupies the A-site of the SecM-arrested ribosome. If this model is correct, tRNAPro should be stably associated with arrested ribosomes. Extracts from cells expressing FLAG-(Δss)SecM were separated into high-speed pellet and supernatant fractions by ultracentrifugation through sucrose cushions. Polyacrylamide gel analysis of RNA extracted from these fractions showed that the rRNA (i.e., ribosomes) was present in the pellet fraction, whereas the majority of tRNA was in the supernatant fraction (data not shown). Partitioning of tRNA to the supernatant fraction was confirmed by Northern analysis for tRNA2Arg (Fig. 4, argQ probe blot), which was not predicted to associate with SecM-arrested ribosomes. In contrast, a higher proportion of tRNA2Pro was found in the pellet fractions from cells expressing FLAG-(Δss)SecM, but not FLAG-(Δss-P166A)SecM (Fig. 4, proL probe blot). Enrichment of tRNAPro in pellet fractions was dependent upon cognate tRNA/codon interactions. tRNA2Pro, the cognate tRNA for CCU and CCC codons, was not enriched in high-speed pellets if SecM proline 166 was encoded by CCG (Fig. 4, proL probe blot), even though the CCG codon fully supports ribosome arrest (9). Moreover, although tRNA1Pro partitioned to the pellet fractions when the CCU construct was expressed, significantly more tRNA1Pro was found in the pellet fraction when its cognate CCG codon was used to code for proline 166 (Fig. 4, proK probe blot). The partitioning of tRNA1Pro to the ribosome fraction with the CCU construct may be due to association with trailing ribosomes within the SecM-stalled polysome, because tRNA1Pro is not known to decode CCU and is found in the high-speed supernatant in the absence of ribosome arrest (Fig. 4, (Δss)P166A lanes). Finally, the association of tRNAPro with pellet fractions was not inhibited by the tmRNA quality control system (Fig. 4, ΔtmRNA versus tmRNA+).

tmRNA Activity at SecM-arrested Ribosomes - The data presented thus far indicate that tmRNA does not play a significant role in rescuing SecM-arrested ribosomes. However, published reports show SecM and SecM variants are ssrA-tagged by tmRNA as a consequence of ribosome arrest (27,28). We examined tmRNA-mediated peptide tagging of SecM proteins in cells that express tmRNA(His6), which encodes a hexahistidine-containing ssrA peptide that is resistant to proteolysis (18). Western blot analysis using antibodies specific for His6 detected two ssrA(His6)-tagged species of (Δss)SecM (Fig. 5A, (Δss)SecM His6 lane). A similar ssrA(His6)-tagged doublet was observed with signal sequence-containing FLAG-SecM (data not shown), but not with the FLAG-(Δss-P166A)SecM protein, which does not cause ribosome arrest (Fig. 5A, (Δss)P166A).
All ssrA(His6)-tagged species were also detected by Western analysis using antibody specific for the N-terminal FLAG epitope (Fig. 5A, anti-FLAG panel).

[Figure 3 legend, in part] RNA from strains expressing flag-secM′ (pSecM′) and flag-(P166A)secM′ (pSecM′(P166A)) mini-genes was analyzed by Northern blot to identify ribosome arrest-dependent peptidyl-tRNA. The RBS, glyV, and proL oligonucleotide probes were specific for the ribosome binding site of the mRNA, tRNA3Gly, and tRNA2Pro, respectively. The migration positions of full-length mRNA, truncated mRNA, tRNA3Gly, peptidyl-tRNA3Gly, and tRNA2Pro are indicated. Samples containing overexpressed tRNA2Pro are indicated by ↑[tRNA2Pro]. The slower migrating species detected in the proL probe blot is probably incompletely processed tRNA2Pro, as it is present in all overexpressed tRNA2Pro samples, regardless of SecM expression. IPTG, isopropyl β-D-thiogalactopyranoside.

To determine the sites of tagging, we purified ssrA(His6)-tagged FLAG-SecM and FLAG-(Δss)SecM by Ni2+-NTA affinity chromatography and subjected the purified proteins to mass spectrometry and N-terminal sequence analysis. Although FLAG-SecM was initially expressed as an N-terminal FLAG fusion, the N-terminal amino acid sequence (AEPNA) of the purified protein indicated that the epitope tag had been removed along with the signal sequence peptide during secretion (data not shown). The masses of the tagged SecM species were consistent with the addition of ssrA(His6) tags after glycine 165 (Fig. 5B, (Δss)SecM spectrum). We suspected that the tagged proteins detected by Western blot analysis corresponded to the two species observed by mass spectrometry. These assignments were confirmed through analysis of the FLAG-(Δss-Q167UAA)SecM protein, which was synthesized from a construct containing a mutation that changes the glutamine 167 codon to a stop codon (UAA) (Fig. 1A). The FLAG-(Δss-Q167UAA)SecM protein lacks four C-terminal amino acid residues, but still causes ribosome arrest (9). FLAG-(Δss-Q167UAA)SecM protein was tagged after glycine 165, but not after threonine 170 (Fig. 5A, (Δss)Q167UAA, and data not shown). Presumably, the premature stop codon prevented ribosomes from translating to the 3′ end of the truncated mRNA. The effect of tmRNA activity on total SecM protein production was examined by Western blot analysis using a monoclonal antibody specific for the N-terminal FLAG epitope present on all FLAG-(Δss)SecM variants. Two species of FLAG-(Δss)SecM accumulated in ΔtmRNA cells (Fig. 5A, anti-FLAG panel, ΔtmRNA lane). The higher molecular weight protein represented full-length polypeptide, and this species co-migrated with FLAG-(Δss-P166A)SecM (which does not cause ribosome arrest) on SDS-polyacrylamide gels (Fig. 5A, anti-FLAG panel). The lower molecular weight species seen in ΔtmRNA cells corresponded to incompletely synthesized FLAG-(Δss)SecM protein (to residue glycine 165) produced during ribosome arrest (Fig. 5A, and data not shown). However, analyses of cetyltrimethylammonium bromide precipitates and isolated ribosomes indicated that most of the incompletely synthesized FLAG-(Δss)SecM protein was not covalently linked to tRNA and therefore did not represent ribosome-bound nascent chains (data not shown). Therefore, incompletely synthesized FLAG-(Δss)SecM polypeptide chains were released from the arrested ribosome in a tmRNA-independent manner. In contrast to ΔtmRNA cells, full-length FLAG-(Δss)SecM protein was not detected in tmRNA+ cells (Fig. 5A, anti-FLAG panel, tmRNA+ lane).
Presumably, the full-length FLAG-(Δss)SecM protein was ssrA-tagged and degraded rapidly in wild-type cells. Similarly, full-length FLAG-(Δss)SecM did not accumulate to very high levels in tmRNA(His6)-expressing cells, although the two ssrA(His6)-tagged species were readily detected (Fig. 5A, anti-FLAG panel).

Prolyl-tRNAPro in the A-site Inhibits tmRNA Activity - SsrA tagging after glycine 165 appears to contradict the other data indicating that tmRNA plays no significant role in resolving SecM-arrested ribosomes. However, this work and previous studies relied upon SecM overexpression (27,28), which is predicted to deplete limiting tRNAPro species. tRNA3Gly, which holds the SecM nascent chain during ribosome arrest, is found at ~4,400 molecules per E. coli cell, whereas tRNA2Pro and tRNA3Pro, which occupy the arrested ribosome A-site, are present at only ~1,300 copies per cell (29). Therefore, if the number of SecM-arrested ribosomes exceeds 1,300 per cell, a second population of stalled ribosomes with unoccupied A-sites will accumulate due to prolyl-tRNAPro sequestration, potentially allowing for adventitious ssrA tagging after glycine 165. To test this model, we overexpressed tRNA2Pro and examined the effects on ssrA peptide tagging and three other properties of SecM ribosome arrest: (i) nascent peptidyl-tRNA stability, (ii) cleavage of flag-secM′ mRNA, and (iii) regulation of secA translation. Overexpression of tRNA2Pro significantly suppressed ssrA(His6) tagging after glycine 165, but increased tagging after threonine 170 (Fig. 5A, (Δss)SecM-ptRNA2Pro lane). Although tmRNA activity was significantly altered, tRNA2Pro overexpression had no effect on nascent peptidyl-tRNA3Gly accumulation (Fig. 3, glyV probe blot), and actually appeared to increase flag-secM′ mRNA cleavage (Fig. 3, RBS probe blot). Finally, tRNA2Pro overexpression had no effect on the regulation of secA translation. We made secA′::lacZ translational fusions and confirmed that deletion of the SecM signal sequence increased SecA′-LacZ expression, whereas further introduction of the P166A mutation reduced fusion protein synthesis (Fig. 6). Overexpression of tRNA2Pro had no significant effect on the ribosome arrest-dependent increase in β-galactosidase activity (Fig. 6). Moreover, deletion of tmRNA had no effect on SecA′-LacZ expression, as determined by Western blot and β-galactosidase activity analyses (Fig. 6). We also attempted to examine the effects of tRNA1Pro overexpression on ribosome arrest from constructs that encoded proline 166 as CCG. Unfortunately, all plasmid clones carrying the proK gene under its own promoter also contained mutations in the tRNA1Pro-encoding sequence (data not shown). Seven distinct mutations were found, mapping to the D-arm, T-arm, anticodon loop, and the promoter (data not shown). These results suggest that high-level overexpression of tRNA1Pro is deleterious to the cell.

DISCUSSION

Several lines of evidence indicate that the primary SecM-mediated ribosome arrest is resistant to A-site mRNA cleavage and subsequent tmRNA recruitment. First, although the secM mRNA was truncated in a ribosome arrest-dependent manner, the cleavage sites were 13 to 19 nucleotides downstream of the A-site codon. Second, the steady-state number of SecM-arrested ribosomes (as determined by Northern analysis of nascent peptidyl-tRNA) was not significantly affected by tmRNA.
Third, incompletely synthesized SecM protein (to residue glycine 165) accumulated in tmRNA+ and tmRNA(His6)-expressing cells. Fourth, SecM-dependent regulation of secA translation was essentially identical in ΔtmRNA and tmRNA+ cells (27). Finally, A-site-bound tRNA2Pro inhibits ssrA tagging after SecM glycine 165. The surprising discovery of A-site-bound prolyl-tRNAPro has also been recently reported by Ito and colleagues (16). That study used entirely different methods than ours to characterize arrested ribosomes produced in vitro (16), and is completely congruent with our analysis of the in vivo SecM ribosome arrest. Altogether, our data strongly suggest that tmRNA recruitment during the primary ribosome arrest is an artifact of SecM overexpression, and that A-site mRNA cleavage and ssrA tagging at this site do not occur under physiological conditions. We feel this conclusion makes biological sense because A-site cleavage is predicted to interfere with cis-acting SecM regulation of secA translation initiation. Moreover, co-translational secretion of the SecM nascent peptide ensures that SecA is synthesized in close proximity to the inner membrane (30), a phenomenon that presumably requires synthesis of SecM and SecA from the same mRNA molecule. Deletion of the SecM signal sequence prevents co-translational secretion and thereby precludes the mechanism that normally alleviates ribosome arrest (14). Secreted SecM also elicited ribosome arrest in our study, presumably because the overexpressed protein saturated the secretion machinery. The (Δss)SecM-mediated ribosome arrest exhibits a t1/2 > 4-5 min in vivo (14), which exceeds the half-life of bulk E. coli mRNA turnover (~2.4 min at 37 °C) (31). Thus, prolonged translational pausing allows degradation of the downstream mRNA to the 3′ edge of the arrested ribosome (Fig. 7). Presumably, the 5′ portion of the message is protected by ribosomes queued behind the primary SecM-arrested ribosome. The influence of SecM ribosome arrest on mRNA degradation appears to differ from other reported ribosome pauses, which tend to stabilize mRNA downstream of the arrest site (32-34). Although specific endonuclease cleavage between cistrons has been observed in E. coli operons (35), we find the same cleavages in mRNAs that lack the secM-secA intergenic region. Moreover, the cleavage appears to require ribosome arrest, arguing against a sequence-specific endonuclease activity. Our observations suggest that 3′ → 5′ exoribonucleases generate the 3′ termini of truncated secM messages. First, cleavage occurred downstream of the A-site codon, at sites consistent with the 3′ border of a stalled E. coli ribosome (36,37). Second, the proportion of +13 and +19 cleavage products was dependent upon exoribonuclease activities present in the cell. Longer cleavage products accumulated in the absence of either RNase R or PNPase, both of which degrade secondary structure-containing RNAs more efficiently than RNase II (38,39). Although RNase R and PNPase are not known to work together, our data suggest that these enzymes may cooperate to convert the +19 cleavage product to the +13 product. Finally, RNase II can indirectly inhibit the degradation of structured mRNAs by removing 3′ single-stranded regions required by PNPase to bind substrate (39,40). These biochemical properties are consistent with the accumulation of cleavage products in our exoribonuclease knock-out strains.
The details of mRNA cleavage notwithstanding, it is interesting that the secM stop codon is in position to be cleaved during prolonged ribosome arrest. SecM-arrested ribosomes clearly resumed translation, and upon reaching the 3′ end of the truncated mRNA, they stalled for a second time (Fig. 7). However, tmRNA is readily recruited to ribosomes stalled at the extreme 3′ termini of mRNAs, and SecM was ssrA-tagged after the C-terminal residue threonine 170 (Fig. 7, ribosome fate II). Because little full-length (Δss)SecM protein accumulated in tmRNA+ or tmRNA(His6) cells, it appears that degradation of mRNA to the ribosome edge preceded the resumption of translation. It is unclear whether exoribonuclease cleavage also leads to ssrA-dependent degradation of SecM under physiological conditions. Secreted SecM was shown to be rapidly degraded in tmRNA+ cells (14), and we find the non-degradable ssrA(His6) tag stabilizes SecM in the periplasm. However, both of these studies employed SecM overexpression. At lower expression levels, co-translational secretion of SecM is expected to prevent ribosome arrest, and thereby inhibit mRNA cleavage and subsequent tmRNA recruitment/ssrA-tagging (Fig. 7, ribosome fate I). In any event, prolonged ribosome arrest stimulates SecA expression, so significant protein synthesis must occur prior to degradation of the downstream secA cistron. A-site mRNA cleavage and tmRNA activities were clearly not able to resolve the majority of primary SecM-arrested ribosomes. However, ssrA tagging after glycine 165 indicates limited tmRNA recruitment during primary ribosome arrest, at least when SecM is overexpressed. Ivanova et al. (41) showed tmRNA is recruited to ribosomes stalled on mRNAs where the 3′ terminus is 12 nucleotides downstream of the A-site codon, albeit at a ~20-fold lower rate than maximum. Therefore, cleavage of mRNA to the 3′-edge of the arrested ribosome could allow relatively inefficient tmRNA recruitment, provided the A-site is not occupied with prolyl-tRNAPro (Fig. 7, ribosome fate III). Alternatively, limited A-site mRNA cleavage may have occurred under SecM overexpression conditions (Fig. 7, ribosome fate IV). It appears that A-site nuclease activity is restricted to codons within unoccupied A-sites (1,2), so presumably A-site cleavage in this instance would be an artifact of SecM overexpression. Based on Northern blot analysis, ~60% of cellular tRNA3Gly is sequestered as SecM nascent peptidyl-tRNA during SecM overexpression. This corresponds to roughly 2,600 SecM-arrested ribosomes per cell, of which only ~1,300 can simultaneously contain A-site prolyl-tRNAPro (29). Therefore, we estimate ~50% of the SecM-arrested ribosomes have unoccupied A-sites in the absence of compensatory tRNA2Pro overexpression, in accord with recent studies of SecM-arrested ribosomes (15,16). Given incomplete A-site occupancy, perhaps the lack of A-site mRNA cleavage reflects sequence specificity of the A-site nuclease. The RelE protein shows marked preference for A-site codons, cleaving CAG and UAG at the highest rate (1). However, we have observed RelE-independent A-site mRNA cleavage at several different codons, suggesting that many sequences are potential substrates (2). Alternatively, the low rate of A-site mRNA cleavage may be due to the substantial structural rearrangements that occur in the ribosome during SecM-mediated arrest (15).
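The bookkeeping behind the ~50% estimate in the preceding paragraph can be laid out explicitly. The copy numbers and the 60% sequestration figure are taken from the text (ref. 29 and the Northern blot analysis); the snippet below simply restates the arithmetic.

# tRNA accounting during SecM overexpression (numbers from the text / ref. 29)
trna_gly3_per_cell = 4400          # tRNA3Gly copies per cell
trna_pro_for_a_site = 1300         # tRNA2Pro + tRNA3Pro copies available for the A-site
fraction_gly3_sequestered = 0.60   # share of tRNA3Gly held as SecM peptidyl-tRNA

arrested_ribosomes = fraction_gly3_sequestered * trna_gly3_per_cell   # ~2,640
with_unoccupied_a_site = arrested_ribosomes - trna_pro_for_a_site     # ~1,340

print(round(arrested_ribosomes), round(with_unoccupied_a_site),
      round(with_unoccupied_a_site / arrested_ribosomes, 2))          # ~2640, ~1340, ~0.51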
SecM-induced structural rearrangements originate in the 50S exit channel and are propagated to the 30S subunit via inter-subunit bridges and ribosome-bound tRNAs (9,15). Although structural changes are transmitted by tRNA, SecM-arrested ribosomes adopt the same conformation whether or not prolyl-tRNAPro is bound in the A-site (15). Several elements of 16S rRNA are rearranged during ribosome arrest, including helix 44, which forms part of the 30S A-site and makes contact with A-site mRNA (15). Clearly, alteration of A-site structure could significantly affect A-site nuclease activity, whether catalyzed by the ribosome or a trans-acting factor. Interestingly, Aiba and colleagues (28) showed that expression of a fusion protein containing SecM-derived residues 161-166 (GIRAGP) resulted in significant mRNA cleavage at sites immediately adjacent to the A-site codon, although the majority of cleavages still occurred at other positions corresponding to the 3′ and 5′ edges of the paused ribosome. We also observed similar cleavages near the A-site codon when expressing a fusion protein containing the longer SecM-derived sequence of residues 149-166 (QFSTPVWISQAQGIRAGP) (Fig. 2B), but failed to detect these mRNA cleavages when expressing full-length SecM sequences. Perhaps the full-length SecM nascent peptide is required for complete structural rearrangement and inhibition of A-site mRNA cleavage. Gene regulation by translational pausing has long been recognized in prokaryotes, although its importance is still often underestimated. Indeed, the SecM ribosome arrest is a newly characterized example of translational attenuation, which was shown to control inducible expression of erythromycin and chloramphenicol resistance genes over 20 years ago (42)(43)(44)(45). The role of translational pausing in transcriptional attenuation of the E. coli trp operon was recognized even earlier (46,47). In each case, A-site mRNA cleavage and tmRNA activities have the potential to interfere with regulation by "rescuing" paused ribosomes. However, in our view, it makes little sense to employ regulatory strategies that are undermined by translational quality control systems, and we predict that regulatory ribosome pauses are generally immune to A-site cleavage and tmRNA activities. The mechanisms involved are likely varied, and characterization of other ribosome pauses will hopefully increase our understanding of the molecular requirements for A-site mRNA cleavage.
Global Prevalence of Antifungal-Resistant Candida parapsilosis: A Systematic Review and Meta-Analysis

A reliable estimate of Candida parapsilosis antifungal susceptibility in candidemia patients is increasingly important to track the spread of C. parapsilosis bloodstream infections and to define the true burden of ongoing antifungal resistance. A systematic review and meta-analysis (SRMA) was conducted to estimate the global prevalence and identify patterns of antifungal resistance. A systematic literature search of the PubMed, Scopus, ScienceDirect and Google Scholar electronic databases was conducted on published studies that employed antifungal susceptibility testing (AFST) on clinical C. parapsilosis isolates globally. Seventy-nine eligible studies were included. Using meta-analysis of proportions, the overall pooled prevalence of C. parapsilosis resistance to the three most important antifungal drugs (fluconazole, amphotericin B and voriconazole) was calculated as 15.2% (95% CI: 9.2-21.2), 1.3% (95% CI: 0.0-2.9) and 4.7% (95% CI: 2.2-7.3), respectively. Subgroup analyses by study enrolment time, country/continent and AFST method were conducted for the three studied antifungals to determine sources of heterogeneity. Temporal and regional differences in the prevalence of antifungal-resistant C. parapsilosis were identified, with similar patterns across the three antifungal drugs. These findings highlight the need for further studies to assess and monitor the growing burden of antifungal resistance, to revise treatment guidelines, and to implement regional surveillance to prevent a further increase in the recently emerging drug resistance of C. parapsilosis.

Introduction
Candida species, the causative agents of the majority of human fungal infections, are becoming a major public health concern [1,2]. In intensive care units (ICUs) around the world, the majority of fungus-related systemic bloodstream infections are caused by species of Candida, leading to high death rates and significant healthcare expenses for both governments and hospitalized patients [3,4]. Although Candida albicans is the most common and invasive species, its dominance has declined over the last two decades as the number of invasive infections caused by non-albicans Candida species has increased [5]. Of these, the Candida parapsilosis (C. parapsilosis) complex consists of three cryptic species.

A precise protocol was agreed upon before the search began, outlining the databases to be searched, the eligibility criteria, and all other methodological details. The study was carried out in accordance with the updated guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) [12] (Table S1).

Search Strategy
To identify studies on the prevalence and pattern of antifungal resistance of C. parapsilosis bloodstream infections worldwide, a systematic literature search was conducted in the PubMed, Scopus, ScienceDirect and Google Scholar databases. Only articles written in English were included. There were no constraints on study period, study design, or place of publication (Table S2).

Data Management and Study Selection
Initially, all the records identified by the systematic literature search were exported to EndNote X8 (Clarivate Analytics, London, UK) for management.
Duplicate records were then removed, both automatically and by manual searching, before two reviewers (D.Y., M.H.A.) independently screened the remaining articles by title and abstract. Thereafter, the full texts of potential records were downloaded and assessed for eligibility against the inclusion and exclusion criteria by two authors (D.Y., K.H.). Any disagreement or uncertainty was resolved by discussion and consensus.

Data Extraction
The relevant data were extracted from eligible studies by two authors (D.Y. and M.H.A.). Precautions were taken to minimize errors and ensure consistency in data extraction. The following data were extracted to a predesigned Excel spreadsheet: author name, year of publication, study period, study design, country, target group, gender, method of species detection, method of antifungal susceptibility testing (AFST), sample size, total number of cases tested, and number of C. parapsilosis cases resistant to each of several antifungals. Overall, the data from studies recruited from various geographical locations across the world were analysed.

Data Analysis
The data entered into the Excel spreadsheet were analysed using R. The proportion of resistance to each antifungal was calculated as the number of resistant cases relative to the total number of isolates tested for that antifungal, using the metaprop command. Accordingly, the prevalence of resistance to the studied antifungals (with 95% confidence intervals (CI)) was estimated for each eligible study and subsequently for the world by pooling the antifungal resistance prevalence rates of all included studies using a random-effects model. Heterogeneity between studies was evaluated with the I2 statistic together with Cochran's Q-test. An I2 value > 75% was taken to indicate substantial heterogeneity [16], while a p value < 0.05 was considered to indicate a significant degree of heterogeneity. Publication bias was assessed graphically using a funnel plot and statistically by Egger's regression test.

Subgroup and Sensitivity Analysis
To explore potential sources of heterogeneity, subgroup analyses were carried out by study enrolment time, country where the study was conducted, and AFST method, using metaprop code in the meta and metafor packages of R (version 3.6.3) in RStudio (version 1.2.5033). Data analysis and the creation of the forest and funnel plots were performed in the same environment; a minimal code sketch of this workflow is given below.
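The following is a minimal R sketch of the pooling workflow described above (meta-analysis of proportions, I2/Q heterogeneity, funnel plot, and Egger's test), assuming the meta package named in the methods. The study labels and counts are invented placeholders, not data from the included studies.

library(meta)

# Illustrative input: resistant isolates and total isolates tested per study
afst <- data.frame(
  study     = c("Study A", "Study B", "Study C"),
  resistant = c(5, 0, 12),
  tested    = c(60, 35, 80)
)

# Random-effects meta-analysis of proportions (logit transformation)
m <- metaprop(event = resistant, n = tested, studlab = study,
              data = afst, sm = "PLOGIT", method.tau = "DL")

summary(m)                                      # pooled prevalence, 95% CI, I2, Cochran's Q
forest(m)                                       # forest plot of study-level and pooled estimates
funnel(m)                                       # funnel plot for visual assessment of publication bias
metabias(m, method.bias = "linreg", k.min = 3)  # Egger's regression test

In practice each antifungal (fluconazole, amphotericin B, voriconazole) would be pooled separately, and subgroup analyses can be obtained by passing a grouping variable (for example, enrolment period or continent) to the byvar/subgroup argument of metaprop.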
Study Selection
Figure 1 shows the results of the literature search and article selection process in a flow diagram. A total of 925 records were initially identified through electronic database searches. After excluding 493 duplicate records, the titles and/or abstracts of the remaining 432 studies were assessed for inclusion, from which 93 were eligible for full-text screening. Finally, 79 studies met the eligibility criteria and were included in this SRMA, of which 71 contributed to the fluconazole resistance prevalence, 63 to amphotericin B and 58 to voriconazole.

Characteristics of Included Studies
The detailed characteristics of the 79 included studies are summarized in Table 1. Seventy-nine studies published between 1995 and 2022 met the inclusion criteria for antifungal resistance. A total of 14,371 C. parapsilosis isolates were identified and subjected to AFST. Fifty (63.3%) of the studies were conducted in America and Asia (24 and 26, respectively), 19 (24.1%) in Europe, and 6 (7.6%) in Africa. With respect to study design, the majority were cross-sectional studies (68.4%, n = 54), followed by prospective or retrospective cohort studies (2.5%, n = 2), case-control studies (1.3%, n = 1), and population-based surveillance studies (6.3%, n = 5). Of the 79 articles, 71 provided data on fluconazole resistance, 63 on amphotericin B, 58 on voriconazole, 46 on caspofungin, 40 on itraconazole, 34 each on micafungin and anidulafungin, and 23 on posaconazole. Meta-analysis was performed for the three most important antifungal drugs.

Prevalence of Fluconazole-Resistant C. parapsilosis Isolates
The pooled prevalence of fluconazole-resistant C. parapsilosis, as well as the results of subgroup analysis, are shown in Table 2. The 71 studies included in this part of the SRMA show a varied picture of fluconazole resistance rates, ranging from 0% to 100%. In 22 (31.0%) studies, all the identified isolates were susceptible to fluconazole, with resistance rates of 0%, while in two other studies fluconazole resistance was found in 100% of the tested C. parapsilosis isolates. The pooled resistance rate of C. parapsilosis to fluconazole across the 71 observational studies was estimated to be 15.2% (95% CI: 9.2-21.2) (Figure 2). Significant heterogeneity was observed across all the included studies (I2 = 98%, p < 0.0001). In addition, subgroup analysis was carried out based on enrolment time, country, continent and AFST method to further investigate the potential sources of heterogeneity. The fluconazole resistance rate has risen dramatically in the last six years, from 11.6% before 2016 to 36.7% in the period from 2016 to 2022. According to the meta-analysis, Africa had the highest prevalence of fluconazole resistance at 27.7% (95% CI: 2.7-52.8), followed by America at 21.2% (95% CI: 7.6-34.7) and Europe at 13.3% (95% CI: 1.3-25.3), while Asia had the lowest frequency of fluconazole resistance at 6.0% (95% CI: 2.9-9.1). At the country level (Table S3), the highest prevalence of fluconazole-resistant C. parapsilosis isolates was reported in South Africa at 51.5%, followed by Mexico at 27.0% and Brazil at 25.3%. The lowest resistance prevalence was reported in Finland and Argentina at 0.0%, followed by Japan and Portugal (0.6%) and China (1.7%). Notably, remarkable differences in fluconazole resistance rates were observed across AFST methods. A slightly higher overall estimate was observed when broth microdilution (16.5%; 95% CI: 8.5-24.5) or E-test combined with broth microdilution (13.0%; 95% CI: 0.5-25.6) was used, whereas very few C. parapsilosis isolates were found to be fluconazole-resistant by the DP-Eiken test (0.6%; 95% CI: 0.0-2.9), and all isolates were fluconazole-susceptible when MALDI-TOF was used (0.0%; 95% CI: 0.0-11.6).

Prevalence of Amphotericin B-Resistant C. parapsilosis Isolates
The pooled prevalence of amphotericin B-resistant C. parapsilosis, as well as the results of subgroup analysis, are shown in Table 2. The 63 studies included in this part of the SRMA show a slightly varied picture of amphotericin B resistance rates, ranging from 0% to 46.9%. In 51 (81.0%) studies, all the identified isolates were susceptible to amphotericin B, with resistance rates of 0%, while one study showed the highest amphotericin B resistance rate of 46.9% of the tested C. parapsilosis isolates. The pooled resistance rate of C. parapsilosis to amphotericin B across the 63 observational studies was estimated to be 1.3% (95% CI: 0.0-2.9) (Figure 3). Significant heterogeneity was observed across all the included studies (I2 = 96%, p < 0.01). Accordingly, subgroup analysis was carried out based on enrolment time, country, continent and AFST method to further investigate the potential sources of heterogeneity. An amphotericin B resistance rate of 1.6% was reported before 2016, decreasing to 0.0% during 2016-2022. According to the meta-analysis, the four continents showed almost the same resistance rate, from 0.0% to 0.2% (95% CI: 0.0-0.7). At the country level (Table S3), the highest prevalence of amphotericin B-resistant C. parapsilosis isolates was reported in Malaysia at 2.9% (95% CI: 0.0-8.3), followed by Portugal at 1.2% (95% CI: 0.2-4.4). Notably, remarkable differences in amphotericin B resistance rates were observed across AFST methods, with slightly higher overall estimates when broth microdilution combined with E-test (5.3%; 95% CI: 0.0-15.5) or E-test alone (5.3%; 95% CI: 0.0-1.1) was used.

Prevalence of Voriconazole-Resistant C. parapsilosis Isolates
The pooled prevalence of voriconazole-resistant C. parapsilosis, as well as the results of subgroup analysis, are shown in Table 2. The 58 studies included in this section of the SRMA reveal a varied picture of voriconazole resistance rates, ranging from 0.0% to 62.5%. In thirty-one (53.4%) studies, all the identified isolates were susceptible to voriconazole, with resistance rates of 0%, while the highest resistance rate was 62.5% of the tested C. parapsilosis isolates. The pooled resistance rate of C. parapsilosis to voriconazole across the 58 observational studies was estimated to be 4.7% (95% CI: 2.2-7.3) (Figure 4). Significant heterogeneity was observed across all the included studies (I2 = 91%, p < 0.01). Accordingly, subgroup analysis was carried out based on enrolment time, country, continent and AFST method to further investigate the potential sources of heterogeneity. The voriconazole resistance rate has increased markedly in the last six years, from 3.2% before 2016 to 17.9% in 2016-2022. According to the meta-analysis, Africa had the highest prevalence of voriconazole resistance at 12.0% (95% CI: 2.4-21.6), while Asia had the lowest frequency at 1.2% (95% CI: 0.3-2.0). At the country level (Table S3), the highest prevalence of voriconazole-resistant C. parapsilosis isolates was reported in South Africa at 19.7% (95% CI: 13.5-25.8), followed by Mexico at 17.2% (95% CI: 5.8-35.8) and Brazil at 11.7% (95% CI: 0.0-25.5). The lowest resistance prevalence was reported in Argentina, Czechia, India, Iran and Japan at 0.0%. Clearly, remarkable variation in voriconazole resistance rates was observed across AFST methods. A slightly higher overall estimate was observed with E-test combined with broth microdilution (9.2%; 95% CI: 0.0-22.1), followed by broth microdilution (4.4%; 95% CI: 2.1-6.8).

Quality Assessment and Publication Bias
Supplementary Table S4 presents the results of the JBI critical appraisal checklist's assessment of the quality of the 79 included studies. In summary, 72 (91.1%) of the studies were found to have a low risk of bias, whilst seven (8.9%) had a moderate risk of bias. Visual assessment of symmetrical and asymmetrical funnel plots (Figure 5) suggested the absence and presence of publication bias, respectively. This was statistically confirmed by Egger's test for fluconazole, amphotericin B and voriconazole (p < 0.0001, p = 0.1828 and p < 0.0001, respectively).

Discussion
Invasive fungal infections caused by nosocomial pathogens such as non-albicans Candida, including C. parapsilosis, have emerged, alongside a gradual increase in bloodstream infections in healthcare settings, as a result of the widespread administration of broad-spectrum antibiotics, immunosuppressive drugs, and chemotherapy, increased organ transplantation, the application of medical support technology, the extension of human life, and the increasing prevalence of acquired immune deficiency syndrome (AIDS) [96][97][98]. Antifungal drugs are currently the most effective treatment for Candida infections [99,100]. Amphotericin B is a representative polyene antifungal drug and has been widely used in the treatment of severe fungal infections [101]. It has been reported that amphotericin B is effective in treating more than 70% of fungal infections; however, it has several clear side effects, mainly nephrotoxicity. The first-generation azoles such as fluconazole and itraconazole show relatively good efficacy [102]. However, the bioavailability of itraconazole varies greatly, and fluconazole resistance develops readily [103]. In contrast, the newer triazoles such as voriconazole and posaconazole show a broader antifungal spectrum, higher bioavailability, and significantly fewer adverse effects than the first-generation triazole drugs.
Echinocandins such as caspofungin, micafungin and anidulafungin inhibit glucan synthase and thereby the formation of the cell wall, ultimately resulting in cell death [104]. Caspofungin was the first echinocandin to be approved by the US Food and Drug Administration (FDA) and has proven to be comparatively safe and efficacious against Candida species [105]. Although many authors have broadly addressed the burden of C. parapsilosis candidemia and other invasive candidiasis, their prevalence, and their antifungal susceptibility profiles, no SRMA has summarized this issue to date. Here, we conducted an SRMA to address the prevalence of drug-resistant C. parapsilosis globally by synthesizing data published to date on C. parapsilosis antifungal susceptibility worldwide and to provide a point of reference for subsequent studies. The findings of this SRMA were generated by pooling eligible data on the prevalence of antifungal-resistant C. parapsilosis reported in 79 published studies. The increasing number of nosocomial C. parapsilosis complex infections has reinforced the need for antifungal susceptibility testing to optimize clinical treatment. According to CLSI and IDSA, the standardized first-line regimen for treating C. parapsilosis infections is the azoles (fluconazole and voriconazole), followed by amphotericin B and then caspofungin. In the present SRMA, data concerning the prevalence of fluconazole, amphotericin B and voriconazole resistance are available and sub-grouped by enrolment time, country/continent and AFST method. A total of 71 studies were included, from which the pooled estimate revealed that 15.2% (95% CI 9.2-21.2) of all C. parapsilosis cases were resistant to fluconazole: 11.6% of cases before 2016 and 36.7% of cases from 2016 to 2022. In the 71 included studies, C. parapsilosis clinical isolates were identified using conventional and/or molecular methods. Conventional methods, such as morphological characterization on CHROMagar and cornmeal agar and biochemical assimilation on API 20C, ID 32C, Vitek 2 and AUXACOLOR, were the most frequently employed, while ITS, D1/D2, PCR-RFLP-SADH, AFLP and MALDI-TOF-MS were among the molecular identification techniques. These studies were conducted in 20 different countries from four continents (Europe, America, Asia and Africa). Based on the available literature, Argentina (0.0%; 95% CI 0.0-2.3) and Finland (0.0%; 95% CI 0.0-13.2) have the lowest prevalence. On the other hand, South Africa (51.5%; 95% CI 20.2-82.7) has the highest prevalence. Variation could be seen both within and across continents. For instance, although South Africa has the highest prevalence, the prevalence of fluconazole-resistant C. parapsilosis in other countries on the same continent, e.g., Tunisia (3.2%; 95% CI 0.0-7.4) and Egypt (7.4%; 95% CI 2.4-16.3), is dramatically lower. It is unclear whether this difference in relative prevalence is the result of different sample sizes, different geographical regions, or both. Data on the AFST methods used for fluconazole-resistant C. parapsilosis were available; broth microdilution yielded the highest fluconazole resistance estimate (16.5%; 95% CI 8.5-24.5). In this study, we also investigated the prevalence of amphotericin B-resistant C. parapsilosis in a total of 63 studies, from which the pooled estimate showed that 1.3% (95% CI 0.0-2.9) of cases overall, and 1.6% of cases before 2016, were resistant to amphotericin B. The range of amphotericin B resistance prevalence among the 20 different countries was 0.0-2.9%.
Malaysia has the highest prevalence of amphotericin B resistance (2.9%; 95% CI 0.0-8.3). Data on the AFST methods used showed that studies using both broth microdilution and E-test had the highest prevalence of amphotericin B-resistant C. parapsilosis (5.3%; 95% CI 0.0-15.5). Before 2016, the prevalence of fluconazole resistance was the highest, followed by voriconazole resistance, while the prevalence of amphotericin B resistance was the lowest. A similar pattern of antifungal resistance prevalence was found in the period from 2016 to 2022. This finding shows a steady increase in the prevalence of fluconazole-resistant C. parapsilosis in the last seven years compared with studies conducted before 2016. Despite the high rate of fluconazole resistance in many parts of the world, fluconazole remains one of the most effective antifungal drugs. However, the high resistance rate in this study should not be neglected, because fluconazole-resistant strains might accumulate in developing-country settings. Overall, the prevalence of fluconazole-resistant C. parapsilosis was higher than the prevalence of voriconazole-resistant C. parapsilosis across all four continents (ranging from 6.0-27.7% and 1.2-12.0%, respectively). Consequently, it is recommended to change the first-line treatment of C. parapsilosis infections from fluconazole towards voriconazole, especially in Africa, which showed a sharply increased fluconazole resistance prevalence. Even though the prevalence of amphotericin B resistance is not significantly high worldwide, amphotericin B is not recommended as first-line treatment for C. parapsilosis infections because of its many side effects, its toxicity, and the fact that it cannot be administered orally. In contrast, the same pattern of antifungal resistance did not hold when individual countries were compared. Hence, it is worthwhile to monitor the prevalence of antifungal resistance nationally in different countries to determine the most suitable first-line treatment for each country, because the present viewpoint might change if more studies were conducted locally. Although many novel molecular AFST methods have emerged recently, broth microdilution and gradient diffusion (E-test) remain the gold-standard AFST assays according to CLSI and EUCAST reports, and are able to determine antifungal resistance with high sensitivity and specificity. The overall prevalence of fluconazole-resistant C. parapsilosis identified in the current study was consistent with the findings of an SRMA from India (resistance to fluconazole = 17.63%, amphotericin B = 2.15%, voriconazole = 6.61%) [106]. In general, high rates of resistance to fluconazole are an unfortunate reality in the majority of C. parapsilosis infections. Such high rates could reflect frequent, unjustified and extensive usage in general care, with an unknown impact on antifungal susceptibility. Finally, a key strength of this SRMA is its comprehensive estimation of global C. parapsilosis antifungal resistance; however, despite the alarming results at the continent level, the rates in most of the included studies were obtained from small sample sizes. Therefore, expanded surveillance, as well as additional studies with large and systematic sample collection covering various geographical regions across the world, is highly recommended. However, there are several limitations.
First, the included studies did not encompass all the countries of the world, and only a limited number of representative studies from the same country were analysed, so the estimated prevalence might not fully reflect the magnitude of drug-resistant C. parapsilosis for each country. Second, substantial heterogeneity was observed in the included studies, although this observation is common in meta-analyses estimating prevalence. Finally, the potential effect of the gender, age, socioeconomic status, and lifestyle of the included patients on the prevalence of antifungal-resistant C. parapsilosis could not be analyzed because of the unavailability of these data in many of the included studies. Conflicts of Interest: The authors declare no conflict of interest.
Comprehensive plasma lipidomic profiles reveal a lipid-based signature panel as a diagnostic and predictive biomarker for cerebral aneurysms

Yong-Dong Li (dr_liyongdong@sina.com), Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China; Yue-Qi Zhu, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China; Bing Zhao, Renji Hospital, Shanghai Jiao Tong University School of Medicine; Yu He, Shanghai Jiao Tong University Affiliated Sixth People's Hospital; Bin-Xian Gu, Shanghai Jiao Tong University Affiliated Sixth People's Hospital; Hao-Tao Lu, Shanghai Jiao Tong University Affiliated Sixth People's Hospital; Yi Gu, Shanghai Jiao Tong University Affiliated Sixth People's Hospital; Li-Ming Wei, Shanghai Jiao Tong University Affiliated Sixth People's Hospital; Yao-Hua Pan, Renji Hospital, Shanghai Jiao Tong University School of Medicine; Zheng-Nong Chen, Shanghai Jiao Tong University Affiliated Sixth People's Hospital; Yong-Ning Sun, Shanghai Jiao Tong University Affiliated Sixth People's Hospital; Wu Wang, Shanghai Jiao Tong University Affiliated Sixth People's Hospital

Lipidomic technologies have provided new insights into this complex area. Plasma lipid species and classes/subclasses have been identified as associated with type 2 diabetes mellitus (T2DM) 16 and cardiovascular diseases (CVDs) 17,18, suggesting that these lipid species might be useful biomarkers for these diseases. However, to the best of our knowledge, no studies have investigated the plasma lipid profile and biomarkers associated with cerebral aneurysms (CAs). Accordingly, we performed an untargeted lipidomics evaluation using plasma from healthy controls (HCs), patients with unruptured cerebral aneurysms (UCAs), and patients with ruptured cerebral aneurysms (RCAs), to identify a plasma lipid profile for patients with CA using an LC-MS platform in a large case-control study. First, we analyzed the lipidomic profiles comprehensively in the three groups and report an in-depth analysis of the plasma lipid alterations that could be used to differentiate among them. Subsequently, we built a four-lipid biomarker signature that could not only diagnose and predict UCAs/RCAs, but also predict the patient subtypes with severe RCA or high-risk UCA.

Baseline characteristics
The study design is summarized in Fig. 1, A-B. The study first recruited 360 patients (cohort 1; 144 men and 216 women; median age: 55.5 years; age range: 17-87 years), including 120 HCs, 120 patients with UCA, and 120 age- and sex-matched patients with RCA. The baseline characteristics of the human subjects in the three groups, based on their diagnostic status, are shown in Extended Data 1 Table S1. There were no significant differences in sex, age, hypertension, diabetes mellitus, hyperlipemia, coronary heart disease, smoking, alcohol consumption, or body mass index (BMI) among the three groups of participants. The duration of plasma storage at measurement did not differ among the three groups. There were 147 aneurysms in the 120 patients with UCA and 145 aneurysms in the 120 patients with RCA. Next, we enrolled 180 participants (cohort 2; 72 men and 108 women) for trend testing of the primary results, including 60 HCs, 60 patients with UCA, and 60 age- and sex-matched patients with RCA. The baseline characteristics of the human subjects in the three groups are shown in Extended Data 2 Table S1.
Overview of the distribution of lipid species and subclass intensities in the three groups
To enable comprehensive plasma lipidomic profiling of CAs, lipidomic analysis was performed with an untargeted LC-MS method using a CSH C18 column, with the same LC-MS instrument and consistent quality control, in a total of 360 participants from one center and 68 QC samples (Extended Data 1 Fig. S1). After QC and support vector regression (SVR) correction, LC-MS detected 1312 lipids (972 in ESI+ and 340 in ESI- mode) covering 8 lipid categories and 29 subclasses in all three groups, of which triglyceride (TG) was the most abundant lipid in all three groups, followed by phosphatidylcholine (PC), sphingomyelin (SM), phosphatidylethanolamine (PE), and ceramides (Cer) (Fig. 2A). In the UCA and RCA samples, the numbers of identified lipids were the same as those identified in the HC samples after case-by-case review. We then performed PCA to analyze the lipid data set and identify the characteristics of each group. QC samples, shown as calamus ellipses, were clustered at the center, which indicated good reproducibility of the instruments and stability during the lipidomics study (Fig. 2B). The lipid intensity of every sample, together with sex and age, is shown in Fig. 2C. Comparisons of total lipid intensity among the three groups, with and without sex and/or age data, are shown in Fig. 2D-I. Total lipid intensity decreased from HC to UCA to RCA, and a significant difference among the three groups was observed (Fig. 2D, ANOVA test, p < 0.001). The same decreasing trend was also observed in the F (female) and M (male) subgroups (Fig. 2G, ANOVA test, p < 0.001) and in the age subgroups (Fig. 2I, ANOVA test, p < 0.05) among the three groups. There were no significant differences between M and F within each of the three groups (Fig. 2E), among the four age subgroups within each of the three groups (Fig. 2F), or among the four age subgroups in the F or M subgroups (Fig. 2H). In addition, the same decreasing trend was observed in 26 of the 29 subclasses among the three groups (Fig. 2J, ANOVA test, p < 0.05). The same trend was confirmed in cohort 2 (Extended Data 2 and Extended Data 2 Fig. S2).

Plasma lipidomic profiling of HCs, UCAs, and RCAs
To investigate the lipidomic changes associated with CAs, three paired comparisons were performed. Lipids that satisfied the criteria of a variable importance in projection (VIP) > 1.0 and a p value < 0.05 were considered potential differential lipids. An orthogonal partial least squares discriminant analysis (OPLS-DA) model was employed to further investigate lipid changes and differential lipids. OPLS-DA score plots revealed that all three groups could be discriminated (Fig. 3A-C). Parameters for the explained variation (R2), an indicator of model robustness, and the cross-validated predictive ability (Q2) were obtained, as shown in Fig. 3A-C. The heatmap depicts the relative abundance of all lipids in all three groups (Fig. 3D). Among the three paired comparisons, the lipid profiles of the RCA group were the most clearly distinguished from those of the HC group (Fig. 3D). As summarized in Extended Data 1 Table S2-3, 75 and 130 differential lipids were identified from the comparisons of UCA vs. HC and RCA vs. HC participants, respectively.
Compared with the HC group, 5.5% (75) of the identified lipids were significantly different in the UCA group (p value < 0.05; VIP > 1.0), while 9.9% (130) of the identified lipids were significantly altered in the RCA group (p value < 0.05; VIP > 1.0), and the number of altered lipids was greater in the RCA group than in the UCA group (χ2 test, p < 0.05). From this trend, we found that the number of altered lipids correlated positively with the severity of the clinical CA status (UCA and RCA). We hypothesized that increased lipid alteration in patients with a CA resulted in an increased likelihood of CA rupture. Interestingly, compared with HCs, except for 10 lipids in the UCA group, the altered lipids were underrepresented, with lower levels in the UCA and RCA groups (Extended Data 1 Table S2-3). This indicated that most altered lipids were decreased in the plasma of the CA groups, and that the levels of the decreased lipids were much lower in the RCA group than in the UCA group. Combining our results with published research (13,14), we hypothesized that the lipids decreased in plasma probably accumulate in the CA and normal artery wall, resulting in the formation, development, and rupture of CAs. This trend was also confirmed in cohort 2 (Extended Data 2 and Extended Data 2 Fig. S3). In addition, as shown in Extended Data 1 Tables S2-5, there were 35 differential lipids spanning 4 lipid subclasses (1 LPC, 4 SMs, 7 PCs, and 23 TGs) that were significantly altered among the three groups (Fig. 3E), which could be used as potential biomarkers to diagnose CAs or discriminate them from HCs. The relative levels of these 35 differential lipids are presented as a heatmap, and a differential lipid profile was observed when comparing UCA to HC, RCA to HC, and RCA to UCA (Fig. 3F). These lipids exhibited the same decreasing trend from HC to UCA to RCA, and a significant difference was observed among the three groups and between each pair of groups (Fig. 3G, p < 0.001). In patients with RCAs, all the differential lipids were decreased compared with HCs (Extended Data 1 Table S3 and Fig. 4B). Seventy-five of the 130 altered lipids were significantly decreased (FC < 0.75 or log2FC < -0.42), and 24 lipids decreased to less than 0.5-fold (log2FC < -1). Of the 75 significantly decreased lipids, 86.7% (65/75) were TGs, and all 24 lipids that decreased to less than 0.5-fold were TGs (Extended Data 1 Table S3 and Fig. 4B). Therefore, although many lipids were altered from UCA to RCA, we identified only one distinct lipid subclass (TGs) with respect to the clinical classification. In addition, as shown in Extended Data 1 Table S4 and Fig. 4C, there were 119 identifiable lipids exhibiting statistically significant differential abundance between RCAs and UCAs, and the majority (115 lipids) were decreased in RCAs compared with UCAs. Of these, 46 of the 119 altered lipids were significantly decreased (FC < 0.75 or log2FC < -0.42) and 16 lipids decreased to less than 0.5-fold (log2FC < -1). Of the 46 significantly decreased lipids, 95.7% (44/46) were TGs, and all 16 of the strongly decreased lipids were TGs (Extended Data 1 Table S4 and Fig. 4C). Of note, compared with HCs, TG became the predominant and distinct class of altered lipids, which indicated that TG metabolism was severely disrupted in patients with CAs. In addition, TGs were also the distinct lipid class altered in patients with UCA and RCA compared with HCs in cohort 2 (Extended Data 2 and Extended Data 2 Fig. 4).
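As a concrete illustration of the screening criteria used above (VIP > 1.0, p < 0.05, and the FC < 0.75 and FC < 0.5 thresholds), the following R sketch filters a small simulated intensity matrix. The matrix, group labels and VIP scores are random stand-ins (the VIP values would in practice come from an OPLS-DA model, for example via the ropls package), so none of the numbers are real data.

set.seed(1)
# hypothetical intensity matrix: 6 lipids x 8 samples (4 HC, 4 RCA)
intensity <- matrix(rlnorm(48, meanlog = 5), nrow = 6,
                    dimnames = list(paste0("lipid", 1:6), paste0("s", 1:8)))
group <- rep(c("HC", "RCA"), each = 4)
vip   <- runif(6, 0.5, 2)   # stand-in for OPLS-DA VIP scores

# group-wise fold change (RCA relative to HC) and Welch t-test p-value per lipid
log2fc <- log2(rowMeans(intensity[, group == "RCA"]) /
               rowMeans(intensity[, group == "HC"]))
pval <- apply(intensity, 1, function(x)
  t.test(x[group == "RCA"], x[group == "HC"])$p.value)

differential <- data.frame(log2fc, pval, vip)
hits    <- subset(differential, vip > 1.0 & pval < 0.05)   # VIP > 1.0 and p < 0.05
reduced <- subset(hits, log2fc < -0.42)                    # significantly decreased (FC < 0.75)
strong  <- subset(hits, log2fc < -1)                       # decreased to less than 0.5-fold

With the real data set, the rows of `hits` would correspond to the 75 (UCA vs. HC) or 130 (RCA vs. HC) differential lipids reported above.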
Lipid-based diagnostic prediction model for CA vs. HC
To investigate a lipid-based diagnostic prediction model for CAs (UCAs + RCAs), we first discriminated CAs from HCs, and then differentiated UCAs from RCAs. We assigned cohort 1 (n = 360; HCs = 120, CAs = 240) to the training cohort, and cohort 2 (n = 180; HCs = 60, CAs = 120) to the validation cohort. Therefore, the ratio of samples in the training and validation sets was 2:1. The baseline characteristics of subjects in the two cohorts are shown in Extended Data 1 Table S6. To discriminate CAs from HCs, the two cohorts were subjected to an independent and comprehensive analysis to discover biomarkers. First, the lipid profiles of the HCs and the patients with CAs were compared in the two cohorts, which identified 61 differentially abundant candidates (variable importance in projection (VIP) > 1.0 and p value < 0.05) in both cohorts. Second, we used random forest algorithms and the least absolute shrinkage and selection operator (LASSO) to decrease the number of lipid biomarkers, which produced 12 biomarkers overlapping between the two algorithms. The training cohort was then used to train the four-lipid prediction model. In the training cohort, the nomogram's calibration plot demonstrated good agreement between observation and prediction (Fig. 5B). The Hosmer-Lemeshow (HL) test statistic was not significant (p = 0.244), indicating a good model fit. Receiver operating characteristic (ROC) analysis for the nomogram of the four biomarkers yielded an area under the ROC curve (AUC) of 0.814 (95% confidence interval (CI): 0.767-0.861, Fig. 5C, D) for the training cohort. Using the best cutoff value, the dp-score showed a specificity and sensitivity of 60.0% and 87.5%, respectively (Fig. 5D). Next, the four-lipid signature and the same statistical model were applied to the validation cohort (60 HC and 120 CA cases) to assess the accuracy of the signature. In the validation cohort, the lipid biomarkers showed excellent diagnostic accuracy in identifying patients with CAs. As in the training cohort, the nomogram showed favorable calibration in the validation cohort (Fig. 5E). The HL test result was not significant (p = 0.387), and in the validation cohort the AUC of the nomogram was 0.803 (95% CI: 0.735-0.871, Fig. 5F, G). The specificity and sensitivity of the dp-score were 85% and 69.2%, respectively (Fig. 5G). Next, decision curve analysis (DCA) was used to compare the performance of the lipid model in the training and validation cohorts. For diagnosing CAs in the validation and training cohorts, the developed model showed the highest net benefit across most of the potential threshold range (Fig. 5H). Based on the lipid nomogram, the net benefits were similar, with several overlaps, within this range. This suggested the possibility of using the nomogram in clinical practice for the diagnosis and prediction of CAs.

A lipid-based combination diagnostic prediction model for CA vs. HC
Sex, age, and hypertension are known to be associated with CAs; therefore, we assessed whether a model combining our lipid signature with these three preoperative clinical features could improve the diagnostic accuracy for detecting CA in the clinic. The results showed that the diagnostic accuracy for CA of the combined signature was slightly better in both the validation and training cohorts (AUC values of 0.802 and 0.836, respectively, Fig. 5I, J, and Extended Data 1 Fig. S2).
Moreover, compared with the preoperative clinical features alone (gender, age and hypertension), the combination signature demonstrated significantly improved diagnostic accuracy. Finally, using the cutoffs determined by the Youden index from this four-lipid signature model, all patients were categorized into high- and low-risk groups. The results of the univariate and multivariate logistic regression analyses are shown in Extended Data 1 Table S8. In both clinical cohorts, multivariate analysis showed that the four-lipid signature was an independent predictor for discriminating CAs from HCs (training cohort: odds ratio [OR], 10.13; 95% CI, 5.93-17.74; p < 0.001; validation cohort: OR, 12.66; 95% CI, 5.84-30.28; p < 0.001, Extended Data 1 Table S8).

Lipid-based diagnostic prediction model for UCA vs. RCA
To differentiate UCAs from RCAs, we assigned cohort 1 (n = 240; UCAs = 120, RCAs = 120) to the training cohort, and cohort 2 (n = 120; UCAs = 60, RCAs = 60) to the validation cohort. The baseline characteristics of cohort 1 and cohort 2 subjects are shown in Extended Data 1 Table S9. Consistent with the model for CAs vs. HCs, we again used the four biomarkers as the diagnostic prediction model for UCAs vs. RCAs. We constructed a nomogram and a dp-score obtained from the coefficients and the constant derived from multinomial logistic regression (Extended Data 1 Table S10). The risk score model was obtained as follows: dp-score = -0.0412 + 0.0320 × [PE(20:1p/18:2)+H] - 0.7768 × [CerG1(d40:4)+NH4] - 0.3816 × [TG(18:0p/16:0/16:1)+NH4] - 1.2405 × [TG(54:2e)+NH4]. The DCA for the lipid nomogram of UCAs vs. RCAs is presented in Fig. 6H. The decision curve indicated that, at a threshold probability of 20-85% for a patient or doctor, using the developed lipid nomogram to diagnose and distinguish UCAs from RCAs would add more benefit than either the treat-none scheme or the treat-all-patients scheme. The net benefit of the test cohort was somewhat lower than that of the training cohort based on the lipid nomogram within this range. This suggested the possibility of using the nomogram in clinical practice to diagnose and distinguish UCAs from RCAs.

A lipid-based combination diagnostic prediction model for UCAs vs. RCAs
To further increase the accuracy of diagnosing UCAs or RCAs in the clinic, we also assessed a combination of the three preoperative clinical features and the developed lipid signature. In the validation and training cohorts, the combination signature demonstrated slightly better diagnostic accuracy for UCAs vs. RCAs (AUCs of 0.719 and 0.78, respectively, Fig. 6I, J and Extended Data 1 Fig. S3). Moreover, compared with that of the preoperative clinical features sex, age, and hypertension, the combination signature demonstrated significantly improved diagnostic accuracy. Finally, using the Youden index-derived cutoffs from the four-lipid signature model, all patients were categorized into high- and low-risk groups. The results of the univariate and multivariate logistic regression analyses are shown in Extended Data 1 Table S11. In both cohorts, multivariate analysis identified the four-lipid signature as an independent predictor for discriminating UCA from RCA (training cohort: OR, 6.44; 95% CI, 3.68-11.56; p < 0.001; validation cohort: OR, 4.65; 95% CI, 2.10-10.84; p < 0.001, Extended Data 1 Table S11).
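To make the dp-score concrete, the sketch below applies the published coefficients to four simulated lipid intensities and evaluates discrimination with an ROC curve and a Youden-index cutoff. The intensities and outcome labels are simulated stand-ins, and the use of the pROC package is an assumed implementation choice rather than the authors' documented pipeline.

library(pROC)

set.seed(2)
n <- 100
lipids <- data.frame(
  PE_20_1p_18_2      = rnorm(n),   # PE(20:1p/18:2)+H, simulated intensity
  CerG1_d40_4        = rnorm(n),   # CerG1(d40:4)+NH4
  TG_18_0p_16_0_16_1 = rnorm(n),   # TG(18:0p/16:0/16:1)+NH4
  TG_54_2e           = rnorm(n)    # TG(54:2e)+NH4
)
ruptured <- rbinom(n, 1, 0.5)      # 1 = RCA, 0 = UCA (simulated labels)

# dp-score with the coefficients reported above
dp_score <- -0.0412 +
  0.0320 * lipids$PE_20_1p_18_2 -
  0.7768 * lipids$CerG1_d40_4 -
  0.3816 * lipids$TG_18_0p_16_0_16_1 -
  1.2405 * lipids$TG_54_2e

roc_obj <- roc(ruptured, dp_score)                # ROC curve for the dp-score
auc(roc_obj)                                      # area under the curve
coords(roc_obj, "best", best.method = "youden")   # Youden-index cutoff used for risk grouping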
Lipid-based subtyping of RCA: a lower lipid intensity in RCA was associated with severe RCA
To explore lipid-defined specific subtypes within CA, stratification analysis was carried out using nonnegative matrix factorization (NMF) consensus clustering 19,20. NMF consensus clustering was first performed for RCA in cohort 1, and two major lipid subtypes (R-I and R-II) were identified among the RCA samples (Fig. 7A, Extended Data 1 Fig. S4), with 65 cases belonging to R-I and 55 belonging to R-II. The total lipid intensity of R-II was significantly lower than that of R-I (Fig. 7B, p < 0.001). The relative abundance of the two subtypes according to age, sex, and other characteristics is shown in Fig. 7C. To explore the clinical characteristics of the R-I and R-II subgroups, the baseline characteristics of the subjects in the two subgroups were compared. There were no significant differences in sex, age, hypertension, diabetes mellitus, hyperlipemia, smoking, alcohol consumption, or BMI between R-I and R-II (Extended Data 1 Table S12). Subsequently, we compared the aneurysm characteristics associated with RCA, such as aneurysm size, location, aneurysm neck, single or multiple aneurysms, regular or irregular shape, bifurcation or sidewall aneurysm, modified Fisher grade (MFG) 21, Glasgow Coma Scale (GCS), coma at onset, ventricular drainage (VD), and hospital days. There were significant differences in MFG, GCS, and coma at onset between R-I and R-II (Fig. 7D); no other significant differences were observed (Extended Data 1 Table S12). Fig. 7E-F show typical cases of the R-I subtype, and Fig. 7G-H show typical cases of the R-II subtype. These results indicated that the R-II subgroup was associated with severe RCA.

Diagnostic prediction potential of the four-lipid signature for patients with severe RCA
As mentioned above, patients in R-II are often associated with severe disease or poor outcomes in the clinic; therefore, we next evaluated the diagnostic predictive potential of the four-lipid biomarker signature for patients with severe RCA. First, the Youden index-derived cutoffs from the four-lipid signature were used to separate the patients with RCA into high- and low-risk groups, which were then analyzed as independent predictors using univariate and multivariate logistic regression. Multivariate logistic regression analysis identified the four-lipid signature as an independent predictor for detecting severe RCA in the two clinical cohorts (training cohort: OR, 6.46; 95% CI, 2.71-16.55; p < 0.001; validation cohort: OR, 4.33; 95% CI, 1.30-16.06; p = 0.01, Extended Data 1 Table S13). Thus, in addition to diagnosing RCA, the four-lipid signature could also predict which patients had severe RCA.

Lipid-based subtyping of UCA: UCA subtypes predict slow or rapid aneurysm growth
Next, NMF consensus clustering was performed for UCA in cohort 1, and two major lipid subtypes (U-I and U-II) were identified among the UCA samples (Fig. 8A, Extended Data 1 Fig. S5), with 54 cases belonging to subtype U-I and 66 belonging to U-II. The total lipid intensity of U-II was significantly lower than that of U-I (Fig. 8B, p < 0.001). The relative abundance of the two subtypes by age, sex, and other characteristics is shown in Fig. 8C. To explore the clinical characteristics of U-I and U-II, the baseline characteristics of the human subjects and the aneurysm characteristics in the two subgroups were compared.
However, there were no significant differences in sex, age, hypertension, diabetes mellitus, hyperlipemia, smoking, alcohol consumption, BMI, aneurysm size, location, aneurysm neck, single or multiple aneurysms, regular or irregular shape, bifurcation or sidewall location, or hospital days between the U-I and U-II subtypes (Extended Data 1 Table S14). We next examined the patients with MRA follow-up. Although most patients with UCAs were treated when the aneurysm was detected, we still enrolled 23 patients with UCA in this study, among whom 20 were followed up using MRA for more than seven years. According to NMF consensus clustering, 10 of these patients were classified in the U-I subgroup and 10 in the U-II subgroup. In addition to the lipid intensity of U-II being significantly lower than that of U-I, we noticed that seven of the ten UCAs (70%) in the U-II subtype enlarged, whereas only two of the ten UCAs (20%) in the U-I subtype enlarged. Therefore, the aneurysm growth rate was higher in the U-II than in the U-I subtype (χ2 test, p < 0.05), which indicated that the lower lipid intensity in the U-II subtype was associated with rapid CA progression compared with the U-I subtype. Fig. 8D-E show typical cases of U-I, and Fig. 8F-G show typical cases of U-II. These cases illustrate that rapid aneurysm growth in the U-II subtype group increased the chance of CA rupture, while the slow growth observed in the U-I subtype group might be associated with slower aneurysm progression and resistance to CA rupture.

Diagnostic prediction potential of the four-lipid signature for high-risk patients
Patients in the U-II subtype group are associated with rapid aneurysm growth and a higher chance of CA rupture; therefore, we next evaluated the diagnostic predictive potential of our lipid biomarkers for patients with rapidly growing UCA. First, the Youden index-derived cutoffs from the four-lipid signature model were used to separate the patients with UCA into high- and low-risk groups, which were then assessed as independent predictors using univariate and multivariate logistic regression analyses. In both cohorts, multivariate logistic regression analysis identified the four-lipid signature as an independent predictor for detecting patients with rapidly growing UCA (training cohort: OR, 6.04; 95% CI, 2.73-14.05; p < 0.001; validation cohort: OR, 145.3; 95% CI, 21.3-3203.3; p < 0.001, Extended Data 1 Table S15). In addition, we extracted these 20 UCA cases from cohort 1 and analyzed them separately. Surprisingly, using the Youden index-derived cutoffs from the dp-score, we observed high consistency between NMF subtype and the risk groups. Among the 20 patients, multivariate analysis identified the four-lipid signature as an independent predictor for detecting patients with rapidly growing UCA (OR, 12.91; 95% CI, 1.61-180.5; p < 0.05, Extended Data 1 Table S16). These results showed that the lipid signature had significant predictive potential for detecting patients with rapidly growing UCA (high-risk UCA patients).

Discussion
In the present study, we performed a comprehensive lipidomic analysis of human plasma from HCs and from patients with UCA and RCA to investigate altered lipidomic features and identify lipid signatures associated with CAs. We found the following: 1) The total lipid intensity and most lipid classes decreased significantly from the HC to the UCA to the RCA group (p < 0.05). 2) The number of altered lipids increased and correlated with the severity of the CAs, with most altered lipids present at low abundance in all three comparisons (UCA vs. HC, RCA vs.
HC, and RCA vs. UCA). 3) TGs were the distinct lipid profile in plasma from patients with CA. 4) The NMF-defined lipid-specific subtypes could not only discriminate between minor (R-I) and severe (R-II) states in patients with RCA, but also predict slow (U-I) or rapid (U-II) aneurysm growth in patients with UCA. 5) A model incorporating the four selected lipid signatures showed good calibration and diagnostic prediction for CA vs. HC and UCA vs. RCA in both the training and validation clinical cohorts. 6) The four-lipid signature was demonstrated to be an independent predictor for discriminating CA from HC, UCA from RCA, and the RCA or UCA subtypes in both the training and validation clinical cohorts. Decreased plasma lipid levels appeared to correlate negatively with lipid accumulation in the CA wall, as described by Frösen et al. 13,14, who revealed that lipid accumulation was associated with SMC-derived foam cell formation, inflammation, mural SMC loss, and degenerative remodeling of the CA wall, eventually leading to a ruptured CA wall. We hypothesized that lipid accumulation in the CA wall and the normal artery led to decreased plasma lipid levels. Although Frösen et al. found that plasma total cholesterol and triglyceride levels were often normal, they did not further examine plasma lipid species and classes/subclasses. Therefore, our results support the theory that lipids and their oxidation products accumulate in the CA wall and lead to the formation, development, and rupture of CA. In addition, another important result of this study was that TGs represent pivotal lipids associated with CAs. Under normal conditions, circulating very-low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL) and LDL, which contain TGs, flux into and out of the endothelium via transcytosis [23][24][25][26]. However, many CA walls have experienced total erosion of their endothelium or do not possess a functionally intact endothelium 7, which increases lipoprotein influx into the vessel wall and causes lipoprotein accumulation in the CA wall 27. However, further study is required to determine whether TGs are the most notably altered lipids in CA walls, and how CA formation, development, and rupture are induced by TGs. In this study, the dp-score, as a diagnostic prediction model, was developed using a four-lipid marker signature. The model could accurately discriminate patients with CA from HCs. The dp-score could effectively differentiate patients with CA with different prognoses and was identified by multivariate analysis as an independent predictive risk factor. The dp-score had better discriminatory potential than the other predictive risk factors (age, gender and hypertension). Moreover, a nomogram was developed comprising the dp-score, age, gender, and hypertension, which showed slightly superior diagnostic accuracy for CAs in the validation and training cohorts. Thus, the developed nomogram demonstrated good predictive performance and could be used to predict the prognosis of CA. Ruptured cerebral aneurysms, which are the most common etiology of nontraumatic subarachnoid hemorrhage (SAH), can cause a catastrophic event with a mortality rate of 25 to 50%, while permanent disability occurs in nearly 50% of survivors; therefore, only approximately one-third of patients who suffer from SAH have a positive outcome [2][3][4][5]. In general, the hemorrhage stage is the key factor in determining illness severity in patients with RCA.
Patients with considerable SAH often have poor outcomes, disability, or death, whereas patients with minor SAH often have positive outcomes. Using NMF consensus-clustering analysis, the patients with RCAs were classified into two major lipid subtypes (R-I and R-II). The patients in R-I exhibited minor hemorrhage and were associated with improved outcomes, while the patients in R-II exhibited considerable hemorrhage and were associated with poor outcomes. The predominant difference between the two subtypes was the difference in lipid intensity: the lipid intensity in R-II was lower than that in R-I (p < 0.05). Meanwhile, the four-lipid signature acted as an independent predictor for distinguishing R-I from R-II and for detecting patients with severe RCA in both the training and validation clinical cohorts. Therefore, plasma lipidomics could be used to distinguish between the two situations or to indicate the severity of disease in patients with RCA. Furthermore, many catastrophic outcomes, such as disability and death, might be averted if plasma lipidomics or plasma lipid biomarkers could be routinely applied in physical examinations in the future. With the development of modern CT and MR technology, more and more UCAs can be detected. For patients with UCAs in the clinical setting, we need to definitively resolve whether the patient should undergo treatment or observation; simply put, UCAs could be classified according to their lower or higher risk status. In general, a UCA at lower or higher risk can be discriminated from its aneurysm characteristics, such as size, shape (regular or irregular), location (bifurcation or sidewall), or aneurysm neck on MRA, CTA, or DSA. In this study, although the lipid intensity in U-II was lower than that in U-I (p < 0.05), there were no significant differences in these aneurysm characteristics between the two NMF consensus-clustering subgroups, which was also consistent with our results in the RCA group. Aneurysm rupture did not correlate with size, shape, location, or aneurysm neck in the clinic. Fortunately, we found that U-II subgroup aneurysms were associated with more rapid CA progression than those in the U-I subgroup, and the four-lipid signature was demonstrated to be an independent predictor for distinguishing U-I from U-II and for detecting UCA patients with rapid growth (high-risk UCA patients) in both cohorts. From this point of view, we might explain a phenomenon observed in the clinic: why certain smaller aneurysms rupture easily, while some larger aneurysms are relatively resistant to rupture. The essential disparity might lie in the difference in lipid accumulation in the CA wall or in the rate of decrease in plasma lipids. We therefore hypothesized that the faster the decrease in plasma lipids, the faster the aneurysm growth and the easier the rupture. Currently, apart from MRA and/or CTA, there is no other valid approach for the early diagnosis of UCAs in the clinic. Moreover, these techniques are often time-consuming, access is limited at low-volume hospitals, and a large number of cases remain undetected. Therefore, there is an urgent need for accurate noninvasive biomarkers for the early and differential diagnosis of UCAs. In this study, the four-lipid signature not only showed good calibration and diagnostic prediction for CA vs HC and UCA vs RCA, but also served as an independent predictor for discriminating CA from HC, UCA from RCA, and the RCA or UCA subtypes.
Several limitations were associated with this study. First, although we observed specific lipid decreases in plasma, we did not use CA tissue to test whether these decreased lipid species accumulate in the CA wall, because CA tissues are difficult to obtain. Although Frösen et al. [13][14] demonstrated that lipids accumulate in all cerebral vasculature and CA walls, they did not indicate which lipids accumulate in the CA wall; this detail deserves further investigation. Second, an insufficient number of participants in the UCA group underwent long-term MRA follow-up, and more cases are needed to support our conclusions. Lastly, this study focused on the results observed by lipidomics analysis but did not thoroughly investigate the mechanisms or possible signalling pathways. Furthermore, our study did not address aspects related to systemic immunity, which deserve further study. In summary, comprehensive lipidomic analysis identified decreased lipids as a prominent feature of CAs, and a four-lipid biomarker signature could not only better diagnose and predict UCAs/RCAs versus HCs, but also predict the subtypes of patients with severe RCA or high-risk UCA. On the one hand, these results highlight a possible key role of plasma lipidomics in supporting the theory of lipid accumulation in the CA wall; on the other hand, they highlight the favorable predictive capability of the four-lipid biomarker signature as a diagnostic tool to assess the prognosis of CA. Although lipidomic data from the CA wall are lacking and long-term follow-up with more cases is needed, our data provide important biological insights and clear clinical implications for the future.

Materials and Methods

Study design and patient recruitment

A case-control study with a sex- and age-matched design was used. Samples (n = 540) were selected from the 1,388 participants with available blood samples in the following order: we first divided the patients into three groups according to their diagnostic status (HC, UCA, and RCA); then four subgroups according to age [≤45 (n = 135), 46 to 55 (n = 135), 56 to 65 (n = 135), and >65 years (n = 135)] were classified; finally, men (n = 216) and women (n = 324), at a ratio of 1:1.5, were allocated to each subgroup. Blood samples were taken from an arm vein or from the femoral artery with no intravenous transfusion. Peripheral blood samples (10 mL) were collected from each patient into EDTA vacutainers (BD, Franklin Lakes, NJ, USA). Samples were centrifuged at 3,000 × g for 10 min within three hours of collection to remove cells, and the plasma was stored at −80°C until use. Patients with CA diagnosed using computed tomography angiography (CTA), magnetic resonance angiography (MRA), and/or digital subtraction angiography (DSA) were included, and patients without CA (confirmed by MRA) who had normal liver function, renal function, and electrolyte levels served as the comparison group. Patients were referred for these procedures for numerous reasons, including acute processes, such as subarachnoid hemorrhage (SAH) and intracranial hemorrhage, as well as nonacute indications to rule out cerebrovascular diseases, such as headache, dizziness, or no symptoms. Patients with abnormal liver function caused by a malignant tumor, hepatitis, hepatic cirrhosis, liver failure, or other hepatic diseases were excluded from the study. The collected baseline data included the clinical information, biochemical characteristics, and demographics of all participants.
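As a concrete illustration of the age- and sex-stratified selection described above, the following minimal R sketch draws a matched subsample from a participant table; the data frame `participants`, its column names, and the per-cell quotas are hypothetical assumptions for illustration, not the authors' code.

```r
# Hypothetical illustration of age- and sex-stratified case-control sampling.
# Assumes a data frame `participants` with columns: id, diagnosis (HC/UCA/RCA), age, sex.
set.seed(1)
participants <- data.frame(
  id        = 1:1388,
  diagnosis = sample(c("HC", "UCA", "RCA"), 1388, replace = TRUE),
  age       = sample(20:85, 1388, replace = TRUE),
  sex       = sample(c("M", "F"), 1388, replace = TRUE, prob = c(0.45, 0.55))
)

# Age strata used in the study design
participants$age_band <- cut(participants$age,
                             breaks = c(-Inf, 45, 55, 65, Inf),
                             labels = c("<=45", "46-55", "56-65", ">65"))

# Draw up to `n_per_cell` participants per diagnosis x age band x sex cell,
# keeping the male:female ratio at roughly 1:1.5 within each stratum (simplified quotas).
n_per_cell <- c(M = 6, F = 9)

selected <- do.call(rbind, lapply(
  split(participants, list(participants$diagnosis, participants$age_band, participants$sex)),
  function(cell) {
    if (nrow(cell) == 0) return(cell)
    k <- min(nrow(cell), n_per_cell[[unique(cell$sex)]])
    cell[sample(nrow(cell), k), ]
  }))

table(selected$diagnosis, selected$age_band, selected$sex)
```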
Image acquisition and image review

MRA, CTA, and/or DSA examinations have been described previously and are not presented here (6). Aneurysm type was classified as saccular or fusiform. Aneurysm size was recorded as the maximum two-dimensional angiographic or MRA dimension: (1) <3 mm, (2) 3-5 mm, (3) >5-10 mm, or (4) >10 mm. The number of aneurysms was classified into two groups: single and multiple. Aneurysm locations were grouped into four categories: the anterior communicating artery (ACA), the middle cerebral artery (MCA), the internal carotid artery (ICA), and the vertebral and basilar artery system (VBAS), or into bifurcation and sidewall aneurysms. Aneurysm shape was classified as either regular or irregular (with a daughter sac or lobulated). The aneurysm neck was classified as either narrow or wide (wide: neck ≥4 mm or fundus:neck ratio ≤2). Aneurysm growth was defined as an increase of >1 mm in maximum diameter during follow-up compared with the initial examination. Three observers, who were highly experienced in neurointerventional radiology and had previously practiced the application of common standardized interpretation techniques, were blinded to all clinical, CTA, MRA, and DSA results. They analyzed all datasets independently on an offline workstation from multiple on-screen viewing angles. In the event of interobserver discrepancies in the detection of intracranial aneurysms, consensus was reached or a majority decision was obtained.

Lipid extraction and LC-MS analysis

Plasma samples were subjected to lipid extraction, followed by incubation at room temperature for 30 min. The solution was centrifuged at 14,000 × g for 15 min at 10°C, and the upper organic solvent layer was collected and dried under nitrogen. For LC-MS analysis, the samples were re-dissolved and vortexed in 200 μL of an isopropanol solution. To monitor the stability and repeatability of the instrument analyses, quality control (QC) samples were prepared by pooling an aliquot of each sample and were analyzed together with the other samples. The sample queue was randomized to avoid bias. The initial mobile phase was 30% solvent B at a flow rate of 300 μL/min; it was held for 2 min, increased linearly to 100% solvent B over 23 min, and followed by equilibration at 5% solvent B for 10 min. Mass spectra were acquired on a Q Exactive Plus in positive and negative ion modes. Electrospray ionization (ESI) parameters were optimized and preset for all measurements as follows: source temperature, 300°C; capillary temperature, 350°C. In positive ion mode, the ion spray voltage was set at 3,000 V, the S-Lens RF level at 50%, and the scan range at m/z 200-1,800. In negative ion mode, the ion spray voltage was set at −2,500 V, the S-Lens RF level at 60%, and the scan range at m/z 250-1,800.

Identification by LipidSearch

LipidSearch software (Thermo Scientific) was employed for lipid identification and processing of the raw data, including peak extraction, lipid identification, peak alignment, and quantification. This software contains MS2 and MS3 databases covering 8 categories, 300 subclasses, and about 1.7 million lipid molecules.

Normalization of lipid data

For this large-scale lipidomics study, we used the support vector regression (SVR) normalization method to normalize the lipid data and effectively remove intra-batch and inter-batch variation in the LC-MS analysis. In brief, the intensities of 360 samples (120 healthy, 120 non-SAH, and 120 SAH) were extracted from the lipid data, resulting in a 1,312 × 360 lipid-expression matrix.
Then, QC-SVR, implemented in the R/Bioconductor package MetNormalizer, was used to normalize the expression matrix. For subsequent quantitative analyses, the normalized intensities were log2-transformed. In addition, samples were removed when more than 30% of their values were missing, and the remaining missing values were imputed from the nearest 10 neighbors using the k-nearest neighbor algorithm.

Lipid difference analysis

The data were first log2-scaled before multivariate data analysis (MVDA), which was computed with the ropls R package. This package implements unsupervised principal component analysis (PCA) together with supervised partial least squares discriminant analysis (PLS-DA) and orthogonal partial least squares discriminant analysis (OPLS-DA), based on the original NIPALS versions of the algorithms. Using this package, the R2 and Q2 quality metrics, the score and orthogonal distances, the permutation diagnostics, and the variable importance in projection (VIP) values can be calculated. Permutation testing was performed 200 times. Score plots, loadings, and permutation plots were generated from the calculated results using R. The variables were standardized (mean-centered and Pareto-scaled, equivalent to the Par scaling method in SIMCA) prior to model building. The ropls package is available from the Bioconductor repository. Univariate statistical analyses included Student's t test and fold-change analysis.

Lipidomic subtype identification in aneurysm patients

To identify the lipidomic subtypes in the lipid expression matrix from the 240 aneurysm samples, we used the NMF v.0.21.0 consensus-clustering method from the R package NMF. NMF is a machine learning method that can efficiently identify distinct molecular patterns and molecular classifications. For each lipid, we first calculated the coefficient of variation, which was used to rank the lipids in descending order. NMF v.0.20.6 in R v.3.6.1 unsupervised consensus clustering was then applied to the top 30% most-variant lipids. The nsNMF algorithm was run with 469 iterations for the clustering runs and 200 iterations for the rank survey. The average silhouette width for clustering solutions between 2 and 6 clusters and the profiles of the cophenetic score were used to select the preferred clustering. From the lipidomic data, the silhouette width and the rank-survey profiles of the cophenetic score, together with the consensus membership heat maps, suggested a two-subtype solution for patients with aneurysm.

Statistical analysis

Categorical baseline and demographic variables are shown as numbers and percentages, and the χ² test was used for their comparison. Continuous variables are shown as the mean (±SD) if normally distributed and were compared using an unpaired t test. Non-normally distributed data are presented as the median (interquartile range), and differences were determined using one-way analysis of variance (ANOVA) and the Wilcoxon rank-sum test. Statistical significance was defined as a p value ≤0.05. To select biomarkers, least absolute shrinkage and selection operator (LASSO) regression was performed using the glmnet package, and the caret package was used to run random forest algorithms. Subsequently, generalized linear models (glm; logistic regression) were constructed to analyze the biomarkers. The car package was used to calculate the VIP values, the pROC package was used to plot the ROC curves, and the rms package was used to construct the nomogram and produce the calibration plots.
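As a concrete illustration of this model-building step, the sketch below runs a LASSO-penalized selection followed by an ordinary logistic model and ROC evaluation with a Youden-index cutoff, using the packages named above; the object names (`lipid_matrix`, `group`) and the simulated data are hypothetical placeholders, not the authors' code or data.

```r
library(glmnet)  # LASSO-penalized logistic regression
library(pROC)    # ROC curves and Youden-index cutoffs

set.seed(42)
# Hypothetical inputs: `lipid_matrix` (samples x lipids, log2 intensities) and a binary `group`
lipid_matrix <- matrix(rnorm(200 * 50), nrow = 200,
                       dimnames = list(NULL, paste0("lipid", 1:50)))
group <- factor(ifelse(lipid_matrix[, 1] - lipid_matrix[, 2] + rnorm(200) > 0, "CA", "HC"))

# 1) LASSO with cross-validated lambda; non-zero coefficients form the candidate signature
cv_fit <- cv.glmnet(lipid_matrix, group, family = "binomial", alpha = 1)
cvec   <- as.matrix(coef(cv_fit, s = "lambda.min"))[, 1]
selected <- setdiff(names(cvec)[cvec != 0], "(Intercept)")

# 2) Ordinary logistic model on the selected lipids; the linear predictor is the dp-score
train_df <- data.frame(y = group, lipid_matrix[, selected, drop = FALSE])
glm_fit  <- glm(y ~ ., data = train_df, family = binomial)
dp_score <- predict(glm_fit, type = "link")

# 3) ROC curve, AUC, and the Youden-index-derived cutoff used for risk grouping
roc_obj <- roc(group, dp_score)
auc(roc_obj)
coords(roc_obj, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))
```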
Model calibration was assessed with the Hosmer-Lemeshow test using the script "HLtest.R", and decision curve analysis (DCA) was performed using "ggDCA.R". In addition, to build the dp-score, we fitted a logistic regression model. The performance of the dp-score-based classifier was assessed using a ROC curve, and cases were assigned to high- and low-risk groups using the Youden index-derived cutoff thresholds from the four-lipid signature model. The Wilcoxon test was used to examine the distribution of the dp-score between clinical categories. ROC analysis was used to compare the discriminatory performance of the dp-score, age, sex, hypertension, and other factors. Multivariate logistic regression analysis was used to determine the effects of potential risk factors. All analyses were performed with R software, version 3.6.1. Note that cohort 1 and cohort 2 were analyzed independently (without being normalized together) for the comprehensive lipidomic profile analysis, whereas the training cohort (cohort 1) and validation cohort (cohort 2) were normalized together for the biomarker analysis.

Brief study design (figure legend). (A) Step 1: workflow for the lipidomic analysis and subtype analysis (R-I and R-II for RCA; U-I and U-II for UCA). (B) Step 2: workflow for building the diagnostic prediction model for CA vs HC and RCA vs UCA, and for the lipid signature-based diagnostic prediction (dp) score for CA vs HC, UCA vs RCA, and the RCA or UCA subtypes. Note: QC = quality control; SVR = support vector regression; HC = healthy control; UCA = unruptured cerebral aneurysm; RCA = ruptured cerebral aneurysm.
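Returning to the subtype-identification step described in the Methods above, the following R sketch shows a rank survey and consensus clustering with the NMF package; the input object `lipid_matrix` and the run parameters are illustrative assumptions rather than the authors' exact settings.

```r
library(NMF)  # non-negative matrix factorization with consensus clustering

# Hypothetical input: non-negative lipid intensity matrix, lipids in rows, samples in columns
set.seed(7)
lipid_matrix <- matrix(rexp(500 * 60), nrow = 500,
                       dimnames = list(paste0("lipid", 1:500), paste0("s", 1:60)))

# Keep the most variable features (the paper uses the top 30% by coefficient of variation)
cv  <- apply(lipid_matrix, 1, function(x) sd(x) / mean(x))
top <- lipid_matrix[order(cv, decreasing = TRUE)[1:round(0.3 * nrow(lipid_matrix))], ]

# Rank survey: cophenetic correlation and silhouette profiles guide the choice of k
estim <- nmf(top, rank = 2:6, method = "nsNMF", nrun = 20, seed = 123)
plot(estim)          # quality measures (cophenetic, silhouette, ...) per rank
consensusmap(estim)  # consensus membership heat maps

# Fit at the selected rank (two subtypes in the paper) and extract cluster labels
fit <- nmf(top, rank = 2, method = "nsNMF", nrun = 50, seed = 123)
subtype <- predict(fit)   # subtype membership per sample
table(subtype)
```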
Corrigendum: Heterogeneous behavioral adoption in multiplex networks (2018 New J. Phys. 20 125002)

Heterogeneity is found widely in populations; for example, different individuals have diverse personalities and differ in their willingness to accept novel ideas or behaviors. However, population heterogeneity is rarely considered in studies of social contagion on complex networks, especially on multiplex networks. To explore the effect of population heterogeneity on the dynamics of social contagions, a novel model based on double-layer multiplex networks is proposed, in which information diffuses synchronously on the two layers and each layer is assigned a different adoption threshold. Meanwhile, individuals are classified into activists and conservatives according to their willingness to adopt new behaviors. To qualitatively understand the effect of population heterogeneity on social contagions, a generalized edge-based compartmental theory is proposed. Through rigorous theoretical analysis and extensive simulations, we find that the activists in the two layers promote behavior adoption. More interestingly, crossover phenomena in the phase transition are found in the growth of the final adoption size as the information transmission rate increases.

Introduction

Real complex systems, ranging from economic to ecological networks [1][2][3], can be described as multiplex networks [4][5][6][7], in which each subnetwork represents a distinct subsystem. Existing research has revealed that the multiplexity of networks has a significant influence on the dynamics that unfold on them [8][9][10][11][12][13][14][15]. For cascading failure in interdependent networks, the system exhibits a discontinuous phase transition [16], unlike the continuous one on a single network [17]. For synchronization, Zhang et al [18] found that a far more general process can occur in adaptive and multiplex networks when the specific microscopic correlation between the oscillators' natural frequencies and their effective coupling strengths vanishes. Moreover, Boccaletti et al discovered that, instead of the classic second-order (continuous and reversible) result known from the structure and dynamics of complex networks, the synchronization process rather resembles a first-order (discontinuous and irreversible) transition [19]. For evolutionary games on multiplex networks, interactions between layers influence the evolution of cooperation [20][21][22]. Researchers have also found that spreading dynamics on multiplex networks behave differently from those on single-layer networks [23][24][25][26][27][28][29][30][31][32][33][34][35]. A few interlayer links can induce a global outbreak of disease, even though the epidemic cannot break out on a single-layer network [36]. Granell et al revealed that a metacritical point exists in the asymmetric coevolution of spreading dynamics in complex networks [37]. Recently, social contagion in multiplex networks has attracted extensive study and has been used to describe the dynamics of information diffusion and of behavior and innovation adoption [38][39][40]. Unlike epidemic spreading [41][42][43], social contagions display an inherent social reinforcement effect [25], meaning that adopting a certain social behavior requires verification of its credibility and legitimacy. It has been found that the multiplexity of networks promotes the spreading of Markovian social contagions [44][45][46][47].
Wang et al [48] considered that information is transmitted through multiple channels and switches among these channels. Based on this communication channel alteration (CCA) mechanism, they proposed a non-Markovian threshold model on multiplex networks. The authors found that the time delay induced by the CCA mechanism slows down the transmission of information but affects neither the final spreading size nor the type of phase transition. Chen et al [49] considered that the behavior adoption of a single person is affected simultaneously by neighbors from different layers of a multiplex network. They found that, because of the synergy between layers, the final spreading size is enhanced, and a few seeds can trigger global spreading of information. In addition, Wang et al [50] studied the effect of inter-layer correlation on social contagions in multiplex networks and proposed a non-Markovian social contagion model that accounts for this correlation. They found that the correlation between the layers of multiplex networks promotes behavior adoption but does not alter the growth pattern of the final adoption size. Beyond the studies above, heterogeneity in populations has been widely investigated by previous researchers; for example, different people have different attitudes toward new ideas or behaviors [51][52][53][54][55][56]. Nevertheless, in real systems, individuals can participate in several social networks simultaneously and have different intimate relationships in different subnetworks, which can affect their willingness to adopt a new idea or behavior. For example, family members or close friends can convince you to adopt a new idea or behavior more easily, whereas on a virtual social network you need to verify a piece of information many times before accepting it. Naturally, we therefore consider that the adoption thresholds in different layers of a multiplex network are different and that the willingness to adopt also differs from person to person. Previous work mainly studied the effect of population heterogeneity on social contagions on a single network [57,58]; however, social contagions with heterogeneous populations on multiplex networks have not been systematically studied. To study the effect of heterogeneous populations on social contagions in multiplex networks, we propose a non-Markovian social contagion model based on a two-layer multiplex network. In this model, to reflect the adoption heterogeneity of the population, we randomly select a fraction q of nodes as activists, who have a strong willingness to adopt the behavior, and the remaining nodes as conservatives, who have a weak willingness to adopt the behavior. In addition, to investigate the influence of different social circles on the spreading dynamics, different adoption thresholds are assigned to the two layers of the network. To theoretically analyze the dynamics of social contagions, a generalized edge-based compartmental theory is established. Through rigorous theoretical analysis and extensive simulations, we find that population heterogeneity affects both the final adoption size and the phase transition. Increasing the proportion of activists promotes the final adoption size. Most interestingly, crossover phenomena [59] in the phase transition of the final adoption size are found.
The phase transition of the adoption size changes from a continuous to a hybrid pattern, which exhibits characteristics of both continuous and discontinuous transitions, when the proportion of activists is tuned from a large value to a small one. Finally, we find that changing the degree heterogeneity of both layers of the network alters the crossover phenomena in the phase transition.

Model descriptions

To study the complex contagion, a double-layer multiplex network is adopted in which the two layers A and B represent two different communication channels. Nodes correspond one to one across the two layers, and each pair of inter-layer-coupled nodes represents the same person in two different social subnetworks. Edges in different layers represent different types of connections between individuals. To avoid intra-layer degree-degree correlations, the uncorrelated configuration model [60] is used, following two given independent degree distributions P_A(k_i^A) and P_B(k_i^B). Note that self-loops and multiple edges are avoided in building the networks. The degree of node i is denoted by k_i^A in layer A and k_i^B in layer B; thus, the degree of node i can be written as the pair (k_i^A, k_i^B). Assuming that there are no degree-degree correlations between the two layers, the joint degree distribution factorizes as P(k_A, k_B) = P_A(k_A) P_B(k_B). Each node i holds adoption thresholds T_A and T_B in layers A and B, respectively. The larger the adoption threshold, the lower the willingness to adopt the behavior. In reality, individuals also differ in their willingness to adopt the behavior when they receive the information: individuals with a greater willingness to adopt are called activists, while those with a weaker willingness are called conservatives. To reflect these differences, a fraction q of nodes are randomly selected as activists and the remaining fraction 1−q as conservatives. To describe the social contagion on the multiplex network, we adopt a generalized susceptible-adopted-recovered (SAR) model, in which each node can be in any of three states: susceptible (S), adopted (A), or recovered (R). Nodes in the susceptible state have not adopted the behavior. Nodes in the adopted state have adopted the behavior and are willing to transmit the behavior information to their neighbors. Finally, nodes in the recovered state have lost interest in the behavior and no longer transmit the behavior information to their neighbors. Initially, a fraction ρ0 of nodes are selected randomly as adopted nodes (seeds); we set ρ0 = 1/N in this paper, i.e. only one node in the network is selected as the seed. At each time step, each A-state node tries to diffuse the behavior information to its S-state neighbors in layers A and B synchronously, with rates λ_A and λ_B, respectively. Note that once the information is transmitted successfully through an edge between an A-S node pair, it is never transmitted through that edge again; in other words, only non-redundant information transmission is allowed [61]. In addition, A-state nodes can try repeatedly to transmit the information to their susceptible neighbors until they enter the recovered state. If a piece of information is successfully transmitted from an A-state node i to an S-state neighbor j in layer X, the cumulative number of pieces of information that j has received in layer X increases by one; an activist adopts the behavior once the accumulated information in either layer reaches the corresponding threshold, whereas a conservative adopts only when the thresholds are reached in both layers.
Obviously, the adoption of behavior is determined by the accumulated pieces of information in both layers, so a non-Markovian effect is induced in the dynamics of behavior spreading. At each time step, A-state nodes lose interest in the behavior and enter the recovered state with rate γ. Once nodes enter the R-state from the A-state, they no longer participate in the transmission of the behavior information; that is, they neither transmit the information to neighbors nor receive information from A-state neighbors. Finally, the spreading dynamics terminate when all A-state nodes have entered the R-state.

Theoretical analysis

To theoretically analyze the model proposed in section 2 and describe the strong dynamical correlations among the states of nodes during information spreading, we establish an edge-based compartmental theory. In this theory, a node i is assumed to be in the cavity state [62], which means that it can receive information from its neighbors but cannot transmit information to them. In addition, we denote by θ_X(t) the probability that the information about the behavior has not been transmitted through a randomly chosen edge to the susceptible neighbor in layer X by time t. The probability that a randomly selected susceptible node i with degrees (k_A, k_B) has received m_A and m_B pieces of information by time t can then be expressed in terms of θ_A(t) and θ_B(t), respectively. According to the distinction between activists and conservatives in section 2, an activist in the susceptible state implies that the accumulated pieces of information it has received in layers A and B are both less than the corresponding thresholds, namely m_A < T_A and m_B < T_B, whereas a conservative in the susceptible state implies that the accumulated information in layer A or layer B is less than its threshold, namely m_A < T_A or m_B < T_B. We define the probability that an activist with degrees (k_A, k_B) remains susceptible at time t and the corresponding probability for a conservative; the probability that a randomly selected node with degrees (k_A, k_B) is susceptible at time t then follows. The terms on the right-hand side of equation (4) represent the probability that the accumulated pieces of information an activist has received by time t in each layer are less than the corresponding thresholds, and the second term on the right-hand side of equation (5) represents the probability that the accumulated pieces of information in both layers A and B exceed the corresponding thresholds. Thus, the probability η_X(t) that the accumulated information of a randomly selected node in layer X is less than the corresponding threshold at time t can be written down. The probability that an activist stays susceptible is η_A η_B, and a conservative stays susceptible with probability η_A + η_B − η_A η_B. Consequently, we obtain the fraction of susceptible nodes at time t. The neighbor of a node in the cavity state can be in any of the three states (susceptible, adopted, or recovered), so θ_X(t) is composed of three parts, ξ_S^X(t), ξ_A^X(t), and ξ_R^X(t), which respectively denote the probabilities that a neighbor of the cavity-state node is in the susceptible, adopted, or recovered state and has not transmitted the information to the node. Next, we analyze these three terms. If node i is in the cavity state and node j, with degrees (k_j^A, k_j^B), is a susceptible neighbor of i in layer A, then node j can receive information from only k_j^A − 1 neighbors in layer A, since node i is in the cavity state; the probability that node j has received n_A pieces of information in layer A follows accordingly (and similarly for layer B).
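For reference, the standard edge-based compartmental forms implied by the definitions above can be written as follows; this is a sketch consistent with the prose, not a verbatim reproduction of the paper's numbered equations.

```latex
% Probability that a susceptible node with degrees (k_A,k_B) has received
% m_X pieces of information in layer X by time t (binomial in theta_X):
\phi_{m_X}(k_X,t) = \binom{k_X}{m_X}\,\bigl[\theta_X(t)\bigr]^{\,k_X-m_X}\bigl[1-\theta_X(t)\bigr]^{\,m_X},
\qquad X \in \{A,B\}.

% Probability of remaining below threshold in layer X:
\eta_X(k_X,t) = \sum_{m_X=0}^{T_X-1} \phi_{m_X}(k_X,t).

% Fraction of susceptible nodes at time t (activists with probability q, conservatives 1-q):
s(t) = \sum_{k_A,k_B} P(k_A,k_B)\Bigl[\, q\,\eta_A\,\eta_B
      + (1-q)\bigl(\eta_A+\eta_B-\eta_A\,\eta_B\bigr) \Bigr].

% Decomposition of theta_X into susceptible, adopted, and recovered neighbor contributions:
\theta_X(t) = \xi_S^X(t) + \xi_A^X(t) + \xi_R^X(t).
```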
If j is an activist, taking all possible values of n_A into consideration, we obtain the probability that node j remains in the susceptible state, where the corresponding sum accounts for all possible pieces of information that node j receives in layer B. Considering the case in which node j is a susceptible neighbor of node i in layer B, we obtain a similar expression. In addition, if j is a conservative, the probability that it remains susceptible follows in the same way. Accordingly, taking the characteristics of the susceptible neighbors into consideration, the probability that node i connects to a susceptible node is composed of two parts (the activist and conservative contributions). Given the joint degree distribution P(k_A, k_B), an edge in layer X connects to a susceptible neighbor with a probability weighted by k_j^X P_X(k_j^X)/⟨k^X⟩, the probability that an edge connects to a neighbor with degree k_j^X in layer X. Next, we analyze the evolution of ξ_A(t) and ξ_R(t) in layers A and B. Once the behavior information has been transmitted successfully through an edge in layer X, that edge no longer contributes to θ_X; thus, the evolution of θ_A and θ_B can be written down. As for the evolution of ξ_R^X(t), if the information is not transmitted through an edge and the adopted node at its end enters the recovered state at time t, then ξ_R^X(t) increases; its evolution equation follows accordingly. Based on equations (15) and (16), we obtain the integration constant, and inserting equations (14) and (17) into (7) gives the remaining term. We can then substitute equation (18) into (15) and obtain the detailed time evolution of θ_X(t). According to the evolution mechanism of node states, susceptible nodes move into the adopted state when they adopt the behavior, while adopted nodes lose interest in the behavior and move into the recovered state. Thus, the time evolution of the fractions of adopted and recovered nodes is obtained easily. Through equations (6), (20), and (21), the fraction of nodes in each state at an arbitrary time step can be obtained by iteration; moreover, the final adoption size R(∞) is obtained when t → ∞. To study the contagion dynamics, we analyze the fixed points of equation (19) at the steady state. Setting the time derivatives to zero and, for convenience, denoting the resulting fixed-point relations as functions of θ_A and θ_B, we find that if equation (23) is tangent to equation (24) with θ_A < 1 and θ_B < 1, a discontinuous first-order phase transition exists [63]; at the critical point, the corresponding tangency condition, equation (25), is satisfied. Because the equations above are too complex to solve analytically, especially when the adoption thresholds T_A and T_B of the two layers differ, we discuss the special case T_A = T_B = 1 to analyze the dynamics of the complex contagion intuitively. In this case, equations (8) and (9) are simplified, equations (10) and (11) can be rewritten accordingly, and equations (12) and (13) take correspondingly simpler forms. Substituting equations (30) and (31) into (22), we obtain an expression involving the generating function of the excess degree distribution P_X(k_X); the generating functions of the degree distribution P(k_A, k_B) follow similarly. The critical condition can then be obtained by substituting equations (32) and (33) into (25).

Numerical verification and simulation results

We perform extensive numerical simulations on artificial two-layer multiplex networks based on the Erdős–Rényi (ER) [64] and scale-free (SF) network models, with average degrees ⟨k^A⟩ = ⟨k^B⟩ = 10 unless otherwise specified. In the final state, all adopted nodes have moved into the recovered state, so we measure the propagation range by the fraction of recovered nodes at the steady state, R(∞).
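To make the simulation procedure concrete, here is a minimal Monte Carlo sketch in R of the two-layer susceptible-adopted-recovered threshold model described in the model section (activists adopt when either layer reaches its threshold; conservatives require both). igraph is used only to build the ER layers, the parameter values are illustrative, and this is not the authors' simulation code.

```r
library(igraph)

# Build a plain adjacency list from an igraph object (node ids 1..N)
make_adj <- function(g, N) {
  el <- as_edgelist(g, names = FALSE)
  adj <- vector("list", N)
  for (r in seq_len(nrow(el))) {
    a <- el[r, 1]; b <- el[r, 2]
    adj[[a]] <- c(adj[[a]], b)
    adj[[b]] <- c(adj[[b]], a)
  }
  adj
}

simulate_contagion <- function(N = 2000, kmean = 10, q = 0.2,
                               TA = 1, TB = 3, lambda = 0.3, gamma = 1.0) {
  adjA <- make_adj(sample_gnp(N, kmean / (N - 1)), N)  # layer A (ER)
  adjB <- make_adj(sample_gnp(N, kmean / (N - 1)), N)  # layer B (ER)

  activist <- rep(FALSE, N); activist[sample(N, round(q * N))] <- TRUE
  state <- rep("S", N); state[sample(N, 1)] <- "A"       # single seed
  mA <- integer(N); mB <- integer(N)                     # pieces received per layer
  usedA <- vector("list", N); usedB <- vector("list", N) # neighbors already informed by each sender

  while (any(state == "A")) {
    adopters <- which(state == "A")
    for (i in adopters) {
      # layer A: transmit to susceptible neighbors not yet informed via this edge
      tA <- setdiff(adjA[[i]][state[adjA[[i]]] == "S"], usedA[[i]])
      hitA <- tA[runif(length(tA)) < lambda]
      mA[hitA] <- mA[hitA] + 1; usedA[[i]] <- c(usedA[[i]], hitA)
      # layer B
      tB <- setdiff(adjB[[i]][state[adjB[[i]]] == "S"], usedB[[i]])
      hitB <- tB[runif(length(tB)) < lambda]
      mB[hitB] <- mB[hitB] + 1; usedB[[i]] <- c(usedB[[i]], hitB)
    }
    # adoption rule: activists need either threshold, conservatives need both
    ok <- ifelse(activist, mA >= TA | mB >= TB, mA >= TA & mB >= TB)
    state[state == "S" & ok] <- "A"
    # recovery of the nodes that were adopted at the start of this step
    state[adopters[runif(length(adopters)) < gamma]] <- "R"
  }
  mean(state == "R")  # final adoption size R(infinity)
}

set.seed(1)
simulate_contagion(q = 0.5, lambda = 0.4)
```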
To determine the threshold λ_c from simulations, we adopt the relative variance χ, which has been widely and successfully used to determine epidemic thresholds [9,65]. Here χ is the relative variance of the final adoption size over independent realizations, with ⟨···⟩ denoting the ensemble average, and it exhibits a diverging peak at the critical point.

Homogeneous two-layered network

We first investigate the effect of heterogeneous behavioral adoption on the dynamics of the complex behavior contagion on ER-ER networks with Poisson degree distribution P(k) = e^{−⟨k⟩}⟨k⟩^k/k!. In figure 1, we explore the effect of activists on the dynamics of the complex contagion, with the adoption thresholds in the two layers set to T_A = 1 and T_B = 3. Figure 1(a) shows the final fraction of adopted nodes as a function of the information transmission rate λ for three typical values of the activist fraction q. We find that behavior adoption is enhanced by the fraction of activists and that crossover phenomena in the phase transition appear when the proportions of activists and conservatives are tuned. Specifically, the final adoption fraction R(∞) increases with q. In addition, when there is a relatively large fraction of activists in the network, i.e. q = 0.8 (blue triangles) and q = 0.5 (green squares), the final adoption size R(∞) undergoes a continuous phase transition at the critical transmission rate λ_c. When the proportion of activists is reduced to q = 0.2 (red circles), the transition of R(∞) exhibits a hybrid pattern: R(∞) increases continuously at the first threshold, then grows slowly with increasing λ until the second threshold is reached, where it exhibits an abrupt, discontinuous jump. To distinguish the two critical values of λ in hybrid transitions, we denote by λ_I the threshold at which the discontinuous phase transition occurs and by λ_II the threshold at which the continuous phase transition occurs. The lines in figure 1(a) are theoretical results obtained from equations (1)-(7) and (14)-(17), which agree well with the numerical simulations. The peaks of χ in figure 1(b) indicate the critical points obtained from simulations. We also observe that the threshold λ_c decreases as q increases. The continuous and discontinuous transitions are caused by the relative proportions of activists and conservatives in the network. Activists adopt the behavior once the accumulated pieces of information in any layer exceed the corresponding threshold. The threshold in layer A is T_A = 1, which means that a single piece of information in layer A induces adoption. Thus, when there is a large fraction of activists in the network, the final number of adopted nodes increases quickly with λ. In contrast, when the proportion of conservatives increases, adoption becomes harder, as conservatives adopt the behavior only when the pieces of information in both layers exceed the thresholds. In this scenario, the activists adopt the behavior first and then stimulate the conservatives to adopt it. For the conservatives, there exists a subcritical state in which a conservative has not yet adopted the behavior but needs only one more piece of information to do so.
Similar to the so-called 'powder keg' in explosive percolation [59], the discontinuous phase transition appears when the information counts of the nodes in the subcritical state exceed the threshold simultaneously. Note that the two critical values λ_I and λ_II for q = 0.2 separate the parameter space into three regions. When λ < λ_II, the behavior is only locally adopted, i.e. only a vanishingly small fraction of nodes adopt it. When λ_II < λ ≤ λ_I, more and more activists adopt the behavior as λ increases, so R(∞) grows continuously. Finally, when λ > λ_I, the conservatives adopt the behavior simultaneously, leading to a discontinuous jump in R(∞). The relative variance χ is used to locate the critical points numerically, as shown in figure 1(b). From the analysis above, both the final adoption size R(∞) and the phase transitions are affected by the parameters q and λ; thus, the dependence of R(∞) on q and λ is studied in figure 2. The colors in figures 2(a) and (b) represent the values of R(∞). We find that the parameter plane (λ, q) is divided into three regions by two critical values, q_I (white dotted line) and q_II (white line). In region I, where q < q_I, the proportion of activists is extremely small, which blocks the propagation of information because there are not enough activists to spread it in the initial stage. Consequently, in this region the global adoption of the behavior cannot be triggered, and the final fraction of adopted nodes remains extremely small regardless of the value of λ. In region II, where q_I ≤ q < q_II, the proportion of activists is relatively larger. In this region, the transition of R(∞) is continuous at λ_II (see the green squares in figure 2(a)), after which R(∞) increases slowly as more and more activists adopt the behavior. When λ increases to λ_I, a considerable number of conservatives move into the subcritical state; consequently, when λ > λ_I, R(∞) increases abruptly to a large value, and a discontinuous phase transition occurs at λ = λ_I (see the red triangles in figure 2(a)). In region III, where q > q_II, the activists dominate the contagion dynamics. With increasing λ, more and more activists adopt the behavior, and at the same time the conservatives gradually adopt it as a result of being stimulated by the activists; consequently, R(∞) increases continuously with λ. In addition, as the fraction of activists q increases, the adoption threshold λ_c decreases. Our theoretical predictions of λ_c, λ_II, and λ_I agree well with the numerical results, as shown in figure 2(b). Moreover, we find that the average degree does not qualitatively alter these phenomena, as presented in figure 3: a small activist fraction q induces a hybrid phase transition, and a large q gives rise to a continuous phase transition. In addition, increasing the average degree ⟨k⟩ enlarges the spreading scale and decreases the outbreak threshold. Our theoretical method agrees very well with these phenomena.

Heterogeneous two-layered network

We next study the effect of network structure on the phase transition of the social contagions. First, we focus on ER-SF networks with average degrees ⟨k^A⟩ = ⟨k^B⟩ = 10. Specifically, the transition type changes from a continuous pattern when there is a large proportion of activists, e.g. q = 0.5 and 0.8, to a hybrid pattern at a small proportion, e.g.
q=0.12 for v B =2.1, 3 (see red circles in figures 4(a1) and (b1)) and q=0.2 for v B =4 (see red circles in figure 4(c1)). Lines correspond to the theoretical results. Figures 4(a2)-(c2) show the relative variance χ as a function of λ when v B =2.1 (a2), 3 (b2) and 4 (c2). There are double peaks of χ (red lines) when q=0.12 (see figures 4(a2) and (b2)) and q=0.2 (see figures 4(c2)), which indicates double phase transitions. When the fraction of activists increases, e.g. q=0.5 and 0.8, there is one peak of χ, which suggests one phase transition. For a small proportion of activists q, the higher heterogeneity of degree distribution, e.g. v B =2.1, causes the double phase transitions become harder to be observed. As stated in [61], the discontinuous growth of R ¥ ( )is induced by a finite fraction of individuals in the subcritical state adopting the behavior simultaneously. Individuals in the subcritical state means that his accumulative received information equals to T 1 B -. For SF networks, there are few hubs, which induces the individuals adopting the behavior gradually. Thus, the fraction of individuals in the subcritical state decrease, and the discontinuous disappears. At last, to further explore the impact of network structure on the dynamics of social contagions, we study the process of social contagions on SF-SF networks. The degree distributions of layers A and B are P k k the degree exponents of layer A and B. Figure 5 exhibits the numerical simulations (see the symbols) and theoretical solutions (see the lines), under v A =v B =2.1 (a1), 3 (b1), and 4 (c1), respectively. In figure 5, the adoption thresholds are set as T A =1 and T B =3. Interestingly, we find that when the degree heterogeneity is strong enough, i.e. v A =v B =2.1, the hybrid phase transition disappears (see figure 5(a1)), and there is only one continuous phase transition for all values of q (see figure 5(a1)). When the power exponent v A = v B increases, the hybrid phase transition appears when there is at a relatively small value of activists, e.g, q=0.1 for v A =v B =3 (see figures 5(b1) and (b2)) and q=0.2 for v A =v B =4 (see figures 5(c1) and (c2)). The simultaneous adoption of the behavior by a large fraction of individuals in the subcritical state raises the discontinuous growth of R ¥ ( ). With fixed average degree, the more heterogeneous the degree distribution, the more the nodes, respectively, with big degree and small degree. Since behavior propagates on complex networks in a hierarchical way [53], a small number of nodes with large degree make the susceptible neighbors gradually adopt the behavior. As a result, in SF-SF network there hardly exist a fraction of nodes simultaneously in the subcritical state. Thus, the discontinuous growth of R ¥ ( )disappears. Discussions In summary, we studied the effect of heterogeneity populations on the dynamics of social contagions on multiplex networks. We considered that individuals on social networks have different willingness to accept new ideas or behaviors, and then heterogeneity in populations appears. To represent the heterogeneity of the populations, we randomly selected a fraction of q nodes in the network as activists. The remaining q 1nodes are defined as conservatives. We also considered that information spreadings in different networks have different credibility, e.g. 
information transmitted among family members or friends is more credible than information transmitted on virtual social networks, such as Facebook or Twitter, so more pieces of information are needed to convince a person on a virtual social network to adopt the behavior. Consequently, we assumed that the populations in different subnetworks have different adoption thresholds and assigned two thresholds, T_A and T_B, to the two layers of the network, respectively. Information is transmitted synchronously in the two layers. Activists adopt the behavior once the accumulated pieces of information they have received in any layer exceed the corresponding threshold, whereas conservatives adopt the behavior only if the accumulated pieces of information in both layers exceed the corresponding thresholds simultaneously. To theoretically analyze the model, a generalized edge-based compartmental theory was established. Through theoretical analysis and simulation verification, we found that population heterogeneity has significant effects on the dynamics of social contagions on multiplex networks. First, on an ER-ER multiplex network, behavior adoption is enhanced by the fraction of activists: the final adoption size grows with q, and the threshold decreases with q. More importantly, crossover phenomena in the phase transition appear when the value of q is tuned. When q is relatively large, the final adoption size R(∞) increases continuously with the transmission rate λ, whereas for a relatively small q there is a hybrid transition of R(∞): R(∞) versus λ first exhibits a continuous transition at the first critical value λ_II and then a discontinuous jump at the second critical value λ_I. Finally, we studied the effect of degree heterogeneity on the social contagions. On an ER-SF network, crossover phenomena in the phase transition still appear when the fraction of activists is tuned; however, on an SF-SF network, the crossover phenomenon disappears when the degree heterogeneity is strong enough. Population heterogeneity is a crucial factor in the study of social contagion that has often been overlooked in previous studies. By simultaneously considering network multiplexity and population heterogeneity, we can better reveal the underlying mechanisms of social contagions on complex networks. The main contribution of our study lies in providing a qualitative and quantitative view of the impact of heterogeneous populations and heterogeneous adoption thresholds on the dynamics of social contagions. Our work enriches the study of phase transition phenomena, and the theory developed in this paper can offer new inspiration for research on other spreading dynamics, such as epidemic spreading, innovation spreading, marketing, and the diffusion of computer viruses. For behavior spreading on multiplex networks, the number of subnetworks and the total number of adopting neighbors across all subnetworks are extremely important and deserve investigation in future research.

Figure caption (excerpt): Lines are theoretical results obtained from equations (1)-(7) and (14)-(17); other parameters are γ = 1.0, average degree ⟨k⟩ = 10, T_A = 1, and T_B = 3. The theoretical solutions agree well with the numerical simulations. The relative variance χ is shown as a function of λ for both SF layers in subgraph (a2), for v_A = v_B = 3 in (b2), and for v_A = v_B = 4 in (c2).
Vitamin B6 prevents excessive inflammation by reducing accumulation of sphingosine-1-phosphate in a sphingosine-1-phosphate lyase-dependent manner

Abstract
Vitamin B6 is necessary to maintain normal metabolism and the immune response, especially the anti-inflammatory immune response. However, the exact mechanism by which vitamin B6 plays its anti-inflammatory role is still unclear. Here, we report a novel mechanism by which vitamin B6 prevents excessive inflammation: it reduces the accumulation of sphingosine-1-phosphate (S1P) in an S1P lyase (SPL)-dependent manner in macrophages. Vitamin B6 supplementation decreased the expression of pro-inflammatory cytokines by suppressing the nuclear factor-κB and mitogen-activated protein kinase signalling pathways. Furthermore, vitamin B6 reduced the accumulation of S1P by promoting SPL activity. The anti-inflammatory effects of vitamin B6 were inhibited by S1P supplementation or SPL deficiency. Importantly, vitamin B6 supplementation protected mice from lethal endotoxic shock and attenuated experimental autoimmune encephalomyelitis progression. Collectively, these findings reveal a novel anti-inflammatory mechanism of vitamin B6 and provide guidance on its clinical use.

It is therefore necessary to clarify the potential mechanism for inhibiting excessive activation of macrophages. Vitamin B6 is a general term for a class of vitamers related to metabolism and function. 8,9 Pyridoxal (PL), a transport form of vitamin B6, can be re-phosphorylated by pyridoxal kinase into the active form pyridoxal 5'-phosphate (PLP), 10 which plays a vital role as a co-factor in more than 150 enzymatic reactions and is directly involved in metabolism and immune regulation. 11 Vitamin B6 is considered necessary to maintain normal metabolism and the immune response, especially the anti-inflammatory immune response. 12 A previous study reported that vitamin B6 inhibited lipopolysaccharide (LPS)-induced expression of iNOS and COX-2 at the mRNA and protein levels by suppressing NF-κB activation in RAW 264.7 macrophages. 13 It also disturbs NLRP3-dependent caspase-1 processing and suppresses secretion of mature IL-1β and IL-18. 14 In LPS-induced acute pneumonia, vitamin B6 down-regulates inflammatory gene expression by increasing AMP-activated protein kinase phosphorylation. 15 In experimental sepsis, vitamin B6 reduces oxidative stress in the lungs and liver. 16 Nevertheless, the exact mechanism of the anti-inflammatory role of vitamin B6 is still unclear and requires further research. Sphingosine 1-phosphate (S1P), a potent bioactive sphingolipid metabolite, is a crucial regulator of immunity. 17 S1P can affect the activation of NF-κB, MAPK, and other signalling pathways in many cell types, including macrophages. [18][19][20] Excessive S1P levels are associated with increased inflammation and can lead to inflammatory diseases, such as inflammatory bowel disease and multiple sclerosis. 21,22 Sphingosine 1-phosphate lyase (SPL), a PLP-dependent enzyme, irreversibly degrades S1P into hexadecenal and phosphoethanolamine. 12 SPL regulates the normal physiological function of the body by regulating circulating levels of S1P. 23 In this study, a novel mechanism was demonstrated whereby vitamin B6 prevents excessive inflammation by reducing the accumulation of S1P in an SPL-dependent manner; S1P supplementation or SPL deficiency significantly inhibited the anti-inflammatory effects of vitamin B6.
Furthermore, vitamin B6 supplementation prevented the development of experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis. Collectively, these findings revealed a novel anti-inflammatory mechanism of vitamin B6 and provided guidance on its clinical use.

| Mice
C57BL/6 mice were obtained from the Lab Animal Center of Southern Medical University (Guangzhou, China). Sgpl1+/− mice were obtained from the Jackson Laboratory and were bred to generate Sgpl1+/+ and Sgpl1−/− littermates. Because homozygotes exhibit serious physical defects, such as vascular defects, polychromasia, kidney defects, and palate bone fusion abnormalities, Sgpl1+/+ and Sgpl1−/− mice were not used for in vivo animal experiments. All mice were used at an age of 6-8 weeks and were maintained under specific pathogen-free conditions in the Lab Animal Center.

| Enzyme-linked immunosorbent assay (ELISA)
IL-1β, TNF-α, and IL-6 levels in culture supernatant and mouse serum were measured with enzyme-linked immunosorbent assay kits (Excell Bio, China) according to the manufacturer's protocol. Dilution factors differed between serum and culture supernatant tests: serum was diluted 1:1 (serum:diluent) and culture supernatant was diluted 1:2 (supernatant:diluent).

| Quantitative PCR analysis
Total RNA was purified from mouse macrophages using TRIzol reagent (Thermo Fisher Scientific, USA), and cDNA was synthesized.

| Western blotting
Macrophages were washed three times with ice-cold PBS and lysed for 20 min on ice in RIPA buffer (Sigma-Aldrich) with protease and phosphatase inhibitor cocktails (Sigma-Aldrich). Equal amounts (20 mg) of cell lysates were resolved on 8%-15% polyacrylamide gels and transferred to PVDF membranes (Bio-Rad, USA). Membranes were blocked in 5% non-fat dry milk in PBS-T.

| Statistics
All experiments were performed at least twice. Where shown, multiple samples represent biological (not technical) replicates of mice randomly sorted into each experimental group. No blinding was performed during animal experiments. Statistical differences were determined using Prism 5 (GraphPad Software, Inc) with unpaired two-tailed t tests (to compare two groups with similar variances) or one-way ANOVA with Bonferroni's multiple comparison test (to compare more than two groups). Differences between mouse survival curves were evaluated by the log-rank (Mantel-Cox) test. P < .05 was considered significant.
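As an illustration of the comparisons summarized in the Statistics subsection above, the following R sketch shows equivalent tests to those run in Prism; the data frame `df`, its columns, and the survival table are hypothetical placeholders, not the authors' data or analysis scripts.

```r
# Hypothetical data: cytokine level by treatment group, and survival times
set.seed(1)
df <- data.frame(
  group = factor(rep(c("Ctrl", "VitB6", "VitB6_S1P"), each = 8)),
  il6   = c(rnorm(8, 900, 80), rnorm(8, 500, 80), rnorm(8, 850, 80))
)

# Two groups with similar variances: unpaired two-tailed t test
t.test(il6 ~ group,
       data = droplevels(subset(df, group %in% c("Ctrl", "VitB6"))),
       var.equal = TRUE)

# More than two groups: one-way ANOVA followed by Bonferroni-adjusted pairwise comparisons
summary(aov(il6 ~ group, data = df))
pairwise.t.test(df$il6, df$group, p.adjust.method = "bonferroni")

# Survival curves: log-rank (Mantel-Cox) test with the survival package
library(survival)
surv_df <- data.frame(
  time   = c(rexp(10, 1/40), rexp(10, 1/80)),  # hours to death (illustrative)
  status = 1,                                  # 1 = event observed
  arm    = rep(c("LPS", "LPS_VitB6"), each = 10)
)
survdiff(Surv(time, status) ~ arm, data = surv_df)
```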
| Vitamin B6 inhibited pro-inflammatory cytokine production in vivo and in vitro
Although previous reports have shown the anti-inflammatory activity of vitamin B6, the associated mechanisms remain unclear. The anti-inflammatory effect of vitamin B6 was first verified in vivo. Acute inflammation was induced in mice using a low dose of LPS, and serum IL-1β, TNF-α, and IL-6 levels were suppressed by vitamin B6 (Figure 1A). Likewise, serum NO levels were significantly reduced in the vitamin B6-treated groups (Figure 1B). Excessive inflammation can lead to pathological damage and death; to test the anti-inflammatory effect of vitamin B6 under such conditions, mice were injected with a high dose of LPS to induce lethal endotoxic shock. The initial time of death was delayed, and the survival rate was improved, in mice treated with vitamin B6 compared with control mice (Figure 1C). The anti-inflammatory effect of vitamin B6 was then verified in vitro. Bone marrow-derived macrophages (BMDMs) were pre-treated with PBS or PL and then stimulated with LPS. We found that the mRNA expression of IL-1β, TNF-α, IL-6, and iNOS was reduced in the vitamin B6-pre-treated groups compared with the control groups (Figure 2A). Moreover, BMDMs pre-treated with PL secreted decreased amounts of IL-1β, TNF-α, and IL-6 (Figure 2B), and the concentration of NO was reduced in the culture supernatant of BMDMs pre-treated with PL (Figure 2C). Taken together, these results suggest a protective role of vitamin B6 in excessive inflammation.

| Vitamin B6 inhibits pro-inflammatory cytokines through various signalling pathways
The specific molecular pathways that mediate the anti-inflammatory effect of vitamin B6 in BMDMs remained unclear. To investigate these pathways, BMDMs were pre-treated with PL and then stimulated with LPS. PL pre-treatment reduced the phosphorylation of p65, p38, ERK, and JNK in BMDMs (Figure 3A,B). The NF-κB inhibitor JSH-23, MEK1/2 inhibitor U0126, p38 inhibitor SB203580, and JNK inhibitor SP600125 were used to inhibit the corresponding signalling pathways, and BMDMs pre-treated with PL still had reduced expression levels of IL-1β, TNF-α, and IL-6 compared with the control groups when any single signalling pathway was inhibited (Figure 3C). Likewise, the concentrations of NO were reduced in the culture supernatant of BMDMs pre-treated with PL (Figure 3D). Together, these results indicate that vitamin B6 plays an anti-inflammatory role by inhibiting the NF-κB and MAPK signalling pathways.

| Vitamin B6 reduced accumulation of S1P by promoting SPL activity
Studies on the direct target molecules through which vitamin B6 regulates anti-inflammatory reactions are lacking. A previous report showed that the active forms of vitamin B6 serve as a co-factor in more than 150 enzymatic reactions. 12 We therefore examined SPL activity in macrophages. PL did not affect SPL expression in BMDMs (Figure 4A); however, SPL activity was significantly enhanced when PL was added (Figure 4B), and S1P, a catalytic substrate of SPL, was significantly decreased in the PL-treated groups (Figure 4C). To investigate whether PL plays an anti-inflammatory role through the SPL-S1P axis, S1P recovery experiments were carried out. Western blot analysis revealed that S1P supplementation restored the phosphorylation of p65, p38, ERK, and JNK that had been suppressed by PL (Figure 4D,E).
p38, ERK and JNK (Figure 4D and E). Moreover, the extent to which phosphorylation recovered was positively correlated with the concentration of S1P (Figure 4D and E). Likewise, S1P treatment lowered the ability of PL to inhibit the production of IL-1β, TNF-α, IL-6 and NO (Figure 4F), and a high dose of S1P completely counteracted the anti-inflammatory effects of PL (Figure 4F). These results demonstrated that vitamin B6 plays an anti-inflammatory role by promoting SPL activity and thereby reducing the accumulation of S1P. | Elimination of the anti-inflammatory effects of vitamin B6 by SPL deficiency If vitamin B6 regulates the inflammatory response through SPL as its direct target, it should not exert an anti-inflammatory effect in an SPL-deficient setting. As Sgpl1−/− mice exhibit serious physical defects, such as vascular defects, polychromasia, kidney defects and palate bone fusion abnormalities, in vivo experiments could not be carried out in these mice. Therefore, experiments in BMDMs were carried out to investigate the effect of SPL deficiency on the anti-inflammatory activity of vitamin B6. SPL deficiency led to significantly reduced SPL activity, and treatment with PL did not enhance SPL activity in Sgpl1−/− BMDMs (Figure 5A). BMDMs from Sgpl1−/− mice accumulated more S1P after stimulation than BMDMs from WT mice, and PL did not reduce S1P accumulation in Sgpl1−/− BMDMs (Figure 5B). Consistent with these results, PL no longer suppressed the production of IL-1β, TNF-α, IL-6 and NO in Sgpl1−/− BMDMs (Figure 5C). These results indicated that the anti-inflammatory effect of vitamin B6 depends on the regulation of SPL activity. | S1P counteracted the anti-inflammatory effects of vitamin B6 in vivo The in vitro assays showed that vitamin B6 suppresses the inflammatory response by promoting SPL activity and thereby reducing the accumulation of S1P. To validate the same mechanism in vivo, S1P recovery experiments were performed in mice. Mice pre-treated with S1P had elevated levels of IL-1β, TNF-α, IL-6 and NO (Figure 6A, B), and the anti-inflammatory effect of vitamin B6 was completely abolished by S1P supplementation (Figure 6A, B). Importantly, no differences in cytokine levels were seen between the S1P-treated groups and the vitamin B6 plus S1P co-treated groups (Figure 6A, B). Furthermore, we monitored the survival of mice with lethal endotoxic shock and found that mortality increased significantly in mice treated with S1P (Figure 6C). Vitamin B6 could not rescue mice from lethal endotoxic shock when S1P was administered simultaneously (Figure 6C). Taken together, these results suggested that vitamin B6 plays an anti-inflammatory role in vivo by reducing the accumulation of S1P. | Vitamin B6 suppressed EAE progression in vivo Excessive inflammation is associated with the development of autoimmunity of the central nervous system. Considering the strong anti-inflammatory properties of vitamin B6, its role in EAE was investigated. EAE was induced in mice, which were then orally administered PBS or vitamin B6 daily. The EAE clinical score of the vitamin B6-treated mice was significantly lower than that of the control groups (Figure 7A).
The overall S1P concentrations were lower in mice treated with vitamin B6 than in control mice (Figure 7B). Similarly, ELISA showed that vitamin B6 treatment reduced IL-1β, TNF-α and IL-6 (Figure 7C). Collectively, these findings indicate that vitamin B6 prevents excessive inflammation by reducing the accumulation of S1P in an SPL-dependent manner (Figure 7D). Vitamin B6 supplementation was beneficial in controlling excessive inflammation, including the development of EAE. | DISCUSSION Vitamins are trace organic substances required by humans and animals to maintain normal physiological functions, including growth, metabolism and development. 25,26 The immune-regulatory functions of vitamins have received considerable attention, and the immunoregulatory mechanisms of several vitamins, such as vitamins A, C, D, B1 and B5, have been investigated. [27][28][29][30] In the present study, evidence was provided for the anti-inflammatory activity of vitamin B6 in LPS-induced acute inflammation and autoimmune disease. Vitamin B6 supplementation was found to reduce the accumulation of S1P by enhancing the enzymatic activity of SPL. Previous studies have shown the anti-inflammatory activity of vitamin B6 in several inflammatory diseases. In patients with rheumatoid arthritis, vitamin B6 supplementation attenuated pro-inflammatory responses by suppressing TNF-α and IL-6 levels. 31 Both human and animal studies have shown that vitamin B6 supplementation suppresses colon tumorigenesis. 32,33 Clinical studies found an inverse relationship between vitamin B6 intake and the risk of Parkinson's disease and Alzheimer's disease. 34,35 A recent study showed that vitamin B6 supplementation effectively prevented lung inflammation. 15 In the present study, vitamin B6 prevented lethal endotoxic shock by suppressing excessive inflammation, consistent with previous research. 14 The anti-inflammatory mechanism of vitamin B6 is complex. Vitamin B6 suppresses NF-κB activation and NLRP3-mediated caspase-1 activation. 13,14 Another study showed that vitamin B6 promotes AMPK phosphorylation via DOK3 to inhibit LPS-induced macrophage activation. 15 Consistent with these results, vitamin B6 was found here to reduce the expression of pro-inflammatory cytokines via suppression of the NF-κB and MAPK signalling pathways. However, the direct target molecules through which vitamin B6 suppresses these signalling pathways had not been identified. Here, we demonstrated that vitamin B6 suppresses excessive inflammation by regulating SPL activity to reduce S1P levels. SPL is a PLP-dependent enzyme and is a direct target through which vitamin B6 exerts its anti-inflammatory effect. S1P is a bioactive sphingolipid that binds to cell-surface G protein-coupled receptors (GPCRs), designated S1P1-5, and thereby mediates effects in a variety of cell types, including not only macrophages but also lymphocytes. 38 Previous reports showed that S1P signalling is involved in regulating the differentiation of T cells, including T helper 17 cells and the T helper 1/regulatory T cell balance. 39 [Figure 6. S1P reverses the anti-inflammatory effect of vitamin B6 in vivo. (A, B) C57BL/6 mice aged 8 weeks were orally administered saline (Ctrl) or vitamin B6 (20 mg/kg bodyweight), injected i.p. with saline or S1P (85 μg/kg bodyweight), and injected i.p. with saline or LPS (5 mg/kg bodyweight) 2 h later; serum was collected after 24 h (n = 5 per group) and (A) IL-1β, TNF-α and IL-6 were quantified by ELISA and (B) NO was measured by nitrate reductase assay. (C) Mice were treated as above but with LPS at 10 mg/kg bodyweight (n = 10 per group), dosing with saline, vitamin B6 or S1P was repeated daily, and survival was recorded. Data are mean ± SD; **P < .01, ***P < .001; representative of three independent experiments.] We have confirmed that vitamin B6 suppresses excessive inflammation by regulating SPL activity to reduce S1P levels in
macrophages. Vitamin B6 may also play a role in regulating the differentiation of T cells, and further research is required to clarify this possibility. Taken together, these findings suggest that vitamin B6 supplementation suppresses excessive inflammation by directly enhancing SPL activity to reduce the accumulation of S1P. Thus, vitamin B6 supplementation may have important therapeutic implications in the clinical management of inflammatory diseases such as endotoxic shock and multiple sclerosis. | CONFLICTS OF INTEREST The authors declare no competing financial interests.
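For the group comparisons described in the Statistics section, the sketch below shows how the same tests could be run in Python. This is an illustration only: the original analysis used GraphPad Prism 5, the data here are simulated placeholders, and the survival comparison (log-rank test) is only noted in a comment because it requires an additional package such as lifelines.

```python
# Illustrative Python analogue of the comparisons described in the Statistics
# section. Hypothetical data; the published analysis was done in Prism 5.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups with similar variances (e.g., serum IL-1beta, Ctrl vs vitamin B6):
# unpaired two-tailed t test.
ctrl = rng.normal(200, 30, size=5)      # hypothetical pg/mL, n = 5 mice per group
vit_b6 = rng.normal(120, 30, size=5)
t_res = stats.ttest_ind(ctrl, vit_b6)

# More than two groups (e.g., Ctrl, PL, PL + low S1P, PL + high S1P):
# one-way ANOVA, followed by Bonferroni-corrected pairwise t tests.
groups = [rng.normal(mu, 25, size=5) for mu in (200, 110, 150, 195)]
anova = stats.f_oneway(*groups)

print(f"t test p = {t_res.pvalue:.3f}; ANOVA p = {anova.pvalue:.3f}")
# Survival curves (Figures 1C and 6C) would be compared with a log-rank
# (Mantel-Cox) test, e.g. lifelines.statistics.logrank_test.
```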
Dependence of NAO variability on coupling with sea ice The variance of the North Atlantic Oscillation index (denoted N ) is shown to depend on its coupling with area-averaged sea ice concentration anomalies in and around the Barents Sea (index denoted B ). The observed form of this coupling is a negative feedback whereby positive N tends to produce negative B , which in turn forces negative N . The effects of this feedback in the system are examined by modifying the feedback in two modeling frameworks: a statistical vector autoregressive model ( F VAR ) and an atmospheric global climate model ( F CAM ) customized so that sea ice anomalies on the lower boundary are stochastic with adjustable sensitivity to the model’s evolving N . Experiments show that the variance of N decreases nearly linearly with the sensitivity of B to N , where the sensitivity is a measure of the negative feedback strength. Given that the sea ice concentration field has anomalies, the variance of N goes down as these anomalies become more sensitive to N . If the sea ice concentration anomalies are entirely absent, the variance of N is even smaller than the experiment with the most sensitive anomalies. Quantifying how the variance of N depends on the presence and sensitivity of sea ice anomalies to N has implications for the simulation of N in global climate models. In the physical system, projected changes in sea ice thickness or extent could alter the sensitivity of B to N , impacting the within-season variability and hence predictability of N . Introduction Physical reasoning suggests that winter sea ice variability over the North Atlantic should be sensitive to the overlying atmospheric circulation since the latter can generate anomalies of sea ice velocity, atmospheric heat transport, and oceanic heat transport. As an example of this coupling, the upward trend of the North Atlantic Oscillation (NAO) index (N) from the 1960s through the mid-1990s increased the rate of winter sea ice retreat over the North Atlantic (Deser 2000;Venegas and Mysak 2000;Rigor et al. 2002;Hu et al. 2002;Liu and Curry 2004;Rothrock and Zhang 2005;Ukita et al. 2007). Since then, the NAO trend has reversed, and an overall downward trend in total sea ice extent is emerging that appears to be anthropogenic (Johannessen et al. 2004) and accelerating in summer (e.g. Comiso 2006;Serreze et al. 2007). There nonetheless remains, superimposed on this overall downward trend in sea ice extent, a measurable signature of forcing by atmospheric circulation variability (Comiso 2006;Maslanik et al. 2007;Francis and Hunter 2004;Deser and Teng 2008). A substantial fraction of this atmospheric forcing is connected to the NAO, whose imprint is discernible as wind-driven sea ice extent anomalies in daily data (Kimura and Wakatsucchi 2001), and ice motion and thickness anomalies in multi-year satellite records (Kwok et al. 2005). During positive NAO, sea ice concentrations in the Barents Sea tend to be lower than average in association with increased temperatures related to enhanced atmospheric and oceanic heat transport (Yamamoto et al. 2006;Wang et al. 2000;Liu and Curry 2004). Koenigk et al. (2009) recently used a fully coupled model to show that that sea ice concentrations within the Barents Sea are most sensitive to wind-driven sea ice transport, with oceanic heat transport playing a small role in interannual variations and a large role in variations on longer time scales. 
NAO-driven sea ice variations project strongly onto, and are likely largely responsible for, the leading pattern of North Atlantic sea ice concentration variability. This leading pattern consists of a dipole of oppositely signed concentration anomalies in the Labrador and Barents Seas, where Barents Sea concentrations are lower during positive NAO. We refer to this variability pattern as the Greenland Sea-ice Dipole (GSD). Modeling studies have shown that a positive GSD-like sea ice pattern sustained from December through April will generate a negative NAO-like hemispheric-scale response (Deser et al. 2004; Alexander et al. 2004; Kvamsto et al. 2004). These results are evidence of a negative feedback, since the positive NAO produces a positive GSD pattern. Deser et al. (2007) showed that this negative feedback begins as a baroclinic response localized to the forcing that reaches peak intensity in 5-10 days and persists for 2-3 weeks. If the GSD pattern is sustained beyond several weeks, the atmosphere develops a larger-scale equivalent barotropic response resembling the negative polarity of the Northern Annular Mode, which is maintained primarily by nonlinear transient fluxes of eddy vorticity (Deser et al. 2007) related in part to changes in Rossby wave breaking (Strong and Magnusdottir 2010b). There is evidence of non-stationarity in the association between sea ice and the NAO related to multi-decadal external forcing. Modeling studies show, for example, a strong impact of the North Atlantic meridional overturning circulation (MOC) on sea ice in the Arctic Ocean and Barents Sea (Delworth et al. 1997; Jungclaus et al. 2005). Reconstructing North Atlantic sea ice extent back to 1800, Fauria et al. (2009) found weak running correlations prior to 1950 and significant correlations thereafter. For the period beginning in 1978, Strong et al. (2009) detected significant negative feedback between winter sea ice and the NAO at weekly time scales using satellite observations of sea ice concentration, atmospheric reanalysis data, and the testable definitions of causality and feedback developed by Granger (1969). For interannual and longer time scales, Strong and Magnusdottir (2010a) examined multi-model ensemble simulations of the twentieth to twenty-third centuries, and found that an NAO-driven pattern of sea ice variability will persist but change somewhat in form as the ice edge retreats under projected global warming. The present manuscript is focused on how the feedback between sea ice and the NAO affects the variance of sea ice and the variance of the NAO index. If, for example, we turn off the feedback or double the sensitivity of the feedback, how is the variance of the NAO index affected? To quantify the effects of the sea ice-NAO feedback, we use two observationally motivated models of the sea ice-NAO system. The first model is based on the vector autoregressive statistical model used in SMS. The second model is an atmospheric global climate model modified so that sea ice anomalies on the lower boundary are stochastic with adjustable sensitivity to the model's evolving NAO index. Our observational data are described in Sect. 2, followed by our modeling methods (Sect. 3), Results (Sect. 4), and a Summary and Discussion (Sect. 5). Data Consistent with SMS, we defined the NAO index N based on the leading empirical orthogonal function (EOF) of weekly mean NCEP/NCAR reanalysis sea level pressure data for the 21-week extended winter from 4 December through 23 April of each year in the study period.
Data were detrended, deseasonalized, and restricted to the domain used in Hurrell (1995) (20°-80°N and 90°W-40°E). A portion of this EOF is contoured in Fig. 1. SMS used a sea ice index (G) based on the GSD pattern. Here, we simplify this by focusing on the Barents Sea center of action since it accounts for nearly the entire feedback signal detected by Magnusdottir et al. (2004). We define an index B which is the area-weighted, weekly-mean sea ice concentration anomaly within the region outlined in Fig. 1, where the anomaly is relative to the long-term mean for that week. This definition of B yields an intuitive sign convention whereby high B corresponds to anomalously high sea ice concentrations over the Barents Sea, but we note that the sign of B is opposite to the sign of the GSD index, so the negative feedback in this system is as shown in Sect. 4.1. The B index is calculated using National Snow and Ice Data Center (NSIDC) sea ice concentrations derived from Nimbus-7 Scanning Multichannel Microwave Radiometer and Defense Meteorological Satellite Program Special Sensor Microwave/Imager radiances (Cavalieri et al. 2008). These sea ice data are on a 25-km grid, nominally once every 2 days for 1978-1986 and once daily from 1987 to present. We do not include winter 1987-1988 in our study because of a data gap. Modeling We use two modeling frameworks to explore how feedback between sea ice and the NAO affects the variance of sea ice and the variance of the NAO. The first framework is a linear statistical model coupling N and B as a vector autoregressive (VAR) process (Sect. 3.1, framework denoted F VAR ). The second framework couples a linear stochastic model of B to the NCAR Community Atmosphere Model (CAM) Version 3.0 (Sect. 3.2, framework denoted F CAM ). The simplicity of F VAR allows us to run many long experiments with minimal computational expense, and to obtain explicit expressions for how the variances of N and B are affected by feedback. The F VAR results provide a view of system behavior based on a linear framework, and we compare these linear results to analogous experiments in the F CAM system, which solves nonlinear partial differential equations to determine N. Statistical model In SMS, we studied feedback between sea ice and the NAO using a VAR model that included contemporaneous as well as lagged effects. For our purposes here, we maintain lag order p, simplify the model by excluding contemporaneous effects, and introduce "feedback scaling parameters" g and h to be used when experimenting with the model. Denoting N during week t by N_t, and B during week t by B_t, we write F VAR as B_t = Σ_{i=1}^{p} [φ_BB(i) B_{t−i} + g φ_BN(i) N_{t−i}] + ε_Bt and N_t = Σ_{i=1}^{p} [φ_NN(i) N_{t−i} + h φ_NB(i) B_{t−i}] + ε_Nt (2), where B_t and N_t are stationary with zero mean, and ε_Bt and ε_Nt are uncorrelated white noise disturbances with respective standard deviations σ_B and σ_N. The parameters g and h govern, respectively, how sensitive B is to N and how sensitive N is to B. In general, and particularly when fitting F VAR to observations, g = h = 1. The feedback scaling parameters may be given values other than 1 for the purpose of experimentation aimed at determining the response of the system to a stronger (e.g., g = 1.5) or weaker feedback (e.g., g = 0.5). We can write (2) compactly as x_t = Σ_{i=1}^{p} φ_i x_{t−i} + ε_t (3), where x_t = (B_t, N_t)^T, the φ_i are 2 × 2 coefficient matrices containing the scaled parameters, and ε_t = (ε_Bt, ε_Nt)^T. To determine the appropriate order p, we fit the model to observations at order p (i.e., "unrestricted") and compare it to the model fit at order p − 1 (i.e., "restricted"). Where we detect a significant degradation in model strength going from p to p − 1, we declare p to be the appropriate model order.
Significant degradations in model strength are tested for using the log-likelihood ratio given by Sims (1980), L = (T − c)(log|R_r| − log|R_u|) (4), where T is the number of usable observations, c the maximum number of regressors in the longest equation, and |R_u| and |R_r| are the determinants of the covariance matrices of the unrestricted and restricted model residuals, respectively. To quantify the variance of N and the variance of B in F VAR , we want expressions for the 2p + 1 covariance matrices c(m) = E(x_t x_{t−m}^T), m = 0, ±1, …, ±p (5), where m is the lag, (·)^T denotes transpose, E(·) denotes expectation, and c_BN(m) is the covariance where N leads B by m weeks. For convenience, we denote the variances of N and B as c NN and c BB . To obtain the elements of the 2p + 1 matrices in (5), we post-multiply (3) by x_{t−j}^T, j = 0, 1, ..., p, and take expectations, yielding (e.g., Brockwell and Davis 1996) c(j) = Σ_{i=1}^{p} φ_i c(j − i), j = 1, …, p (6) and c(0) = Σ_{i=1}^{p} φ_i c(i)^T + Σ_ε (7), where Σ_ε is the covariance matrix of ε_t. Numerically solving the linear system (6-7) yields all the unique elements of (5), and we will be focusing primarily on the variances c NN and c BB . As we will show in Sect. 4.2, the statistical properties of simulations from F VAR converge to the solutions to (6-7) for large sample size. 3.2 Atmospheric global climate model As described above, F VAR couples a linear expression for N to a linear expression for B, where the latter is written B_t = Σ_{i=1}^{p} [φ_BB(i) B_{t−i} + g φ_BN(i) N_{t−i}] + ε_Bt (8). F CAM retains (8) as the linear expression for B, but couples it to CAM. To achieve this coupling, we wrote a parallelized module for CAM that introduces a weekly sea ice concentration anomaly according to Eq. 8, but with the values of weekly mean N calculated from the sea level pressure fields being generated within CAM during the run. The module requires three input files: (1) the spatial pattern of the NAO obtained from a long unforced run analyzed the same way as the observational NAO (Sect. 2), (2) a weekly climatology of sea level pressure based on a long unforced run, and (3) a "B-file" containing mapped sea ice concentration anomalies for a range of B values and months. The sea ice concentration anomalies in the B-file are given on the model's latitude-longitude grid, and are specified as a function of month and the index B. Each anomaly map is a composite of NSIDC sea ice concentration anomaly observations grouped by the five months December through April and seven B index bins centered on the integers −3, −2, …, 3. As illustrative examples, Fig. 1a shows the composite sea ice anomaly for all January observations with B indices in the bin centered on B = −2, and Fig. 1b shows the composite sea ice anomaly for all February observations with B indices in the bin centered on B = 3. Experimentation with F CAM using more idealized, smooth anomalies produced results similar to those presented here. Week t = 1 is defined as the first seven days of model integration, and F CAM specifies the preceding p weeks as initial conditions. When the model initializes, it therefore sets t = 1 and does the following: 1. Set initial conditions B_{t=1−i} = 0 for i = 1, 2, …, p. 2. Set initial conditions N_{t=1−i} = c for i = 1, 2, …, p, where c is the value of the NAO index calculated from the sea level pressure field in the initial condition file. 3. Calculate B_1 using the 2p initial conditions and Eq. 8. 4. Determine the sea ice concentration anomaly field to be applied during week t = 1 by going to the current model month in the B-file and linearly interpolating the sea ice concentration anomaly at each grid point as a function of B to the value B_1.
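Both the initialization above and the weekly cycle that follows reduce to two operations: advancing B with Eq. 8 and interpolating an anomaly map from the B-file. The Python sketch below illustrates only those two operations; it is not the parallelized CAM module itself, and the coefficient values are placeholders rather than the fitted values reported in Table 2.

```python
# Schematic of the two operations the F_CAM module performs each week:
# (1) advance B with the Eq. 8 autoregression (feedback scaling g),
# (2) interpolate a gridded anomaly map from B-file composites binned by
#     integer B for the current month. Coefficients are placeholders.
import numpy as np

p = 4                                        # lag order fitted to observations
g = 1.0                                      # sensitivity of B to N
phi_BB = np.array([0.5, 0.2, 0.1, 0.05])     # illustrative, not the Table 2 values
phi_BN = np.array([-0.3, -0.1, -0.05, 0.0])
sigma_B = 0.5

def next_B(B_hist, N_hist, rng):
    """B_t from the last p values of B and N plus white noise (Eq. 8)."""
    B_lags = np.asarray(B_hist)[-p:][::-1]   # B_{t-1}, ..., B_{t-p}
    N_lags = np.asarray(N_hist)[-p:][::-1]
    return phi_BB @ B_lags + g * (phi_BN @ N_lags) + rng.normal(0.0, sigma_B)

def anomaly_field(B_t, month_maps):
    """Linearly interpolate the anomaly map to the value B_t.

    month_maps: dict mapping integer B bin centres (-3..3) to 2-D composite
    anomaly maps for the current model month.
    """
    bins = np.array(sorted(month_maps))
    maps = np.stack([month_maps[b] for b in bins])       # (n_bins, ny, nx)
    Bc = np.clip(B_t, bins[0], bins[-1])
    i = np.clip(np.searchsorted(bins, Bc) - 1, 0, len(bins) - 2)
    w = (Bc - bins[i]) / (bins[i + 1] - bins[i])
    return (1 - w) * maps[i] + w * maps[i + 1]
```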
The module then takes the steps required to perform the following every seven days beginning on week t = 2: 1. Calculate N t-1 using the preceding week's sea level pressure fields, which the model has stored, weighted by the square root of the grid area, deseasonalized, and projected onto the spatial pattern of the NAO. 2. Calculate B t from Eq. 8 using B t-i and N t-i for i ¼ 1; 2; . . .; p: 3. Determine the sea ice concentration anomaly field to be applied during next seven days by going to the current model month in the B-file and linearly interpolating the sea ice concentration anomaly at each grid point as a function of B to the value B t . For the sea ice concentration anomalies used in our experiments, going to a specific month in the B-file produces anomaly patterns that are similar to those obtained by performing a more expensive bi-linear interpolation as a function of month and B t . F CAM can be thought of as a modeling framework that is intermediate between running CAM coupled to a full ice model and running CAM forced by sea ice linearly interpolated from a climatology. This intermediate framework is useful for our purposes because we can explicitly control aspects of B including whether, and to what degree, B is sensitive to variations in CAM's evolving N. Experiments We designed our experiments to uncover the effects of feedback in the B and N system. In the experiments, we varied the values of the feedback scaling parameters g and h as shown in Table 1. Our control case (CTL) corresponds conceptually to the observed system, with N and B having realistic sensitivity to one another (i.e., g = h = 1). For the CTL case, B evolved as a vector autoregressive process sensitive to the past states of itself, the past states of N, and a stochastic forcing. For the IND experiment, g = h = 0 meaning that B and N evolved independently. In terms of F CAM , the IND case is equivalent to forcing the atmosphere with an anomaly-free sea ice climatology. For the AR experiment, B evolved as an autoregressive process, meaning it was sensitive to the past states of itself and a stochastic forcing. For the VAR2 experiment, B evolved as in CTL but with a doubled sensitivity to N (i.e., g = 2). In the physical system, a change in the responsiveness of B to N could be related to, for example, changes in the thickness or extent of the sea ice. For F CAM , we developed a 150-member ensemble for each of CTL, IND, AR, and VAR2. Each member covered the 21 weeks beginning on December 4, with initial conditions taken from a long unforced run. We define the response as the total variance of the experiment ensemble divided by the total variance of the CTL ensemble. For F VAR , we define the response as the variance in the experiment case divided by the variance of the CTL case, where each variance comes from the solution to the linear system (6-7). We denote the response by a vertical bar followed by a subscript denoting the experiment. For example, the response of c NN in the AR experiment is c NN|AR . Results We have two results sections. In the first (4.1), we present observations of N and B, fit the VAR to these observations, and verify that the F VAR and F CAM models reasonably capture the observed behavior of B and N. In Sect. 4.2, we present results from experimentation with F VAR and F CAM . Observations For observations, the lagged correlations of N and B are shown in Fig. 2, and we will interpret them in the context of a hypothetical, anomalously high value of N. 
Concurrent with this high value of N (at lag 0), B tends to be anomalously low, indicating a reduction in sea ice over the Barents Sea that is physically consistent with the patterns of temperature advection and sea ice velocity associated with the positive NAO. This tendency for anomalously low B is visible over lags of several weeks forward from the N anomaly. One to six weeks after B is anomalously low, N tends to decrease (positive correlations toward the left side of Fig. 2a). This is evidence of the negative feedback between N and B discussed in Sect. 1. Comparison of Fig. 2b and c shows that B is more autocorrelated than N and subject to fewer short-term fluctuations, or "noise." Fitting system (2) to observations, we find the appropriate model order to be p = 4 (method in Sect. 3.1), as in SMS. Parameter values for the φ matrices are given in Table 2. By testing the significance of these parameters using Eq. 4, we conclude at the 95% confidence level that there is Granger feedback (Granger 1969) between N and B. SMS provide a closely related result and more discussion of Granger feedback detection in this application. The blue and red curves in Fig. 2 show, respectively, lagged correlations for output from F VAR and F CAM . Both models capture the temporal covariation of N and B reasonably well. Model responses We first use F VAR to show how c NN and c BB respond to the g and h feedback scaling parameter settings in the IND, AR, and VAR2 experiments. As noted in Sect. 3.1, we can obtain these variances by numerically solving the linear system (6-7). For the CTL case, we solved Eqs. 6-7 with g = h = 1, and Table 3 provides some select values from this solution: c NN , c BB , and c BN (3). Figure 3 shows that synthetic data generated from F VAR converge toward these numerical solutions as t becomes large. To provide a context for the three experiment results, we calculated c NN and c BB responses for a large set of values of g and h ranging from −1 to 2.5 (Fig. 4a, b, respectively). In the AR experiment, feedback is turned off by setting g = 0, meaning that B is independent of previous values of N, but N still depends on past values of B. The response is a 5% increase in c NN (i.e., c NN|AR = 1.05, Fig. 4a). To provide the details underlying c NN|AR , we write the equation for c NN from (7): c NN = Σ_{i=1}^{p} [φ_NN(i) c NN (i) + h φ_NB(i) c NB (i)] + σ_N². The c NB (i) terms all become positive in AR, meaning that anomalies of B tend to be followed by like-signed anomalies of N, and the c NN (i) terms increase, meaning that the autocorrelation of N increases. Both changes contribute toward higher c NN . In the same experiment, the variance of B decreases by 6% (c BB|AR = 0.94, Fig. 4b): in the corresponding equation for c BB from (7), setting g = 0 reduces the variance of B because the c BN terms contribute positively to c BB in CTL. In the VAR2 experiment, we increase the strength of the negative feedback by setting g = 2. The variance c NN decreases by approximately 2% (c NN|VAR2 = 0.98, Fig. 4a) and c BB increases by approximately 42% (c BB|VAR2 = 1.42, Fig. 4b). In the IND experiment, the settings g = h = 0 isolate N and B from each other, and the variance of each decreases (c NN|IND = 0.99 in Fig. 4a, c BB|IND = 0.93 in Fig. 4b). The variance responses for the IND case are not large, and arise from the significant but small feedback captured by fitting the φ_i matrices in (3) to observational data. At the end of this section, we show that the variance responses in the F CAM results are larger. Commenting more generally on the surfaces in Fig.
4, the responses become rapidly large in portions of the response plane of g and h that are shaded in Fig. 4c. In these shaded regions, the sign of either g or h becomes negative, rendering the feedback positive and generating very large variances of N and B. The response surfaces are symmetrical about the origin, with asymptotes (dashed lines) along which the contoured response is isolated from the other system variable, as in the IND case. Using the c NN response as an example (Fig. 4a), there is an asymptote at h = 0 because this scaling desensitizes N to B, so a change in g, which is a scaling factor in the B equation, has no effect on N. We now turn to the F CAM results (red circles, Fig. 5). For the purposes of comparison, Fig. 5 shows portions of the F VAR results as curves. These F VAR response curves are taken from the surfaces in Fig. 4, running down the plots from VAR2 to CTL to AR (curve in Fig. 5a taken from Fig. 4a; curve in Fig. 5b taken from Fig. 4b). Considering the AR, CTL, and VAR2 results, the F CAM results agreed qualitatively with the F VAR results: turning feedback off in AR increases c NN and decreases c BB , whereas doubling the feedback sensitivity in VAR2 produces the opposite responses. F VAR models an approximately linear response of c NN to g (curve in Fig. 5a), and this curve is within one standard error of the F CAM results. F VAR models a strongly nonlinear response of c BB to g (curve in Fig. 5b), but the F CAM results lack the curvature of the F VAR model, suggesting an approximately linear response of c BB to g. Linear regressions of the variance responses in F CAM are shown as dashed lines in Fig. 5, and we conclude that the variances of N and B in F CAM depend approximately linearly on the sensitivity of B to N over the range 0 ≤ g ≤ 2. For the IND case, the c BB response is the same as in the AR case (not shown) because B lacks sensitivity to N in both experiments (i.e., g = 0, Table 1). For c NN in the IND case, h = 0 and the F CAM result matches in sign, but is stronger than the F VAR result (Fig. 5a). Specifically, F VAR predicts a small c NN response when isolating N from B, but F CAM produces the strongest response in c NN for the cases we examined, amounting to a 6% decrease. F VAR and F CAM are thus in agreement about responses with respect to the sensitivity of B to N (i.e., g), but agree less well about responses with respect to the sensitivity of N to B (i.e., h). This is not entirely unexpected since F CAM generates N using nonlinear equations of motion and parameterized model physics. Summary and discussion We quantified the effects of negative feedback between the North Atlantic Oscillation index (denoted N) and an index of sea ice concentration anomalies in and around the Barents Sea (denoted B). Statistically testable definitions of causality and feedback were used to conclude that, in observations, positive N tends to produce negative B, which in turn forces negative N. We then investigated this feedback by modifying it in two modeling frameworks: a statistical vector autoregressive model (F VAR ) and an atmospheric global climate model (F CAM ) customized so that sea ice anomalies on the lower boundary were stochastic with adjustable sensitivity to the model's evolving N. We defined a control case (CTL) in which B evolved as a vector autoregressive process sensitive to the past states of itself, the past states of N, and a stochastic forcing. For the IND experiment, B and N evolved independently.
For the AR experiment, B evolved as an autoregressive process, meaning it was sensitive to the past states of itself and a stochastic forcing. For the VAR2 experiment, B evolved as in CTL but with a doubled sensitivity to N. In the physical system, a change in the responsiveness of B to N could be related to interannually varying properties of the sea ice. For example, a thinner sea ice pack could be more responsive to thermodynamic forcing by NAO-driven atmospheric temperature advection. Also, the position of the sea ice edge relative to the centers of action of the NAO could govern how sensitive B is to wind-driven sea ice advection. In the following conclusions, we take ''feedback strength'' to mean the value of the feedback scaling parameter g, which is the sensitivity of B to N. The variance of N (c NN ) tends to decrease as feedback strength increases in F CAM and F VAR , and this sensitivity depends approximately linearly on g. The variance of B (c BB ) tends to increase as feedback strength increases in F CAM and F VAR , and this sensitivity is approximately linear in F CAM , exhibiting more curvature in F VAR . In F CAM , the IND case produced the strongest response in c NN , amounting to a 6% decrease in variance. This is different from the F VAR prediction that c NN would be very similar in the IND and CTL cases. This difference indicates that F VAR and F CAM produce reasonably similar responses when feedback is scaled by changing the sensitivity of B to N (i.e., g), but produce less similar responses when feedback is scaled by changing the sensitivity of N to B. Based on the F CAM results, the variance of N increased progressively from IND to VAR2 to CTL to AR. In other words, a zero-anomaly sea ice climatology (IND case) produces minimal N variance whereas, given that the sea ice field has anomalies (VAR2, CTL, or AR cases), the variance of N goes down as these anomalies become more sensitive to N. This negative-slope, approximately linear response of c NN to the feedback strength is consistent with the fundamental behavior of negative feedback, and its quantification has implications for the simulation of internal variability in atmospheric global climate models forced by sea ice, and for the predictability of N under projected changes in winter sea ice extent.
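As a rough cross-check of the qualitative behavior summarized above, the Python sketch below simulates a deliberately simplified lag-1 version of the coupled B/N system and reports the variance of N relative to a control case as the feedback scaling g is varied. The coefficients are illustrative (the paper fits lag order p = 4 to observations and also solves the covariance equations (6-7) directly), so only the sign and approximate linearity of the response should be compared, not the magnitudes.

```python
# Simplified lag-1 analogue of the F_VAR experiments: vary the sensitivity of
# B to N (feedback scaling g) and report var(N) relative to the control case.
# Coefficients are illustrative; the paper uses a fitted lag-4 model.
import numpy as np

def var_N(g, h=1.0, T=100_000, seed=0):
    rng = np.random.default_rng(seed)
    a_BB, a_BN = 0.7, -0.25    # B persists and responds negatively to N
    a_NN, a_NB = 0.3, 0.30     # N persists and responds positively to B
    sig_B, sig_N = 0.5, 1.0
    B = N = 0.0
    out = np.empty(T)
    for t in range(T):
        # Right-hand sides use last week's B and N (no contemporaneous effects).
        B, N = (a_BB * B + g * a_BN * N + rng.normal(0, sig_B),
                a_NN * N + h * a_NB * B + rng.normal(0, sig_N))
        out[t] = N
    return out.var()

ctl = var_N(g=1.0)
for g, name in ((0.0, "AR"), (1.0, "CTL"), (2.0, "VAR2")):
    print(f"{name:4s} (g = {g:.0f}): c_NN response = {var_N(g) / ctl:.3f}")
```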
Farming system context drives the value of deep wheat roots in semi-arid environments Highlight More extensive root systems can capture more water, but leave the soil in a drier state, potentially limiting water availability to subsequent crops. Introduction Several authors have proposed root traits which improve yield in water-limited environments, including increased root elongation rate and depth of rooting (Cooper et al., 1987; Lopes and Reynolds, 2010), root distribution at depth (Hurd, 1968, 1974; O'Brien, 1979), xylem vessel diameter (Richards and Passioura, 1989), angle of seminal roots (Nakamoto and Oyanagi, 1994; Manschadi et al., 2008), and the ratio of root:shoot dry matter (Siddique et al., 1990). Experiments and simulation studies have shown that the capture of subsoil water by deeper wheat roots can make a valuable contribution to yield on a range of deep soil types (Lilley and Kirkegaard, 2007; Christopher et al., 2008). We briefly describe the evidence for yield benefits from deeper and more extensive root systems, and review several simulation studies estimating the value to the crop of improved capacity to extract water from the soil. Tennant and Hall (2001) reviewed root depth and water uptake of 20 annual crop and pasture species included in ten different field studies in Western Australia. They concluded that rooting depth was strongly affected by soil type, particularly where limiting conditions occurred, and that amelioration of chemical or physical constraints increased root depth. Gregory et al. (1984) and others showed that even on potentially deep soils, the depth of soil wetting varies seasonally in the semi-arid zone and that dry soil due to limited rewetting can restrict root depth in some seasons. The root penetration rate (RPR), defined as the rate of downward root growth during the vegetative phase, was suggested as a useful indicator to assess genotypes or management interventions which improve root growth in the field. An RPR of 1.8 mm/°C.day was reported by Barraclough and Leigh (1984) for winter wheat growing in unconstrained soil in the UK, and twofold differences in RPR between genotypes in container-grown plants have been reported (Hurd, 1968). In field soils, maximum RPRs of 1.2-1.3 mm/°C.day have been reported for spring wheats on structured clay soils in Australia and for both spring and winter wheat cultivars grown on sandy soils in Denmark (Thorup-Kristensen et al., 2009). Wasson et al. (2014) found genetic variation in RPR of 0.9-2.2 mm/°C.day among a range of Australian and Indian cultivars and a biparental population, although this variation was measured in isolated 'hill plots', which do not relate to RPR in field plots (Wasson et al., unpublished). Tennant and Hall (2001) demonstrated that a significant increase in water uptake could be achieved by growing longer-season crop and pasture species or cultivars. The Danish study of Thorup-Kristensen et al. (2009) showed that roots of winter wheat crops grew twice as deep as spring wheat roots due to the longer duration of the crop, while an Australian study also reported deeper roots and greater water extraction for crops with a longer vegetative period. Increased root density at depth may result from longer residence time in deeper layers, but genetic differences in root morphology also exist (Gregory, 2006). Field experiments of Christopher et al. (2008) and root chamber experiments of Manschadi et al. (2008) compared two wheat genotypes varying in root morphology. They found that cv.
Seri had a narrower growth angle than cv. Hartog, and that the root system of Seri was deeper, denser and more evenly distributed with depth. Christopher et al. (2008) concluded that when deep water was present, the genotype with a denser root system (Seri) extracted more soil water, extending the duration of green leaf area and increasing yield. Others have also proposed screening for steeper root angle in other species, to select for deeper root systems which have more effective water capture at depth, where root length density typically declines (Manschadi et al., 2010; Lynch, 2013). McDonald et al. (2012) demonstrated yield benefits for wheat cultivars with a narrow seminal root angle in a range of Australian environments. Genotypic variation in the vigour of spring wheat root systems has also been demonstrated by Richards et al. (2007) and Palta and Watt (2009), and more recently considerable effort has been invested in screening large numbers of wheat lines in Australia in search of deeper and more extensive root systems (Wasson et al., 2012, 2014). However, White and Kirkegaard (2010) showed that in southern Australian soils, roots in subsoils are often clumped in soil pores and channels with poor root-soil contact, limiting soil water extraction. To increase water extraction from depth, roots must overcome these constraints and explore a greater soil volume. Field measurements of increased water extraction are therefore important for validating the assumed benefits of greater root proliferation, although this validation needs to be site-specific. The usefulness of individual root traits is largely determined by the pattern of water availability in the target environment. As a consequence, interactions between these root traits and the seasonal rainfall distribution, soil type and crop management at specific sites influence their impact on yield (Chenu et al., 2011, 2013). The advantages of timely sowing for improved water-use efficiency and yield of cereals in rain-fed environments are widely known (French and Schultz, 1984; Stapper and Fischer, 1990; Hocking and Stapper, 2001), but recommended sowing times have recently been re-evaluated in different regions in the face of climate, equipment and varietal changes. In southern Australia, there has been a decrease in the autumn rains on which the wheat crop was traditionally sown, and a drier and hotter spring, while summer rainfall has been stable (Pook et al., 2009; Cai et al., 2012). This has stimulated the development of earlier sowing systems based on improved summer fallow management practices to increase soil water storage, and the use of slower-maturing varieties at lower density to maintain optimum flowering times while increasing yield potential (Kirkegaard and Hunt, 2010; Hunt et al., 2013; Richards et al., 2014). The presence of stored soil water increases the likelihood of good crop establishment (Hunt and Kirkegaard, 2011), and the longer vegetative period of the slow-maturing cultivars increased rooting depth and access to stored water during grain filling. Consequently, there has been a significant transition to earlier sowing of wheat in southern Australia, but it is likely that much of the benefit may rely on the availability of deep stored water, which will vary from season to season. Conclusions drawn from field experiments are limited to the range of seasons experienced, so simulation studies are often used to extrapolate across more seasons.
In addition, cultivars that differ in root traits may also differ in shoot traits, confounding the experimental evidence for benefits of variation in root vigour. For example, the stay-green trait in sorghum (Sorghum bicolor) has been related to yield benefits in dry conditions but was also associated with canopy development, leaf anatomy, extensive root growth and greater water uptake (Borrell et al., 2014). Studies in wheat found that expression of the stay-green trait was associated with a yield benefit but was dependant on availability of deep soil water . Simulation studies offer the opportunity to hypothetically modify genetic characteristics of root systems without modifying shoot systems. Several studies which used simulation analysis to investigate the benefits to crop yield of modified root systems are summarized in Table 1. The extent to which the simulation studies have been validated in the field varies. The model of King et al. (2003) is conceptual while the others are process-based and have been validated in linked field studies to various degrees. Farre et al. (2010) and Wong and Asseng (2007) investigated the removal of subsoil constraints at over 30 locations across Western Australia, allowing increased root growth and greater access to soil resources. In that environment, the yield benefit was strongly related to the severity of the constraint and seasonal rainfall, as low rainfall years caused incomplete soil wetting. Benefits of constraint removal were much smaller (<1.0 t ha −1 ) on duplex soils where rooting depth was restricted to 0.9 m compared to the sandy soil (rooting depth 1.5-1.8 m), where yield benefits of up to 2.5 t ha −1 were predicted (Farre et al., 2010). King et al. (2003) used a model that described size and distribution of winter wheat root systems at anthesis. They investigated the predicted impact of a change in root system characteristics such as root distribution with depth, proportional dry matter partitioning to roots, resource capture coefficients for water and N capture and grain yield of cereal crops in the UK. They concluded that a larger investment by the crop in fine roots at depth in the soil, and less proliferation of roots in surface layers would improve yields by accessing extra resources. Dreccer et al. (2002) investigated the impact of ±2% or ±5% change in several root traits including maximum depth of extraction, root length density distribution with depth, and maximum rate of water uptake per unit length. Their simulation was targeted to shallow soils (0.9-1.1 m) in a lowrainfall area of Victoria, Australia and demonstrated up to 16.5% yield benefit from greater rooting depth and a smaller effect of improved rate of water uptake (efficiency; 2.5%). Semenov et al. (2009) also simulated rate of root descent and efficiency of water uptake on shallow (0.75 m) soils in UK and Spain as well as on deeper soils (1.5 m). They found that doubling of RPR had no impact on yield in either Spain or the UK, although slowing RPR decreased yield. Similarly, increased efficiency of water extraction produced a small (1.1%) increase in yield. The authors attributed the small response to the limited soil depth (0.6-0.75 m). The studies of Dreccer et al. (2002), King et al. (2003) and Semenov et al. (2009) all initialized simulations with a full soil water profile, and in these situations soil water content did not limit root penetration. 
While appropriate to the higher rainfall environments, profile water content at sowing is highly variable in many semi-arid environments and in many Australian examples, profiles do not fully rewet Kirkegaard 2007, 2011;Wong and Asseng 2007;Farre et al., 2010), which limits root depth. Manschadi et al. (2006) investigated modification of root distribution in the soil profile in Queensland, Australia, replicating characteristics of two wheat cultivars (Hartog and Seri) which differed in root density distribution. Their simulations were also reset at sowing in each year with a range of starting soil water conditions (total available water content: 130, 185 or 300 mm depending on location. At each location the profile was set at 1/3, 2/3 capacity or full at sowing). In those summer-dominant rainfall environments the crop relied to a large extent on stored water rather than in-crop rainfall, so the impact of initial conditions was significant. Mean yield increased and year-to-year variability decreased as initial soil water content increased, while the relative benefit of the more extensive root system decreased with increasing initial soil water content. All of the studies mentioned above demonstrated that on deeper soils with a plentiful initial soil water supply, increased root density, uptake efficiency or root depth led to predicted increases in water uptake and grain yields. On shallow soils (~1 m), predicted yield differences were small in the study of Semenov et al. (2009), but up to 16.5% in the study of Dreccer et al. (2002). Lilley and Kirkegaard (2011) conducted simulation analyses in Australia investigating the interaction of agronomic management with root modification on deep soils. They showed that in many years, fallow rainfall and in-crop rainfall were insufficient to fully wet the profile and final root depth of the subsequent crop was restricted by dry soil layers. The study showed that increased capture of deep water can occur through selection of cultivars with more extensive (faster descent and more effective) root systems. However, the impact of individual root traits on grain yield varied with site and season and interacted strongly with crop management, antecedent soil water content, seasonal rainfall distribution and soil type. Although this study considered the impacts of previous management and fallow rainfall conditions by resetting the soil water at the previous harvest (15 December) rather than at sowing, the simulations were restricted to single years. In reality, more effective root systems will leave the soil in a drier state, potentially leaving a legacy of limited water availability to subsequent crops and diminishing the overall system benefit of deeper roots. The analysis of Lilley and Kirkegaard (2011) was also restricted to soils of at least 1.6 m depth, where deep and effective root systems will have the greatest benefits. However, much of the Australian cropping zone has inhospitable subsoils below 0.5-1.0 m (saline, sodic, too acid, too alkaline, too high in boron, aluminium or manganese, or too low in zinc) and other nutrients that roots need (Passioura, 2002;Nuttall et al., 2003;Adcock et al., 2007;Nuttall and Armstrong, 2010). As a result, the previous simulation studies may be overestimating the value of modified root systems for many Australian cropping soils. 
For example, sodicity constraints have been reported for 59% of Victorian and 63% of South Australian arable land (Ford et al., 1993) and are estimated to affect more than 26% of Queensland (Dang et al., 2006;MacEwan et al., 2010) and around 50% of arable land nationally. Since the majority of previous studies used full soil water profiles at sowing, annual resetting and/or deep soils in the analysis of the value of deep roots, it is possible there has been an overestimation of the likely benefits of deep roots at the systems scale. To investigate this possibility, we conducted a simulation analysis to investigate the impacts of annual resetting of soil water content vs continuous simulation to capture the legacy effect on the predicted value of modified root systems, using diverse semi-arid environments in Australia as a case study. We also compared the benefits for crop yield of modified root systems with those of earlier sowing, an agronomic intervention known to increase maximum rooting depth Thorup-Kristensen et al., 2009;, and the trajectory of shoot biomass and water demand in the crop. Finally, we considered the importance of soil depth, given that previous work suggested benefits of modified root systems would be limited on shallow soils (Tennant and Hall, 2001;Wong and Asseng, 2007;Semenov et al., 2009;Farre et al., 2010;McDonald et al., 2012). While this review focuses on increasing yield through the increased capture of water, we recognize that more extensive root systems will also capture other resources such as N and other nutrients. We have maintained N at non-limiting levels throughout our study to avoid confounding effects on N cycling. The wider implications for modified root systems within wheat farming systems in the context of deep water and N use is considered in Thorup-Kristensen and Kirkegaard (2016). Methods Simulations were conducted to represent a continuous cropping sequence at eight locations in Australia, varying in climate, soil type and soil depth. Three factors were varied at each site, which are summarized in Table 2 and described in detail in the sections below. Soil water content in the simulations was either reset annually after harvest to represent a typical soil profile following an annual crop (similar to Lilley and Kirkegaard, 2011), or allowed to run continuously, capturing the soil water profile left by the previous annual crop (as in Lilley et al., 2004). This comparison was made because in Australia the soil often does not refill between cropping seasons and so legacies of drier soil can persist, especially when the subsoil is dry. The analysis compared the yield of standard wheat cultivars with (i) cultivars modified to have a faster rate of downward root growth and increased water extraction efficiency in the subsoil (>0.6 m), (ii) slower-maturing cultivars sown 3 weeks earlier and (iii) a combination of (i) and (ii) ( Table 2). Site descriptions The eight sites selected represented three contrasting climatic zones of the Australian wheat belt: (i) temperate with equi-seasonal rainfall distribution; (ii) Mediterranean (winter-dominant rainfall); and (iii) a subtropical environment with summer-dominant rainfall ( Table 3). Five of the sites were those selected in the study of Lilley and Kirkegaard (2011), and three further sites were added in the Mediterranean zone. The additional sites all had soils with a maximum rooting depth for annual crops of ~1 m due to chemical and physical subsoil constraints. 
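The three factors varied at each of the eight sites (soil-water legacy, sowing window and root modification; Table 2) combine into 64 simulation runs. The short enumeration below makes that treatment structure concrete; it is only an illustration, with site names taken from the text, shorthand treatment labels, and the actual APSIM configurations defined separately.

```python
# Enumerating the 8 x 2 x 2 x 2 = 64 simulation runs implied by Table 2.
# Labels are shorthand for the treatments described in the text.
from itertools import product

sites = ["Harden", "Cootamundra", "Ardlethan", "Wongan Hills",
         "Dalby", "Birchip", "Paskeville", "Esperance"]
soil_water = ["annual_reset", "continuous"]
sowing = ["conventional", "early"]      # early window opens 3 weeks sooner, slow cultivar
roots = ["standard", "modified"]        # modified = faster descent + higher subsoil KL

runs = [dict(site=s, legacy=w, sowing=sw, roots=r)
        for s, w, sw, r in product(sites, soil_water, sowing, roots)]
assert len(runs) == 64                  # one 100-year run per combination
```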
Soil description Details of the soils for each of the eight sites are summarized in Table 3. Soils were parameterized using measured soil data at each site in 0.1 m layers to the depths indicated. Soil characteristics were obtained from soil measurements or extracted from the APSoil database (https://www.apsim.info/Products/APSoil.aspx), and full details of the APSIM parameters for each soil are included in Supplementary Table 1. Volumetric water content at saturation, drained upper limit (DUL), and lower limit of crop extraction (LL) for each of the soil types at the eight sites are shown in Fig. 1. Soil water content at saturation was determined from measured bulk density values, DUL was determined from field measurements of fully wet then drained profiles (Hochman et al., 2001), and LL from field measurements described below. At Harden, maximum root depth was limited to 1.6 m by a weathered granite layer in the soil. At Dalby, downward root growth rate was slowed below 1.6 m by subsoil salinity. A combination of high pH and high chloride and boron concentrations throughout the profile of the Hypercalcic Calcarosol at Birchip resulted in poor soil exploration and constrained roots to a maximum depth of 0.9 m, while at Paskeville high boron content (>30 mg kg−1) below 1 m constrained maximum root depth to 1.0 m. Rooting depth of the duplex soil at Esperance was constrained by soil acidity (pH of 5.0 below 1.0 m), physical properties which limited water infiltration, and gravel at 1 m depth. APSIM-Wheat accurately simulates wheat yields across a broad range of environments in Australia, and it has been carefully validated on red loam soils (Kandosols and Chromosols; Isbell, 2002) in southern NSW (Lilley et al., 2004; Lilley and Kirkegaard, 2007). Those studies involved detailed comparison of simulation outputs with experimental data for biomass growth, grain yield and soil water dynamics to establish confidence in the capacity of the APSIM-Wheat model to simulate the processes involved in this analysis. For the other soils, the model has been well validated on soil types similar to those used in this study. These include deep sands (Tenosols) and sand-over-clay duplexes (Chromosols) in WA (Lawes et al., 2009; Oliver and Robertson, 2009), deep clays (Vertosols) in northern Australia (Hochman et al., 2001, 2007; Wang et al., 2003), and calcareous soils with subsoil constraints below 1 m (Calcarosols) in southern Australia (Rodriguez et al., 2006; Hochman et al., 2009; Hunt et al., 2013). Simulation treatments - accounting for the soil water legacy In all simulations the crop sequence was assumed to involve continuous cropping of productive annual crops such as wheat, barley or canola, and the water extraction patterns of these annual crops are generally similar to those of wheat. For simplicity of the analysis, we therefore simulated continuous cropping sequences with wheat sown every year as a representative annual crop. Soil water content at sowing was simulated in two ways: (i) Annual reset. Similar to the method of Lilley and Kirkegaard (2011), the soil water profile was reset annually, on the latest predicted harvest date of all crops in the 100-year simulation at each site. Reset dates ranged from 7 November at Dalby to 14 December at Harden (Table 3). The soil water content was reset to the median profile at harvest over 100 years of continuous simulation, as shown in Fig. 1. These simulations were run as single years commencing on the reset date.
This differed from the study of Lilley and Kirkegaard (2011), who reset on 15 December each year to a profile which was deemed to represent an annual crop (at the LL from 0 to 1.2 m, below which soil was at the DUL). The change was made because a 15 December reset date was not appropriate for all sites (being up to 5 weeks after harvest of the previous crop) and to more accurately represent the soil profile at harvest, as the previous rules did not fit the shallow soils. A sensitivity analysis showed that the previous setting of Lilley and Kirkegaard (2011) produced similar results on the deep-soil sites, except at Dalby, where resetting occurred 5 weeks earlier and the median profile was drier than a profile that was dry to 1.2 m and full from 1.2 to 2.5 m. The soil water content at sowing was simulated as a consequence of the soil water content on the reset date and subsequent rainfall and evaporation until sowing, assuming the summer fallow was maintained weed-free with stubble retained. (ii) Continuous. Simulations were run continuously with a wheat crop sown every year from 1900, so that soil water content at sowing in each year was related to the previous long-term cropping history as well as seasonal rainfall and evaporation. Thus, for a continuous simulation using wheat with a root system modified to extract water more effectively below 0.6 m, the improved drying of the subsoil every year can compound as a legacy unless there is adequate rainfall to fully recharge the profile. Therefore, plant available water (PAW) at sowing differed from that in the annually reset simulations. The simulations were run for the years 1900 to 2014 of the climatic record, with the first 15 years discarded so that the effect of the initial soil water profile was replaced by the legacy of the crops grown in those first 15 years. Simulation treatments - root modification To investigate potential impacts of genetic modifications to roots on wheat productivity, root characteristics were modified following the method of Lilley and Kirkegaard (2011). Our earlier study considered the rate of root descent and increased water extraction efficiency (i.e. a greater potential rate of water extraction) separately, since they are considered distinct targets for breeding. That study showed that the benefit of each component depended on site conditions (soil type and climate); however, in general the benefit of more efficient water extraction was greater than that of faster root descent. The benefits were generally additive, and in this analysis we consider the combined effect. APSIM-Wheat uses a maximum root penetration rate (RPR) for field-grown wheat of 1.2 mm/°C.day up to the start of grain filling (for daily average temperatures up to 25°C) (Wang and Smith, 2004). [Fig. 1. Volumetric water content of the soils at the eight sites at saturation, drained upper limit (DUL), lower limit (LL) of plant water extraction, and plant available water content at harvest (PAW-harvest, to which annual simulations were reset; see Table 2), shown in panels A-G. PAW-harvest is the median PAW at harvest from 100 years of continuous simulation of a cultivar with standard roots and a conventional sowing date. Ardlethan and Cootamundra are represented by the same soil. Sources of the soil water characterizations are given in Supplementary Table 1.] To represent the effect of soil drying on soil strength and root growth, the RPR through a soil layer is reduced at low water content. RPR is unaffected by soil water content until the proportion of PAW falls below 25%.
Below 25% PAW, the RPR is reduced linearly from the maximum RPR to zero downward root growth when no PAW remains. In the modified treatment we configured APSIM-Wheat to increase the rate of root descent by 20% (i.e. maximum RPR 1.44 mm/°C.day), as simulated in Lilley and Kirkegaard (2011) and within the range reported for field-grown plants (maximum 2.2 mm/°C.day; Wasson et al., 2014). The capacity of wheat root systems to extract water from the soil decreases with depth, due to reduced root length density, increased clumping and confinement of roots to pores and structural features of the soil, and reduced root-soil contact. The APSIM model captures this effect with the KL parameter (Wang and Smith, 2004). The KL value of each soil layer is the maximum proportion of PAW remaining in the soil that can be extracted from the layer on any day, and is set empirically to fit observed data for each combination of crop and soil type (Meinke et al., 1993; Robertson et al., 1993; Dardanelli et al., 2003). The actual volume of water extracted from a layer is limited by the crop demand, which is met preferentially from the uppermost layers first, and the presence of roots in the layer. The robustness and limitations of this approach have been discussed previously (Wang and Smith, 2004; Manschadi et al., 2006). The standard KL profile fitted to observed rates of water extraction by existing wheat varieties for each soil type is shown in Fig. 2. For the modified root system, we increased the extraction efficiency (potential rate of water extraction) of wheat roots in the subsoil by maintaining the KL values at those observed at 0.6 m. As a consequence, the capacity to extract water from the subsoil below 0.6 m was 30-50% of that in the surface, rather than 10-20% as is commonly measured in current wheat varieties. Simulation treatments - sowing window In order to investigate previously demonstrated advantages of earlier sowing for deeper rooting and water extraction, we simulated the conventional sowing window at each site, along with a window which opened 3 weeks earlier (Table 3). For the conventional sowing, a mid-fast developing wheat cultivar (e.g. Mace, Scout, Spitfire) was sown, while in the earlier sowing window a slow-developing cultivar (e.g. Bolac, Lancer) was sown. The APSIM phenology parameters vern_sens and photop_sens were 2.3 and 3.9, respectively, for the slow-developing cultivar and 0.5 and 3.0, respectively, for the mid-fast developing cultivar. In each year, sowing occurred within the prescribed window as soon as the sowing criteria described in Table 2 were met. Criteria consisted of a minimum rainfall within a set period as well as minimum soil water content in upper profile layers (Table 2). If the criteria were not met within the sowing window, the crop was sown into dry soil on the last day of the window, and emergence occurred after the next rainfall event. Simulated anthesis and maturity dates of these cultivars matched those of local well-adapted cultivars at each site. These cultivars flowered in the optimal windows in each environment and mean anthesis and maturity dates of the standard and the early-sown cultivars occurred within 2 d. Simulation details For all sites, daily climatic data (rainfall, solar radiation, pan evaporation, maximum and minimum temperatures) were extracted from the SILO Patched Point Dataset (Jeffrey et al., 2001; http://www.bom.gov.au/silo/). Climatic information is summarized in Table 3.
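The two root-system rules described above can be summarized in a simplified sketch. The Python fragment below illustrates the logic only and is not the APSIM implementation (which distributes demand from the uppermost layers downward and handles many additional processes); the KL values, layer PAW and crop demand shown are hypothetical.

```python
# Simplified illustration (not the APSIM code) of two rules described above:
# (i) daily extraction from a layer capped by KL, the maximum fraction of
#     remaining PAW extractable from that layer per day;
# (ii) root penetration rate (RPR) scaled back linearly once the fraction of
#      PAW in a layer falls below 25%.
# All numeric values are hypothetical examples.

def daily_extraction_mm(paw_layer_mm, kl, demand_mm):
    """Water taken from one rooted layer today: limited by KL and unmet demand."""
    supply = kl * paw_layer_mm
    return min(supply, demand_mm)

def rpr_factor(paw_fraction, threshold=0.25):
    """Multiplier on maximum RPR: 1 above the threshold, tapering to 0 at no PAW."""
    if paw_fraction >= threshold:
        return 1.0
    return max(paw_fraction / threshold, 0.0)

# Standard vs "modified" subsoil extraction efficiency (KL held at the 0.6 m value)
kl_standard_subsoil = 0.02   # illustrative: ~10-20% of the surface value
kl_modified_subsoil = 0.06   # illustrative: KL held at the value observed at 0.6 m

paw_in_layer = 12.0          # mm of PAW remaining in a subsoil layer
demand = 3.0                 # mm of unmet crop demand reaching this layer

print("standard roots:", daily_extraction_mm(paw_in_layer, kl_standard_subsoil, demand), "mm")
print("modified roots:", daily_extraction_mm(paw_in_layer, kl_modified_subsoil, demand), "mm")
print("RPR factor at 10% PAW:", rpr_factor(0.10))
```

The sketch makes the interaction explicit: a higher subsoil KL extracts more water per day from the same remaining PAW, while a drier layer (low PAW fraction) slows further root descent, which is exactly the mechanism behind the legacy effects examined below.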
Soil N in the simulations was maintained at levels non-limiting to plant growth. Fertilizer was applied at sowing and 40 d after sowing so that soil mineral N content was 200 kg N/ha at the sites with deep soil (Harden, Cootamundra, Ardlethan, Wongan Hills and Dalby) and 150 kg N/ha at the sites with shallow soil (Birchip, Paskeville, Esperance). Factorial combinations of the treatments in Table 2 produced eight simulation runs at each of the eight sites, a total of 64 site × soil water legacy × sowing window × root modification runs over a 100-year period. A range of simulation outputs were compiled to provide insights into the magnitude and mechanism of yield benefits arising from differences in root systems associated with either differences in soil water resetting, agronomic management (early sowing) or hypothetical genetic modification (more effective roots). The data extracted from the simulation runs included the soil water content at sowing, final rooting depth, total and distribution of water uptake from the soil profile, flowering date and grain yield. In general, to compare the three treatment factors, differences between treatments were calculated within each year for each variable. The conventionally sown, standard root system cultivar was used as the reference and a set of differences between treatments within reset simulations and within continuous simulations were calculated. The range, mean or median of the within-year differences were calculated for each site, rather than comparisons between long-term means for each scenario. Rooting depth Simulated final rooting depths on deeper soils of the standard cultivar at a conventional sowing date (Table 4) were similar to those reported previously by Lilley and Kirkegaard (2011) and experimentally by others at those sites (Forrest et al., 1985;Hamblin and Tennant, 1987;Milroy et al., 2008). On the shallower constrained soils, the roots usually reached the bottom of the profile (1.0 m) at Paskeville and always at Esperance, while at Birchip impediments to root growth such as dry soil and chemical constraints resulted in an average rooting depth of 0.7 m. These results on shallower soils are similar to experimental results reported by Tennant and Hall (2001), Dreccer et al. (2002), Rodriguez et al. (2006), Oliver and Robertson (2009), and Hunt et al. (2013). Use of annual resetting or continuous simulation made little difference to final rooting depth, however variability was greater on deep soils in the continuous simulation (data not shown). In simulations where roots were modified (downward growth 20% faster), the mean benefit to rooting depth on deep soils was smaller in continuous simulations (−0.06-0.26 m) than in the reset simulations (0.23-0.34 m; Table 4). In addition, variability was greater in the continuous simulation ( Fig. 3) as root depth was more frequently restricted due to soil drying by the previous crop. At Dalby, root modification resulted in slightly shallower mean root depth in the continuous simulation, due to reduced root penetration in dry soil. On soils with a depth constraint (Harden, Paskeville, Esperance and Birchip) there was no effect of modified root systems on final root depth, and roots simply reached the bottom of the accessible profile sooner. Mean final root depth of early-sown crops was increased by 0.03-0.27 m on unrestricted soils, compared to crops sown on the conventional date (Table 4). This was due to an approximately 3-week longer vegetative period when downward root growth occurs. 
Earlier sowing of the slow-developing cultivars, which also had modified root systems, resulted in a small further increase in the mean root depth at Cootamundra, no effect at Ardlethan, and shallower roots at Wongan Hills and Dalby (Table 4). Table 4. Mean and range of rooting depth at maturity of wheat crops (standard cultivar, conventional sowing date) for 100 years of continuous and annually reset simulations at eight sites. Mean extra final root depth and difference in water uptake (total and post-anthesis) achieved by simulating cultivars with modified root systems (Mod) and/or earlier sowing is also shown. In the shallow soils at Paskeville and Birchip there were very small (0.02-0.03 m) increases in root depth of early-sown cultivars, but not at Harden or Esperance where roots reached the bottom of the profile when sown in the conventional window. Water uptake More rapid root descent and increased final root depth, combined with more efficient water uptake below 0.6 m, resulted in a greater average crop water uptake for crops with modified root systems (Table 4). A smaller water extraction advantage was evident in the continuous simulation compared with the annual reset at all sites (Table 4). Modified root systems led to an average 7-17 mm of additional water extraction on deeper soils, and 2-4 mm on shallow soils (Table 4). The difference in water uptake was highly variable across seasons, ranging from a reduction of 18 mm to an increase of 44 mm, with the greatest variability seen on deeper soils (Fig. 4). For earlier-sown crops, mean additional water uptake was 14-31 mm greater than for conventionally sown crops (Table 4), due to both deeper roots associated with a longer duration of root descent, and a longer duration of the period of water extraction. Notably, the effect of early sowing on extra water uptake was relatively similar on deep and shallow soils (Table 4). The effects of early sowing and modified root systems were largely additive, with the combination increasing water uptake (mean: 16-45 mm; Table 4; Fig. 4). The uptake at Dalby for modified and/or early-sown crops was significantly less than for other sites with deep soils. Variability in uptake was generally greater for modified than standard root systems on all soils (Fig. 4). Although mean uptake was higher for early-sown crops on deep soils, variability was similar, but increased when the root system was modified as well. On shallow soils, the larger variability predicted for early sowing was associated with much greater mean extra uptake (Fig. 4). An analysis of the timing of water uptake showed that where root systems were modified, around two-thirds of the additional water extraction occurred post-anthesis, except at Dalby where extra post-anthesis extraction was small (Table 4). When the crop was sown early at all sites except Wongan Hills, post-anthesis extraction was smaller (mean reduction 5-11 mm; Table 4). The increase in total water uptake for early-sown crops was due to much greater pre-anthesis uptake, creating a drier soil by anthesis, so that less water was available for post-anthesis uptake. PAW at sowing (soil water legacy effect) In the annually reset simulations, PAW at sowing varied across the sites according to soil water holding capacity and fallow rainfall (Table 5). For simulations of standard cultivars, the mean PAW at sowing was similar in the reset and continuous simulations.
However, the variability was much greater in continuously run simulations, because the soil water content was also affected by water extraction of the previous crops (data not shown). For continuous simulations, modified root systems led to reduced PAW at sowing for deep soils (mean 17-32 mm drier; range 0-49 mm drier; Table 5). For soils where depth was restricted, including Harden (restricted at 1.6 m), the soil was up to 4 mm drier (site ranges 0-11 mm). Similarly, in a system where crops were sown early, mean PAW at sowing was 7-21 mm drier on unconstrained soils and 2-6 mm drier on soils with root constraints. The combination of early cultivars with modified root systems resulted in even drier soil at sowing (23-44 mm and 3-8 mm on unconstrained and constrained soils, respectively; Table 5). Grain yield Mean grain yields for standard cultivars sown in the conventional window ranged from 3.1 to 5.7 t ha−1 across the eight sites, with higher yields occurring at sites with more rainfall and deeper soils (Table 6). In the reset simulation, modified root systems led to a mean yield increase of 0.1-0.6 t ha−1, which varied with site, while yield benefits were smaller (−0.03 to 0.24 t ha−1) in the continuous simulation. At Dalby, there was a mean yield loss in the continuous simulation (0.03 t ha−1 loss compared to a 0.38 t ha−1 benefit in the annually reset simulation). For all sites the reduced benefit of modified root systems was associated with increased risk of yield loss in some years in the continuous simulation compared to the annually reset simulation, where no downside risk was predicted (Fig. 5). In the continuous simulation, benefits of early sowing were greater than those of modified root systems at every site (0.1-0.8 t ha−1), except at Dalby where on average a greater yield loss was predicted (−0.26 t ha−1; Table 6). In general, annual resetting resulted in a similar or smaller mean annual benefit from early sowing than continuous simulation when crops were sown early (with or without modified root systems). The mean yield benefit from the combination of root modification and early-sown longer-season cultivars was equivalent to the sum of the two individual components in most cases (Table 6). Variability in yield benefit from early sowing was much greater than was predicted for modified root systems (Fig. 5). The range was largest at Wongan Hills, where yield benefits from early sowing ranged from a reduction of 0.8 t ha−1 to a benefit of 2.0 t ha−1. The combination of modified root systems with early sowing resulted in a small further increase in variability. In the annual reset simulations the proportion of years with a significant yield benefit (defined here as >0.2 t ha−1) was similar to that reported by Lilley and Kirkegaard (2011) at common sites (data not shown). For continuous simulations, the deepest soil at Wongan Hills had the highest proportion of years with a significant yield benefit from modified root systems (44%; Fig. 6). Other sites with deep soils had a significant yield benefit in fewer years (23-30%) and shallow soils had the smallest frequency of benefit (3-11%; Fig. 6). Early sowing resulted in a much greater frequency of significant yield benefits than modified root systems at all sites except Dalby (35-79%) (Fig. 6). Notably, early sowing produced significant yield benefits in 35-58% of years on shallow soils. Table 5. Mean plant available water (PAW) at sowing (mm) at eight sites in annually reset and continuous simulations for the standard cultivar sown in the conventional window, and the reduction in PAW at sowing due to the legacy of either modified root systems (Mod), early sowing of a longer-season cultivar, or a combination of both. Values are means of 100 years of simulation. A further increase (up to 11%) in the frequency of yield benefits >0.2 t ha−1 was reported when root system modification was combined with early sowing. At Dalby, where the mean response to early sowing was negative, a yield response >0.2 t ha−1 was reported in only 9% of years. Modified root systems provided a yield benefit >0.2 t ha−1 in 14% of years for both early and conventional sowing windows at Dalby (Fig. 6). Discussion Our study suggests that previous investigations may have significantly overestimated the value of deep roots in Australian dryland farming systems by ignoring legacy effects. Previous studies (Dreccer et al., 2002; King et al., 2003; Manschadi et al., 2006; Semenov et al., 2009; Lilley and Kirkegaard, 2011) all involved annual resetting of soil water, and we have shown using continuous simulation that the legacy of drier soils caused by more effective root systems will reduce predicted yield benefits to the subsequent crop in many seasons. At sites with shallower soils, which make up a significant area of the Australian cropping zone, the predicted benefits of more efficient root systems were negligible, while earlier sowing of slower-maturing crops delivered yield benefits on all of the soil types considered in Australia's southern cropping zone. Benefits of root modification Our current analysis showed a similar range in yields (3.5-5.7 t ha−1) and yield benefits from root modification (0.2-0.6 t ha−1) on the same deep soil sites (with annual reset) as the previous study. The yield benefit was attributed to 0.25-0.34 m deeper roots and a 14-29 mm increase in water uptake. Simulation studies of Manschadi et al. (2006) in the northern cropping zone also reported a similar range of yield benefits when soils were one-third full at sowing. For the new, shallow soil sites, modified root systems made little difference to mean root depth (up to 0.04 m deeper), with an extra 4 mm of water taken up and a smaller mean yield benefit than for deep soil (~0.1 t ha−1). At Birchip and Paskeville, the benefits of root modification were small, since most of the soil water was extracted by the standard cultivar and there was no additional water available for uptake by more efficient roots (see median soil water content at maturity, Fig. 1). In addition, two factors constrained root depth at Birchip. Firstly, high boron content slowed root penetration; secondly, the low and variable rainfall (mean 365 mm), combined with the large water holding capacity in the surface layers of this soil (Fig. 1), meant that water often did not penetrate deeply, and dry soil limited root penetration. At Esperance, which had a relatively high rainfall and a soil with a low water holding capacity, the profile filled frequently and adequate soil water was available within the 1 m root zone, so that soil water supply generally met demand from shoots, and water uptake did not limit growth of the standard cultivar. This is confirmed by the relatively high median soil water content at harvest for standard roots (Fig. 1) and the high frequency (99%) of years in which roots reached the maximum depth.
Comparison of Ardlethan and Cootamundra, which had an identical soil type, shows that at the drier site (Ardlethan), the profile filled less frequently and average rooting depth was 0.25 m shallower due to more frequent limitation to root penetration of dry soil, as reported by Lilley and Kirkegaard (2011). Legacy effects Our analysis showed that increased water extraction by modified root systems leaves the soil in a drier state in most seasons, and where the soil does not refill this had an additional impact on the subsequent crop. This finding is consistent with experimental evidence from Kirkegaard and Ryan (2014), who showed large and significant impacts of cropping history on wheat yield (0.6-0.9 t ha −1 ), which persisted for three to four years in semi-arid cropping environments of Australia and particularly in seasons with below average rainfall. Angus et al. (2015) also reviewed field experiments in Australia and Sweden and showed that a range of crop species can have an impact on the yield of subsequent wheat crops and these effects can last more than one season, depending on intervening rainfall patterns. In the previous study (Lilley and Kirkegaard 2011), this legacy effect was demonstrated by comparing root exploration following either an annual crop or lucerne which had dried the soil to a much greater extent. For example, at Ardlethan where fallow rainfall was low (mean 187, range 45-450 mm) benefits of modified root systems were observed less frequently following lucerne than an annual crop. This new study focussed on benefits of root modification in continuous crop simulations where the legacy of previous crops and seasons affects current crops, as happens in reality. Continuous simulation showed that in a cropping sequence, the legacy of modified root systems meant that the profile was 17-32 mm drier at sowing of the subsequent crop while in reset simulations no such impact is accounted for. The legacy of dry soil varied seasonally and for deep soils, the increased frequency of dry soil decreased the mean rooting depth of subsequent crops and hence the root penetration benefit of improved root vigour. As a consequence of reduced soil water availability and reduced root penetration, the benefit in water uptake from modified root systems was smaller in continuous compared to reset simulations. The 'dry soil legacy' reduced the mean predicted yield benefit of modified root systems in the continuous simulations to 0-0.2 t ha −1 (range −0.4-1.1 t ha −1 ) compared to the annual reset simulations (mean 0.1-0.6; range 0-1.4 t ha −1 ) as reported in previous studies. At Dalby, the legacy effect of soil drying was so large that in 67% of years rooting depth of the modified cultivar was shallower than the standard cultivar (mean reduction 0.06 m, range +0.23 m to -0.43 m; Fig. 3). The drier soil and reduced rooting depth resulted in a reduction in water uptake by the crop in more than 50% of years, and average additional uptake due to root modification was much less in continuous (mean of 3 mm, median of −1 mm) than reset simulations (mean and median of 15 mm). The reduced water uptake was related to a reduction in grain yield in around 75% of years, and a yield benefit >0.2 t ha −1 was predicted in only 14% of years. In the northern cropping zone, Hochman et al. (2014) showed that decisions in crop sequence management are based on soil water content as a strategy for managing legacies of previous crops and seasonal conditions. 
The cropping system in this summer-dominant rainfall zone differs from those in southern Australia, as a range of summer and winter crops are well adapted to the region, while southern Australia is limited to winter cropping (Hunt et al., 2013; Hochman et al., 2014). In reality, soil which is too dry to support a crop would be left fallow to accumulate soil water for a subsequent summer or winter crop, and growers need to be mindful of cultivar and species choices which leave a legacy of dry soil. On shallow soils (~1 m), rooting depth was restricted by other soil constraints, discussed above, and root system modification had little effect on final root depth. The effect on extra water uptake was also small, although there was an increase in variability and a small decrease in mean uptake at Paskeville and Birchip. Consequently, there was not a significant legacy effect on PAW at sowing (1-4 mm), as roots of the standard cultivar fully dried the soil in most years and modified root systems provided little additional extraction capacity. In semi-arid farming systems such as Australia and north Africa, where the soil profile does not refill in many seasons (Cooper et al., 1987), analyses that involve annual resetting of soil water content have typically overestimated the benefit of more extensive root systems. For example, the analysis of Lilley and Kirkegaard (2011) reported that there was no downside risk of introducing modified root systems; however, in this study the legacy of previous crops with modified root systems resulted in negative effects on yield in 25% of years (Wongan Hills, Ardlethan, Harden, Paskeville and Birchip) and in 75% of years at Dalby (Fig. 5). These negative effects were rare at the higher rainfall sites at Esperance and Cootamundra where the profile refilled more frequently. Previous work showed that because deep water is accessed late in crop growth it is particularly valuable, as it is used during the grain-filling period and contributes efficiently to grain yield. Much of the previous work on the value of improved root systems focused on deeper soils where there is potential to increase the depth of rooting (Manschadi et al., 2006; King et al., 2003; Lilley and Kirkegaard, 2011). However, in Australia much of the cropping zone has soils with constraints below 0.5-1.0 m which reduce or prevent root exploration (salinity, sodicity, acidity, alkalinity, and toxicities or deficiencies of micronutrients; Dolling et al., 2001; Passioura, 2002; Adcock et al., 2007; McDonald et al., 2012). Two simulation studies (Dreccer et al., 2002; Semenov et al., 2009) which considered benefits to wheat yield of increased uptake efficiency in shallow soils found that the yield benefit was small, despite optimal water availability due to a full profile at sowing. Our new study also showed that on shallow soils there was no rooting depth benefit. Increased efficiency of uptake resulted in a small additional extraction (2-4 mm for soils ~1 m deep) and yield benefits were generally small and infrequent (benefits >0.2 t ha−1 in 3 to 11% of years; Fig. 6). At Harden, where the soil was not shallow, but depth was restricted to 1.6 m, significant extra uptake occurred (mean 7 mm) and a yield benefit >0.2 t ha−1 was reported in 26% of years. Seasonal variability in the size of the yield benefit from modified root systems was much greater at sites with deep soil since water storage was also variable, while the benefit on shallow soils was consistently low due to the limited water holding capacity (Fig. 5).
Benefits of early sowing Changing the duration of the vegetative period affects the final rooting depth of wheat, as root growth ceases around the time that grain filling commences, due to increased demand for assimilate from the developing grain (Gregory, 2006; Thorup-Kristensen et al., 2009). Simulation and field studies by Kirkegaard and Hunt (2010) and others have recently shown that earlier sowing of wheat increases potential crop yield, provided that flowering remains in the optimal window to avoid frost. The early sowing of a longer-duration cultivar in this study resulted in a 3-week longer period of downward root growth and similar climatic conditions during grain filling, as flowering occurred at a similar time to the conventionally sown cultivar (mean difference 1-2 d). The mean legacy of drier soil from early sowing was smaller than from modified root systems (on deep soils; 10-15 mm wetter after early sowing), while the mean yield benefit of early sowing was always greater than for modified root systems at southern sites. The mean yield benefit of early sowing over the conventional sowing date ranged from 0.54 t ha−1 at Ardlethan to 0.75 t ha−1 at Cootamundra (deep soils). In southern cropping zones, much of the early sowing benefit has been attributed to a longer period of water extraction, resulting in greater total transpiration and less soil evaporation on an annual basis, increasing the seasonal water use efficiency. Although early sowing increased mean water uptake at Dalby by 14 mm, the mean effect of early sowing at that site was a reduction in grain yield, with yield benefits >0.2 t ha−1 reported in only 9% of years (Fig. 6). Small negative effects of early sowing on yield were also reported for the northern cropping zone by Hochman et al. (2014). For restricted soils, including Harden, the extra water extraction for early-sown crops was also large (mean 22-26 mm). This extra uptake was achieved through longer season length and greater rainfall capture rather than more extensive soil exploration, and there was little effect on PAW at maturity (data not shown). Consequently, the soil water legacy for the following crop was also small (mean 2-6 mm). Notably, for shallow soils the yield benefit from early sowing was much greater (>0.2 t ha−1 in 35-48% of years) than from root modification (3-11% of years). The yield benefit from early sowing was particularly high at Esperance, where the profile had ample water throughout the crop growth period in many years, so that a longer growth period allowed increased uptake and a yield benefit >0.2 t ha−1 in 58% of years. Seymour et al. (2015) and Bell et al. (2015) have shown that early sowing is well suited to this region due to the high rainfall and frequent opportunities to sow early. Manschadi et al. (2006) and Semenov et al. (2009) discussed the trade-off between more rapid water use in the early part of the season in anticipation of late-season rainfall vs. conserving water for use during grain filling, when the benefit to grain yield is known to be high. Our results suggest that on deep soils the majority (66-80%) of the additional water uptake by modified root systems occurred post-anthesis, while on shallow soils this was 45-67%, although the difference in total uptake was very small (mean 2-7 mm). In contrast to modified root systems, mean post-anthesis uptake in early-sown crops decreased by 5-11 mm at all sites except Wongan Hills, where deep soil water supply was generally greater than demand.
While early-sown crops used more water over the season, the post-anthesis water use was less at most sites because these crops had depleted the available water supply by anthesis. This phenomenon of increased total water use, but decreased post-anthesis water uptake, has been observed in several experimental studies in south-eastern Australia (James Hunt, unpublished). The benefits to water extraction and yield from early-sown, slow-maturing cultivars and modified root systems appeared to be additive, with the combination resulting in a small further yield benefit beyond that of early sowing (a further 0.1-0.3 t ha−1 on deep soils and 0.03 t ha−1 on shallow soils). However, there was a greater legacy effect, with mean PAW at sowing reduced by 23-44 mm on deep soil sites and 3-8 mm on shallow soil sites. Implications for improved productivity in future rain-fed environments The current analysis has been conducted on the historical climate record; however, the future climate is unlikely to be the same, and variability and production risk are expected to increase (Howden et al., 2007). In southern Australia, a decrease in growing season rainfall has also been observed (Pook et al., 2009; Cai et al., 2012), making the efficient use of carry-over soil water and fallow rainfall an important consideration (Hunt et al., 2013). This will exacerbate variability in refilling of the soil after a crop and potentially increase the significance of soil water legacies. Kirkegaard and Hunt (2010) showed that benefits of early sowing are likely to persist under climate change, where weather will generally be hotter, drier and more variable; however, genetic differences in roots are likely to be more problematic due to more variable soil refilling. These findings support and extend the work of Kirkegaard (2007, 2011), who showed that a range of management factors such as fallow weed control, preceding crop legacy and timely sowing often exceeded or overrode the impact of root modification on yield by influencing the depth of profile wetting and duration of root descent. Though our continuous simulation better matches reality, the simulation rules were fixed, whereas in practice farmers can manage the crop sequence dynamically, electing to sow crops that have a smaller water requirement following crops and seasons which leave dry profiles (Hochman et al., 2014). Inclusion of a legume or green manure crop can preserve water and has disease break, weed control and nitrogen-saving benefits to the farming system, but must be profitable for such choices to be made (Hochman et al., 2014; Angus et al., 2015). Crop choice is ultimately driven by current soil water status, seasonal forecasts (weather and market), and paddock history in relation to disease and weed break rotations and market value of the crop (Moeller et al., 2009; Oliver et al., 2010; Hunt et al., 2013; Hochman et al., 2014). Thus, annual crops with deeper and more effective root systems can be used tactically in crop sequences to capture benefits from deep water when it is available. Information from soil moisture sensors and/or simple models of soil water availability (e.g. HOWWET?; Dimes et al., 1996) would assist farmers to manage the sowing window in a more flexible way. Availability of cultivars that have a wide sowing window yet flower in the optimal period to minimize frost and heat risk will also improve options for earlier sowing (Richards et al., 2014).
This analysis indicated that in some circumstances a yield loss is associated with more effective root systems so it is important to consider when it is appropriate to include crops with more extensive root systems in the rotation sequence. Conclusion More extensive root systems are valuable for acquiring resources to increase crop yield, but create a legacy of drier soil for subsequent crops, which can reduce the predicted long-term system benefit at some sites. At sites with shallower soils, which make up a significant area of the Australian cropping zone, the benefits of more extensive root systems were negligible. On all soil types in Australia's southern cropping zone, earlier sowing of slower-maturing crops increased average yield. Managing risk associated with more variable future climate will require species and cultivar choices in sequences that optimize use of the available soil water. Wheat cultivars with deeper and more efficient root systems will need to be used tactically to optimize overall system benefits. Supplementary data Supplementary data are available at JXB online. Table S1. Values of several soil characteristics and APSIM parameters (defined in Keating et al., 2003) used in the simulation studies.
BikeMaps.org: A Global Tool for Collision and Near Miss Mapping There are many public health benefits to cycling, such as chronic disease reduction and improved air quality. Real and perceived concerns about safety are primary barriers to new ridership. Because there are few forums for official reporting of cycling incidents, the lack of comprehensive data limits our ability to study cycling safety and conduct surveillance. Our goal is to introduce BikeMaps.org, a new website developed by the authors for crowd-source mapping of cycling collisions and near misses. BikeMaps.org is a global mapping system that allows citizens to map locations of cycling incidents and report on the nature of the event. Attributes collected are designed for spatial modeling research on predictors of safety and risk, and to aid surveillance and planning. Released in October 2014, the website had more than 14,000 visitors and mapping in 14 countries within 2 months. Collisions represent 38% of reports (134/356) and near misses 62% (222/356). In our pilot city, Victoria, Canada, citizens mapped data equivalent to about 1 year of official cycling collision reports within 2 months via BikeMaps.org. Using report completeness as an indicator, early reports indicate that data are of high quality, with 50% being fully attributed and another 10% having only one missing attribute. We are advancing this technology, with the development of a mobile App, improved data visualization, real-time alerting of hazard reports, and automated open-source tools for data sharing. Researchers and citizens interested in utilizing the BikeMaps.org technology can get involved by encouraging citizen mapping in their region. INTRODUCTION Cycling has many health benefits (1,2) and cycling promotion supports better population health. A primary barrier to increased ridership is the risk, both real and perceived, of incurring substantial injury (3,4). In past decades, overall levels of ridership have increased in North America and cyclist fatalities have declined (5). However, cyclists are still at a greater injury risk than automobile drivers, and there is substantial spatial variation and socioeconomic inequality in cycling rates and safety (5). Data and studies on cycling safety typically rely on data for crashes between cyclists and motor vehicles (6)(7)(8), which are often reported through vehicle insurance claims and/or when police are called to a vehicle crash event. However, cycling safety concerns go beyond motor vehicle incidents. In a study of injured adult cyclists treated in emergency departments, only 34% of incidents were collisions with motor vehicles and another 14% were a result of avoidance of a motor vehicle (9). Significant injury risk is also present on multi-use paths, away from motor vehicles (10), and may involve falls or collisions with infrastructure, pedestrians, cyclists, or animals. Importantly, cyclists perceive multi-use (pedestrian and cycling) pathways as safer than they are, based on observed risk (4). The lack of complete datasets on cycling incidents is limiting researchers' ability to study cycling safety. More comprehensive data are required to assess safety and risk, overcome the gap between real and perceived safety issues, monitor progress in decision-making aimed at improving traffic safety, and identify priority locations for improved traffic management. Data on near miss incidents are not reported by standard traffic data collection systems, but are a critical aspect of safety management (11).
Near miss data have the potential to assist in early detection of high-risk areas and to mitigate both real and perceived safety issues, thereby enabling increased ridership (12). When compared to the number of human errors or near miss incidents, a crash is a relatively rare event. Thus, collecting near miss data allows larger data sets to be generated, enables earlier detection of problematic areas (13), and supports robust statistical analysis. Volunteer geographic information (VGI), sometimes referred to as geo-crowdsourcing, is data collected by ordinary citizens through digital mapping, typically via a web interface (14). VGI offers an innovative digital technology approach to enriching available data for a wide range of research and planning applications. VGI is emerging as an important tool for health research and practice (15,16). For instance, Robertson et al. (17) used VGI provided by veterinarians to conduct surveillance of zoonotic disease in Sri Lanka. VGI offers a powerful approach to generating more comprehensive, map-based data on cycling crashes and exposure [i.e., Ref. (3)]. In the area of active transportation, apps like CycleTracks (accessed February 1, 2014) and Brisk Cycle (accessed February 1, 2014) are examples of tools that support collection of cycling-specific VGI. Our goal is to introduce a new tool for collecting data on bicycle safety and risk, developed by the authors. BikeMaps.org is a global web-mapping system that allows citizens to map cycling collisions and near misses, and to identify the location of hazards and thefts. Here, we focus on the functionality associated with mapping cycling collisions and near misses. To introduce this tool, we begin by providing details of the BikeMaps.org technology and outlining website functionality. We then quantify the information content of early data submissions. In the Section "Discussion," we highlight new opportunities related to BikeMaps.org, describe technology developments that are underway, and outline how researchers and planners can get involved. BikeMaps.org BikeMaps.org is a tool for mapping bike collisions and near misses (Figure 1) and is built with free and open-source tools. The website is welcoming; the citizen mapper sees a GoogleMaps-like interface, although the map technology used is Leaflet, a JavaScript mapping library that can retrieve and render image-like "map tiles" from a map tile server, as well as display point, polyline, polygon, and popup features. The backend database system is PostgreSQL, a database that accommodates efficient storage and querying of spatial data. The website front-end HTML templates use additional open-source JavaScript and CSS packages to provide professional styling, dynamic user interaction, and containers for rendering map content. The website employs the Django web framework to control the retrieval and submission of data between the backend database and front-end templates. Citizen mappers identify the location of their cycling incident by clicking a "submit new point" button and adding the location on the map where the incident occurred. They then report details of collisions and near misses on a digital form through pull-down options. All reports are anonymous. The attributes captured through the pull-down menus are designed to enable research on important determinants of cycling injury, based on research by Teschke et al. (10).
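As a concrete illustration of how such a stack fits together, the sketch below shows how an anonymous incident report with a mapped location and pull-down attributes could be modeled with GeoDjango on top of PostgreSQL/PostGIS. This is not the actual BikeMaps.org code or schema; the model and field names are assumptions for illustration only.

```python
# Illustrative sketch only -- not the actual BikeMaps.org schema. It shows how
# an anonymous incident report with a map location and pull-down attributes
# could be modeled with Django/GeoDjango on top of PostgreSQL (PostGIS).
from django.contrib.gis.db import models


class IncidentReport(models.Model):
    INCIDENT_TYPES = [("collision", "Collision"), ("nearmiss", "Near miss")]

    location = models.PointField(srid=4326)           # point clicked on the map
    incident_type = models.CharField(max_length=20, choices=INCIDENT_TYPES)
    incident_time = models.DateTimeField()            # when it occurred (required)
    injury = models.CharField(max_length=50)          # severity (required)
    trip_purpose = models.CharField(max_length=50, blank=True)    # optional
    road_condition = models.CharField(max_length=50, blank=True)  # optional
    # ... further optional "conditions" and "personal details" fields
```

Storing the location as a geographic point is what allows the zoom-dependent clustering, heat map views, and area-based summary reports described in the following paragraphs to be produced with standard spatial queries.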
There are three categories of attributes: incident details ( Table 1), conditions ( Table 2), and personal details ( Table 3), with a balance of required and optional questions to manage citizen mapper burden. Most incident details are required fields; citizen mappers are invited to answer questions on when the collision or near miss occurred, the type of object involved, and whether the object was stationary or moving. A question on injury severity is also required, and optionally the citizen mapper can provide details on their cycling trip purpose. The questions around conditions at the location and time of the collision or near miss are optional. The questions query road condition, sight lines, presence of parked cars, type of road, bike infrastructure, use of lighting, terrain characteristics, direction of travel, and traffic flow of cyclist (turning or straight). Personal details are also optional and include details on the birthdate (month and year) of rider (for future, anonymous linking with emergency room health outcomes), gender, cycling frequency, helmet use, and intoxication. There are additional functions and visualizations on BikeMaps.org designed to enhance data communication and community engagement. Ridership data are essential for characterizing exposure (1). Without ridership data, collision and near miss hot spots may simply represent rider hot spots (18). We have plans to collect ridership data through a mobile application that is under development. Currently, we provide ridership data available from Strava.com, as a visualization backdrop on the website. Strava.com publishes their ridership data as a map tile dataset, and to our knowledge it is the only publically available data for ridership globally. However, Strava.com best represents the routes of recreational riders and the number of users varies regionally. We have also added base map data for cycling infrastructure, for our pilot case study in Victoria, British Columbia. Infrastructure is mapped by three categories: protected bike lanes, bike lane, and other cycling routes, similar to what is used by in Bike Score 3 or Google Maps cycling routes. At present, we add bike infrastructure on a region by region basis, and are developing a framework for more automated submission of cycling infrastructure geographic information system (GIS) files. On the BikeMaps.org website, each type of incident has a unique marker color. Official data sources, such as police crash reports, can also be incorporated and have a unique symbol. For example, in British Columbia, Canada, we have included data provided by the provincial insurance provider [Insurance Corporation of British Columbia (ICBC)] on collisions including cyclists. The website symbology changes depending on scale: as a user zooms in on the map, a marker appears at the collision or near miss location and certain incident details appear if the user clicks on the marker. As the user zooms out to a larger area, the incident markers aggregate to show the total number of events that have been mapped. For aggregated mapping of incidents, a symbol similar to a pie chart is used to denote the number of each type of incident that has been mapped. Regardless of the scale the data are viewed at, general trends can be visualized through a heat map tool, available in the legend. Furthermore, BikeMaps.org can generate summary reports. Citizen mappers, researchers, or planners can create a login and define their riding or study area via a polygon. 
They can then access the BikeMaps.org/stats page to monitor monthly reports on what has been mapped in their riding area. The "stats" page includes a map of the riding area with the frequency of collisions and near misses added. The bottom panel of the "stats" page is used to provide messages to citizens such as social media links for BikeMaps.org, updates on global mapping, and safety messages. For example, currently we include a graphic that demonstrates that cycling is safe through comparison with other travel modes. EARLY RESULTS OF DATA SUBMISSIONS On October 6, 2014, we launched the BikeMaps.org website through a media release. We also emailed bike groups. The citizen-mapped incidents were mainly recent reports, with 77% being collisions or near misses that occurred in 2014, 16% from 2013, and 7% from incidents before 2013. The earliest possible submission dates are 10 years prior to the date of reporting. There were strong day of week trends in when the incident occurred (Figure 3). Incidents were most common mid-week, with 25% of incidents occurring on Wednesday and 63% occurring between Tuesday and Thursday. Only 10% of collisions and near misses occurred on the weekend. In this early stage, we can see that BikeMaps.org makes a significant contribution to the need for more comprehensive data collection. For example, in Victoria, the only prior source of geo-located cycling incident data was cyclist-involved motor vehicle crashes reported to the provincial vehicle insurance carrier (ICBC). The ICBC data include between 119 and 140 reports per year from 2009 to 2013. In comparison, for Victoria, there were 160 citizen reports captured through BikeMaps.org within these first 2 months. Thus, the contribution from citizen mappers to BikeMaps.org in the first 2 months provided as many incident points as might be expected in a year from the existing available cycling incident data from ICBC. Another early indication of the quality of data generated from BikeMaps.org is the completeness of attributes provided. Fifty percent of data are fully attributed, with another 10% having only one missing attribute. For incident details, four of the five questions are required. The fifth question, on purpose of trip, was optional but 99% complete (Table 1). Most of the collisions and near misses are collisions with moving objects (88%). The object the cyclist collided or nearly collided with was most commonly a vehicle (86%), compared with other objects including cyclists (4%), pedestrians (1%), animals (1%), and infrastructure (9%). Given the high proportion of near miss reporting, it is not surprising that 69% of the incidents did not involve injury. Additionally, 67% of incidents were reported to have occurred while commuting to work or school. Answering questions on the conditions at the location and time of the collision or near miss is optional, but between 80 and 83% were completed (Table 2). Most responses indicate that the roads were dry (65%), sight lines unobstructed (67%), and there were no parked cars (60%). Of those that indicated where they were riding, 43% of riders were on roads with no infrastructure and 73% were heading straight. While the overall report completeness is interesting, data need to be analyzed on a case by case basis to explore the interaction of multiple conditions that are associated with collision and near miss occurrence. Personal information on the riders involved in the collision is also optional and only 48-69% complete.
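As an aside on how such completeness figures can be produced, the tallies quoted above can be computed directly from a tabular export of reports. The short Python sketch below is illustrative only; the column names and example rows are hypothetical and do not reflect the actual BikeMaps.org export format.

```python
# Sketch of how the completeness figures quoted above could be tallied from an
# export of reports (one row per report, one column per optional attribute).
# Column names and example rows are hypothetical, not the BikeMaps.org export.
import pandas as pd

reports = pd.DataFrame({
    "road_condition": ["dry", None, "wet", "dry"],
    "sight_lines":    ["clear", "clear", None, "clear"],
    "age":            [None, 1985, None, 1990],
    "gender":         ["M", None, None, "F"],
})

missing_per_report = reports.isna().sum(axis=1)
print("fully attributed:", (missing_per_report == 0).mean() * 100, "%")
print("one attribute missing:", (missing_per_report == 1).mean() * 100, "%")
print("per-field completeness (%):")
print((reports.notna().mean() * 100).round(0))
```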
Age and sex were the least well reported. Of the 59% of reports that included mapper sex, two-thirds were male, in keeping with typical cyclist profiles for North America. About 68% of citizen mappers ride at least once a week. Most report wearing helmets (67%) and very few report intoxication (1%). DISCUSSION BikeMaps.org is a new global tool for cycling safety data collection. In the future, these data will enable spatial analysis, GIS, and statistics to further knowledge on cycling safety for decision making (19,20). Our data collection is designed to test hypotheses on infrastructure and traffic flow conditions that lead to injury and safety, and this analysis of early data submissions shows good completeness of attributes. BikeMaps.org will generate more data than has been traditionally available for cycling research and allow quantitative analysis of both where and when cycling safety varies. With incident data on space and time, across all days of the week, we will be able to assess how safety varies throughout the day with different traffic volumes and flows. For example, in Victoria, Canada, where the initial citizen outreach was conducted, BikeMaps.org was able to capture data equivalent to about 1 year of official cycling collision reports within 2 months. Given the dearth of data available on cycling collisions and near misses, BikeMaps.org is an important tool that can be widely adopted for cycling safety data collection. There are other websites that are also aiming to fill this niche. For instance, collideosco.pe is a cycling incident reporting site for the United Kingdom, and Toronto in Canada has adopted an App called Toronto Cycling for collecting better cycling data. A benefit of BikeMaps.org is the global technology. Technology investments benefit all jurisdictions, and comparisons across locations are more easily made. The primary drawback is that the data collected and displayed need to be consistent. Excellent data available for only one region are difficult to include. When conducting analytics, however, any data can be integrated. In our own research, we anticipate utilizing more detailed ridership and infrastructure data for GIS and spatial analyses. The near miss data are a substantial contribution to cycling safety research. This gap has been identified in recent studies (3,12). The benefits of near miss reporting in injury prevention and surveillance are well documented (21)(22)(23), including support for early detection of risky locations and increased data. More data will allow more robust statistical modeling, assessment of change in safety and risk over space and time, and monitoring of change in safety over time. Nearly two-thirds of reports are near misses, which signals the potential to use BikeMaps.org for monitoring. While it is early for direct comparisons between BikeMaps.org data and official reports, there are several interesting trends that we will continue to monitor. First, while studies have found that 48% of crashes treated at emergency departments were directly or indirectly related to vehicles (9), 86% of BikeMaps.org reports, which include many less severe incidents, are associated with vehicles. As well, we notice BikeMaps.org data are reported in some locations where official reports do not exist.
In particular, where biking pathways intersect the road network, cyclists are reporting relatively high numbers of incidents. Beyond data collection, BikeMaps.org promotes cycling through citizen engagement and increased awareness of cycling safety, using mapping as a mechanism. The BikeMaps.org/stats page is an example of a communication tool that can provide positive messaging about how to cycle safely, and may be used to share with citizens the research findings based on the data they contributed. In this way, BikeMaps.org has the potential to narrow the gap between real and perceived cycling risk (24) through better communication. We are continuing to develop this technology based on feedback and suggestions. In the next phase of development, we are enhancing hazard mapping and real-time alerting of hazards, via text or email, to increase citizen engagement. Hazard mapping provides unique challenges; for example, hazards associated with weather or glass are transient and should not persist on the maps, unless they are associated with locations prone to the same hazards (e.g., chronic puddling). Infrastructure hazards may be repaired, and a feedback mechanism may be beneficial to remove hazards or update hazard status. Ultimately, this feature can serve as a valuable tool for real-time hazard monitoring, for instance, of road ice or construction. The observed day of week trends, which indicate most cycling collisions and near misses occur mid-week, are evidence of the need for denominator data in any cycling safety research. Given that most of the mapping is done by commuters (67%), it is not surprising that fewer incidents occur on weekend days. At present, we do not have ridership data to analyze day of week trends, though comparison with other research suggests that, at a minimum, Wednesday and Thursday should have similar levels of ridership [e.g., Ref. (25)]. Rather than only count data, these maps should also show cycling incident risk, for example, as the ratio of incidents relative to the number of riders (1,18). As a visualization tool on BikeMaps.org, we are utilizing Strava.com data, which show variation in cycling volumes based on riders who use the Strava App for recording rides. This provides a global backdrop for the website, but further spatial modeling will require more nuanced denominator data. Citizens are completing the majority of the queried attributes for cycling incidents, but only 48-69% provide personal details. Though all reports are anonymous, citizen mappers seem hesitant to provide personal details. Gender and rider experience have been shown to be important predictors of cycling safety and risk (26). Given these learnings from early report submissions, we will modify the BikeMaps.org report page to clarify the value of such data, with the hope of more complete demographic data. Specifically, we are adding a "why we ask" button on the data collection form to clarify use of these data and highlight the importance of age and gender details to risk and safety modeling. Based on consultation with stakeholders, we have many developments planned for BikeMaps.org. We are incorporating new visualizations for the website, such as sliders that allow mappers to visualize incidents that occur over a specific time period. This will allow filtering by time periods, and avoid the apparent accumulating risk that would result from increased reporting. We are also developing a mobile App to encourage timely "on-the-fly" reporting.
This will include further functionality, for example, hazard mapping of geotagged photos. Real-time alerts of collisions, near misses, and hazards are also a focus of App development. Alerts are aimed at letting riders know about short-term hazards (e.g., ice, glass, or construction) before they start their rides, allowing for route choice modification for optimal safety. Route mapping will also be included in the App, such that citizens can provide route data directly through BikeMaps.org. Route choice data can be used to generate the exposure denominator data for incidence risk. Importantly, we need tools for transferring data collected through BikeMaps.org to researchers and planners in each area; one option may be to integrate our website with open data sharing platforms (e.g., Open311) to give each jurisdiction access to its data. As a forum for collaboration between citizens, advocates, decision makers, and researchers, BikeMaps.org can address a massive data gap that will support safe cycling and increased ridership worldwide. Researchers, advocates, and planners can get involved by encouraging mapping locally. Tools are available to support local outreach activities and, until automated systems are developed, data can be made available upon request.
Research on Penetration Loss of D-Band Millimeter Wave for Typical Materials The millimeter-wave frequency band provides abundant frequency resources for the development of beyond 5th generation mobile network (B5G) mobile communication, and its relative bandwidth of 1% can provide a gigabit-level communication bandwidth. In particular, the D-band (110–170 GHz) has received much attention, due to its large available bandwidth. However, certain bands in the D-band are easily blocked by obstacles and lack penetration. In this paper, D-band millimeter-wave penetration losses of typical materials, such as vegetation, planks, glass, and slate, are investigated theoretically and experimentally. The comparative analysis between our experimental results and theoretical predictions shows that D-band waves find it difficult to penetrate thick materials, making it difficult for 5G millimeter waves to cover indoors from outdoor macro stations. The future B5G mobile communication also requires significant measurement work on different frequencies and different scenarios. Introduction The rapid growth of mobile data and the use of smartphones have created an unprecedented challenge for wireless service providers to overcome the global bandwidth shortage [1,2]. To address this challenge, there is growing interest in cellular systems in the 30 to 300 GHz millimeter waveband, which has a much wider bandwidth available than today's cellular networks [3,4]. Some high frequency bands (mm-band) were previously used for satellite communications, long-range point-to-point communications, military communications, and LMDS (28 GHz), but short wavelengths make it impossible for waves to bypass, or have quasi-optical propagation characteristics [5], which means that the high frequency bands do not have the rich scattering characteristics of the sub-6 GHz band [6][7][8][9][10]. Under line-of-sight conditions, the received signal energy is concentrated on the line-of-sight and a few low-order reflection paths. Under the condition of non-light of sight, signal propagation mainly relies on reflection and bypass, resulting in sparse channels in space and time, and the occlusion of people or objects will lead to large signal fading. High-band channels have many characteristics that are significantly different from sub-6 GHz cellular mobile channels [11][12][13][14]. The development of new 5G systems that can operate in higher frequency bands requires accurate propagation models for these frequency bands. Industry trends at home and abroad show that 5G mmWave is the next stage of 5G development, but it will require a significant amount of time and R&D costs to address the propagation characteristics of mmWave, before it can be deployed as a more general wireless network solution. Millimeter waves propagate in space as directional waves, have good directivity, are easily blocked by obstacles, and lack penetrating power. Channel measurement and modeling work has also been carried out for high frequency bands, for example, Aalto University in Finland has carried out measurement activities in 15,28,60 GHz and E-bands (81-86 GHz) based on a 60 GHz VNA detection system and completed multi-point conference room measurements to obtain the extended SV channel model [15]. 
Using a VNA measurement system, Ericsson participated in the METIS, mmMAGIC, and 5GCM projects and completed several measurements: (1) indoor 60 GHz human-blocking experiments [16], which found that human blocking loss can be as high as 10–20 dB; (2) indoor multi-frequency medium- and long-range path loss measurements, which observed that diffraction is the main propagation mechanism for millimeter-wave indoor non-line-of-sight transmission [16,17]; (3) multi-frequency measurements under NLOS conditions in urban blocks, which found that the signal path loss does not depend strongly on frequency and is lower than the knife-edge diffraction result, indicating that the signal in outdoor NLOS cases mainly arrives via other reflection paths [17]; and (4) multi-band measurements of wall penetration loss [18]. In the 5GCM project, Nokia and Aalborg University in Denmark collaborated on path loss measurements at 10 GHz and 18 GHz [18]. The mmMAGIC project included other measurement activities [17], such as (1) multi-frequency outdoor-to-indoor (O2I) measurements by France Telecom Orange Labs (Belfort) to observe penetration loss as a function of frequency and (2) indoor propagation channel measurements at 83.5 GHz and other frequencies by the French CEA-LETI. In April 2019, Verizon, the largest mobile operator in the United States, launched 5G mobile services in the 28 GHz band in Chicago and Minneapolis. For indoor coverage, Verizon's 5G mmWave signal was nearly unreachable: after penetrating a concrete wall, the 5G download rate dropped sharply from 600 Mbit/s to 41.5 Mbit/s, while the 4G downlink rate at 1900 MHz changed little, owing to the severe penetration loss of 5G mmWave. Clearly, the penetration loss of typical building materials should be studied in light of the need to improve future 5G mmWave indoor coverage. However, existing millimeter-wave research focuses mainly on frequency bands below 100 GHz, and the 100–300 GHz range still needs to be explored. D-band electromagnetic waves, with a frequency range of 110–170 GHz, lie in the crossover region between millimeter waves (30–300 GHz) and terahertz (THz) waves (100–10,000 GHz). The atmospheric window of the D-band is about 26 GHz wide, centered at about 140 GHz, and its propagation loss in air is smaller than that of the THz band. Compared with lower millimeter-wave frequencies, D-band signals offer wider bandwidth, narrower beams, and shorter wavelengths, resulting in greater transmission capacity and higher resolution. Research on D-band channel propagation characteristics will support new physical-layer technologies for 5.5G and even 6G systems. To this end, this paper studies the penetration characteristics of the D-band (140–160 GHz) for typical materials, such as glass, slate, vegetation, and wood, and finds that the penetration loss of D-band millimeter waves depends little on frequency within the band, while the shadowing loss of vegetation is as high as 10–20 dB. D-band millimeter waves can hardly penetrate 5 cm-thick slate or 2 cm-thick wood, and the penetration loss is positively related to the thickness of the blocking material. mmWave is known to increase the capacity of 5G networks and reduce latency.
Wider implementations of high-definition video conferencing, teleoperation, and industrial automation will benefit from the wider bandwidth of the mmWave spectrum, especially applications that require high precision. 5G mmWave will also enable each automated robot to generate or receive large amounts of data, as well as the high-density deployment of these robots in confined areas. From this point of view, good mmWave indoor coverage is necessary. Current measurement and modeling efforts are still under way, and further work is expected in several areas [9], including the following: current measurements focus on a few hotspot scenarios, so additional measurements are needed in other candidate frequency bands; existing models claim to support high bandwidths but rely on measurement systems that typically have smaller bandwidths and lower angular resolution, so measurements and data analysis with large bandwidths and large antenna arrays also need to be enhanced. In addition, the statistical parameters provided by current models are given as large data tables; establishing relationships between the small-scale parameters and frequency, link type, antenna height, and environmental parameters would require a rethinking of the modeling method. Materials and Methods Wireless path loss arises from the radiative spreading of electromagnetic waves and from the channel characteristics along the transmission path, so that the received power is smaller than the transmitted power. The free-space path loss (FSPL) model describes channel propagation in an ideal environment. Its expression is FSPL(dB) = 20 log10(4πdf/c), where d is the wireless transmission distance, f is the transmission frequency, and c is the speed of light. It can be observed from this formula that free-space path loss depends only on the transmission distance d and the transmission frequency f: when either the distance or the frequency doubles, the loss increases by 6 dB. The free-space propagation model applies to a wireless environment with an isotropic propagation medium (such as a vacuum), which does not exist in reality; it is an ideal model, although air approximates an isotropic medium. Moreover, atmospheric attenuation is closely related to altitude, air pressure, temperature, and water vapor density above the Earth.
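As a quick numerical check of the free-space term at D-band (the obstacle losses reported later come on top of this), the short sketch below evaluates the FSPL expression above for the 100 m link distance used in the experiments. The script and function name are illustrative and are not part of the measurement system described in this paper.

import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light in m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# 100 m link at 140 GHz and 160 GHz (the band measured in this paper)
for f_ghz in (140, 160):
    print(f"{f_ghz} GHz over 100 m: {fspl_db(100.0, f_ghz * 1e9):.1f} dB")

# Doubling distance or frequency adds about 6 dB, as stated above
print(fspl_db(200.0, 140e9) - fspl_db(100.0, 140e9))  # ~6.0 dB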
Figure 1 shows atmospheric absorption for free-space paths at sea level (z = 0 km) and at 10 km above sea level under dry conditions (water vapor density w = 0 g/m3) and standard conditions (w = 7.5 g/m3). It plots signal attenuation per kilometer as a function of frequency. The attenuation of electromagnetic waves varies with frequency because of water vapor and oxygen, whose resonances produce absorption peaks at several frequency points; the D-band lies between two of these absorption peaks, with attenuation ranging from about 0.01 dB/km to around 2 dB/km. The D-band is therefore suitable for millimeter-wave communication over distances of up to about 100 m. In a real environment, the path loss also depends on the presence or absence of occluders, the type and thickness of the occluders, and the alignment angle between transmitter and receiver. The FSPL model does not reflect these propagation characteristics. For obstacles of different materials in the transmission path, two modeling schemes, three-ray and four-ray, are used in Ref. [19]. With this scheme, the transmission model of a D-band millimeter-wave signal with obstacles over a 100 m link can be simplified as shown in Figure 2. The attenuation model in Figure 2 adapts to different materials and thicknesses. For example, the relative permittivity and permeability of iron are very large, so a D-band millimeter-wave signal suffers very large transmission loss behind a steel plate.
It can be considered that the D-band millimeter-wave signal cannot penetrate a steel plate, so the three-ray diffraction model is applied for steel-plate shielding. For relatively thin insulator materials, such as 5 mm-thick glass and 3 mm-thick wood, the penetration loss is not very large, between 2 dB and 5 dB, so the four-ray method is used for D-band modeling [20]. The four rays comprise three edge-diffraction paths and one transmission path. For thicker insulating materials, such as 5 cm-thick slate and 1.75 cm-thick wood, D-band mmWave signals also suffer very large transmission losses; we therefore treat the D-band mmWave signal as impenetrable in these cases as well, and the three-ray diffraction method is suitable for simulating thicker insulator materials. The penetration performance of millimeter waves degrades as frequency increases. Accordingly, in our experiments the penetration loss of millimeter waves in the 140–160 GHz band is large, and it is also influenced by the dielectric constant, thickness, and other parameters of the blocking material. The schematic diagram of signal transmission [21] is shown in Figure 3. In Figure 3, P_in is the incident signal power, P_out is the transmitted signal power, P_ref is the reflected signal power, and D is the thickness of the barrier. The fading coefficient can be obtained from the transmission attenuation caused by material penetration [21], in which ε_r, the relative permittivity of the barrier material, is the key parameter.
The relationship between P_in and P_out (that is, the penetration transmittance used below) can be expressed in terms of D, the thickness of the obstacle in meters; λ, the operating wavelength; tan δ, the loss tangent; and ε_r, the relative permittivity of the blocking material. For almost impenetrable barriers, such as slate, a separate expression relates P_in and P_out, from which the penetration loss follows. Conversely, the relationship between P_in and P_out can also be calculated from the penetration loss measured in the experiment. For a more intuitive comparison, we use the transmittance as the comparison index. It should be emphasized that the relative permittivity and loss tangent values have been reported in Refs. [20,[22][23][24][25][26][27][28], and we use these parameters directly for the measured D-band mmWave system. The real part of the permittivity of wood was measured with a quasi-optical Mach-Zehnder interferometer driven by a backward-wave oscillator, yielding 1.60–1.89 in the D-band [25]. However, some of the parameters were not measured in the D-band, so a certain error is introduced. The parameters are shown in Table 1. The experimental setup of the D-band millimeter-wave transmission system is shown in Figure 4. The signal generator (Agilent 83711B, 1–20 GHz) generates an IF signal from 11.4 GHz to 13.5 GHz, which is extended to the D-band (138 GHz to 163.2 GHz) after passing through a six-multiplier and a two-multiplier. D-band signals are transmitted into free space via a standard horn antenna (LB-6-25-A). After passing through the artificially placed shelter, the signal is received by a standard horn antenna of the same type as at the transmitter. The received signal is first amplified by an electric amplifier with a gain of 30 dB (the specific parameters of the LNA are shown in Table 2) and then down-converted to the intermediate frequency (1.2 GHz) in a mixer. At this point, the signal frequency is within the effective bandwidth of the digital oscilloscope. The IF signal is then amplified by an electric amplifier with a gain of 26 dB and finally captured by a digital oscilloscope (E4407B ESA-E, 9 kHz–26.5 GHz), from which the center frequency and received power of the IF signal are observed.
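The exact transmittance expressions from Refs. [20,21] are not reproduced above. As a rough stand-in, the sketch below uses the standard low-loss-dielectric approximation for absorption through a slab, alpha ≈ π·sqrt(ε_r)·tan δ / λ (nepers per meter), converted to dB over the thickness D. This is an assumption for illustration only, not necessarily the formula used in the paper, and it neglects reflection at the air-material interfaces; the material parameters in the example are placeholders, not the values from Table 1.

import math

def slab_penetration_loss_db(thickness_m: float, freq_hz: float,
                             eps_r: float, tan_delta: float) -> float:
    """Approximate absorption loss (dB) through a low-loss dielectric slab.

    Uses alpha ~= pi * sqrt(eps_r) * tan_delta / lambda0 (Np/m) and
    1 Np = 8.686 dB. Interface reflections are neglected.
    """
    lambda0 = 299_792_458.0 / freq_hz          # free-space wavelength (m)
    alpha_np_per_m = math.pi * math.sqrt(eps_r) * tan_delta / lambda0
    return 8.686 * alpha_np_per_m * thickness_m

# Example with hypothetical material parameters (eps_r and tan_delta are
# placeholders, not the values from Table 1 of the paper):
loss = slab_penetration_loss_db(0.005, 150e9, eps_r=6.0, tan_delta=0.02)
print(f"5 mm slab at 150 GHz: approximately {loss:.1f} dB of absorption loss")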
The horn antenna operates from 110 GHz to 170 GHz with a gain of 25 dBi and a half-power beamwidth (HPBW) of 9° in the E-plane and 10° in the H-plane. The sensitivity of the receiver is -56 dBm. Photos of the experimental setup at the transmitter and receiver are shown in Figure 5. The D-band measurement system is implemented in an indoor environment, in the underground garage of Building 2, Jiangwan Campus, Fudan University. The antenna height of the transceivers is 1 m, the distance between them is 100 m, and they are horizontally aligned. We used a laser pointer to ensure that the antennas on the Tx side and Rx side are aligned. First, no obstructions are placed between the transceivers, and the signal power received without obstruction is measured to serve as a benchmark. Then, the occluders (detailed in Table 3) are placed in the path, and their position is adjusted continuously to minimize the power loss, completing the calibration. We repeat the measurement of the same parameter 10 times at each frequency point to improve the measurement accuracy and take the average value as the result at that point. Finally, the received signal power through each obstacle is subtracted from the reference value to obtain the penetration attenuation caused mainly by the obstacle. To compare the transmission loss of different materials, the occluders were replaced manually to obtain the different transmission loss results.
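The averaging-and-subtraction procedure described above can be summarized in a few lines; the sketch below is an illustrative reconstruction (the array names and readings are hypothetical), not the authors' processing script.

import statistics

def penetration_loss_db(reference_dbm: list[float], occluded_dbm: list[float]) -> float:
    """Average the repeated readings at one frequency point and subtract the
    occluded power from the unobstructed benchmark to obtain the loss in dB."""
    return statistics.mean(reference_dbm) - statistics.mean(occluded_dbm)

# Hypothetical readings (dBm) at one frequency point, 10 repeats each
reference = [-30.1, -30.0, -29.9, -30.2, -30.0, -30.1, -29.8, -30.0, -30.1, -30.0]
with_glass = [-34.3, -34.5, -34.2, -34.6, -34.4, -34.3, -34.5, -34.4, -34.2, -34.5]
print(f"Penetration loss: {penetration_loss_db(reference, with_glass):.1f} dB")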
Figure 6 is a schematic diagram of the relative positions of the occluders and the transceivers in the experiment, including vegetation, a wood board, regular single-layer glass, and slate. The vegetation consists of potted green plants from the laboratory; when more than one pot is used, all the plants are stacked together, as shown in Figure 7. Figure 8 and Table 4 show the experimental measurement results of placing occluders of different materials in the 100-m D-band mmWave wireless transmission experiment. The results show that the attenuation of the D-band millimeter-wave signal by obstacles is positively correlated with the thickness or number of obstacles: signal attenuation increases with the thickness and number of obstacles. For example, in Figure 8a, the penetration loss of one pot of vegetation is about 12 dB, that of two pots is about 16 dB, and that of three pots is greater than 21 dB. Because of the irregular distribution of vegetation stems and leaves, the signal power is greatly reduced when the transmission path is blocked by vegetation. It is important to emphasize that attenuation varies greatly with the irregular nature of the vegetative medium, which means that penetration losses depend strongly on vegetation type, density, and actual water content. In Figure 8b, the loss of the 0.3 cm-thick board is about 2 dB, the loss of the 0.6 cm-thick board is about 6 dB, and the loss of the 1.75 cm-thick board is greater than 23 dB. Penetration loss appears to be proportional to the thickness of the board, and D-band mmWave signals cannot penetrate boards thicker than 1.75 cm. As shown in Figure 8c, the loss of 5 mm-thick glass is about 4 dB and the loss of 10 mm-thick glass is about 9 dB; path loss is positively related to the thickness of the shielding glass. From this result, it can be observed that the penetration loss of the D-band through thin glass is not very large. The loss of the D-band signal passing through the 5 cm-thick slate is about 20 dB in Figure 8d, indicating that the D-band millimeter-wave signal can hardly penetrate 5 cm-thick slate. However, in all four cases, for the same obstacle, the penetration loss changes little as the signal frequency increases; the penetration loss of the D-band is essentially independent of frequency within the band.
Discussion Next, we examine the transmittance of our D-band transmission system as a function of transmission frequency and of the thickness and type of blocking material. The theoretical transmittance is calculated according to the modified theoretical model above and compared with the measurements, as shown in Figure 9. The experimental measurements are basically consistent with the theoretical values. Because vegetation, unlike the other three typical building materials, has no readily available parameters such as relative permittivity, its theoretical transmittance is not listed here; in Figure 9a, only the experimentally measured transmittance of vegetation is plotted. As shown in Figure 9b,c, for both the experimental and theoretical values, the transmittance decreases and the loss increases as the thickness of the board and the glass increases. However, the experimentally measured transmittance is greater than the theoretical value. This is caused by multipath propagation effects, which depend on factors such as the location of the obstacle and the surrounding environment. Since this experiment was conducted indoors, the received interference included reflections from walls and objects around the experimental site, diffraction from occluders, and scattering from vegetation. Moreover, although the amplitude of the signal reaching the receiving antenna is small when obstacles block the path, the system error is relatively large for such small received signals; this error can be reduced by increasing the measurement accuracy of the system. In Figure 9d, for both the experimental measurement and the theoretical analysis, the calculated transmittance of slate is extremely low, and it can be observed that the D-band millimeter wave can hardly penetrate 5 cm-thick slate. According to general theory, the shielding effect of a material includes two parts, reflection shielding and absorption shielding, and the penetration loss of slate combines these two factors. Reflective shielding is caused by the impedance mismatch of the propagating wave, and absorptive shielding is caused by heat loss in the hydrates inside the concrete and the steel mesh. Conclusions This paper discusses the penetration loss of D-band millimeter waves when shielded by various materials, such as vegetation, board, glass, and slate, based on blocking measurement experiments in an indoor environment. The experimental results show that, under the given experimental conditions, the average transmission attenuation of D-band millimeter waves caused by one pot of vegetation is about 12 dB, implying that the receiving antenna receives only about 6.5 percent of the transmit power. As the amount of vegetation increases, the attenuation of the D-band millimeter-wave signal increases sharply.
In our experiment, when the amount of vegetation was increased to three pots, the receiving end could hardly receive the D-band millimeter-wave signal. For the wooden board, the transmittance decreases as the thickness of the board increases: millimeter waves can penetrate thin boards, but when the thickness of the board exceeds 1 cm, D-band millimeter waves can hardly penetrate the obstacle. The average transmission attenuation of the thin glass shield to the D-band millimeter wave is about 4.4 dB; that is, only about 35% of the transmitted power is received by the receiving antenna. The loss of the D-band signal passing through 5 cm-thick slate is about 20 dB, indicating that it can barely penetrate 5 cm-thick slate. The experimental measurements are generally consistent with the theoretical values. The measurement results for the various materials show that the influence of occlusions on D-band millimeter-wave transmission cannot be ignored, which has potential application prospects for estimating the channel attenuation characteristics of 5G or 6G systems with obstructions. In addition, we will explore the transmission loss at more frequency points and for more materials in future work.
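The percentage figures quoted in these conclusions follow directly from the dB losses; a minimal check of that conversion (assuming the received fraction relates to loss L in dB as 10^(-L/10)) is sketched below, and it reproduces the roughly 6.5% and 35% figures quoted above.

def received_fraction(loss_db: float) -> float:
    """Fraction of transmitted power received for a given penetration loss."""
    return 10.0 ** (-loss_db / 10.0)

# Reported losses from the conclusions above
for label, loss in [("one pot of vegetation", 12.0),
                    ("thin glass", 4.4),
                    ("5 cm slate", 20.0)]:
    print(f"{label}: {loss:.1f} dB -> {received_fraction(loss):.1%} of transmit power")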
PACS Integration of Semiautomated Imaging Software Improves Day-to-Day MS Disease Activity Detection BACKGROUND AND PURPOSE: The standard for evaluating interval radiologic activity in MS, side-by-side MR imaging comparison, is restricted by its time-consuming nature and limited sensitivity. VisTarsier, a semiautomated software for comparing volumetric FLAIR sequences, has shown better disease-activity detection than conventional comparison in retrospective studies. Our objective was to determine whether implementing this software in day-to-day practice would show similar efficacy. MATERIALS AND METHODS: VisTarsier created an additional coregistered image series for reporting a color-coded disease-activity change map for every new MS MR imaging brain study that contained volumetric FLAIR sequences. All other MS studies, including those generated during software-maintenance periods, were interpreted with side-by-side comparison only. The number of new lesions reported with software assistance was compared with those observed with traditional assessment in a generalized linear mixed model. Questionnaires were sent to participating radiologists to evaluate the perceived day-to-day impact of the software. RESULTS: Nine hundred six study pairs from 538 patients during 2 years were included. The semiautomated software was used in 841 study pairs, while the remaining 65 used conventional comparison only. Twenty percent of software-aided studies reported having new lesions versus 9% with standard comparison only. The use of this software was associated with an odds ratio of 4.15 for detection of new or enlarging lesions (P = .040), and 86.9% of respondents from the survey found that the software saved at least 2–5 minutes per scan report. CONCLUSIONS: VisTarsier can be implemented in real-world clinical settings with good acceptance and preservation of accuracy demonstrated in a retrospective environment. With these therapies, no evidence of disease activity has become a new treatment target, making disease monitoring more important than ever. 3,4 MR imaging is the most commonly used surrogate marker of MS activity. 5,6 Radiologists typically evaluate MR imaging studies for the development of new MS lesions by comparing the current study with a prior study in adjacent view ports on a monitor, usually in multiple planes, which we will refer to as conventional side-by-side comparison (CSSC). The sensitivity of such a comparison is degraded by multiple human and technologic factors, including the quality of MR imaging protocols and the expertise of radiologists evaluating the examinations. [7][8][9] Although it is routinely accepted in phase II and III trials, the demanding nature and relative inaccuracy of visual inspection of MRIs compared with novel methods including computer-assisted lesion detection pose an important limitation to utility in clinical practice. 10,11 Indeed, computer-assisted lesion-detection software has shown promise by increasing the specificity and sensitivity of MS disease-activity monitoring. 8,12,13 One such software, VisTarsier (VT; open-source available at github.com/mhcad/vistarsier) has been validated in a series of retrospective studies, allowing radiologists, regardless of training level, to detect up to 3 times as many new MS lesions on monitoring scans compared with CSSC. 
8,9,14 These validation studies, however, were performed on a dedicated research workstation with axial, coronal, sagittal, and semitransparent 3D "overview" images, rather than on a conventional PACS workstation during normal clinical practice. In this prospective, observational cohort study, we sought to share our experience implementing this assistive software in the Royal Melbourne Hospital PACS and to demonstrate that, once implemented, it would augment radiologists' capacity to detect increases in MS disease activity compared with CSSC. Software Integration into PACS Every new MR imaging brain demyelination protocol study generated using 3T magnets (Tim Trio, 12-channel head coil; Siemens, Erlangen, Germany) for a patient with a previous study obtained with the same MR imaging protocol was automatically processed by the software. The automated process (Fig 1) is triggered as soon as a study is verified in our radiology information system (Karisma; Kestral, Perth, Australia) by the radiographer, with the radiology information system automatically sending a completion HL7 message (NextGen Connect; NextGen Healthcare, Irvine, California) to the software virtual machine (Xeon Processor E5645, 8 VCPU cores @ 2.40 GHz, 8 GB DDR3 RAM, 500 GB SATA3 7200 RPM hard disk drive, no 3D/GPU acceleration [Intel, Santa Clara, California]; Windows 7 Professional 64-bit operating system [Microsoft, Redmond, Washington]). The software then queries the PACS and searches the study for a series that is deemed compatible on the basis of a list of possible series descriptors (eg, FLAIR sagittal 3D). If a compatible series exists in the new study, the software then queries the PACS for previous MR imaging studies of the same patient. Once a compatible series is found in the most recent previous MR imaging, the 2 series are retrieved and processed. Software processing includes brain-surface extraction and masking of the volumetric FLAIR sequences, followed by intensity normalization, 6-df registration, automated change detection, and reslicing to generate 3 new coregistered series: 1) a resliced prior-study sagittal FLAIR (approximately 160 images, preserving original resolution, one 16-bit grayscale channel); 2) an increased signal intensity color map (approximately 160 images, 256 × 256, three 8-bit RGB channels); and 3) a decreased signal intensity color map (approximately 160 images, 256 × 256, three 8-bit RGB channels). Once processing is complete, the virtual machine sends the 3 series (typical total size approximately 150 megabytes) back to the new study as additional series. These series are then available as part of the normal clinical study for staff radiologists to report in real time in the usual PACS environment (see the On-line Figure for an example of the output series generated by VisTarsier). Most important, these change maps do not replace routine sequences and reformats but are in addition to routine imaging. They merely draw the attention of reporting radiologists to areas that may represent new or enlarging lesions (orange). These areas are then assessed normally on routine imaging, and a determination is made as to whether they represent disease activity. Participants and Data Collection In July 2015, the software underwent a soft launch within our tertiary hospital's PACS (ethics approval number QA2015161).
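To make the processing chain described above concrete, here is a minimal, illustrative sketch of change-map generation between two already coregistered, brain-extracted FLAIR volumes, using nibabel and NumPy. This is not the VisTarsier implementation (which also performs brain-surface extraction, 6-df registration, and reslicing); the file names and the z-score threshold are assumptions for illustration only.

import nibabel as nib
import numpy as np

def normalize(vol: np.ndarray) -> np.ndarray:
    """Z-score intensity normalization over nonzero (brain) voxels."""
    brain = vol[vol > 0]
    return (vol - brain.mean()) / brain.std()

# Hypothetical file names for two coregistered, brain-extracted FLAIR studies
prior = normalize(nib.load("flair_prior_coreg.nii.gz").get_fdata())
current_img = nib.load("flair_current.nii.gz")
current = normalize(current_img.get_fdata())

diff = current - prior                          # positive where signal increased
increase_map = np.where(diff > 1.5, diff, 0)    # candidate new/enlarging lesions
decrease_map = np.where(diff < -1.5, -diff, 0)  # candidate resolving lesions

nib.save(nib.Nifti1Image(increase_map.astype(np.float32), current_img.affine), "increase_map.nii.gz")
nib.save(nib.Nifti1Image(decrease_map.astype(np.float32), current_img.affine), "decrease_map.nii.gz")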
Eligibility criteria included the following: consecutive studies in patients with a confirmed diagnosis of multiple sclerosis (as per the 2017 revised McDonald criteria) and an MR imaging including a volumetric FLAIR sequence (FOV = 250, 160 sections, section thickness = 0.98 mm, matrix = 258 × 258, in-plane resolution = 0.97 mm, TR = 5000 ms, TE = 350 ms, TI = 1800 ms, 72° selective inversion recovery magnetic preparation). 15 (Fig 1 legend: imaging studies for patients with MS are processed by the VisTarsier software in a virtual machine once they are signed off in the radiology information system [RIS] by the radiographer; successful processing requires all systems to be operational and compatible sequences to be available.) For all studies not meeting the automated criteria for software assistance, only CSSC was used by staff radiologists to report MS disease progression. At our hospital, the software runs as a virtual machine on a server that hosts several other research and nonessential clinical services. Thus, upgrades, power outages, and hospital network reconfigurations lead to a small amount of downtime. In cases in which studies were performed during these times, or when other software-based failures illustrated in Fig 1 occurred, VT-assisted series were not automatically generated, and only CSSC was used by reporting radiologists. Unfortunately, a detailed breakdown of the various causes of nonprocessing could not be collated prospectively and cannot be established retrospectively. We collected imaging reports for all studies performed with the above protocol prospectively from July 1, 2015, to June 30, 2017. All imaging reports for studies meeting the inclusion and exclusion criteria were assessed for written evidence of interval radiologic disease activity. Disease activity was defined as the presence of new or enlarging lesions as stated in the report body and/or conclusion available to the referring clinician. Demographic and clinical details for each patient were included in the study. After study completion, a brief survey was sent to assess the real-world impact of the software on the day-to-day work of reporting radiologists and trainees. The results of this survey will be summarized without statistical analysis. Statistical Analysis Assessed demographic and clerical variables included the following: the presence of VT-generated series, age at scanning, sex, and reporting radiologist's training level. Assessed clinical variables included disease-modifying drug use, Expanded Disability Status Scale (EDSS), time from diagnosis to the date of the scan, and annualized rate of MR imaging scans (ie, the number of MR imaging scans per year). Because available MS subtype data were incomplete, EDSS, time since diagnosis, and annualized scan rates were used as surrogate markers of disease activity and trajectory. The distributions of the variables were compared between the groups using t tests and χ2 tests. Generalized linear mixed models were computed to assess the difference in rates of disease progression with the software compared with CSSC. For the primary analysis, interval radiologic activity was entered as the dependent variable. All other assessed variables were entered as independent variables. Continuous variables were centered and scaled. A random intercept term for each participant was specified to allow multiple observations per person. Parameter estimation was performed using maximum likelihood.
Because the dependent variable was binary, a binomial response family was used with a logit-link function. We also performed an additional sensitivity analysis with stepwise forward variable selection for the multivariable generalized linear mixed model. An estimated odds ratio was computed for each variable. A 2-sided critical P value of .05 was used to assess statistical significance. Confidence intervals at the 95% level are presented when relevant. Data were analyzed with R statistical and computing software (http://www.r-project.org). 16 RESULTS During the 2-year study period, 906 study pairs for 538 patients met the inclusion criteria. VT was automatically activated in 841 study pairs. This activation occurred only when both studies included a volumetric 3D-FLAIR sequence, the software was active at the time of image migration to PACS, and both studies had the same series labeling. Thus, all studies protocoled for MS follow-up should have been automatically processed by VT, and the instances in which this was not the case were random, resulting from technical reasons unrelated to patient factors (eg, the server being restarted, Fig 1). These random cases accounted for the remaining 65 study pairs, which allowed CSSC only. Processing times for the software-generated series varied depending on a few factors, including ease of brain-surface extraction and the workload of the server due to additional services (average processing time = 5 minutes 11 seconds ± 22 seconds). Clinical and demographic data are summarized in Table 1, with both groups showing a similar distribution of key variables. Age at scan, sex, and EDSS were comparable across the CSSC and software-assisted groups. As shown in Table 2, pharmacologic treatment was also comparable across groups. In the first year following the introduction of the software, 20.49% (95% CI, 16.36%-24.63%) of studies using the software reported new lesions versus 9.76% (95% CI, 0.67%-18.84%) with CSSC. Similarly, in the second year, 20.21% (95% CI, 16.6%-23.82%) of studies using the software reported new lesions. The fully adjusted multivariable generalized linear mixed model found a greater probability of identifying new/enlarging lesions compared with CSSC, with an estimated odds ratio of 4.15 (95% CI, 1.07-16.14; P = .04). It was adjusted for age at scanning, sex, whether a scan was reported by a staff radiologist or a radiology resident, EDSS, time since diagnosis, and annualized rate of MR imaging scans. The On-line Table outlines the results of each partially adjusted model computed as part of our sensitivity analysis. These highlight the sustained effect of the software when adjusting for each additional variable independently. The Akaike information criterion (AIC) for the fully adjusted model was 586.8. Of the 39 individuals reporting MR imaging to whom the impact assessment survey was sent, 23 responded, of whom eight (34.8%) were radiology residents and thirteen (56.5%) were staff radiologists, including eight (34.8%) fellowship-trained neuroradiologists and two (8.7%) radiology fellows. Twenty-one (91.3%) reported always using the software when available, and 22 (95.7%) felt comfortable using it as an additional series for reporting. Twenty-one (91.3%) believed it saved them at least 2-5 minutes of reporting time per scan. None of the respondents believed the software added to their reporting time, and 21 (91.3%) stated that they would like to see it implemented in other areas soon.
DISCUSSION Semiautomated imaging software has shown great promise in the field of MS disease monitoring. [17][18][19] Earlier studies of VT concluded that it allowed higher lesion detection with improved interreader reliability and decreased reporting times when used by readers of all radiology training levels (ie, ranging from medical student to fellowship-trained neuroradiologist) compared with their performance using CSSC. 8,9,14 The main caveats of prior research in this area, however, included the retrospective design, artificial research conditions, and/or relatively small sample sizes. In this translational study, we used a previously retrospectively validated open-source software for MS follow-up. We used prospectively acquired data, accounting for several potential demographic and clinical confounders. We sought to demonstrate the efficacy of semiautomated imaging when implemented in a real-world clinical setting and to share our experience integrating one such software in our daily practice. We used a permissive research design to mitigate any distortion created by a research setting. Department staff were given an in-service brief and informal overview of how the software worked and of prior validation; then radiologists were left to work as they would outside a trial environment. There was no pressure to use the software, to pay attention to or record their usage pattern, or to focus on time. We thought that any such intervention would potentially give a misleading picture of what another department could expect if it were to implement this sort of assistive software. More than 800 of 906 new hospital scans had VT-assisted series automatically generated and available to the reporting radiologist in real time, with only a few minutes elapsing before the color-mapped image series became available on the PACS for reporting. This feature yielded a >4-fold increase in new lesion detection compared with scans reported using CSSC. While <10% of studies using CSSC showed disease progression, it was reported in >20% of those using software assistance. In a poststudy survey, almost all radiologists and radiology trainees used VT and thought that it cut down on their reporting times for MS comparison studies. The results observed in this prospective study of >800 scans demonstrate an effect equivalent to that seen in our earlier retrospective studies. Similar demographic data were seen across both study groups and were specifically included in our analysis model to limit the amount of confounding. The software was the only assessed variable (among age, sex, disease state and time course, reporting radiologist, and annualized rate of scanning) associated with a difference in lesion detection. MR imaging remains the most widely used and reliable surrogate marker to monitor disease activity in patients in the real-world clinical setting. 5,6,8 Physical and psychological disabilities seen in MS are associated with the number of demyelinating lesions, some of which can be visualized on neuroimaging with FLAIR and T2-weighted sequences. [20][21][22] Recently, the importance of accurate interval MR imaging activity has become even greater because postcontrast imaging is no longer recommended for routine follow-up, largely due to concerns about the presence of residual contrast in the brain after repeat exposure to gadolinium-based agents. 23,24 Semiautomated imaging represents a growing field of MS and radiology research, with methods ranging from assisted lesion assessment to brain volumetric analysis.
6,19,25 Similar growth is seen with an extension of computer-assisted detection called "radiomics," which converts images to minable data for deep learning. 26 Image coregistration is a crucial component of traditional MR imaging comparison. Although image coregistration is routinely performed on a PACS, minor changes in alignment are inevitable without reslicing. [27][28][29][30] Thus, even apart from the color-change maps, the automated reslicing and coregistration provided by the software rapidly and effectively deliver a known prerequisite for optimal image comparison and assessment. After we incorporated VT-assisted imaging into our hospital's daily MR imaging reporting activities, our findings were in line with other smaller prospective studies that have shown an absolute increase of 13% (22% relative increase) in new MS lesion detection using similar semiautomated software. 19 Perhaps more important, implementation of this software in our department was largely seamless and did not appreciably increase transfer times to PACS or data memory burden. Similarly, a post hoc survey of staff in our department showed an overwhelmingly positive response to the integration of the software in our daily practice. Limitations The main limitation in this study is the relatively smaller number of scans in the CSSC group. Because our PACS is programmed to automatically process new images with the software whenever possible, the number of unaided scans was limited to the days when VT was unavailable, such as when servers were undergoing maintenance. The factors contributing to the group size discrepancy were random and were not associated with the probability of MR imaging activity. This discrepancy was also further addressed by the statistical design of our analysis. For those wishing to implement a similar system in their practice, the mentioned downtime could be addressed by having a dedicated server for the software. Series description and naming in PACS was another potential source of exclusion from automated VisTarsier integration. Our protocols included 3D-FLAIR sequence series that were all named "FLAIR 3D Sag"; however, at times this name could be changed manually, resulting in a matching study not being found. This could be addressed by raising awareness of the importance of standardized series naming. Unfortunately, the reason that a given scan from the CSSC cohort did not meet the automated criteria was not recorded prospectively, and it could not be reconstructed retrospectively. Although a survey sent to all reporting doctors within the radiology department yielded highly positive results in terms of ease of use and time-saving capabilities of the software, we did not track reporting times as in previous retrospective studies. Unfortunately, these data were not retrospectively mineable on our department's PACS. The qualitative nature of these data thus makes them an adjunct, rather than a statistically rigorous end point. Last, the inherent limitations of a pragmatic real-world prospective observational cohort study mean that we cannot explicitly control how the studies are read by radiologists, and we do not have the ability to generate inter- or intrareader descriptive statistics. These limitations have, however, previously been addressed in retrospective validation studies. 8 This is, in our opinion, offset by being able to describe the effect of implementing VisTarsier in a routine clinical environment, which is more likely to be of relevance to other institutions.
CONCLUSIONS Semiautomated lesion-detection software improves the standard of reporting of new or enlarging T2/FLAIR hyperintense lesions in patients with multiple sclerosis. VisTarsier has improved reporting standards in cerebral MR imaging of patients with MS using standardized volumetric sequences and uniform scanning protocols. Most important, implementing this software in our practice's PACS was relatively seamless and very well received by staff. Future research should validate its capacity to improve reporting in a more heterogeneous sample of images. It should also seek to measure reporting times behind the scenes as a surrogate for workflow efficiency and to demonstrate a change in disease management as a marker of clinical relevance. Computer-aided detection systems promise to improve radiologists' ability to detect disease activity in patients with MS.
Efficacy and safety of a Chinese herbal formula Maxing Ganshi Decoction in children with community-acquired pneumonia: A randomized, double-blind, placebo-controlled, multicenter trial Background: As one of the most commonly used Chinese medicine formulas in the management of respiratory diseases, Maxing Ganshi Decoction (MGD) has been shown to improve the clinical symptoms of pneumonia. To evaluate the efficacy and safety of MGD in treating children with community-acquired pneumonia (CAP), we conducted this clinical trial. Methods: A randomized, double-blind, placebo-controlled, multicenter trial was conducted at 3 study sites in Tianjin, China. MGD or placebo was randomly given to patients aged 3–6 years with onset of CAP within 48 h. Change in disease efficacy during the study period (measured as recovery, significant effect, improvement, or no effect) was evaluated as the primary outcome. Time from enrollment to fever resolution was assessed as the secondary outcome. Adverse events were analyzed for the safety evaluation. Results: A total of 71 patients (36 in the MGD group and 35 in the placebo group) were randomized and completed the whole study. The patient demographics and other characteristics at baseline were similar between the 2 groups (p > 0.05). After 10 days of intervention, the proportion of recovered and significantly effective patients was increased significantly in the MGD group (34.85% [95% CI, 12.44%–57.26%]; p < 0.05) compared with the control group. In addition, the symptom score of the MGD group was lowered significantly (p < 0.001). The estimated time to fever resolution in the MGD group was also reduced compared with the control group (p < 0.05). During the whole study, no side effects were observed in either the MGD or the control group. Conclusion: MGD was effective in improving disease efficacy and clinical symptoms and in reducing time to fever resolution in children with CAP, suggesting that MGD may be used as an alternative therapy in the treatment of childhood CAP. Clinical Trial Registration: http://www.chictr.org.cn/showproj.aspx?proj=5612, identifier 13003955. Introduction As a leading cause of mortality and morbidity in children, community-acquired pneumonia (CAP) imposes a serious medical burden in both developing and developed countries and has become a major public health concern in China (Zar et al., 2017; Zhou et al., 2019). Among children under 5 years of age, pneumonia is estimated to have caused the deaths of approximately 0.921 million children in 2015, and in China 153.2 per 100,000 live births die of this disease (Liu et al., 2016). Symptoms of childhood CAP include cough or rapid breathing, while chest pain or shortness of breath can also occur. Age, extent of lung involvement, and the organism causing the infection are considered determinants of symptom severity (Shah et al., 2017). With respect to pathogens, viruses, notably respiratory syncytial virus, are the most common cause of CAP in children under 5 years old. Among the bacterial pathogens causing childhood CAP, Streptococcus pneumoniae is the most common; other important bacterial causes include Mycoplasma pneumoniae and Chlamydophila pneumoniae (Leung et al., 2018). For pneumonia caused by bacteria, antibiotics are generally the only choice of treatment, along with additional supportive and symptomatic treatment.
However, the mortality of hospitalized CAP patients remains high despite antibiotic treatment (Restrepo et al., 2008). In addition, with rising antimicrobial resistance rates and the adverse effects of childhood antibiotic use on the host microbiome, it is necessary to implement rational antibiotic prescribing as well as to seek alternative therapies (Tramper-Stranders, 2018). Traditional Chinese medicine (TCM), one of the primary branches of complementary and alternative medicine, has been used in the treatment of respiratory diseases for thousands of years in China and other Asian countries (Xu et al., 2013). As one of the most commonly used TCM formulas in the management of respiratory diseases, Maxing Ganshi Decoction (MGD) has been demonstrated to have clear efficacy and to significantly improve the clinical symptoms of children with pneumonia (Li et al., 2009; Wang et al., 2014; Liu et al., 2019). MGD, also named Maxing Shigan Decoction, is a classic Chinese herbal formula. The composition of MGD includes Ephedra sinica Stapf, Prunus armeniaca L., Gypsum fibrosum, and Glycyrrhiza uralensis Fisch. ex DC. A literature study on TCM prevention and treatment of CAP concluded that 16 TCM formulas have been applied in randomized clinical trials (RCTs) as well as case series, among which MGD-related clinical trials accounted for the largest share, indicating that MGD has great potential in the treatment of childhood CAP (Li et al., 2017). However, most current clinical trials are of low methodological quality, and high-quality RCTs are greatly needed. To further evaluate the clinical efficacy and safety of MGD for CAP in children, along with the dose-related response during the whole treatment process, we conducted a randomized, double-blind, placebo-controlled, multicenter trial. Study design This study was a prospective, randomized, double-blind, placebo-controlled, multicenter trial conducted between 8 January 2014 and 21 July 2014. It was approved by the Ethics Committee of Guang'anmen Hospital, China Academy of Chinese Medical Sciences (2010EC026-02). Participants were recruited at 3 study sites in Tianjin, China, namely First Teaching Hospital of Tianjin University of TCM, Second Teaching Hospital of Tianjin University of TCM, and Binhai New Area Hangu Hospital of TCM, Tianjin. The legal guardians of every participant signed written informed consent before enrollment. Participant enrollment Participants enrolled in this study were aged 3–6 years, presented with onset of CAP within 48 h, and were admitted to hospital. Enrollment was based on the following inclusion criteria: 1) They had been diagnosed with childhood CAP [diagnosed clinically and radiologically according to the clinical practice guideline of the Pediatric Infectious Diseases Society and the Infectious Diseases Society of America (Bradley et al., 2011; Ambroggio et al., 2018)] and were hospitalized. 2) Body temperature was greater than 37.3°C within 24 h before the first visit. 3) Weight ≥14 kg. 4) White blood cells (WBC) ≤ 10 × 10^9/L and the proportion of neutrophils (NEU%) less than 70%. 5) C-reactive protein (CRP) was within the normal range (<5 mg/L). Participants were excluded if they met one or more of the following criteria: 1) They had comorbidities such as heart failure, respiratory failure, toxic encephalopathy, or exudative pleurisy. 2) Definite bacterial infection was detected at the time of enrollment.
3) They had severe primary diseases of the heart, liver, kidney, or hematopoietic system, or presented with psychiatric illness. 4) They could not cooperate or were participating in clinical trials of other drugs.

Drug administration

In this study, the TCM formula MGD was prescribed for the treatment of childhood CAP. MGD originated from a Chinese medical classic, the Treatise on Cold-Induced Febrile Diseases, in the Han Dynasty (3rd century AD). The composition and dosage of MGD in our study included 4 herbs: Ephedra sinica Stapf 6 g, Prunus armeniaca L. 6 g, Gypsum fibrosum 24 g, and Glycyrrhiza uralensis Fisch. ex DC. 6 g, as listed in Table 1. Herbs used for the decoction, and the placebo for the control group, were supplied by Yanjing Herb Pharmaceutical Co., Ltd. (Beijing, China) and were distributed to the First Teaching Hospital of Tianjin University of TCM. The herbs were quality controlled in accordance with the China Pharmacopoeia (2005 edition) (Commision, 2005). The dosage of the herbs was within the safe dosage range specified by the Chinese Pharmacopoeia and was informed by clinical practice experience. Before being used in the study, all herbs were examined for heavy metals, microbial contamination, and residual pesticides, and all results met Chinese safety standards.

Both MGD and placebo were administered in the form of a decoction. The MGD decoction was prepared according to a standardized procedure by a trained technician at the First Teaching Hospital of Tianjin University of TCM and distributed to the other 2 study sites. The herbs were weighed, soaked in 10 times their weight of cold water for 1 h, boiled for 50 min, filtered, and concentrated to a final volume of 150 ml of decoction. The placebo was prepared according to a standard production process: 5 g of cornflour, 16 g of brown sugar, and an appropriate amount of water were mixed and boiled for 20 min; the solution was filtered, 0.005 g of sucrose octaacetate was added, and the mixture was then concentrated to 1,000 ml and divided into 150-ml packs. The appearance and taste of the placebo were similar to those of the MGD decoction. The MGD and placebo decoctions were taken orally, 50 ml three times a day after meals. All participants were hospitalized so that they could be quarantined and closely observed, and they were followed until discharge.

Eligible participants were randomized 1:1 using random-number tables to receive the MGD intervention or placebo for 10 consecutive days. The presence and severity of primary symptoms (fever, cough, wheezing, phlegm, and lung signs) and secondary symptoms (thirst, dry stool, yellow urine, tongue, and pulse) were recorded daily, with reference to the TCM Pediatric Disease Syndrome Diagnosis and Curative Effect Standard (State Administration of Traditional Chinese Medicine, 1994). Primary symptoms were scored as 0 (none), 2 (mild), 4 (moderate), and 6 (severe); secondary symptoms were scored as 0 (none), 1 (mild), 2 (moderate), and 3 (severe). Adverse events were also recorded daily. On day 6 and day 10 of treatment, a comprehensive efficacy evaluation was performed. Within the 10 days, those whose CAP had completely resolved based on the efficacy evaluation exited the intervention and completed the trial. However, those who did not respond to the treatment (no significant changes, or exacerbation of symptoms and signs, and a main symptom score decrease of <33%) could be withdrawn from the trial and given another effective treatment.
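As a concrete illustration of the daily scoring just described, the short Python sketch below (a hypothetical helper, not the trial's actual software) sums the 0/2/4/6 primary-symptom points and the 0/1/2/3 secondary-symptom points into a total daily symptom score.

```python
# Minimal sketch of the daily symptom scoring (hypothetical helper).
# Primary symptoms are scored 0/2/4/6 and secondary symptoms 0/1/2/3.
PRIMARY = ("fever", "cough", "wheezing", "phlegm", "lung signs")
SECONDARY = ("thirst", "dry stool", "yellow urine", "tongue", "pulse")

PRIMARY_POINTS = {"none": 0, "mild": 2, "moderate": 4, "severe": 6}
SECONDARY_POINTS = {"none": 0, "mild": 1, "moderate": 2, "severe": 3}

def daily_symptom_score(grades: dict) -> int:
    """Sum the daily symptom score from a {symptom: severity_grade} record."""
    score = 0
    for symptom, grade in grades.items():
        if symptom in PRIMARY:
            score += PRIMARY_POINTS[grade]
        elif symptom in SECONDARY:
            score += SECONDARY_POINTS[grade]
        else:
            raise ValueError(f"unknown symptom: {symptom}")
    return score

# Example record for one day of observation
example = {"fever": "moderate", "cough": "severe", "wheezing": "mild",
           "phlegm": "moderate", "lung signs": "moderate",
           "thirst": "mild", "dry stool": "none", "yellow urine": "mild",
           "tongue": "mild", "pulse": "mild"}
print(daily_symptom_score(example))  # 4+6+2+4+4 + 1+0+1+1+1 = 24
```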
Study evaluation and outcomes

The primary outcome was the change in disease efficacy during the study period, measured as recovery (disappearance of rales on lung auscultation, complete absorption of inflammation on chest X-ray examination, and a reduction rate of the symptom score ≥90%), significant effect (disappearance of rales on lung auscultation, substantial absorption of inflammation on chest X-ray examination, and 67% ≤ reduction rate of the symptom score <90%), improvement (reduction of rales on lung auscultation, partial absorption of inflammation on chest X-ray examination, and 33% ≤ reduction rate of the symptom score <67%), or no effect (no significant changes or aggravation of symptoms and signs, and a reduction rate of the symptom score <33%). The symptom score was measured by the improvement in symptoms, including fever, cough, wheezing, phlegm, lung signs, thirst, dry stool, yellow urine, tongue, and pulse, during the study period. The secondary outcome was the time from enrollment to fever resolution (body temperature ≤37.2°C for ≥24 h). Adverse events were recorded for the safety evaluation.

Statistical analysis

All data were analyzed using SAS version 8.1. For demographic and clinical characteristics at baseline, quantitative variables were reported as mean ± standard deviation (SD) and qualitative variables as frequencies and percentages. The chi-square test, Fisher exact test, or Wilcoxon test was used as appropriate, and the Cochran-Mantel-Haenszel test was used when the influence of multiple centers and other factors was considered. For the analysis of time from enrollment to fever resolution, the Kaplan-Meier method was used to estimate the time at which the cumulative incidence reached 25%, 50%, and 75%, and the log-rank test was used to compare the two groups. A p value less than 0.05 was considered statistically significant.

Patient characteristics

A total of 80 patients were enrolled from the 3 sites; 71 patients completed the whole study and were randomly assigned to the two groups (36 MGD; 35 placebo), and 46.48% of the patients were male. The disposition of patients is shown in Figure 1. Patient demographics and other baseline characteristics were similar between the 2 groups, as shown in Table 2. No comorbidities or definite bacterial infections were found at enrollment.

Clinical outcomes

Considering changes in disease efficacy, after 6 days of intervention no patient had recovered; 15 patients (41.67%) in the MGD group and 6 (17.14%) in the control group showed a significant effect; 12 (33.33%) in the MGD group and 15 (42.86%) in the control group showed improvement; and 9 (25.00%) in the MGD group and 14 (40.00%) in the control group showed no effect. The proportion of patients who recovered or showed a significant effect was significantly higher in the MGD group than in the control group (difference, 24.53% [95% CI, 3.30%-45.76%]; p < 0.05). After 10 days of intervention, 8 patients (22.22%) in the MGD group and 4 (11.43%) in the control group had recovered from CAP; 21 (58.33%) and 12 (34.29%) showed a significant effect in the MGD and control groups, respectively; 3 (8.33%) improved in the MGD group, whereas 14 (40.00%) improved in the control group; and 4 (11.11%) in the MGD group and 5 (14.29%) in the control group showed no effect.
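To make the primary-outcome categories concrete, the following Python sketch classifies a patient by the symptom-score reduction-rate thresholds defined above. It is a hypothetical helper: the full definition additionally requires the lung auscultation and chest X-ray findings, which are only noted in comments here.

```python
# Minimal sketch of the efficacy classification for the primary outcome
# (hypothetical helper; reduction-rate thresholds only, see comments).
def reduction_rate(baseline_score: float, current_score: float) -> float:
    """Fractional reduction of the total symptom score from baseline."""
    return (baseline_score - current_score) / baseline_score

def efficacy_category(baseline_score: float, current_score: float) -> str:
    r = reduction_rate(baseline_score, current_score)
    if r >= 0.90:    # plus rales resolved and inflammation fully absorbed
        return "recovery"
    elif r >= 0.67:  # plus rales resolved and inflammation substantially absorbed
        return "significant effect"
    elif r >= 0.33:  # plus rales reduced and inflammation partially absorbed
        return "improvement"
    else:
        return "no effect"

# Example: a baseline score of 19.67 (the MGD group mean) falling to 5
# corresponds to a ~74.6% reduction.
print(efficacy_category(19.67, 5.0))  # significant effect
```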
Compared with the control group, the proportion of patients who recovered or showed a significant effect was significantly higher in the MGD group (difference, 34.85% [95% CI, 12.44%-57.26%]; p < 0.05). The mean baseline symptom score was 19.67 ± 3.40 in the MGD group and 19.80 ± 3.19 in the control group; the difference was not statistically significant (p > 0.05). After treatment, the symptom score in the MGD group was significantly lower than that in the control group (p < 0.001). For individual symptoms, both groups improved significantly in fever, cough, wheezing, phlegm, lung signs, thirst, dry stool, yellow urine, tongue, and pulse. The between-group comparison showed no difference in fever, wheezing, thirst, dry stool, or yellow urine after treatment, whereas cough, phlegm, lung signs, tongue, and pulse were significantly lower in the MGD group than in the control group (Table 3).

The effects of the interventions on time from enrollment to fever resolution are shown in Table 4. The 25% incidence of time to fever resolution was 0.5 (. to .) in the MGD group and 0.5 (0.5-1.0) in the control group, the median time to fever resolution was 0.5 (. to .) and 1.0 (0.5-1.5), and the 75% incidence was 1.0 (0.5-1.5) and 2.0 (1.0-2.5), respectively. The estimated time to fever resolution in the MGD group was reduced compared with the control group (p < 0.05).

Safety evaluation

During the whole study, no side effects were observed in either the MGD or the control group.

Discussion

CAP is one of the main health problems in China, and its incidence peaks in children younger than 5 years (Sun et al., 2020b). A cross-sectional study of childhood CAP in China showed that about 99.4% of children received antibiotic treatment and 23.3% received Chinese medicine treatment (Mi et al., 2018). Compared with antibiotic treatment, TCM treatment is not subject to the problem of drug resistance, and Chinese medicine is increasingly widely used in the clinical management of CAP. On the basis of the effectiveness of TCM treatment, China has established a practical guideline to direct clinical practice; in the guideline, the recommended Chinese medicine prescriptions, such as MGD, Yinqiaosan, and Tanreqing injection, are all reported to be effective in over 65% of cases. In this RCT, we observed the effectiveness of the TCM prescription MGD in childhood CAP: after the MGD intervention, the proportion of patients who recovered or showed a significant effect increased significantly compared with the control group, and the symptom score and time to fever resolution were reduced significantly.

Because Chinese medicines are mainly derived from natural plants, animals, or minerals, their ingredients are usually complex and diverse, and the mechanism of TCM treatments for pneumonia involves multiple targets and multiple pathways. MGD is composed of four Chinese medicines, namely Ephedra sinica Stapf, Prunus armeniaca L., Glycyrrhiza uralensis Fisch. ex DC., and Gypsum fibrosum. A study found that Ephedra sinica Stapf and Prunus armeniaca L. are the most important herb pair for treating pneumonia; quercetin, kaempferol, and luteolin are the main active ingredients in this herb pair, and the
involved treatment mechanisms may include effects on inflammation and the immune response, cell apoptosis, and hypoxia injury (Xia et al., 2021). Another study found that the combination of Ephedra sinica Stapf and Glycyrrhiza uralensis Fisch. ex DC. may exert immune-regulating, organ-protective, and antiviral effects by regulating the PI3K/Akt signaling pathway to treat pneumonia (Li et al., 2021). Data mining and systematic pharmacology studies on pneumonia have shown that Ephedra sinica Stapf, Prunus armeniaca L., and Glycyrrhiza uralensis Fisch. ex DC. are effective Chinese medicines for the treatment of Mycoplasma pneumonia; further analysis of 93 active ingredients in Chinese medicines revealed that TNF, β2AR, and PTGS2 play key roles in the anti-pneumonia effect, and that epithelial cell apoptosis (defensive barrier function), GPCR signal transduction (improvement of symptoms), and immune pathways (innate signal transduction and the adaptive Th17 response) are important therapeutic mechanisms (Sun et al., 2020a). One in vitro study of MGD on serum of children with Mycoplasma pneumonia also showed that MGD suppressed IL-1β, IL-18, and TNF-α, down-regulated NLRP3, pro-IL-1β, Caspase-1, pro-Caspase-1, and GSDMD-N in infected cultures, and mitigated NLRP3 overexpression-induced pyroptosis (Liu et al., 2021).

At present, there are few safety assessments of MGD in the clinical treatment of pediatric pneumonia. In our study, no adverse events were found at this dose of MGD. However, large multicenter clinical trials are needed to systematically evaluate the safety of MGD in vulnerable groups such as children, and further research is needed to confirm the safe dose range of MGD for clinical use in children.

Our study has limitations. First, if severe shortness of breath, excessive phlegm, high fever, or bacterial co-infection occurred and MGD did not work during the study, the use of combined drugs, including antibiotics, expectorants, and antipyretics, was determined by the physician based on the patient's symptoms; because these are themselves anti-pneumonia drugs, the efficacy of MGD was difficult to isolate and needs further in vitro and in vivo study. Because antipyretic drugs were used infrequently (mostly on the first day of treatment), they likely had little effect on temperature changes. Second, although we only included Mycoplasma-infected CAP at the time of enrollment, a small number of patients were found to have bacterial infections during the intervention, so antibiotics were used, which may have affected our study outcomes.

Conclusion

In conclusion, our study found that the Chinese medicine MGD can be used as an alternative therapy in the treatment of childhood CAP, especially CAP with Mycoplasma infection.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving human participants were reviewed and approved by the Ethics Committee of Guang'anmen Hospital, China Academy of Chinese Medical Sciences. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Written informed consent was obtained from the minors' legal guardian/next of kin for the publication of any potentially identifiable images or data included in this article.
Complementary analyses of transcriptome and proteome revealed the formation mechanism of ethyl acetate, ethanol and organic acids in Kluyveromyces marxianus L1-1 in Chinese fermented acid rice soup

Background: Recently, more chemical and biotechnological applications have been found for Kluyveromyces marxianus than for Saccharomyces cerevisiae in the food field because it shows advantageous metabolic features in the production of flavor components of interest. However, most studies have focused on the involvement of Kluyveromyces marxianus in ethanol synthesis in dairy products. Our study aims to clarify the formation mechanism of ethyl acetate and organic acids in acid rice soup inoculated with Kluyveromyces marxianus.

Results: The concentration of ethyl acetate was higher than those of ethanol and organic acids in fermented acid rice soup inoculated with Kluyveromyces marxianus. Up-regulated genes/proteins, including ADH1, ADH2, ADH6, ATF1, ACCT, and TES1, and the down-regulated ALD family, involved in glycolysis/gluconeogenesis and pyruvate metabolism, played crucial roles in the formation of ethyl acetate and other esters. In addition, up-regulated genes/proteins involved in starch and sucrose metabolism, amino sugar and nucleotide sugar metabolism, glycolysis/gluconeogenesis, the TCA cycle, and pyruvate metabolism played important roles in the formation of organic acids, ethanol, and esters.

Conclusion: Our results reveal the formation mechanism of ethyl acetate and organic acids in acid rice soup inoculated with K. marxianus L1-1. This study provides a basis for improving the aroma and taste of fermented foods and reveals the formation mechanisms of flavors in non-dairy products.

Background

Recently, more chemical and biotechnological applications have been found for non-Saccharomyces yeasts than for Saccharomyces cerevisiae in the food field because they show advantageous metabolic features in the production of flavor components of interest. Kluyveromyces marxianus (K. marxianus), a non-Saccharomyces yeast, has various advantages over mesophilic yeasts, such as a fast growth rate, a wide spectrum of substrates, and reduced cooling cost [1]. It is a haploid, homothallic, thermotolerant, hemiascomycetous yeast and is closely related to Kluyveromyces lactis. Unlike S. cerevisiae, K. marxianus can assimilate lactose, glucose, and xylose. Notably, compared with S. cerevisiae and K. lactis, K. marxianus has a more significant intrinsic capability to ferment various sugars at high temperatures [2]. K. marxianus grows over a wide temperature range, from 4 to 52 °C, indicating that it is thermotolerant and can be applied in both low-temperature and high-temperature processes; the latter can also prevent the growth of heat-sensitive microorganisms [3]. Furthermore, the probiotic properties of K. marxianus have been extensively explored [4]. Interestingly, K. marxianus shows great potential in the production of esters, which are key aromatic compounds in the food industry [5]. Ethyl acetate and other short-chain volatile esters are used as industrial solvents and perfume ingredients.
It was reported that the global market demand for ethyl acetate is more than 1.7 million tons per year [6]. Therefore, producing ethyl acetate is of great significance in the food production industry. Three synthesis routes of ethyl acetate have been reported [7]: hemiacetal oxidation (spontaneous formation of a hemiacetal from acetaldehyde and ethanol followed by enzymatic oxidation), condensation of ethanol and acetyl-CoA, and esterification of acetate and ethanol (the reverse synthesis of ethyl acetate from ethanol and acetate). However, it is difficult to produce ethyl acetate through the esterification of ethanol and acetate because the ester-hydrolyzing activity of esterase is much higher than its ester-synthesizing activity [5]. A previous report demonstrated that ethyl acetate synthesis is characterized by the direct utilization of ethanol as a substrate or by the hemiacetal reaction between sugar and acetaldehyde [5]. Ethyl acetate, as an aroma component, plays an increasingly important role in foods and other fields. However, the synthesis mechanism of ethyl acetate by K. marxianus in rice-acid is unknown.

Most studies on K. marxianus have focused on its role in alcoholic fermentation in the fuel industry and the food field [8]; the production of ethyl acetate or organic acids with K. marxianus has seldom been reported. The metabolism of K. marxianus is less well understood than that of S. cerevisiae, and the formation mechanisms of ethyl acetate, other esters, and flavor compounds in fermented foods inoculated with K. marxianus are not yet completely understood.

Multi-omics analysis technologies have increasingly been used to explore the synthesis, metabolism, and accumulation of nutrients and flavor components in foods. RNA sequencing is a high-throughput sequencing technology with many advantages, such as rich information, low data redundancy, and accurate analysis. In addition, it does not require a genomic background and can analyze the transcriptional expression of multiple materials [9]. Proteomics can enhance the understanding of the biochemical processes of flavor development in fermented foods [10]. However, owing to non-coding RNA regulation, protein degradation, and protein secretion, the number of differentially expressed proteins (DEPs) is often smaller than that of differentially expressed genes (DEGs). In addition, the detectable protein content, protease hydrolysis, and operational errors in proteomics detection also limit the application of proteomic analysis [11]. Many factors lead to differences between the results of transcriptomic and proteomic analyses. Notably, a previous report showed that sequence features contributed 15.2-26.2% of the total variation of mRNA and proteins [12]. Therefore, the combination of transcriptomics and proteomics may reveal the flavor formation mechanism of K. marxianus L1-1 in acid rice soup (rice-acid).

In order to clarify the formation mechanism of ethyl acetate and organic acids in rice-acid inoculated with Kluyveromyces marxianus L1-1, volatile compounds and organic acids in rice-acid were measured in this study. In addition, we analyzed the differentially expressed genes and proteins of K. marxianus L1-1 in rice-acid on the key fermentation days (the first and third days) through the complementary analysis of mRNA sequencing and proteomics. Through GO enrichment analysis and KEGG pathway enrichment analysis, the formation mechanisms of ethyl acetate and organic acids in K. marxianus L1-1 in rice-acid were explored.

Results

Variations of the quantities of K.
marxianus L1-1 in the fermentation process of rice-acid

K. marxianus, which is able to utilize various sugars, may be a suitable microbe for lignocellulose hydrolysis and grain matrices at 30 °C [13]. In this study, the fermentation temperature was set at 30 °C based on our previous study. The number of K. marxianus L1-1 cells changed significantly during the fermentation process: it increased most rapidly from 0 d to 1 d (Fig. 1a), decreased from 1 d to 2 d, and, interestingly, increased again from 2 d to 3 d. These variations may be related to the oxygen content in the fermentation tank. A limited supply of oxygen (the terminal electron acceptor) also initiated the synthesis of some esters, but it primarily forced ethanol production during the growth of K. marxianus DSM 5422 [14]. We explored the formation of ethyl acetate, other esters, and organic acids based on the growth of K. marxianus L1-1. Specifically, we investigated the key fermentation stages of rice-acid with K. marxianus L1-1 at 1 d and 3 d and analyzed the formation mechanism of flavors in K. marxianus L1-1.

Variations of key volatile compounds in the fermentation process of rice-acid

The variations of flavor compounds at 1 d and 3 d were explored. The basic conditions for the formation of ethyl acetate are acetic acid, ethanol, and some key enzymes, which are discussed in this study. Among the detected volatile compounds, 5 key acids, 13 key alcohols, and 12 key esters were found (Table 1). From 1 d to 3 d, the ethyl acetate content increased from 162.98 ± 5.02 to 241.37 ± 6.20 g/kg, the ethanol content increased from 36.11 ± 4.54 to 52.68 ± 14.45 g/kg, and the acetic acid content increased from 0.21 ± 0.06 to 32.67 ± 1.57 g/kg. Acetic acid, 2-phenylethyl ester; 2-methyl-propanoic acid, ethyl ester; and 9 other esters were also found. Ethyl acetate, ethanol, and acetic acid are important volatile compounds because of their high contents and low odor thresholds [15]. Ethyl acetate made a significant contribution to the fruity flavor and promoted the overall flavor balance of rice-acid. Moreover, ethyl acetate exhibits beneficial properties, such as being closely linked to antioxidant function in fruit [16]. The formation of esters in the alcoholization stage was closely related to the enzyme activities of yeasts. Therefore, it is necessary to explore the formation mechanism of ethyl acetate and other esters.

Table 1. The key volatile compounds (mg/L) and organic acids (mg/L) in fermented rice-acid inoculated with K. marxianus L1-1 at 1 d and 3 d.

Variations of organic acids in the fermentation process of rice-acid

Seven organic acids were found in rice-acid: L-lactic acid, acetic acid, malic acid, succinic acid, citric acid, oxalic acid, and tartaric acid (Table 1). Among the 7 organic acids, L-lactic acid had the highest content, increasing from 3.01 ± 0.61 g/kg on day 1 to 6.02 ± 1.67 g/kg on day 3. The contents of the other 6 organic acids did not increase significantly during fermentation; however, although their contents were low, they interacted with each other to promote the sourness and taste of rice-acid. In our study, both volatile components and organic acids affected the flavor of rice-acid. Lactic acid exists in two isomeric forms, L-(+)- and D-(−)-lactic acid.
It is produced by microbial fermentation and chemical synthesis and is used in the food, cosmetic, pharmaceutical, and chemical industries [17]. In this study, we mainly focused on the increase in L-(+)-lactic acid caused by microbial fermentation. L-(+)-lactic acid with high enantiomeric purity is required in many industries, especially the medical, pharmaceutical, and food industries, since D-(−)-lactic acid is harmful to humans and can cause decalcification or acidosis [18]. L-(+)-lactic acid not only promoted the flavor of rice-acid but also has an important effect on health. We therefore further explored the genes, proteins, and enzymes associated with the formation of organic acids.

The transcriptome sequencing statistics are summarized in Table 2. All high-quality clean reads were used for gene alignment, and more than 88% of the clean reads were mapped to the reference database by Bowtie 2. Using |log2FC| > 1.5 and FDR < 0.05, we identified 1390 DEGs (788 up-regulated and 602 down-regulated) between Y 1 d and Y 3 d (Fig. 2a). GO analysis of the DEGs showed enrichment across the three major categories of cellular components, biological processes, and molecular functions (Fig. 3). In terms of cellular components, most of the DEGs were enriched in the nucleolus, the small-subunit processome, and the preribosome (large-subunit precursor). In terms of biological processes, most of the DEGs were enriched in endonucleolytic cleavage in ITS1 to separate SSU-rRNA from 5.8S rRNA and LSU-rRNA from the tricistronic rRNA transcript (SSU-rRNA, 5.8S rRNA, and LSU-rRNA). In terms of molecular functions, the DEGs were enriched in structural constituent of ribosome, snoRNA binding, and rRNA binding.

All DEGs were subjected to KEGG pathway enrichment analysis, which assigned the DEGs of Y 1 d and Y 3 d to metabolic pathways. At least 4231 genes were identified, including 1390 DEGs annotated to 279 KEGG pathways (Table S1). The significantly enriched pathways (p-value < 0.05 and q-value < 0.05) were ribosome, cytosolic DNA-sensing pathway, RNA polymerase, DNA replication, ribosome biogenesis in eukaryotes, pyrimidine metabolism, and purine metabolism. Importantly, the crucial KEGG pathways related to ethyl acetate and organic acids included amino sugar and nucleotide sugar metabolism, starch and sucrose metabolism, glycolysis/gluconeogenesis, pyruvate metabolism, and the TCA cycle. These pathways reflected the different roles of K. marxianus L1-1 in the formation of the flavor and taste of rice-acid; ethyl acetate and organic acids promoted the maturity of the flavor and taste of rice-acid inoculated with K. marxianus L1-1.

Key genes were found to be involved in the ethyl acetate metabolism process in glycolysis, including GLK1 (K00844), GPD2 (K00134), 3 genes of the ALD family (K00129 and 2 K00128), 6 genes of the ADH family (K13953, K13953, and 4 genes of K13953), and the ATF1 protein (BAO42650, BAO42650, and BAO42650) (Table 3). The genes GLK1, GPD2, the ALD family, the ADH family, and ATF1 encode glucokinase-1, glyceraldehyde 3-phosphate dehydrogenase, aldehyde dehydrogenase, alcohol dehydrogenase, and alcohol O-acetyltransferase, respectively. In addition, the gene ERG10, encoding acetyl-CoA C-acetyltransferase, was found to be involved in the ethyl acetate metabolism process in pyruvate metabolism. The key genes related to organic acids found in pyruvate metabolism included CYB2 (K00101), DLD1 (K00102), 2 genes of the MDH family (2 K00026), and FUM1 (K01679), which respectively encode L-lactate dehydrogenase, D-lactate dehydrogenase, malate dehydrogenase, and fumarate hydratase.
The key genes related to organic acids found in the citrate cycle included 2 genes of the CIT family (2 K01647), 3 genes of the SDH family (K00234, K00236, and K00237), and 2 genes of MDH2 (K00026), which respectively encode citrate synthase, succinate dehydrogenase (ubiquinone) flavoprotein subunit, and malate dehydrogenase. The key KEGG pathways related to the flavor formation mechanism based on the DEGs are discussed below. Note: the FC value represents the fold change of differential expression of mRNA and protein.

Proteomics characterization

The total proteins were extracted from the Y 1 d and Y 3 d samples at the filling stage and subjected to 4D label-free proteomics analysis to complement the transcriptome analysis. According to the protein abundance levels, 610 proteins were identified as DEPs at p-value < 0.05, including 135 proteins with increased abundance and 475 proteins with decreased abundance (Fig. 2b), with a difference ratio of >1.5. The number of up-regulated proteins was smaller than that of down-regulated proteins because the growth of K. marxianus L1-1 was inhibited by the acidic environment in the later fermentation stage of rice-acid. To obtain a global picture of proteomic changes, at least 2937 proteins were identified, and 187 DEPs were annotated by GO analysis and KEGG analysis (Table S2). In the GO functional analysis, 187 proteins were annotated to 59 GO terms. The results showed that the distributions of DEPs across functional classifications were consistent with the distributions of the transcription levels of DEGs (Fig. 4). In terms of cellular components, most of the up-regulated DEPs were enriched in the mitochondrion, the tricarboxylic acid cycle enzyme complex, and the mitochondrial matrix. In terms of biological processes, most of the up-regulated DEPs were enriched in the citrate metabolic process, the tricarboxylic acid metabolic process, and the galactose catabolic process. In terms of molecular functions, the up-regulated DEPs were enriched in oxidoreductase activity, L-malate dehydrogenase activity, malate dehydrogenase activity, and alcohol dehydrogenase activity. The 6 GO terms involving alcohol dehydrogenase activity had the smallest p-values (p-value < 0.01) and were related to the formation of ethyl acetate.

The 187 DEPs were annotated to 30 KEGG pathways (Table S2). Most of the up-regulated KEGG pathways identified by proteomics were related to the formation of ethyl acetate and organic acids, which reasonably explained the formation of the flavor and taste of rice-acid inoculated with K. marxianus L1-1. Meanwhile, the down-regulated KEGG pathways obtained by proteomic analysis were mostly related to the growth of K. marxianus L1-1 (Fig. 5b), supporting the choice of the third day as the end point of rice-acid fermentation. According to the pathway analysis (Fig. 5a), many proteins took part in various metabolic pathways, including amino sugar and nucleotide sugar metabolism, starch and sucrose metabolism, glycolysis/gluconeogenesis, pyruvate metabolism, and the citrate cycle, which might affect many aspects of the metabolism of K. marxianus L1-1 during the key fermentation period of rice-acid. However, the different KEGG pathways obtained by proteomic analysis played different roles in the formation of the flavor and taste of rice-acid. Ethyl acetate is the most important volatile compound in rice-acid inoculated with K. marxianus L1-1.
Four key proteins were found to be involved in the ethyl acetate metabolism process in glycolysis: GLK1 (BAO37673), GAP1 (BAO40242), 2 ADH1 proteins (BAO40648 and BAO40126), and ADH6 (BAO42650) (Table 2). GLK1, GAP1, ADH1, and ADH6 encode glucokinase-1, glyceraldehyde-3-phosphate dehydrogenase 1, alcohol dehydrogenase, and NADP-dependent alcohol dehydrogenase, respectively. In addition, four key proteins were found to be involved in the organic acid metabolic process in pyruvate metabolism: 2 MDH family proteins (BAO41458 and BAO40079), FUM1 (BAO42339), and LYS21 (BAO38393). MDH, FUM1, and LYS21 encode malate dehydrogenase, fumarate hydratase, and homocitrate synthase, respectively. Four key proteins were found to be involved in the metabolism of organic acids in the citrate cycle: CIT1 (BAO38563), SDH1 (BAO38924), MDH2 (BAO40415), and FUM1 (BAO42339), which respectively encode citrate synthase, succinate dehydrogenase (ubiquinone) flavoprotein subunit, malate dehydrogenase, and fumarate hydratase. The proteins related to the metabolism of organic acids were consistent with the genes, indicating that the combination of transcriptomics and proteomics is a useful tool to analyze the formation of ethyl acetate and organic acids during the key fermentation period of rice-acid inoculated with K. marxianus L1-1.

Correlation analysis of transcriptome and proteome data

The transcriptomic and proteomic analysis results are shown in Fig. 1b-1d. At least 4231 genes were identified, and 1390 DEGs were annotated to 279 KEGG pathways by transcriptomic analysis. In addition, 2937 proteins were identified, and 610 DEPs were annotated to 30 KEGG pathways by proteomic analysis. The Pearson correlation coefficient was 0.3761, and the results of the two analysis methods differed considerably. Therefore, the combination of transcriptome and proteome data could be an effective way to reveal the flavor formation mechanism in rice-acid inoculated with K. marxianus L1-1. The 5 KEGG pathways related to the synthesis of ethyl acetate and organic acids are shown in Table 3.

Discussion

Rice-acid, a cereal-based fermented food used for seasoning, is famous in China. However, the traditional rice-acid process requires two rounds of fermentation, and the long fermentation time may lead to an unstable and non-persistent flavor. In this study, we adopted a novel inoculation strain (K. marxianus L1-1); inoculation with K. marxianus L1-1 gave the fermented rice-acid a unique flavor and shortened the fermentation period from 40 d to 4 d. Our previous study showed that this fermentation method could achieve a high-quality flavor. However, the formation mechanism of the flavor in rice-acid inoculated with K. marxianus L1-1 was not clear. In this study, RNA-seq and 4D label-free technologies were used to explore the genes and proteins underlying the formation mechanism of ethyl acetate and organic acids in rice-acid inoculated with K. marxianus L1-1. DEGs and DEPs were identified and annotated to key KEGG metabolism pathways, including starch and sucrose metabolism, amino sugar and nucleotide sugar metabolism, glycolysis/gluconeogenesis, pyruvate metabolism, and the TCA cycle. Furthermore, the transcriptome and proteome results were combined to reveal the formation mechanism of ethyl acetate and organic acids. We provide a comprehensive interpretation and exact measurements of the gene and protein expression involved in the changes of the flavor of rice-acid for the first time.
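As an illustration of the correlation analysis just described, the minimal Python sketch below computes a Pearson correlation between per-gene transcriptome and proteome log2 fold changes and assigns the kind of four-quadrant labels used in such comparisons. The column names and toy values are hypothetical; the study itself performed this analysis with Origin Pro 2018.

```python
# Sketch of the transcriptome-proteome correlation and four-quadrant
# classification (toy data and hypothetical column names).
import pandas as pd
from scipy.stats import pearsonr

# Toy table of per-gene log2 fold changes (Y 3 d vs Y 1 d); real inputs would
# come from the RNA-seq and 4D label-free proteomics pipelines.
df = pd.DataFrame({
    "gene":        ["GLK1", "ADH1", "ADH6", "MDH2", "CYB2", "ALD"],
    "log2fc_mrna": [ 1.8,    2.1,    1.6,    1.7,    1.9,   -1.7],
    "log2fc_prot": [13.4,    1.9,    1.8,    1.6,   -0.2,   -1.6],
})

r, p = pearsonr(df["log2fc_mrna"], df["log2fc_prot"])
print(f"Pearson r = {r:.4f} (p = {p:.3f})")

def quadrant(row, cut=1.5):
    """Assign a four-quadrant label using the |log2FC| > 1.5 threshold."""
    up_m, up_p = row["log2fc_mrna"] > cut, row["log2fc_prot"] > cut
    dn_m, dn_p = row["log2fc_mrna"] < -cut, row["log2fc_prot"] < -cut
    if up_m and up_p:
        return "both up"
    if dn_m and dn_p:
        return "both down"
    if (up_m or dn_m) and (up_p or dn_p):
        return "opposite"
    return "significant in only one omics layer"

df["quadrant"] = df.apply(quadrant, axis=1)
print(df[["gene", "quadrant"]])
```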
Up-regulated proteins and genes involved in starch and sucrose metabolism and in amino sugar and nucleotide sugar metabolism provide the energy for the formation of acids, ethanol and esters

Starch and sucrose metabolism provides an important transient pool in the sugar accumulation pathways. The genes and proteins that could provide energy for the formation of the flavor and taste of rice-acid include GLK1, encoding glucokinase-1, and KLMA_10051, encoding hexokinase, in the KEGG pathways of starch and sucrose metabolism and amino sugar and nucleotide sugar metabolism (Table 3). The protein GLK1 is a glycolysis-initiating enzyme [19] and showed a 13.384-log2FC up-regulation. The gene KLMA_10051 encodes a hexokinase and showed a 1.83-log2FC up-regulation. Up-regulated GLK1 indicated an increase in NADPH and more energy generated in K. marxianus L1-1. Lane et al. [20] also reported that catabolite repression could be reduced by modulating the expression of glucose-phosphorylating enzymes, such as GLK1 and hexokinase (HK). In addition, SCW4 (KLMA_30608), encoding glucan 1,3-beta-glucosidase, showed an up-regulation of 6.986 log2FC in the proteomic analysis and of 1.53 log2FC in the transcriptomic analysis. SCW4 (KLMA_30608), encoding glucan 1,3-beta-glucosidase, might have hydrolytic activity and provide energy to promote the formation of the flavor and taste of rice-acid. A previous report also indicated that a 1,3-β-glucosidase, BGL1, purified from the pilei showed hydrolytic activity toward laminarin, laminarioligosaccharides including laminaribiose, and p-nitrophenyl-β-d-glucopyranoside (pNPG) [21]. Another study also showed that 1,3-β-glucosidase had a certain hydrolytic activity towards gentiobiose, cellobiose, and related polysaccharides [22]. Therefore, the hydrolysis of sugar compounds may provide substrates and energy for orderly metabolism. Eventually, with hydrolytic enzymes, carbohydrates were easily converted into glucose in K. marxianus L1-1. Our fermentation was a static fermentation; K. marxianus seemed to enhance glucose metabolism and shift to fermentation, implying a connection between oxygen and glucose-sensing pathways [13]. In a future study, we will explore the oxygen content in the fermenter and its correlation with the flavor of rice-acid.

However, genes and proteins were differentially expressed on the different fermentation days (the first and third days) of rice-acid inoculated with K. marxianus L1-1. The correlation between the transcriptomic and proteomic data was not perfect, and there are some differences in the results of the two methods [23]. Protein expression may be affected by various factors at the translational stage [12], and the efficiency of protein biosynthesis and accumulation depends on various factors in the biological regulation process. Therefore, we adopted the complementary analysis of transcriptomics and proteomics to reveal the formation mechanism of ethyl acetate, ethanol, and organic acids in Chinese rice-acid inoculated with K. marxianus.

Up-regulated proteins and genes involved in glycolysis/gluconeogenesis and pyruvate metabolism played an important role in the formation of ethyl acetate and other esters

Volatile esters are secondary metabolites produced by yeasts and fungi during the fermentation of fermented foods [24]. Interestingly, our study showed that, compared with alcohols, more esters existed in rice-acid inoculated with K. marxianus L1-1.
This difference contributed to the flavor formation in rice-acid; ethyl acetate was one of the most important volatile esters in rice-acid. Both DEGs and DEPs in glycolysis/gluconeogenesis and pyruvate metabolism played an important role in the formation of ethyl acetate and other esters (Fig. 7). Glycolysis is the cytosolic pathway that converts glucose to pyruvate. ADH1, ADH2, ADH3 (alcohol dehydrogenase), and ADH6 (NADP-dependent alcohol dehydrogenase 6), involved in glycolysis/gluconeogenesis, encode the proteins BAO40648, BAO40126, and BAO42650, respectively (Table 3 and Fig. 6a). This type of esterification is carried out from primary and secondary alcohols, aldehydes, or ketones [25]. ADH enzymes catalyze the synthesis of ethyl acetate through the oxidation of a hemiacetal. ADH3 was only found in the transcriptomic analysis in this study. A previous study showed that ADH2 is constitutively expressed during aerobic growth with glucose as a carbon source, whereas ADH3 expression increases as cells reach the stationary phase; these results were in agreement with previous analyses of the K. marxianus transcriptome [26]. Our study showed that the ADH family was important in the complementary analysis of transcriptomics and proteomics. The ADH family had a significant effect on the formation of ethyl acetate and ethanol, and the up-regulated genes and proteins suggested that ADH1, ADH2, and ADH6 were the dominant enzymes in ethanol production when glucose was used as a carbon source. Among the identified up-regulated genes and proteins, ADH2 and ADH6 were critical in the reduction of acetaldehyde to ethanol (a precursor of ethyl acetate). This result was consistent with a previous study [26]. Another report showed that alcohol acetyltransferase is associated with intracellular lipid particles in the cytosol [27]. We analyzed the up-regulated genes and proteins of the ADH family and found that they promoted the formation of ethanol and ethyl acetate. Herein, one possible formation pathway was the oxidation of the hemiacetal (the spontaneous product of ethanol and acetaldehyde) under the catalytic action of ADH. Although the two methods of transcriptomics and proteomics showed some differences, all the up-regulated ADH genes and proteins indicated that alternative biosynthetic routes of ethyl acetate exist in K. marxianus L1-1.

In addition, the formation of ethyl acetate in rice-acid fermentation inoculated with K. marxianus L1-1 was mainly catalyzed by two enzymes, ATF1 (alcohol O-acetyltransferase) and TES1 (acyl-coenzyme A thioesterase), in this study (Table 3 and Fig. 6b), which possess an acyl-coenzyme A:ethanol O-acyltransferase (AEATase) activity as well as esterase activity. We found that ATF1 could use ACCT (acetyl-CoA C-acetyltransferase) to synthesize acetate esters, including ethyl acetate, acetic acid 2-phenylethyl ester, and isobutyl acetate (Fig. 7). A previous analysis of ATF1p also found that acetyl-CoA is used to synthesize acetate esters [28,29]. It has been demonstrated that acetyl-CoA plays a crucial role in generating more NADH and more ATP from glucose [13]. A previous study postulated that ester biosynthesis in K. marxianus may also occur through homologs of the medium-chain acyltransferases from S. cerevisiae, the isoamyl acetate-hydrolyzing esterase, the N-acetyltransferase Sli1, and/or the alcohol O-acetyltransferase [30].
Interestingly, we also found another important ester family, including propanoic acid, 2-methyl-propanoic acid ethyl ester, 2-propenoic acid ethenyl ester, and propanoic acid 2-methyl-2-phenylethyl ester, and the increase in propanoic acid ethyl ester was closely related to propanoate metabolism. Propanoate is expected to be converted to acetyl-CoA or pyruvate, as suggested by examination of likely propanoate metabolism; it has also been demonstrated that propanoate is converted to acetyl-CoA in three classes of mycolate [31]. However, the differentially expressed gene ATF1 was only found in the transcriptomic analysis. The combined method of transcriptomics and proteomics could therefore provide a more reasonable explanation for the formation of ethyl acetate and propanoic acid ethyl ester.

Furthermore, other genes and proteins related to the formation of ethyl acetate and other esters included GLK1 (glucokinase-1), KLMA_10051 (hexokinase), GPD2 (glyceraldehyde 3-phosphate dehydrogenase), and the ALD family (aldehyde dehydrogenase) (Table 3 and Fig. 7). The protein GLK1 is a glycolysis-initiating enzyme, which promoted the formation of ethyl acetate and other esters and also played an important role in glycolysis/gluconeogenesis and pyruvate metabolism. Glyceraldehyde-3-phosphate and pyruvate were the intermediate products of the glycolysis process and provided the carbon skeleton for volatile compound biosynthesis in rice-acid. A recent transcriptomic study suggested that a β-glucosidase homolog in K. marxianus may be responsible for cellobiose degradation [13]. Interestingly, the genes of the ALD family (aldehyde dehydrogenase) were down-regulated, indicating that more energy was used by the ADH family encoding alcohol dehydrogenase. In further studies, we will explore the changes in aldehyde dehydrogenase. Different genes are involved in the formation of ethyl acetate in K. marxianus and S. cerevisiae, although the metabolism of ethyl acetate in K. marxianus has seldom been reported. A previous study suggested that the biosynthesis of acetate esters could be interpreted through the antagonistic activity of the esterase IAH1 [32], but we did not find a reverse esterase playing a role in the formation of ethyl acetate and other esters in our study (Fig. 6c). A previous study also reported that no esterase was involved in the biosynthesis of ethyl acetate or other esters in K. marxianus CBS 6556 [26].

Up-regulated genes and proteins involved in the TCA cycle and pyruvate metabolism played important roles in the formation of organic acids

The mitochondrial TCA cycle, also known as the Krebs cycle, is one of the major pathways of carbon metabolism in higher organisms and provides electrons during oxidative phosphorylation within the inner mitochondrial membrane; it is crucial for respiration. Both the TCA cycle and pyruvate metabolism played important roles in the formation of organic acids (Table 3 and Fig. 7). It was also reported that organic acids are closely related to the TCA cycle in rice [33]. The up-regulated gene PYC2 (KLMA_10253), encoding pyruvate carboxylase, played a crucial role in the TCA cycle and pyruvate metabolism. It has been demonstrated that pyruvate carboxylase, as an anaplerotic enzyme, plays an essential role in various cellular metabolic pathways, including gluconeogenesis, glucose-induced insulin secretion, de novo fatty acid synthesis, and amino acid synthesis [34].
The DEGs and DEPs related to the TCA cycle and pyruvate metabolism reasonably explained the formation of organic acids during the key fermentation period of rice-acid inoculated with K. marxianus L1-1. The up-regulated gene LYS21 (KLMA_10771), encoding homocitrate synthase, was identified by the combined analysis of transcriptomics and proteomics in this study. A previous study showed that homocitrate synthase is responsible for the first important step of the pathway and plays a crucial role in pyruvate metabolism [35]. Notably, the homocitrate synthase LYS21 is linked to the key process of DNA damage repair in the nucleus and to the TCA cycle in the cytoplasm [13,36]. The up-regulated gene CIT1 (KLMA_20105), encoding citrate synthase, reasonably explained the increase in citric acid during the fermentation of rice-acid. CIT1 acts as a quantitative marker for healthy mitochondria and is encoded by nuclear DNA [37].

Interestingly, lactic acid has optical isomers, L-lactic acid and D-lactic acid, which can be produced by chemical synthesis (DL-lactic acid) or microbial fermentation (L-lactic acid, D-lactic acid, or DL-lactic acid). Compared with chemical synthesis, microbial fermentation presents more advantages because it makes use of renewable substrates fermented by lactic acid bacteria [38]. Consistently, our study demonstrated that fermented rice-acid inoculated with K. marxianus L1-1 produced a higher concentration of L-lactic acid on day 3 than on day 1, and L-lactic acid has some health advantages. The genes CYB2 (KLMA_10621 and KLMA_40341), encoding L-lactate dehydrogenase, were up-regulated, whereas another CYB2 (KLMA_30013) and DLD1 were down-regulated. A previous report showed that fumarate hydratase (FUM1) and succinate dehydrogenase (SDH) are tumour suppressors [39], suggesting that K. marxianus L1-1 could have potential probiotic characteristics. Therefore, the enhanced activities of proteins and enzymes in the TCA cycle and pyruvate metabolism explained the increase in organic acids in rice-acid inoculated with K. marxianus L1-1.

Down-regulated proteins and genes indicated the stable formation of the flavor

Interestingly, most of the genes and proteins involved in the 5 KEGG pathways (starch and sucrose metabolism, amino sugar and nucleotide sugar metabolism, glycolysis/gluconeogenesis, pyruvate metabolism, and the TCA cycle) were up-regulated, except the genes CTS1, the ALD family, and DLD1; the reasons have been discussed above. The up-regulated genes and proteins played an active role in flavor formation during rice-acid fermentation, whereas the down-regulated genes and proteins played an important role in maintaining a stable key flavor. Many reports have focused on the role of up-regulated genes or proteins in promoting the flavor maturity of fermented foods, whereas down-regulated genes or proteins have seldom been reported. In our study, as seen from Fig. 5B, the down-regulated genes and proteins were involved in KEGG pathways including DNA replication, meiosis-yeast, homologous recombination, mismatch repair, autophagy-yeast, MAPK signaling pathway-yeast, phenylalanine metabolism, base excision repair, nucleotide excision repair, other types of O-glycan biosynthesis, tryptophan metabolism, cell cycle-yeast, penicillin and cephalosporin biosynthesis, and D-arginine and D-ornithine metabolism. Most of the down-regulated pathways were related to the growth of K. marxianus L1-1, and this result provided a reasonable explanation for the decrease in the quantity of K.
marxianus L1-1 in rice-acid during the key fermentation period (day 3). The down-regulated proteins in DNA replication had small log2FC values (data not shown), indicating that the related DEPs had small effects on the growth of K. marxianus L1-1. Therefore, the third day was a suitable end point of the fermentation process. In addition, the down-regulated KEGG pathways were related to the decomposition and utilization of substrates by K. marxianus L1-1. Consistently, a previous report demonstrated that down-regulated proteins related to advanced glycation end products were implicated in the aging process [40]. In the future, we will focus on the influence of substrate content on the flavor formation of rice-acid by K. marxianus L1-1.

Conclusion

The transcriptome and proteome of K. marxianus L1-1 in rice-acid were determined in this study, and the differentially expressed genes and proteins related to the formation of ethyl acetate and organic acids were identified. DEGs and DEPs were found to be enriched in the key KEGG metabolism pathways, including starch and sucrose metabolism, amino sugar and nucleotide sugar metabolism, glycolysis/gluconeogenesis, pyruvate metabolism, and the TCA cycle. With the complementary analyses of the transcriptome and proteome, we revealed the formation mechanism of ethyl acetate and organic acids in Chinese rice-acid inoculated with K. marxianus L1-1. This study provides a basis for improving the aroma and taste of fermented foods and reveals the formation mechanisms of flavors.

Methods

Strain culture and growth determination

The strain K. marxianus L1-1, which was previously screened and isolated from traditional fermented rice-acid and can produce high concentrations of aroma compounds, was used in the fermentation experiments.

Determination of volatile compounds

Volatile compounds were determined through SPME-GC-MS analysis according to the method of Molyneux and Schieberle [41]. Retention times and mass spectral data were used to identify each compound. The retention times of the volatile compounds were determined using a C6-C26 alkane standard. The concentrations of volatile components were calculated from the peak areas relative to the internal standard (10 µL of 2-methyl-3-heptanone, 10 mg/L). The mass spectra and retention indices were determined on at least two GC columns with stationary phases of different polarities, and the results were compared with reference spectra and retention indices.

Determination of organic acids

After settlement, the rice-acid samples inoculated with K. marxianus L1-1 were filtered with double-layer filter paper. The obtained filtrate was then filtered through a 0.

Proteomic sequencing and data analysis

Similarly, three independent biological replicates of K. marxianus L1-1 were used for the proteomic sequencing of the Y 1 d and Y 3 d samples. The preparation of K. marxianus L1-1 cells for proteomic analysis, liquid chromatography and mass spectrometry, and peptide and protein identification and quantification followed the method of Xu et al. [42]. Data were filtered at 95% confidence with a false discovery rate (FDR) of less than 1% to control false positive results. According to the protein abundance levels, proteins with a fold change (FC) of more than 1.5 and a p-value of less than 0.05 were deemed differentially expressed proteins (DEPs) between Y 3 d and Y 1 d. All DEPs were analyzed by GO and KEGG.
The FASTA protein sequences of the DEPs were blasted against the KEGG database to retrieve their KEGG Orthologies (KOs) and were subsequently mapped to KEGG pathways; the corresponding KEGG pathways were extracted.

Correlation analysis between proteomic and transcriptomic results

The DEGs and the DEPs were counted separately, and Venn diagrams were plotted according to the counts. Correlation analysis (Pearson correlation coefficient) was performed with Origin Pro 2018, and four-quadrant maps were drawn based on the changes in the transcriptome and proteome analyses.

Statistical analysis

All experiments were conducted in triplicate. Data are represented as means ± standard deviation. Duncan's multiple range test and the t-test were carried out in SPSS version 20.0 (SPSS Inc., Chicago, IL, USA) to analyze significant differences, with P < 0.05 or P < 0.01 considered statistically significant.

Figure 2. Volcano maps of the differentially expressed genes (a) and differentially expressed proteins (b) in K. marxianus L1-1 inoculated in rice-acid (Y 3 d vs Y 1 d). Figure 3. Statistical diagram of the second node annotation (a) and the most enriched GO terms (b) of the differentially expressed genes in K. marxianus L1-1 inoculated in rice-acid (Y 3 d vs Y 1 d). Figure 4. GO analysis (molecular function, cellular component, and biological process) of the differentially expressed proteins in K. marxianus L1-1 inoculated in rice-acid (Y 3 d vs Y 1 d). In the enrichment plots, the larger the rich factor, the more significant the enrichment; the Q-value is the p-value corrected for multiple hypothesis testing and ranges from 0 to 1, with values closer to zero indicating more significant enrichment.
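A minimal sketch of the DEG/DEP selection criteria stated in the Methods (|log2FC| > 1.5 with FDR < 0.05 for DEGs; fold change > 1.5 with p < 0.05 for DEPs) is given below. The column names and toy tables are hypothetical, and the 1.5-fold protein threshold is assumed to apply in both directions.

```python
# Sketch of the DEG/DEP selection thresholds (hypothetical column names).
import pandas as pd

def select_degs(genes: pd.DataFrame) -> pd.DataFrame:
    """DEGs: |log2FC| > 1.5 and FDR < 0.05."""
    return genes[(genes["log2fc"].abs() > 1.5) & (genes["fdr"] < 0.05)]

def select_deps(proteins: pd.DataFrame) -> pd.DataFrame:
    """DEPs: fold change > 1.5 in either direction and p < 0.05 (assumption)."""
    fc = proteins["ratio_y3_over_y1"]
    return proteins[((fc > 1.5) | (fc < 1 / 1.5)) & (proteins["pvalue"] < 0.05)]

# Toy example inputs
genes = pd.DataFrame({"gene": ["GLK1", "ALD2", "ACT1"],
                      "log2fc": [13.38, -1.7, 0.2],
                      "fdr": [1e-6, 0.01, 0.8]})
proteins = pd.DataFrame({"protein": ["BAO37673", "BAO40648", "BAO40000"],
                         "ratio_y3_over_y1": [10.0, 1.9, 1.1],
                         "pvalue": [0.001, 0.02, 0.6]})

print(select_degs(genes)["gene"].tolist())        # ['GLK1', 'ALD2']
print(select_deps(proteins)["protein"].tolist())  # ['BAO37673', 'BAO40648']
```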
Symmetry-protected many-body Aharonov-Bohm effect

It is known as a purely quantum effect that a magnetic flux affects the real physics of a particle, such as the energy spectrum, even if the flux does not interfere with the particle's path - the Aharonov-Bohm effect. Here we examine an Aharonov-Bohm effect on a many-body wavefunction. Specifically, we study this many-body effect on the gapless edge states of a bulk gapped phase protected by a global symmetry (such as $\mathbb{Z}_{N}$) - the symmetry-protected topological (SPT) states. The many-body analogue of spectral shifts, the twisted wavefunction and the twisted boundary realization are identified in this SPT state. An explicit lattice construction of SPT edge states is derived, and a challenge of gauging its non-onsite symmetry is overcome. Agreement is found in the twisted spectrum between a numerical lattice calculation and a conformal field theory prediction.

Mysteriously, an external magnetic flux can affect the physical properties of particles even without interfering directly with their paths. This is known as the Aharonov-Bohm (AB) effect [1]. For instance, a particle of charge $q$ and mass $m$ confined in a ring (parametrized by $0 \leq \theta < 2\pi$) of radius $a$ threaded by a flux $\Phi_B$, see Fig. 1(a), would have its energy spectrum shifted as
$E_n = \frac{1}{2ma^2}\left(n - \frac{\Phi_B}{\Phi_0}\right)^2, \quad n \in \mathbb{Z}, \qquad (1)$
where $\Phi_0 = 2\pi/q$ is the quantum of magnetic flux and we adopt $e = \hbar = c = 1$ units. One can dispose of the gauge potential in the Schrödinger equation for the wavefunction $\psi(\theta)$ by a gauge transformation that changes the wavefunction to $\tilde{\psi}(\theta) = \psi(\theta)\, \exp[i q \int^{\theta} A(\theta')\, d\theta']$. So, the effect of the external flux can be enforced by the condition that the new wavefunction $\tilde{\psi}(\theta)$ satisfies a twisted boundary condition,
$\tilde{\psi}(\theta + 2\pi) = e^{\,i 2\pi \Phi_B/\Phi_0}\, \tilde{\psi}(\theta), \qquad (2)$
as the particle trajectory encloses the ring; this twisted boundary condition implies a "branch cut", see Fig. 1(b). We may refer to this twist effect as the "Aharonov-Bohm twist". For electrons confined on a mesoscopic ring, for example, even though interactions are not negligible, the sensitivity of the system to the presence of the external flux can be rationalized as a single-particle phenomenon [2]. It is then opportune, as a matter of principle, to ask whether such an AB effect can take place as an intrinsically interacting many-body phenomenon. More concretely, we ask whether the low energy properties of such interacting systems display a response analogous to Eq. (1) when subject to a gauge perturbation and, in turn, how this effect is encoded in the "topology" (or boundary conditions) of the wave-functional $\Psi[\phi(x)]$, see Figs. 1(c)-(d). We shall refer to this as a many-body AB effect or twist.
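As a quick numerical illustration of the single-particle spectral shift in Eq. (1), one can discretize the ring into an M-site tight-binding loop threaded by the flux via Peierls phases. This toy lattice model and the code below are not part of the paper; they merely reproduce the cosine band whose minimum shifts with $\Phi_B/\Phi_0$.

```python
# Illustration (not from the paper): a single particle hopping on an M-site
# ring threaded by flux phi = Phi_B / Phi_0, implemented via Peierls phases.
# The exact spectrum is eps_n = -2 t cos(2*pi*(n - phi)/M), so the levels
# shift with the flux in the same way as the continuum result Eq. (1).
import numpy as np

def ring_spectrum(M: int, phi: float, t: float = 1.0) -> np.ndarray:
    """Eigenvalues of the flux-threaded tight-binding ring."""
    H = np.zeros((M, M), dtype=complex)
    peierls = np.exp(1j * 2 * np.pi * phi / M)  # phase per bond; total phase 2*pi*phi
    for j in range(M):
        H[(j + 1) % M, j] = -t * peierls
        H[j, (j + 1) % M] = -t * np.conj(peierls)
    return np.sort(np.linalg.eigvalsh(H))

M, phi = 12, 0.3
numeric = ring_spectrum(M, phi)
analytic = np.sort(-2.0 * np.cos(2 * np.pi * (np.arange(M) - phi) / M))
assert np.allclose(numeric, analytic)
print(numeric[:3])  # the lowest levels move as phi is varied
```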
In this paper we show that 2D "symmetry-protected topological" (SPT) states [3-5] offer a natural platform for observing the many-body AB effect. SPT states are quantum many-body states of matter with a finite gap to bulk excitations and no fractionalized degrees of freedom. Due to a global symmetry, the system has the property that its edge states can only be gapped if a symmetry breaking occurs, either explicitly or spontaneously. So, in the absence of any symmetry breaking, the edge is described by robust edge excitations which cannot be localized by weak symmetry-preserving disorder, in contrast to purely one-dimensional systems [6]. Assuming then that the edge states are in this gapless phase (an assumption which we will take throughout the paper), we shall demonstrate that the system responds to the insertion of a gauge flux in a non-trivial way, whereas if the edge degrees of freedom were to become gapped, then they would be insensitive to the flux. We note that in 2D systems displaying the integer quantum Hall effect, the insertion of a flux also induces a non-trivial response of the chiral edge states [7]. In contrast to this situation, here we shall be concerned with 2D non-chiral SPT states for which the gapless edge excitations, like the single-particle modes on a ring, propagate in both directions. The spectrum of these gapless modes characterizes the low energy properties of the system. We approach this problem from two avenues: (I) first, we study the response of the SPT state to the insertion of a gauge flux by means of a low energy effective theory for the edge states and derive the change in the spectrum of edge states akin to Eq. (1); (II) complementarily, we show that the many-body AB effect derived in (I) can also be captured by formulating a lattice model describing the edge states. Twisted boundary conditions defined for these models are shown to account for the presence of a gauge flux, which we confirm numerically.

MANY-BODY AHARONOV-BOHM EFFECT

To capture the essence of the AB effect on a symmetry-protected many-body wavefunction, we imagine threading a gauge flux through an effective 1D edge on one side of a 2D bulk SPT annulus (or cylinder). This many-body wavefunction on the 1D edge (parametrized by 0 ≤ x < L) of the SPT state is the analogue of the single-body wavefunction of a particle on a ring. Since the bulk degrees of freedom are gapped, we concentrate on the low energy properties of the edge, described by a non-chiral Luttinger liquid action I_edge[φ_I] [9,10]. To capture the gauge flux effect on a many-body wavefunction |Ψ⟩, we formulate it in the path integral, with φ_I the intrinsic field on the edge. Our goal is to interpret this many-body AB twist, encoded in the coupling $(1/2\pi)\, q_I \int A \wedge d\phi_I$. We anticipate that the energy spectrum is adjusted by the flux, and we aim to capture this "twist" effect on the energy spectrum. Below we focus on bosonic SPT states with Z_N symmetry [9-13], with global symmetry transformation on the edge (see Appendix 1 for details on the field theoretic input)
$$\phi_1 \to \phi_1 + \frac{2\pi}{N}, \qquad \phi_2 \to \phi_2 + p\,\frac{2\pi}{N}, \qquad (4)$$
where p ∈ {0, ..., N − 1} and (1/2π)∂_x φ_2(x) is the canonical momentum associated to φ_1(x) [14]. In the Lagrangian density associated to Eq. (3), the indices run over μ, ν ∈ {0, 1} and I, J ∈ {1, 2}, the K-matrix is $K = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, the Hamiltonian density describes a free boson, and q_I = (q_1, q_2) = (1, p) specifies the charges carried by the currents J^μ_I = (1/2π) ε^{μν} ∂_ν φ_I. The right/left moving modes are described by φ_{R,L} ∝ φ_1 ± φ_2.
Integrating the equations of motion of (5) with respect to φ_I along the boundary coordinate x, in the presence of a static background Z_N gauge flux configuration, yields Eq. (6) (see Appendix 2 for an alternative derivation from a bulk-edge Chern-Simons approach). Eq. (6) represents the shift in the winding modes of the edge boson fields and plays a role analogous to the single-particle twisted boundary condition Eq. (2). The spectrum of the central charge c = 1 free boson at compactification radius R is labeled by the primary states |n, m⟩ (n, m ∈ Z) with scaling dimension $\Delta(n,m) = \frac{n^2}{R^2} + \frac{m^2 R^2}{4}$ and momentum P(n, m) = nm [15]. Then, according to Eq. (6), after the flux insertion we derive the new spectrum Eq. (8) (see also another related setting [16]) and momenta P̃.

EFFECTIVE LATTICE MODEL FOR THE EDGE OF SPT STATES

Symmetry transformation and domain wall. The twist effect encoded in Eq. (8) comes from an effective low energy description of the edge. We aim, as a complementary and perhaps more fundamental point of view, to capture this twist effect from a lattice model. As a first step in this program, we shall construct a global Z_N symmetry transformation in terms of discrete degrees of freedom on the edge whose action reduces to Eq. (4) at long wavelengths. The hallmark of a non-trivial SPT state is that the symmetry transformation on the boundary cannot be written in a tensor product form on each single site, i.e., it acts as a non-onsite symmetry transformation [3,4,17]. We propose the ansatz Eq. (9) for the symmetry transformation, acting on a ring with M sites that we take to describe the 1D edge, with σ_{M+1} ≡ σ_1. At every site of the ring we consider a pair of Z_N operators (τ_j, σ_j), with site index j = 1, ..., M, satisfying τ_j^N = σ_j^N = 1 and the conjugation relation τ_j^† σ_j τ_j = ω σ_j, where ω ≡ e^{i2π/N}. We shall use the standard representation in which the operators act on the Hilbert space of Z_N states at each site j. The overall symmetry transformation contains an onsite part, generated by the string of τ's, and the "non-onsite domain wall (DW)" part (δN_DW)_{j,j+1} between sites j and j + 1. The ansatz form Eq. (9) then naturally yields N distinct classes of Z_N symmetry transformations, labeled by p ∈ Z_N, upon imposing the condition Eq. (12) on the (N − 1)-th order polynomial operator Q^{(p)}_N, which guarantees (due to periodic boundary conditions) that (S^{(p)}_N)^N = 1. The trivial case corresponds to p = 0 (mod N). The domain wall variable (δN_DW)_{j,j+1} counts the number of units of Z_N angle between sites j and j + 1, so (2π/N)(δN_DW)_{j,j+1} = φ_{1,j+1} − φ_{1,j}, which reproduces the expected long distance behavior of the symmetry transformation Eq. (4). Our ansatz thus embodies two interpretations at once, in the continuum field theory and in a discrete lattice model. The Z_N symmetry transformations Eq. (9) that satisfy Eq. (12) can be explicitly written as Eq. (13). In Ref. 17 the edge symmetry for Z_N SPT states was proposed in terms of effective long-wavelength rotor variables. We emphasize that the construction of the edge symmetry transformations Eq. (13) described here does not rely on a long wavelength description; rather, it can be viewed as a fully regularized symmetry transformation. In Appendices 3 and 4 we give explicit formulas for the Z_2 and Z_3 symmetry transformations, and we draw a connection between the lattice operators (τ_j, σ_j) and quantum rotor variables.
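For concreteness, the following Python sketch verifies the algebra quoted above, τ_j^N = σ_j^N = 1 and τ_j^† σ_j τ_j = ω σ_j, for the standard clock-and-shift matrices; this is one conventional single-site representation and may differ from the paper's (unstated here) choice by a relabelling of basis states.

# One standard clock-and-shift representation of the Z_N pair (tau, sigma); this
# choice is ours for illustration and may differ from the paper's convention.
import numpy as np

def clock_shift(N):
    omega = np.exp(2j * np.pi / N)
    sigma = np.diag(omega ** np.arange(N))        # "clock" operator
    tau = np.roll(np.eye(N), 1, axis=0)           # "shift": tau|k> = |k+1 mod N>
    return sigma, tau, omega

for N in (2, 3, 5):
    sigma, tau, omega = clock_shift(N)
    assert np.allclose(np.linalg.matrix_power(sigma, N), np.eye(N))
    assert np.allclose(np.linalg.matrix_power(tau, N), np.eye(N))
    assert np.allclose(tau.conj().T @ sigma @ tau, omega * sigma)
print("Z_N clock/shift algebra verified for N = 2, 3, 5")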
where T performs a translation by one lattice site. Our model Hamiltonian is given in Eq. (15) (with coupling λ); in the trivial class the model gives a gapped and symmetry-preserving ground state. In Appendix 3 we provide explicit forms of the non-trivial classes of SPT Hamiltonians for the N = 2 and N = 3 cases. We note that for the Z_2 case our symmetry transformation and edge Hamiltonian are the same as those obtained in Ref. 10 (where the low energy theory in terms of a non-chiral Luttinger liquid has been discussed), despite the fact that our method of constructing the symmetry is independent of that in Ref. 10 and provides a generalization to all Z_N groups. It is noteworthy that the authors of Ref. 10 argue that the edge of the Z_2 bosonic SPT state is generically unstable to symmetry-preserving perturbations. Nevertheless, we shall still study the model Hamiltonian (15) for the Z_2 case as a means to benchmark our numerical methods. A common feature of these Hamiltonian classes is the existence of combinations of terms like σ_{j−1} τ_j σ_{j+1}, due to the non-onsite global symmetry. Their effect, as we shall see, is to give rise to a gapless spectrum. In order to understand their effect on the low energy properties, we perform an exact diagonalization study of the non-trivial Hamiltonian classes Eq. (15) on finite systems. In Fig. 2 we plot the lowest energy eigenvalues for the Z_2 and Z_3 non-trivial SPT states as a function of the lattice momentum k ∈ Z defined by T = e^{i(2π/M)k}. The spectrum of H with M = 20 sites shows very good agreement with the bosonic spectrum Eq. (7) at R = 2, with states labeled by |n, m⟩. The global Z_2 charges relative to the ground state were found to be e^{iπ(n+m)}, in accordance with Eq. (4) (we note that similar results have been obtained for the Z_2 case in Ref. 17). For the Z_3 SPT states, which have not been investigated before, with M = 12 sites the spectra of H^{(1)}_3 and H^{(2)}_3 are identical [18]. Finite size effects are more prominent than in the Z_2 case, but the overall structure of the spectrum is very similar, with the second and third states being degenerate with energy close to 1/4 and global Z_3 charges e^{±2πi/3} (which we identify as the |n = ±1, m = 0⟩ states), suggesting the same spectrum Eq. (7) at R = 2. In Appendix 4, following the methods of Refs. [3,4,17], we show that the symmetry classes defined in Eq. (9) subject to the condition Eq. (12) are related to all Z_N 3-cocycles of the group cohomology classification of 2D SPT states [3]. Thus, our lattice model completely realizes all N classes of H^3(Z_N, U(1)) = Z_N, where p labels the p-th class in the third cohomology group.

TWISTED BOUNDARY CONDITIONS AND TWISTED HAMILTONIAN ON THE LATTICE

We now seek to build a lattice model with twisted boundary conditions to capture the spectral shift of the edge states in the presence of a unit of Z_N flux insertion. It is instructive to revisit the case of twisted boundary conditions where the symmetry transformation acts as an on-site symmetry. For the sake of concreteness, let us consider the one-dimensional quantum Ising model H_Ising. The Z_2 twisted sector (or equivalently, in this case, the anti-periodic boundary condition sector) of the model is realized by flipping the sign of a single pair interaction, σ^z_k σ^z_{k+1} → −σ^z_k σ^z_{k+1}, for some site k, while leaving all the other terms unchanged. If the Ising model is defined on an open line, the twist effect is implemented by conjugating H_Ising with the operator Π_{j≤k} σ^x_j.
When the model is defined on a ring, the same effect is obtained by defining a new translation operator T̃ = T σ^x_k and demanding that the twisted Hamiltonian H̃_Ising commutes with T̃. It is straightforward to see that the twisted Ising Hamiltonian on a ring which commutes with T̃ indeed has σ^z_k σ^z_{k+1} → −σ^z_k σ^z_{k+1}. We also note that (T̃)^M = Π_{j=1}^{M} σ^x_j generates the Z_2 symmetry of H_Ising, which is also a symmetry of H̃_Ising. We now generalize the construction above to the SPT edge Hamiltonians on a ring with a non-onsite symmetry by defining, for each class p ∈ Z_N, a unitary twisted lattice translation operator T̃ [20], which incorporates the effect of the branch cut as in Fig. 1 (see Appendix 3.2 for explicit results). Notice that, due to the intrinsic non-onsite term in the symmetry transformation, the twisted spectrum is shifted relative to the untwisted spectrum labeled by (n, m). Our findings thus establish a relationship between the many-body AB effect in terms of a long wavelength field-theory description and in terms of twisted boundary conditions in a lattice model.

SUMMARY

We have demonstrated that an intrinsically many-body realization of the Aharonov-Bohm phenomenon takes place on the edge of a 2D symmetry-protected many-body system in the presence of a background gauge flux. In our construction we have assumed that the edge state is in a gapless phase and is described by a simple non-chiral Luttinger liquid action with one right- and one left-moving propagating mode carrying different Z_N charges [24], in which case the spectrum in the presence of a gauge flux displays quantization as in Eq. (8) due to the global symmetry protection (Z_N symmetry in our work), analogous to the quantization of the energy spectrum of a superconducting ring due to the Z_2 symmetry inherent to superconductors [23]. The universal information carried by the counter-propagating edge modes is that they carry different Z_N charges, which has been numerically verified for the Z_2 and Z_3 SPT classes in Fig. 2, where this difference is parametrized by the integer p ∈ {1, ..., N − 1} that characterizes the SPT class. This quantum number should remain invariant as long as the SPT order is not destroyed in the bulk. The offset in the charges carried by the right and left moving modes has then been shown to reflect itself in the edge spectrum according to Eq. (8) (where R is a non-universal parameter), which we have confirmed numerically in our model Hamiltonians for the Z_2 and Z_3 SPT classes in Fig. 3. We have proposed general principles guiding the construction of the lattice Hamiltonians, Eqs. (15) and (18), of the bosonic Z_N-symmetric SPT edge states for both the untwisted and twisted (without and with gauge fluxes) cases. The twisted spectra (i.e., with gauge flux) characterize all types of Z_N bosonic anomalies [21,22], which naturally serve as "SPT invariants" [5] to detect and distinguish all Z_N classes of SPT states numerically or experimentally (see also the recent works [22,25]). Gauging a non-onsite symmetry of an SPT state has been noticed to relate to the Ginsparg-Wilson (G-W) fermion [26] approach to a lattice field theory problem [27]. We remark that our current work achieves gauging a non-onsite symmetry for a bosonic system, thus providing an important step in this direction. Whether our work can be extended to more general symmetry classes and to fermionic systems (such as the U(1) symmetry in the G-W fermion approach) is an open question, which we leave for future work.
Appendix

In Appendix 1 we briefly review the field theory tools for topological states, especially symmetry-protected topological (SPT) states, with emphasis on the canonical quantization and on how the global symmetry transformation S^{(p)}_N on the edge is encoded in it. Using the same formalism, in Appendix 2 we derive the twisted boundary condition due to a gauge flux insertion. In Appendix 3 we provide our detailed lattice construction (with Z_N symmetry) for both the untwisted and twisted (without and with gauge flux) cases. In Appendix 4 we match each SPT class of our lattice construction to the 3-cocycles in the group cohomology classification.

The intrinsic field theory description of SPT states on a 2D spatial surface M^2 is the Chern-Simons action $S_{\text{bulk}} = \frac{1}{4\pi}\int K_{IJ}\, a_I\, \mathrm{d}a_J$, where a is the intrinsic (or statistical) gauge field and K is the K-matrix which categorizes the SPT orders. An SPT state is not intrinsically topologically ordered [3], so it has no topological degeneracy [8,14]. The ground state degeneracy (GSD) of an SPT state on the torus is GSD = |det K| = 1 [8,9,14]; this suggests a constrained canonical form of K [9,13,14]. The SPT order is symmetry-protected, so, tautologically, its order is protected by a global symmetry. The novel feature of an SPT state, distinguishing it from a trivial insulator, is its symmetry-protected edge states on the boundary. The effective degree of freedom of its 1D edge, ∂M^2, is the chiral bosonic field φ, where φ is meant to preserve gauge invariance of the bulk-edge system under gauge transformations of the field a [8]. The boundary action is given in Eq. (21).

Z_N symmetry transformation. The Z_N symmetry simply requires a rank-2 K-matrix, which exhausts all the group cohomology classes, H^3(Z_N, U(1)) = Z_N. The Z_N symmetry transformation with a Z_N angle specifies the group element g [9], where p labels the Z_N class of the cohomology group H^3(Z_N, U(1)) = Z_N. Both n and p are defined modulo N as elements of Z_N. It can be shown that under φ_{g_n} → φ_{g_n} + δφ_{g_n} the action Eq. (21) is invariant, and the Z_N group structure is realized through g_n^N = 1. The construction of more general symmetry classes can be found in Refs. 9 and 13.

Canonical quantization. Here we go through the canonical quantization of the boson field φ_I. By canonical quantization we mean imposing a commutation relation between φ_I and its conjugate momentum field Π_I(x) = δL/δ(∂_t φ_I) = (1/2π) K_{IJ} ∂_x φ_J. Because φ_I is a compact phase of a matter field, its bosonization contains both a zero mode φ_{0I} and a winding momentum P_{φ_J}, in addition to non-zero modes [14] (Eq. (24)). The periodic boundary has size 0 ≤ x < L. First we impose the commutation relation for the zero and winding modes, and a generalized Kac-Moody algebra for the non-zero modes (Eq. (25)). We thus derive canonically quantized fields with the commutation relation Eq. (26). The symmetry transformation of Eq. (23) implies φ_{g_n} → φ_{g_n} + δφ_{g_n}; it can easily be checked, using Eq. (26), that this implements the global symmetry transformation.

Twisted boundary condition from a gauge flux insertion. Here we apply the canonical quantization method to formulate the effect of a gauge flux insertion through a cylinder (an analogue of Laughlin's thought experiment [7]) in terms of a twisted boundary condition. The canonical quantization approach here can be compared with the alternative path integral approach motivated in the main text; it offers a clear view of why the twisted boundary condition resulting from a gauge flux is a quantum effect.
We will first present the bulk theory viewpoint, then the edge theory viewpoint.

Bulk theory. Our setting is an external adiabatic gauge flux insertion through a cylinder/annulus. Here the gauge field (such as the electromagnetic field) couples to the (SPT or intrinsically) topologically ordered state via a coupling charge vector q_I. The bulk coupling term (here we restore the correct dimensions, while one can set e = ℏ = c = 1 in the end) involves the current J^μ_I written in conserved-current form; from the action we derive the equations of motion. From the bulk theory side, an adiabatic flux ΔΦ_B induces an electric field E_x by the Faraday effect, causing a perpendicular current J_y to flow to the boundary edge states. We can explicitly derive the flux effect from the Faraday-Maxwell equation in the 2+1D bulk, which relates the flux to the induced charge transported through the bulk via the Hall effect mechanism. This is a derivation of Laughlin's flux insertion argument. Q is the total charge transported through the bulk, which should condense on the edge of the cylinder.

Edge theory. On the other hand, from the boundary theory side, the induced charge Q_I on the edge can be derived from the edge state dynamics affecting the winding modes (see Eq. (24)). An equivalent interpretation is that the flux insertion twists the boundary conditions of the φ_I field. In the Z_N-symmetric SPT case at hand, we should replace e by the condensate (order parameter) charge e* = Ne. This changes the flux unit to 2π/e*, so ΔΦ_B = 2πn/(Ne), and the twisted boundary condition follows. Notice that q_J is the crucial coupling in the global symmetry transformation, which we gauge by minimal coupling to a gauge field A through a term q_I A_μ J^μ_I. Here q_J is realized by (1, p) from Eq. (23), so inserting a unit Z_N flux produces the corresponding shift of the winding modes. In other words, while the global Z_N symmetry transformation is realized as above, the insertion of a unit Z_N gauge flux implies the twisted boundary condition of Eq. (42). Here φ_1(x) is realized as the long wavelength description of the rotor angle variable introduced in the main text, while its conjugate momentum is the angular momentum. We stress that our result is very different from a seemingly similar study in Ref. 5, where "the gauging process" consists of coupling the bulk state to an external gauge field A and integrating out the intrinsic field a to get an effective response theory description. However, the twisted boundary condition derived in [5] does not capture the dynamical effect on the edge under gauge flux insertion. Instead, in our case, we capture this effect in Eq. (42).

From field theory to lattice model. Here we motivate the construction of our lattice model from the field theory. Our lattice model uses the rotor eigenstates |φ⟩ as a basis, where, for Z_N symmetry, φ = n(2π/N) with n a Z_N variable. The conjugate variable of φ is the angular momentum L, which again is a Z_N variable. The |φ⟩ and |L⟩ eigenstates are related by a Fourier transformation.

General Hamiltonian construction. The Z_N classes of Hamiltonians may be realized by H^{(p)}_N with the parametrization in which Hermiticity of Q^{(p)}_N, combined with σ_j^† σ_{j+1} ∈ Z_N, implies the constraint (51) on the complex coefficients q_a (we drop the indices p, N to simplify notation and, in the following, a bar denotes complex conjugation). Solutions of Eq. (51) can be systematically found for each value of p ∈ Z_N, giving rise to the different symmetry classes. Below, for the sake of concreteness, we give explicit forms of the symmetry transformations and Hamiltonians for the Z_2 and Z_3 groups.
The symmetry transformation reads as given above, where, by imposing condition (51), we determine the coefficients explicitly. With that, we obtain the Hamiltonian in the trivial class and in the non-trivial SPT class.

Twisted boundary conditions on the lattice model. We clarify some of the steps leading to an edge Hamiltonian satisfying twisted boundary conditions that account for the presence of one unit of background Z_N gauge flux; the case with a general number of flux quanta can be worked out in the same way. Let T be the lattice translation operator satisfying T X_j T^{-1} = X_{j+1} for any operator X_j on a ring, with X_{M+1} ≡ X_1; it satisfies T^M = 1. One can then immediately verify the corresponding relations from Eqs. (45) and (47). Twisted boundary conditions are implemented by defining a modified translation operator T̃ and seeking a twisted Hamiltonian H̃ under the condition that H̃ commutes with T̃; we thus obtain the twisted Hamiltonian. Notice the special form of the relation in the trivial case (p = 0).

Correspondence in group cohomology and non-trivial 3-cocycles from the MPS projective representation. Here we map our lattice construction to the 3-cocycles in the group cohomology classification for each SPT class. Importantly, we notice that the non-onsite piece in S^{(p)}_N admits a quantum rotor description of the above form. We claim that this is equivalent to (i) the domain wall picture using rotor angle variables (here (φ_{1,j+1} − φ_{1,j})_r, where the subindex r means that we take the angle modulo 2π [17]), and to (ii) the field theory formalism in Eq. (29). The reason is as follows: as mentioned for the p-th class of Z_N, we impose the constraint that solves the polynomial ansatz. This is equivalent to the fact that exp[iφ_{1,j}]_{ab} = ⟨φ_a| e^{iφ_j} |φ_b⟩ = σ_{ab,j}. Therefore the domain wall variable (δN_DW)_{j,j+1} indeed counts the number of units of Z_N angle between sites j and j + 1, so (2π/N)(δN_DW)_{j,j+1} = φ_{1,j+1} − φ_{1,j}. We have thus shown Eq. (89), and have confirmed that our lattice regularization is indeed a rotor realization of Ref. 17 with the same symmetry transformation S^{(p)}_N, but captures much more than the low energy rotor model there. Here the equivalence is up to a projection that removes non-parallel state transformations. To derive P_{g_1,g_2}, notice that P_{g_1,g_2} takes one state as input and outputs two states. This has the expected form, where (m_1 + m_2)_N, with subindex N, means taking the value modulo N. In order to derive θ(g_1, g_2, g_3), we start by contracting T, which takes one state ⟨φ_in| as input and outputs the three states |φ_in + (2π/N)(m_2 + m_3)⟩, |φ_in + (2π/N)m_3⟩ and |φ_in⟩. On the other hand, one can contract T(I_1 ⊗ P_{g_2,g_3})P_{g_1,g_2 g_3}, which again takes one state ⟨φ_in| as input and outputs the three states |φ_in + (2π/N)(m_2 + m_3)⟩, |φ_in + (2π/N)m_3⟩ and |φ_in⟩. Comparing the two contractions we obtain the phase θ(g_1, g_2, g_3), which is indeed the 3-cocycle of the third cohomology group H^3(Z_N, U(1)) = Z_N. We thus verify that the projective representation e^{iθ(g_1,g_2,g_3)} obtained from the MPS tensors corresponds to the group cohomology approach [3]. This demonstrates that our lattice model construction completely maps to all classes of SPT states, as we aimed for.
Extremal Index, Hitting Time Statistics and periodicity We give conditions to prove the existence of an Extremal Index for general stationary stochastic processes by detecting the presence of one or more underlying periodic phenomena. This theory, besides giving general useful tools to identify the extremal index, is also tailored to dynamical systems. In fact, we apply this idea to analyse the possible Extreme Value Laws for the stochastic process generated by observations taken along dynamical orbits with respect to various measures. As in the authors' previous works on this topic, the analogy of these laws in the context of hitting time statistics is explained and exploited extensively. Introduction The study of extreme or rare events is of great importance in a wide variety of fields and is often tied in with risk assessment. This explains why Extreme Value Laws (EVL) and the estimation of the tail distribution of the maximum of a large number of observations has drawn much attention and become a highly developed subject. In many practical situations, such as in the analysis of financial markets or climate phenomena, time series can be modelled by a dynamical system which describes its time evolution. The recurrence effect introduced by Poincaré, which is present in chaotic systems, is the starting point for a deeper analysis of the limit distribution of the elapsed time until the occurrence of a rare event, which is usually referred to as Hitting Time Statistics (HTS) and Return Time Statistics (RTS). In [FFT10], we established the connection between the existence of EVL and HTS/RTS for stochastic processes arising from discrete time chaotic dynamical systems. This general link allowed us to obtain results of EVL using tools from HTS/RTS and the other way around (this was applied in cases where the extremal index was 1, which is the most classical setting). The extremal index (EI) θ ∈ [0, 1] is a measure of clustering of extreme events, the lower the index, the higher the degree of clustering. In this paper, we give general conditions to prove the existence of an extremal index 0 < θ < 1, which can be applied to any stationary stochastic process. Although our results apply to general stationary stochastic processes, we will be particularly interested in the case where the stochastic process arises from a discrete time dynamical system. This setup will provide not only a huge diversity of examples, but also a motivation for the conditions we propose, as well as a better understanding of their implications. Namely, motivated by the study of stochastic processes arising from chaotic dynamical systems, we associate the extremal index to the occurrence of periodic phenomena. We will illustrate these results by applying them to time series provided by deterministic dynamical systems as well as to cases where the extremal index is already well understood: an Autoregressive (AR) process introduced by Chernick and two Maximum Moving Averages (MMA) processes. Because our conditions on the time series data which guarantee an EVL with a given EI are so general, in the dynamical systems context we are able to prove strong results on EVLs around periodic points. For example, this allows us to consider non-uniformly hyperbolic dynamical systems. Moreover, coupling these weak conditions with the connection of EVLs to HTS/RTS enables us to consider hits/returns to balls, rather than cylinders. 
To our knowledge this is the first result on HTS/RTS different from the standard exponential law which applies to balls. We do this first for so-called 'Rychlik systems', which are a very general form of uniformly expanding interval map. As explained in Remark 5, these results can easily be extended to some higher dimensional version of these Rychlik systems. We also give an example of a non-uniformly hyperbolic dynamical system: the full quadratic map (also known as the quadratic Chebyshev polynomial), where the invariant measures are absolutely continuous w.r.t. Lebesgue or are, more generally, equilibrium states w.r.t. certain potentials. In future work we will apply these ideas to even more badly behaved non-uniformly hyperbolic systems. One of the striking results here is that, at least for well-behaved systems, an extremal index different from 1 can only occur at periodic points. We prove this for the full shift equipped with the Bernoulli measure (we believe that this last result holds in greater generality, but do not prove that here). Hence, this result raises the following:

Question. Is it possible to prove the existence of an EI in (0, 1) without some sort of periodicity? In a more concrete formulation:

Question. For stationary stochastic processes arising from chaotic dynamical systems, is it possible to prove the existence of an EI in (0, 1), either for EVL or HTS/RTS, around non-periodic points?

We finish this subsection by emphasising that our conditions on time series data also apply beyond those given by dynamical systems. Indeed, the dynamical systems approach suggests that in very general settings we should view data with an extremal index θ ∈ (0, 1) as having some underlying periodic phenomenon. The conditions we use to check this are, to our knowledge, the weakest of their kind, and can almost be reduced to simply checking periodicity and mixing.

Throughout this paper the notation A(u) ∼ B(u), for u approaching u_0, means that lim_{u→u_0} A(u)/B(u) = 1. When u = n and u_0 = ∞ we will just write A(n) ∼ B(n). The notation A(n) = o(n) means that lim_{n→∞} A(n)/n = 0. Also, let [x] denote the integer part of the positive real number x and, for a set A, let A^c denote the complement of A.

The notion of the EI was latent in the work of Loynes [L65] but was established formally by Leadbetter in [L83]. It gives a measure of the strength of the dependence of X_0, X_1, ..., so that θ = 1 indicates that the process has practically no memory while θ = 0, conversely, reveals extremely long memory. Another way of looking at the EI is that it gives some indication of how much exceedances of high levels tend to "cluster". Namely, for θ > 0 this interpretation of the EI is that θ^{-1} is the mean number of exceedances of a high level in a cluster of large observations, i.e., the "mean size of the clusters".

Remark 1. The sequences of real numbers u_n = u_n(τ), n = 1, 2, ..., are usually taken to be one-parameter linear families such as u_n = a_n y + b_n, where y ∈ R and a_n > 0 for all n ∈ N. Observe that τ depends on y through u_n and, in fact, in the i.i.d. case, depending on the tail of the marginal d.f. F, we have that τ = τ(y) is of one of the following three types (for some α > 0): τ_1(y) = e^{-y} for y ∈ R, τ_2(y) = y^{-α} for y > 0 and τ_3(y) = (-y)^α for y ≤ 0.

1.2. Hitting and return time statistics.
Consider a deterministic discrete time dynamical system (X, B, μ, f), where X is a topological space, B is the Borel σ-algebra, f : X → X is a measurable map and μ is an f-invariant probability measure, i.e., μ(f^{-1}(B)) = μ(B) for all B ∈ B. One can think of f : X → X as the evolution law that establishes how time affects the transitions from one state in X to another. Consider now a set A ∈ B and a new r.v. that we refer to as the first hitting time to A and denote by r_A, given by $r_A(x) = \min\{j \in \mathbb{N} : f^j(x) \in A\}$. Given a sequence of sets {U_n}_{n∈N} such that μ(U_n) → 0, we consider the sequence of r.v. r_{U_1}, r_{U_2}, ... If, under suitable normalisation, r_{U_n} converges in distribution to some non-degenerate d.f. G, we say that the system has Hitting Time Statistics (HTS) for {U_n}_{n∈N}. For systems with 'good mixing properties', G is the standard exponential d.f., in which case we say that we have exponential HTS. We say that the system has HTS G to balls at ζ if for any sequence (δ_n)_{n∈N} ⊂ R^+ such that δ_n → 0 as n → ∞ we have HTS G for (U_n)_n = (B_{δ_n}(ζ))_n.

Let P_0 denote a partition of X. We define the corresponding pullback partition $\mathcal{P}_n = \bigvee_{i=0}^{n-1} f^{-i}(\mathcal{P}_0)$, where ∨ denotes the join of partitions. We refer to the elements of the partition P_n as cylinders of order n. For every ζ ∈ X, we denote by Z_n[ζ] the cylinder of order n that contains ζ. For some ζ ∈ X this cylinder may not be unique, but we can make an arbitrary choice, so that Z_n[ζ] is well defined. We say that the system has HTS G to cylinders at ζ if we have HTS G for U_n = Z_n[ζ].

Let μ_A denote the conditional measure on A ∈ B, i.e., μ_A := μ|_A / μ(A). Instead of starting somewhere in the whole space X, we may want to start in U_n and study the fluctuations of the normalised return time to U_n as n goes to infinity, i.e., for each n we look at the random variable r_{U_n} as being defined on the probability space (U_n, B ∩ U_n, μ_{U_n}) and ask whether, under some normalisation, it converges in distribution to some non-degenerate d.f. G̃, in which case we say that the system has Return Time Statistics (RTS) G̃ for {U_n}_{n∈N}. The existence of exponential HTS is equivalent to the existence of exponential RTS. In fact, according to the Main Theorem in [HLV05], a system has HTS G if and only if it has RTS G̃ and
$$G(t) = \int_0^t \big(1 - \tilde G(s)\big)\, ds. \qquad (1.3)$$
Regarding normalising sequences to obtain HTS/RTS, we recall Kac's Lemma, which states that the expected value of r_A with respect to μ_A is ∫_A r_A dμ_A = 1/μ(A). So, in studying the fluctuations of r_A on A, the relevant normalising factor should be 1/μ(A).

Definition 3. Given a sequence of sets (U_n)_{n∈N} such that μ(U_n) → 0, the system has HTS G for (U_n)_{n∈N} if for all t ≥ 0
$$\mu\!\left(r_{U_n} \le \frac{t}{\mu(U_n)}\right) \to G(t) \quad \text{as } n \to \infty, \qquad (1.4)$$
and the system has RTS G̃ for (U_n)_{n∈N} if for all t ≥ 0
$$\mu_{U_n}\!\left(r_{U_n} \le \frac{t}{\mu(U_n)}\right) \to \tilde G(t) \quad \text{as } n \to \infty. \qquad (1.5)$$
The theory of HTS/RTS laws is by now well developed, applied first to cylinders and hyperbolic dynamics, and then extended to balls and also to non-uniformly hyperbolic systems. We refer to [C00] and [S09] for very nice reviews as well as many references on the subject. (See also [AG01], where the focus is more towards a finer analysis of uniformly hyperbolic systems.) Since the early papers [P91, H93], several different approaches have been used to prove HTS/RTS: from the analysis of adapted Perron-Frobenius operators as in [H93], to the use of inducing schemes as in [BSTV03], to the relation between recurrence rates and dimension as explained in [S09, Section 4].
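A simple simulation can illustrate the exponential HTS described above. The sketch below (ours, not from the paper) iterates the full quadratic map f(x) = 4x(1-x), whose absolutely continuous invariant measure has distribution function (2/π) arcsin(√x), and records hitting times to a small ball around an arbitrarily chosen, non-periodic point ζ; after normalisation by the measure of the ball, the hitting times should be approximately standard exponential. The values of ζ, δ and the sample size are illustrative assumptions.

# Illustrative simulation: normalised hitting times mu(U) * r_U for the full
# quadratic map should be approximately Exp(1). Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: 4.0 * x * (1.0 - x)
mu_cdf = lambda x: (2.0 / np.pi) * np.arcsin(np.sqrt(x))   # acip distribution function

zeta, delta = 0.3, 1e-3
a, b = zeta - delta, zeta + delta
mu_U = mu_cdf(b) - mu_cdf(a)

def hitting_time(x, a, b, tmax=10**7):
    for n in range(1, tmax):
        x = f(x)
        if a < x < b:
            return n
    return tmax

# start from mu: if u ~ Uniform(0,1) then sin^2(pi*u/2) is mu-distributed
samples = np.array([mu_U * hitting_time(np.sin(np.pi * rng.random() / 2) ** 2, a, b)
                    for _ in range(2000)])
print("mean (should be ~1):", samples.mean().round(3))
print("P(normalised hitting time > 1) vs exp(-1):",
      (samples > 1).mean().round(3), round(np.exp(-1), 3))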
However, even for systems with good mixing properties, it has been known at least since [H93] that at some special (periodic) points, similar distributions for the HTS/RTS (for cylinders) hold with an exponential parameter 0 < θ < 1 (i.e., 1 − G(t) = e^{-θt}). This subject was studied, also in the cylinder context, in [HV09], where the sequence of successive returns to neighbourhoods of these points was proved to converge to a compound Poisson process.

1.3. The connection between EVL and HTS/RTS. We start by explaining what we mean by stochastic processes arising from discrete time dynamical systems. Take a system (X, B, μ, f) and consider the time series X_0, X_1, X_2, ... arising from such a system simply by evaluating a given random variable ϕ : X → R ∪ {±∞} along the orbits of the system, or in other words, along the time evolution given by successive iterations of f:
$$X_n = \varphi \circ f^n, \qquad n \in \mathbb{N}_0. \qquad (1.6)$$
Clearly, X_0, X_1, ... defined in this way is not an independent sequence. However, f-invariance of μ guarantees that this stochastic process is stationary. We assume that ϕ achieves a global maximum at ζ ∈ X and that the event {x ∈ X : ϕ(x) > u} = {X_0 > u} corresponds to a topological ball "centred" at ζ. EVLs for the partial maxima of such sequences have been proved directly in the recent papers [C01, FF08, FFT10, HNT10, GHN11, FFT11]. We highlight the pioneering work of Collet [C01] for the innovative ideas introduced. The dynamical systems covered in these papers include non-uniformly hyperbolic 1-dimensional maps (in all of them), higher dimensional non-uniformly expanding maps in [FFT10], suspension flows in [HNT10], and billiards and Lozi maps in [GHN11]. In [FFT10], we formally established the link between EVL and HTS/RTS (for balls) for stochastic processes given by (1.6). Essentially, we proved that if such time series have an EVL H then the system has HTS H for balls "centred" at ζ, and vice versa. Recall that having HTS H is equivalent to saying that the system has RTS H̃, where H and H̃ are related by (1.3). This was based on the elementary observation that, for stochastic processes given by (1.6), the event {M_n ≤ u} coincides with the event that the orbit does not enter {X_0 > u} before time n (1.7). We exploited this connection to prove EVL using tools from HTS/RTS and the other way around. In [FFT11], we carried the connection further to include the cases where the invariant measure μ may not be absolutely continuous with respect to Lebesgue measure, and also to understand HTS/RTS for cylinders rather than balls in terms of EVL. To achieve the latter we introduced the notion of cylinder EVL, which essentially requires that the limits (1.1) and (1.2) exist only along particular time subsequences {ω_j}_{j∈N} of {n}_{n∈N} (see Section 5). Hence, under the conditions of [FFT10, Theorem 2], when X_0, X_1, X_2, ... has an EI θ < 1 then we have HTS for balls G given by
$$G(t) = 1 - e^{-\theta t}. \qquad (1.8)$$
Using (1.7) plus the integral relation (1.3) and arguing as in the proof of [FFT10, Theorem 2], we have RTS for balls G̃ that can be written as
$$\tilde G(t) = (1-\theta) + \theta\,(1 - e^{-\theta t}), \qquad (1.9)$$
or, in other words, the return time law is the convex combination of a Dirac law at zero and an exponential law of average θ^{-1}, where the weight is the EI θ itself. As a consequence of this relation, new light can be shed on the work of Galves and Schmitt [GS90], who introduced a short correction factor λ in order to get exponential HTS, which was then studied later in great detail by Abadi [O87]; this factor, which is widely used in the estimation of the EI, can be easily derived from formula (1.9) for the RTS.

1.4. Extreme Value Laws in the absence of clustering.
In this subsection we recall some of the results which imply the existence of EVLs in the absence of clustering, which means the EI is 1. We do so to motivate and provide a better understanding of the conditions we propose in Section 2. We start by recalling a condition proposed by Leadbetter for general stochastic processes, together with a condition which imposes some sort of independence on the short range that prevents the appearance of clustering. Supposing that D(u_n) holds, let (k_n)_{n∈N} be a sequence of integers such that
k_n → ∞ and k_n t_n = o(n). (1.10)

Condition (D'(u_n)). We say that D'(u_n) holds for the sequence X_0, X_1, X_2, ... if there exists a sequence {k_n}_{n∈N} satisfying (1.10) and such that
$$\lim_{n\to\infty}\; n \sum_{j=1}^{[n/k_n]} \mathbb{P}(X_0 > u_n,\, X_j > u_n) = 0.$$

However, when one considers stochastic processes arising from dynamical systems such as in (1.6), in practice condition D(u_n) cannot be verified unless the system satisfies some strong uniform mixing condition such as α-mixing (see [B05] for a definition), and even in these cases it can only be verified for certain subsequences of {n}_{n∈N}, which means that the limit laws only hold for cylinders. For that reason, based on the work of Collet [C01], in [FF08a] we proposed a condition we called D_2(u_n), which is much weaker than D(u_n) and which follows from sufficiently fast decay of correlations, thus allowing us to obtain the results for balls rather than cylinders. We remark that rates of decay of correlations are nowadays very well known for a wide variety of systems, including non-uniformly hyperbolic systems admitting a Young tower (see [Y98, Y99]).

Condition (D_2(u_n)). We say that D_2(u_n) holds for the sequence X_0, X_1, ... if for all ℓ, t and n the probability of an exceedance at time 0 together with no exceedances in a block of length ℓ starting at time t differs from the product of the corresponding probabilities by at most γ(n, t), where γ(n, t) is decreasing in t for each n and nγ(n, t_n) → 0 as n → ∞ for some sequence t_n = o(n).

Observe that while D(u_n) imposes some rate for the independence of two blocks of r.v. separated by a time gap which is independent of the size of the blocks, condition D_2(u_n) requires something similar but only when the first block is reduced to a single r.v. This detail turns out to be crucial when proving D_2(u_n) from decay of correlations, as can be seen in [FF08a, Section 2]. The interesting fact is that we can replace D(u_n) by D_2(u_n) in [L83, Theorem 1.2] and the conclusion still holds. In fact, according to [FF08a, Theorem 1], if conditions D_2(u_n) and D'(u_n) hold for X_0, X_1, ... then there exists an EVL for M_n and H(τ) = 1 − e^{-τ}. The idea is that condition D'(u_n), instead of being used once as in the original proof of Leadbetter, is used twice: in one of the instances it is used in conjunction with D_2(u_n) to produce the same effect as D(u_n) alone. Basically this means that, as long as you start with a dynamical system with sufficiently fast decay of correlations, you only have to prove D'(u_n) to show the existence of exponential EVL or HTS/RTS.

1.5. Structure of the paper. The paper is organised as follows. In Section 2 we give conditions to prove the existence of an EI for general stochastic processes; initially this is applied to 'first order' clustering behaviour and then later to higher order clustering. In Section 3 we give a very general introduction to the dynamical systems and accompanying measures we will be studying. We also explain the link between EVL and HTS and state general theorems for those laws in this context. In Section 4 we give some concrete examples of dynamical systems, measures and observables yielding EVLs with EI in (0, 1).
These examples are so-called Rychlik systems as well as the full quadratic map. Section 5 is a short section explaining the relevant conditions required to guarantee an EVL for returns to cylinders rather than balls, while Section 6 shows that in that context we can completely characterise all possible EVLs for simple dynamical systems. Finally, in the appendix we show how our conditions apply to various standard types of random variables not necessarily produced by a dynamical system, namely two MMA processes and one AR(1) process introduced by Chernick in [C81].

Extremal index and periodicity

In this section we give conditions that can be applied to any stationary stochastic process and which allow us to prove the existence of an EI by detecting the presence of one or more underlying periodic phenomena. To explain what is happening here, and to underline the motivation, we first turn to the main theme of the paper, which is the dynamics around repelling periodic points. Our strategy is essentially to replace the role of "exceedances" (which correspond to entrances in balls) by what we shall call "escapes" (which correspond to entrances in annuli), and then reduce to the usual strategy when no clustering occurs, described in Section 1.4.

2.1. Motivation from periodic dynamics. We consider a model case: the stochastic processes defined by (1.6) when ϕ achieves a global maximum at a repelling periodic point ζ ∈ X, of prime period p ∈ N, which is also a Lebesgue density point of an invariant measure μ, where μ is assumed to be absolutely continuous with respect to Lebesgue. We postpone the exact meaning of all this to Section 3 but keep the following facts:
(1) we assume that for u sufficiently large, {X_0 > u} corresponds to a topological ball centred at ζ;
(2) the periodicity of ζ implies that for all large u, {X_0 > u} ∩ f^{-p}({X_0 > u}) ≠ ∅, and the fact that the prime period is p implies that {X_0 > u} ∩ f^{-j}({X_0 > u}) = ∅ for all j = 1, ..., p − 1;
(3) the fact that ζ is repelling means that we have backward contraction, implying that $\bigcap_{j=0}^{i} f^{-jp}(\{X_0 > u\})$ is another ball of smaller radius around ζ and $\mathrm{Leb}\big(\bigcap_{j=0}^{i} f^{-jp}(\{X_0 > u\})\big) \sim (1-\theta)^i\, \mathrm{Leb}(\{X_0 > u\})$, for all u sufficiently large and some 0 < θ < 1;
(4) the fact that ζ is a Lebesgue density point of μ implies that we can replace Leb by μ in the previous item.
The set Q(u) := {X_0 > u} \ f^{-p}({X_0 > u}) can be seen as an annulus centred at ζ that corresponds to the points that after p steps manage to escape from {X_0 > u}. Moreover, for u large we have μ(Q(u)) ∼ θ μ({X_0 > u}). Following the work of Hirata [H93] on Axiom A diffeomorphisms, it is known that around periodic points there is a parameter less than 1 in the hitting time distribution, which, in light of the connection between EVL and HTS, can be seen as the Extremal Index. However, this has only been checked for cylinders. The approach we propose here allows us to finally establish the result for balls, and for non-Axiom A systems. The main obstacle when dealing with periodic points is that they create plenty of dependence in the short range. In particular, using properties (3) and (4) we have that, for all u sufficiently large, $\mathbb{P}(X_0 > u,\, X_p > u) \sim (1-\theta)\,\mathbb{P}(X_0 > u)$, which implies that D'(u_n) is not satisfied, since for the levels u_n as in (1.1) it follows that $n\,\mathbb{P}(X_0 > u_n,\, X_p > u_n) \to (1-\theta)\tau > 0$. Recalling the discussion at the end of Section 1.4, condition D'(u_n) was essential to allow the replacement of D(u_n) by D_2(u_n) in order to use decay of correlations to get the result.
To overcome this difficulty around periodic points, we make a key observation which, roughly speaking, tells us that around periodic points one just needs to replace the ball {X_0 > u_n} by the annulus Q(u_n): then much of the analysis works out as in the absence of clustering. To be more precise, let $\mathcal{Q}_n(u_n) := \bigcap_{j=0}^{n-1} f^{-j}\big(Q(u_n)^c\big)$. Note that while the occurrence of the event {M_n ≤ u_n} means that no entrance in the ball {X_0 > u_n} has occurred up to time n, the occurrence of Q_n(u_n) means that no entrance in the annulus Q(u_n) has occurred up to time n.

Proposition 1. Let X_0, X_1, ... be a stochastic process defined by (1.6), where ϕ achieves a global maximum at a repelling periodic point ζ ∈ X of prime period p ∈ N, so that conditions (1) to (4) above hold. Let (u_n)_n be a sequence of levels such that (1.1) holds. Then
$$\lim_{n\to\infty}\Big(\mathbb{P}\big(\mathcal{Q}_n(u_n)\big) - \mathbb{P}(M_n \le u_n)\Big) = 0.$$

Proof. Clearly {M_n ≤ u_n} ⊂ Q_n(u_n). Next, note that if Q_n(u_n) \ {M_n ≤ u_n} occurs, then the orbit must enter the ball {X_0 > u_n} at some point, which means we may define the first time this happens by i = inf{j ∈ {0, 1, ..., n − 1} : X_j > u_n}. However, since Q_n(u_n) does occur, the orbit must never enter the annulus Q(u_n), which is the only way out of the ball {X_0 > u_n}. Hence, once the orbit enters the ball it must never leave it. It follows by stationarity, properties (3) and (4) above, and (1.1) that P(Q_n(u_n) \ {M_n ≤ u_n}) → 0 as n → ∞, which proves the proposition.

The proposition above is essentially saying that if the sequence of levels is well chosen then, around repelling periodic points, in the limit, the probability of there being no entrances in the ball {X_0 > u_n} equals the probability of there being no entrances in the annulus Q(u_n). The idea to cope with clustering caused by periodic points is then to adapt conditions D_2(u_n) and D'(u_n), letting annuli replace balls. In order to make the theory as general as possible, motivated by the above considerations for stochastic processes generated by dynamical systems around periodic points, we will propose some abstract conditions to prove the existence of an EI less than 1 for general stationary stochastic processes.

2.2. Existence of an EI due to the presence of periodic phenomena. We start with an abstract condition designed to capture the essence of properties (1)-(4) from Section 2.1, in order to guarantee that the conclusion of Proposition 1 holds for general stochastic processes. It imposes some type of periodic behaviour of period p ∈ N plus a summability requirement. For that reason we shall denote it by SP_{p,θ}, which stands for Summable Periodicity of period p. To state the condition we will use a sequence of levels (u_n)_n as in (1.1).

Condition (SP_{p,θ}(u_n)). We say that X_0, X_1, X_2, ... satisfies condition SP_{p,θ}(u_n) for p ∈ N and θ ∈ [0, 1] if the periodicity requirement (2.1) and the summability requirement (2.2) hold.

Condition (2.1), when θ < 1, imposes some sort of periodicity of period p among the exceedances of high levels u_n, since if at some point the process exceeds the high level u_n then, regardless of how high u_n is, there is always a strictly positive probability of another exceedance occurring at the (finite) time p. In fact, if the process is generated by a deterministic dynamical system f : X → X as in (1.6) and f is continuous, then (2.1) implies that ζ is a periodic point of period p, i.e., f^p(ζ) = ζ. We also state a stronger condition, which is often simpler to check than SP_{p,θ}(u_n) and which requires, besides the periodicity, some type of Markov behaviour which immediately implies the summability condition (2.2). We call it MP_{p,θ}, which stands for Markovian Periodicity.
We will check this condition, rather than SP_{p,θ}(u_n), in the applications presented in Sections 3 and 4 as well as in Appendix C.

Condition (MP_{p,θ}(u_n)). We say that X_0, X_1, X_2, ... satisfies condition MP_{p,θ}(u_n) for p ∈ N and θ ∈ [0, 1] if, besides the periodicity requirement (2.1), a Markov-type recurrence requirement holds. Note that if, besides condition (2.1), the stationary stochastic process satisfies the following Markovian property, then it can easily be seen by an induction argument that condition MP_{p,θ}(u_n) holds.

Assuming that SP_{p,θ}(u_n) holds, for i, s, ℓ ∈ N ∪ {0} we define the events
$$Q^*_{p,i}(u) := \{X_i > u,\ X_{i+p} > u\}, \qquad Q_{p,i}(u) := \{X_i > u,\ X_{i+p} \le u\}, \qquad Q_{p,s,\ell}(u) := \bigcap_{i=s}^{s+\ell-1} \big(Q_{p,i}(u)\big)^c.$$
Assuming θ < 1, by (2.1) we know that the stochastic process has some underlying periodic behaviour such that the occurrence of an exceedance of a high level u_n at time i leads to another exceedance at time i + p with probability approximately 1 − θ. Therefore,
• Q*_{p,i}(u_n) corresponds exactly to the realisations of the process with an exceedance of u_n at time i which were "captured" by the underlying periodic phenomenon; and
• Q_{p,i}(u_n) corresponds to those realisations with an exceedance of u_n at time i that manage to "escape" the periodic behaviour.
Hence, if Q*_{p,i}(u_n) occurs, then we say we have a capture at time i, while, if Q_{p,i}(u_n) occurs, then we say we have an escape at time i. The event Q_{p,s,ℓ}(u_n) corresponds to the realisations for which no escapes occur between times s and s + ℓ − 1. Recall that, in the terminology used in Subsection 2.1, where the occurrence of exceedances corresponds to entrances in balls, the occurrence of escapes corresponds to entrances in annuli. Note that for either a capture or an escape to occur at time i, an exceedance must occur at that time. Note that if condition SP_{p,θ}(u_n) holds we must have
$$\mathbb{P}(Q_{p,0}(u_n)) = \mathbb{P}(X_0 > u_n) - \mathbb{P}(X_0 > u_n,\, X_p > u_n) \sim \theta\, \mathbb{P}(X_0 > u_n),$$
and consequently conclude that, under SP_{p,θ}(u_n), we have
nP(Q_{p,0}(u_n)) → θτ, as n → ∞. (2.5)
As we will show in Theorem 1, under SP_{p,θ}(u_n) the conclusion of Proposition 1 still holds. This means that, in loose terms, the limit distribution of the exceedances is the same as that of the escapes. Hence, in order to prove the existence of a limiting law for the maximum in the presence of a periodic phenomenon creating clustering, we follow a similar strategy to that used in [FF08a], with escapes playing the role of exceedances. We define:

Condition (D_p(u_n)). We say that D_p(u_n) holds for the sequence X_0, X_1, X_2, ... if for any integers ℓ, t and n
$$\Big|\mathbb{P}\big(Q_{p,0}(u_n) \cap Q_{p,t,\ell}(u_n)\big) - \mathbb{P}\big(Q_{p,0}(u_n)\big)\,\mathbb{P}\big(Q_{p,0,\ell}(u_n)\big)\Big| \le \gamma(n,t),$$
where γ(n, t) is nonincreasing in t for each n and nγ(n, t_n) → 0 as n → ∞ for some sequence t_n = o(n).

This condition requires some sort of mixing, by demanding that an escape at time 0 is an event which becomes more and more independent from an event corresponding to no escapes during some period, as the time gap between these two events increases. It is in this condition that the main advantage of our approach to prove the EI lies. This is because in all the approaches we are aware of (see for example [L83, O87, HHL88, LN89, CHM91]), some condition like D(u_n) from Leadbetter [L73] is used. Some are slightly weaker, like AIM(u_n) from [O87] or ∆(u_n) in [LR98], but they all have a uniform bound on the "independence" of two events separated by a time gap, where both these events may depend on an arbitrarily large number of r.v.s of the sequence X_0, X_1, .... In contrast, in our condition D_p(u_n) the first event Q_{p,0}(u_n) depends only on the r.v.s X_0 and X_p, and this proves to be crucial when applying it to stochastic processes arising from dynamical systems, as explained in Subsection 3.3.
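To make the capture/escape dichotomy concrete, the following sketch (ours; parameters are illustrative) estimates the EI for the full quadratic map f(x) = 4x(1-x) at its repelling fixed point ζ = 3/4, where |f'(ζ)| = 2, by computing the empirical proportion of exceedances that are escapes, i.e. an empirical version of P(Q_{p,0}(u_n))/P(X_0 > u_n). By properties (3)-(4) of Section 2.1 one expects this proportion to approach θ = 1 − 1/|f'(ζ)| = 1/2.

# Illustrative check: fraction of entrances to a small ball around the repelling
# fixed point zeta = 3/4 of f(x) = 4x(1-x) that "escape" at the next step.
# Exceedances {X_n > u} correspond to the ball {|f^n(x) - zeta| < delta}.
import numpy as np

def orbit(x0, n):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        xs[i] = x
        x = 4.0 * x * (1.0 - x)
    return xs

xs = orbit(0.123456, 2_000_000)
zeta, delta, p = 0.75, 1e-3, 1            # fixed point, so prime period p = 1
in_ball = np.abs(xs - zeta) < delta       # exceedances
exceed = in_ball[:-p]
escape = exceed & ~in_ball[p:]            # exceedance now, none p steps later
print("exceedances:", exceed.sum(),
      " estimated EI:", round(escape.sum() / exceed.sum(), 3))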
Assuming D_p(u_n) holds, let (k_n)_{n∈N} be a sequence of integers such that
k_n → ∞ and k_n t_n = o(n). (2.6)

Condition (D'_p(u_n)). We say that D'_p(u_n) holds for the sequence X_0, X_1, X_2, ... if there exists a sequence {k_n}_{n∈N} satisfying (2.6) and such that
$$\lim_{n\to\infty}\; n \sum_{j=1}^{[n/k_n]} \mathbb{P}\big(Q_{p,0}(u_n) \cap Q_{p,j}(u_n)\big) = 0.$$

This last condition is very similar to Leadbetter's D'(u_n) from [L83], except that instead of preventing the clustering of exceedances it prevents the clustering of escapes, by requiring that they should appear scattered fairly evenly through the time interval from 0 to n − 1. Our main result in this section is the following:

Theorem 1. Let (u_n)_{n∈N} be such that nP(X > u_n) = n(1 − F(u_n)) → τ, as n → ∞, for some τ ≥ 0. Consider a stationary stochastic process X_0, X_1, X_2, ... satisfying SP_{p,θ}(u_n) for some p ∈ N and θ ∈ (0, 1). Assume further that conditions D_p(u_n) and D'_p(u_n) hold. Then
$$\lim_{n\to\infty} \mathbb{P}(M_n \le u_n) = \lim_{n\to\infty} \mathbb{P}\big(Q_{p,0,n}(u_n)\big) = e^{-\theta\tau}. \qquad (2.8)$$

Theorem 1, and in particular formula (2.8), allow us to paint the following picture. In (1.9) we are concerned with the distribution of M_n given that an exceedance of the level u_n has occurred at time 0. The underlying periodic phenomena, and in particular the capture incidents, are responsible for the appearance of the Dirac term in (1.9) for the distribution of the RTS, with a weight given by the probability of a capture occurring given that an exceedance has occurred, which is 1 − θ. On the other hand, the escapes are responsible for the appearance of the exponential term in (1.9), again with a weight given by the probability of an escape occurring given that an exceedance has occurred, which is θ. However, the distribution of M_n, where we assume nothing about exceedances at time 0, is equal to that of the HTS, which, as can be seen in (1.8), only sees the exponential term, or in other words the escape component. Formula (2.8) is then saying that computing the distribution of M_n can be reduced to computing the distribution of the escapes.

Remark 2. If we enrich the process and the statistics by considering either multiple returns or multiple exceedances, we can study Exceedance Point Processes or Hitting Times Point Processes as in [FFT10, Section 3]. One would expect these point processes to converge to a compound Poisson process, consisting, in loose terms, of a limiting Poisson process ruling the cluster positions, to which is associated a multiplicity corresponding to the cluster size. One can then adapt the proof of Theorem 1 to obtain a result similar to [FFT10, Theorem 5], thus obtaining the convergence of cluster positions to a Poisson process. In order to achieve this, we would have to change D_p(u_n) in the same way that D_2(u_n) was changed to D_3(u_n) in [FFT10], with exceedances in D_3(u_n) from [FFT10] replaced by escapes. However, to obtain the actual convergence of the Exceedance Point Processes or Hitting Times Point Processes to the compound Poisson process more work is needed, since we cannot apply Kallenberg's criterion used in [FFT10, Theorem 5]: here the Poisson events are not simple, i.e., they can have multiplicity. This is studied in a work in progress.

We start the proof of Theorem 1 with the following two simple observations.

Lemma 2.1. For any integers p, ℓ ∈ N, s ∈ N ∪ {0} and real numbers 0 < u < v we have the following estimate.

Proof. This is a straightforward consequence of the formula for the probability of a multiple union of events. See, for example, the first theorem of Chapter 4 in [F50].

Lemma 2.2.
Assume that t, r, m, ℓ, s are nonnegative integers and u > 0 is a real number. Then we have the estimates (2.9) and (2.10). The proof of this lemma can easily be done by following the proof of [FF08a, Lemma 3.2] or [C01, Proposition 3.2] with minor adjustments.

Proof of Theorem 1. We split the proof into two parts. The first is devoted to showing the second equality in (2.8), leaving the first equality for the second part of the proof. Let ℓ = ℓ_n = [n/k_n] and k = k_n be as in Condition D'_p(u_n). We begin by replacing P(Q_{p,0,n}(u_n)) by P(Q_{p,0,k(ℓ+t)}(u_n)) for some t > 1, using (2.9) of Lemma 2.2, which gives (2.11). We now estimate recursively P(Q_{p,0,i(ℓ+t)}(u_n)) for i = 0, ..., k. Using (2.10) of Lemma 2.2 and stationarity, we obtain a bound for any 1 ≤ i ≤ k. Using stationarity, D_p(u_n) and, in particular, the fact that γ(n, t) is nonincreasing in t for each n, we conclude the corresponding estimate. Since nP(X > u_n) → τ as n → ∞, by (2.5) it follows that nP(Q_{p,0}(u_n)) → θτ. Hence, if k and n are large enough we have ℓP(Q_{p,0}(u_n)) < 2, which implies that 1 − ℓP(Q_{p,0}(u_n)) < 1. Then a simple inductive argument allows us to conclude the desired bound. Recalling (2.11), and since by (2.5) we have nP(Q_{p,0}(u_n)) → θτ as n → ∞ for some τ ≥ 0, it is now clear that the second equality in (2.8) holds provided the remaining error terms in (2.12) vanish. Assume that t = t_n, where t_n = o(n) is given by Condition D_p(u_n). Then, by (2.6), we have lim_{n→∞} k t_n P(Q_{p,0}(u_n)) = 0, since nP(Q_{p,0}(u_n)) → θτ ≥ 0. Finally, we use D_p(u_n) and D'_p(u_n) to obtain that the two remaining terms in (2.12) also go to 0.

Now we need to show that the first equality in (2.8) holds. First observe that {M_n ≤ u_n} ⊂ Q_{p,0,n}(u_n). Next, note that if Q_{p,0,n}(u_n) \ {M_n ≤ u_n} occurs, then we may define i = inf{j ∈ {0, 1, ..., n − 1} : X_j > u_n} and s_i = [(n − 1 − i)/p]. But since Q_{p,0,n}(u_n) does occur, for all j = 1, ..., s_i we must have X_{i+jp} > u_n; otherwise there would exist j_i = min{j ∈ {1, ..., s_i} : X_{i+jp} ≤ u_n} and Q_{p,i+(j_i−1)p}(u_n) would occur, which contradicts the occurrence of Q_{p,0,n}(u_n). It follows by SP_{p,θ}(u_n) and stationarity that P(Q_{p,0,n}(u_n) \ {M_n ≤ u_n}) → 0, which finishes the proof.

2.3. Existence of an EI due to multiple underlying periodic phenomena. In this subsection we consider stochastic processes with more than one underlying periodic phenomenon creating clustering of events (these cannot be realised as stochastic processes coming from dynamical systems as described above). In fact, it may happen that the escapes themselves form clusters, which means that D'_p(u_n) does not hold. This occurs if, for example, for some 1 ≤ j ≤ [n/k_n] we have nP(Q_{p,0}(u_n) ∩ Q_{p,j}(u_n)) → α > 0. Let p_2 be the smallest such j. Then, since nP(Q_{p,0}(u_n)) ∼ θτ, we have that (2.1) holds if we replace exceedances by escapes and p by p_2. Therefore there is a second underlying periodic phenomenon, which leads to the notion of escapes of second order. This motivates the introduction of conditions similar to SP_{p,θ}, D_p(u_n), D'_p(u_n), where the role of the exceedances is replaced by escapes, in order to obtain a statement like Theorem 1 in which the distribution of the maximum equals the distribution of these escapes of second order. Since it may also happen that these escapes of second order form clusters, we may have to repeat the process all over again. Hence, we establish a hierarchy of escapes in the following way. Given the sequences (p_i)_{i∈N} and (θ_i)_{i∈N}, with p_i ∈ N and θ_i ∈ (0, 1) for all i ∈ N, let p_i = (p_1, p_2, ..., p_i) and Θ_i = (θ_1, θ_2, ..., θ_i).
, θ i ). For each j ∈ N and u ∈ R, assuming that Q (i−1) p i−1 ,j (u) is already defined we define the escape of order i as We set Q (1) p 1 ,j (u) = Q p 1 ,j (u) and in the case i = 0 we can consider that Q . Now we restate conditions SP p,θ , D p (u n ), D ′ p (u n ) with respect to the escapes of order i ∈ N. . We say that X 0 , X 1 , X 2 , . . . satisfies condition SP and moreover (2.14) Condition (D p i (u n )). We say that D p i (u n ) holds for the sequence X 0 , X 1 , X 2 , . . . if for any integers ℓ, t and n P Q where γ(n, t) is nonincreasing in t for each n and nγ(n, t n ) → 0 as n → ∞ for some sequence t n = o(n). Condition (D ′ p i (u n )). We say that D ′ p i (u n ) holds for the sequence X 0 , X 1 , X 2 , . . . if there exists a sequence {k n } n∈N satisfying (2.6) and such that Observe that condition D ′ p i (u n ) gets weaker and weaker as i increases, which means that every time a new underlying periodic phenomenon is found, there is a higher chance that escapes of the next order satisfy D ′ . The next result generalises Theorem 1, which corresponds exactly to the case i = 1, to the case of higher order escapes. We stated these theorems separately since Theorem 1 contains the essential ideas required for Theorem 2 and, moreover, shows the influence of the periodic behaviour in a more transparent way. Theorem 2. Let (u n ) n∈N be such that nP(X > u n ) = n(1 − F (u n )) → τ , as n → ∞, for some τ ≥ 0. Consider a stationary stochastic process X 0 , X 1 , X 2 , . . . satisfying conditions SP (j) p j ,Θ j (u n ) for all 1 ≤ j ≤ i. Assume further that conditions D p i (u n ) and D ′ p i (u n ) hold. Then We notice that in the particular case i = 2 then θ 2 corresponds to the upcrossings index η in [F06]. Proof. The proof of the last equality in (2.15) is basically done as in Theorem 1 simply by replacing everything by its corresponding i version. The proof of the j-th equality, with 1 ≤ j ≤ i, in (2.15) follows as the proof of the first equality in (2.8) except that instead of (2.2) we use (2.14) of the corresponding condition SP (j) p j ,Θ j . Regarding the formula for the extremal index θ observe that it follows by an easy induction argument from the fact that SP (j) p j ,Θ j (u n ) holds for all 1 ≤ j ≤ i. In fact, as always, let (u n ) n∈N be a sequence of levels such that n(1 − F (u n )) = nP(X 0 > u n ) → τ , as n → ∞, for some τ ≥ 0. Assuming by induction that nP(Q (j−1) Since, by (2.5), we have nP(Q p 1 ,0 (u n )) → θ 1 τ , as n → ∞, the result follows at once. When comparing Theorem 2 with similar results in the literature, particularly the most similar in [LN89,CHM91] and [F06], we highlight the following advantages: the interpretation of the EI is explicitly motivated by the existence of underlying periodic phenomena; and the fact that our conditions are weaker, especially because our condition D p i (u n ) is much weaker than D(u n ). In fact, as we explain in greater depth in Section 3.3, if we had to check D(u n ) for stochastic processes arising from dynamical systems we could only get HTS/RTS for cylinders (see definition in Section 5) instead of balls, which we do obtain in Corollaries 4 and 6. In terms of EVL, this means that we would get cylinder EVL with convergence only for certain subsequences ω n of time n ∈ N, which contrasts with our results in Theorems 3 and 5. 
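Before turning to applications, we make explicit the induction that is behind the formula for the extremal index in Theorem 2. The following is only a minimal sketch, written under the assumption that each condition SP^(j)_{p_j,Θ_j}(u_n) identifies θ_j as the limiting proportion of escapes of order j−1 that qualify as escapes of order j; the resulting product formula is our reading of the chain of limits in (2.15) and should be checked against the precise statement.

```latex
% Minimal sketch of the induction giving the extremal index in Theorem 2.
% Assumption: SP^{(j)}_{p_j,\Theta_j}(u_n) yields
%   n P(Q^{(j)}_{p_j,0}(u_n)) \sim \theta_j\, n P(Q^{(j-1)}_{p_{j-1},0}(u_n)).
\begin{align*}
  n\,P(X_0>u_n) &\longrightarrow \tau,\\
  n\,P\big(Q^{(1)}_{p_1,0}(u_n)\big) &\longrightarrow \theta_1\,\tau,\\
  n\,P\big(Q^{(j)}_{p_j,0}(u_n)\big) &\longrightarrow \theta_j\theta_{j-1}\cdots\theta_1\,\tau
      \qquad (2\le j\le i),\\
  P(M_n\le u_n) &\longrightarrow e^{-\theta_1\theta_2\cdots\theta_i\,\tau},
      \qquad\text{i.e. } \theta=\prod_{j=1}^{i}\theta_j .
\end{align*}
```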
Regarding applications of Theorems 1 and 2, we mention that, for the examples of stochastic processes that besides D(u n ) also satisfy D ′′ (u n ) from [LN89], Theorem 1 can be used to prove the existence of an EI, while for the examples we know of stochastic processes that instead satisfy, for some k ≥ 2, condition D (k) (u n ) from [CHM91], Theorem 2 can eventually be used for the same purpose. Besides the applications to stochastic processes coming from dynamical systems given in Section 3, for which MP p,θ is shown to hold, we give some examples in the appendices, of Maximum Moving Average sequences and of an Autoregressive process, to which the results of this section also apply. While these examples are not novel, they illustrate how to check the conditions SP p,θ and MP p,θ in different, more classical, settings. The Maximum Moving Average in Appendix A satisfies SP p,θ with p = 2 and θ = 1/2, while the one in Appendix B satisfies SP (i) p i ,Θ i with i = 1, 2, p 2 = (p 1 , p 2 ) = (1, 3) and Θ 2 = (2/3, 1/2). The Autoregressive process of order 1 (AR(1)), introduced in [C81] and considered in Appendix C, is shown to satisfy MP p,θ with p = 1 and θ = 1 − 1/r.

3. The general theory for sequences generated by dynamical systems

In this section, we set out the general theory of the extremal index in the context of a discrete time dynamical system (X, B, µ, f), where X is a Riemannian manifold, B is the Borel σ-algebra, f : X → X is a measurable map and µ is an f-invariant probability measure. We will initially show that MP p,θ (u n ) can be proved for quite general systems and, later, in Section 4, give specific examples where we can also prove D p (u n ) and D ′ p (u n ) and thus apply Theorem 1. We consider a Riemannian metric on X that we denote by 'dist' and, for any ζ ∈ X, we define the ball of radius δ > 0 around ζ as B δ (ζ) = {x ∈ X : dist(x, ζ) < δ}. Let Leb denote a normalised volume form defined on B that we call Lebesgue measure. In order to study the statistical properties of the system, the invariant probability measure µ and its properties play a crucial role. First, we want the measure to provide relevant information about the system. This is achieved, for example, by requiring that the measure is 'physical' or, even more generally, an 'equilibrium state'. We will emphasise the first kind of measure here due to its importance in the study of the statistical properties of dynamical systems. A measure µ is said to be physical if the Lebesgue measure of the set of points U (called the basin of µ), for which the law of large numbers holds for any stochastic process defined as in (1.6) for any continuous r.v. ψ : X → R, is positive. In other words, µ is physical if the set of points x such that

1/n ∑_{i=0}^{n−1} ψ(f^i(x)) → ∫ ψ dµ, as n → ∞, for every continuous ψ : X → R, (3.1)

has positive Lebesgue measure. For example, µ is a physical measure if it is absolutely continuous with respect to Lebesgue, in which case we write µ ≪ Leb, and ergodic, which simply means that (3.1) holds µ-a.e. Note that these measures do provide a nice picture of the statistical behaviour of the system, since describing how the time averages 1/n ∑_{i=0}^{n−1} ψ(f^i(x)) of any continuous function ψ behave reduces to computing the spatial average ∫ ψ dµ, simply by integrating ψ against the measure µ. Moreover, this works on a "physically observable" set U of positive Lebesgue measure. More generally, we can study the statistical properties of a system through the following class of measures, known as equilibrium states. For good introductions to this topic see for example [Bo75, W82, K98].
Let f : X → X be a measurable function as above, and define I.e., for µ ∈ M f , µ(X ) = 1 and for any Borel measurable set A, µ(f −1 (A)) = µ(A). Then for a measurable potential φ : X → R, we define the pressure of (X , f, φ) to be where h(µ) denotes the metric entropy of the measure µ, see [W82] for details. If, for µ ∈ M f , h(µ) + φ dµ = P (φ) then we say that µ is an equilibrium state for (X , f, φ). The absolutely continuous measures given above can often be shown to be particular examples of equilibrium states. This is explained in more depth in Section 3.2. 3.1. Measures absolutely continuous with respect to Lebesgue. In this subsection, we assume that the measure µ is absolutely continuous with respect to Lebesgue. Besides, we assume that ζ is a repelling p-periodic point, which means that f p (ζ) = ζ, f p is differentiable at ζ and 0 < |det D(f −p )(ζ)| < 1. Moreover, we also assume that ζ is a Lebesgue density point with 0 < dµ dLeb (ζ) < ∞ and the observable ϕ : X → R ∪ {+∞} is of the form ϕ(x) = g(dist(x, ζ)), where the function g : [0, +∞) → R ∪ {+∞} is such that 0 is a global maximum (g(0) may be +∞); g is a strictly decreasing bijection g : V → W in a neighbourhood V of 0; and has one of the following three types of behaviour: Type 1: there exists some strictly positive functionπ : W → R such that for all y ∈ R lim s→g 1 (0) Examples of each one of the three types are as follows: g 1 (x) = − log x (in this case (3.3) is easily verified withπ ≡ 1), g 2 (x) = x −1/α for some α > 0 (condition (3.4) is verified with β = α) and g 3 (x) = D − x 1/α for some D ∈ R and α > 0 (condition (3.5) is verified with γ = α). Remark 3. Recall that the d.f. F is given by F (u) = µ(X 0 ≤ u) and u F = sup{y : F (y) < 1}. Observe that if at time j ∈ N we have an exceedance of the level u (sufficiently large), i.e., X j (x) > u, then we have an entrance of the orbit of x into the ball B g −1 (u) (ζ) of radius g −1 (u) around ζ, at time j. This means that the behaviour of the tail of F , i.e., the behaviour of 1 − F (u) as u → u F is determined by g −1 , if we assume that Lebesgue's Differentiation Theorem holds for ζ, since in that case 1 − F (u) ∼ ρ(ζ)|B g −1 (u) (ζ)|, where ρ(ζ) = dµ dLeb (ζ). From classical Extreme Value Theory we know that the behaviour of the tail determines the limit law for partial maximums of i.i.d. sequences and vice-versa. The above conditions are just the translation in terms of the shape of g −1 , of the sufficient and necessary conditions on the tail of F of [LLR83, Theorem 1.6.2], in order to exist a non-degenerate limit distribution forM n . Recall that X 0 , X 1 , X 2 , . . . is given by (1.6) for observables of the type (3.2), which means the event {X 0 > u} corresponds to a ball centred at ζ. Suppose that p ∈ N and consider as before Given the special structure of these dynamically defined stochastic processes, observe that for all i ∈ N we have Q p, . We will provide some conditions which guarantee an Extreme Value Law with a given extremal index. We will give some systems which satisfy these conditions in Section 4. Proof. To prove Theorem 3 we only need to show property MP p,θ (u n ) and apply Theorem 1. Since ζ is a repelling periodic point, by the Mean Value Theorem we have for u close to u F . By induction, we get for u close to u F . Consequently, using the fact that ζ is a Lebesgue density point, we have for i ∈ N, So replacing u with (u n ) n , summing over i and letting n → ∞, we have MP p,θ , as required. 
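For concreteness, here is a minimal sketch of the computation that identifies the extremal index in the proof just given; it assumes, as stated above, that ζ is a repelling p-periodic point and a Lebesgue density point of µ ≪ Leb with density ρ(ζ), and it records only the identification of θ (the summable part of MP p,θ (u n ) follows by iterating the same estimate over i).

```latex
% Sketch: identification of theta at a repelling p-periodic point (a.c.i.p. case).
% Write delta = g^{-1}(u), so that {X_0 > u} = B_delta(zeta).
\begin{align*}
\mu\big(Q_{p,0}(u)\big)
  &= \mu\Big(B_{\delta}(\zeta)\setminus f^{-p}\big(B_{\delta}(\zeta)\big)\Big)\\
  &\sim \rho(\zeta)\,\Big(1-\big|\det D f^{-p}(\zeta)\big|\Big)\,
        \mathrm{Leb}\big(B_{\delta}(\zeta)\big)
   && \text{(Mean Value Theorem, $\zeta$ a density point)}\\
  &\sim \Big(1-\big|\det D f^{-p}(\zeta)\big|\Big)\,\mu(X_0>u),
   && \text{as } u\to u_F .
\end{align*}
% Taking u = u_n with n mu(X_0 > u_n) -> tau then gives
%   n mu(Q_{p,0}(u_n)) -> theta tau,  with theta = 1 - |det D f^{-p}(zeta)|,
% which is the extremal index appearing in Corollary 4.
```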
The relation between EVL and HTS established in [FFT10] allows us to obtain the following: Corollary 4. Suppose that ζ is a repelling periodic point of prime period p, with θ = θ(ζ) = 1−| det D(f −p )(ζ)| ∈ (0, 1). Let (u n ) n∈N be such that nµ(X 0 > u n ) = n(1−F (u n )) → τ , as n → ∞, for some τ ≥ 0. Assume further that conditions D p (u n ) and D ′ p (u n ) hold. Then we have Hitting Time Statistics to balls at ζ, 6) and Return Time Statistics to balls at ζ, for all sequences δ n → 0, as n → ∞. Note that for example for a smooth map interval map f , Lebesgue measure is φ-conformal for φ(x) := − log |Df (x)|. Moreover, if for example f is a topologically transitive quadratic interval map then as in Ledrappier [Le81], any physical measure µ with h(µ) > 0 is an equilibrium state for φ. That this also holds for the even simpler case of piecewise smooth uniformly expanding maps follows from Section 4.1. We define S n φ(x) := φ(x) + · · · + φ • f n−1 (x). In the following proposition we will assume that for a potential φ, we have P (φ) = 0. Note that if P (φ) = p for p ∈ (−∞, ∞) then we can replace φ by φ − p to obtain P (φ − p) = 0. Clearly any equilibrium state for φ is an equilibrium state for φ − p and vice versa. Recall that we are assuming that f : X → X is a measurable map of a Riemannian manifold which is differentiable at a periodic point ζ ∈ X . Again, we consider that the stochastic processes X 0 , X 1 , X 2 , . . . defined by (1.6) are such that the r.v. ϕ : X → R ∪ {±∞} achieves a global maximum at ζ ∈ X . However, in order to still be able of establishing the connection between EVL and HTS in this setting, where the invariant measure may present a more irregular behaviour than when it is absolutely continuous, we need to tailor the observable ϕ to cope with this lack of regularity as in [FFT11]. Essentially, this means that we need to replace dist(x, ζ) in (3.2) with µ(B dist(x,ζ) (ζ)). So throughout this section the stochastic processes X 0 , X 1 , X 2 , . . . defined by (1.6) is such that the r.v. ϕ : X → R ∪ {±∞} is given by where g is as in Section 3.1. Since for this application, we would like a sequence (u n ) n such that lim n→∞ nµ({X 0 > u n }) = τ , it is useful to assume that the measure µ has some continuity: otherwise, 'jumps' in the size of balls around ζ may prevent us from finding such a sequence. To deal with this issue, in [FFT11], we defined a function for small η ≥ 0 and given by (η) = µ(B η (ζ)). (3.9) We required that is continuous on η. For example, if X is an interval and µ a Borel probability with no atoms,i.e., points with positive µ measure, then is continuous. Condition (3.10) is to control the distortion of φ on small scales. It follows for example from a Hölder condition on φ. Remark 4. If ζ ∈ X is a repelling p-periodic point in the support of a φ-conformal measure m φ as in Theorem 5, then S p φ(ζ) must be non-positive. We can show this by taking a very small set A around ζ such that f p : A → f p (A) is a bijection and such that A ⊂ f p (A), then Using the theory developed in [FFT11], an analogue of Corollary 4 holds for equilibrium states, namely: Corollary 6. Under the conditions of Theorem 5 we have Hitting Time Statistics to balls at ζ, and Return Time Statistics to balls at ζ, for all sequences δ n → 0, as n → ∞. The proof of Theorem 5 follows almost immediately from the following lemma. Lemma 3.1. Let φ : X → R be a potential which is continuous at ζ, f (ζ), . . . , f p−1 (ζ) as in Theorem 5 with P (φ) = 0. 
If φ has a conformal measure m φ then Proof. The continuity of φ implies that for any ε > 0, for all u sufficiently close to u F , e |Spφ(x)−Spφ(y)| < (1 + ε) for x, y ∈ {ϕ > u}. Using this and conformality, for u close enough to u F , proving the first part of the lemma. For the second part of the lemma, note that (3.10) implies that for any ε > 0 we can choose u so close to u F that So as in the proof of Theorem 3, we inductively obtain as required. Proof of Theorem 5. The proof is reduced to apply Theorem 1 after checking that MP p,θ (u n ) holds with θ = 1 − e Spφ(ζ) for any sequence (u n ) n with u n → u F as n → ∞, which follows using Lemma 3.1 and the same ideas as those in the proof of Theorem 3. Before giving specific examples of dynamical systems satisfying D p (u n ) and D ′ p (u n ), in the next subsection we discuss general conditions which imply those conditions. 3.3. On the roles of D p (u n ) and D ′ p (u n ) for stochastic processes arising from dynamical systems. Theorems 3 and 5 and Corollaries 4 and 6 assert that the existence of limiting laws of rare events with an extremal index for stochastic processes arising from dynamical systems as in (1.6) for observables given by (3.2) or (3.8) centred at repelling periodic points depends on the good mixing properties of the system both at long range (D p (u n )) and short range (D ′ p (u n )). In general terms the Condition D p (u n ) follows from sufficiently fast (e.g. polynomial) decay of correlations of the dynamical system. This is where D p (u n ) are seen to be much more useful then Leadbetter's D(u n ). While D(u n ) usually follows only from strong uniform mixing, like α-mixing (see [B05] for definition), and even then only at certain subsequences, which means most of the time the final result holds only for cylinders, D p (u n ) follows from decay of correlations which is much weaker and allows to obtain the result for balls, instead. Just to give an idea of how simple it is to check D p (u n ) for systems with sufficiently fast decay of correlations, assume that for all φ, ψ : M → R with bounded variation, there are C, α > 0 independent of φ, ψ and n such that where Var(φ) denotes the total variation of φ (see Section 4.1 for more details) and n̺(t n ) → 0, as n → ∞ for some t n = o(n). Take φ = 1 Qp(un) , ψ = 1 Q p,t,ℓ (un) , let C ′ > 0 be such that Var(1 Qp(un) ) ≤ C ′ , for all n ∈ N and set c = CC ′ . Then (3.11) implies that Condition D p (u n ) holds with γ(n, t) = γ(t) := c̺(t) and for the sequence t n such that n̺(t n ) → 0, as n → ∞. Observe that the existence of such C ′ > 0 derives from the fact that Q p (u n ) depends only on X 0 and X p . This is why we cannot apply the same argument to prove D(u n ) directly from Leadbetter, since we would have to take φ to be the indicator function over an event depending on an arbitrarily large number of r.v. X 0 , X 1 , . . ., which could imply the variation to be unbounded. It could happen that decay of correlations is only available for Hölder continuous functions against L ∞ ones, instead. This means that we cannot use immediately the test function φ = 1 Qp(un) , as we did before. However, proceeding as in [C01,Lemma 3.3] or [FFT10,Lemma 6.1], if we use a suitable Hölder approximation one can still prove D p (u n ). Rates of decay of correlations are nowadays well known for many chaotic systems. 
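To illustrate the reduction just described, the following records a standard shape for a decay-of-correlations estimate of the type assumed in (3.11); the precise norms used here (bounded variation against the sup norm) are our assumption and may differ slightly from the statement available for each concrete system.

```latex
% Assumed shape of the decay-of-correlations estimate in (3.11): BV against sup norm,
% with a rate varrho(t) such that n varrho(t_n) -> 0 for some t_n = o(n).
\Big|\int \phi\,(\psi\circ f^{t})\,d\mu \;-\; \int\phi\,d\mu\int\psi\,d\mu\Big|
   \;\le\; C\,\mathrm{Var}(\phi)\,\|\psi\|_{\infty}\,\varrho(t).
% With phi = 1_{Q_p(u_n)} and psi = 1_{Q_{p,t,\ell}(u_n)}, and Var(1_{Q_p(u_n)}) <= C'
% (Q_p(u_n) depends only on X_0 and X_p), the left-hand side controls the quantity
% appearing in D_p(u_n), so the condition holds with gamma(n,t) = c varrho(t), c = CC'.
```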
Examples of these include Hyperbolic or uniformly expanding systems as well as the non Hyperbolic or non-uniformly expanding admitting, for example, inducing schemes with a well behaved return time function. In fact, in two remarkable papers Lai-Sang Young showed that the rates of decay of correlations of the original system are intimately connected with the recurrence rates of the respective induced map. This means that, basically, for all the above mentioned systems D p (u n ) can easily be checked. In short, in order to prove the existence of EVL or HTS/RTS with an extremal index around repelling periodic points for systems with sufficiently fast decay of correlations is reduced to proving D ′ p (u n ). Usually, this requires a more closed analysis of the dynamics around the periodic points. Below, we will show that D ′ p (u n ) holds for Rychlik maps and for the full quadratic map, which are chaotic systems with exponential decay of correlations, and in this way obtain the existence of an extremal index different of 1. Up to our knowledge, these are the first limiting laws of rare event with an extremal index different of 1 to be proven for balls rather than cylinders. Examples of dynamical systems and observables with extremal index in (0, 1) We begin this section by introducing a particularly well-behaved class of interval maps and measures for which MP p,θ (u n ), D p (u n ) and D ′ p (u n ) hold at periodic points. This class is more general than the class of piecewise smooth uniformly hyperbolic interval maps. We then go on to consider a particular example of a non-uniformly hyperbolic dynamical system -the 'full quadratic map'. 4.1. Rychlik systems. We will introduce a class of dynamical systems considered by Rychlik in [R83]. This class includes, for example, piecewise C 2 uniformly expanding maps of the unit interval with the relevant physical measures. We first need some definitions. Definition 4. Given a potential ψ : Y → R on an interval Y , the variation of ψ is defined as where the supremum is taken over all finite ordered sequences (x i ) n i=0 ⊂ Y . The idea behind the proof is that it takes a point in Q p (u n ) at least something of the order log n iterations to return to Q p (u n ). Then after log n iterates, the decay of correlations estimates take over to give D ′ p (u n ). We first give a theorem and a lemma. The fact that these maps have decay of correlations of observables in a strong norm like BV against L 1 observables allows to prove the following Lemma which is very similar to the first computations in the proof of [BSTV03, Theorem 3.2]. Lemma 4.1. There exists C ′ > 0 such that for all j ∈ N Proof. Taking ψ = υ = 1 Qp(un) in Theorem 7 we easily get Since we have assumed, as above, that dµ φ dm φ ∈ BV and is strictly positive, and since 1 Qp(un) BV ≤ 5 there is C ′ > 0 as required. Proof of Proposition 2. First observe that the non-atomicity of µ, given by Theorem 7, implies that we can indeed find a suitable sequence (u n ) n as in (1.1), see the discussion around (3.9). Moreover, condition D p (u n ) follows from Theorem 7 as in Section 3.3. To prove D ′ p (u n ), first let U ∋ ζ denote a domain such that x ∈ U implies d(f p (x), ζ) > d(x, ζ). In order for a point in Q p (u n ) to return to Q p (u n ) at time k ∈ N, there must be some time ℓ ≤ k/p such that image f ℓp (Q p (u n )) must have only just escaped from the domain U. Therefore we must have µ(f (ℓ−1)p (Q * p (u n ))) ≥ Cµ(U) for some C > 0 which depends only on U and ζ. 
Since µ(Q p (u n )) ∼ τθ/n and e^{S p φ(ζ)} ∈ (0, 1), we must have ℓ, and therefore k, greater than B log n for some B > 0, depending on C, U and dµ/dm. Using this and Lemma 4.1,

n ∑_{j=1}^{[n/k]} µ(Q p (u n ) ∩ f^{−j}(Q p (u n ))) ≤ n([n/k] − B log n) µ(Q p (u n ))² + nµ(Q p (u n )) e^{−Bβ log n} ∑_{j=0}^{∞} e^{−jβ} ≤ (nµ(Q p (u n )))²/k + C β nµ(Q p (u n )) n^{−Bβ},

where C β := ∑_{j=0}^{∞} e^{−jβ}. Since nµ(Q p (u n )) → τθ as n → ∞, we have, for n large and some C > 0, that the right-hand side is bounded by C(1/k + n^{−Bβ}), and letting n → ∞ followed by k → ∞ yields D ′ p (u n ), which completes the proof. We give a short list of some of the simplest examples of Rychlik systems: • Given m ∈ {2, 3, . . .}, let f : x → mx mod 1 and φ ≡ − log m. Then m φ = µ φ = Leb. • Let f : (0, 1] → (0, 1] and φ : (0, 1] → (−∞, 0) be defined as f(x) = 2^k (x − 2^{−k}) and φ(x) := −k log 2 for x ∈ (2^{−k}, 2^{−k+1}]. Then m φ = µ φ = Leb. Remark 5. The crucial point in proving the result for Rychlik maps is the fact that the exponential decay of correlations given by Theorem 7 is expressed in terms of the L¹ norm of one of the observables. This is key in proving Lemma 4.1 also. In particular, the same argument can be applied to a generalisation of Rychlik maps in higher dimensions, which were both defined, and proved to have decay of correlations of the same type (with an L¹ norm estimate), in [S00].

4.2. The full quadratic map. We now consider the full quadratic map f : [−1, 1] → [−1, 1] given by f(x) = 1 − 2x². It is well known that the invariant density is given by dµ/dLeb(x) = 1/(π √(1 − x²)). We will consider the fixed point ζ = −1 and the observable ϕ : [−1, 1] → R given by ϕ(x) = −x, which achieves the maximum value 1 at ζ = −1. Notice that ϕ can be written as ϕ(x) = g(dist(x, ζ)) as in (3.2), simply by taking g : [0, ∞) → R defined by g(y) = 1 − y. Clearly, since f ′(ζ) = 4 > 1, then ζ is a repelling periodic point with p = 1. Let F denote the d.f. of X 0 . We have that, for s close to 0, the tail of the d.f. may be written as

1 − F(1 − s) = µ([−1, −1 + s]) ∼ √(2s)/π, as s → 0.

This implies that the level u n , which is such that n(1 − F(u n )) → τ ≥ 0, as n → ∞, may be written as

u n = 1 − π²τ²/(2n²) (1 + o(1)). (4.1)

This system has exponential decay of correlations which, in light of the discussion in Subsection 3.3, implies that Condition D p (u n ) holds. In fact, from [KN92, Y92] one has that for all φ, ψ : M → R with bounded variation, there are C, α > 0 independent of φ, ψ and t such that

|∫ φ (ψ ∘ f^t) dµ − ∫ φ dµ ∫ ψ dµ| ≤ C Var(φ) ‖ψ‖_∞ e^{−αt}, (4.2)

where Var(φ) denotes the total variation of φ. In particular, taking φ = 1 Q p,0 (u n ) and ψ = 1 Q p,t,ℓ (u n ), then (4.2) implies that Condition D p (u n ) holds with γ(n, t) = γ(t) := 2Ce^{−αt} and for the sequence t n = √n, for example. To check Condition D ′ p (u n ) we have to look at the particular behaviour of the system and estimate the probability of starting in a neighbourhood of ζ and returning in relatively few iterates. The idea is to observe that, since the critical orbit ends in ζ, one can just estimate the probability of starting close to the critical point and returning close to it. This can easily be done by using the estimates in [FF08, Section 6], where D ′ (u n ) was proved for Benedicks-Carleson maps (which include the example in hand) for observables achieving a maximum either at the critical point or at its image. Note that, in here, D ′ (u n ), which imposes some significant memory loss for relatively fast returns, cannot hold because ζ is a fixed point. Nevertheless, what we will prove is that the set of points that manage to escape from a tight vicinity of ζ, i.e., Q p,0 (u n ), only return after having a significant memory loss. As in [FF08, Section 6], we start by computing a turning instant T which splits the time interval 0, . . . , [n/k] of the sum in (2.7) into two parts. We compute T ∈ N such that for every j > T we have 2Ce^{−αj} < 1/n³. For n large enough it suffices to take T = 4 log n/α; indeed, for this choice, 2Ce^{−αT} = 2Cn^{−4} ≤ n^{−3} as soon as n ≥ 2C.
From (4.2) with υ = ψ = 1 Qp(un) and for j > T one easily gets which implies that for some C > 0 we have Thus, we are left with the piece of history from time 0 to T to analyse. For that purpose, we start by studying the pre-images of a small interval in the vicinity of ζ, namely, A(s) = [−1, −1 + s] for s close to 0. We have f −1 (A(s)) = A 1 (s) ∪ A 2 (s), where A 1 (s) = [−1, − 1 − s/2] is a small neighbourhood of −1 and A 2 (s) = [ 1 − s/2, 1] is a small neighbourhood of 1. Also, the pre-image of any small neighbourhood of 1 is a small neighbourhood of the critical point 0, in particular: Moreover, for s close to 0, we may write Now, observe that Q p (u n ) ⊂ A(1 − u n ) and if you are to enter A(1 − u n ) then either you are in A 1 (1 − u n ) ⊂ A(1 − u n ) or in A 2 (1 − u n ). Since, by definition of Q p (u n ), if you start in a point of Q p (u n ), then you immediately leave A(1 − u n ) in the next iterate, this means that the only way you can return to Q p (u n ) is if you enter A 2 (1 − u n ), which implies that you must enter the neighbourhood of the critical point f −1 (A 2 (1 − u n )) first. Hence, Using the symmetry of the map and of the invariant density plus the invariance of µ, we have This means, that we only need to study the probability of starting in the neighbourhood of the critical point and returning after j iterates and for that we use the computations in [FF08, Section 6]. The threshold Θ defined in [FF08, Equation (6.1)], in here, is given by Using (4.1) and (4.3) we may write which implies the existence of C 1 > 0 such that 2T /Θ ≤ C 1 for all n ∈ N. We are now in condition of using the final computations of [FF08, Section 6] to get that where 0 < β < 0.01. Finally, since by definition of Θ, (4.1) and (4.3), we have e −Θ ≤ const 1/n, it follows that This means that D ′ p (u n ) also holds and, hence, by Theorem 3, we have an EVL with extremal index θ = 3/4 for the stochastic process defined by (1.6) with ϕ defined above and achieving a global maximum at the repelling fixed point ζ = −1. EVL/HTS for cylinders Many results on HTS for dynamical systems were initially proved for HTS to dynamically defined cylinders, which is usually a more straightforward problem to study. Indeed many results which are known about the statistics of hits to cylinders are not known for balls. Therefore one of our goals in [FFT11] was to extend the results of [FFT10] to this setting. In this short section we outline this theory and in Section 6 we will apply it to the problem of EVLs with non-trivial EI. Many dynamical systems (X , f ) come with a natural partition P 1 , for example this might be the collection of maximal sets on which f is locally homeomorphic. The dynamically defined n-cylinders are then P n := n−1 i=0 f −i (P 1 ). For x ∈ X , let Z n [x] denote an element of P n containing x. Note that in principle there may be more than one choice of cylinder, but in the cases we consider we can make an arbitrary choice. If we wish to deal with HTS/EVL to dynamically defined cylinders Z n [ζ] around a point ζ, we replace the sets (U n ) n with (Z n [ζ]) n in (1.4). In this case we chose our observable ϕ to be of the form ϕ = g i • ψ, (5.1) where g i is one of the three forms given above and ψ(x) := µ(Z n [ζ]) where n is maximal such that x ∈ Z n [ζ]. 
Moreover, we select a subsequence of the time n, which we denote by (ω n ) n∈N and such that and for every n ∈ N, τ ≥ 0, where u n is taken to be such that We achieve this, for example, by letting Finally, we say that we have a cylinder EVL H for the maximum if for any sequence (u n ) n∈N such that (5.3) holds and for ω n defined in (5.4), the limit (5.2) holds and µ (M ωn ≤ u n ) →H(τ ), (5.5) for some non-degenerate d.f. H, as n → ∞. The cylinder HTS is defined analogously. The equivalence between these two perspectives was given in [FFT11,Theorem 3]. We also showed in that paper that the following two conditions imply that (5.5) holds with H(τ ) = e −τ . Condition (D(u n , ω n )). We say that D(u n , ω n ) holds for the sequence X 0 , X 1 , . . . if for any integers ℓ, t and n where γ(n, t) is nonincreasing in t for each n and ω n γ(n, t n ) → 0 as n → ∞ for some sequence t n = o(ω n ); Condition (D ′ (u n , ω n )). We say that D ′ (u n , ω n ) holds for the sequence X 0 , X 1 , . . Remark 6. We say a system is Φ-mixing, if for an n-cylinder U and a measurable set V , where Φ(j) decreases to 0 monotonically in j. This holds in the Axiom A case: see Haydn and Vaienti [HV09] for example. In [HV09, Section 3] they showed that Φ-mixing dynamical systems give rise to Poisson HTS around periodic points with a parameter, interpreted in the current paper as the EI. The Poisson law is for the number of returns to asymptotically small cylinders. Our results imply theirs for the first hitting time. As can be seen from our examples, we do not require our systems to have such good mixing properties and moreover our results also apply to balls. Dichotomy for uniformly expanding maps In this section we will prove that for a simple class of dynamical systems periodic points are the only points which can generate a cylinder EVL with EI in (0, 1). Therefore we understand all the cylinder EVLs for this system. We assume that the dynamics is f : x → 2x mod 1 on the unit interval I = [0, 1]. Let α ∈ (0, 1/2] and µ be the (α, 1 − α)-Bernoulli measure. This is thus a Rychlik system as in Section 4.1. Moreover, for α = 1/2 the measure is Lebesgue. While this system has stronger mixing properties, we will only actually use the fact that for our system, for n-cylinders U, V , Observe that this is a weaker assumption than Φ-mixing.) Proposition 3. Suppose that (I, f, µ) and the observable ϕ is as in (5.1). If ζ ∈ I is non-periodic then D ′ (u n , ω n ) and D(u n , ω n ) hold. Hence there is an EVL with EI equal to 1. • Given a periodic point ζ, if another periodic point x = ζ, with prime period p, shadows the orbit of ζ for a long time, say for n < p steps, but differs at some stage from the orbit of ζ, then the EI corresponding to the point x is of the form 1 − α k (1 − α) p−k for some 0 ≤ k ≤ p. So if p is very large then the EI here is almost 1. This implies that the EI corresponding to x has no relationship with the EI corresponding to ζ, no matter how much shadowing takes place. Arguing heuristically that non-periodic points should behave like periodic points with very long period, this idea also suggests that our dichotomy should hold for a much larger class of dynamical systems. • This proposition allied to the cylinder version of Proposition 2, completely characterises the possible cylinder EVLs for this system. • In the proof of the proposition, the only properties we need for our dynamical system are that it is Markov, that (6.1) holds and the measure of n-cylinders decay exponentially in n. 
• We would expect a similar proposition to be true for balls also. However we strongly use the cylinder structure of the system (I, f ) in our proof. It may be possible to approximate the balls by cylinders, but and since n-cylinders can not all be assumed to be symmetric about ζ, in the usual metric on I, this may not be straightforward. Before proving the proposition, we will discuss the symbolic structure of our dynamical system and then prove a lemma. First we recall that the system (I, f ) has a natural coding: x ∈ I can be given the code [1/2, 1). Then the dynamics is semi-conjugate to the full shift on two symbols ({0, Notice that the points x where there is a problem in the conjugacy are precisely the points which map onto the fixed point at zero. Following the proofs below it is easy to see that Proposition 3 follows almost immediately in this case. In fact this is also the situation in which the cylinder Z n [x] is not well defined. Now let (p i ) i be the sequence of integers such that whenever p i ≤ n < p i+1 the time the orbit of ζ takes to visit Z n [ζ] is at least p i . (We will sometimes denote i such that p in ≤ n < p in+1 by i n .) For example, suppose that the first 153 symbols representing ζ are 000000000000001 000000000000001 000000000000001 000000000000001 000000000000001 000000000000001 000000000000001 000000000000001 (6.2) 000000000000001 000000000000001 001. Letting a n ∈ N be maximal such that a n p i ≤ n, we can also interpret (p i ) i as the first times r = p i when Z r [ζ] contains no periodic point of period less than r. So for p i ≤ n < p i+1 where j = a n p i + q n , for some 0 ≤ q n < p i , the coding for ζ up to time n must consist of the block ζ 0 . . . ζ p i −1 repeated a n times followed by the block ζ 0 . . . ζ qn−1 . Remark 8. The periodic structure of cylinders was considered in [AVe09], see particularly Section 3 (note that there they are interested in first returns/hitting times of the whole cylinder to itself, which is slightly different to what we look at here). They considered the p i blocks ζ 0 . . . ζ p i −1 as being 'i-period' blocks and the block ζ anp i . . . ζ anp i +qn as an 'i-rest' block. In the example in (6.2) the relevant blocks have 1-period 15 and the 1-rest period is 2. A key point in the proof of Proposition 3 is that the assumption that ζ is not periodic implies that p in → ∞ as n → ∞. The following lemma explains how this affects short term returns to n-cylinders. Lemma 6.1. For p i ≤ n < p i+1 as above, if, for j ≤ n, there is a cylinder Z n+j ⊂ Z n [ζ] such that f j (Z n+j ) ⊂ Z n [ζ] then (a) there is only one such cylinder in Z n [ζ] with the same return time j; (b) there exists 0 ≤ k ≤ a n such that j = kp i . In particular there is only one possible code for such a cylinder, determined only by j and by the first n entries in the code for ζ and the first part of the lemma follows. The second part also follows immediately from the periodic structure of the code for ζ. (The proof can also be seen from the setup described in [AVe09, Section 3].) Proof of Proposition 3. The fact that D(u n , ω n ) holds follows from Theorem 7 as in Section 3.3. To prove D ′ (u n , ω n ), we first estimate the first n terms in the sum (5.6). We note that for our system and Z n+j as in Lemma 6.1 for ϑ = 1/α. Then for a n maximal such that a n p in ≤ n and ω n = ω n (τ ) Therefore using the mixing condition, Moreover, as n → ∞, our assumption that ζ is not periodic implies that p in → ∞ as n → ∞. 
Therefore, D ′ (u n , ω n ) follows by taking k → ∞. The existence of an EVL with EI equal to 1 follows from [FFT11, Section 5]. We are left now with condition D ′ p (u n ). Recall that, in this case, Q p,i (u n ) = {X i > u n , X i+2 > u n }. It is easy to check that P(Q p,0 ∩ Q p,i ) = (1 − α n ) 2 α 4 n for all i ∈ N, except for i = 2 and i = 4 for which such probability is 0. Hence, Appendix B. A Maximum Moving Average process with two underlying periodic phenomena of periods 1 and 3 As before, let Y −2 , Y −1 , Y 0 , Y 1 , . . . be a sequence of i.i.d. random variables as in Appendix A. This time, we define a Maximum Moving Average process X 0 , X 1 , . . . in the following way: for each n ∈ N 0 set X n = max{Y n−3 , Y n−2 , Y n }. As before, the fact that X 0 , X 1 , . . . is 4-dependent clearly implies that condition D p 2 (u n ) holds. The uniform AR(1) process is defined recursively as follows: where ǫ n is independent of X n−1 and X 0 is uniformly distributed in [0, 1]. It is simple to check that X 0 , X 1 , . . . forms a stationary stochastic process such that each X n is uniformly distributed on [0, 1]. In [C81,Theorem 3.1], Chernick shows that this process satisfies D(u n ) from Leadbetter but D ′ (u n ) fails. Besides, in [C81, Theorem 4.1], using a direct approach, he shows that the partial maxima has a EVL of type III with an extremal index equal to 1 − 1/r. We will show that machinery we developed can be applied to this process and obtain the same result as Chernick [C81,Theorem 4.1] simply by checking the conditions of Theorem 1. C.1. Verification of MP p,θ . The proof of condition MP p,θ relies on the following property of the process: Lemma C.1. For all u > (r − 1)/r and n ∈ N, if X n−1 > u then X n > u if and only if ǫ n = (r − 1)/r. This means that the probability of having an exceedance of any high level u, given that you have just had an exceedance, is 1/r, which makes it a periodic phenomenon of period p = 1 (in the sense of condition 2.1). Thus, we are left with the piece of history from time 0 to t * to analyse. Recall that u n = 1 − τ /n so that nP(X 0 > u n ) → τ ≥ 0, as n → ∞. Observe that Q p,0 (u n ) occurs if and only if X 0 > u n and X 1 ≤ u n , which, for n large enough, will only happen if ǫ 1 < (r − 1)/r which means that X 1 ≤ (r − 1)/r. Besides, for Q p,j (u n ) to occur we must have X j > u n . Since X j = ǫ j + ρǫ j−1 + . . . + ρ j−1 X 1 , we have that, for very large n, there exists ς = ς(n) such that ǫ j = ǫ j−1 = . . . = ǫ j−ς = (r − 1)/r, otherwise X j cannot exceed the level u n . Next, we compute a lower bound for ς. Hence, we set ς = log n log r − log τ log r − 1. Observe that the occurrence of Q p,j (u n ) implies an exceedance of u n at time j followed by the occurrence of the event ǫ j+1 < (r − 1)/r, which, in turn, implies that we have to wait at least a period of length ς before another exceedance of u n occurs: This means that there can only occur at most a [t * /ς] + 1 number of Q p,j events with j = 1, . . . , t * . Hence, we have n t * j=1 P(Q p,0 (u n ) ∩ Q p,j (u n )) ≤ n([t * /ς] + 1)P(X 0 > u n ) 1 r ς .
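For completeness, here is a minimal sketch of how the last bound presumably finishes the verification of D ′ p (u n ) for the AR(1) process; it uses u n = 1 − τ/n, the choice of ς made above, and the assumption that t* grows at most like a power of log n, in particular t* = o(n).

```latex
% Hedged completion of the D'_p(u_n) estimate for the uniform AR(1) process.
% From varsigma = log n/log r - log tau/log r - 1 we get r^{varsigma} = n/(r tau),
% and P(X_0 > u_n) = 1 - u_n = tau/n, so
\begin{align*}
n\sum_{j=1}^{t^{*}} P\big(Q_{p,0}(u_n)\cap Q_{p,j}(u_n)\big)
  \;\le\; n\Big(\Big[\tfrac{t^{*}}{\varsigma}\Big]+1\Big)\,P(X_0>u_n)\,\Big(\tfrac1r\Big)^{\varsigma}
  \;=\; \Big(\Big[\tfrac{t^{*}}{\varsigma}\Big]+1\Big)\,\frac{r\,\tau^{2}}{n}
  \;\xrightarrow[n\to\infty]{}\;0,
\end{align*}
% since varsigma grows like log n / log r while t* = o(n); together with the estimate
% for the lags j > t*, this yields condition D'_p(u_n) and hence, by Theorem 1, an EVL
% with extremal index theta = 1 - 1/r, in agreement with [C81, Theorem 4.1].
```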
Nitrogen-containing bisphosphonates inhibit cell cycle progression in human melanoma cells Cutaneous melanoma is one of the highly malignant human tumours, due to its tendency to generate early metastases and its resistance to classical chemotherapy. We recently demonstrated that pamidronate, a nitrogen-containing bisphosphonate, has an antiproliferative and proapoptotic effect on different melanoma cell lines. In the present study, we compared the in vitro effects of three different bisphosphonates on human melanoma cell lines and we demonstrated that the two nitrogen-containing bisphosphonates pamidronate and zoledronate inhibited the proliferation of melanoma cells and induced apoptosis in a dose- and time-dependent manner. Moreover, cell cycle progression was altered, the two compounds causing accumulation of the cells in the S phase of the cycle. In contrast, the nonaminobisphosphonate clodronate had no effect on melanoma cells. These findings suggest a direct antitumoural effect of bisphosphonates on melanoma cells in vitro and further support the hypothesis of different intracellular mechanisms of action for nitrogen-containing and nonaminobisphosphonates. Our data indicate that nitrogen-containing bisphosphonates may be a useful novel therapeutic class for treatment and/or prevention of melanoma metastases. Melanoma is a highly malignant tumour and its propensity to metastasise together to its resistance to therapy in later stages make it the most aggressive skin cancer (Serrone and Hersey, 1999;Soengas and Lowe, 2003). The high mortality rate of malignant melanoma, the poor efficacy of chemotherapy in advanced stages of the disease and the high toxicity of the classical regimens have stimulated intensive research for new alternatives for the therapy of melanoma. Various studies suggest that an intrinsic resistance to apoptosis can be one important mechanism by which melanoma cells escape therapeutic control (Soengas and Lowe, 2003). Therefore, new therapeutical strategies that bypass this resistance are necessary. Bisphosphonates are a class of synthetic analogues of the endogenous pyrophosphate, which are well established in the treatment of osteoclast-mediated bone diseases such as osteoporosis, Paget's disease of the bone and tumour-induced osteolysis (Fleisch, 1997;Finley, 2002). They have been used in medical practice for more than three decades for their antidemineralising effects. Recently, an increasing body of evidence from both in vitro and in vivo studies suggests that bisphosphonates may also have a specific antitumoural action (Clezardin, 2002;Padalecki and Guise, 2002;Green, 2003). Thus, bisphosphonates have been shown to inhibit proliferation, induce cell cycle changes and/or induce apoptosis in various types of human tumour cells, especially in those with preferential spread to bone, such as multiple myeloma, breast or prostate carcinoma cells (Shipman et al, 1997;Senaratne et al, 2000;Hiraga et al, 2001;Lee et al, 2001;Sonnemann et al, 2001;Iguchi et al, 2003;Oades et al, 2003). For the nitrogen-containing bisphosphonates, this antiproliferative and proapoptotic effect appears to be related to their ability to inhibit the enzymes of the mevalonate pathway, especially farnesyl pyrophosphate (FPP) synthase (Luckman et al, 1998;Benford et al, 1999;Senaratne et al, 2002). 
Consequently, bisphosphonates prevent the synthesis of higher isoprenoids such as geranylgeranyl pyrophosphate (GGPP) and FPP, which are necessary for the posttranslational processing (prenylation) of different signalling molecules, including monomeric G proteins of the Ras and Rho families. For these families of small GTPases, the prenyl residues act as membrane anchors essential for their activation and further interaction with other signalling molecules (Bar-Sagi and Hall, 2000;Aznar and Lacal, 2001). Ras and Rho protein families are key regulators of a variety of cellular processes, ranging from reorganisation of the cytoskeleton to transcriptional regulation and control of cell growth and survival (Aznar and Lacal, 2001). When their expression and activation escape the control mechanisms, small GTPases play an essential part in promoting tumorigenesis and tumour metastasing (Fritz et al, 1999;Clark et al, 2000;Pruitt and Der, 2001). Therefore, their inactivation by inhibition of prenylation could explain at least in part the antitumoral effects described for bisphosphonates. On the contrary, bisphosphonates that lack a nitrogen atom, such as clodronate, appear to have no effect on the mevalonate pathway, but rather reduce cell viability by metabolism to inactive analogues of ATP, and consequently, by disruption of the ATP-dependent processes of the cell (Rogers et al, 1996;Rogers et al, 1999). We have previously demonstrated that the nitrogen-containing bisphosphonate pamidronate is able to induce apoptosis and to inhibit proliferation in melanoma cells in vitro (Riebeling et al, 2002). Melanoma metastasises less often to bone, but it is an aggressive tumour with high metastatic potential and marked resistance to the currently available antitumour therapy strategies. New therapy alternatives are urgently required, and we addressed the question of the possible benefit of bisphosphonates in the adjuvant therapy of melanoma. Besides the well-known pamidronate, a wide range of newer bisphosphonates with higher antiresorptive effect have been introduced in practice (Green, 2001;Fleisch, 2002;Widler et al, 2002). However, the relationship between antiresorptive potency, mechanism of action and cellular effects of bisphosphonates has not been completely elucidated. Moreover, to what extent bisphosphonates of different pharmacological classes differ in their effects in tumour cells or if higher antiresorptive potency implies a stronger effect against tumour cell growth is still a matter of debate. The present study aims to compare the effect of three different bisphosphonates, with different postulated mechanisms of action and different antiresorptive potencies, on cell proliferation, cell cycle progression and cell survival in melanoma in vitro. We have chosen for this purpose the nonaminobisphosphonate clodronate, widely used in the treatment of cancer-induced osteolytic disease, and two nitrogen-containing bisphosphonates, pamidronate and the newly developed zoledronate, the most potent antiresorptive agent known to date. All three bisphosphonates were dissolved in distilled water and filter sterilised (sterile filters, B Braun, Melsungen, Germany). Stock solutions (at final concentrations of 21.5 mM for pamidronate, and 100 mM for zoledronate and clodronate) were aliquoted and kept at À201C for long-term storage. Caspase-3 inhibitor was purchased from Alexis (Grünberg, Deutschland). Cells were pretreated with the inhibitor 1 h prior to stimulation. 
Dulbecco's modified Eagle's medium (DMEM) was purchased from Invitrogen (Karlsruhe, Germany). Further cell culture reagents were obtained from Seromed-Biochrom (Berlin, Germany). All other reagents were obtained from Sigma (Munich, Germany) unless stated otherwise. Cell culture The melanoma cell line A375 (CRL-1619), derived from a primary tumour, was purchased from the American Type Culture Collection (Manassas, VA, USA). The melanoma cell population M186 was obtained by surgical intervention from a patient with histologically confirmed melanoma metastases. Melanoma cells were grown in 75 cm² culture flasks (Nunc, Wiesbaden, Germany) in DMEM supplemented with 10% heat-inactivated foetal calf serum, 100 U ml−1 penicillin and 100 µg ml−1 streptomycin, in a 5% CO2 atmosphere at 37°C. Proliferation assay Proliferation was assessed using the crystal violet staining method (Wieder et al, 1998). Subconfluent melanoma cells (60 000 cells well−1) were treated in 24-well plates with the indicated agents or the corresponding solvents as control. After the indicated incubation time, culture medium was removed, cells were rinsed with phosphate-buffered saline (PBS) to wash off nonadherent cells and the remaining cells were fixed with 0.1 M glutaraldehyde in PBS for 30 min at room temperature. Subsequently, cells were washed with PBS and then stained by incubation with 0.2 mM crystal violet in PBS for 30 min at room temperature. Unbound dye was washed away in deionised water for 15 min and 0.2% Triton X-100 was added to release the bound dye. After 1 h of incubation, 100 µl of supernatant from each sample was transferred to a 96-well microtitre plate and the extinction at 570 nm was measured using an ELISA photometer. Extinction values of vehicle-treated control cells were set at 100% and the rate of proliferation of bisphosphonate-treated cells was calculated as the percent of controls. Cytotoxicity assay Cytotoxicity was determined using the Cytotoxicity Detection Kit (LDH) (Roche Diagnostics, Mannheim, Germany). After incubation of 80 000 cells well−1 in 24-well plates for up to 24 h, plates were centrifuged at 300 g for 5 min. A volume of 50 µl of the resulting supernatant was transferred into a microtitre plate and lactate dehydrogenase (LDH) activity was determined by the addition of substrate solution. Formation of the formazan salt was measured at 490 nm using an ELISA photometer. Extinction values of control cells were set at 100% and the rate of LDH release from the treated cells was calculated as the percent of controls. Cell death detection Induction of apoptosis was measured using the 'Cell Death Detection ELISA PLUS' kit from Roche Diagnostics (Mannheim, Germany), which detects oligonucleosomes released into the cytoplasm of cells during apoptosis by means of a combination of anti-histone and anti-DNA antibodies, as described (Wieder et al, 1998). Cells were seeded at 80 000 cells well−1 in 24-well plates and left to adhere overnight. Subsequently, cells were treated as indicated, after which the plates were centrifuged at 300 g for 5 min. The supernatant was cautiously removed and the cells were further incubated with lysis buffer for 30 min at room temperature.
After centrifugation at 300 g for 10 min, 20 µl from the resulting supernatants was transferred to a streptavidin-coated microtitre plate, supplemented with 80 µl of immunoreagent solution (containing biotin-coupled anti-histone antibodies and peroxidase-coupled anti-DNA antibodies) and incubated for 2 h at room temperature under moderate shaking. After incubation, the wells were rinsed with incubation buffer, supplied with 100 µl of substrate solution per well and further incubated for 10 min at room temperature, under protection from light. The extinction of the samples at 405 nm was measured using an ELISA photometer. Extinction values of control samples were set at 100% and DNA fragmentation of treated cells was calculated as the percent of control. Measurement of caspase-3/7 activity Caspase-3/7 activity was measured by proteolytic cleavage of the fluorogenic substrate Z-DEVD-R110 using the Apo-ONE Homogeneous Caspase-3/7 Assay (Promega, Madison, WI, USA). Cells were treated for 24 h in 96-well plates with the corresponding bisphosphonates or vehicle as control, at the concentrations indicated. Apo-ONE Homogeneous Caspase-3/7 buffer containing Z-DEVD-R110 diluted 1 : 100 was added to the cells and incubated at room temperature. The activity was measured fluorimetrically with an excitation wavelength of 499 nm and an emission wavelength of 521 nm after 90 min. Caspase-3/7 activity was determined and expressed as the percentage of control. Cell cycle analysis The distribution of cells in the different phases of the cell cycle after treatment with bisphosphonates was analysed by measuring the DNA content of cells using FACS analysis after nuclear staining with propidium iodide. Melanoma cells were seeded at 80 000 cells well−1 in six-well plates. After 48 h, they were treated as indicated and subsequently washed with PBS, trypsinised and harvested in culture medium. All washes and cell solutions were pooled and centrifuged at 200 g for 5 min. The cell pellet was resuspended in PBS and 1 × 10⁶ cells of each sample were collected, washed in ice-cold PBS and fixed in ice-cold 70% ethanol in PBS (v v−1) at −20°C overnight. For the analysis of DNA content, samples were thawed and centrifuged at 400 g for 5 min. The pellet was washed once with 1 ml PBS, and then incubated with 1 ml of 2% propidium iodide and 20% RNase A in PBS for at least 30 min at room temperature, protected from light. After incubation, the cell suspension was analysed for red fluorescence with a FACSCalibur flow cytometer (Becton Dickinson, Heidelberg, Germany). DNA histograms were created using CellQuest software, version 3.0 for Apple Macintosh (Becton Dickinson), where 20 000 events sample−1 were analysed. The relative distribution of cells in the phases of the cell cycle was calculated with ModFit LT software, version 2.0 for Apple Macintosh (Becton Dickinson). Statistical analysis Statistical significance was determined using Student's t-test, with SigmaStat 2.03 software. P < 0.05 was considered significant.

RESULTS

Nitrogen-containing bisphosphonates inhibit melanoma cell proliferation In order to investigate the effect of bisphosphonates on melanoma cell growth, the melanoma cell lines A375 and M186 were treated with increasing concentrations of pamidronate, zoledronate and clodronate for 24 h. The number of cells was determined using the crystal violet method. In both cell lines, pamidronate as well as zoledronate treatment resulted in a dose-dependent decrease in cell number (Figure 1).
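As a purely illustrative aid to the quantification described under Proliferation assay and Statistical analysis above, the short sketch below converts raw extinction readings into percent of vehicle control and applies a two-sample Student's t-test; the numerical values are invented, and the use of SciPy (rather than the SigmaStat software used in the original analysis) is our own assumption.

```python
# Hypothetical sketch of the percent-of-control calculation and significance test
# described in the Methods; the extinction values below are invented for illustration.
import numpy as np
from scipy import stats

control = np.array([0.82, 0.79, 0.85, 0.81])  # extinction at 570 nm, vehicle-treated wells
treated = np.array([0.44, 0.47, 0.41, 0.45])  # extinction at 570 nm, bisphosphonate-treated wells

# Express each treated well as percent of the mean control extinction (control = 100%).
percent_of_control = treated / control.mean() * 100
print(f"proliferation: {percent_of_control.mean():.1f}% of control "
      f"(s.d. {percent_of_control.std(ddof=1):.1f})")

# Two-sample Student's t-test on the raw extinction values; P < 0.05 taken as significant.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant = {p_value < 0.05}")
```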
In A375 cells, ( Figure 1A) a significant reduction in cell number was observed after treatment with 50 mM pamidronate, and reached the maximum after treatment with at 100 mM pamidronate (84% of control). Higher concentrations of pamidronate were not able to induce a further decrease in cell number. Zoledronate was more effective in inhibiting cell growth. The cell number was significantly reduced to 89% of control at a concentration of 30 mM and further decreased to 45% of control at 100 mM zoledronate. A similar effect of bisphosphonate treatment was observed in M186 cells ( Figure 1B). A slight, yet still significant reduction of cell number was observed for pamidronate at a concentration of 100 mM (88% of control), and a stronger effect was observed for zoledronate, beginning at a concentration of 30 mM and reaching a maximum at 100 mM with 57% of control. In contrast, incubation of both A375 and M186 cells with the nonaminobisphosphonate clodronate, at concentrations ranging from 100 to 1000 mM, failed to induce a significant decrease in cell number within 24 h (data not shown). Nitrogen-containing bisphosphonates induce apoptosis in melanoma cell lines DNA fragmentation as a marker of apoptosis was evaluated by means of an ELISA technique in A375 and M186 cells after 24 h of incubation with increasing concentrations of pamidronate, zoledronate or clodronate. A dose-dependent induction of DNA fragmentation was observed after both zoledronate and pamidronate treatment in the two-cell populations that were studied. In A375 cells, a significant effect was detectable at concentrations of 50 mM and was further increased at 100 mM ( Figure 2A). In M186 cells, increased DNA fragmentation was first detected at a concentration of 30 mM ( Figure 2B). In both cell lines, 100 mM pamidronate had a stronger effect in inducing apoptosis, with DNA fragmentation reaching 711% of control in A375 cells, and 746% of control in M186 cells, while, using 100 mM zoledronate, DNA fragmentation reached only 280% of controls in A375 cells and 247% in M186 cells. In contrast, A375 and M186 cells treated for 24 h with clodronate in concentrations ranging from 100 to 1000 mM showed no significant effect on DNA fragmentation ( Figure 2C). The activity of the execution caspase-3 or -7 is a further marker of apoptosis. The data obtained for DNA fragmentation correlated well with the caspase activity measured in bisphosphonate-treated melanoma cells. A375 and M186 cells were treated for 24 h with 100 mM pamidronate or zoledronate, respectively, after which the activity of caspase-3/7 was measured. The data obtained on caspase activation further support a stronger proapoptotic effect of pamidronate, in comparison to zoledronate. (Figure 3). Treatment of the cells with clodronate showed no effect on caspase-3/7 activity (data not shown). A possible unspecific cytotoxic effect of bisphosphonates on melanoma cells was investigated by measuring the extracellular release of LDH, following bisphosphonate treatment. No increase of LDH release in comparison with controls was found in M186 and A375 cells treated for 24 h with zoledronate or pamidronate in a concentration range between 10 and 100 mM, while treatment with the nonaminobisphosphonate clodronate, in concentrations ranging from 100 to 1000 mM, induced a moderate but significant increase in extracellular LDH activity measured after 24 h (data not shown). 
Figure 3 Nitrogen-containing bisphosphonates induce caspase-3/7 activity in melanoma cells. Preconfluent A375 (A) and M186 (B) cells were treated with vehicle control (white bars), 100 mM pamidronate (grey columns) or 100 mM zoledronate (black columns) for 24 h. To test the specificity of caspase-3 activation, cells were pretreated with the respective caspase-3 inhibitor 1 h prior to stimulation and then treated for 24 h with 100 mM pamidronate or zoledronate in combination with the inhibitor (square bars). Caspase-3/7 activity was determined with the Apo-ONE homogeneous caspase-3/7 assay as described under Materials and methods. Three independent experiments were performed in quadruplicate, with similar results. One representative experiment is shown. Results are given as % of control ± s.d. (n = 4) (*P < 0.05; **P < 0.01).

Thus, nitrogen-containing bisphosphonates are able to induce apoptosis in a dose-dependent manner in melanoma cells, while the nonaminobisphosphonate clodronate appears to induce necrosis without an apoptotic effect. Further on, we investigated the relationship between the apoptotic effect of bisphosphonates and the duration of the treatment. A375 melanoma cells were incubated for 6, 12 and 24 h with 100 mM pamidronate or zoledronate. A significant increase in DNA fragmentation was observed after 12 h of incubation with both bisphosphonates (Figure 4), and it increased further markedly after 24 h of treatment, reaching up to 700% of controls for pamidronate and 315% for zoledronate. Thus, apoptosis induced by bisphosphonates is dependent on both the concentration and the duration of treatment. Bisphosphonates inhibit progression of melanoma cells through the cell cycle In order to study the effect of bisphosphonates on cell cycle progression of melanoma cells, FACS analysis of the DNA content was used to investigate the distribution of A375 and M186 cells in the phases of the cell cycle. Cells were treated for 24 h with increasing concentrations of pamidronate, zoledronate or clodronate. Vehicle-treated cultures exhibited a distribution of cells in the phases of the cell cycle typical for proliferating cells, with an average of 61% of cells having a 2n DNA content, corresponding to the G0/G1 phase, 10% of cells having a 4n DNA content (G2/M) and 28% showing a DNA content between 2n and 4n, corresponding to the S phase (Figure 5). Cells treated with pamidronate or zoledronate in a concentration range of 10-100 mM showed comparable dose-dependent alterations in cell cycle distribution, with an increase in the number of cells in S phase accompanied by a reduction in the proportion of cells in the G0/G1 and G2/M phases (Figure 5A and B). Zoledronate was more potent in inducing changes in cell cycle distribution, its effects starting at concentrations of 30 mM, while pamidronate significantly altered the distribution of cells in the cell cycle phases only at the highest concentration. The maximum effect was seen for both drugs at a concentration of 100 mM, with an increase in the proportion of cells in S phase from 28 to 49% for pamidronate and to 47% for zoledronate. In contrast, treatment of cells with clodronate at concentrations 10 times higher showed a different pattern, with a tendency towards an increase in the proportion of cells in the G0/G1 phase (Figure 5C).

DISCUSSION

At present, bisphosphonates are emerging as new potential antitumoral drugs.
While most studies on bisphosphonates concentrate on tumours with preferential spread to bone, such as breast or prostate cancer, we were the first to show that pamidronate can induce apoptosis in melanoma cells (Riebeling et al, 2002). In order to further investigate the potential benefit of bisphosphonates in the treatment of melanoma, the present study compares the effect of these compounds on proliferation, cell cycle progression and apoptosis induction in melanoma cell lines. Three bisphosphonates with different structures, antiresorptive potencies and postulated mechanisms of action, namely pamidronate, zoledronate and clodronate, were analysed. Our results indicate that both nitrogen-containing bisphosphonates, pamidronate and zoledronate, are able to decrease cell proliferation in vitro in a dose-dependent manner. Inhibition of cell growth was not the result of necrosis, as no significant release of LDH from the cells was measured after treatment with the two bisphosphonates. At the same time, the inhibition of proliferation cannot be explained only by induction of apoptosis, since the antiproliferative capacity did not correlate with the proapoptotic effect of these nitrogen-containing bisphosphonates. Both pamidronate and zoledronate induced DNA fragmentation in the two studied melanoma cell lines, A375 and M186, in a dose- and time-dependent manner, but pamidronate had a stronger proapoptotic effect. Consistently, pamidronate induced a stronger activation of the executioner caspase-3/7. This caspase activation supports the specificity of the proapoptotic effect of bisphosphonates. In contrast, the nonaminobisphosphonate clodronate, even at concentrations 10 times higher, had no significant effect on cell number and induction of apoptosis in cultured melanoma cells. However, at higher doses, clodronate caused a slight increase in LDH activity, suggesting some cytolytic effect. These differences observed in the activity of the three bisphosphonates may reflect the difference in the mechanism of action between the nitrogen-containing and nonaminobisphosphonates. A stronger antiproliferative and/or proapoptotic effect of nitrogen-containing bisphosphonates compared with nonaminobisphosphonates has also been reported in other cell types such as macrophages (Rogers et al, 1999), breast cancer cells (Senaratne et al, 2000), multiple myeloma (Shipman et al, 1998; Shipman et al, 2000) or colon adenocarcinoma (Suri et al, 2001), and this effect appears to be related to the ability of nitrogen-containing bisphosphonates to inhibit the mevalonate pathway and thereby the prenylation of signalling proteins such as the small GTPases. Both pamidronate and zoledronate have been shown to inhibit specifically the enzyme FPP synthase (Bergstrom et al, 2000; Dunford et al, 2001), and the depletion of cellular pools of GGPP and FPP has been demonstrated to be a key mechanism in the induction of apoptosis and reduction of cell viability by nitrogen-containing bisphosphonates (Jagdev et al, 2001; Reszka et al, 2001). Consistently, in melanoma, we previously demonstrated that the apoptotic effect of pamidronate could be reversed by supplementation of cells with GGPP and FPP precursors, which circumvents the bisphosphonate-induced inhibition of isoprenoid synthesis (Riebeling et al, 2002). However, the exact mechanisms by which inactivation of small GTPases leads to the induction of apoptosis have not been elucidated to date.
In melanoma, the apoptotic action of nitrogen-containing bisphosphonates involves caspase-3 activation and, as shown previously (Riebeling et al, 2002), is not influenced by bcl-2 overexpression. In contrast to nitrogen-containing bisphosphonates, clodronate does not inhibit isoprenoid synthesis (Reszka et al, 1999). Rather, it has been reported that nonaminobisphosphonates can induce cell death by metabolism to toxic nonhydrolysable analogues of ATP (Rogers et al, 1996) and consequently by disrupting the energy-requiring processes of the cells. In the present study, the lack of effect of clodronate on cell survival in vitro (even at high concentrations) may suggest that this mechanism has no functional significance in melanoma cell lines. Although the two nitrogen-containing bisphosphonates differed in their capacity to induce apoptosis and to inhibit cell proliferation, a prominent effect of both was the alteration of the progression of melanoma cells through the phases of the cell cycle. Measurement of the cellular DNA content by FACS analysis revealed that both zoledronate and pamidronate caused accumulation of cells in the S phase of the cycle in the two melanoma cell lines studied, with a corresponding decrease in the number of cells in G1 and G2/M phases (Figure 5). This effect was dose dependent and stronger with zoledronate, which induced significant alterations of cell cycle progression starting at a concentration of 30 µM; in comparison, pamidronate had a significant effect on cell cycle progression only at a concentration of 100 µM. The mechanism of these changes in the cell cycle is not clear. A similar delay in S-phase progression has been documented in myeloma (Aparicio et al, 1998; Iguchi et al, 2003), prostate cancer cells (Lee et al, 2001) or keratinocytes (Reszka et al, 2001) treated with nitrogen-containing bisphosphonates, and this could be related to the inhibition of prenylation of small GTPases. Both Ras and Rho proteins are known as important regulators of the cell cycle (Hirai et al, 1997; Olson et al, 1998; Pruitt and Der, 2001; Welsh et al, 2001), and in consequence their inactivation via inhibition of prenylation by bisphosphonates could explain the alterations of the cell cycle observed after treatment with these compounds. It would also be consistent with the observation that other inhibitors of the mevalonate pathway, such as statins, induce comparable cell cycle changes (Vogt et al, 1997; Naderi et al, 1999). Very recently it was shown, for myeloma cells, that the S-phase cell cycle arrest induced by nitrogen-containing bisphosphonates is linked to activation of the mitogen-activated protein kinase (MAPK) cascade (Iguchi et al, 2003). Consistent with the lack of effect of clodronate on cell proliferation and apoptosis, this compound also failed to significantly alter cell cycle progression, even at higher concentrations. Zoledronate is considered the most potent antiresorptive agent currently available, being about 100 times more effective than pamidronate in inhibiting bone resorption in vivo (Widler et al, 2002). However, in our study pamidronate was more efficient than zoledronate in inducing apoptosis in both melanoma cell lines studied. In contrast, zoledronate proved to be more potent in altering cell cycle progression and in inhibiting cell proliferation. These results suggest that zoledronate affects mostly cell growth, while pamidronate acts rather by inducing cell death.
Similar differences in the actions of the two agents have also been reported in some studies in breast (Boissier et al, 2000) or prostate cancer cells (Lee et al, 2001). Pamidronate has also been shown to be more effective in inducing cell death than other bisphosphonates with higher antiresorptive potency (Senaratne et al, 2000; Benford et al, 2001; Suri et al, 2001). The lack of correlation between the antiresorptive potency of bisphosphonates in vivo and their antitumoral effect in vitro appears to depend on both the compound and the cell type, and may be explained, for example, by selective inhibition by bisphosphonates of additional enzymes of the mevalonate pathway (van Beek et al, 1999; Rogers et al, 2000; Thompson et al, 2002) or by possible additional mechanisms of action of bisphosphonates, for example, MAPK signalling (Iguchi et al, 2003). One concern raised in most studies on bisphosphonates is the dose at which the antitumoral effect is achieved. In our study, inhibition of proliferation, changes in cell cycle progression and induction of apoptosis in cultured melanoma cells by the two aminobisphosphonates were observed at concentrations ranging from 10 to 100 µM. Similar concentrations of aminobisphosphonates have been reported to exert antiproliferative and/or proapoptotic effects in other types of tumour cells, for example, myeloma (Shipman et al, 1997), breast cancer (Senaratne et al, 2000) or prostate cancer (Lee et al, 2001). It is, however, not clear whether such high concentrations can also be achieved in vivo, at least with the current dosage and treatment regimens. As bisphosphonates are rapidly removed from the circulation following administration and accumulate in the bone, primary tumour cells or visceral metastases would most likely be exposed to only much lower doses of bisphosphonates, probably in the range of 1–5 µM (Daley-Yates et al, 1991; Berenson et al, 1997). Different treatment regimens or new bisphosphonate analogues with less affinity for bone may be necessary in the future to solve this problem. In summary, we demonstrate that nitrogen-containing bisphosphonates are effective antitumour agents in melanoma in vitro, as they inhibit proliferation, cause an S-phase delay in cell cycle progression and induce apoptosis in melanoma cells. These encouraging results need to be confirmed by in vivo studies, and further investigation is required to clarify the exact mechanism of the antineoplastic action of bisphosphonates, as well as their most effective structure and dosing regimen, in order to establish the possible benefit of these compounds in the adjuvant treatment of melanoma.
Introduction of advanced laparoscopy for peritoneal dialysis catheter placement and the outcome in a University Hospital Background Peritoneal dialysis (PD) catheters can be obstructed by omental wrapping or migration, leading to catheter malfunction. Multiple catheter placement techniques have been described. Advanced laparoscopy with fixation of the catheter and omentum has been reported to improve functional outcome compared to basic laparoscopy without fixation. This feasibility study describes surgical technique, complications, and comparison of the functional outcome of advanced versus basic laparoscopic catheter placement. Methods Between July 2016 and April 2019, the advanced laparoscopy technique was applied in all eligible patients. Two experienced surgeons placed the catheters in a standardized procedure. Peri-operative complications and functional outcome of the catheter were scored. Results were compared to a historical cohort retrieved from our RCT performed earlier using basic laparoscopy. Findings The basic laparoscopic group (BLG) consisted of 46 patients and the advanced laparoscopic group (ALG) of 32. Complication rate in both groups was similar and low with 7% in the BLG and 6% in the ALG (p = 1.0). There was a trend toward better functional catheter outcome in the ALG (88%) compared to the BLG (70%) (p = 0.1). Part of the catheter failures in the ALG could be related to the learning curve. After revision surgery, 94% of patients in the ALG had a functional catheter. These findings lead to the set-up of a multi-center randomized-controlled trial, currently running, comparing basic to advanced laparoscopic techniques. Introduction Peritoneal dialysis (PD) requires insertion of a peritoneal dialysis catheter into the abdominal cavity. Functional outcome can be defined as the uncomplicated inflow and outflow of dialysate, and is the primary outcome measure for a PD catheter. Functional outcome can be endangered by complications during or after catheter placement. Postoperative complications can be obstruction of flow through the catheter, catheter migration, fluid leaks, erosion of catheter into viscera, and sclerosing or bacterial peritonitis [1]. Various causes for catheter obstruction are identified such as omental wrapping, adhesions, and catheter migration [2,3]. Several surgical techniques including open, blind percutaneous, peritoneoscopic, and laparoscopic PD catheter placement have been described [4][5][6]. These techniques have been developed over the years to decrease complications. Laparoscopy compared to open procedure has several advantages that are associated with improvement of functional outcome by reducing catheter-related complications. An advantage of laparoscopy includes direct visualization during surgery. In addition, during advanced laparoscopic surgery, additional procedures as adhesiolysis, catheter fixation, and omentopexy can be performed [7,8]. 3 Fixation of the omentum to the abdominal wall of the upper abdomen (omentopexy) during laparoscopic catheter placement might prevent omental wrapping, thereby preventing catheter dysfunction. Omentopexy was described by Ögünc et al. and several studies about omentopexy and prevention of catheter dysfunction have been performed [9][10][11]. To prevent catheter migration, favorable outcomes after catheter fixation to the lower abdominal wall have been described [12,13]. 
The randomized-controlled trial we conducted in our center in 2010-2016 demonstrated equal clinical success rates between open and laparoscopic catheter placement. However, no advanced laparoscopic techniques were applied in this trial [6]. Because of the disappointing results of laparoscopy and the reported advantages of advanced laparoscopic techniques in catheter placement, we decided to conduct a feasibility study in our center adding catheter fixation to the abdominal wall and omentopexy to our standard laparoscopic procedure including rectus sheath tunneling. Surgical technique, complications, and comparison of the mechanical outcome of the new techniques versus basic laparoscopic placement are described. Patient selection We included all consecutive patients with end-stage renal disease who were eligible for a peritoneal dialysis (PD) catheter after finishing our randomized-controlled trial (RCT) in March 2016 [6]. Patients with a life expectancy of less than 1 year and patients in need for abdominal cavity surgery not related to catheter insertion were excluded. Patient history was taken and physical examination was performed. Previous abdominal surgery was not considered an exclusion criterion. Abdominal wall or incisional hernia was corrected with a mesh during PD catheter placement. Patients were informed about our previous RCT and its outcome and the presumed benefit of fixating the catheter and the greater omentum. The possible complications were explained, as well. After informed consent was obtained, patients were referred to the anesthesiologist for further screening before they were scheduled for surgery. Patients could participate only once in the study. Surgical procedure All patients were operated on by one or both of two surgeons (AP and JvL). Both are experienced laparoscopic PD surgeons and have performed over 65 laparoscopic PD catheter placements before introducing the advanced laparoscopic techniques. All patients had the desired exit site marked by the PD nurse pre-operatively. The PD catheter was always a two cuff coiled-tip catheter. The exit site always faced downwards. Adhesiolysis was only performed if adhesions prevented the planned route of the catheter. All catheters were tested at the end of the procedure by installation of 1.25 L of Icodextrin 4% and aspiration of 200 ml hereafter. The rest of the solution was left in place to prevent adhesions [2]. If an abdominal wall hernia was present, it was laparoscopically corrected with a composite mesh with > 3 cm overlap. Hereafter, the catheter was placed in the preferred position in the lower abdomen. In the first patients, only one of both fixation techniques was used. Later on, with more experience, both procedures were performed in one operation. The catheter was fixated with a non-absorbable suture which was usually a Prolene suture. The omental fixation was performed with non-absorbable (Prolene or Mersilene) or absorbable sutures (Vicryl). The latter was thought to be sufficient because of the sterile inflammation process that will take place and will create a tight adhesion from the omentum to the abdominal wall. In case the omentum was so small that it could not reach the desired position for fixation or could not reach the position of the catheter in the lower abdomen, it was not fixated and left in place. Also, if the omentum was already fixated by adhesions from previous surgery, it was untouched. We did not consider epiploic appendectomy or colopexy. 
A prophylactic antibiotic (1 g of a cephalosporin) was administered pre-operatively. After general anesthesia and sterile exposure of the abdomen, the desired positions of the subcutaneous track and cuffs were marked on the abdominal wall. Then, a 10 mm Hasson trocar was introduced in the right hemi-abdomen under direct vision. A pneumoperitoneum of 12-14 mmHg was created. Next, a 10 mm 30-degree laparoscope was introduced for inspection. Hereafter, a 5 mm working trocar was introduced in the right hemi-abdomen for introducing graspers and needle holders. If a hernia was present, the 5 mm working trocar was exchanged for a 12 mm working trocar to introduce a composite mesh with two to four pre-placed sutures for proper placement. The mesh was fixated with absorbable tackers after the pre-placed sutures had been positioned appropriately with an endoclose device. A 7 mm trocar was introduced at the desired position of the subcutaneous catheter curve. The trocar tip was placed at the transversalis fascia under vision, then tunneled through the rectus sheath for 4-6 cm, and finally introduced into the abdominal cavity at the position where the deep cuff should be placed. Patients were placed in the Trendelenburg position before the catheter was introduced through the 7 mm trocar using a stylet. Under direct vision, the catheter was placed at the desired position in the abdominal cavity. The deep cuff was introduced into the abdominal cavity. The 7 mm trocar was then removed and the deep cuff of the catheter was retrieved into the preperitoneal space. The proximal cuff was placed in the subcutaneous layer more than 2 cm from the exit site, which was usually also on the left side of the abdominal wall. The catheter was fixated to the anterior abdominal wall in the midline with non-absorbable sutures using the endoclose (Fig. 1). One end of a Prolene thread was introduced into the abdominal cavity with an endoclose device through a stab skin incision just above the upper margin of the urinary bladder. The thread was looped around the curved catheter tip twice with a grasper. The endoclose was then re-introduced into the peritoneal cavity through the same skin incision but via a second abdominal wall puncture, and the thread was retracted. Then, both ends of the thread were knotted with 1-2 mm of free space between catheter and abdominal wall, so that the knot rests in the subcutaneous tissue on the rectus fascia. The omentum was fixated to the anterior abdominal wall in the epigastric area (Fig. 2). For the omental fixation, the curved needle of a Mersilene thread was manually straightened outside the body, introduced through the abdominal wall, and grasped with a needle holder. An additional 5 mm working trocar was introduced for a grasper to lift the omentum so that a suture could be placed. After fixation, the needle and thread were removed by an endoclose. In some cases, the endoclose was pushed directly through the abdominal wall for this purpose. Outcome measures After the procedure, all patients had an abdominal X-ray to confirm the position of the PD catheter. No re-interventions were planned even if the position was not optimal (i.e., not in the lower abdomen). Two weeks after insertion, the catheter was tested in the outpatient clinic and training was started. As per protocol, catheters were tested with low volumes of dialysate (250 ml). In case of a concurrent hernia repair, testing and training were started after 4 weeks. We evaluated the mechanical outcome of the catheter when first used.
Technical success was defined as unobstructed inflow and outflow of dialysate without the need for revision surgery. Dialysate leakage from wounds, infections of the tunnel track, exit site, or the catheter itself during hospital stay and at outpatient clinic follow-up were scored. Total catheter survival in time was scored. Catheter survival was scored as all mechanical functional catheters without the need for removal due to peritonitis, abdominal surgery, or inadequate peritoneal dialysis. Patients were censored for death (with mechanical functioning PD catheter), kidney transplant, and for patient preferences (switch to hemodialysis with functional PD catheter). Patient demographics and mechanical outcome of the catheter of this study group were compared with data and outcome in our historical cohort of patients included in the randomized -controlled trial that compared open to basic laparoscopic PD catheter placement with rectus sheath tunneling. Patients from the historical RCT will be called the basic laparoscopic group (BLG) and those from the study group will be called the advanced laparoscopic group (ALG). Statistical analysis Continuous variables are expressed as means with standard deviation (SD). Differences were calculated using the Mann-Whitney U test or Fisher's exact test when appropriate. Survival analysis was performed using the Kaplan-Meier method. Two-sided testing was performed and a p value < 0.05 was considered statistically significant. Statistical analysis was performed using SPSS version 24 (IBM Corporation, Armonk, NY, USA). Results From July 2016 up to October 2019, we performed advanced laparoscopy for PD catheter placement. In the early experience, only catheter fixation or omental fixation was used, and after six cases, with more experience, we started to use both techniques in one procedure. In this period, we treated 32 patients in the ALG. In Table 1, the patient characteristics are presented and compared to the BLG data. There were no significant differences between both groups. In Table 2, the operative and post-operative characteristics are depicted. There is an expected statistically significant difference for operation time in favor of the BLG. Fixating the omentum and catheter takes an extra 30 min (mean operating time ± SD in minutes for the BLG was 38.3 ± 15.3 compared to 69.2 ± 26.9 for the ALG; p < 0.001). The number of hernia repairs and adhesiolyses was similar in both groups. The mean hospital stay in both groups was approximately 3 days; however, most patients stayed 1 day or less in both groups (28 patients in the BLG and 20 patients ALG). The number of post-operative complications is low with less than 10% in both groups and mostly minor complications. There were no deaths in both groups. Regarding the primary outcome of mechanical function of the catheters, there was no statistically significant difference between both groups with a technical success rate of 70% (32 patients) in the BLG compared to 88% (28 patients) in the ALG (p = 0.1) ( Table 3). If we focus on the reasons for failure in the ALG (Table 4), two catheters failed because of a technical problem that was resolved during re-operation. In one of these patients, the subcutaneous tunneled part caused an obstruction due to kinking not observed in supine position during placement. In the other patient, the knot in the suture for fixating the catheter failed and the catheter migrated to the upper abdomen. 
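As a worked example of the group comparison described in the statistical analysis (Fisher's exact test for categorical outcomes), the sketch below reproduces the reported technical success comparison (32 of 46 in the BLG vs 28 of 32 in the ALG) with scipy instead of SPSS; the two-sided p-value comes out at about 0.1, in line with the reported result.

```python
# Sketch of the reported between-group comparison of technical success using
# Fisher's exact test, with the counts stated in the text above.
from scipy.stats import fisher_exact

#                 success  failure
blg = [32, 46 - 32]   # basic laparoscopic group
alg = [28, 32 - 28]   # advanced laparoscopic group

odds_ratio, p_value = fisher_exact([blg, alg], alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")   # p ≈ 0.1, as reported
```

The same scipy module also provides the Mann-Whitney U test used for the non-parametric continuous comparisons; the catheter survival curves would additionally require time-to-event data, which are not reproduced here.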
Both patients were re-operated: the curve was corrected in one and the catheter was fixated in the other, and both patients had a well-functioning catheter afterward. The other two catheter failures can be attributed to failure of the advanced techniques to prevent catheter obstruction. In one patient, only omental fixation was performed during primary surgery. After revision surgery with fixation of the catheter, it still malfunctioned because of small bowel wrapping around the catheter, as seen on diagnostic laparoscopy during a third procedure. This patient switched to hemodialysis. In the other patient, the advanced technique was well conducted, but the curled portion of the catheter was covered by the peritoneum of the abdominal wall. We corrected the position of the suture fixating the catheter to a more cranial site at the straight segment of the catheter, and thereafter the catheter had good mechanical function. In our former publication, the BLG follow-up is described in detail [6]. To summarize these findings, of 14 technical failures, 8 patients had a re-operation and 6 of these led to technical success. Figure 3 shows the PD catheter survival curves over time for both groups. As demonstrated, survival for the advanced laparoscopy group is better compared with the basic laparoscopic group (p = 0.022). Beyond the initial technical result, the operative technique is not expected to influence the survival curve, since the reasons for later dropout are not related to the technique itself, e.g., peritonitis, non-PD-related operations necessitating catheter removal (such as for diverticulitis), and failure of adequate dialysis because of thickening of the peritoneum. Discussion After our RCT in 2017, we were disappointed by the functional outcome of PD catheter placement in both the open and the laparoscopic group. After scrutinizing the literature, we concluded that omental removal might improve our outcome, since omental wrapping causes most catheter failures. In our previous publication, we also showed removal of the omentum to be successful; however, it is a challenging procedure [3]. Fixation of the omentum has been shown to be a safe and feasible alternative to prevent omental wrapping [9][10][11]. After several successful procedures, we confirmed these findings. Re-examining our RCT, we concluded that the second most important reason for failure was migration of the catheter, and therefore we started to use the abdominal wall fixation technique as described by others, with reported success rates of 94 and 100% [7,10]. These publications suggest that the advanced laparoscopic techniques lead to better outcomes, but no randomized studies are available. In these publications, there could be selection bias based on improved experience or on selection of patients for PD catheter placement. In our series, all consecutive patients were included, regardless of previous abdominal history, and therefore this reflects a "real-world" PD patient population. The outcomes of both groups, basic versus advanced laparoscopy, are similar, and there is no statistically significant difference (p = 0.1) in this small study. However, there is a trend toward better functional catheter outcome for the advanced laparoscopic placement technique. We feel that the two described technical failures, one kinking of the subcutaneous part of the catheter and one catheter migration because of disconnection of the abdominal wall suture, are due to our learning curve, and results can further improve with more experience.
On the other hand, our learning curve should be taken into account when explaining the possibly better outcome of the advanced technique. Since the end of our RCT, we have gained more experience. It is possible that the improved results are explained by our increased experience and cannot be attributed to the new techniques. To overcome the problems mentioned above, we will conduct a new multi-center RCT of basic laparoscopic placement versus advanced laparoscopic placement, which includes fixation of the catheter and the omentum. We started including patients in our center in January 2020. However, because of the COVID-19 pandemic, inclusion has halted, and inclusion in other centers is also problematic. This study demonstrates another important point. A history of former placement of a PD catheter or a history of abdominal surgery, even with a midline laparotomy, does not appear to be a contraindication for repeat PD catheter placement. The success rates of 88% and 80% are consistent with our former RCT and acceptable for a new attempt to have patients on peritoneal dialysis. Conclusion This study demonstrated that there might be an advantage in functional outcome for placement of a laparoscopic peritoneal dialysis catheter with fixation of the omentum and the catheter itself. It also demonstrates that there is an acceptable functional outcome for patients who need a redo PD catheter or have a history of abdominal surgery. A new multi-center RCT will hopefully provide more definitive answers. Funding Not applicable. Availability of data and materials Historical data already published are referred to. New data from the feasibility study are not in a repository. Code availability Not applicable. Conflict of interest Not applicable. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Liposomal doxorubicin attenuates cardiotoxicity via induction of interferon-related DNA damage resistance. AIMS The clinical application of doxorubicin is severely compromised by its cardiotoxic effects, which limit the therapeutic index and the cumulative dose. Liposomal encapsulation of doxorubicin (Myocet®) provides a certain protective effect against cardiotoxicity by reducing myocardial drug accumulation. We aimed to evaluate transcriptomic responses to anthracyclines with different cardiotoxicity profiles in a translational large animal model for identifying potential alleviation strategies. METHODS AND RESULTS We treated domestic pigs with either doxorubicin, epirubicin, or liposomal doxorubicin and compared the cardiac, laboratory and hemodynamic effects with saline-treated animals. Cardiotoxicity was encountered in all groups, reflected by an increase of the plasma markers NT-proBNP and Troponin I and an impact on body weight. High morbidity of epirubicin-treated animals impeded further evaluation. Cardiac magnetic resonance imaging with gadolinium late enhancement and transthoracic echocardiography showed a stronger reduction of left and right ventricular systolic function and stronger myocardial fibrosis in doxorubicin-treated animals than in those treated with the liposomal formulation. Gene expression profiles of the left and right ventricles were analysed by RNA-sequencing and validated by qPCR. Interferon-stimulated genes, linked to DNA damage repair and cell survival, were downregulated by doxorubicin, but upregulated by liposomal doxorubicin in both the left and right ventricle. The expression of the cardioprotective translocator protein TSPO was inhibited by doxorubicin, but not by its liposomal formulation. Cardiac fibrosis with activation of collagen was found in all treatment groups. CONCLUSIONS All anthracycline derivatives resulted in transcriptional activation of collagen synthesis and processing. Liposomal packaging of doxorubicin induced interferon-stimulated genes in association with lower cardiotoxicity, which is of high clinical importance in anticancer treatment. Our study identified potential mechanisms for rational development of strategies to mitigate anthracycline-induced cardiomyopathy. Introduction The 5- to 10-year survival rates of patients suffering from certain tumours, such as breast, haematologic, or childhood cancers, exceed 80%. This is mainly due to refined therapy by chemotherapeutics, such as anthracyclines, immunotherapies, other specific treatments, and/or targeted tumour excision or irradiation. 1 Unfortunately, 10-75% of cancer survivors suffer from chronic health issues in later life, including heart failure, vascular or valve diseases, and other cardiac complications, caused by the toxicity of many chemotherapeutics. 2 Anthracyclines are one of the most frequently used anticancer drugs. Recent data from a study in adult cancer patients showed that the majority of cardiovascular toxicity, defined as a decrease of left ventricular (LV) ejection fraction (EF), occurred within the first year after the completion of doxorubicin (DOX) chemotherapy. 3 The development of cardiotoxicity correlates with the cumulative anthracycline dose.
Clinical studies and the ESC guidelines suggest that early detection and eventual treatment of heart failure of patients exposed to anticancer agents might allow for a partial or complete recovery of LV dysfunction and positively impacts cardiac outcome. 4,5 The treatment usually includes an angiotensin-converting enzyme-inhibitor and further adjuvant therapies against heart insufficiency. An inverse relationship has been found between the time to cardiac treatment and clinical outcome in cancer patients. 4 Current means to mitigate anthracycline-induced cardiotoxicity are limited. 5 On the molecular level, the cytostatic effect of anthracyclines is attributed to DNA intercalation, DNA binding and cross-linking, inhibition of topoisomerase II, and induction of apoptosis. Cardiomyocyte damage is caused by oxidative stress, generation of reactive oxygen species (ROS), inhibition of nucleic acid synthesis, and decreased expression of contractile proteins. 6 In small animal models, cardiotoxicity was mediated by topoisomerase-IIb (Top2b). 7 Cardiomyocyte-specific depletion of Top2b protected mice from DOXinduced heart failure. In pigs, transcriptional activation of several matrix metalloproteinases was found after DOX administration. 8 It has been shown that DOX triggers several signalling pathways, such as the MAPK, p53, Jak-STAT, Wnt, MAPK/p53, or PPAR pathways, which might all be involved in DOX-associated cardiomyopathy. 9,10 Proposed strategies for mitigation of cardiotoxicity include iron chelation, 11 VEGF-B gene therapy, 12 stimulation of oxidative phosphorylation, 9 modulation of DNA damage and oxidative stress, 10 or targeting an RNA-binding protein. 13 However, a comprehensive high throughput transcriptomic screening of genes or proteins in a translational large animal model of cardiotoxicity had not yet been performed. In relation to their therapeutic dose, the cardiotoxicity risks and anticancer outcomes of DOX and its stereoisomer epirubicin (EPI) are similar, because epimerization reduces not only toxicity, but also therapeutic effects. 14 The use of liposomal DOX formulations has reduced cardiotoxicity because of lower myocardial drug concentrations. 14 Liposomes are designed to avoid direct contact of the cytotoxic agent with the vasculature, and influence biodistribution based on leakiness of the endothelium of various organs. 15 Both unPEGylated (Myocet V R /MYO) and PEGylated liposomal formulations of DOX (Caelyx V R /Doxil V R ) are in clinical use. The aim of our study was to investigate molecular mechanisms and impact on gene expression profile of DOX, the liposomal formulation of DOX, Myocet V R (MYO), and EPI in a large animal model, to facilitate pharmacological research for cardioprotection during anticancer treatment. Pigs are excellent translational models for investigating cardiac adverse events. The cardiovascular anatomy, physiology, and pathology of pigs compares favourably to that of humans, with the possibility of cardiac imaging with human clinical cameras. Instead of investigation of a priori selected transcripts or proteins, we aimed to perform an unbiased approach of global transcriptomic profiling for comprehensive insight into signal transduction pathways and for discovering multiple potential molecular targets 16 and confirmed our findings in in vitro cell culture experiments. Animal study design The investigation conforms to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 
85-23, revised 1985). Animal experiments were performed at the Institute of Diagnostic Imaging and Radiation Oncology, University of Kaposvar, Hungary and were approved by the ethics committee (EK: 1/2013-KEATK DOI, MUW §27 Project FA714B0518). Twenty-three domestic pigs (Sus scrofa, female Large Whites, 30 ± 2 kg, 3 months old) were randomized into four groups receiving either DOX (group DOX, n = 6), EPI (group EPI, n = 6), Myocet (group MYO, n = 6), or physiologic saline (group CO, n = 5) in doses equivalent to human treatment regimens (60 mg/m² body surface area for DOX and MYO, 100 mg/m² for EPI) as a single 1-h intravenous infusion every 21 days (at Days 1, 22, and 43). Liposomal DOX was prepared directly before the injection according to the manufacturer's instructions. DOX and EPI were dissolved in saline solutions. Pigs were sedated with 12 mg/kg ketamine hydrochloride, 1.0 mg/kg xylazine, and 0.04 mg/kg atropine. An intravenous infusion of the drug or saline was administered via puncture of the femoral vein. Instead of the human equivalent of six treatment cycles, the experiment was terminated early after concluding three cycles, due to the poor health condition of the surviving animals. After inspection of the animals, higher mortality was expected if treatment were continued; therefore, cardiac MRI was performed, followed by sacrifice of the animals at Day 60. All available samples were included in the respective analyses. Transthoracic echocardiography (TTE) was performed after each injection to evaluate the systolic and diastolic functions. Blood samples were collected at baseline, after the 1st and before the 2nd and 3rd treatments, and before termination. Before the 1st treatment (Day 1) and 2 weeks after the 3rd infusion (Day 58 ± 1), cardiac magnetic resonance imaging (cMRI) with late enhancement (LE) was performed to assess the right ventricular (RV) and LV systolic cardiac function and fibrosis. For euthanasia, during continuous deep anaesthesia (1.5-2.5 vol% isoflurane, 1.6-1.8 vol% O2, and 0.5 vol% N2O), an additional dose of intravenous heparin (10 000 U) and 10 mL of intravenous saturated potassium chloride (10%) were administered. Hearts were explanted and samples from the left and right ventricles were excised. cMRI image acquisition and volumetric MRI measurements We performed cMRI on a 1.5 T Siemens Avanto Syngo B17 scanner (Erlangen, Germany) with a phased-array coil and a vector ECG system. Functional scans were acquired using a retrospective ECG-gated (HR: 80-100 beats/minute), steady-state free precession (SSFP - TRUFISP sequence) technique in short-axis and long-axis views using 1.2 ms echo time (TE), 40 ms repetition time (TR), 25 phases, 50° flip angle, 360 mm field-of-view, 8 mm slice thickness, and a 256 × 256 image matrix. For quantitative evaluation of myocardial fibrosis, LE diastolic phase images were obtained after injection of 0.05 mmol/kg contrast medium using an inversion recovery prepared, gradient-echo MRI sequence. Short-axis and long-axis images were obtained 10-15 minutes after gadolinium injection. Volumetric MRI measurements and visualizations were performed using the software Segment version 1.9 (Medviso AB, Lund, Sweden). 17 We performed semi-automatic segmentation of the LV endocardial and epicardial borders, extending from the most basal short-axis view slice in which the myocardium could be seen in 360 degrees to the most apical slice.
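Doses were defined per body surface area (60 mg/m² for DOX and MYO, 100 mg/m² for EPI). The sketch below shows how such a dose could be converted into an absolute amount for a ~30 kg pig; the allometric body-surface-area formula and its coefficients are assumptions for illustration only, as the study does not state how BSA was estimated.

```python
# Illustrative conversion of a body-surface-area dose into an absolute dose.
# The Meeh-type BSA coefficients below are assumed, not taken from the study.
def pig_bsa_m2(weight_kg: float, k: float = 0.0734, exponent: float = 0.656) -> float:
    """Allometric body-surface-area estimate in m^2 (assumed coefficients)."""
    return k * weight_kg ** exponent

def absolute_dose_mg(dose_per_m2: float, weight_kg: float) -> float:
    """Absolute dose (mg) for a given per-m^2 dose and body weight."""
    return dose_per_m2 * pig_bsa_m2(weight_kg)

for drug, dose in [("DOX/MYO", 60), ("EPI", 100)]:
    print(f"{drug}: ~{absolute_dose_mg(dose, 30):.0f} mg per 30 kg animal")
```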
Using 3D volumetry, end-diastolic volume (EDV), end-systolic volume (ESV), and global LV EF were automatically calculated on short-axis cine MRI images. EDV and ESV were related to the body weight (EDVi and ESVi). Transthoracic echocardiography Routine TTE was performed at baseline and during anaesthesia for each infusion treatment. We measured the diameter of the left ventricle from the M-mode of the long-axis parasternal view to assess the global LV systolic function. Pulsed-wave Doppler indices of diastolic function were recorded by measuring the mitral E and A waves and the E/e′ ratio using the 4-chamber view. RV systolic function was assessed by measurement of the tricuspid annular plane systolic excursion (TAPSE). Histology and immunohistochemistry LV and RV myocardial samples were stored in formalin or RNAlater, or were fresh frozen. Myocardial samples were stained for fibrosis, Ki67, and caspase activity (Supplementary material online). Western blot To assess cleaved (active) caspase 3 activity in the LV of DOX and MYO animals, a quantitative western blot of 5 DOX and 6 MYO LV samples was performed in duplicate. Forty micrograms of protein was loaded into each well of a NuPAGE™ 10% Bis-Tris gel. After electrophoresis, the proteins were transferred onto an Immun-Blot® PVDF membrane (0.45 µm pore size, Bio-Rad). The membrane was cut in half at the 40 kDa marker to stain cleaved caspase 3 (17 kDa) and beta-tubulin (50 kDa) separately. The membranes were stained using a cleaved caspase 3 antibody (ab13847, Abcam) and a beta-tubulin antibody (NB600-936, Novus Biologicals) as the loading control. Densitometric analysis was carried out using ImageJ. Transcriptomic profiling Detailed methods are described in the Supplementary material online. Briefly, after RNA isolation and enrichment of coding genes by poly(A) selection, libraries were prepared and analysed on an Illumina NGS system with paired-end sequencing. Results were mapped to the pig transcriptome and analysed for statistically significant changes of individual genes and pathways. For investigation of biological relevance, groups were compared and significantly deregulated genes were functionally clustered. DOX effects on isolated human cardiomyocytes Human cardiac myocytes isolated from adult left ventricles (PromoCell, Heidelberg, Germany) were cultured according to the manufacturer's instructions. For gene expression analyses, cells were treated with DOX (6.25 and 1.56 nM) in 48-well plates (2 × 10⁴ cells per well) for 48 h and lysed in QIAzol (Qiagen). RNA was isolated with an RNeasy micro kit, followed by cDNA synthesis with a QuantiTect RT kit and qPCR with SYBR Green (all Qiagen), according to the manufacturer's instructions. For cytotoxicity testing, cells were seeded into 96-well plates (10⁴ cells per well) and incubated for 24 h in standard cell culture medium. For induction of interferon-inducible genes, poly(I:C) was applied at a concentration of 1 mg/mL, and cells were further incubated for 4 h. DOX was added to pretreated and control cells at concentrations of 10 µM, 1 µM, 400 nM, 100 nM, 25 nM, 6.25 nM and 0.1 nM, and cells were incubated for 48 h. Cytotoxicity was assessed with an EZ4U cell proliferation and cytotoxicity kit (Biomedica, Vienna, Austria), based on reduction to a formazan dye, according to the manufacturer's instructions, and absorption was read after 4 h.
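The volumetric cMRI endpoints reduce to simple arithmetic: global EF is derived from EDV and ESV, and EDVi/ESVi are the volumes indexed to body weight. A minimal sketch with illustrative numbers (not the study's measurements):

```python
# Minimal sketch of the volumetric derivations: EF from EDV/ESV, and
# body-weight-indexed volumes (EDVi, ESVi). Input values are illustrative.
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) = stroke volume / end-diastolic volume * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

def indexed_volume(volume_ml: float, body_weight_kg: float) -> float:
    """Volume indexed to body weight (mL/kg), as used for EDVi and ESVi."""
    return volume_ml / body_weight_kg

edv, esv, weight = 62.0, 23.0, 30.0      # illustrative values for a ~30 kg pig
print(f"LV EF = {ejection_fraction(edv, esv):.1f} %")
print(f"EDVi  = {indexed_volume(edv, weight):.2f} mL/kg")
print(f"ESVi  = {indexed_volume(esv, weight):.2f} mL/kg")
```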
A second cytotoxicity assay was performed for direct comparison of the effect of DOX and Myocet on cell viability, using the same methods and dosages as described above, without poly(I:C) stimulation. Statistics Differences between treatment and control groups were tested for normality with the Shapiro-Wilk test, and parametric data were evaluated for statistical significance using one-way ANOVA tests with Bonferroni post hoc corrections. Kaplan-Meier survival analysis was performed for all groups. A difference was considered statistically significant at P < 0.05. Data analyses and interpretations were performed by an experienced observer who was blinded to the randomization and to the results. Statistical analysis was performed using SPSS 18.0 (SPSS Inc., USA) software, and graphs were prepared in SigmaPlot 13.0 (Systat Software Inc., USA). Sample sizes for all figures are n = 6 for DOX and MYO, and n = 5 for controls, unless otherwise indicated. For gene array analysis, to test for all comparisons (i.e. differences between regions of interest: LV and RV in DOX vs. CO and MYO vs. CO), a linear model for each gene was fitted and the estimated coefficients and standard errors for these contrasts were computed. Study design and survival Animals of the DOX and MYO groups showed better survival (Figure 1A) compared with the EPI group. The two surviving EPI animals had obvious symptoms of cardiotoxicity (highly elevated troponin I and NT-proBNP with low LV EF). Due to the insufficient number of surviving animals, the EPI group was excluded from further analyses. The reasons for premature death were leucopenia and thrombocytopenia (one DOX and two EPI pigs), renal failure (one EPI and one MYO pig), and haemorrhagic perimyocarditis in one EPI animal. Necropsy revealed haemorrhage in the right ventricle and haemorrhagic pericarditis in one DOX and one EPI pig, and pericarditis (two) and haemorrhagic pericarditis (one) in three EPI pigs. All deceased animals in the cytostatic treatment groups had elevated Troponin I and NT-proBNP levels. Body weight, biomarkers, and myocardial fibrosis The weight of the MYO and control pigs was significantly higher from after the first treatment until the end of the experiment compared with that of the pigs in the DOX group, indicating better general health of these animals (Figure 1B). Both NT-proBNP and TnI increased during cytostatic treatment and were in the pathologic range in all animals of both groups (Figure 1C, D). The baseline values of RBC, WBC, platelets, haemoglobin, AST, and creatinine did not differ between the groups (Supplementary material online, Figure S1). A significant drop in the number of platelets was measured in both the DOX and MYO groups, while the liver and kidney function parameters increased significantly in both groups. Blood cell counts showed a mild decrease of red blood cells and platelets before the 3rd and 2nd treatment cycle, respectively. A mild but significant elevation of creatine kinase (CK) was detected in DOX pigs after the 2nd treatment (P < 0.05 in comparison with MYO pigs), which was absent after the 3rd treatment, showing the insensitivity of CK as a cardiotoxicity biomarker (Supplementary material online, Figure S1). Aspartate aminotransferase, as a marker for hepatotoxicity, also showed a slight increase starting after the first treatment but not during further treatment cycles. HE and MOVAT staining of the LV and RV samples showed disorientation of muscle fibres in the hearts of all three treatment groups (Supplementary material online, Figure S2).
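The statistical workflow described above (Shapiro-Wilk normality check, one-way ANOVA, Bonferroni-corrected pairwise comparisons) can be sketched in a few lines with scipy and statsmodels instead of SPSS; the group values below are invented for illustration and do not correspond to the study's measurements.

```python
# Sketch of the described statistics: normality check, one-way ANOVA,
# then pairwise t-tests with Bonferroni correction. Data are illustrative.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
groups = {
    "CO":  rng.normal(10.0, 1.0, 5),
    "DOX": rng.normal(14.0, 1.5, 6),
    "MYO": rng.normal(11.5, 1.2, 6),
}

# 1) Normality per group (Shapiro-Wilk)
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(f"Shapiro-Wilk {name}: p = {p:.2f}")

# 2) One-way ANOVA across groups
f, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.3f}")

# 3) Pairwise t-tests with Bonferroni post hoc correction
pairs = [("CO", "DOX"), ("CO", "MYO"), ("DOX", "MYO")]
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, method="bonferroni")
for (a, b), p_c, rej in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p_c:.3f}, significant = {rej}")
```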
Picrosirius red staining revealed smaller degrees of LV (6.8 ± 2.1% vs. 8.6 ± 1.4% vs. 11.0 ± 2.0%) and RV (6.5 ± 1.7% vs. 8.7 ± 1.0% vs. 9.1 ± 1.0%) fibrosis in the MYO pigs as compared with the DOX and EPI animals (n = 6, P < 0.05 MYO vs. DOX). Overall, the assessment of cardiac function and histology shows that MYO-treated animals developed less severe cardiotoxicity than DOX-treated pigs, although the markers TnI and NT-proBNP increased to a similar extent. Supplementary material online, Figure S2 shows representative histological images of the LV and RV in the DOX, MYO, and EPI groups with myocardial tissue fibrosis. Systolic and diastolic cardiac function Baseline LV and RV function was normal in all animals (mean LV EF 62.6 ± 6.2%, LV EDV 62 ± 9 mL, ESV 38 ± 8 mL). Two weeks after the 3rd treatment, cMRI+LE showed that animals in the MYO group had a higher RV EF (P < 0.05) than those in the DOX group, and we found a trend towards higher LV EF and smaller LV and RV end-diastolic volume index (EDVi) and end-systolic volume index (ESVi) (Figure 2D) in the MYO pigs. TTE showed impaired LV diastolic function in DOX, but not MYO animals, and RV systolic dysfunction (Figure 3). Transcriptomic profiling RV gene expression of the MYO pigs was slightly closer to controls, while moderate differences between DOX and MYO pigs were identified. Venn diagrams (Supplementary material online, Figure S4) reveal considerable overlap (i.e. genes up- or downregulated in both DOX and MYO groups), but also differences in response to either treatment. Cardioprotective mechanisms of liposomal DOX To assess the mechanistic differences between cardiotoxicity caused by liposomal and free DOX, we compared the respective gene signatures. Functional clusters of dysregulated genes include apoptosis regulation, proto-oncogenes and oncogenes, cellular homeostasis and DNA repair, collagen synthesis, metabolism, and cytoskeleton (Figure 4 and Supplementary material online, Table S1). Additional genes with significant alterations, which could not be allocated to these clusters, are listed in Supplementary material online, Table S2. A direct comparison of gene clusters (Figure 5 and Supplementary material online, Table S3) shows consistently stronger expression of interferon-responsive genes after MYO treatment. In relation to controls, most of these genes are downregulated by DOX, but upregulated by MYO (Supplementary material online, Figures S5 and S6). Interferon-stimulated genes (ISGs) are induced upon certain degrees of DNA damage and can mediate pro-survival signals. 18 Among those genes, we found altered expression between DOX and MYO of IFIT1 and 2, ISG15, OAS2, and the Poly(ADP-ribose) polymerases (PARP) 1, 9, and 14. PARP 1 and other family members are important mediators of DNA repair, and link cytostatic damage to autophagy and survival mechanisms. 19 Interferon and ISGs are biomarkers for immune response and cell survival after DNA damage, and the gene signature reported here appears to be instrumental in the attenuation of cardiotoxicity by Myocet. 20 Genes mediating direct responses to cell stress (heat shock proteins and transcription factors) were generally upregulated after DOX and downregulated after MYO treatment (Figure 4). Several genes of the HSP70 family exhibited this differential expression, while other HSPs, including HSPA4, HSPA5, HSPB1, and HSPD1, showed stronger upregulation in the DOX than in the MYO group.
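The Venn-diagram overlap mentioned above amounts to set operations on lists of significantly deregulated genes. A small sketch, using gene symbols from the text purely as placeholders rather than as actual results:

```python
# Sketch of the overlap summarized in the Venn diagrams: intersecting lists of
# significantly deregulated genes from two contrasts. Gene sets are placeholders.
dox_up = {"HSPA5", "COL1A1", "SERPINE1", "FGF9"}
myo_up = {"IFIT1", "ISG15", "OAS2", "COL1A1", "FGF9"}

shared_up   = dox_up & myo_up        # upregulated after both treatments
dox_only_up = dox_up - myo_up        # specific to free doxorubicin
myo_only_up = myo_up - dox_up        # specific to the liposomal formulation

print("shared:  ", sorted(shared_up))
print("DOX-only:", sorted(dox_only_up))
print("MYO-only:", sorted(myo_only_up))
```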
Heat shock protein 47/SERPINH1 is a molecular chaperone for collagen, providing a link to fibrosis, and is more strongly upregulated by DOX than by MYO treatment. Functional clustering identified the translocator protein (TSPO) as a central gene in the responsive network (Figure 5), which provides a connection between ISGs (strong expression in MYO) and regulation of transcription (strong expression in DOX). TSPO is abundantly expressed in many tissues and organs, including the heart. Among other functions, it is implicated in apoptosis and cell proliferation and has a protective role in the myocardium. 21 The cardiac expression of TSPO is altered by cellular stress; it is upregulated in acute stress, but downregulated after repeated injury. The higher expression in the MYO group presumably contributes to the mitigation of cardiotoxicity. To further examine the potential role of TSPO, we stained the protein in the LV of DOX- and MYO-treated animals (Figure 6). A typical mitochondrial punctate pattern and slightly higher intensity of TSPO staining was found in MYO animals. Compared with MYO, detection of activated caspase-3 was stronger in DOX tissue samples (Figure 6). This indicates higher cytotoxicity and activation of apoptotic pathways caused by higher drug concentrations in the myocardium. The quantitative western blot showed significantly higher cleaved caspase 3 activity in LV samples of DOX than of MYO animals (AUC, DOX vs. MYO: 0.65 ± 0.33 vs. 0.17 ± 0.11; P < 0.05) (Figure 6). Regulation of gene expression related to collagen production and fibrosis We also focused on the expression of genes previously associated with anthracycline toxicity (Supplementary material online, Figure S8). We detected a strong impact on genes implicated in collagen production and deposition after anthracycline treatments (Figure 4). 22 Among those, a profound increase in the transcription of collagens and of enzymes for collagen maturation, stabilization, and cross-linking was found, including components of the highly abundant cardiac collagens I and III. COL1A1 and COL1A2 were activated more strongly in the DOX than in the MYO group, while COL3A1 showed significant induction only in the RV of MYO. In histological samples, stronger collagen deposition was found in DOX samples (Figure 6). In addition, genes that are involved in collagen […] [Figure 4 legend, partially recovered: differentially expressed genes in doxorubicin (DOX)- and Myocet (MYO)-treated animals. Genes are functionally grouped into the indicated clusters. The heat maps illustrate significant upregulation of collagen and related genes, extracellular matrix and cytoskeleton pathways, and genes involved in DNA damage repair, with a less pronounced effect on immune and cell metabolism (transforming growth factor (TGF-beta) and mitogen-activated protein kinase (MAPK)) signalling pathways, regulation of growth factors, and small nucleolar RNAs (snoRNA). Significant upregulations are red, downregulations green, and non-significant changes grey (all relative to controls, significance at P < 0.05, moderated t-statistics adjusted for multiple testing).] Overall, these data show a strong induction of cardiac fibrosis in both the DOX and MYO groups. Collagen 1 and a number of collagen-regulating genes (SerpinE1 and H1, P4HA1, and PLOD3) were more strongly induced after DOX treatment. Regulation of cellular homeostasis, inflammation, and signalling pathways We found deregulation of several genes implicated in the response to DNA damage and apoptosis (Figure 4).
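The western blot quantification compares the cleaved caspase 3 signal normalized to the beta-tubulin loading control between groups. The hedged sketch below illustrates that normalization and a two-sample test with invented densitometric values; the reported figures (0.65 ± 0.33 vs. 0.17 ± 0.11, P < 0.05) come from the authors' ImageJ-derived data, not from this code.

```python
# Illustrative sketch of loading-control normalization for densitometry and a
# two-sample comparison between groups. All values are invented.
import numpy as np
from scipy import stats

casp3_dox = np.array([1.3, 0.9, 1.6, 1.1, 0.8])          # cleaved caspase 3 signal (DOX LV)
tub_dox   = np.array([2.0, 1.9, 2.1, 2.0, 1.8])          # beta-tubulin loading control
casp3_myo = np.array([0.4, 0.3, 0.5, 0.3, 0.4, 0.2])     # cleaved caspase 3 signal (MYO LV)
tub_myo   = np.array([2.1, 1.9, 2.0, 2.2, 1.9, 2.0])

ratio_dox = casp3_dox / tub_dox       # normalized signal per animal
ratio_myo = casp3_myo / tub_myo
t, p = stats.ttest_ind(ratio_dox, ratio_myo)
print(f"DOX {ratio_dox.mean():.2f} vs MYO {ratio_myo.mean():.2f}, p = {p:.3f}")
```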
Among those, the heat shock proteins HSP27 (HSPB1) and HSP60 (HSPD1), which are protective against apoptosis and maintain mitochondrial function, were overexpressed. In contrast, the kinases ATM and ATR, which are activated after DNA double-strand breaks and stalled DNA replication, were unchanged or even downregulated. Their downstream target, the central guardian against genomic mutations, TP53, was slightly upregulated only in MYO. The p53 inhibitor MDM4 was reduced after anthracycline treatment, and several TP53 targets, such as GADD45B and D, or ERAP1, were upregulated (Supplementary material online, Figure S9). We also investigated known signalling pathways of DOX-induced fibrosis (Supplementary material online, Figure S10). TGF-β1 was upregulated, while its receptor TGFBR1 showed reduced expression. The downstream SMAD signal transducers were changed non-significantly, except for a downregulation of SMAD5. Neither the MAP-kinase nor the PI3/Akt pathways were consistently induced in the DOX and MYO groups. Transmembrane integrin receptors mediate cellular adhesion to the ECM and activate distinct signal transduction pathways. Integrins are generally dysregulated in fibrotic myocardium, with the expression of their individual subtypes varying in different specific cardiomyopathies. 24 Integrins α-1, -3, -5, and -11 and β-1 and -3 are frequently linked to fibrosis in response to MI, pressure overload, or ageing in animal models. We found slight upregulation of the α-3 and α-5 subunits in the MYO and DOX group, respectively, without changes of α-1, α-11, and β-1; β-3 was not detected in the data set. Subunits β-4 and -7, with less documented roles in cardiac fibrosis, were found to be upregulated, but in total no clear and significant transcriptional activation of integrins was detected. [Figure 5 legend: Protein-protein interactions of differentially expressed genes between the MYO and DOX groups in the left (LV) and right ventricle (RV). A cluster of interferon-responsive genes was strongly upregulated by MYO treatment. Relative to controls, this cluster was upregulated in MYO, but downregulated in DOX animals. Interferon-inducible genes are upregulated upon DNA damage, and linked with pro-survival and tumour drug resistance. MYO upregulated several PARP (Poly [ADP-ribose] polymerase) genes, which play an important role in DNA damage repair and are activated by interferon. Likewise, the mitochondrial translocator protein (TSPO), which has a cardioprotective and inflammation-limiting effect, was significantly repressed by DOX compared to controls, while its expression was retained after MYO treatment. In addition, DOX showed stronger induction of genes involved in transcriptional and inflammatory regulation, apoptosis and stress response. Overall, MYO treatment resulted in activation of pro-survival genes, while DOX induced genes involved in cell death and apoptosis. Considerable overlap of gene expression signatures between LV and RV was found.] The transmembrane proteoglycan syndecan-1, but not other syndecans, was upregulated in both LV and RV of treated animals. In AngII-induced fibrosis, syndecan-1 mediates profibrotic signalling through the TGF-β/SMAD pathway. 25 Several fibroblast growth factors were upregulated, with the strongest effect on FGF-9 and -18. We found consistent upregulation of insulin-like growth factor binding proteins (IGFBP)-6 and -7, the latter of which has been linked to heart failure in a clinical study 26 and is regulated by DOX via p53 activation in tumour cell lines. 27
IGFBPs modulate IGF effects in tissues and may thus serve as a link between anthracycline-induced DNA damage and cardiac fibrosis and heart failure. Quantitative PCR of selected genes was performed to verify the NGS data. All genes showed differential regulation equivalent to the NGS data (Supplementary material online, Figures S11-S18). Besides genes with well-documented roles in cardiac myopathies and in the DNA damage response (Figure 4), we found dysregulation of a number of genes with previously undocumented roles in the myocardium (Supplementary material online, Table S4). Several of those are incompletely characterized in Sus scrofa databases and some might be of interest for further investigations. Several small nucleolar RNAs (snoRNAs) were strongly and uniformly upregulated in both the LV and RV of DOX and MYO animals. snoRNAs are non-coding transcripts that guide nucleotide modifications of other RNAs. Several members of both the H/ACA box class, which guides the conversion of uridines to pseudouridines, and the C/D box class, which guides 2′-O-methylations, were affected. Recently, some snoRNAs have been linked to oxidative stress caused by DOX, 28 and the activation in our experiment may represent a new mechanism of cardiotoxicity.
Pharmacokinetics
Plasma concentrations upon application of liposomal DOX were 6- and 14-fold higher than after infusion of the free drug, measured directly after completion of the first infusion and 10 min later, respectively (Figure 7A), indicating faster clearance of the free drug. Even 2 weeks after the final dose, residual DOX concentrations were detected in the left and right ventricles. As expected, liposomal DOX resulted in lower myocardial concentrations. These results confirm the comprehensive pharmacokinetic data collected during pre-clinical and clinical development of Myocet, 29,30 and the translational value of the pig study.
Concentration-dependent DOX effects on human cardiomyocytes in vitro
To further investigate the involvement of an interferon response in DOX cytotoxicity, we treated human cardiomyocytes with increasing non-lethal DOX concentrations and found significantly increased expression of DHX58 and OAS1 after applying low DOX concentrations (Figure 7B). This confirms a concentration-dependent effect on interferon-responsive genes. We next examined whether an interferon response has a direct effect on DOX cytotoxicity in cardiomyocytes. Pre-treatment of cardiomyocytes with the nucleic acid and Toll-like receptor ligand poly(I:C) 31 strongly induced expression of the selected set of interferon-responsive genes (Figure 7C) and conferred significant protection against DOX cytotoxicity, as assessed by mitochondrial activity (Figure 7D), within a short incubation time (48 h) using a single dose. A second assay directly compared myocardial cell viability after 24 h of incubation with either DOX or MYO at equivalent concentrations, which showed a similar IC50 but diverging toxicity at higher doses. Significantly preserved cell viability was observed from a 1 nM drug concentration onwards in the MYO-treated cell culture compared to DOX (P < 0.01; Figure 7E), with similar cell viability curves for the poly(I:C)-stimulated and MYO-treated cells. Taken together with the lower myocardial DOX levels achieved by MYO and the stronger expression of interferon-responsive genes at lower tissue drug concentrations, the in vitro experiments indirectly support an interferon-associated mitigation of cardiotoxicity by MYO.
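For readers who want to reproduce this kind of summary, the sketch below fits a four-parameter logistic curve and extracts an IC50 for each formulation. The concentrations and viability values are synthetic placeholders (not the measured data of Figure 7E), and the choice of a 4PL model is an assumption; the point is only to illustrate how "similar IC50 but diverging toxicity at higher doses" can be quantified.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(logc, top, bottom, log_ic50, hill):
    """Four-parameter logistic dose-response curve (viability vs. log10 concentration)."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((logc - log_ic50) * hill))

# Synthetic viability fractions over a nanomolar-to-submillimolar range (illustration only).
logc = np.log10(np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4]))
viability = {
    "DOX": np.array([0.95, 0.88, 0.60, 0.25, 0.10, 0.05]),
    "MYO": np.array([0.98, 0.95, 0.75, 0.45, 0.30, 0.20]),
}

for name, y in viability.items():
    p0 = [1.0, 0.0, np.median(logc), 1.0]          # reasonable starting guesses
    popt, _ = curve_fit(four_pl, logc, y, p0=p0, maxfev=10000)
    print(f"{name}: IC50 ~ {10 ** popt[2]:.2e} M, Hill slope = {popt[3]:.2f}, "
          f"bottom plateau = {popt[1]:.2f}")
```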
Discussion
In line with the clinical knowledge, 32 liposomal encapsulation of DOX (Myocet®) reduced heart damage compared to the free drug in pigs, as evidenced by body weight and LV and RV functional parameters. Liposomes exhibit a distinctly different biodistribution pattern from the lipophilic small molecule DOX: organ distribution is predominantly influenced by their large size, which reduces accumulation in organs with tight endothelium, such as the heart, but increases extravasation into tissues with leaky or fenestrated vessels, such as tumour tissue. 33 In clinical trials, liposomal DOX had a more than five-fold lower clearance and an approximately 10- to 15-fold lower volume of distribution (Vss), indicative of the lower degree of tissue uptake. 30,34 In dogs, peak and overall drug concentrations in the myocardium were 30-40% lower after liposomal DOX application compared to the free compound. 35 Our analyses of drug concentrations in plasma and heart tissue confirmed these data for pigs as well. Both for DOX and its liposomal formulations, cardiotoxicity depends on the cumulative administered dose. 36 [Figure 7 caption (partial): (A) Drug concentrations in plasma and heart tissue of pigs were analysed by UPLC-MS after application as free drug (DOX) or as liposomal formulation (MYO). Ten minutes after the end of the 1-h infusion, higher drug concentrations were found after treatment with MYO, which subsequently declined more slowly than after application of the free drug (DOX), indicative of lower clearance and slower distribution to cardiac tissues. Two weeks after the third and final treatment cycle, the drug concentration in myocardial samples was still lower after application of MYO compared to DOX. (B) Human cardiomyocytes were treated in vitro with doxorubicin at sublethal concentrations, and the effect on expression of selected interferon-inducible genes, which were upregulated in porcine hearts in vivo by MYO but not DOX, was assessed. Expression of DHX58 and OAS1 was induced by 1.56 ...] These alterations in tissue accumulation, and in particular the lower Cmax in the myocardium, are responsible for the overall lower toxicity of liposomal DOX in pigs. The observed expression changes may serve as a basis for devising novel cardioprotective strategies against anthracycline toxicity. The most pronounced difference in the RNA-sequencing analysis is the induction of ISGs by MYO and their repression by DOX (Figure 5). Although best known as key components of the innate immune response and an antiviral defence mechanism, interferon-inducible genes are being increasingly recognized for mediating cell survival after cytostatic stimuli, including irradiation and anthracycline therapy. 38 It has been shown that anthracyclines activate the innate immune response at concentrations well below cytostatic levels. 39 Higher expression of a set of genes termed the interferon-related DNA damage signature is associated with resistance to DNA damage after DOX treatment. 40 Based on these observations, together with the stronger induction of stress response genes by DOX, we propose a mechanism for the mitigation of cardiotoxicity by liposomal encapsulation. The limited peak concentration in the heart results in less severe DNA damage and ROS activation and, consequently, an upregulation of ISGs. Activation of this subset of ISGs, as seen after liposomal DOX treatment, induces a pro-survival cell response.
On the other hand, the more profound damage caused by higher cardiac DOX concentrations (after unpackaged application) fails to induce ISGs and drives the cell towards cell death. This is reflected by a more severe impact on the expression of collagens and ECM genes as indicators of cardiac fibrosis, as well as by the clinical outcomes. TSPO is potentially central in linking DNA damage, ROS production, the interferon response, and finally fibrosis genes. Because of its complex roles in the innate immune system, the interferon response appears to be difficult to exploit for adjuvant therapy. In contrast, TSPO might be a viable target. TSPO ligands are cardioprotective and limit ischaemia-reperfusion damage. 21 In isolated cardiomyocytes, the TSPO ligands 4′-chlorodiazepam and TRO40303 reduced DOX-induced dysfunction and cell death. 41 Pharmacokinetic analyses confirmed a lower myocardial concentration of the cytotoxic drug when applied in its liposomal form. This is reflected by lower expression of the apoptosis marker activated caspase-3 in MYO animals (Figure 7). In vitro experiments on isolated human cardiomyocytes confirmed a concentration-dependent effect on gene expression at sublethal DOX concentrations, with lower concentrations activating interferon-inducible genes. Stimulation of interferon through TLR-3 protected cardiomyocytes against acute DOX cytotoxicity. Myocardial cell viability was significantly better preserved after incubation of the cells with MYO than with DOX; thus, the in vitro data corroborate the results of the in vivo study. However, several factors, including the involvement of fibroblasts and other cell types, the varying extent and mechanisms of ISG stimulation, and exposure times to drug levels with subclinical toxicity, need to be considered when comparing in vitro and in vivo models. Importantly, there is no reliable method for the detection of chronic toxicity caused by repetitive drug administration in simplified in vitro models. Meaningful in vivo models with high translational value are essential for adequate assessment of the molecular mechanisms of cardiotoxicity.
Relevant pre-clinical signs of cardiotoxicity and cardiac fibrosis
Of note, treatment of pigs with doses equivalent to human application resulted in significant cardiovascular toxicity in all groups. This finding is in contrast with the generally accepted view that cardiotoxicity is a rare to moderately frequent complication of DOX treatment, with an incidence of heart failure between 5% and 48% depending on the cumulative dose. 5,42 In our experiment, the cumulative dose did not reach this limit, yet all animals showed elevated TnI, even when signs of clinical heart failure (e.g. congestion, dyspnoea) were absent. Moreover, the animals were healthy at study start and lacked conventional risk factors. Their young age suggests increased sensitivity to the adverse effects of chemotherapy. In conclusion, we show a uniform and pronounced upregulation of collagens and genes associated with tissue fibrosis in both the LV and RV and in both the DOX and MYO groups. Together with impaired cardiac function, these data clearly show that cardiac fibrosis was induced by anthracyclines at an early stage. Although the extent of upregulation of fibrosis-associated genes was lower after Myocet, similar caution, and in particular close observation of cardiac function, is advisable. Our data suggest that primary prevention of cardiotoxicity may be the right choice for all patients before starting anticancer therapy.
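As a small illustration of the type of quantitative comparison reported above for cleaved caspase-3 densitometry (AUC, DOX vs. MYO), the sketch below runs a two-sample comparison on hypothetical per-animal values. Only the summary statistics (0.65 ± 0.33 vs. 0.17 ± 0.11) come from the text; the individual values and the choice of Welch's t-test are assumptions for illustration, since the exact test used is not specified here.

```python
import numpy as np
from scipy import stats

# Hypothetical cleaved caspase-3 band-intensity AUCs (arbitrary units), one per animal.
dox = np.array([0.25, 0.45, 0.65, 0.85, 1.05])
myo = np.array([0.03, 0.10, 0.17, 0.24, 0.31])

t, p = stats.ttest_ind(dox, myo, equal_var=False)  # Welch's t-test (assumption)
print(f"DOX: {dox.mean():.2f} +/- {dox.std(ddof=1):.2f}")
print(f"MYO: {myo.mean():.2f} +/- {myo.std(ddof=1):.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```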
Effect of HDAC Inhibitors on Corneal Keratocyte Mechanical Phenotypes in 3-D Collagen Matrices
Purpose: Histone deacetylase (HDAC) inhibitors have been shown to inhibit the TGFβ-induced myofibroblast transformation of corneal fibroblasts in 2-D culture. However, the effect of HDAC inhibitors on keratocyte spreading, contraction, and matrix remodeling in 3-D culture has not been directly assessed. The goal of this study was to investigate the effects of the HDAC inhibitors Trichostatin A (TSA) and Vorinostat (SAHA) on corneal keratocyte mechanical phenotypes in 3-D culture using defined serum-free culture conditions. Methods: Rabbit corneal keratocytes were plated within standard rat tail type I collagen matrices (2.5 mg/ml) or compressed collagen matrices (~100 mg/ml) and cultured for up to 4 days in serum-free media, media supplemented with PDGF BB or TGFβ1, and either 50 nM TSA, 10 μM SAHA, or vehicle (DMSO). F-actin, α-SM-actin, and collagen fibrils were imaged using confocal microscopy. Cell morphology and global matrix contraction were quantified digitally. The expression of α-SM-actin was assessed using western blotting. Results: Corneal keratocytes in 3-D matrices had a quiescent mechanical phenotype, as indicated by a dendritic morphology, a lack of stress fibers, and minimal cell-induced matrix remodeling. This phenotype was generally maintained following the addition of TSA or SAHA. TGFβ1 induced a contractile phenotype, as indicated by a loss of dendritic cell processes, the development of stress fibers, and significant matrix compaction. In contrast, cells cultured in TGFβ1 plus TSA or SAHA remained dendritic and did not form stress fibers or induce ECM compaction. Western blotting showed that the expression of α-SM actin after treatment with TGFβ1 was inhibited by TSA and SAHA. PDGF BB stimulated the elongation of keratocytes and the extension of dendritic processes within 3-D matrices without inducing stress fiber formation or collagen reorganization. This spreading response was maintained in the presence of TSA or SAHA. Conclusions: Overall, HDAC inhibitors appear to mitigate the effects of TGFβ1 on the transformation of corneal keratocytes to a contractile, myofibroblast phenotype in both compliant and rigid 3-D matrices while preserving normal cell spreading and their ability to respond to the pro-migratory growth factor PDGF.
Because it is exposed, the cornea is susceptible to physical and chemical injuries, while also being a target of vision correction through refractive surgical procedures. Following a lacerating injury or refractive surgery, quiescent corneal keratocytes surrounding the wound often transform into fibroblasts or myofibroblasts, generating contractile forces and synthesizing scar tissue. These processes can cause a permanent reduction in corneal clarity, as well as decrease the effect of refractive surgery. TGFβ1, a cytokine key to modulating corneal wound healing, has been implicated in the development of corneal haze after photorefractive keratectomy (PRK) [1][2][3][4]. TGFβ1 has been shown to transform quiescent keratocytes into myofibroblasts that synthesize fibrotic extracellular matrix (ECM) and exert strong contractile forces [5][6][7][8][9][10]. These processes result in opacity and vision degradation in a subset of patients [11,12]. Histone deacetylase (HDAC) inhibitors have recently been shown to mitigate the effects of TGFβ1 both in vitro and in vivo.
HDAC inhibitors were initially developed as anti-cancer agents for their ability to epigenetically regulate anti-angiogenic and pro-apoptotic gene expression in transformed cells [13,14]. However, more recent studies have demonstrated their anti-inflammatory and anti-fibrotic properties in canine and equine corneal fibroblasts [15,16], as well as in animal models of inflammatory bowel disease, multiple sclerosis, and systemic lupus erythematosus [17]. A recent study showed that the HDAC inhibitor Trichostatin A (TSA) could inhibit fibrosis during corneal wound healing in a rabbit PRK model [18]. Similarly, the topical application of Vorinostat (suberoylanilide hydroxamic acid [SAHA]), an FDA-approved analog of TSA, has been shown to significantly reduce corneal haze, the expression of the myofibroblast marker protein α-smooth muscle actin, and the inflammation associated with the wound healing response in the rabbit [19]. TSA and SAHA both belong to a structural class of hydroxamic acid-based inhibitors that are only effective against class I, II, and IV HDACs, which contain zinc in their catalytic active site [20]. Recent studies have found that inhibitors of these classes selectively alter the acetylation and transcription of genes involved in smooth muscle differentiation and fibrosis in cardiac fibroblasts [21,22]. However, their precise mechanism of action in reducing corneal fibrosis is still under investigation [23][24][25]. In vitro studies have shown that HDAC inhibitors can block myofibroblast transformation, but these studies have relied on 2-D culture models using serum-cultured corneal fibroblasts [18,23,24]. Keratocytes cultured under serum-free conditions maintain the quiescent, dendritic phenotype normally observed in vivo before injury [26,27], whereas exposure to serum results in fibroblast differentiation, as indicated by the assumption of a bipolar morphology, formation of intracellular stress fibers, and the downregulation of keratan sulfate proteoglycan expression [27][28][29][30][31]. An understanding of the effects of HDAC inhibition on both activated and quiescent corneal stromal cells is needed, as both are present during various stages of wound healing. The use of 3-D culture models may also provide further insights into the effect of HDAC inhibitors on cell behavior. Keratocytes reside within a complex 3-D extracellular matrix in vivo, and significant differences in cell morphology, adhesion organization, and mechanical behavior have been identified between 2-D and 3-D culture models [32][33][34][35][36]. Unlike rigid 2-D substrates, 3-D models also allow for the assessment of cellular force generation and cell-induced matrix reorganization, biomechanical activities that are critically involved in the migratory, contractile, and remodeling phases of wound healing. In this study, we use 3-D culture models to study the effects of HDAC inhibitors on TGFβ1-induced corneal keratocyte transformation (both biochemical and biomechanical) in defined serum-free culture conditions. We also evaluate the effects of these inhibitors on corneal keratocyte spreading in response to the pro-migratory growth factor PDGF [37,38].
Preparation of standard (uncompressed) collagen matrices: Hydrated collagen matrices were prepared by mixing Type I rat tail collagen (BD Biosciences, San Jose, CA) with 10X DMEM to achieve a final collagen concentration of 2.5 mg/ml [31]. A 50 μl suspension of cells was added after neutralizing the collagen by the addition of NaOH.
Next, 30 μl aliquots of the cell/collagen mixture (5 × 10⁴ cells/matrix) were spread over a central 12-mm diameter circular region on Bioptechs culture dishes (Delta T; Bioptechs, Inc., Butler, PA). The dishes were then placed in a humidified incubator for 30 min for polymerization. The matrices were overlaid with 1.5 ml of serum-free media (basal media). After 24 h of incubation to allow for cell spreading, the media were replaced with basal media or basal media supplemented with 50 ng/ml PDGF BB or 10 ng/ml TGFβ1, together with either 50 nM Trichostatin A (TSA; Sigma-Aldrich), 10 μM Vorinostat (suberoylanilide hydroxamic acid; SAHA; Selleck Chemicals LLC, Houston, TX), or vehicle (DMSO). Constructs were then cultured for an additional 1-4 days. Growth factor concentrations were determined from previous studies and represent the lowest concentrations producing a maximal effect on changes in cell morphology and f-actin organization [37]. HDAC inhibitor concentrations were determined from pilot studies and represent the highest concentrations that did not induce keratocyte toxicity under serum-free conditions in standard 3-D collagen matrices.
Compressed collagen matrices: Compressed collagen matrices were prepared as described previously by Brown and coworkers [37,40,41]. Briefly, 10 mg/ml Type I rat tail collagen (BD Biosciences) was diluted to a final concentration of 2 mg/ml. After drop-wise neutralization with 1 M sodium hydroxide, a suspension of 2 × 10⁴ or 2 × 10⁵ keratocytes in 0.6 ml basal media was added to the collagen mixture. The solution containing cells and collagen was poured into a 3 × 2 × 1 cm stainless steel mold and allowed to set for 30 min at 37 °C. To compact the matrices, a layer of nylon mesh (~50 μm mesh size) was placed on a double layer of filter paper. The matrices were placed on the nylon mesh, covered with a pane of glass, and loaded with a 130-g stainless steel block for 5 min at room temperature. This process squeezes media out of the matrix and results in the formation of a flat cell/collagen sheet with high mechanical stiffness. Following compression, 6-mm diameter buttons were punched out of the matrix using a trephine [37]. After 24 h of incubation to allow for cell spreading, the media were replaced with basal media or basal media supplemented with 50 ng/ml PDGF BB or 10 ng/ml TGFβ1, together with either 50 nM TSA, 10 μM SAHA, or vehicle (DMSO). Constructs were then cultured for an additional 1-4 days.
Confocal imaging: After 1-4 days of culture in test media, cells were fixed using 3% paraformaldehyde in PBS for 15 min and permeabilized with 0.5% Triton X-100 in PBS for 3 min. To label f-actin, Alexa Fluor 546 phalloidin was used (1:20, Invitrogen). In some experiments, immunolabeling for α-smooth muscle actin (α-SM-actin) was performed. Following incubation in 1% BSA for 60 min to block nonspecific binding, cells were incubated for 2 h in a mouse monoclonal α-SM-actin antibody (1:100, Sigma-Aldrich) in 1% BSA at 37 °C. Cells were then washed in PBS and incubated for 1 h in affinity-purified, FITC-conjugated goat anti-mouse IgG (1:20, Jackson Laboratories, Bar Harbor, ME). Nuclei were stained with DAPI (300 nM) for 5 min, washed, and mounted in ProLong Gold anti-fade reagent for imaging. Constructs were imaged using laser scanning confocal microscopy (Leica SP8, Heidelberg, Germany), as previously described [42]. A HeNe laser (633 nm) was used for reflection imaging, and argon (488 nm) and GreNe (543 nm) lasers were used for fluorescence imaging.
Stacks of optical sections (z-series) were acquired using a 63× water immersion objective (1.2 NA, 220 μm free working distance). Sequential scanning was used to image double-labeled samples to prevent cross-talk between fluorophores.
Cell morphology: Changes in cell morphology within compressed collagen matrices were measured using MetaMorph, as previously described [42]. The projected cell length was calculated by outlining the maximum intensity projection image of a cell (generated from the f-actin z-series), thresholding, and applying the Integrated Morphometry Analysis (IMA) routine. The length is calculated by IMA as the span of the longest chord through the object. The height of cells was calculated by measuring the distance between the first and last planes in the z-series in which a portion of the cell was visible. Measurements were performed on a minimum of 16 cells for each condition, taken from three separate experiments.
Global matrix contraction: DIC imaging was used to measure the global matrix contraction of standard (uncompressed) 3-D collagen matrices. As the bottoms of the matrices remain attached to the dish, cell-induced contraction results in a decrease in matrix height [43]. Height was measured by focusing on the top and bottom of each matrix at six different locations. Measurements were performed in triplicate for each condition and repeated three times. The percentage decrease in matrix height over time (as compared to control matrices without cells) was then calculated.
Immunoblotting: Hydrated collagen matrices with rabbit keratocytes were incubated in culture media with or without TGFβ1 (10 ng/ml), TSA (50 nM), and SAHA (10 μM) for 4 days. Post-incubation, cells were collected from the matrices (8-10 gels per condition) by digestion in a solution of 2.5 mg/ml collagenase D (Roche Applied Sciences, IN) in PBS at 37 °C for 15 min, followed by centrifugation at 500 ×g for 4 min. Pelleted cells were washed in PBS and then lysed in ice-cold RIPA buffer (100 μl) supplemented with protease and phosphatase inhibitor cocktails (Roche Applied Sciences), PMSF (Sigma-Aldrich), and sodium orthovanadate (NEB) for 20 min. The lysates were sonicated on ice for 2 × 30 s at 30% power. Cell lysates were then clarified by centrifuging at 15,000 ×g at 4 °C for 45 min. The collected supernatant was assayed for total protein content using a BCA assay (Thermo Scientific), and samples were prepared by adding 6X sample loading buffer (G Biosciences) with 20% β-mercaptoethanol and boiled for 5 min. An equal amount of protein (15 µg) from each condition was subjected to sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) on 4-20% gels (Bio-Rad, Hercules, CA). The resolved proteins were then transferred onto a PVDF membrane (Millipore). The membrane was blocked with 5% non-fat milk in Tris-buffered saline (TBS, pH 7.4) for 1 h at room temperature and incubated overnight at 4 °C with a mouse monoclonal anti-SMA antibody (1:1000; Sigma-Aldrich). The membrane was then washed in TBS-Tween-20 (0.1%) and probed with appropriate horseradish peroxidase (HRP)-conjugated secondary antibodies (1:10,000; Jackson Laboratories) and enhanced chemiluminescent (ECL) detection reagents (Pierce, Rockford, IL). Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was probed on each membrane to control for equal protein loading. The blots were imaged on a Typhoon Variable Mode Imager (Amersham Biosciences, NJ) and visualized using ImageJ software.
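The morphometry described above (maximum-intensity projection, thresholding, longest chord through the object, and cell height from the first and last occupied optical sections) can be approximated with the short sketch below. This is not the MetaMorph/IMA pipeline used in the study; the synthetic stack, the fixed threshold, and the convex-hull shortcut for finding the longest chord are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist

def projected_cell_length(z_stack, threshold):
    """Longest chord (pixels) through the thresholded maximum-intensity projection."""
    mip = z_stack.max(axis=0)                    # maximum-intensity projection
    coords = np.column_stack(np.nonzero(mip > threshold))
    if len(coords) < 3:
        return 0.0
    hull = ConvexHull(coords)                    # chord endpoints lie on the convex hull
    return pdist(coords[hull.vertices]).max()    # longest pairwise distance

def cell_height(z_stack, threshold, z_step_um=1.0):
    """Distance between the first and last optical sections containing the cell."""
    occupied = np.nonzero((z_stack > threshold).any(axis=(1, 2)))[0]
    return (occupied[-1] - occupied[0]) * z_step_um if occupied.size else 0.0

if __name__ == "__main__":
    # Synthetic elongated "cell" in a 10 x 64 x 64 stack, purely for demonstration.
    zz, yy, xx = np.mgrid[0:10, 0:64, 0:64]
    stack = np.exp(-(((zz - 5) / 2.0) ** 2 + ((yy - 32) / 4.0) ** 2
                     + ((xx - 32) / 20.0) ** 2))
    print("projected length (px):", round(projected_cell_length(stack, 0.1), 1))
    print("cell height (um):", cell_height(stack, 0.1, z_step_um=1.0))
```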
Statistics: Statistical analyses were performed using SigmaStat version 3.11 (Systat Software Inc., Point Richmond, CA). A two-way repeated measures ANOVA was used to compare group means, and post-hoc multiple comparisons were performed using the Holm-Sidak method. Differences were considered significant if p<0.05.
RESULTS
Effect of TSA on corneal keratocyte phenotypes: Corneal keratocytes in standard (uncompressed) 3-D matrices had a quiescent mechanical phenotype, as indicated by a dendritic morphology, a lack of stress fibers, and minimal cell-induced matrix remodeling (Figure 1A). This phenotype was generally maintained following the addition of TSA (Figure 1B), although there was a reduction in the overall complexity and number of small branches in the cell processes. TGFβ1 induced a contractile phenotype, as indicated by a loss of dendritic cell processes, the development of stress fibers, and local matrix compaction (Figure 1C). In contrast, cells cultured in TGFβ1 plus TSA remained dendritic and did not form stress fibers or induce ECM compaction (Figure 1D). Cell-induced ECM reorganization was quantified by measuring global matrix contraction (Figure 1E). An increase in global matrix contraction was induced by TGFβ1 at both 1 and 4 days of culture, and this increase was inhibited by TSA (p<0.001; two-way repeated measures ANOVA). Consistent with previous studies, approximately 20% of cells showed positive labeling for α-SM-actin localized to the stress fibers following treatment with TGFβ1 in hydrated 3-D collagen matrices (Figure 2A). This was completely blocked by treatment with TSA (Figure 2B). TGFβ-treated cells also showed enhanced vinculin labeling of focal contacts at the ends of stress fibers (Figure 2C), which was inhibited by TSA (Figure 2D). Western blotting showed that the expression of α-SM actin after treatment with TGFβ1 for 4 days was inhibited by concurrent treatment with TSA (Figure 2E). We also plated cells within compressed collagen matrices, as this provides a much stiffer 3-D culture environment than standard collagen matrices, similar to the native corneal stroma [40]. Specifically, the elastic modulus of newly polymerized 1-2 mg/ml hydrated collagen matrices measured by rheometry is generally less than 50 Pa [44][45][46], although the effective stiffness to which cells are exposed is likely higher in attached matrices due to the rigid boundary condition. By contrast, the stiffness of compressed collagen matrices has been reported to be 1 MPa [40,47]. Keratocytes in compressed collagen matrices cultured in serum-free media developed a dendritic morphology with membrane-associated f-actin labeling (Figure 3A), as previously reported [37]. This cytoskeletal organization was maintained in the presence of TSA (Figure 3B). PDGF induces the branching and elongation of corneal keratocytes in 3-D matrices and is a potent stimulator of cell migration [37,38]. In this study, PDGF BB induced keratocyte spreading in compressed collagen matrices (Figure 3C), as indicated by an increase in the lengths of cells (Figure 3G). This spreading response was maintained in the presence of TSA (Figure 3D and G), although there was an apparent reduction in the number of branches and filopodial extensions along the dendritic processes. In contrast, TGFβ induced the loss of dendritic processes, stress fiber formation (Figure 3E), and the expression of α-SM-actin, as previously described [37]. This transformation was blocked by TSA (Figure 3F).
Cells cultured in TGFβ were shorter and had a more stellate morphology than cells cultured in TGFβ + TSA (Figure 3G).
Effect of SAHA on corneal keratocyte phenotypes: Corneal keratocytes in hydrated 3-D collagen matrices also maintained their quiescent mechanical phenotype following the addition of SAHA (Figure 4A,B). Cells cultured in PDGF BB also maintained a quiescent mechanical phenotype and did not develop stress fibers or produce significant compaction of the matrix (Figure 4C). This response was maintained in the presence of SAHA (Figure 4D). Similar to TSA, SAHA prevented the TGFβ1-induced myofibroblast transformation of corneal keratocytes. Keratocytes cultured in TGFβ1 plus SAHA remained dendritic and did not form stress fibers or induce ECM compaction (compare Figure 4E,F). The expression of α-SM actin after treatment with TGFβ was also inhibited by SAHA (Figure 2E). Similarly, in compressed collagen matrices, the normal dendritic morphology with membrane-associated f-actin labeling was maintained in the presence of SAHA (Figure 5A,B). PDGF BB induced keratocyte spreading in compressed collagen matrices, as indicated by an increase in the lengths of cells (Figure 5D). This spreading response was maintained in the presence of SAHA (Figure 5E). In both serum-free and PDGF culture conditions, there was an apparent reduction in the number of branches in the dendritic processes in the presence of SAHA. Similar to TSA, SAHA prevented the TGFβ1-induced myofibroblast transformation of corneal keratocytes inside compressed ECM (Figure 5C,D).
DISCUSSION
Corneal fibrosis following injury or surgery is characterized by the initial activation of keratocytes to a fibroblastic repair phenotype, some of which further differentiate into myofibroblasts. Myofibroblasts deposit excessive extracellular matrix components, generate large contractile forces, distort the surrounding microarchitecture, and ultimately contribute to corneal haze [2,6,7]. The HDAC inhibitors TSA and SAHA have demonstrated successful anti-inflammatory and antifibrogenic activity both in vitro and in vivo [48,49]. HDAC inhibition has been shown to markedly reduce laser-induced corneal haze in rabbits [18,19] and in the alkali-burned mouse cornea in vivo [50]. In vitro studies have demonstrated that HDAC inhibitors significantly decrease corneal fibroblast proliferation and activation and block TGFβ-induced α-smooth muscle actin and fibrotic ECM expression [15,16,23-25]. While these studies have provided important insights into the possible mechanism of HDAC inhibition of corneal fibrosis, they used serum-cultured corneal fibroblasts plated on rigid 2-D substrates [18,23,24]; thus, the effect of HDAC inhibitors on keratocyte spreading, contraction, and matrix remodeling in 3-D culture has not been directly assessed. The goal of this study was to investigate the effects of the HDAC inhibitors TSA and SAHA on corneal keratocyte mechanical phenotypes in 3-D culture using defined serum-free culture conditions. We show for the first time that TGFβ1-induced morphological changes, stress fiber formation, and matrix reorganization in corneal keratocytes in 3-D collagen matrices were inhibited by both TSA and SAHA. Concurrently, the expression of α-SMA was blocked by these HDAC inhibitors. Mechanistically, the HDAC inhibitors may lead to increased histone acetylation, thereby decreasing transcriptional access to gene promoter regions of pro-fibrotic proteins, such as α-SMA.
The TGFβ-induced reorganization of the actin cytoskeleton and enhanced contractility have been shown to be products of increased transcription and the activation of the small GTPases RhoA/B, which themselves have been shown to be activated by TGFβ-dependent Smad proteins in NIH3T3 fibroblasts [51]. In addition, Smad3 mediates the TGFβ-induced α-SMA expression in rat lung fibroblasts [52]. Interestingly, the upregulation of α-SMA in corneal myofibroblasts stimulated by TGFβ is attenuated by inhibiting Rho-associated protein kinase (ROCK), a downstream target of Rho GTPase signaling that regulates cell contractility [37,53]. This suggests a possible interplay between the Rho signaling pathway and Smad proteins in TGFβ-induced fibrosis, both of which may be influenced by HDAC activity. In addition to TGFβ, another growth factor that may play an important role in wound healing is PDGF, which is endogenously present in corneal tear fluid and has been shown to induce cell spreading and migration in both dermal and corneal fibroblasts [38,43,54]. Specifically, PDGF BB induces Rac activation, which results in cell spreading via the formation of extensive pseudopodial processes and membrane ruffling, with increased cell length and area in 3-D matrices [54]. PDGF stimulates the migration of corneal keratocytes without the generation of large mechanical forces, as indicated by a lack of stress fibers and minimal cell-induced ECM reorganization [37]. Thus, PDGF may facilitate wound repopulation without the development of fibrotic tissue, which can impair vision. In this study, corneal keratocytes elongated when stimulated with PDGF BB, even in the presence of TSA or SAHA, suggesting that PDGF-induced cell spreading and migration may persist despite a loss in HDAC activity. Cells also maintained a quiescent mechanical phenotype in PDGF BB, as indicated by the maintenance of dendritic cell processes, a lack of stress fibers, and minimal cell-induced matrix reorganization. One difference we observed in the keratocyte phenotype following HDAC inhibition, in both serum-free and PDGF-containing media, was a reduction in the branching and short filopodial extensions along the dendritic cell processes. The significance of this finding is unclear. Studies using other cell types have shown that deacetylase activity may be required for PDGF-induced actin remodeling and cell migration [55]. In NIH3T3 fibroblasts, deacetylase activity is required for certain PDGF-induced transcriptional programs, particularly STAT3 activation [56] and its dependent transcription of growth-stimulatory genes (c-myc) [57], and the induction of anti-apoptotic (bcl-XL) [58] and pro-angiogenic (VEGF) [59] activities. Others have found HDAC6 (a potent class II microtubule deacetylase [55]) activity to be necessary [60,61] but not sufficient to support cell migration through the deacetylation of α-tubulin [62]. This may be explained by the fact that, in addition to microtubules [61], deacetylase activity also targets the function of the molecular chaperone heat shock protein 90 (Hsp90) [63,64], which is required for PDGF-induced membrane ruffle formation and cell motility. The expression of an acetylation-resistant mutant form of Hsp90 in HDAC6-deficient mouse embryonic fibroblasts rescued membrane ruffle defects in these cells [62]. Finally, functional HDAC6 and Hsp90 activity together were shown to be important for fully activated Rac1 and the consequent cell migration [62].
However, we also note that HDAC6-deficient mice are viable and fertile and show no obvious defects in microtubule organization and stability, despite impaired Hsp90 function [65]. Thus, there may be redundant mechanisms controlling actin organization and cell migration in these mice, which may explain the persistent spreading of corneal keratocytes after PDGF treatment, despite the inhibition of HDAC6 activity by TSA and SAHA in this study. The similarity in the effects of TSA and SAHA was expected, as both inhibitors are analogs targeting the same class I and II HDACs, which require zinc for their catalytic activity. Inhibitors such as TSA and SAHA show low selectivity for the individual isoforms of class I and II HDACs, furthering the need to develop novel isozyme-specific inhibitors. This will be an important step toward discovering and validating the functional targets of each of these HDAC enzymes, and toward identifying differences between their catalytic and non-catalytic activities. Overall, the data demonstrate that HDAC inhibitors mitigate the effects of TGFβ1 on the transformation of quiescent corneal keratocytes to a contractile, myofibroblast phenotype in both compliant and rigid 3-D matrices, while preserving normal cell spreading and their ability to respond to the pro-migratory growth factor PDGF BB. Following corneal injury, quiescent keratocytes are still present in the stroma surrounding the wound. From a clinical standpoint, it is important that HDAC inhibitors block TGFβ-induced myofibroblast transformation and its associated fibrosis, but that they do not alter the normal phenotype of these cells. In addition, the ability of keratocytes to respond to other wound healing cytokines, such as PDGF, following HDAC inhibition may allow keratocytes to repopulate the wounded stroma while maintaining a more quiescent cell mechanical phenotype and a more regenerative wound healing process.
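As a hedged illustration of the statistical approach described in the Methods above (two-way repeated-measures ANOVA followed by Holm-Sidak-corrected comparisons), the sketch below analyzes synthetic matrix-contraction data with treatment and day as within-experiment factors. This is not the SigmaStat workflow itself; the column names, effect sizes, and the use of paired t-tests for the post-hoc step are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
treatments = ["control", "TGFb1", "TGFb1+TSA"]
days = [1, 4]

# Synthetic percent-contraction values; each independent experiment acts as a "subject".
rows = []
for exp in range(3):
    for trt in treatments:
        for day in days:
            base = {"control": 5.0, "TGFb1": 30.0, "TGFb1+TSA": 8.0}[trt]
            rows.append({"experiment": exp, "treatment": trt, "day": day,
                         "contraction": base * (1 + 0.2 * (day - 1)) + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA (treatment x day, within-subject).
res = AnovaRM(df, depvar="contraction", subject="experiment",
              within=["treatment", "day"]).fit()
print(res.anova_table)

# Post-hoc pairwise paired t-tests between treatments, Holm-Sidak corrected.
pvals, labels = [], []
for a, b in combinations(treatments, 2):
    xa = df[df.treatment == a].sort_values(["experiment", "day"]).contraction.values
    xb = df[df.treatment == b].sort_values(["experiment", "day"]).contraction.values
    pvals.append(stats.ttest_rel(xa, xb).pvalue)
    labels.append(f"{a} vs {b}")
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")
for lab, p, r in zip(labels, p_adj, reject):
    print(f"{lab}: adjusted p = {p:.4f}, significant = {r}")
```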
Null curve evolution in four-dimensional pseudo-Euclidean spaces
We define a Lie bracket on a certain set of local vector fields along a null curve in a 4-dimensional semi-Riemannian space form. This Lie bracket will be employed to study integrability properties of evolution equations for null curves in a pseudo-Euclidean space. In particular, a geometric recursion operator generating infinitely many local symmetries for the null localized induction equation is provided.
Introduction
Recently in [1,2] a connection between the local motion of a null curve in L^3 and the celebrated KdV equation was given. In [3] the author obtained a connection between a null curve evolution in L^4 (which we refer to as the "null localized induction equation" or NLIE) and the Hirota-Satsuma coupled KdV (HS-cKdV) system, which we recall briefly here (a standard form is displayed below). Hirota and Satsuma [4] proposed (perhaps up to rescaling) the HS-cKdV system, which describes the interaction of two long waves with different dispersion relations. Many systematic methods have been employed in the literature to clarify the integrability of the HS-cKdV system: the Lax pair [5,6,7], the Bäcklund transformation method [8], the Darboux transformation [9,10,11], the Painlevé analysis [6,7], the search for infinitely many symmetries and conservation laws [4,12,13], etc. Fuchssteiner [12] discovered that the HS-cKdV system given by (1) admits symplectic and cosymplectic operators. Furthermore, an infinite hierarchy of symmetries for (1) was found in [13]. In this case the recursion operator ΘJ is not hereditary. Hence, the bi-Hamiltonian formulation of the HS-cKdV system does not arise from a Hamiltonian pair. In this paper, we extend some of the results given in [1,2,3] to a more general background. More specifically, we generalize the Lie algebra structure defined on the local vector fields along null curves from the 3-dimensional Minkowski space to 4-dimensional semi-Riemannian space forms M^4_q(G) of index q = 1 or q = 2 and curvature G. This Lie algebra, together with the properties of the HS-cKdV system described above, will be used to construct an infinite hierarchy of commuting symmetries for the NLIE equation in a 4-dimensional pseudo-Euclidean space. It is interesting to point out that, from a physical point of view, the 4-dimensional space is a more realistic context than the 3-dimensional background, the latter very often serving merely as a toy model. Let us recall that relativistic particle models have been described by actions defined on null curves whose Lagrangians are functions of their curvatures [14,15]. These actions were also studied in the Minkowski spaces L^3 and L^4 (see [16,17]), as well as in 3-dimensional Lorentzian space forms in [18]. All these works addressed variational problems on spaces of null curves, and they showed that the underlying mechanical system is governed by a stationary system of Korteweg-De Vries type. Furthermore, if γ(σ, 0) is a critical point (the so-called null elastica) for the action ∫ c k dσ, where c is a constant, then the associated solution γ(σ, t) to the NLIE starting from γ(σ, 0) is the null elastica evolving by rigid motions in the direction X = (1/2)kT + N, where X is actually the rotational Killing vector field for the null elastica (see [16,18]), and it served to determine the benchmark for the NLIE evolution equation in [1].
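For reference, a commonly quoted form of the Hirota-Satsuma coupled KdV system, labelled (1) in the text, is recalled below. The paper's own normalization may differ by the rescaling mentioned above, so this should be read as a standard form rather than the exact one used here.

```latex
\begin{equation*}
  u_t \;=\; \tfrac{1}{2}\,u_{xxx} + 3\,u\,u_x - 6\,v\,v_x,
  \qquad
  v_t \;=\; -\,v_{xxx} - 3\,u\,v_x .
  \tag{1}
\end{equation*}
```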
The idea of relating an equilibrium shape to a curve evolution was originally explored by Hasimoto in the connection between the elastica (an equilibrium shape of an elastic rod) and the "localized induction equation" (LIE). As might be expected, relationships with Korteweg-De Vries evolution systems still arise when null curve motions in 4-dimensional backgrounds are considered. One of the many advantages of having a scalar evolution equation coming from a curve motion is that many aspects of its integrability can be elucidated from the intrinsic geometry of the curves involved (see [19,20]). Conversely, integrability properties of the curvature flow can be employed to determine integrability properties of the curve evolution equation (see [1,21,22,23]). Despite the numerous well-known connections between curve evolution equations and integrable Hamiltonian systems of PDEs, there is still a lack of understanding about the mechanisms and links among the different frameworks. Our overall aim here is to go further into those concerns. The rest of this paper is organized as follows. In section 2 we summarize some basic notions about formal variational calculus, on which the Hamiltonian theory of nonlinear evolution equations is based. In section 3 we include some background formulas and results concerning the differential geometry of null curves in a semi-Riemannian space form. In particular, we study the properties of variation vector fields along a null curve in a semi-Riemannian space form as well as the variational formulas for its curvatures. In section 4, the Lie bracket on the set of P-local vector fields locally preserving the causal character along null curves in a 3-dimensional Lorentzian space given in [1] is extended to 4-dimensional semi-Riemannian space forms. This section also includes a discussion about the connection between the geometric variational formulas for curvatures and the Hamiltonian structure for the HS-cKdV system. The above results will be employed in section 5 to introduce the NLIE equation as a geometric realization of the HS-cKdV equations, and to construct a geometric recursion operator generating an infinite hierarchy of commuting symmetries for the NLIE equation.
Preliminaries
In this section we summarize some necessary notions and basic definitions from differential calculus which are relevant to the rest of the paper (see [24,25,26] for a very complete treatment of the subject). Let n be a positive integer and consider differentiable functions u_1, u_2, ..., u_n of a real variable. Let P be the algebra of polynomials in u_1, u_2, ..., u_n and their derivatives of arbitrary order. We refer to the elements of P whose constant term vanishes as P_0. Acting on the algebra P is a derivation ∂, so that P becomes a differential algebra.
Remark 1. In a more general setting we can consider P, for example, to be the algebra of local functions, i.e. P = ∪_{j=1}^∞ P_j, where P_j is the algebra of locally analytic functions of u_1, u_2, ..., u_n and their derivatives up to order j (see [27,28,29,30]). All the results and formulas established in sections 4 and 5 involving the algebra P remain valid if the differential algebra of polynomials is replaced by the differential algebra of local functions. Nonetheless, the differential algebra of polynomials is sufficient for our purposes.
It is customary to take ∂ to be the total derivative D_x (a standard expression is recalled below). In addition to ∂, other derivations ξ may also be considered. The action of ξ is determined once we know how ξ acts on the generators of the algebra.
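For reference, the standard expression of the total derivative in this formal-variational-calculus setting is recalled below, writing u_{i,j} for the j-th derivative of u_i (with u_{i,0} = u_i) and assuming, as usual for differential polynomials, no explicit dependence on x. This is the usual convention of [24,25,26] and is stated here only as a reminder.

```latex
\begin{equation*}
  D_x \;=\; \sum_{i=1}^{n} \sum_{j\ge 0} u_{i,j+1}\,
            \frac{\partial}{\partial u_{i,j}},
  \qquad u_{i,j} := \frac{d^{j}u_i}{dx^{j}},\quad u_{i,0}=u_i .
\end{equation*}
```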
Indeed, set The space of all derivations on P, denoted by der(P), is a Lie algebra with respect to the usual commutator Derivations commuting with the total derivative have important properties. Among others, if [ξ, ∂] = 0, we have where a i = a i,0 = ξu i . Let A = (a 1 , . . . , a n ) be an element of P n and write The set of derivations ∂ A is a Lie subalgebra of der(P), and it induces a Lie algebra on the space This latter commutator can also be expressed with the aid of Fréchet derivatives as We will refer to ∂ A as an evolution derivation (or a vector field, provided that no confusion is possible), and the algebra of all evolution derivations will be denoted by der * (P). Observe that, in Consider an evolution equation of the form where F = (f 1 , . . . , f n ) is an element of P n . An element S = (s 1 , s 2 , . . . , s n ) ∈ P n is called a symmetry of the evolution equation (7) mapping a symmetry to a new symmetry. A linear differential operator R : P n → P n is a recursion operator for the evolution equation (7) if it is invariant under F , i.e., L F R = 0, where L F is the Lie derivative acting as L F A = [F, A] for all A ∈ P n . R is said to be hereditary if for an arbitrary vector field F ∈ P n the relation L RF R = RL F R is verified. Null curve variations in M 4 q (G) The geometry of null curves is quite different from the non-null ones, so let us review the relevant results, going further into what concerns us most for later work. A semi-Riemannian manifold (M n q , g) is an n-dimensional differentiable manifold M n q endowed with a non-degenerate metric tensor g with signature (n − q, q). The metric tensor g will be also denoted by ·, · and the Levi-Civita connection by ∇. The sectional curvature of a non-degenerate plane generated by {u, v} is where R is the semi-Riemannian curvature tensor given by Semi-Riemannian manifolds with constant sectional curvature are called semi-Riemannian space forms. It is a well-known fact that the curvature tensor R adopts a simple formula in these manifolds: where G is the constant sectional curvature. When the curvature G vanishes, then M n q is called pseudo-Euclidean space and will be denoted by R n q . Let M 4 q (G) denote a 4-dimensional semi-Riemannian space form with index q = 1, 2, background gravitational field , and Levi-Civita connection ∇. A tangent vector v is: timelike if v, v < 0; is called null if its tangent vector is null at all points in the curve. Fixed a constant a > 0, we can consider (if γ is not a geodesic) the parameter σ a given by where s is any parameter. When a = 1 this parameter agrees with the pseudo arc-length parameter σ for the null curve. In fact, it is easy to show that σ a is nothing but a linear reparametrization of the pseudo arc-length parameter and it verifies Throughout this paper it will be supposed that we have fixed a constant a, σ a will be denoted by σ and we will also refer to it as the pseudo arc-length parameter. The Cartan frame of a non-geodesic null curve γ : The Cartan equations read where ∇ T denotes the covariant derivative along γ and k 1 , k 2 are the curvatures of the curve. The fundamental theorem for null curves tells us that k 1 and k 2 determine completely the null curve up to semi-Riemannian isometries (see [31]). Even more, if functions k 1 and k 2 are given we can always construct a null curve, pseudo arc-length parametrized, whose curvature functions are precisely k 1 and k 2 . 
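For reference, the causal-character terminology used throughout can be summarized as follows; this is the usual semi-Riemannian convention (see, e.g., [31]) and is recalled here only as a reminder of the definitions behind the discussion above.

```latex
\begin{equation*}
  v \ \text{is}\
  \begin{cases}
    \text{timelike}  & \text{if } \langle v,v\rangle < 0,\\
    \text{spacelike} & \text{if } \langle v,v\rangle > 0 \ \text{or } v = 0,\\
    \text{null}      & \text{if } \langle v,v\rangle = 0 \ \text{and } v \neq 0,
  \end{cases}
\end{equation*}
```

and a curve γ is called a null curve when its tangent vector is null at every point.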
Then any local scalar geometrical invariant defined along a null curve can always be expressed as a function of its curvatures and derivatives. A non-geodesic null curve being pseudo arc-length parametrized and admitting a Cartan frame as above is called a Cartan curve. The bundle given by span {W 1 , W 2 } is known as the screen bundle of γ (see [31]). Projections of the variation vector fields onto the screen bundle will play a leading role in this research. Let γ be a null curve, for the sake of simplicity the letter γ will also denote a variation of null curves (null variation) γ = γ(s, t) : We denote by η the differentiable function verifying ∂γ ∂s (s, t) = η(s, t)T (s, t), and by D ∂t the covariant derivative along the curves γ s (t) = γ(s, t). We write γ(σ, t), k i (σ, t), V (σ, t), etc., for the corresponding objects in the pseudo arc-length parametrization. Definition 2. Let X(γ) be the set of smooth vector fields along γ. We say that V ∈ X(γ) locally preserves the causal character if ∇ T V, T = 0. We also say that V locally preserves the pseudo arc-length parameter along γ if η(s, t) satisfies ∂η ∂t t=0 = 0. The following properties for null variations can be found in [17] when a = ε 1 = ε 2 = 1, but they can be easily adapted to the general situation. Lemma 3. If γ is a null variation, then its variation vector field V verifies where Thus we obtain that V locally preserves the causal character and, moreover, V locally preserves the pseudo arc-length parameter if and only if ρ V = 0, which in such a case also entails commutation of T and V . We define some functions that will play a key role in the rest of the paper, namely, given a vector field V ∈ X(γ) we consider the following projections of V and ∇ T V on the screen bundle given by Lemma 4. With the above notation, the following assertions hold: Proof. Set ∇ V = D ∂t the covariant derivative. From equation (11) we obtain where it has been used . Now, taking into account the formulas (8), (9) and (14) we have that Considering again (8), (9), (14) and (15) we deduce Since ∇ V N, N = 0, the tangent component of ∇ V N vanishes, and the expression for V (k 1 ) becomes As a consequence the vector field ∇ V N boils down to Finally, a similar computation leads to In the same way, since the component of and Consider Λ the space of pseudo arc-length parametrized null curves in M 4 q (G). For γ ∈ Λ, it is easy to see that T γ Λ is the set of all vector fields associated with variations of pseudo arc-length parametrized null curves starting from γ. It is clear that a vector field in T γ Λ locally preserves the causal character and the pseudo arc-length parameter. The converse can also be proved applying a similar procedure as in [2]. Proposition 5. A vector field V along γ ∈ Λ is tangent to Λ if and only if it locally preserves the causal character and the pseudo arc-length parameter, that is, Consequently where D −1 σ is a formal indefinite σ-integral. Furthermore, Proof. For a generic vector field V we obtain: If V ∈ T γ Λ, Lemma 3 implies that In such a case, by using equations (25) and (26) we deduce that Last equations easily give rise to (23). Expression (23) becomes and the following holds Replacing f V into (28) and rearranging terms, we easily obtain (24). Conversely, if V is a vector field verifying (27), then it arises from an infinitesimal variation of null curves. 
Indeed, according to the Cartan equations (10), Lemma 4 and formulas (23) and (28), we consider the matrices Following the same procedure as described in Lemma 1 of [2] we can construct a null curve variation of γ whose variation vector field is V . From Proposition 5, a tangent vector field V ∈ T γ Λ and its covariant derivative ∇ T V are expressed by Remark 6. Observe that a tangent vector field V ∈ T γ Λ is completely determined by the differential functions h V and l V and two constants, since the operator D −1 σ is used twice; once for obtaining g V from h V and once more for obtaining f V from h V and l V . Therefore, given two differential functions h V and l V and two constants, we can construct a vector field locally preserving pseudo arc-length parameter along γ whose projections on the screen bundle are precisely h V and l V . Both constants could be determined or related if constraints on null curve variation are considered, but for our algebraic purposes, we will consider generic constants. A Lie algebra structure on local vector fields Our objective now is to define a Lie algebra structure on the set of local vector fields which locally preserve the causal character. To this end, we need first to set up the spaces in which we are going to work. Let k 1 and k 2 be smooth functions defined on an interval I and set P the real algebra of polynomials in k 1 , k 2 and their derivatives of arbitrary order, i.e., q (G) be a null curve with curvatures k 1 and k 2 , and consider the set of vector fields along γ whose components are polynomial functions An element of X P (γ) will be called a P-local vector field along γ. The set of P-local vector fields (locally preserving the causal character) will be denoted by and within it, the P-local variation vector fields locally preserving pseudo arc-length parameter are described as In this context, from Proposition 5 and taking into account Remark 6, we can explicitly calculate the P-local pseudo arc-length preserving variation vector fields by means of its constants of integration. Proposition 7. Let V be a vector field in X P (γ), then V ∈ T P,γ (Λ) if and only if it is fulfilled that where c 1 , c 2 are constants, and ∂ −1 σ is the anti-derivative operator verifying that ∂ −1 σ • ∂ σ = I when acting on P 0 . Given a pair of functions (h, l) ∈ Q and two constants c 1 and c 2 , we will denote by X (h, l) the P-local pseudo arc-length preserving variation vector field The vector fields V 0 and V 1 will be the starting point of the commuting hierarchy of symmetries in section 5. Note that to introduce the concept of symmetry (and so a recursion operator) and furnish the phase space of null curve motions with a formal variational calculus in section 5, an appropriate Lie bracket on the set of local vector fields should be defined. To this end, we first introduce a convenient derivation on both the differential algebra and the local vector fields along a null curve. Motivated by [32] and bearing in mind Lemma 4, given V ∈ X * P (γ), we denote by D V : X P (γ) → X P (γ) the unique tensor derivation fulfilling: where α V , β V and δ V are given in (13). We now restrict our definition of Lie bracket only on the set X * P (γ), which will be enough for our purposes. Proof. Given two vector fields The components g 12 and h 12 of V 12 are given by Since V 1 , V 2 ∈ X * P (γ), they verify the formulas (23) and (28) which, together with definitions of α i and β i , lead to the relation g ′ 12 = −aε 1 h 12 . 
The latter is the condition equivalent to [V 1 , V 2 ] γ ∈ X * P (γ), thus proving (a). To prove (b), it is sufficient to check the same equality solely for the generators k 1 and k 2 of the algebra P. We shall calculate the expressions of ϕ 12 , ψ 12 and ρ 12 (corresponding functions to the Lie bracket [V 1 , V 2 ] γ ), by means of ϕ i , ψ i and ρ i (corresponding functions to vector is any vector field, we have: Bearing in mind relations (34) and expressions of V (k 1 ) and V (k 2 ) obtained in Lemma 4, we deduce: Because of symmetry of formulas (35), deleting terms with repeated factors and rearranging the other, we obtain , From Lemma 4 we obtain Expanding each earlier term by using When adding up (37), (38), (39), (40), (41), (42) and (43), and making a long but easy computation we obtain Using again Lemma 4 we have In the same way as (36), we can compute the terms of (44), After some work, it also follows from (45), (46), (47), (48) and (49) that The paragraph (c) is a direct consequence of the definition, and (f ) is also trivial taking into account the expression for ρ 12 given in (36). The paragraph (d) follows from a straightforward computation. Finally, to prove (e), let us denote R(V 1 , we obtain In particular, Proposition 9 entails that the set of P-local pseudo arc-length preserving variation vector fields T P,γ (Λ) is a Lie subalgebra of the Lie algebra of the P-local vector fields (X * P (γ), [, ] γ ). Before turning to study the geometric hierarchies of null curve flows in section 5, we point out that equations (24) and (32) are particularly noteworthy when vector fields locally preserve the pseudo arc-length parameter and the curvature G vanishes. Equations (24) and (32) may be rewritten as with ω( It should be remarked that A and B come very close to being the symplectic and cosymplectic operators, respectively, for (up to scaling) the Hirota-Satsuma system (see [12,13]). Equation (50) can also be regarded as where It is now therefore evident that J and Θ are the symplectic and cosymplectic operators respectively for a rescaling of the HS-cKdV system. They have been obtained in a natural way using projections onto the screen bundle of both, the variation vector field V and its covariant derivative ∇ T V . This allows automatically to determine the recursion operator R = Θ • J, and the crucial relation Somewhat analogous relationships were obtained between curve evolution in 3-dimensional Riemannian manifolds in [19] (or more generally in n-dimensional Riemannian manifold with constant curvature in [20]) and the mKdV system. The above connection together with the availability of the Lie bracket provided by Proposition 9 will be employed to study the integrability of null curve evolution in the next section. Geometric hierarchies of null curve flows The background given in [1] for the 3-dimensional case used to construct a commuting hierarchy for null curve evolutions can also be well adapted to the 4-dimensional case. Consider Λ the space of pseudo arc-length parametrized null curves in the pseudo-Euclidean space R 4 q . A map f : Λ → C ∞ (I, R) is referred to a scalar field on Λ and f (γ) will be also denoted by f γ . Let A be the algebra of P-valued scalar fields on Λ, i.e., if f ∈ A, then f γ ∈ P for all γ ∈ Λ. In this sense, we will also understand the curvatures scalar fields k 1 , k 2 : Λ → C ∞ (I, R) with its obvious meaning. Similarly, a map V : Λ → ∪ γ∈Λ T γ Λ is referred to as a vector field on Λ, and V (γ) will be also denoted by V γ . 
We shall denote the set of tangent vector fields on Λ as X(Λ), and within we consider the subset X A (Λ) of vector fields V such that V γ ∈ X P (γ), namely, if we denote where the derivative and anti-derivative operators act on scalar fields as f ′ (γ) = f ′ γ and D −1 σ (f )(γ) = D −1 σ (f γ ) respectively. Thus X A (Λ) stands for the set of A-local vector fields locally preserving the pseudo arc-length parameter and the causal character. These vector fields commute with the tangent vector field T , so they will be called evolution vector fields. We also denote byX A (Λ) and X * A (Λ) the sets of vector fields V such that V γ ∈ X P (γ) and V γ ∈ X * P (γ), respectively. Hence, Remark 10. In what follows, we shall operate with scalar fields and vector fields in the natural way, understanding that the result of the operation is again a scalar field or vector field. For instance, if again a vector field, where ∇ T V (γ) = ∇ Tγ V γ and so on. The operator D V can also be described in other words when V ∈ X A (Λ). Consider γ be a null curve in Λ, and suppose that V γ (σ) = ∂γ ∂t (σ, 0), then In fact, the tensor derivation D V is an extension of the Fréchet derivative defined in (6) for derivations to vector fields on the null curves space. In this way, this operator can be easily translated to the context of any other type of curves. The Lie algebra structure on local vector fields locally preserving the causal character provided by Proposition 9 (along a particular curve) can also be easily extended on the set X * A (Λ). is a Lie bracket verifying the following Hence, [, ] is a Lie bracket, (X * A (Λ), [, ]) is a Lie algebra and the space of evolution vector fields (X A (Λ), [, ]) is a Lie subalgebra of X * A (Λ). Let us define der(A) as the set of derivations on A defined in the natural way and der * (A) the Lie subalgebra of all evolution derivations. In this setting, the elements of der * (A) are given by ∂ (p,q) , with p, q ∈ A, such that they are defined as usual by ∂ (p,q) f (γ) = ∂ (pγ ,qγ ) f γ , for all f ∈ A and γ ∈ Λ. Each vector field V on Λ can be regarded as a derivation on A, acting on the generators k 1 and k 2 in the following way: Theorem 12. The map Φ : X A (Λ) → der * (A) defined by and (V 2 (k 1 ), V 2 (k 2 )) commute with respect to the usual Lie bracket for scalar fields. Proof. Since the map Φ is clearly linear, it is enough to prove that Φ keeps the Lie bracket, i.e., , the latter being equivalent to show that From the last equality of Proposition 9(b) we deduce and it is therefore satisfied for all γ ∈ Λ. Finally, formula (56) is followed from: In order to prove the injectivity we shall prove that ∂ (V (k 1 ),V (k 2 )) = 0 implies V = 0, which is equivalent to proving that V (k 1 ) = V (k 2 ) = 0 implies V = 0. As a first step we will prove that if V (k 1 ) = V (k 2 ) = 0 then ϕ V = ψ V = 0 and so, from formula (29), ∇ T V = 0. According to (52) we have that For a scalar field f ∈ A we denote by ord(f ) the order of the highest derivative (with respect to both k 1 or k 2 ) appearing in f , i.e., Suppose that ord(ϕ V ) = n = 0, then we have that ord(θ(k 1 )ϕ V ) = n + 3. Since V (k 1 ) = 0 it is necessarily obtained that ord(S(k 2 )ψ V ) = n + 3, whence ord(ψ V ) = n + 2. Accordingly, ord(θ(k 1 )ψ V ) = n + 5, which together with the equation V (k 2 ) = 0 would lead to ord(ϕ V ) = n + 4 and so a contradiction. 
Therefore the scalar field ϕ V is constant, ϕ V = c, and it would verify the Suppose that ord(l V ) = n = 0, then the equations (57) give rise to the following implications ord(g V ) = n + 1 ⇒ ord(h V ) = n + 2 ⇒ ord(f V ) = n + 3. Nevertheless, those orders of derivation represent a direct contradiction to the first equation in (57) Remark 13. From Theorem 12 we have that Im(Φ) is a Lie subalgebra of the algebra der * (A) of all evolution derivations. Thus, we conclude that the algebra of evolution vector fields X A (Λ) on Λ can be regarded as a Lie subalgebra of the evolution derivations. Consider the vector fields V 0 = bT and V 1 = −ack 1 T − 2ε 1 a 2 cN borrowed from Example 8. Their flows γ t ∈ Λ are governed by the equations which in turn induce evolutions for the curvature functions k 1 and k 2 given by Observe that the evolution equation (61) is a generalization for the Hirota-Satsuma equation (1) (through suitable constants). Besides, the flows associated to the curvatures given by V 0 and V 1 are basically the flows σ 0 and σ 1 given in (4). We refer to the equation (59) induced by V 1 (that also appears in [3]) as the null localized induction equation (NLIE). Theorem 12 will be used below to obtain a recursion operator for NLIE, and thereby prove its integrability. Proposition 14. The operator R acting on symmetries V as follows is a recursion operator for NLIE. Proof. Let U = R(V ). Then by definition and making use of the equation (54) we obtain The result can be easily deduced as a consequence of Theorem 12. In fact, it was not possible to extend the procedure used in [1] to obtain the recursion operator and the Hamiltonian structure at the curve level to the 4-dimensional setting, mainly because of the appearance of nonlocal vector fields. Searching for a Hamiltonian structure at the curve level for the 4-dimensional case will be one of the subject for future research. Conclusions In this paper, our primary aim was to study the integrability properties of null curve evolutions in a flat 4-dimensional background. We undertook our research in an enough degree of generality for the purpose of showing the role of the constants appearing on it, especially when they possess geometrical meaning. In that regard, it is particularly important the way in which the computations were conducted to expose the most important elements of the Hamiltonian structure for curvature flows. One of the most surprising fact was to obtain the recursion operator (split into both the Poisson operator and the symplectic operator in formulas (52) and (53)) of the Hirota-Satsuma system by means of the geometry of null curves or, more precisely, making use of the projection of convenient variation vector fields onto the screen bundle. Similar results were obtain in [19,20], this suggesting that the screen bundle of a null curve may be thought of as playing the same role of the normal bundle in a Riemannian curve. We can therefore also state the following important conclusion: if we have a evolution vector field V with (ϕ V , ψ V ) having the property of being the gradient of a certain functional H, then the flow associated to the curvatures (V (k 1 ), V (k 2 )) is a completely integrable Hamiltonian system. Furthermore, in Proposition 14 we have lifted the recursion operator for the Hirota-Satsuma system (at the curvature level) to a recursion operator for the NLIE equation (at the curve level), enabling us to obtain an infinite hierachy of commuting vector fields. 
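Because the numbered displays in this excerpt did not survive extraction, the following is only a schematic summary, in our own notation, of how the hierarchy described above is generated; it is not a reproduction of the paper's formulas. Starting from the seed fields V 0 and V 1 of Example 8, the recursion operator R = Θ • J recovered from the screen-bundle projections is applied repeatedly, and Theorem 12 then transfers the resulting commuting vector fields to commuting flows for the curvatures (k 1 , k 2 ).

```latex
% Schematic only: \mathcal{R} = \Theta \circ \mathcal{J} is the recursion operator
% obtained from the screen-bundle projections, and V_0, V_1 are the seed fields
% of Example 8; the notation is ours, not the paper's numbered formulas.
\begin{align*}
  V_{k+1} &= \mathcal{R}(V_k) = (\Theta \circ \mathcal{J})(V_k), \qquad k \ge 0,\\[2pt]
  [V_j, V_k]_{\gamma} &= 0 \quad \text{for all } j,k \ge 0,\\[2pt]
  \partial_{t_k}\!\begin{pmatrix} k_1 \\ k_2 \end{pmatrix}
    &= \begin{pmatrix} V_k(k_1) \\ V_k(k_2) \end{pmatrix}
    \quad \text{(the induced commuting curvature flows).}
\end{align*}
```

Under this reading, the second and third lines are exactly the two levels at which integrability is established in the text: commutativity of the geometric vector fields at the curve level, and commutativity of the induced curvature flows guaranteed by the injective Lie algebra map of Theorem 12.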
Proposition 11 shows that the subspace of A-local evolution vector fields (denoted by X A (Λ)) is closed under the bracket and contains the commuting flows as a subalgebra. One of the benefits of increasing the dimension of the ambient space is that the connections between the integrable hierarchies of null curves and of their curvature flows become clearer. Nevertheless, a Hamiltonian structure at the curve level remains to be found. In addition, it would be interesting to develop a purely geometric method to construct the existing structures of the dynamics of null curve motions without lifting any element from the curvature flow. Accordingly, further work is needed, perhaps in a nonlocal setting, to properly understand the resulting evolution equation, which has also appeared in different contexts.
Rapid Depletion of DIS3, EXOSC10, or XRN2 Reveals the Immediate Impact of Exoribonucleolysis on Nuclear RNA Metabolism and Transcriptional Control Summary Cell-based studies of human ribonucleases traditionally rely on methods that deplete proteins slowly. We engineered cells in which the 3′→5′ exoribonucleases of the exosome complex, DIS3 and EXOSC10, can be rapidly eliminated to assess their immediate roles in nuclear RNA biology. The loss of DIS3 has the greatest impact, causing the substantial accumulation of thousands of transcripts within 60 min. These transcripts include enhancer RNAs, promoter upstream transcripts (PROMPTs), and products of premature cleavage and polyadenylation (PCPA). These transcripts are unaffected by the rapid loss of EXOSC10, suggesting that they are rarely targeted to it. More direct detection of EXOSC10-bound transcripts revealed its substrates to prominently include short 3′ extended ribosomal and small nucleolar RNAs. Finally, the 5′→3′ exoribonuclease, XRN2, has little activity on exosome substrates, but its elimination uncovers different mechanisms for the early termination of transcription from protein-coding gene promoters. INTRODUCTION The RNA exosome is a multi-subunit, 3 0 /5 0 exoribonucleasecontaining complex originally discovered as being important for rRNA processing (Mitchell et al., 1997). It also plays a crucial role in the turnover of multiple coding and non-coding (nc) transcript classes (Kilchert et al., 2016;Schmid and Jensen, 2018). Many of these transcripts, such as cryptic unstable transcripts (CUTs) in yeast or promoter upstream transcripts/upstream antisense RNAs (PROMPTs/uaRNAs) in humans, are products of antisense transcription (Flynn et al., 2011;Preker et al., 2008;Wyers et al., 2005). An additional class of ncRNAs in humans, termed enhancer RNAs (eRNAs), are produced from divergent transcription at intergenic enhancer sequence elements. Like many other pervasive transcripts, eRNAs are highly sensitive to exosome degradation (Andersson et al., 2014). More recently, products of premature cleavage and polyadenylation (PCPA) were also revealed as exosome substrates in mouse embryonic stem cells (mESCs) (Chiu et al., 2018). The structure of the exosome is similar in yeast and humans and is composed of 9-11 key protein subunits (Gerlach et al., 2018;Januszyk and Lima, 2014;Makino et al., 2013;Weick et al., 2018). It possesses a catalytically inactive barrel structure of 9-core subunits (EXO-9), arranged as a hexamer (the PH-like ring) capped with a trimeric S1/KH ring. EXO-9 interacts with two 3 0 /5 0 exoribonucleases: EXOSC10 (Rrp6 in budding yeast) and DIS3 (also known as Rrp44) (Makino et al., 2013). In budding yeast, DIS3 is present in both nuclear and cytoplasmic exosome complexes, but Rrp6 is found only in the nuclear complex (Allmang et al., 1999b). The composition of the exosome is more complicated in humans due to the presence of DIS3 subtypes; however, the canonical DIS3 is predominantly found within the nucleoplasm (Tomecki et al., 2010). Similar to Rrp6, EXOSC10 is nuclear and is enriched within the nucleolus (Tomecki et al., 2010). While DIS3 and the core exosome components are essential in budding yeast, cells lacking Rrp6 are viable (Allmang et al., 1999b;Briggs et al., 1998). EXOSC10 is a member of the RNase D family and contains a DEDD-Y active site providing distributive exoribonuclease activity (Januszyk et al., 2011). 
DIS3 is a processive ribonuclease related to the RNase II/R family, possessing an RNB and N-terminal PIN domain, and is capable of both exoribonuclease and endoribonuclease activity (Lebreton et al., 2008;Schneider et al., 2009). When interacting with the exosome complex, Rrp6 is localized on top of the S1/KH cap, close to the entry pore leading into the central channel passing through EXO-9, whereas DIS3 is associated with the channel exit pore at the opposing pole of EXO-9 (Makino et al., 2013;Wasmuth et al., 2014). Rrp6 can widen the entry pore leading into the central channel of EXO-9 facilitating threading of RNAs through EXO-9 toward DIS3 (Wasmuth et al., 2014). RNA substrates entering the S1/KH cap can also be directed to the active site of Rrp6 for trimming and degradation. Exosome activity is further enhanced by a range of co-factors, including the helicase MTR4 (Lubas et al., 2011;Weick et al., 2018). Genome-wide characterization of human exosome substrates have reported DIS3 as the main ribonuclease subunit responsible for degrading PROMPTs, prematurely terminated proteincoding transcripts, and eRNAs (Szczepi nska et al., 2015). The targets for EXOSC10 in human cells are less well characterized, but include rRNA precursors Sloan et al., 2013). In budding yeast, the active site of Rrp6 can aid in the processing of RNA substrates with more complex secondary structures, which is important during the maturation of precursor rRNAs (Fromm et al., 2017). Uncovering previously unknown RNAs has also increased our understanding of transcriptional regulation. For example, the discovery of PROMPTs helped to identify bi-directional transcription from most human promoters (Preker et al., 2008). While our study was in progress, products of PCPA were found to be stabilized by exosome loss, indicating that a proportion of truncated protein-coding RNA precursors are degraded (Chiu et al., 2018). This process is influenced by the recruitment of U1 small nuclear RNA (snRNA) to pre-mRNA and may constitute a transcriptional checkpoint. Both PROMPTs and PCPA products frequently have poly(A) signals (PASs) at their 3 0 ends and possess poly(A) tails when the exosome is depleted (Almada et al., 2013;Ntini et al., 2013). As such, a PAS-dependent mechanism is proposed for attenuating their transcription. Studies of the exosome complex in human cells usually involve protein depletion by RNAi, which is slow. The advantages of rapid, versus slower, depletion include reduced opportunities for compensatory effects and an ability to identify the most acute substrates rather than more gradual accumulation of RNA during long time periods, which could be indirect. This is also useful when inferring how frequently a process takes place, which is more difficult when protein depletion occurs during a period of days. We engineered human cells for rapid, inducible degradation of EXOSC10 or DIS3. Both catalytic components are essential, but DIS3 degrades the majority of nuclear exosome substrates. Direct detection of EXOSC10 substrates revealed a role in the maturation of small nucleolar RNAs (snoRNAs),reminiscent of the situation in budding yeast (Allmang et al., 1999a). Finally, the 5 0 /3 0 exonuclease XRN2 showed little activity on any exosome substrate. However, it promotes the early termination of a subclass of transcription events from proteincoding genes, suggesting a variety of such mechanisms. 
Depletion of EXOSC10 or DIS3 Using the Auxin-Inducible Degron System The auxin-inducible degron (AID) system allows the rapid elimination of AID-tagged proteins upon the addition of auxin to cell culture media (Nishimura et al., 2009). CRISPR/Cas9 was used to C-terminally tag EXOSC10 or DIS3 with an AID ( Figure 1A). Hygromycin or neomycin resistance markers were incorporated into the cassettes for homology directed repair (HDR) so that biallelic modification could be selected for . A P2A site, between the AID and drug markers, ensured their separation via peptide cleavage during translation (Kim et al., 2011). This system requires expression of the plant E3 ubiquitin ligase, Tir1, which we previously introduced stably into HCT116 cells, chosen for their diploid karyotype. Western blotting confirmed successful AID tagging of EXOSC10 as a species of the predicted molecular weight of EXOSC10-AID was detected in EXOSC10-AID cells with native-sized protein absent ( Figure 1B). This was confirmed by the exclusive detection of native-sized EXOSC10 in parental HCT116:TIR1 cells. A time course of auxin addition demonstrated rapid depletion of EXOSC10-AID, which was reduced by 97% after 60 min with native EXOSC10 insensitive to auxin. Western blotting also showed the exclusive presence of DIS3-AID in DIS3-AID cells and its depletion upon auxin treatment (Figure 1C). DIS3-AID is expressed at lower levels than native DIS3, and quantitative reverse transcription and PCR showed that there is a 50% reduction in spliced DIS3-AID mRNA ( Figure 1C). A monoclonal antibody to the AID tag also detected DIS3-AID, which is absent from HCT116:TIR1 cells and eliminated within 60 min of auxin treatment ( Figure 1D). Although DIS3-AID is expressed at lower levels than native DIS3, it does not limit the association of essential co-factors with the exosome core, as we observed equal co-immunoprecipitation of EXOSC2 with GFP-MTR4 in DIS3-AID and parental cells ( Figure 1E). To demonstrate the specificity of EXOSC10-AID and DIS3-AID depletion, we monitored the levels of several exosome components (EXOSC10, DIS3, EXOSC2, EXOSC3, and MTR4) in parental, DIS3-AID, and EXOSC10-AID cells treated or not treated with auxin ( Figure 1F). Tagging EXOSC10 or DIS3 had no impact on the levels of other exosome factors in the absence of auxin. Auxin treatment specifically eliminated the tagged factors without co-depleting other proteins. Rapid Depletion of EXOSC10-AID or DIS3-AID Leads to Accumulation of Unstable RNAs We next tested the effects of eliminating EXOSC10-AID or DIS3-AID on some of their known substrates. To check for any adverse effects of auxin addition or the AID tag, we added the parental HCT116:TIR1 cells to the experimental series. Depletion of EXOSC10 has been shown to stabilize a short 3 0 extended version of the 5.8S rRNA (Allmang et al., 1999b;Briggs et al., 1998;Schilders et al., 2007). We performed northern blotting on total RNA isolated from EXOSC10-AID cells treated or not treated with auxin for 60 min and probed blots for either mature or 3 0 extended 5.8S rRNA (Figure 2A). 3 0 extended 5.8S rRNA was weakly detected in treated and untreated HCT116:TIR1 cells and in untreated EXOSC10-AID cells. However, auxin treatment of EXOSC10-AID cells induced a strong increase in its levels. As such, acute depletion of EXOSC10 is sufficient to reveal its RNA substrates with no apparent adverse effect of the AID tag. 
For DIS3, we analyzed the levels of 3 PROMPTs (STK11IP, SERPINB8, and RBM39) and 1 antisense transcript (FOXP4-AS). This was done in DIS3-AID cells treated or not treated with auxin (60 min) and in HCT116:TIR1 cells grown under the same conditions ( Figure 2B). Quantitative reverse transcription and PCR showed no auxin-dependent changes in HCT116:TIR1 cells, as expected. PROMPT levels were similarly low in DIS3-AID cells untreated with auxin, demonstrating that DIS3-AID is sufficient for their normal turnover. However, auxin treatment of DIS3-AID cells results in a large increase in all cases, confirming the effectiveness of this system. DIS3 and EXOSC10 Are Essential in Human Cells We next tested whether EXOSC10 and DIS3 are required for cell viability. Colony formation assays were performed on EXOSC10-AID or DIS3-AID cells grown in the presence and absence of auxin and on HCT116:TIR1 cells under the same conditions. HCT116:TIR1 cells formed a similar number of colonies in the presence and absence of auxin, demonstrating no adverse effects of auxin on viability ( Figure 2C). DIS3-AID cells formed as many colonies as HCT116:TIR1 cells when auxin was omitted, but their smaller size highlights a slight reduction in growth. No DIS3-AID cell colonies formed in the presence of auxin, showing that DIS3 is essential. EXOSC10-AID cells showed no statistically significant defect in colony formation in the absence of auxin, compared to HCT116:TIR1 cells ( Figure 2D). However, auxin prevented the formation of EXOSC10-AID cell colonies, showing that EXOSC10 is essential. This contrasts with budding yeast, in which Drrp6 cells are viable (Allmang et al., 1999b). Nuclear RNA-Seq Analysis following EXOSC10-AID or DIS3-AID Elimination We next analyzed the immediate impact of EXOSC10 and DIS3 loss more globally. Nuclear RNA was extracted from EXOSC10-AID or DIS3-AID cells that had been treated or not treated with auxin for 1 h and performed RNA sequencing (RNA-seq). Nuclear RNA was chosen, as we anticipated most exosome substrates to be enriched in the nucleus. We first analyzed PROMPTs and found an obvious accumulation upon the loss of DIS3 ( Figure 3A). Metagene analysis shows that PROMPTs accumulate at thousands of genes when DIS3 is absent ( Figure 3B). The global increase in PROMPT levels within just 60 min of auxin treatment underscores their acute instability. Further examination of the metaplot in Figure 3B revealed no impact of either exosome subunit on the stability of 3 0 flanking region RNAs, consistent with our finding that these species are XRN2 substrates . Acute depletion of EXOSC10 had no effect on PROMPT transcripts, suggesting that they are not its immediate substrates. Hundreds of intergenic transcripts were also seen upon DIS3 elimination, which were barely detectable in the absence of auxin. We presume that these are eRNAs because separating sequencing reads into sense and antisense strands showed their bidirectionality ( Figure 3C). Moreover, these regions have high H3K4me1 versus H3K4me3 modified chromatin at their promoter regions, as do enhancers (Andersson et al., A metagene analysis of these transcripts confirmed the generality of the DIS3 effect and, as with PROMPTs, shows that they are generally not substrates for EXOSC10 ( Figure 3D). Our experiment again highlights the acute instability of eRNAs and straightforward uncovering of almost 1,000 examples upon DIS3 loss. 
This is a similar number to what has been reported in other mammalian cells when the exosome was depleted during several days (Pefanis et al., 2015). Protein-coding promoters also produce a variety of exosome substrates in the sense direction, some of which are generated by PCPA (Chiu et al., 2018;Iasillo et al., 2017;Ogami et al., 2017). Truncated pre-mRNA products are readily apparent in our data following rapid depletion of DIS3, but not when EXOSC10 is lost ( Figure 3E). A prominent example is observed for PCF11 pre-mRNA, which is subject to PCPA in mESCs (Chiu et al., 2018). To test the generality of DIS3-mediated turnover of truncated pre-mRNAs, we generated a metagene plot covering the first intron of genes ( Figure 3F). This showed an obvious enhancement of intron 1 levels in cells depleted of DIS3, with no effect of EXOSC10 loss observed. This effect is still evident when intron read counts are normalized to those over the first exon, but is diminished over the second or fourth intron (Figures S1C-S1E). The robust accumulation of such RNAs within minutes of DIS3 loss is an important observation that underscores the high frequency of attenuated transcription. All of the above DIS3 effects were confirmed in an independent biological RNA-seq replicate ( Figure S2). N=4356 Intron 1 There Is Little Redundancy between EXOSC10 and DIS3 Activity on Nucleoplasmic PROMPTs A striking outcome of our RNA-seq data is the lack of effect of EXOSC10 on the thousands of nucleoplasmic exosome substrates degraded by DIS3. In contrast, depletion of EXOSC10 by RNAi often affects nucleoplasmic transcripts, and co-depletion of EXOSC10 and DIS3 can produce synergistic effects that imply some redundancy (Lubas et al., 2011;Tomecki et al., 2010). To analyze the effects of EXOSC10 on nucleoplasmic substrates more closely, we performed a more extended time course of auxin treatment (4 and 8 h) in EXOSC10-AID or DIS3-AID cells, followed by the quantitation of SEPHS1, RBM39, and PPM1G PROMPTs ( Figure 4A). While DIS3-AID loss increases the levels of all 3 transcripts, none were significantly affected by the absence of EXOSC10-AID. MTR4 associates with the exosome core whether EXOSC10-AID is present or not, supporting the existence of functional complexes, even when EXOSC10 is absent ( Figure 4B). We next treated EXOSC10-AID cells for 24, 48, or 72 h with auxin, which revealed a mild increase in PROMPTs at longer time points ( Figure 4C). As EXOSC10 effects require long-term protein depletion, this increase could be due to the indirect consequences of its loss or reflective of very occasional roles in PROMPT turnover. This is not an indirect effect of auxin, as PROMPT levels were unaffected in parental cells after 72 h of treatment ( Figure 4D). The absence of acute effects of EXOSC10 on PROMPTs argues that DIS3 degrades them in its absence. To test this, DIS3-AID cells were transfected with control or EXOSC10-specific small interfering RNAs (siRNAs) before treatment or no treatment with auxin. Quantitative reverse transcription and PCR was then used to analyze the levels of SEPHS1, RBM39, and PPM1G PROMPTs ( Figure 4E). DIS3 elimination from control siRNAtreated cells caused the upregulation of each PROMPT as expected. For RBM39, this effect was generally not as large as in (E) Quantitative reverse transcription and PCR analysis of PPM1G, SEPHS1, and RBM39 PROMPTs in DIS3-AID cells transfected with control or EXOSC10-specific siRNAs before treatment or no treatment with auxin (1 h). 
Levels are expressed as fold change compared to control siRNA transfected cells not treated with auxin following normalization to GAPDH mRNA. n = 4. *p < 0.05 for differences concluded on in the text. Error bars show SDs. (F) EXOSC10 immunofluorescence in untreated DIS3-AID cells or the same cells treated with auxin for 1, 2, 3, or 4 h. The same cells stained with nucleolin are also shown. The red arrowheads show EXOSC10 puncta that do not overlap with nucleolin signal. Figure 2B, which may result from the additional perturbation caused by RNAi. EXOSC10 depletion caused an increase in PROMPT levels, even in the presence of DIS3-AID, which is consistent with the small effect of EXOSC10-AID loss at long time points of auxin treatment. Auxin treatment of EXOSC10depleted DIS3-AID cells revealed a larger enhancement of PROMPT levels than the depletion of either protein alone. As such, although EXOSC10 plays little role in PROMPT RNA degradation under normal circumstances, its presence may be more important when DIS3 levels are very low. DIS3 Loss Disrupts Focused Nucleolar Localization of EXOSC10 To understand why low DIS3 levels may lead to degradation of some nucleoplasmic exosome substrates by EXOSC10, we monitored its localization in DIS3-AID cells treated or not treated with auxin over a time course ( Figure 4F). As previously reported (Lubas et al., 2011;Tomecki et al., 2010), EXOSC10 is nucleolar enriched as shown by co-localization with nucleolin. DIS3-AID loss resulted in less focused nucleolar localization of EXOSC10 (also see Figure S3A). This was not due to a breakdown of nucleoli, as nucleolin signal showed little alteration in the same cells. Furthermore, at extended time points of DIS3-AID loss, we observed nucleoplasmic puncta of EXOSC10 in 25% of cells that do not overlap with nucleolin signal. EXOSC10 localization in DIS3-AID cells is identical to the parental cell line, and analysis of wider fields of cells confirmed the generality of the effects (Figures S3B and S3C). We conclude that DIS3-AID loss disrupts the normally focused nucleolar localization of EXOSC10, which may allow it to engage with nucleoplasmic substrates and potentially explain the synergistic effect of EXOSC10 and DIS3 co-depletion on PROMPTs. EXOSC10 Is Involved in 3 0 Trimming of Pre-rRNA and Pre-snoRNA Transcripts We next wanted to identify specific substrates of EXOSC10 and used individual-nucleotide resolution UV crosslinking and immunoprecipitation (iCLIP) to detect transcripts to which it directly binds. We complemented previous iCLIP data, generated using functional EXOSC10 (EXOSC10 WT ) in HEK293T cells , with iCLIP using a catalytically dead version of EXOSC10 (EXOSC10 CAT ) also expressed in HEK293T cells. EXOSC10 CAT contains a single substitution (D313N) previously shown to abolish EXOSC10 activity (Januszyk et al., 2011). We reasoned that EXOSC10 CAT would associate more stably with EXOSC10 substrates and facilitate their detection. As EXOSC10 loss leads to the accumulation of 3 0 extended 5.8S rRNA (Figure 2A), we validated our iCLIP data by first assessing this potential substrate. There was a strong iCLIP signal specifically at this site in EXOSC10 CAT samples, which had 33fold more reads than EXOSC10 WT mapping within a 30-nt window downstream of 5.8S ( Figure 5A). This large read density seen in EXOSC10 CAT indicates that the catalytic mutant blocks the processing of pre-5.8S and underscores it as a bona fide EXOSC10 substrate. 
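The 33-fold enrichment of EXOSC10 CAT over EXOSC10 WT iCLIP reads in the 30-nt window downstream of 5.8S rRNA just described is, in essence, a windowed read-count ratio. The sketch below shows that style of calculation on simplified inputs (lists of crosslink positions plus library sizes); the function names, the reads-per-million scaling, and the pseudocount guard are our assumptions, not the published pipeline.

```python
def reads_in_window(positions, start, width=30):
    """Count crosslink/read positions falling in the half-open window [start, start + width)."""
    return sum(start <= p < start + width for p in positions)


def window_enrichment(cat_positions, wt_positions,
                      cat_library_size, wt_library_size,
                      window_start, width=30, pseudocount=1.0):
    """Fold enrichment of the mutant (CAT) over wild-type (WT) signal in a fixed
    window, after scaling each raw count to reads per million mapped reads.
    The pseudocount simply guards against division by zero in empty windows."""
    cat = reads_in_window(cat_positions, window_start, width) / cat_library_size * 1e6
    wt = reads_in_window(wt_positions, window_start, width) / wt_library_size * 1e6
    return (cat + pseudocount) / (wt + pseudocount)


# Purely illustrative numbers, not the published counts: positions are coordinates
# of crosslink sites on the pre-rRNA, and window_start is the first nucleotide
# downstream of the annotated 3' end of 5.8S rRNA.
cat_sites = [10_005, 10_007, 10_012, 10_020, 10_025, 10_028]
wt_sites = [10_015]
print(round(window_enrichment(cat_sites, wt_sites,
                              cat_library_size=2_000_000,
                              wt_library_size=2_000_000,
                              window_start=10_000), 2))
```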
The expression of inactive EXOSC10 in EXOSC10-AID cells consistently enhances the levels of extended 5.8S RNA in a dominant-negative fashion ( Figures S4A and S4B). Read density rapidly drops beyond 30 nt down-stream of the annotated end of 5.8S rRNA, suggesting that EXOSC10 is required only for the final nuclear trimming step. This indicates a ribonuclease switch and is consistent with reconstituted 5.8S rRNA maturation in budding yeast, during which DIS3 processing is sterically inhibited by the exosome core, necessitating handover to Rrp6 (Fromm et al., 2017;Makino et al., 2015). Analysis of the entire 45S rDNA showed significant CLIP density over the 5 0 external transcribed spacer (ETS) in both EXOSC10 WT and EXOSC10 CAT ( Figure S4C). We reasoned that the 30-nt ''footprint'' downstream of the 5.8S rRNA, seen in EXOSC10 CAT samples, can identify other RNAs that are subject to final processing by EXOSC10. Obvious 30-nt footprints of CLIP density were identified in 3 0 flanking regions of snoRNAs, with examples shown for SNORA69 and SNORD18C in Figure 5B. Metagene analyses of the average distribution of EXOSC10 iCLIP reads over annotated snoRNAs indicate that EXOSC10 engages in processing pre-snoRNAs that are extended at their 3 0 ends by 30 nt due to the specific enrichment of CLIP density exclusively seen in the EXOSC10 CAT iCLIP dataset ( Figure 5C). A majority of snoRNAs in both the SNORD and SNORA classes showed this signature of EXOSC10 CAT binding ( Figure S5A). Analysis of our RNA-seq data independently revealed examples in which short extended snoRNA precursors are specifically stabilized by EXOSC10 loss ( Figure 5D). Overall, these data identify short 3 0 extended RNA precursors as EXOSC10 substrates. The implication of EXOSC10 in human snoRNA processing highlights conservation with budding yeast in which Rrp6 performs a similar 3 0 trimming step (Allmang et al., 1999a). We also noted examples in which longer 3 0 snoRNA extensions were seen in the absence of DIS3, which is consistent with a ribonuclease handover and previous photoactivatable ribonucleoside (PAR)-CLIP analysis (Szczepi nska et al., 2015) ( Figure S5B). Finally, unlike for 3 0 extended snoRNA and 5.8S rRNA, PROMPT and eRNA reads were not enriched in the EXOSC10 CAT experiment, and the exclusive expression of inactive EXOSC10 did not stabilize PROMPTs ( Figures S5C and S5D). This further demonstrates that they are not usually EXOSC10 substrates. Analysis of XRN2 Regulation of Exosome-Targeted Transcripts Transcripts can also be degraded from their 5 0 end, with XRN2 being the major nuclear 5 0 /3 0 exoribonuclease and having a prominent role in transcriptional termination . Although RNAi has also been used to study XRN2, it may not reveal its full repertoire of functions, as we suggested previously by engineering XRN2-AID cells . To more accurately assess the impact of XRN2 on PROMPT and eRNA degradation, we analyzed our previously published nuclear RNA-seq from XRN2-AID cells in which XRN2 is eliminated within 60 min of auxin treatment ( Figure S6). There was no general impact of XRN2 elimination on either of these transcript classes, indicating that they are not its substrates. The termination of exosome substrates described here is poorly understood, but the XRN2-AID cell line allows an assessment of its role in the process. 
Accordingly, we analyzed PROMPT regions in mammalian native elongating transcript sequencing (mNET-seq) data that we previously generated in A SNORA48 SNORA68 [0][1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18]653] 300 bp 300 bp (legend continued on next page) XRN2-AID cells . mNET-seq analyses the position of RNA polymerase at single-nucleotide resolution by sequencing the 3 0 end of RNA from within its active site (Nojima et al., 2015). A comparison of typical PROMPTs (MYC and RBM39) showed nascent transcription over these regions that terminated within 1.5 kb of the respective promoters (Figure 6A). XRN2 elimination caused neither more reads over the termination region nor additional reads beyond it. More general analysis of the XRN2 impact on PROMPT termination revealed only a very slight increase in signal at the 5 0 -most positions (also visible in the sense direction) ( Figure 6B). Therefore, extended PROMPT transcription is not generally apparent in the absence of XRN2. RNA-seq consistently revealed no general effect of XRN2 loss on PROMPT levels ( Figures S6A and S6B). We also show that protein-coding genes produce exosome substrates in the sense direction ( Figures 3E and 3F), and we tested the impact of XRN2 on the termination of these products. This analysis was performed on 4 truncated transcripts at the PIGV, PCF11, CLIP4, and SEPHS1 genes ( Figure 3E demonstrates the DIS3 effect for PCF11 and PIGV with CLIP4 and SEPHS1 data in Figure 6C). PCF11 was chosen as it is subject to PCPA in mESCs and has an annotated PCPA site in humans (Ensembl I.D.: ENST00000624931.1; Chiu et al., 2018) with the other 3 genes chosen at random. As truncated transcripts overlap with full-length transcription, we labeled nascent transcripts for 30 min with 4-thiouridine (4sU) following treatment or no treatment with auxin. 4sU-labeled RNA was then captured via biotinylation and streptavidin beads, isolating it from material that existed before the elimination of XRN2. Quantitative reverse transcription and PCR was then performed using a primer pair within the DIS3-stabilized region (upstream [US]) and another downstream (DS) of it ( Figure 6D). XRN2 loss induced a significant increase in RNA downstream of the DIS3stabilized region for PIGV and PCF11, but not for SEPHS1 or CLIP4. Premature termination may constitute a dead-end pathway or it could compete with full-length transcription. To distinguish these possibilities, primers were designed to detect downstream splicing events in PCF11, PIGV, CLIP4, or SEPHS1 mRNAs in 4sU-labeled RNA isolated from XRN2-AID cells treated or not treated with auxin ( Figure 6E). XRN2 depletion significantly increased the level of spliced mRNA from PCF11 and PIGV, suggesting that some transcripts escaping PCPA-mediated termination are not dead-end products. However, spliced SEPHS1 or CLIP4 mRNA were unaffected by XRN2 loss, in line with its lack of impact on their attenuated transcription. Finally, the apparent difference in sensitivity of early termination to XRN2 may be influenced by the frequency of attenuated transcription in each case. To assess this, attenuated SEPHS1, CLIP4, PIGV, and PCF11 transcripts were assayed by quantitative reverse transcription and PCR in DIS3-AID cells treated or not treated with auxin ( Figure 6F). All 4 transcripts accumulated robustly on the loss of DIS3, demonstrating similarly frequent attenuation of transcription, with SEPHS1 showing the largest effect. 
As such, the insensitivity of SEPHS1 and CLIP4 early termination to XRN2 is not correlated with less frequent attenuation of transcription compared to PCF11 and PIGV. We conclude that DIS3 is involved in the widespread degradation of attenuated transcripts from protein-coding genes that fall into subtly different classes. We have distinguished some of these on the basis of their sensitivity to XRN2-dependent termination. DISCUSSION We have engineered conditional depletion of DIS3, EXOSC10, or XRN2 to assess their immediate impact on RNA metabolism. The rapid depletion achieved provides important insights that complement previous RNAi approaches. Timescales of minutes versus days have the obvious advantage that transcripts are less likely to appear through secondary effects. Moreover, an accumulation of RNA within minutes demonstrates constant turnover in a way that is more difficult to infer by RNAi, during which accumulation may be gradual. It also highlights acute substrates versus those that are only apparent after long periods of protein depletion, as exemplified by the effect of EXOSC10 on PROMPT levels. We were initially concerned that the low levels of DIS3-AID may prove problematic for assaying the impact of its loss. However, several observations mitigate this concern. First, although DIS3 is essential, DIS3-AID cells produce as many colonies as HCT116:TIR1 cells, although they are smaller. Second, DIS3-AID cells have the same levels of DIS3 substrates as HCT116:TIR1 cells when auxin is not used. Third, DIS3 substrates do not accumulate upon the rapid loss of EXOSC10 activity, underlining the specificity revealed by our approach. Fourth, the level of other exosome components and the integrity of the exosome are not observably different between DIS3-AID cells and parental cells. While PROMPTs are stabilized by RNAi of EXOSC10 from DIS3-AID cells, no effect is observed when EXOSC10-AID is rapidly depleted, even though bona fide substrates are stabilized at this early timepoint. Long-term auxin treatment of EXOSC10-AID cells does cause a mild increase in PROMPT levels, suggesting that RNAi effects are due to prolonged EXOSC10 depletion. This observation suggests that RNAs, such as PROMPTs, are only occasionally targeted by EXOSC10 or that their slight upregulation is an indirect effect of its long-term depletion. A lack of effect of EXOSC10 on PROMPT (and eRNA) turnover is also underscored by our iCLIP dataset, which showed that their recovery is not enhanced by inactivating EXOSC10 ( Figure S5C). Moreover, PROMPTs are not stabilized, even when EXOSC10 is catalytically inactive (Figure S5D). These experiments demonstrate an evolving impact of EXOSC10 loss on transcript levels over time that may have (legend continued on next page) an indirect explanation that should be considered when interpreting data from its long-term depletion. Our experiments do show some role for EXOSC10 in PROMPT turnover when DIS3 is lost as mislocalization of EXOSC10 occurs when DIS3-AID is depleted and co-depletion of both proteins synergistically enhances PROMPT levels. Given the nucleolar enrichment of EXOSC10, it may be lacking in a large fraction of nucleoplasmic exosome complexes, explaining its limited impact on PROMPTs and other DIS3 substrates. Reciprocally, DIS3 shows relative exclusion from nucleoli, raising the possibility of compartment-specific catalytic complexes (Tomecki et al., 2010). 
We show that EXOSC10 is not required for MTR4 to associate with the exosome core, as judged by its continued immunoprecipitation with EXOSC2 in auxin-treated EXOSC10-AID cells. This is resonant with recent structural data demonstrating that MTR4 contacts the human exosome via MPP6 and EXOSC2 and explains how a lack of EXOSC10 is compatible with the continued degradation of transcripts by DIS3 (Weick et al., 2018). As it was initially difficult to identify EXOSC10 substrates from our RNA-seq data, we used iCLIP to detect RNAs directly bound by EXOSC10. This was facilitated by using the inactive protein, which revealed that signatures of EXOSC10 bound more robustly than the wild-type protein. There was an obvious predominance of short (30 nt) extended precursors to 5.8S rRNA, which we also saw by northern blotting. The sharp reduction of iCLIP reads beyond this 30-nt footprint strongly suggests that EXOSC10 is involved in a final nuclear trimming step, similar to what has been shown in budding yeast (Allmang et al., 1999a). Structural studies lend support to this hypothesis, having shown that bulky RNA particles can become stalled at the entrance to the central channel of the exosome, necessitating a handover from Rrp44 to Rrp6 (Fromm et al., 2017;Schuller et al., 2018). We suggest that handover is also required for human snoRNA processing because short extended snoRNAs are bound by EXOSC10 and stabilized upon its loss and because previous PAR-CLIP shows DIS3 association with longer snoRNA precursors (Szczepi nska et al., 2015). As snoRNAs are often present in the introns of expressed genes, stabilized extensions may often be masked by host gene reads in RNA-seq, with iCLIP providing a more direct assessment of their fate. We would also like to note that the exosome may act redundantly with other snoRNA processing pathways in humans (Berndt et al., 2012). In studying the termination of exosome-sensitive RNAs emanating from protein-coding gene promoters, we found that PROMPTs and some truncated sense transcripts are insensitive to XRN2 loss. Even so, many PROMPTs harbor PASs and poly(A) tails, and XRN2 is implicated in some antisense transcriptional termination by mNET-seq (Nojima et al., 2015). However, the detection of poly(A) tails does not necessarily mean that polyadenylation occurs on every RNA in a population, and it is possible that truncated sense transcripts are generated in multiple ways. A complex consisting of the cap-binding complex and ARS2 is implicated in the 3 0 end processing and termination of short human transcripts, including PROMPTs (Hallais et al., 2013;Iasillo et al., 2017). At least some ARS2-sensitive transcripts are generated by mechanisms that do not involve the canonical polyadenylation complex. The differential XRN2 effect on PROMPT and truncated sense transcript termination also suggests a variety of promoter proximal termination processes. In summary, our data further highlight the constant and rapid turnover of thousands of transcripts in the human nucleus and identify specific substrates for DIS3, EXOSC10, and XRN2. They also reveal that transcripts with apparently similar characteristics (e.g., PROMPTs, PCPA products) can be subtly distinguished on the basis of their sensitivity to XRN2. 
We anticipate that the ability to rapidly control exoribonucleases, as we have done here, will be especially useful to interrogate processes that cannot be dissected by long-term depletion (e.g., to test the importance of short-lived RNAs and RNA turnover in stress responses or other changes in cellular environments). STAR+METHODS Detailed methods are provided in the online version of this paper and include the following: CONTACT FOR REAGENT AND RESOURCE SHARING Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Steven West (s.west@exeter.ac.uk). EXPERIMENTAL MODEL AND SUBJECT DETAILS Experiments involved human colon carcinoma-derived HCT116 cells (male) and human embryonic kidney-derived HEK293T cells (female). METHOD DETAILS Cell culture and cell lines HCT116 and HEK293T cells were cultured in Dulbecco's modified Eagle's medium with 10% fetal calf serum. Our CRISPR protocol and plasmids were described previously. Sequences of the EXOSC10 and DIS3 homology arms are provided in this manuscript. Briefly, HCT116 cells grown on a 30 mm dish were transfected with 1 µg each of the guide RNA plasmid and the Neomycin and Hygromycin repair constructs. Transfection was with Jetprime (Polyplus) following the manufacturer's guidelines. Medium was changed after 24 hours and, after 72 hours, cells were re-plated into 100 mm dishes in medium containing 30 µg/ml Hygromycin and 800 µg/ml Neomycin. Resistant colonies were picked and screened by PCR 10-14 days later. Correct genomic insertion of tags was assayed by sequencing these PCR products. Auxin was used at a concentration of 500 µM for one hour unless stated otherwise. For RNAi, 24-well dishes were transfected with siRNA (EXOSC10: Thermo Fisher Silencer Select s10738) using Lipofectamine RNAiMax (Life Technologies) following the manufacturer's guidelines. The transfection was repeated 24 hours later and, 72 hours after the first transfection, RNA was isolated. qRT-PCR and 4sU analysis In general, 1 µg of RNA was isolated using Tri-reagent and DNase treated for one hour before reverse transcription (Protoscript II) using random hexamers; primer sequences are provided in Table S1. cDNA products were diluted to 50 µl volumes, and 1 µl was used for real-time PCR in a QIAGEN Rotorgene instrument using Brilliant III SYBR mix (Agilent Technologies). The comparative quantitation option in the software was used to generate graphs. The 4sU qRT-PCR protocol is as described in Eaton et al., 2018. Immunofluorescence Cells were grown on coverslips, treated for 0, 1, 2, 3, or 4 hours with auxin, washed with PBS, fixed for 10 minutes in 4% PFA, washed with PBS, permeabilized with 0.1% Triton X-100 (v/v in PBS) for 10 minutes, then blocked with 10% FBS (v/v in PBS) for 1 hour. Cells were probed overnight with 1:1000 diluted anti-EXOSC10 and anti-nucleolin antibodies at 4°C, washed with 0.01% NP-40 (v/v in PBS), probed with Alexa Fluor 488 anti-rabbit and Alexa Fluor 555 anti-mouse secondary antibodies (1:2000, Invitrogen) for 1 hour, counterstained with DAPI, washed, and mounted. All images were taken using an Olympus-81 oil immersion microscope; exposure times, brightness, and contrast settings were identical between images. Nuclear RNA-seq Nuclei were extracted using hypotonic lysis buffer (10 mM Tris pH 5.5, 10 mM NaCl, 2.5 mM MgCl2, 0.5% NP-40) with a 10% sucrose cushion, and RNA was isolated using Tri-reagent. Following DNase treatment, RNA was phenol-chloroform extracted and ethanol precipitated.
After assaying quality control using a Tapestation (Agilent), 1 mg RNA was rRNA-depleted using Ribo-Zero Gold rRNA removal kit (Illumina) then cleaned and purified using RNAClean XP Beads (Beckman Coulter). Libraries were prepared using TruSeq Stranded Total RNA Library Prep Kit (Illumina) and purified using Ampure XP beads (Beckman Coulter). A final Tapestation D100 screen was used to determine cDNA fragment size and concentration before pooling and sequencing using Hiseq2500 (Illumina) at The University of Exeter sequencing service. GEO accession numbers: (EXOSC10-AID and DIS3-AID cell RNA-seq: GSE120574), (XRN2-AID cell RNA-seq: GSE109003). RNA-Seq Read Alignment Raw single-end 50bp reads were screened for sequencing quality using FastQC; adaptor sequences were removed using Trim Galore! and trimmed reads shorter than 20 bp were discarded. All nuclear RNA-seq analyses were carried out using the Ensembl GRCh38.p10 and GRCh38.90 human gene annotations. Before alignment, trimmed reads were passed through the SortMeRNA pipeline (Kopylova et al., 2012) to remove trace rRNA matching in-built 18S and 28S human databases then mapped to GRCh38 using HISAT2 (Kim et al., 2015) with default parameters supplemented with known splice sites. Unmapped, multimapped and low MAPQ reads (< 20) were discarded from the final alignment using SAMtools (Li et al., 2009). de novo Transcript Assembly de novo transcripts were assembled from each library using the StringTie suite (Pertea et al., 2016) with default parameters, guided by current GRCh38 reference annotation. Known annotated genes were dropped and the assembled transcripts from each sample were merged into a single consensus annotation. Reads were then counted per transcript using featureCounts (Liao et al., 2013(Liao et al., , 2014 and differentially expressed upregulated de novo gene intervals (R2-fold, padj < 0.05) were called using DESeq2 (Love et al., 2014). de novo transcripts were designated as a PROMPT (< 3 kb) or eRNA (> 3 kb) based on their relative distance from the nearest annotated gene. Generation of Synthetic Intron Annotation A custom intron annotation file was produced from GRCh38 by merging all exon intervals derived from each transcript isoform to generate a synthetic transcript representative of every gene. Each synthetic exon was then subtracted from gene intervals using the BEDtools suite (Quinlan and Hall, 2010) producing intron intervals with inherited gene information. Synthetic introns were counted and numbered according to their strand orientation i.e., sense introns numbered ascending, antisense introns descending, finally merging into a single annotation file. Meta Profiling PROMPT and eRNA Analysis For metagene analysis, expressed protein-coding and ncRNA genes (> 50 reads per gene) were selected and an extended transcriptional window was then applied to each gene to include a 3 kb region 5 0 of the TSS and a 7 kb region 3 0 of the TES. Overlapping genes and genes that extended beyond chromosome ends were discarded using the BEDtools suite to prevent double read counting. Profiles of these filtered genes were then generated from RPKM normalized reads using deeptools (Ramírez et al., 2014) with further clarified by centrifugation (12000rpm for 10 mins) and then incubated with 20ml GFP-TRAP beads (Chromotek) for 1 hour at 4 C with rotation. Beads were washed four times with IP lysis buffer and complexes eluted in 2x SDS gel loading buffer for analysis by western blotting. 
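The PROMPT/eRNA designation described above reduces to a distance test against the nearest annotated gene. A minimal sketch of that rule on plain interval tuples is given below; the actual pipeline operated on StringTie/BEDtools annotations, so the data structures and helper names here are illustrative assumptions only.

```python
from typing import List, Tuple

Interval = Tuple[str, int, int]  # (chromosome, start, end), 0-based half-open


def distance(a: Interval, b: Interval) -> float:
    """Genomic distance between two intervals: 0 if they overlap,
    infinity if they lie on different chromosomes."""
    if a[0] != b[0]:
        return float("inf")
    if a[1] < b[2] and b[1] < a[2]:  # overlapping intervals
        return 0
    return max(a[1], b[1]) - min(a[2], b[2])


def classify_de_novo(transcript: Interval, annotated_genes: List[Interval],
                     cutoff: int = 3_000) -> str:
    """Label an upregulated de novo transcript as a PROMPT (< 3 kb from the
    nearest annotated gene) or an eRNA (further away), mirroring the distance
    rule described in the text."""
    nearest = min(distance(transcript, g) for g in annotated_genes)
    return "PROMPT" if nearest < cutoff else "eRNA"


# Illustrative only: one annotated gene and two de novo intervals.
genes = [("chr1", 100_000, 120_000)]
print(classify_de_novo(("chr1", 98_500, 99_200), genes))    # -> PROMPT
print(classify_de_novo(("chr1", 200_000, 201_000), genes))  # -> eRNA
```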
QUANTIFICATION AND STATISTICAL ANALYSIS qRT-PCR was quantitated using the comparative quantitation function associated with the QIAGEN Rotorgene instrument. Values were first normalized to ACTB or GAPDH and then samples were compared by quantitating the experimental values relative to the control condition (given the value of 1 by the software). Bars show the average of at least three replicates and error bars show the standard deviation. Where assessed, p values were calculated using a Student's t test. DATA AND SOFTWARE AVAILABILITY The accession number for the RNA-seq (EXOSC10-AID and DIS3-AID cells) and iCLIP (EXOSC10 CAT ) data reported in this paper is Gene Expression Omnibus: GSE120574.
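The comparative quantitation values reported throughout (fold change relative to a control condition set to 1, after normalization to ACTB or GAPDH) were generated by the Rotorgene software. The sketch below shows the equivalent textbook delta-delta-Ct calculation, assuming approximately 100% amplification efficiency; the function and the example Ct values are our own illustration, not measured data or the instrument's exact algorithm.

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float,
                     efficiency: float = 2.0) -> float:
    """Textbook comparative quantitation (delta-delta-Ct).

    Each sample's target Ct is normalized to a reference gene (here ACTB or
    GAPDH), and the treated sample is then expressed relative to the control
    condition, which by construction receives the value 1. efficiency = 2
    assumes perfect doubling of product per PCR cycle.
    """
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return efficiency ** (-dd_ct)


# Illustrative Ct values: a PROMPT assayed in auxin-treated vs untreated
# DIS3-AID cells, normalized to GAPDH (roughly a 10-fold increase here).
print(round(fold_change_ddct(ct_target_treated=24.0, ct_ref_treated=18.0,
                             ct_target_control=27.5, ct_ref_control=18.2), 2))
```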
Transforming Growth Factor β Suppresses Human Telomerase Reverse Transcriptase (hTERT) by Smad3 Interactions with c-Myc and the hTERT Gene Telomerase underpins stem cell renewal and proliferation and is required for most neoplasia. Recent studies suggest that hormones and growth factors play physiological roles in regulating telomerase activity. In this report we show a rapid repression of the telomerase reverse transcriptase (TERT) gene by transforming growth factor β (TGF-β) in normal and neoplastic cells by a mechanism depending on the intracellular signaling protein Smad3. In human breast cancer cells, TGF-β induces rapid entry of Smad3 into the nucleus, where it binds to the TERT gene promoter and represses TERT gene transcription. Silencing Smad3 gene expression or genetically deleting the Smad3 gene disrupts TGF-β repression of TERT gene expression. Expression of the Smad3 antagonist, Smad7, also interrupts TGF-β-mediated, Smad3-induced repression of the TERT gene. Mutational analysis identified the Smad3 site on the TERT gene promoter that mediates TERT repression. In response to TGF-β, Smad3 binds to c-Myc; when c-Myc is knocked down, Smad3 does not bind to the TERT gene, suggesting that c-Myc recruits Smad3 to the TERT promoter. Thus, TGF-β negatively regulates telomerase activity via Smad3 interactions with c-Myc and the hTERT gene. Telomerase reverse transcriptase (TERT) is required to regulate the structures of chromosomal ends (telomeres) for continuous cell division during embryonic development, stem cell renewal and proliferation, and cancer cell immortalization. TERT interacts with telomere DNA and telomere-binding proteins, catalyzing telomeric DNA reverse transcription and telomere end capping (1-6). In the absence of TERT, telomeres shorten by about 150 base pairs and undergo rearrangement of telomere structure in each cell cycle. Short telomeres or uncapped telomere ends subsequently trigger cell senescence or apoptosis. Although TERT is expressed and telomerase is active during embryonic development, TERT is down-regulated and telomerase activity becomes suppressed during cell differentiation to mature somatic cells. This occurs in association with a gradual loss of cell proliferative potential. Although the role(s) of TERT repression in cell differentiation remains to be established (7-10), expression of TERT mobilizes stem cells from renewal to proliferation (11, 12) and extends cell proliferative lifespan toward immortality under certain conditions (13-15). Reactivated in most immortal cell lines and cancers, TERT has been a frequent target for inhibiting tumor cell proliferation (16). Transcriptional activation of the TERT gene is thus a critical, initial rate-limiting step in TERT function and telomerase activity. Reflecting its multifactorial regulation, the human TERT gene promoter has multiple sites for transcriptional regulation.
There are two typical E-boxes and several GC-boxes for the transcription factors c-Myc/max and Sp1, respectively (17)(18)(19)(20). Expression of N-Myc or c-Myc entrains a direct binding of Myc to the E-box (18,21) and induction of TERT gene transcription followed by cell proliferation (22,23). Another E boxbinding protein (upstream stimulatory factor) also up-regulates TERT promoter activity, with binding negatively regulated by the N-terminal-truncated form upstream stimulatory factor 2, as an inhibitory competitor whose levels are increased in telomerase negative cells (24). Little is known of the mechanisms whereby the TERT gene is repressed during cell development and differentiation. Recent studies show that transcription factor activator protein 1 (AP-1) is involved in repressing TERT gene transcription; combinations of c-Fos and c-Jun or c-Fos and JunD suppress TERT gene activation by binding to two AP-1 sites in the TERT gene promoter, suggesting a broad involvement of AP-1 in the regulation of telomerase in cell proliferation, differentiation, carcinogenesis, and apoptosis (25). Transforming growth factor ␤ (TGF-␤) is a secreted autocrine or paracrine growth inhibitor that restricts proliferation and promotes differentiation of diverse cell types including epithelial, endothelial, and hematopoietic cells. It, thus, plays important roles during development and in pathophysiology (26 -29). TGF-␤ exerts its biological effects through specific intracellular effector molecules called Smads that are phosphorylated by type I and type II transmembrane serine/threonine kinase receptors; phosphorylated Smad proteins such as Smad3 enter the cell nucleus to positively or negatively regulate gene expression by binding to DNA and interacting with DNA sequence-specific transcription factors (30 -32). Recent studies suggest that TGF-␤ limits cell proliferation and induces cell senescence (33)(34)(35), which is regulated by telomeres and telomerase (36,37). In contrast to epidermal growth factor, that stimulates telomerase (38,39), TGF-␤ inversely correlates with telomerase activity (40). Interrupting TGF-␤ autocrine actions increases telomerase activity in human breast cancer MCF-7 cells, whereas restoring autocrine TGF-␤ activity in human colon carcinoma HCT116 cells decreases telomerase activity (40). The available evidence suggests that TGF-␤ elicits inhibition of telomerase by suppression of the proto-oncogene c-Myc (40 -42) or partially via SIP1, a transcriptional target of the TGF-␤ pathway (43). The present study was undertaken to characterize the actions of TGF-␤ in the regulation of telomerase in human breast cancer cells. We show that TGF-␤ induces a rapid repression of TERT gene transcription in various cell lines and normal vascular smooth muscle cells. We show that repression requires the TGF-␤ signaling transducer protein Smad3, as demonstrated by overexpression of antagonistic Smad7, by silencing Smad3 gene expression and by genetic deletion of the Smad3 gene. In response to TGF-␤, Smad3 directly binds to the TERT gene, as demonstrated by in vitro gel shift assay and intact cell chromatin immunoprecipitation. Mutation of the TERT promoter Smad3 binding site abrogates the binding of Smad3 and inhibits TGF-␤-induced repression of TERT gene promoter activity. Furthermore, Smad3 interacts with c-Myc in response to TGF-␤, and silencing c-Myc gene expression abrogates the binding of Smad3 to the TERT gene. 
These findings suggest a novel mode of rapid inhibition of TERT and telomerase activity in both normal and neoplastic cells. They show for the first time that Smad3 directly represses TERT gene expression in human cells and that this repression involves Smad3 interactions with c-Myc and TERT gene promoter DNA. Cell Culture, Transfection, and Isolation-The breast cancer epithelial line MCF-7, the normal rat kidney tubular epithelial cell line NRK52E, the normal rat Wistar-Kyoto vascular smooth muscle cell line, spontaneous hypertensive rat smooth muscle cell line (46,47), mouse Smad2-and Smad3-deficient (Smad2 KO, Smad3 KO) and wild type (Smad2 WT, Smad3 WT) fibroblasts were grown in a 5% CO 2 atmosphere at 37°C in Dulbecco's modified Eagle's medium (Invitrogen) containing 0.5% fetal bovine serum in 6-well plastic plates, 10-cm dishes, or 8-chamber glass slides (Nunc, Naperville, CT). Recombinant human TGF-␤1 at concentrations of 0, 0.25, 1, or 4 ng/ml was added into the cell culture for 15 min and 1, 2, 6, 15, and 24 h or as indicated in individual experiments. Cells were lysed in icecold lysis buffer (0.5% Triton X-100, 120 mM NaCl, 40 mM Tris-HCl, pH 7.4, containing 10 mM sodium pyrophosphate, 2 mM EGTA, 2 mM EDTA, 10 mM NaF, 10 g/ml leupeptin, 5 g/ml aprotinin, 1 mM phenylmethylsulfonyl fluoride, 2 mM sodium vanadate). Clarified cell lysates were normalized for total protein concentration by the Bradford protein assay (Bio-Rad). To determine the effect of Smad3 on TGF-␤-mediated TERT gene suppression, Myc-tagged Smad3 was overexpressed by transfecting cultured cells with pcDNA3-Myc-Smad3 with empty plasmids as controls. To block TGF-␤ activity, a FLAG M2-tagged Smad7-expressing vector (pcDNA3 m2Smad7) and a control plasmid pcDNA3 were used. The transfection was conducted using Lipofectamine (Invitrogen) according to the manufacturer's instruction. After a 24-h transfection, cells were rested with serum-free medium for 24 h and then were stimulated with TGF-␤1 for different times for Western blotting and semiquantitative RT-PCR to detect gene expressions as indicated in individual experiments. RNA Interference-MCF-7 cells were cultured to 30% confluence. For each well in a 24-well transfection, 1 l of a 40 M stock of Smad3 siRNA, c-Myc siRNA, or appropriate negative controls siRNAs (Cellogenetics) was diluted into 41.5 l of Opti-MEM I reduced serum medium, and 2 l of Oligofectamine reagent (Invitrogen) was diluted into Opti-MEM I reduced serum medium to a final volume of 7.5 l. The diluted Oligofectamine TM reagent was added to the diluted siRNA, mixed gently, and incubated at room temperature for 15 min; 200 l medium was then added to each well containing cells. Fifty l of the above complex was overlaid onto the cells and mixed gently. Cells were incubated for 4 h at 37°C in a CO 2 incubator, and after incubation, 125 l of 30% serum growth medium was added to the transfection mixture. Cell extracts were assayed by Western blot for Smad3 at 72 h post-transfection. Immunoprecipitation and Western Blot Analysis-MCF-7 cells were rested with serum-free DMEM medium for 24 h and then stimulated with TGF-␤1 for 0, 15, 45, and 120 min. After washing in phosphate-buffered saline (PBS), cells were lysed in 1 ml of 1% Nonidet P-40, 25 mM Tris-HCl, 150 mM NaCl, 10 mM EDTA, pH 8.0, containing a 1:50 dilution of a protease inhibitor mixture (P2714; Sigma) for 30 min on ice. 
Cell lysates were centrifuged at 14,000 ϫ g for 5 min to pellet cell debris and incubated with primary antibodies overnight at 4°C with rotation followed by the addition of protein A-agarose (60 l of 50% slurry) for 1 h at 4°C with rotation to capture antibody-antigen complex. The antibody-antigen complex was washed, and samples (20 g) were mixed with SDS-PAGE sample buffer, boiled for 5 min, electrophoresed on a 10% SDS-polyacrylamide gel, and electroblotted onto a Hybond-ECL nitrocellulose membrane (Amersham Biosciences). The membrane was blocked in PBS containing 5% skimmed milk powder and 0.02% Tween 20 and then probed with antibodies at 4°C overnight. After washing, the membrane was incubated with a 1:20,000 dilution of peroxidase-conjugated goat anti-mouse IgG or porcine antirabbit IgG in PBS containing 1% normal goat serum and 1% fetal calf serum. The blot was then developed using the ECL detection kit (Amersham Biosciences) to produce a chemiluminescence signal, which was captured on x-ray film. For TERT, proteins were resolved on 8% SDS-PAGE, transferred to Immobilon-FL membranes (Millipore), and probed with specific antibodies raised in rabbits (49) or purchased from Santa Cruz Biotechnology. Western blots were probed with goat anti-mouse or anti-rabbit secondary antibodies conjugated to Alexa Fluor 680 (Molecular Probes) or IRdye 800 (Rockland Immunochemicals). Blotted proteins were detected and quantified by the Odyssey infrared imaging system (LI-COR). Immunofluorescence Microscopy-MCF-7 cells seeded in chamber slides (Lab-Tek II, Nalge Nunc International) at 60 -80% confluence were treated with TGF-␤1 (1 ng/ml) for different periods of time, fixed with 4% paraformaldehyde in PBS for 10 min, and blocked with CAS blocking solution (Zymed Laboratories Inc.) for 30 min at room temperature. Cells were incubated with Smad3 or TERT primary antibodies at RT for 1 h and washed 3 times with PBS, 0.01% Triton for 5 min each. Cells were then incubated for 1 h with fluorescein isothiocyanate-conjugated anti-rabbit secondary antibodies. For staining cell nuclei, cells were incubated with 50 ng/ml Hoechst (Sigma) for 10 min at RT. Slides were then washed with PBS-Triton, mounted in anti-fade medium (Bio-Rad), and analyzed by fluorescence microscopy (Leica Instruments). For immunocytochemical analysis, cells were cultured on 8-chamber glass slides in the presence or absence of TGF-␤1, fixed in 2% paraformaldehyde, and preincubated with 10% fetal calf serum and 10% normal sheep serum to block nonspecific binding. Cells were incubated with the anti-p-Smad2/3 Ab or an irrelevant isotype control rabbit IgG at 4°C overnight. After inactivation of endogenous peroxidase, cells were incubated with biotin-conjugated goat anti-rabbit IgG and then peroxidase conjugated streptavidin complex (ABC kit) followed by washing. Slides were developed with diaminobenzidine to produce a brown color, and the cell nucleus was counterstained with hematoxylin. Sections were cover-slipped in aqueous mounting medium. Chromatin Immunoprecipitation (ChIP) Assays-The ChIP assays were performed using the ChIP assay kit according to manufacturer's instructions (Upstate Biotechnology). Briefly, MCF-7 cells were fixed with formaldehyde (final concentration 1% v/v) in serum-free Dulbecco's modified Eagle's medium at 37°C for 10 min after TGF-␤1 stimulation for 1 h. 
Cells were washed twice with ice-cold PBS containing protease inhibitors (1 mM phenylmethylsulfonyl fluoride, 1 g/ml aprotinin, and 1 g/ml pepstatin A), pelleted for 4 min at 2000 rpm at 4°C, resuspended in 200 l of SDS lysis buffer (1% SDS, 10 mM EDTA, 50 mM Tris, pH 8.1), and incubated for 10 min on ice. After sonication, lysates were centrifuged for 10 min at 13,000 rpm at 4°C, and the supernatant was transferred to a new 2-ml microcentrifuge tube. The sonicated cell supernatants were diluted 10-fold in ChIP dilution buffer (0.01% SDS, 1.1% Triton X-100, 1.2 mM EDTA, 167 mM NaCl, 16.7 mM Tris-HCl, pH 8.1), the rabbit anti-Smad3 antibodies were added to the 2-ml supernatant fraction, and the mixture was incubated overnight at 4°C with rotation followed by the addition of 60 l of protein A-agarose/salmon sperm DNA (50% slurry) for 1 h at 4°C with rotation. The protein-DNA complex on protein A-agarose was pelleted at 1000 rpm at 4°C for 1 min and washed for 3-5 min on a rotating platform with 1 ml of each buffer listed in order (a) low salt immune complex wash buffer (SDS, 1% Triton X-100, 2 mM EDTA, 150 mM NaCl, 20 mM Tris-HCl, pH 8.1), (b) high salt immune complex wash buffer (SDS, 1% Triton X-100, 2 mM EDTA, 500 mM NaCl, 20 mM Tris-HCl, pH 8.1), and (c) LiCl immune complex wash buffer (0.25 M LiCl, 1% IGEPAL-CA630, 1% deoxycholic acid (sodium salt), 1 mM EDTA, 10 mM Tris, pH 8.1, and Tris-EDTA buffer (10 mM Tris-HCl, 1 mM EDTA, pH 8.0). The protein-DNA complex was eluted by adding 250 l of elution buffer (1% SDS, 0.1 M NaHCO 3 ) to the pelleted protein A-agarose-antibody-DNA complex. 5 M NaCl (20 l) was added to the eluate followed by heating at 65°C for 4 h to reverse Smad3-DNA cross-links. Samples were extracted twice with phenol/chloroform and precipitated overnight with ethanol. DNA fragments were recovered by centrifugation, resuspended in double distilled H 2 O, and used for PCR amplification of the hTERT gene promoter DNA. The primers for hTERT PCR were 5Ј-GGC CGG GCT CCC AGT GGA TTC-3Ј and 5Ј-CAG CGG GGA GCG CGC GGC ATC G-3Ј. The primers for rat TERT PCR were 5Ј-AAG CCT GGT TGG GAA AAA CT-3Ј and 5Ј-AGT GGT TGG CGG AAG TGT AG-3Ј for a 250-bp TERT promoter DNA. Telomerase Activity Assay-A telomeric repeat amplification protocol, performed essentially as described previously (49), was employed to determine telomerase activity. Briefly, cells treated with different reagents were washed and lysed by detaching and passing the cells though a 26Gx1/2 needle attached to a 1 ml syringe in prechilled telomeric repeat amplification protocol lysis buffer (0.5% 'APS, 10 mM Tris-HCl, pH 7.5, 1 mM MgCl 2 , 63 mM KCl, 0.05% Tween 20, 1 mM EDTA, 10% glycerol, 5 mM ␤-mercaptoethanol and mixture protease inhibitors). Nuclei were isolated by centrifugation, and protein content was determined. Equal amounts of nuclear telomerase extract (0.4 g) were incubated with telomeric DNA substrate and dNTP, and de novo synthesized telomeres with or without phenol and chloroform extraction were amplified by PCR using specific telomeric DNA primers in the presence of [␣-32 P]ATP (Amersham Biosciences) and TaqDNA polymerase. The resultant 32 P-labeled telomeres were resolved by polyacrylamide slab gel electrophoresis followed by autoradiography. Quantitative analysis of telomerase activity was performed by counting 32 P activity in de novo-synthesized telomeric DNA in a ␤-counter as described previously (50). 
To monitor nonspecific PCR effects, additional primers were included: NT (ATCGCTTCT-CGGCCTTTT) and TSNT (AATCCGTCGAGCAGAGTTA-AAAGGCCGAGAAGCGAT). Negative controls treated with either RNase A or alkaline phosphatase to inactivate telomerase were included in each experiment. TGF-␤ Suppression of hTERT Gene Expression and Telomerase Activity in Normal, Highly Proliferative, and Cancerous Cells-TGF-␤ has an inhibitory effect on cell proliferation in normal development, acting as a tumor suppressor (51,52). To determine whether the growth inhibitory effect of TGF-␤ is mediated by regulating telomerase activity, we determined the concentration-and time-dependent effect of TGF-␤ on telomerase activity and gene expression of hTERT in MCF-7 breast cancer cells. Previous work has shown that in human MCF-7 breast cancer cells, specific inhibition of the autocrine actions of TGF-␤ increases telomerase activity, and the addition of TGF-␤ is associated with hTERT gene suppression in MCF-7 cells (40,43). In the present study incubation of MCF-7 cells with increasing concentrations of TGF-␤ for 15 h resulted in a concentration-dependent down-regulation of telomerase activity and hTERT protein in the nucleus (Fig. 1). Likewise, hTERT mRNA was reduced by TGF-␤ in a dose-dependent manner ( Fig. 2A). Inhibition occurred at physiologically relevant concentrations with half-maximal inhibition at ϳ1 ng/ml. Thus, TGF-␤ is a potent negative regulator of telomerase, presumably through a specific ligand-receptor interaction at cell surface. TGF-␤ inhibition of hTERT gene expression occurred in 2-6 h, with complete inhibition achieved by 12 h of administration of TGF-␤ in cultured MCF-7 cells (Figs. 1, B and C, and 2B). Different degrees of sensitivity for the time course of inhibition were observed in normal renal epithelial NRK52E cells, vascular smooth muscle Wistar-Kyoto cells, and spontaneously hypertensive vascular smooth muscle cells of rats (Fig. 2). In breast and kidney epithelial cells, inhibition occurred within 2 h of incubation with 4 ng/ml of TGF-␤, whereas in vascular smooth muscle cells TGF-␤ did not induce significant reduction of hTERT mRNA until 6 h of treatment with 4 ng/ml of TGF-␤ (Fig. 2B). Because hTERT repression may be mediated by TGF-␤ induced c-Myc down-regulation, we examined the levels of c-Myc in these cells after TGF-␤ treatment for different times (Fig. 2C). TGF-␤1 induced c-Myc down-regulation after 12-24 h of treatment with TGF-␤, which was after the initiation of hTERT repression (Fig. 2C). The relatively belated inhibition of c-Myc by TGF-␤ is consistent the findings in a similar study using a rat fibroblast model recently in which TGF-␤ induces c-Myc repression in 48 h of TGF-␤ treatment (42). Thus, the molecular mechanism underlying the rapid repression of hTERT induced by TGF-␤1 is still unclear, although c-Myc down-regulation may be involved in the sus-tained phase of repression of the hTERT gene. Nevertheless, a time course of 2-6 h of TGF-␤ treatment suggests a direct action of TGF-␤ signaling protein(s) on TERT gene expression. Smad3 Signaling Is Required for TGF-␤ Repression of hTERT Gene-The TGF-␤ signaling protein, Smad3, has been shown to be involved in many TGF-␤ inhibitory activities (53). To explore the role(s) of Smad3 in TGF-␤ inhibition of hTERT gene expression, we examined Smad3 phosphorylation, intranuclear trafficking, and transcriptional activity during TGF-␤ suppression of hTERT in human MCF-7 breast cancer cells. 
In line with the inhibition of hTERT gene expression, Smad3 protein was increased significantly in the nucleus 30 min after TGF-␤ treatment, concomitant with its phosphorylation (Fig. 3A). Phosphorylation of Smad3 was also evaluated by Western blotting using anti-phospho-Smad3 antibodies (Fig. 4A). Endogenous Smad3 (along with Smad2) was phosphorylated within 30 min of TGF-␤ treatment (Fig. 4A), before Smad3 migration into the nucleus (Fig. 3A). We next determined Smad3 transcriptional activity using a Smad3 response element consisting of repeated CAGA boxes placed upstream of a luciferase gene. TGF-␤ stimulation of cultured MCF-7 cells transfected with the CAGA box promoter led to significant increases in Smad3 transcriptional activity in a TGF-␤ concentration-de- pendent manner (Fig. 3B), suggesting Smad3 binding to DNA after TGF-␤ stimulation. When cells were transfected with the hTERT promoter upstream of a luciferase reporter gene, TGF-␤ (4 ng/ml, 4 h) induced a significant decrease of hTERT gene promoter activity (Fig. 3C), suggesting a specific repressive effect of TGF-␤ on the hTERT promoter. Moreover, overexpression of wild type Smad3 also brought about a profound inhibition of the hTERT promoter activity. The effect of Smad3 overexpression on hTERT promoter activity was not decreased further when TGF-␤ was added into the cell cultures (Fig. 3C). These data suggest that both endogenous Smad3, after phosphorylation in response to TGF-␤, and exogenous Smad3, when overexpressed, are capable of inducing specific repression of hTERT gene promoter activity. The lack of additive or synergistic inhibitory effect between TGF-␤ and Smad3 on the hTERT promoter suggests that Smad3 is the primary signaling molecule that mediates TGF-␤ suppression of the hTERT gene. To further attest the repressor effect of Smad3 on the hTERT gene, we expressed Smad3 in human colon cancer HCT116 cells in which the type II TGF-␤ receptor is mutated (40). Although TGF-␤ failed to repress the hTERT gene as expected, expression of Smad3 still inhibited the hTERT gene promoter activity (Fig. 3D), confirming that Smad3 is an hTERT gene repressor. To establish an essential requirement of Smad3 in mediating TGF-␤-induced hTERT gene expression, we determined the effects of Smad7 (an antagonist Smad) on TGF-␤-induced hTERT gene suppression. We also determined the effects of knocking down Smad3 gene expression and the genetic deletion of the Smad3 gene on TGF-␤-induced hTERT gene suppression. As shown in Fig. 4A, stimulation of MCF-7 cells with TGF-␤ (4 ng/ml) resulted in up-regulation of protein phosphorylation of Smad3 and down-regulation of hTERT gene, with these effects abolished when antagonist Smad7 was expressed in MCF-7 cells. The failure of TGF-␤ to induce Smad protein phosphorylation and hTERT gene repression in the presence of Smad7 suggests that Smad3 activation is required for TGF-␤induced hTERT repression. To determine the specificity of Smad3 action and to exclude involvement of Smad2 in hTERT gene expression, three mouse fibroblast cell lines, wild type Smad3, Smad3 knock-out, and Smad2 knock-out, were examined for the effects of TGF-␤ on hTERT gene expression. Although TGF-␤ inhibition of hTERT gene expression remained unchanged in both Smad3 wild type-and Smad2deficient cells, deficiency of Smad3 abolished TGF-␤-mediated hTERT gene repression (Fig. 4B). 
Furthermore, temporarily knocking down Smad3 gene expression with Smad3 siRNA in human MCF-7 breast cancer cells also eliminated the effect of TGF-␤ on hTERT gene suppression (Fig. 4C). These results together suggest that Smad3 phosphorylation and nuclear migration mediate TGF-␤-induced hTERT suppression, which is reversible by expression of Smad7. Identification of Smad3 Interaction and Binding Site on the hTERT Gene That Represses hTERT Gene Transcription in Response to TGF-␤ Signaling-It is noteworthy that incubation of MCF-7 cells with TGF-␤ for several hours did not alter the level of c-Myc gene expression and that silencing Smad3 similarly had no effect on c-Myc gene expression (Fig. 4C). These data together with the time course of 4 -6 h for TGF-␤-mediated TERT repression suggest that Smad3 plays a direct role in mediating TGF-␤-induced hTERT gene repression. The finding that expression of Smad3 markedly represses the activity of the hTERT promoter (hTERT ϩ3 to Ϫ330 ) suggest that Smad3 acts at a site within this 330-bp region. Inspection of the hTERT promoter suggests several putative Smad3 binding sites, including two non-canonical CAGA boxes, at positions Ϫ281-284 and Ϫ259 -262 relative to the translation start codon (Fig. 5A). Mutagenesis studies of the hTERT promoter with a luciferase reporter assay showed TGF-␤-induced suppression of the hTERT promoter activity in both the wild type hTERT promoter and the hTERT Ϫ281-284 -mutated promoter. This was specific to Smad3 in that silencing Smad3 eliminated the TGF-␤-induced down-regulation (Fig. 5B). However, mutation of the CAGA box at Ϫ259 -262 of the hTERT promoter disabled TGF-␤ down-regulation of the hTERT gene and resulted in increases in the basal activity of the hTERT gene promoter (Fig. 5B), suggesting that the hTERT promoter Ϫ259 -262 sequence provides the binding site for Smad3 that mediates Smad3 repression of the hTERT promoter gene transcription. This identification of the CAGA box at Ϫ259 -262 is in contrast to a recent study suggesting Smad3 binding site(s) in a region from Ϫ748 to Ϫ729 in the rat TERT gene promoter (42). This discrepancy is consistent with the hypothesis that the TERT genes are differentially controlled between different species. To further address the hypothesis that Smad3 directly interacts with the hTERT gene in response to TGF-␤ stimulation, we performed in vitro electrophoresis gel mobility shift assays with MFC-7 cell nuclear proteins and a 32 P-labeled hTERT promoter DNA probe containing the Ϫ262 CAGA Ϫ259 sequence. As shown in Fig. 6, nuclear protein extracts from untreated MCF-7 cells showed little Smad3 binding activity to the hTERT DNA probe containing the Ϫ262 CAGA Ϫ259 sequence, whereas after TGF-␤ stimulation, significantly increased binding activity to the hTERT DNA probe was observed (left panel of Fig. 6A). The increased binding occurred from 30 to 120 min after administration of TGF-␤. This was not observed when the hTERT DNA probe was replaced with another labeled probe derived from hTERT Ϫ341-321 containing the putative Ϫ332 CAGA Ϫ329 box (right panel of Fig. 6A), suggesting specific binding to Ϫ262 CAGA Ϫ259 . To further establish the binding specificity of Smad3 protein to the hTERT promoter, we determined the effects of competitive inhibition by nonradioactive hTERT promoter DNA and Smad3 monoclonal antibodies. As shown in Fig. 6, B and C, the binding activity to the hTERT DNA probe was dependent on TGF-␤. 
In the presence of an excess amount of nonradioisotope-labeled hTERT promoter, DNA binding was completely inhibited, evidence for inhibition of specific binding to hTERT DNA. In the presence of Smad3 monoclonal antibodies, the binding to the hTERT DNA probe was also inhibited significantly (Fig. 6, B and C), similarly demonstrating that Smad3 is involved in direct binding to the hTERT promoter. Consistent with this, mutation of the Ϫ262 CAGA Ϫ259 sequence in the Smad3 binding nucleotide probe resulted in no binding (lane 3 of Fig. 6C). As positive controls for specificity, we also determined the binding of the 32 P-labeled hTERT promoter E-box and found high levels of binding of c-Myc and perhaps upstream stimulatory factor to the E-box, with binding not altered by TGF-␤ stimulation (Fig. 6D), consistent with unaltered gene expression of c-Myc after TGF-␤ treatment for up to 15 h (Fig. 4C). To determine Smad3 binding to the endogenous hTERT gene in cultured MCF-7 cells, we carried out ChIP analysis using spe-cific anti-Smad3 monoclonal antibodies. Negative and positive controls for antibodies included diluent, normal IgG, and c-Myc monoclonal antibodies. Controls for nonspecific precipitation included detection of GAPDH in the presence or absence of specific siRNA to Smad3 or c-Myc. As shown in Fig. 7A, the hTERT gene was significantly detected in either cell lysate input with GAPDH before precipitation or immunoprecipitates of c-Myc, confirming c-Myc binding to the hTERT gene as reported previously (23,54). Consistent with the data shown in Figs. 4C and 6D, co-precipitation of c-Myc with the hTERT gene was not significantly altered after TGF-␤ treatment (Fig. 7A). In contrast, whereas the hTERT gene was not detected in Smad3 without TGF-␤ treatment, significant levels of the hTERT gene fragment were found in Smad3 immunoprecipitates after treatment with TGF-␤, thereby demonstrating that Smad3 binds to the hTERT gene in response to TGF-␤. The specific immunoprecipitation of a complex between Smad3 and hTERT gene was verified in that hTERT gene precipitation became undetectable once Smad3 was silenced by RNA interference (Fig. 7B). To determine that Smad3 binds to the hTERT gene promoter in other cell types, we immunoprecipitated Smad3 from TGF-␤-responsive NRK52E cells and TGF-␤-irresponsive HCT116 cells treated with TGF-␤1 and observed that Smad3 bound the hTERT promoter DNA in both TGF-␤-sensitive and -insensitive cells (Fig. 7, C and D). Smad3 Interacts with c-Myc, Which Is Required for TGF-␤ Repression of hTERT Gene Transcription-Experiments to confirm specific precipitation between c-Myc and the hTERT gene using c-Myc siRNA led to an unexpected finding in terms of the involvement of c-Myc in mediating Smad3 binding to the hTERT gene. Treatment with c-Myc siRNA knocked down c-Myc and prevented co-precipitation of c-Myc and the hTERT gene with anti-Myc specific antibodies. However, silencing c-Myc also compromised the binding of Smad3 to the hTERT promoter (Fig. 8A), suggesting that Smad3 binding to the hTERT gene requires c-Myc. To determine whether c-Myc might affect TGF-␤ signaling and be required for TGF-␤ suppression of hTERT gene, we assessed the levels of Smad3, Smad2, and hTERT gene expression as a function of silencing c-Myc gene. Although having no effect on the gene expression of Smad3 and Smad2, silencing c-Myc induced not only a decrease in hTERT gene expression but also a failure of TGF-␤-induced down-regulation of hTERT gene expression (Fig. 8, B and C). 
Down-regulation of c-Myc to 70 -80% that of normal levels resulted in reduction of hTERT gene expression to ϳ50 -60%, similar to that mediated by TGF-␤ repression. When TGF-␤ was applied to cells with down-regulated c-Myc, no additive inhibition of hTERT gene expression was observed (Fig. 8C). These data suggest that TGF-␤ employs a c-Myc-dependent mechanism to suppress hTERT gene expression and that c-Myc is required for not only TGF-␤-induced hTERT downregulation (Fig. 8, B and C) but also Smad3 binding to the hTERT gene promoter DNA (Fig. 8A). Because the Smad3 binding site Ϫ262 CAGA Ϫ259 is adjacent to a c-Myc E-box binding site ( Ϫ240 CACGTG Ϫ235 ) (17) in the hTERT promoter and given that deprivation of c-Myc blocked Smad3 binding to the hTERT gene, we hypothesized that in MCF-7 cells Smad3 and c-Myc interact directly to inhibit c-Myc transcriptional activation of the hTERT gene and that their interaction recruits Smad3 to the hTERT gene promoter for repression. To test this hypothesis, we determined if Smad3 binds to c-Myc in response to TGF-␤ treatment of MCF-7 cells by co-immunoprecipitation. Immunoprecipitation of c-Myc showed little co-immunoprecipitation of Smad3 in the presence of TGF-␤ stimulation, but after stimulation of the cells by TGF-␤ (4 ng/ml) for various periods of time, significant levels of Smad3 were detected in the c-Myc immunoprecipitates, with Smad3 detectable 15 min after TGF-␤ stimu-lation (Fig. 9A). Immunopre-cipitation of Smad3 also co-precipitated c-Myc but only if the cells were stimulated by TGF-␤ (Fig. 9B). Thus, the TGF-␤-induced interaction between Smad3 and c-Myc is associated with TGF-␤-induced Smad3 binding to, and suppression of the hTERT gene promoter, suggesting a novel protein-DNA complex involving Smad3, c-Myc, and the hTERT gene promoter that initiates TGF-␤-induced repression of the hTERT gene. Together with the requirement for both Smad3 and c-Myc in TGF-␤ suppression of the hTERT gene, these data suggest that specific temporal and spatial interactions between Smad3, c-Myc, and hTERT gene promoter in response to TGF-␤ are responsible for TGF-␤-induced hTERT gene suppression in human breast cancer cells. DISCUSSION hTERT gene expression is the first step in telomerase activation for continuous stem cell renewal and proliferation. Repression of the hTERT gene occurs during cell differentiation, and de-repression takes place in tumorigenesis in most cancers by mechanisms that remain largely unexplored (7-10). As a pleiotropic autocrine and paracrine cytokine in a variety of tissues, TGF-␤ has a common effect of inhibiting cell proliferation in epithelial, endothelial, and hematopoietic cell types. This TGF-␤ checkpoint is implicated in mediating cell senescence (33)(34)(35), failure of which is a hallmark of many cancer cells (51). Recent studies suggest that TGF-␤ and hTERT form an important regulatory system in which TGF-␤ instigates hTERT down-regulation to inhibit cell proliferative potential (40,41,43) in addition to interactions with other growth control genes such as p21 WAF1/Cip1 , p15 ink4b , Cdc25A, cyclin-dependent kinase, mitogen-activated protein kinase, and Akt (21,30,(55)(56)(57). 
In characterizing the complex signaling pathways from TGF-␤ to the hTERT gene, the present study has for the first time revealed that Smad3 is phosphorylated in response to TGF-␤, upon which it shuttles into the cell nucleus and interacts with the hTERT gene transcription factor c-Myc and with a specific site on the hTERT promoter, leading to repression of hTERT gene expression. Thus, TGF-␤ may induce telomerase inhibition through multiple mechanisms including a rapid repression of the hTERT gene mediated by direct actions of Smad3 on the hTERT gene promoter followed by a sustained inhibition mediated by a transcriptional withdraw of c-Myc (Fig. 10). We have shown that TGF-␤ rapidly down-regulates telomerase activity after 2-6 h in cell cultures of normal smooth muscle cells, highly proliferative smooth muscle cells, immortal kidney epithelial cells, and human breast cancer cells. This finding provides an important connection between TGF-␤ and cell senescence in that TGF-␤ induces cell senescence in several cell types and models (33)(34)(35). For example, when human skin diploid fibroblasts are exposed to UV, they suffer premature senescence with increased TGF-␤ signaling; removal of TGF-␤ or targeting TGF-␤ receptors using specific neutralizing antibodies markedly alleviates UV-induced cellular senescence (35). In mouse models, TGF-␤ similarly represses hTERT gene and telomerase activity (42); overexpression of Smad3, but not Smad2 or Smad4, induces mouse keratinocyte senescence, whereas deletion of Smad3 delays keratinocyte senescence induced by v-Ras HA (34). The telomerase connection between TGF-␤ and cell senescence is further underlined by the finding that activation of endogenous telomerase or ectopic expression of hTERT induces resistance to TGF-␤-induced senescence of human mammary epithelial cells (36,58). In addition, recent studies suggest that TGF-␤ signaling may be feedbackcontrolled by telomere maintenance with short telomeres activating Smurf2 that inhibits TGF-␤ signaling (37). Central to TGF-␤ regulation of telomerase activity, Smad3 is shown for the first time to be not only required but also to act directly to repress the hTERT gene in human cells. Using specific gene silencing and expression to target Smad3, the present study shows that Smad3 responds to TGF-␤ stimulation by phosphorylation, migration into breast cancer cell nucleus, binding to hTERT gene promoter, and repression of hTERT expression. The direct binding of Smad3 to the hTERT gene promoter DNA is consistent with a recent study in rat fibroblasts (42). The data that expression of Smad7 prevents Smad3 phosphorylation and down-regulation of telomerase are consistent with the notion that Smad3 plays a physiological role in regulating the hTERT gene, which is balanced and reversible by intracellular antagonist Smad proteins. Consistent with Smad3 as the predominant signaling molecule mediating TGF-␤ regulation of the hTERT gene are the data that Smad3 overexpression mimics but is not additive for TGF-␤ suppression of hTERT promoter activity. Mediating the relatively rapid sup-pression of the hTERT gene after 2-6 h of TGF-␤ treatment, Smad3 binds the hTERT promoter directly as demonstrated by in vitro binding analysis and intact cell chromatin immunoprecipitation assays. 
This finding of direct binding by Smad3 to the hTERT promoter is consistent with recent findings in a rat model, in which Smad3 binds directly to a Smad binding element contained in the sequence of rat TERT DNA −748 to −729 (distal to the transcriptional start site of rat TERT) (42). In contrast with the Smad3 binding site in rat, we have found that the transcriptional repression activity of Smad3 lies within the 330 bp of the hTERT promoter relative to the transcription start site of hTERT. Structure-function analyses of the hTERT promoter activity of the 330-bp fragment and in vitro Smad3 binding studies using various mutants allowed us to identify the −262CAGA−259 box as responsible for mediating Smad3 binding and repression of hTERT gene promoter activity during TGF-β stimulation. Consistent with previous findings that expression of the TGF-β type II receptor allows telomerase down-regulation through autocrine signaling of TGF-β (40), we found that mutation of the Smad3 binding site (−262CAGA−259) in the hTERT promoter increases hTERT gene promoter transcriptional activity in the absence of exogenous TGF-β (Fig. 5), suggesting that the TGF-β and Smad3 signaling pathway plays an important role in repressing the hTERT gene even without exogenously applied TGF-β. In addition, the finding that silencing Smad3 further promotes the elevation of hTERT promoter activity induced by mutation of the Smad3 binding site suggests that, in addition to interacting with hTERT promoter DNA, Smad3 may also regulate another factor(s) involved in regulating hTERT gene activity. These factors might be SIP1 (43), AP-1 (25, 59), and/or c-Myc (see below). Nonetheless, engineering the Smad3 binding site in the hTERT promoter to enhance the Smad3 effect on hTERT promoter repression may provide a new strategy to target telomerase in anti-cancer therapy. Although Smad3 null mice exhibit variable frequencies of spontaneous colon cancer (60, 61), studies have shown that mice lacking Smad3 have impaired mucosal immunity and accelerated wound healing (62, 63). Furthermore, whereas loss of Smad3 alone is insufficient to induce leukemia, additional loss of the p27 Kip1 cyclin-dependent kinase inhibitor, which is frequently altered in human T-cell acute lymphoblastic leukemia, causes leukemia to develop in Smad3-deficient mice (64). Finally, despite normal levels of Smad3 mRNA, deficiency of Smad3 protein is a specific feature of human acute lymphoblastic leukemia in children (64). Remaining to be determined are the detailed mechanisms of temporal and spatial interactions between Smad3 and the hTERT gene in response to TGF-β and how these interactions are regulated.
[Displaced figure legend, apparently for Fig. 8: A, MCF-7 cells with unaltered (lanes 1, 2, 5, 6, 9, 10, 13, and 14) or silenced (lanes 3, 4, 7, 8, 11, 12, 15, and 16) c-Myc gene expression were treated with (even lane numbers; 4 ng/ml, 15 h) or without (odd lane numbers) TGF-β1, followed by ChIP using control or specific antibodies (Ab); NS, normal serum; the hTERT promoter and human GAPDH gene were detected by PCR using specific primers. B and C, effect of c-Myc knockdown on TGF-β-mediated suppression of hTERT in human breast cancer cells; MCF-7 cells were transfected with c-Myc siRNA or control siRNA for 48 h and then treated with or without TGF-β1 for 15 h, and cellular protein and RNA extracts were assayed for gene expression by immunoblotting or RT-PCR (panel B); panel C shows densitometric quantification of hTERT suppression as the mean ± S.D. from three similar experiments.]
In analyzing the role of c-Myc, however, we found that treatment of MCF-7 cells with TGF-β for several hours caused no discernible changes in c-Myc gene expression or in c-Myc binding to the hTERT gene promoter. Surprisingly, however, silencing c-Myc blocked Smad3 binding to the hTERT promoter induced by TGF-β, whereas knocking down Smad3 had no effect on c-Myc binding to hTERT promoter DNA. In addition, silencing c-Myc also prevented TGF-β from inducing inhibition of hTERT gene transcription, a finding consistent with an essential requirement for c-Myc in Smad3 binding to and repression of the hTERT gene promoter. Furthermore, TGF-β fails to down-regulate hTERT gene expression when c-Myc is down-regulated. These findings suggest that TGF-β elicits Smad3 interactions with c-Myc in regulating hTERT gene expression. To explore whether Smad3 might directly interact with c-Myc and cooperatively regulate the hTERT gene, we determined whether Smad3 and c-Myc bind directly to each other. Indeed, immunoprecipitation of c-Myc co-precipitated Smad3, and immunoprecipitation of Smad3 co-precipitated c-Myc. Thus, in addition to binding to the CAGA box of the hTERT gene promoter, Smad3 also forms a complex with c-Myc to regulate the hTERT gene in response to TGF-β, although the binding of Smad3 to c-Myc does not cause c-Myc dissociation from the hTERT promoter (Figs. 6 and 7). Consistently, previous studies have shown that Smad3 and c-Myc bind to each other directly in regulating the TGF-β-induced cyclin-dependent kinase inhibitor p15 ink4b gene (65). It is possible that Smad3 forms complexes with c-Myc to disable c-Myc function and that each may recruit the other to their binding sites at specific gene promoters and, thus, regulate each other's function (65). We therefore propose a mechanism of TGF-β-induced hTERT gene repression involving Smad3 cis and trans actions on the hTERT gene as well as c-Myc transcription-dependent repression (Fig. 10). This model comprises a 3-step regulation: 1) the binding of Smad3 to c-Myc recruits Smad3 to the hTERT gene, and this initiates the inhibition of c-Myc in recruiting the transcriptosome for hTERT gene transcription; 2) subsequently, Smad3 physically binds to the CAGA box of hTERT gene promoter DNA to further prevent c-Myc activity on hTERT gene transcription; 3) given the TGF-β-induced down-regulation of c-Myc in the later phase of hTERT gene repression (Fig. 2), transcriptional repression of the c-Myc gene sustains the repression of the hTERT gene in response to TGF-β signaling (Fig. 10). In summary, we have identified a direct transcriptional inhibitory pathway involving extracellular TGF-β signaling to the hTERT gene in human breast cancer cells. This mechanism involves Smad3 phosphorylation, nuclear migration, interaction with c-Myc, and binding to the hTERT gene promoter to repress hTERT gene transcription and inhibit telomerase activity. Further studies are required to decipher potential relationships between Smad3 and molecules such as AP-1 and SIP1 that are also implicated in TGF-β suppression of telomerase activity. Additional studies are also required to target the interface between Smad3 and hTERT promoter DNA to develop reagents as novel modalities for targeting aging and malignant cells.
High Efficacy of the Volatile Organic Compounds of Streptomyces yanglinensis 3-10 in Suppression of Aspergillus Contamination on Peanut Kernels

Aspergillus flavus and Aspergillus parasiticus are saprophytic fungi that can infect and contaminate preharvest and postharvest food/feed with production of aflatoxins (B1, B2, G1, and G2). They are also opportunistic pathogens causing aspergillosis diseases of animals and humans. In this study, the volatile organic compounds (VOCs) from Streptomyces yanglinensis 3-10 were found to inhibit mycelial growth, sporulation, conidial germination, and expression of aflatoxin biosynthesis genes in A. flavus and A. parasiticus in vitro. On peanut kernels, the VOCs also reduced disease severity and inhibited aflatoxin production by A. flavus and A. parasiticus under storage conditions. Scanning electron microscope (SEM) observation showed that a high dosage of the VOCs can inhibit conidial germination and colonization by the two Aspergillus species on peanut kernels. The VOCs also suppressed mycelial growth of 18 other plant pathogenic fungi and one oomycete. By using SPME-GC-MS, 19 major VOCs were detected; as in other Streptomyces species, 2-methylisoborneol (2-MIB) was found to be the main volatile component among the detected VOCs. Three standard chemicals, methyl 2-methylbutyrate (M2M), 2-phenylethanol (2-PE), and β-caryophyllene (β-CA), showed antifungal activity against A. flavus and A. parasiticus. Among them, M2M showed a stronger inhibitory effect against conidial germination of A. flavus and A. parasiticus than the other two standard compounds. To date, this is the first report of the antifungal activity of M2M against A. flavus and A. parasiticus. The VOCs from S. yanglinensis 3-10 did not affect growth of peanut seedlings. In conclusion, our results indicate that S. yanglinensis 3-10 may have the potential to become a promising biofumigant for control of A. flavus and A. parasiticus.

INTRODUCTION
Volatile organic compounds (VOCs) are lipophilic chemicals with a low boiling point and low molecular mass (100-500 Da), but with high vapor pressure (Effmert et al., 2012). So far, there are approximately 1300 described VOCs from various microorganisms (Effmert et al., 2012; Lemfack et al., 2014; Piechulla and Degenhardt, 2014). VOCs with antimicrobial activity have been reported in bacteria (Martins et al., 2019), filamentous fungi, yeasts (Huang et al., 2011), algae (Gong et al., 2015), and higher plants (Wood et al., 2013). The VOCs from Streptomyces platensis showed inhibitory activity against Botrytis cinerea on strawberry, Rhizoctonia solani on rice seedlings, and Sclerotinia sclerotiorum on oilseed rape (Wan et al., 2008). The VOCs from Streptomyces globisporus showed inhibitory activity against Penicillium italicum on citrus and B. cinerea on tomato (Li et al., 2010). The VOCs from Streptomyces alboflavus were able to inhibit the mycelial growth of several filamentous fungi (Wang et al., 2013). In addition, the VOCs produced by Streptomyces spp. also showed inhibition of mycelial growth (IMG) of R. solani and promoted the growth of Arabidopsis thaliana (Cordovez et al., 2015). VOCs can diffuse into the atmosphere and are biodegradable, so they do not cause problems of toxic residues (Wang et al., 2013).
Previous studies demonstrated that VOCs from endophytic fungus Muscodor albus and VOCs from Saccharomyces cerevisiae can be used as mycofumigant to control of many postharvest fruit diseases (Schnabel and Mercier, 2006;Toffano et al., 2017). Aspergillus species are a threat to agriculture and human health, as some of them can produce carcinogenic and mutagenic secondary metabolites, like aflatoxin. Aflatoxin B 1 is most frequently found in maize, peanut, and rice that shows the greatest toxigenic potential (Amaike and Keller, 2011). The International Agency for Research on Cancer (IARC) has classified AFB 1 as a Group 1 human carcinogen (Williams et al., 2004). Aflatoxicosis is caused by inhaling or ingesting high levels of aflatoxin contaminated food and it is a major problem in developing countries, especially in Asia and Africa (Amaike and Keller, 2011). Aspergillus flavus and Aspergillus parasiticus may not reduce yield, but severe economic losses can be caused by the contamination of kernels or grains with aflatoxins produced by the fungi, especially under the storage conditions (Amaike and Keller, 2011). For instance, one major issue in peanut production worldwide is the aflatoxins contamination (Torres et al., 2014). China is the world's largest producer of peanuts (USDA, 2013), while approximately 60% peanuts were contaminated by aflatoxins in six provinces of China (Gao et al., 2011). The most effective strategy to reduce and/or eliminate aflatoxins is to prevent aflatoxigenic Aspergillus spp. from colonization on food/feed products during storage (Gong et al., 2015). To control Aspergillus spp. on peanuts, several chemicals and fungicides are applied to suppress the mycelial growth and aflatoxins production (Kolosova and Stroka, 2011). Additional concern about resistance and residue problems in application of chemicals compromises control efficacy of mycotoxin contamination indirectly (Torres et al., 2014). Biological control by applying competitive non-toxigenic isolates of A. flavus and/or A. parasiticus to soil has been explored in previous studies and achieved at least partial success, and a biocontrol product, namely, Aflasafe TM was successfully commercialized (Atehnkeng et al., 2008). Zucchi et al. (2008) used cell suspension of Streptomyces sp. ASBV-1 to reduce aflatoxins accumulation by A. parasiticus on peanut. Reddy et al. (2010) reported the culture filtrate of Rhodococcus erythropolis completely inhibited the A. flavus growth and AFB 1 production. Zhang et al. (2013) used antifungal substances purified from Streptomyces hygroscopicus to inhibit A. flavus on peanuts. Prasertsan and Sawai (2018) reported the VOCs from Streptomyces mycarofaciens showed antagonist to A. flavus and A. parasiticus growth on maize. Gong et al. (2015Gong et al. ( , 2019 reported that VOCs from Shewanella algae and Alcaligenes faecalis showed inhibitory activity against mycelial growth and aflatoxins production of A. flavus. In this study, we found that the VOCs from Streptomyces yanglinensis 3-10 showed strong antifungal activity on mycelial growth and conidia germination, as well as on suppression of expression of aflatoxin biosynthesis genes in A. flavus and A. parasiticus. The VOCs can also reduce the contamination and aflatoxins produced by A. flavus and A. parasiticus on peanut kernels under storage condition. By using SPME-GC-MS, 19 major putative components of VOCs were identified and three pure chemicals were used in a bioassay to verify the antifungal activity against A. 
flavus and A. parasiticus. Methyl 2-methylbutyrate (M2M) and 2-phenylethanol (2-PE) were proved with inhibitory effect on conidial germination and mycelial growth of Aspergillus spp. We also found that the VOCs from S. yanglinensis 3-10 is not harmful to peanut seedling growth. Microorganisms and Cultural Media A total of 23 microbial isolates were used in this study, including isolate 3-10 of S. yanglinensis (Lyu et al., 2017), two isolates of Pythium species, and 20 isolates of fungi. Origin of these isolates was listed in Supplementary Table S1. Among these isolates, S. yanglinensis 3-10 was used to produce the VOCs. Two species of Aspergillus, namely, A. flavus NRRL3357 and A. parasiticus MO527, were used to infect peanut kernels, where they grew, sporulated, and produced aflatoxins B 1 , B 2 , G 1 , and G 2 (AFB 1 , AFB 2 , AFG 1 , and AFG 2 , respectively). The remaining 20 isolates were used to determine the inhibitory spectrum of the VOCs of S. yanglinensis 3-10. Stock cultures of each isolate were maintained on potato dextrose agar (PDA) and stored in a refrigerator at 4 • C. Working cultures were established on PDA by transferring mycelial agar plugs from a stock culture of each isolate and the cultures were incubated at 20 • C in dark for 7-14 days. Four cultural media were used in this study, including glucose agar (GA), ISP-2 liquid and agar media (ISP = International Streptomyces Project), PDA, and autoclaved wheat grains (AWG). GA contained (in 1000 mL water) 20 g D-glucose and 15 g agar, and it was used for determination of conidial germination of A. flavus and A. parasiticus. ISP-2 liquid medium contained (in 1000 mL water) 4 g D-glucose, 10 g malt extract, and 4 g yeast extract (pH 6.5-7.0) (Shirling and Gottlieb, 1966), and it was used for preparation of the seed cultures (inoculum) of S. yanglinensis 3-10. The ISP-2 agar medium was prepared by addition of agar (2%, w/v) in the ISP-2 liquid medium. Both the ISP-2 liquid medium and the ISP-2 agar medium were used for incubation of S. yanglinensis 3-10. PDA was prepared with peeled potato tubers purchased from a local supermarket using the procedure described by Fang (1998). The medium AWG was prepared with wheat grains using the procedure described by Zhang et al. (2015). Both potato tubers and wheat grains (cultivars unknown) were purchased from a local supermarket in Wuhan of China. Profiling of the Streptomyces VOCs For preparation of the VOCs, an aliquot (1 mL) of a spore suspension (1 × 10 8 spores per mL) of S. yanglinensis 3-10 was pipetted to a 250-mL Erlenmeyer flask containing 100 mL ISP-2 liquid medium. The flask was mounted on a rotary shaker and the culture was shake-incubated at 150 r/min at 28 • C for 48 h. The resulting culture was used as the seed inoculum for inoculation of the AWG medium in 250-mL flasks each containing 80 g AWG. The ratio of the seed inoculum of S. yanglinensis 3-10 and AWG was 1:4 (v/w). The AWG cultures were incubated at 28 • C in dark for 3, 7, 10, and 14 days for production of the VOCs. For profiling of the VOCs, a flask with 80 g AWG culture of S. yanglinensis 3-10 (3, 7, 10, or 14-day-old) and another flask with 80 g fresh AWG (control) were maintained at 50 • C for 30 min for emission of the VOCs. A fiber coated with VOC adsorbent, namely, divinylbenzene/carboxen/polydimethylsiloxane (SUPELCO R , Bellfonte, PA, United States), was inserted into the airspace of a flask for 30 min to absorb the VOCs in that flask. 
The fiber was pulled out and immediately inserted into the injection port of a gas-chromatography and mass-spectrometry (GC-MS) instrument (Thermo Scientific DSQII, United States) equipped with an Agilent J & W HP-5MS fused-silica capillary column (30 m × 0.25 mm × 0.25 µm, length × inner diameter × film thickness) (Agilent Technologies Inc., Santa Clara, CA, United States). The GC-MS was performed using the procedures described in our previous study (Wan et al., 2008;Huang et al., 2011). Mass spectra were obtained using the scan modus with the total ion counts ranging from 45 to 650 m/z. The VOCs were identified by comparison of their mass spectra with those in the database of the National Institute of Standards and Technology (NIST)/EPA/NIH library (Version 2.0) deposited in GC-MS with the similarity index higher than 800. The VOCs detected both in the AWG cultures of S. yanglinensis 3-10 and in the fresh AWG were not considered to be the components produced by S. yanglinensis 3-10. The analysis was repeated once with three replicates both for the AWG cultures of S. yanglinensis 3-10 and for the fresh AWG. RT-PCR Detection of Expression of Selected VOC Synthase Genes in S. yanglinensis Nineteen major VOCs were identified by GC-MS in the AWG cultures of S. yanglinensis 3-10 (Table 1 and Supplementary Figure S1). Production of five of the VOCs, including β-caryophyllene (β-CA), trans-1,10-dimethyl-trans-9-decalinol (geosmin), 2-methyl-2-bornene (2-M2B), 2-methylisoborneol (2-MIB), and 2-PE was confirmed by detection of expression of the genes responsible for biosynthesis of these VOCs in S. yanglinensis 3-10. First, the whole genome of S. yanglinensis 3-10 was sequenced by Novogene Co. Ltd. (Beijing, China). Then, the genome sequence of S. yanglinensis 3-10 was submitted to the AntiSMASH database 1 and the KEGG pathway database 2 for search of the VOC biosynthesis-related genes or pathways. Five genes coding for the VOC biosynthetic enzymes were found, including the genes coding for 2-MIB synthase (GenBank Acc. No. MK861971), methyltransferase (GenBank Acc. No. MK861972), geosmin synthase (GenBank Acc. No. MK861973), aryl-alcohol dehydrogenasae (GenBank Acc. No. MK861974), and (+)-β-CA synthase (GenBank Acc. No. MK861975) (Supplementary Figures S2-S14). The DNA sequences of these genes as well as the gene for DNA gyrase subunit B (gyrB) were used for designing specific oligonucleotide PCR primers. DNA gyrase subunit B (gyrB) gene was used as the reference gene. Total RNA was extracted using E.Z.N.A R Bacterial RNA Kit (Omega Bio-tek, Inc., Norcross, GA, United States) from the mycelial masses of S. yanglinensis 3-10 harvested from the cultures (28 • C) on ISP-2 agar medium. The extract was treated with DNase I (TaKaRa Biotechnol. Co. Ltd., Dalian, China) to eliminate DNA contamination. The RNA of ∼1 µg was reverse transcribed with the reagents in ThermoScript One Step RT-PCR Kit (TaKaRa Biomedical Technology Co., Ltd., Beijing, China). The resulting transcripts were then used as templates in PCR detection of the five VOC biosynthetic genes as well as the gyrB gene with the specific primers and the specific thermal programs (Supplementary Table S2). The PCR products were separated by agarose gel electrophoresis (1%, w/v) and the DNA bands were viewed on an UV trans-illuminator after staining with ethidium bromide solution (1.5 µg/L) for 10 min. Antifungal Activity of the Selected VOCs Against Aspergillus Three synthetic chemicals present in the VOC profile of S. 
yanglinensis 3-10, including 2-PE, M2M, and β-CA, were selected for testing their antifungal activity against A. flavus and A. parasiticus. The chemicals (purity > 98.5%) were purchased from Sigma-Aldrich (St. Louis, MO, United States). IMG and inhibition of conidial germination by these chemicals were determined in two-compartment plastic Petri dishes (9 cm in diameter). In the bioassay for IMG, 10 mL melted PDA was poured into one compartment of a dish. A mycelial agar plug (6 mm in diameter) of A. flavus or A. parasiticus from the margin area of a 3-day-old PDA culture (28 °C) was placed on the PDA in that compartment. Then, two filter paper pieces (FPPs) of approximately 1.6 × 1.5 cm (length × width) in size were placed in the other compartment of that dish. A synthetic chemical was pipetted onto the two FPPs at 2.5, 5.0, 12.5, 25.0, 50.0, or 100 µL on each FPP. In the dish for the control treatment (CK), sterile distilled water was added to the two FPPs, 50 µL on each FPP. There were four dishes as four replicates for each chemical at each dosage and for the control treatment. The dishes were individually sealed with Parafilm "M" (Neenah, WI, United States) and placed in an incubator at 28 °C in the dark for 3 days. The diameter of the colony of A. flavus or A. parasiticus in each dish was measured, and the percentage of IMG was calculated using the following formula: IMG (%) = [(AD_CK − 6) − (D_VOC − 6)]/(AD_CK − 6) × 100, where AD_CK represents the average colony diameter of A. flavus or A. parasiticus in the control treatment, D_VOC represents the colony diameter of A. flavus or A. parasiticus in a dish treated with an investigated VOC chemical at a given dosage, and the value "6" represents the diameter of the mycelial agar plug of A. flavus or A. parasiticus. The concentration for 50% IMG (IC50, in µL/mL) of a chemical was inferred from the data on IMG and the dosages of that VOC chemical applied to the dishes (Huang et al., 2011). In the bioassay for inhibition of conidial germination, 10 mL of melted GA medium was poured into one compartment of a two-compartment dish. An aliquot (100 µL) of the conidial suspension (1 × 10⁶ conidia/mL) of A. flavus or A. parasiticus harvested from 3-day-old PDA cultures (28 °C) was pipetted onto that compartment, and the conidial suspension drop was evenly spread using a sterilized glass spatula. Meanwhile, a synthetic chemical was pipetted onto two FPPs in the other compartment of that dish at 2.5, 5.0, 12.5, 25.0, 50.0, or 100 µL on each FPP. In the dish for the control treatment, conidia of A. flavus or A. parasiticus were plated on GA in one compartment and sterile distilled water was added to the two FPPs in the other compartment, 50 µL on each FPP. There were four dishes as four replicates for each VOC chemical at each dosage and for the control treatment. The dishes were sealed with Parafilm and placed in an incubator at 28 °C for 12 h. Conidial germination on GA in each dish was observed under a compound light microscope by randomly counting at least 100 conidia per dish, and the percentage of germinated conidia was then calculated. A conidium was considered to have germinated when the length of the germ tube was equal to or longer than the diameter of that conidium. The percentage of inhibition of conidial germination of A. flavus or A. parasiticus by a VOC chemical was calculated from the difference in the percentages of germinated conidia between the control treatment and the treatment with that VOC chemical at a given dosage.
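To make the arithmetic concrete, the sketch below (Python; not part of the original study) computes the IMG percentage from the colony diameters defined above and estimates an IC50 by linear interpolation between the two dosages that bracket 50% inhibition. The interpolation method and all numeric values are illustrative assumptions; the study itself cites Huang et al. (2011) for the IC50 inference. The same interpolation applies to the conidial germination data, whose IC50 inference is described next.

```python
# Illustrative sketch (not the authors' code): IMG percentage and IC50 estimation.
import numpy as np

PLUG_DIAMETER = 6.0  # mm, diameter of the mycelial agar plug

def img_percent(ad_ck: float, d_voc: float) -> float:
    """IMG (%) = [(AD_CK - 6) - (D_VOC - 6)] / (AD_CK - 6) * 100.
    The plug diameter cancels in the numerator, leaving (AD_CK - D_VOC)."""
    return (ad_ck - d_voc) / (ad_ck - PLUG_DIAMETER) * 100.0

def ic50(dosages, inhibitions):
    """Estimate the dosage giving 50% inhibition by linear interpolation
    between the two tested dosages that bracket 50% (an assumed method)."""
    d = np.asarray(dosages, dtype=float)
    i = np.asarray(inhibitions, dtype=float)
    order = np.argsort(d)
    d, i = d[order], i[order]
    for k in range(1, len(d)):
        if i[k - 1] < 50.0 <= i[k]:
            frac = (50.0 - i[k - 1]) / (i[k] - i[k - 1])
            return d[k - 1] + frac * (d[k] - d[k - 1])
    return float("nan")  # 50% inhibition not reached within the tested range

# Hypothetical example: dosages (µL per FPP) and colony diameters (mm).
dosages = [2.5, 5.0, 12.5, 25.0, 50.0, 100.0]
control_diameter = 52.0
treated_diameters = [48.0, 43.0, 35.0, 27.0, 18.0, 11.0]
img_values = [img_percent(control_diameter, d) for d in treated_diameters]
print([round(v, 1) for v in img_values])
print(round(ic50(dosages, img_values), 1))
```

Percentage inhibition of conidial germination can be passed to the same ic50() helper in place of the IMG values.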
The IC 50 value for that VOC chemical was thus inferred based on the data about percentages of inhibition to conidial germination and the dosage of the chemical applied to the dishes (Huang et al., 2011). Suppression of Mycelial Growth and Sporulation of Aspergillus by the VOCs of S. yanglinensis 3-10 Two bioassays were carried out to determine the efficacy of the VOCs of S. yanglinensis 3-10 in suppression of mycelial growth and sporulation by A. flavus and A. parasiticus. The first bioassay is a time-course trial, aiming at determination of the time-course of production of the VOCs by S. yanglinensis 3-10 in AWG. S. yanglinensis 3-10 was inoculated in flasks containing AWG (80 g per flask) and the cultures were incubated at 28 • C in dark for 3, 7, 10 and 14 days. They were used as source of the VOCs in determination of antifungal activity against A. flavus and A. parasiticus in double-dish sets (DDSs) described by Huang et al. (2011). A DDS consisted of two cover-free bottom glass dishes (9 cm in diameter), one dish was loaded with 10 g AWG culture of S. yanglinensis 3-10 at a given incubation time and the other bottom dish with PDA (20 mL) was inoculated in the center with a mycelial agar plug (6 mm in diameter) of A. flavus or A. parasiticus. For the control treatment (CK), one bottom dish was loaded with 10 g fresh AWG and the other bottom dish with PDA (20 mL) was inoculated with A. flavus or A. parasiticus. The two bottom dishes for each treatment were put together in an opposite direction (upper dish with A. flavus/A. parasiticus, lower dish with S. yanglinensis/fresh AWG) and sealed with a piece of parafilm (Huang et al., 2011). There were three DDSs as three replicates for each treatment. The DDSs were placed at 28 • C in the dark for 3 days. Colony diameter of A. flavus or A. parasiticus in each DDS was measured. Meanwhile, the conidia of A. flavus or A. parasiticus in the PDA dish of each DDS were washed off using 20 mL water amended with 0.1% Tween 20 (v/v). The concentration of the conidia in the resulting conidial suspension was determined with the aid of a hemocytometer under a compound light microscope. Conidial yield (conidia/mm 2 ) in each culture was calculated with the data on total conidial number and colony size of that culture. The second bioassay is a dosage trial, aiming at determination of the antifungal activity of the VOCs from different dosages of the 7-day-old AWG cultures of S. yanglinensis 3-10 (VOCs 3−10AWG ). A DDS was established with a bottom dish containing 40 g fresh AWG (control), or the AWG culture of S. yanglinensis 3-10 at 5, 10, 20, 30, or 40 g per dish, and another bottom dish containing PDA inoculated with a mycelial agar plug of A. flavus or A. parasiticus. The DDSs were individually sealed with parafilm. There were three DDSs as three replicates for VOCs 3−10AWG of each dosage and the control treatment. The DDSs were placed at 28 • C in dark for three days. Colony diameter and number of conidial yield (conidia/mm 2 ) were measured using the procedures described above. Additionally, the VOCs 3−10AWG were determined for suppression of mycelial growth of 20 other fungi and two species of Pythium (Supplementary Table S1) in DDSs using the procedures described above. A DDS in the treatment of the VOCs 3−10AWG was established with a bottom dish with PDA, which was inoculated with a target organism, and another bottom dish, which was loaded with 10 g 7-day-old AWG culture of S. yanglinensis 3-10. 
In the control treatment, a DDS consisted of a bottom dish with PDA, which was also inoculated with the same target organism, and another bottom dish with 10 g fresh AWG. For each target organism, there were three DDSs as three replicates for VOCs 3−10AWG , and other three DDSs as three replicates for the control treatment. The DDSs were incubated at 20, 25, or 28 • C for 1-7 days depending on thermal adaptation of the target organism. Diameter of the colony in each DDS was measured. The IMG value against each target organism was calculated using the formula mentioned above. Suppression of Conidial Germination of Aspergillus by the VOCs of S. yanglinensis 3-10 Both A. flavus and A. parasiticus were inoculated on PDA and the cultures were incubated at 28 • C for 3 days. Conidia of each fungus were harvested from the PDA cultures by washing with sterile distilled water. The mixtures with conidia and hyphal fragments were filtered through four-layered cheesecloth to remove the hyphal fragments. The conidial concentration in the resulting conidial suspension was adjusted to 1 × 10 6 conidia/mL with sterile distilled water. Aliquots of the conidial suspension of A. flavus or A. parasiticus were pipetted onto the GA medium in Petri dishes (9 cm diameter) at 200 µL per dish and the conidia in the conidial suspension drop were evenly spread using a sterilized glass spatula. There were two bioassays in this experiment, the time-course bioassay and the dosage bioassay. In the time-course bioassay, a DDS was established with two bottom dishes, one bottom dish containing GA was inoculated with the conidia of A. flavus or A. parasticus, and another bottom dish was loaded with 10 g fresh AWG (CK) or 10 g AWG cultures of S. yanglinensis 3-10 of a given incubation time ( 3-, 7-, 10-, or 14-day old). There were three DDSs as three replicates for each treatment and the DDS cultures were then incubated at 28 • C in dark for 12 h. Conidial germination of A. flavus or A. parasiticus on GA in each DDS was observed under microscope by randomly counting at least 100 conidia on GA. Meanwhile, length of at least 50 randomly selected germ tubes in that DDS was measured. In the dosage bioassay, a DDS was established with a bottom dish containing the conidia of A. flavus A. parasticus on GA, and another bottom dish containing 40 g fresh AWG (CK) or the 7day-old AWG culture of S. yanglinensis 3-10 of a given dosage (5, 10, 20, 30, or 40 g per dish). There were three DDSs for each treatment as three replicates. The DDSs were individually sealed with parafilm and placed at 28 • C in dark for 12 h. Conidial germination of A. flavus or A. parasiticus in each DDS was observed and length of germ tubes of each fungus was measured. Streptomyces VOC-Mediated Suppression of Aspergillus Infection of Peanut Kernels Kernels of peanut (Arachis hypogaea L., cultivar unknown) were purchased from a local supermarket in Wuhan of China. They were soaked in sterile distilled water for 4 h, followed by surface sterilization in 70% ethanol (v/v) for 2 min and rinsing in sterile distilled water for three times, 1 min each time. Then, the kernels were blotted dry on pieces of sterilized paper towels and loaded in Petri dishes (6 cm in diameter), 10 kernels per dish. The cover of the dishes was removed and the bottom dishes with the peanuts were placed in a laminar flow hood for 30 min for evaporation of the water remains on the peanut kernel surface. Meanwhile, conidia of A. flavus and A. 
parasiticus were harvested from the PDA cultures (28 • C, 3 days) by washing with sterile distilled water. The resulting conidial suspensions (1 × 10 6 conidia/mL) were amended with 0.5% D-glucose (w/v), which served as the exogenous nutrient for triggering germination of the conidia. For each tested fungus, aliquots of the conidial suspension were pipetted to the peanut kernels in the dishes, 500 µL per dish and 42 dishes for each fungus. The dishes were gently shaken to ensure that all the kernels were contaminated with the conidia. The kernel-containing dishes inoculated with each fungus were divided into six lots as six treatments (seven dishes in each lot), one control treatment with VOCs from the fresh AWG (VOCs AWG ) and five treatments with the VOCs from S. yanglinensis (VOCs 3−10AWG ). The bioassay was done in 12 glass desiccators (∼5.8 L in airspace), six for A. flavus and another six for A. parasiticus. For each fungus, the desiccators for the control treatment was loaded at the bottom with 500 g fresh AWG as source of VOCs AWG , a seven-dish lot with the A. flavus-or A. parasiticusinoculated peanut kernels were placed on the perforated ceramic clapboard (Supplementary Figure S15). Five other desiccators were loaded at the bottom with the 7-day-old AWG cultures of S. yanglinensis 3-10 at 100, 200, 300, 400, and 500 g per desiccator (equivalent to 17, 34, 52, 69, and 86 g/L, respectively). Five other six-dish lots with the A. flavus-or A. parasiticusinoculated kernels were placed on the perforated ceramic clapboards of those desiccators, seven dishes in each desiccator. These five treatments were designated as VOCs 3−10AWG -17, VOCs 3−10AWG -34, VOCs 3−10AWG -52, VOCs 3−10AWG -69, and VOCs 3−10AWG -86. The desiccators were covered with the lids, sealed with parafilm, and finally maintained in an incubator at 28 • C in dark for 7 days. The kernels in three of the seven dishes in a desiccator (for a treatment) were individually rated for disease severity using a numerical scale of 0-5, where 0, healthy without visible mycelia or sporulation on the kernel surface; 1, sparse mycelia on the kernel surface without visible sporulation; 2, dense mycelia on the kernel surface without visible sporulation; 3, dense mycelia on the kernel surface with sparse sporulation; 4, dense mycelia on the kernel surface with moderate sporulation; and 5, dense mycelia on the kernel surface with vigorous sporulation. Then, the kernels in each dish were transferred to a 250-mL flask containing 50 mL water amended with 0.1% Tween 20 (v/v). The flask was stirred for 5 min to wash the conidia off. The mixture was filtered with four layers of cheesecloth to obtain the conidial suspension, which was consequently determined for conidial concentration using a hemocytometer. The conidial yield per kernel was calculated based on the data about conidial concentration, volume of the conidial suspension, and number of the kernels. The kernels in four other dishes in that desiccator were used for scanning electron microscope (SEM) observation of fungal colonization and sporulation and quantification of the content of aflatoxins with the following procedures. Scanning Electron Microscopy A peanut kernel from one of the seven dishes in a desiccator (for a treatment) was randomly selected for SEM observation of colonization and sporulation of the two fungi on the kernel surface. The peel of each kernel was carefully taken off and cut into to small pieces (∼3 × 3 mm, length × width) using a sharp razor blade. 
The kernel peel pieces were immediately fixed in the glutaraldehyde fixative, followed by dehydration with gradient ethanol, drying in a Critical Point Dryer (Model: 13200E-AB, SPI SUPPLIES, West Chester, PA, United States), and gold-coating in a sputter coater (Model: JFC-1600, NTC, Tokyo, Japan) using the conventional procedures. Finally, the specimens were observed under a SEM (Model: JSM-6390/LV, NTC, Tokyo, Japan). Quantification of the Aflatoxins in Peanut Kernels The kernels in three of the seven dishes in a desiccator (for a treatment) were dried at 50 • C for 3 days and ground to fine powder using a mortar and pestle. The powder (5 g) for each treatment was suspended in 25 mL 70% methanol (v/v) in a 50-mL plastic tube, followed by sonication for 60 min and centrifugation at 5000 × g for 10 min to remove the granules in the suspension. The resulting supernatant was transferred to a new plastic tube and hexane was added at the volume ratio of 1:1 to extract aflatoxins (Gong et al., 2015). The upper hexane layer (500 µL) was pipetted out and used for identification and quantification of the aflatoxins by LC-MS (Waters ACQUITY UPLC H-Class system coupled to the XEVO TQ-S tandem quadrupole, Waters Cooperation, Milford, MA, United States). The mobile phase for the linear gradient washing consisted of two components, namely, A (MeOH) and B (5 mmol/L ammonium acetate, 0.05% formic acid in water). The washing lasted for 7 min with the program being set as follows: 1 min with A + B (20% + 80%); 3 min also with A + B (A: 20%→100%, B: 80%→0%); 1 min with A alone; 0.5 min with A + B (A: 100%→20%, B: 0%→80%); and 1.5 min also with A + B (A: 20%, B: 80%). The flow rate was adjusted to 0.3 mL/min. AFB 1 , AFB 2 , AFG 1 , and AFG 2 were identified based on the molecular ion peaks (m/z) at 313, 315, 329, and 331, respectively (Nonaka et al., 2009). The standard AFB 1 , AFB 2 , AFG 1 , and AFG 2 (Sigma-Aldrich R , St. Louis, MO, United States) were used as reference in identification and quantification. Determination of Expression of the Aflatoxins Biosthynesis Genes The conidia of A. flavus or A. parasiticus were harvested from 3-day-old PDA cultures and then spread on a cellophane film placed on PDA in a Petri dish (9 cm in diameter) with 200 µL conidial suspension (1 × 10 7 conidia/mL). Another Petri dish was loaded with 10 g the 7-day-old AWG culture of strain 3-10 or 10 g fresh AWG (CK). Then, the two dishes were faceto-face sealed by parafilm to form a DDS. After co-culturing at 28 • C for 72 h, the mycelia on the film were collected and immediately frozen in liquid nitrogen. Total RNA in the mycelial sample was extracted using E.Z.N.A R Fungal RNA Kit (Omega Bio-tek, Inc., Norcross, GA, United States) according to the provided manual. Expression of eleven important genes (aflR, AccC, aflCa, aflA, aflS, aflO, aflD, aflF, aflP, aflQ, and aflX) in the aflatoxins biosynthesis pathway in A. flavus and A. parasiticus were determined by quantitative real-time PCR (qRT-PCR) using the method described by Gong et al. (2019). The primers used for the qRT-PCR are listed in Supplementary Table S3. Effect of the VOCs of S. yanglinensis on Growth of Peanut This is a VOCs-fumigation bioassay, aiming at determining the effect of the VOCs from S. yanglinensis 3-10 on growth of peanut seedlings. Peanut kernels (A. hypogaea cultivar: Zhonghua No. 12) were soaked in water for 12 h and placed on moisturized filter papers in Petri dishes (15 cm in diameter), 30 kernels per dish. 
The dishes were maintained at 28 °C under a lighting regime of 12-h light and 12-h dark for 3 days. The pre-germinated peanut kernels were sown in plant culture mix in plastic pots (9.5 cm × 9.0 cm, height × diameter), one kernel in each pot. The plant culture mix contained Organic Culture Mix (Zhejiang Peilei Organic Fertilizer Co., Ltd., Zhengjiang, Jiangsu Province, China; N + P + K, > 2%; organic matter content, > 35%; pH 5.5-6.5) and vermiculite at a ratio of 6:4 (w/w). The culture mix in the pots (9.0 cm × 8.5 cm, diameter × height) was watered to 70-80% of the maximum water-holding capacity. Finally, the pots were maintained in a growth chamber (20-25 °C) under fluorescent light with a regime of 12-h light/12-h dark. When the peanut seedlings reached a height of 6-8 cm, the pots with the seedlings were transferred to three plastic boxes (55 cm × 40 cm × 36.5 cm, length × width × height, ∼80 L in volume), 16 pots in each box, for the following three treatments: one box for the control treatment with the VOCs from 960 g fresh AWG medium, and two other boxes for two treatments with the VOCs from the 7-day-old AWG cultures of S. yanglinensis 3-10, one containing 960 g AWG culture of S. yanglinensis 3-10 as the low dosage (12 g/L) and the other containing 2720 g AWG culture of S. yanglinensis 3-10 as the high dosage (34 g/L). The boxes were individually covered with plastic films and maintained in the growth chamber for 7 days. The seedlings were then carefully uprooted and washed under running tap water to remove soil particles. Shoot length of each seedling was measured, and the seedlings were dried at 50 °C for 48 h for measurement of the total dry weight of each seedling. Effect of Soil Amendment With S. yanglinensis 3-10 on Seedling Growth of Peanut This is a soil amendment bioassay, aiming at determining the effect of soil amendment with S. yanglinensis 3-10 on growth of peanut seedlings. The 7-day-old AWG cultures of S. yanglinensis 3-10 (28 °C) and the fresh AWG medium were air-dried at room temperature (20-25 °C) and ground to fine powder, which was separately amended into the plant culture mix at a ratio of 5% (w/w). The culture mix of the different treatments was loaded into pots, where pre-germinated peanut kernels were sown, one kernel in each pot and 16 pots for each treatment. The pots were maintained in the growth chamber (20-25 °C, 12-h light and 12-h dark) for 30 days. Height and total dry weight of each peanut seedling were measured. This experiment was repeated two more times. Data Analysis Data on colony diameter, yield of conidia produced by A. flavus and A. parasiticus, percentages of germinated conidia and length of germ tubes, disease severity, and yield of aflatoxins in peanut kernels in the related experiments were separately analyzed using PROC ANOVA (analysis of variance) in the SAS software (SAS Institute, Cary, NC, United States, version 8.0, 1999). Before ANOVA, the data on conidial yield per dish were log₁₀-transformed, and the data on percentages of germinated conidia were transformed to numerical data by multiplying each percentage value by 100. After ANOVA, the values were back-transformed to their original numerical forms. The means of each parameter for the different treatments in each experiment were separated using the least significant difference (LSD) test at α = 0.05.
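The statistical workflow described above was carried out in SAS. As an illustration only, the following Python sketch mirrors the same steps (one-way ANOVA followed by Fisher's LSD mean separation at α = 0.05) on made-up data; it is not the authors' actual analysis script.

```python
import numpy as np
from scipy import stats

def one_way_anova_lsd(groups, alpha=0.05):
    """One-way ANOVA followed by Fisher's LSD threshold (equal group sizes assumed).

    groups: dict mapping treatment name -> list of replicate values.
    Returns (F statistic, p value, LSD threshold, treatment means)."""
    data = [np.asarray(v, dtype=float) for v in groups.values()]
    k = len(data)
    n = len(data[0])                      # replicates per treatment
    means = {name: v.mean() for name, v in zip(groups, data)}
    # Within-group (error) mean square for the LSD calculation
    ss_within = sum(((v - v.mean()) ** 2).sum() for v in data)
    df_error = sum(len(v) for v in data) - k
    mse = ss_within / df_error
    F, p = stats.f_oneway(*data)
    t_crit = stats.t.ppf(1 - alpha / 2, df_error)
    lsd = t_crit * np.sqrt(2 * mse / n)   # two means differ if |mean_i - mean_j| > lsd
    return F, p, lsd, means

# Illustrative colony-diameter data (mm) for three hypothetical treatments.
groups = {
    "CK": [54.0, 55.1, 53.8],
    "VOC_low": [42.3, 41.8, 43.0],
    "VOC_high": [18.9, 19.5, 19.1],
}
F, p, lsd, means = one_way_anova_lsd(groups)
print(f"F = {F:.2f}, p = {p:.4f}, LSD(0.05) = {lsd:.2f}")
print({name: round(m, 1) for name, m in means.items()})
```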
Antifungal Activity of the Selected VOCs Three synthetic compounds, namely, β-CA, M2M, and 2-PE, were purchased and tested for suppression of A. flavus and A. parasiticus. The results showed that M2M and 2-PE had high antifungal activity against the two fungi. In terms of IMG, M2M showed IC₅₀ values of 7.2 and 8.0 µL/mL against A. flavus and A. parasiticus, respectively. 2-PE had even lower IC₅₀ values than M2M against the two fungi, 1.2 µL/mL against A. flavus and 1.5 µL/mL against A. parasiticus (Table 2). In terms of inhibition of conidial germination, M2M showed IC₅₀ values of 0.7 and 1.2 µL/mL against A. flavus and A. parasiticus, respectively. These values were much lower than those of 2-PE, which had IC₅₀ values of 51.2 µL/mL against A. flavus and 46.2 µL/mL against A. parasiticus. In contrast, β-CA had IC₅₀ values higher than 100 µL/mL in terms of both IMG and inhibition of conidial germination of the two fungi. Antifungal Activity of the VOCs of S. yanglinensis 3-10 Against Aspergillus Results from two bioassays in DDSs showed that the VOCs from the AWG cultures of S. yanglinensis 3-10 (VOCs3−10AWG) had strong antifungal activity against A. flavus and A. parasiticus. In the time-course bioassay, both A. flavus and A. parasiticus grew and formed significantly (P < 0.05) larger colonies on PDA in the control treatment with the VOCs from fresh AWG (VOCsAWG) than in the treatment of VOCs3−10AWG. At 3 dpi, A. flavus and A. parasiticus had average colony diameters of 54.2 and 46.2 mm, respectively, in VOCsAWG (Table 3). These values were significantly (P < 0.05) higher than those in VOCs3−10AWG from the 3- to 14-day-old AWG cultures of S. yanglinensis 3-10, in which A. flavus had average colony diameters smaller than 42 mm (reduced by 23-38% compared to that in VOCsAWG) and A. parasiticus had average colony diameters smaller than 29 mm (reduced by 58-70% compared to that in VOCsAWG). Both fungi sporulated abundantly in VOCsAWG, with average conidial yield reaching up to 1.3 × 10⁵ conidia/mm² for A. flavus and 1.0 × 10⁵ conidia/mm² for A. parasiticus. These values were significantly (P < 0.05) higher than those in VOCs3−10AWG, in which A. flavus had average conidial yields lower than 0.3 × 10⁵ conidia/mm² (reduced by 74-82% compared to that in VOCsAWG) and A. parasiticus had average conidial yields lower than 3.1 × 10³ conidia/mm² (reduced by 97-98% compared to that in VOCsAWG). Results of conidial germination on GA (28 °C, 12 h) showed that in VOCsAWG, conidia of the two fungi germinated at rates of approximately 95%. In most treatments of VOCs3−10AWG, however, the conidial germination rates of both fungi were significantly (P < 0.05) reduced compared to that in VOCsAWG. A. flavus had average conidial germination rates ranging from 65 to 87% in VOCs3−10AWG from the 3-, 7-, and 10-day-old AWG cultures (reduced by 9-32% compared to that in VOCsAWG). However, A. flavus germinated at 92.4% in VOCs3−10AWG from the 14-day-old AWG culture, not significantly (P > 0.05) different from that in VOCsAWG. Regarding germ-tube length, A. flavus and A. parasiticus had average values of 253.5 and 231.8 µm, respectively, in VOCsAWG (Table 3).
The values were significantly (P < 0.05) reduced in the treatments of VOCs3−10AWG, in which A. flavus had average germ-tube lengths ranging from 58.5 to 210.8 µm (reduced by 17-76% compared to that in VOCsAWG) and A. parasiticus had average germ-tube lengths ranging from 11.4 to 82.8 µm (reduced by 64-95% compared to that in VOCsAWG). Results from the dosage bioassay showed that the efficacy of the VOCs3−10AWG from the 7-day-old AWG cultures of S. yanglinensis 3-10 in suppression of mycelial growth, conidial production, conidial germination, and germ-tube elongation was positively correlated with the amount of the AWG cultures of S. yanglinensis 3-10 applied to the DDSs. For A. flavus, with increase in the dosage of the AWG cultures of S. yanglinensis 3-10 from 5 to 40 g per DDS, the suppressive efficacy increased from 32 to 79% for colony size, from 73 to 100% for conidial yield, from 2.9 to 99% for conidial germination rates, and from 38 to 98% for germ-tube length compared to the corresponding values in VOCsAWG. Similarly, for A. parasiticus, the suppressive efficacy increased from 38 to 78% for colony size, from 95 to 100% for conidial yield, from 2.5 to 100% for conidial germination rates, and from 66 to 100% for germ-tube length compared to the corresponding values in VOCsAWG (Table 4). The Antifungal Spectrum of the VOCs From S. yanglinensis 3-10 Besides A. flavus and A. parasiticus, 20 other fungi and two species of Pythium (Oomycetes) were tested for sensitivity to VOCs3−10AWG from the 7-day-old AWG culture of S. yanglinensis 3-10 in the DDS bioassay. Results showed that these fungi and fungus-like organisms differed in response to VOCs3−10AWG for mycelial growth on PDA (Table 5). Streptomyces VOCs-Mediated Suppression of Expression of Aflatoxin Biosynthesis Genes in Aspergillus The results showed that the VOCs from S. yanglinensis 3-10 reduced expression of the aflatoxin biosynthesis genes. In A. flavus, expression of seven (aflR, aflA, aflS, aflP, aflQ, aflX, and AccC) out of the 11 tested genes was reduced by 1.18- to 20.17-fold in the VOC treatment, compared to the expression level of each gene in the control treatment (Figure 5A). In A. parasiticus, expression of all 11 genes was reduced by 5.7- to 537.2-fold in the VOC treatment, compared to the expression level of each gene in the control treatment (Figure 5B). Effects of the Streptomyces VOCs and Soil Amendment With S. yanglinensis 3-10 on Growth of Peanut Seedlings In the VOCs-fumigation bioassay, the peanut seedlings were exposed for 7 days (20-25 °C) to the VOCs either from the fresh AWG (VOCsAWG as control, 12 g/L) or from the AWG cultures of S. yanglinensis 3-10 (VOCs3−10AWG, 12 or 34 g/L). The results showed that the peanut seedlings in all treatments grew normally. The treatments did not differ significantly (P > 0.05) in average seedling height or total dry weight per seedling (Supplementary Table S4 and Supplementary Figure S17). In the soil amendment bioassay, the culture mix was amended either with the powder of the uncolonized AWG medium (control) or with the powder of S. yanglinensis-colonized AWG (5%, w/w). The 30-day-old seedlings (25 °C) in both treatments did not differ significantly (P > 0.05) from each other in average seedling height or total dry weight per seedling (Supplementary Table S4). These results suggest that the VOCs from S. yanglinensis 3-10 and soil amendment with S. yanglinensis 3-10 may have no harmful effect on peanut seedling growth.
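The fold-change values for the aflatoxin biosynthesis genes reported above come from qRT-PCR performed as in Gong et al. (2019); since the calculation itself is not shown in this excerpt, the sketch below assumes the standard 2^−ΔΔCt method with a hypothetical reference gene, purely as an illustration of how such fold changes are typically derived.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression (treated vs. control) by the standard 2^-ddCt method."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for one aflatoxin gene (e.g., aflR) and a reference gene.
rel_expr = fold_change_ddct(ct_target_treated=28.5, ct_ref_treated=20.1,
                            ct_target_control=24.2, ct_ref_control=20.0)
fold_reduction = 1 / rel_expr  # how the "reduced by X-fold" values are usually expressed
print(f"relative expression = {rel_expr:.3f}; fold reduction = {fold_reduction:.1f}")
```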
DISCUSSION Control of A. flavus and A. parasiticus in food/feed production is fundamentally important because of their aflatoxin-producing ability. Effective control of food/feed contamination by Aspergillus spp. can be achieved by minimizing the amount of the primary inoculum in the field and/or under storage conditions (Abdel-Kareem et al., 2019). Besides the traditional control methods, microbial VOCs used as fumigants may be a promising alternative. Streptomyces spp. are well known as producers of hydrolytic enzymes and antifungal metabolites with inhibitory effects against many plant pathogenic fungi, including Aspergillus species (Mander et al., 2016; Shakeel et al., 2018). Use of VOCs from Streptomyces for control of Aspergillus contamination in peanut kernels has not been reported so far. In this study, we demonstrated that the VOCs from S. yanglinensis 3-10 could prevent Aspergillus contamination on peanut kernels under storage conditions. The VOCs also showed antifungal activity against mycelial growth of 20 fungal species and one Oomycete (Table 5). Among these organisms, besides Aspergillus, species in the genera Mucor, Rhizopus, Botrytis, Monilia, and Pythium cause rot diseases of fruits and vegetables during storage. These results suggest that the VOCs from S. yanglinensis 3-10 have promising potential as a biofumigant with a broad antifungal spectrum and could be used in food/feed postharvest disease control. The VOCs from S. yanglinensis 3-10 exhibited a suppressive effect on A. flavus and A. parasiticus in in vitro assays and on peanut kernels under storage conditions. In the in vitro assays, the 7- and 10-day-old AWG cultures of S. yanglinensis 3-10 showed strong inhibitory activity against mycelial growth, sporulation, conidial germination, and germ-tube elongation of A. flavus and A. parasiticus (Table 3). In the in vivo assay, SEM observation showed that the conidia of A. flavus and A. parasiticus inoculated on the surface of peanut kernels hardly germinated in the treatments with high dosages of the AWG cultures of S. yanglinensis 3-10 (Figures 2, 3), and the average disease severity was significantly decreased under fumigation with the VOCs from S. yanglinensis 3-10 (Figure 4A). In the presence of high dosages of the AWG cultures of S. yanglinensis 3-10 (52-86 g/L), both A. flavus and A. parasiticus hardly grew, sporulated, or produced aflatoxins (Figures 4B-D). It is well known that sporulating cultures of A. flavus and A. parasiticus are capable of producing aflatoxins (Seenappa and Kempton, 1980) and that conidia of A. flavus and A. parasiticus could be the major source of the primary inoculum (Diener et al., 1987). The VOCs from S. yanglinensis 3-10 showed a strong ability to prevent formation of the primary inoculum by inhibiting sporulation of A. flavus and A. parasiticus. Microbial VOCs diffuse easily under airtight conditions (Wang et al., 2013), and antifungal VOCs can inhibit the growth of plant pathogenic fungi without direct contact with the pathogens or hosts. Therefore, using VOCs from microbes for controlling food/feed postharvest diseases could be viewed as a safe and environmentally friendly measure. This suggests that the VOCs from S. yanglinensis 3-10 could be used as a biofumigant agent for storage of peanut kernels. Nineteen major VOCs from S.
yanglinensis 3-10 were detected and identified by SPME-GC-MS analysis (Table 1); most of these compounds have previously been detected in actinomycetes (Schöller et al., 2002; Wilkins and Schöller, 2009). The component 2-MIB was the main volatile among the VOCs emitted by S. yanglinensis 3-10, and trans-1,10-dimethyl-trans-9-decalinol (geosmin) was also detected. Both 2-MIB and geosmin are tertiary alcohols with an earthy smell and are principal odor components of soil (Buttery and Garibaldi, 1976). These two VOCs can be found in Streptomyces, cyanobacteria, and fungi such as Penicillium and Aspergillus species (Jüttner and Watson, 2007). The VOC M2M has been detected in apple cortex (Leisso et al., 2015), strawberry (Song et al., 2017), chamomile oil (Bail et al., 2009), a co-culture of Enterobacter cloacae and Pseudomonas aeruginosa (Lawal et al., 2018), and also in cultures of actinomycetes (Dickschat et al., 2011). M2M has been reported to have a moderate inhibitory effect against Staphylococcus aureus, Enterococcus faecalis, P. aeruginosa, Proteus vulgaris, Klebsiella pneumoniae, Salmonella sp., and Candida albicans (Bail et al., 2009). In our study, M2M showed high inhibitory activity against germination of conidia of A. flavus and A. parasiticus (Table 2). M2M and its homologs, including 2-methylbutyl acetate, 2-methylbutyl 2-methylbutyrate, 2-methylbutyl angelate, and ethyl 2-methylbutyrate, have been demonstrated to have inhibitory activity against fungi and bacteria, but the antimicrobial mechanism of these compounds has not been studied (Bail et al., 2009; Guo et al., 2019). FIGURE 5 | Expression of the genes for biosynthesis of the aflatoxins in A. flavus (A) and A. parasiticus (B) in the presence and absence of the VOCs of S. yanglinensis 3-10 (SY3-10). *Significant difference at P < 0.05 in comparison to the control treatment according to Student's t-test. 2-PE is one of the most widespread aromatic VOCs (Schulz and Dickschat, 2007). Production of 2-PE has been detected in the VOCs from yeast (Huang et al., 2011), the plant endophytic fungi M. albus (Strobel et al., 2001) and M. crispans (Mitchell et al., 2010), and Streptomyces (Li et al., 2010), and the compound showed antifungal activity against many plant pathogenic fungi. Previous studies reported that 2-PE from Pichia anomala inhibits mycelial growth and expression of aflatoxin biosynthetic genes in A. flavus (Hua et al., 2014; Chang et al., 2015). It showed a lethal effect against bacteria and fungi at very low concentrations (0.3-0.5%) (Chang et al., 2015). At sublethal concentrations, 2-PE was found to reduce the rates of mycelial growth and conidial germination (Chang et al., 2015). In previous studies, 2-PE was shown to inhibit DNA, amino acid, and protein biosynthesis and to disrupt membrane integrity (Chang et al., 2015). In our study, we found that 2-PE was more effective in suppressing mycelial growth than conidial germination (Table 2). This study found that the VOC β-CA showed weak antifungal activity against mycelial growth and conidial germination of A. flavus and A. parasiticus (Table 2). Previous studies demonstrated that β-CA can promote plant growth (Minerdi et al., 2011). Yamagiwa et al. (2011) reported that β-CA could also significantly enhance growth of Brassica campestris and resistance to anthracnose disease caused by Colletotrichum higginsianum. It seems that the VOC from S.
yanglinensis 3-10 may have the potential to enhance plant growth while also contributing synergistically to antifungal activity. In the VOCs from S. yanglinensis 3-10, 3-methyl-2-(2-methyl-2-butenyl)-furan (rosefuran) was detected in the 7-, 10-, and 14-day-old AWG cultures, but not in the 3-day-old AWG culture, and its content increased with culture time. Rosefuran is a minor but important olfactive ingredient of Bulgarian rose, Elsholtzia ciliata oil (Tsukasa, 1989), and the essential oil of Perilla ocimoides (Misra and Husain, 1987). Mori et al. (1998) reported that rosefuran is a sex pheromone of an acarid mite. To our knowledge, this is the first discovery of rosefuran in actinomycetes. The sulfur-containing VOCs dimethyl disulfide (DMDS) and dimethyl trisulfide (DMTS) were not detected in the VOCs from S. yanglinensis 3-10. They have been reported as main VOC components with a broad spectrum of antifungal activity in microbes such as Alcaligenes, Pseudomonas, Shewanella, and Streptomyces (Li et al., 2010; Gong et al., 2015, 2019). However, DMDS and DMTS have a repelling activity, which may limit their use as biofumigants. To date, only a few VOCs have been reported to have inhibitory activity against A. flavus and A. parasiticus (Gong et al., 2019). In our study, the novel VOC M2M showed inhibition of conidial germination and 2-PE showed a suppressive effect on mycelial growth in vitro, but the relative contents of M2M and 2-PE were below 5%. These results indicate that the inhibitory effect of the VOCs from S. yanglinensis 3-10 against Aspergillus may arise from synergistic effects among these volatiles. In Streptomyces, the biosynthesis pathways for 2-MIB, 2-M2B, geosmin, and β-CA have been studied (Supplementary Figure S2). Komatsu et al. (2008) identified 2-MIB synthases encoded by the SCO7700 gene in Streptomyces coelicolor A3(2), the SGR1269 gene in Streptomyces griseus IFO13350, and the SCAB 5041 gene in Streptomyces scabiei 87.22, which directly catalyze the biosynthesis of 2-MIB and 2-M2B. Jiang et al. (2007) reported that the geosmin synthase encoded by the SCO6073 gene in S. coelicolor A3(2) generates geosmin from farnesyl diphosphate. Nakano et al. (2011) characterized a terpenoid cyclase, encoded by the gene SGR2079 in S. griseus IFO13350, that is responsible for biosynthesis of β-CA and caryolan-1-ol. In Enterobacter sp. CGMCC 5087, 2-PE is generated by arylalcohol dehydrogenase through the phenylpyruvate pathway. Amino acid alignments showed that the conserved motifs in these key genes in our study are identical to the corresponding amino acid sequences of the reference genes (Supplementary Figures S3-S12). It is therefore supposed that the functions of these key genes are similar to those of the reported genes. The VOCs from S. yanglinensis 3-10 could suppress expression of the aflatoxin biosynthesis genes. The aflR gene in A. flavus and A. parasiticus was downregulated by 20.17- and 6.26-fold, respectively, compared to the expression level of this gene in the control treatment. The aflS gene in A. flavus and A. parasiticus was also downregulated, by 4.33- and 6.12-fold, respectively, compared to the expression level of this gene in the control treatment. The aflR gene is required for transcriptional activation of aflatoxin biosynthesis, and the aflS gene is a transcriptional enhancer. Previous studies demonstrated that the aflR and aflS genes might be involved in regulation of other genes in the aflatoxin biosynthesis pathway and can thus directly modulate aflatoxin biosynthesis (Flaherty and Payne, 1997; Cary et al., 2000; Amare and Keller, 2014).
In our study, the expression of these two genes was significantly reduced, and the aflatoxin quantification results further proved the importance of aflR and aflS expression in aflatoxin biosynthesis. After treatment with the VOCs from S. yanglinensis 3-10, the biomass and the expression of the aflatoxin biosynthesis genes in A. flavus and A. parasiticus were decreased; these findings may explain the reduction of aflatoxin production caused by the VOCs produced by S. yanglinensis 3-10. Aspergillus is commonly found in soil and crop debris. In biological control of A. flavus and A. parasiticus, using atoxigenic Aspergillus isolates can reduce aflatoxin contamination (Abbas et al., 2011; Amaike and Keller, 2011). In the field, soil amendment with the conidia of atoxigenic A. flavus and A. parasiticus, or their application through irrigation, could reduce aflatoxin concentrations in peanuts (Dorner et al., 1992; Dorner and Horn, 2007), cotton (Cotty, 1994), and maize (Dorner, 2009). Our results indicate that the VOCs emitted by S. yanglinensis 3-10 and soil amendment with the AWG cultures of S. yanglinensis 3-10 did not show harmful effects on peanut seedling growth. Streptomyces spp. are soil-dwelling bacteria that can grow and produce versatile secondary metabolites in soil. S. yanglinensis 3-10 has been shown to produce antifungal metabolites with inhibitory effects on growth and aflatoxin production of A. flavus as well as on other plant pathogenic fungi (Lyu et al., 2017; Shakeel et al., 2018). It is an acidophilic species with high adaptation ability in the acidic soils of southern China. Considering its ability to produce both antifungal metabolites and VOCs, S. yanglinensis 3-10 could be developed as a biocontrol agent applied in the field for control of Aspergillus as well as other plant pathogenic fungi and Oomycetes, or for prevention of food/feed contamination under storage conditions. CONCLUSION The VOCs produced by S. yanglinensis 3-10 displayed a wide antifungal spectrum, including postharvest and soilborne plant pathogenic fungi such as A. flavus and A. parasiticus. The VOCs also showed an inhibitory effect on production of aflatoxins by A. flavus and A. parasiticus in peanut kernels through suppression of colonization by the two fungi and downregulation of expression of the aflatoxin biosynthesis genes in the two fungi. This study further demonstrated that S. yanglinensis 3-10 is a promising biocontrol candidate with versatile mechanisms for suppression of plant pathogenic fungi, including A. flavus and A. parasiticus. DATA AVAILABILITY STATEMENT Publicly available datasets were analyzed in this study. These data can be found here: GenBank accession numbers for nucleotide sequences: MK861971, MK861972, MK861973, MK861974, MK861975, and MK861976. AUTHOR CONTRIBUTIONS AL and LY designed the research. AL performed the research and analyzed the SPME-GC-MS data. JZ and MW provided new agents and analyzed the data. AL and GL wrote the manuscript. FUNDING This study was financially supported by China's Agricultural Research System (CARS-12).
Recombinant Peptide Mimetic NanoLuc Tracer for Sensitive Immunodetection of Mycophenolic Acid Mycophenolic acid (MPA) is an immunosuppressant drug commonly used to prevent organ rejection in transplanted patients. MPA monitoring is of great interest due to its small therapeutic window. In this work, a phage-displayed peptide library was used to select cyclic peptides that bind to the MPA-specific recombinant antibody fragment (Fab) and mimic the behavior of MPA. After biopanning, several phage-displayed peptides were isolated and tested to confirm their epitope-mimicking nature in phage-based competitive immunoassays. After identifying the best MPA mimetic (ACEGLYAHWC with a disulfide constrained loop), several immunoassay approaches were tested, and a recombinant fusion protein containing the peptide sequence with a bioluminescent enzyme, NanoLuc, was developed. The recombinant fusion enabled its direct use as the tracer in competitive immunoassays without the need for secondary antibodies or further labeling. A bioluminescent sensor, using streptavidin-coupled magnetic beads for the immobilization of the biotinylated Fab antibody, enabled the detection of MPA with a detection limit of 0.26 ng mL–1 and an IC50 of 2.9 ± 0.5 ng mL–1. The biosensor showed good selectivity toward MPA and was applied to the analysis of the immunosuppressive drug in clinical samples, of both healthy and MPA-treated patients, followed by validation by liquid chromatography coupled to diode array detection. ■ INTRODUCTION Mycophenolic acid (MPA) is a mycotoxin produced by Penicillium fungi, and it is widely used as an immunosuppressant drug to prevent organ rejection in transplanted patients. 1 Recently, it has also been tested as a chemotherapeutic agent as it inhibits the proliferation of cancer cells. 2 Due to the small therapeutic window that MPA has, it is very important to monitor correctly its levels inside the human body. 3 MPA is mainly found in the serum, but only 1% of the total MPA exists in the free form, which is the one responsible for its pharmacological activity. 3,4 Therefore, the availability of analytical methods for detecting MPA at low concentrations in serum is of great interest. Over the past decades, the determination of MPA has been carried out using liquid chromatography (LC) coupled with ultraviolet or mass spectrometry detection. 5−7 However, these methods often require skilled personnel and they are time-consuming and of high cost. Moreover, tedious sample treatment is mandatory in most cases. Fast screening methods such as immunoassays are highly relevant nowadays, and the use of antibodies has burst over the last years as simple analytical tools. Immunoassays offer outstanding versatility since they can be easily automated or integrated into a routine laboratory or a point-of-care testing device. Also, different immunoassays have been already implemented for the detection of MPA. 8−10 Those assays, however, fail to detect free MPA in blood samples and offer a poor selectivity as several potential interferences may alter the results. We have previously developed a homogeneous fluorescence polarization assay to detect free MPA in blood samples with good sensitivity, low cross-reactivity, and good recovery rates in real samples. 11 The analysis of low molecular weight molecules can sometimes be challenging. They might present high toxicity, carcinogenicity, high price, or are difficult to functionalize without altering their interaction with the antibody. 
A feasible solution to this is the use of peptide mimetics, also known as mimotopes, since they can be easily functionalized or fused to other proteins in a cost-effective way. Peptide mimetics have the exceptional ability to bind to the same antibody paratope as the antigen, and they can be applied to the development of competitive immunoassays or biosensors where they can replace the analyte conjugate used as the competitor. Phage display is a commonly applied technique for recombinant antibody development as well as to identify peptide mimetics. 12 Phage-based enzyme-linked immunosorbent assays (ELISA) using peptide mimetics have been widely described in the literature. These assays do not require much preparation, and they have good sensitivity as well as selectivity. 13−17 However, the presence of phage may have a significant effect on the binding kinetics, and previous reports have shown that the assay sensitivity can potentially improve when the peptide is used alone rather than in the phagedisplayed form. 18,19 Moreover, the assays would be faster, cheaper, and simpler if the peptide is fused to a fluorescent or luminescent protein, since the peptide fusion would be responsible for the analytical signal, and there would be no need of using any secondary antibody for that purpose. The coupling can typically be a genetic fusion or a chemical functionalization; however, the former one is preferred due to the fact that chemical modifications can lead to a series of secondary reactions that may alter the final product. Genetic modifications are more homogeneous and present a welldefined stoichiometry between the peptide and the protein. 20 In this work, we describe the first peptide mimetic for MPA and a bioluminescent-based immunoassay for the detection of MPA with a NanoLuc−peptide fusion in blood samples. First, the peptide mimetic was selected from a combinatorial peptide library by phage display. The high selectivity of the peptide mimetic for the recombinant MPA antibody fragment was demonstrated by a competitive phage-based ELISA. Moreover, surface plasmon resonance (SPR) was used to confirm the binding properties of the cyclic peptide (named A2) and MPA to the anti-MPA Fab antibody. Thereafter, a bioluminescent protein, NanoLuc, was coupled to the MPA mimicking peptide A2. NanoLuc is reported to be 100 times brighter than firefly or Renilla luciferases, and with a size as small as 19 kDa, it is catching the eyes of many researchers for many different applications. 21 The NanoLuc−peptide fusion was genetically crafted and implemented in a magnetic bead-based immunoassay that showed higher sensitivity than the phage-based ELISA. Finally, the bioluminescent assay was applied to analyze the free active forms of MPA in blood samples from transplanted patients. The results were validated by a reference method using rapid resolution LC with diode array detection (RRLC-DAD). The recombinant anti-MPA Fab was obtained from a phage display library and produced as described previously. 22 Biopanning Rounds. A commercial phage-displayed peptide library was used to select cyclic peptides that bind to the anti-MPA. The selection rounds were carried out with an automatic magnetic bead processor (KingFisher Thermo Fisher Scientific). See the Supporting Information for antibody coupling to magnetic beads. 
Briefly, the phage-displayed peptide library (∼2.0 × 10¹¹ phages) was incubated for 2 h with the anti-MPA conjugated beads (50 μg) in a total volume of 505 μL of PBST [PBS, pH 7.4 with 0.05% (v/v) Tween-20]. The beads were subsequently washed twice with PBST for 30 s, and then the bound phages were eluted with 100 μL of 0.1 M triethylamine (pH 11.2) for 30 min. The resulting solution containing the eluted phages was immediately neutralized with 70 μL of 1 mol L⁻¹ Tris-HCl (pH 6.8). Amplification of the eluted phages was carried out by adding 70 μL of the eluate to a 40 mL early-log phase ER2738 culture in LB and incubating at +37°C for 4.5 h. The cells were harvested by centrifugation (10 min, 12,000g, +4°C), and the supernatant was collected. The amplified phages were precipitated overnight at +4°C after adding to the supernatant 1/6 volume of 20% poly(ethylene glycol) (PEG)/2.5 mol L⁻¹ NaCl. Then, the precipitated phages were collected by centrifugation (15 min, 12,000g, +4°C) and resuspended in 3 mL of PBS. The precipitation was repeated with 20% PEG/2.5 mol L⁻¹ NaCl on ice for 1 h, followed by centrifugation (10 min, 12,000g, +4°C). Finally, the pellet containing the phages was resuspended in 500 μL of PBS. The amplified phage solution was used for the subsequent selection round. After the first round, an additional 30 s washing step was introduced to increase the stringency of selection. After three panning rounds, several individual clones were isolated from each round and tested in phage-based ELISAs to select the one showing the highest sensitivity for the anti-MPA. Monoclonal phages were selected from fresh titering plates of each round. Briefly, 80 μL of ER2738 culture containing the monoclonal phages were incubated for 2.5 h at +37°C and were subsequently streaked out and grown overnight on IPTG/X-Gal plates at 37°C. Afterward, individual clones were inoculated into 500 μL of LB and grown for 6 h at +37°C. Finally, the cells were harvested (5 min, 10,000g, +4°C), and the supernatant was transferred to a fresh tube. The concentration of the amplified individual clones, determined by titering, ranged from 10¹¹ to 10¹² pfu mL⁻¹. Phage-Based ELISA. The phage-displayed peptides were screened in an ELISA to test their binding to immobilized anti-MPA. The assay was carried out at room temperature (RT). The biotinylated anti-MPA (Supporting Information) [5 μg mL⁻¹ in the assay buffer (SuperBlock supplemented with 0.05% Tween-20); 100 μL per well] was immobilized on streptavidin-coated wells (30 min), followed by three washes with PBST. The wells were then blocked with 280 μL of assay buffer for 30 min and washed again three times with PBS. Then, the amplified phage stock (between 10¹⁰ and 10¹¹ pfu mL⁻¹; 100 μL per well) was added to the wells in assay buffer and incubated for 1 h with slow shaking. After washing the wells as described above, the HRP-conjugated anti-M13 monoclonal antibody (1:5000 dilution in assay buffer; 100 μL per well) was added to the wells and incubated for 1 h. Finally, the plate was washed three times as described above, and 100 μL of ABTS was added to the wells. After 5 min, absorbance at 405 nm was measured in a Varioskan plate reader (Thermo Scientific). The phage clone that showed binding to the anti-MPA Ab was tested in a similar assay in the presence of 100 ng mL⁻¹ of free MPA.
Furthermore, a bead-based assay was developed with the phage that showed significant competition in the plate-based assay. Briefly, black microtiter plates were blocked with 280 μL of assay buffer for 1 h at RT and subsequently washed three times with PBS. Then, the biotinylated anti-MPA (1.2 μg mL⁻¹) and neutravidin-coated magnetic beads (125 μg mL⁻¹), functionalized as described before, 18 were added to the wells in the assay buffer (total volume 260 μL per well) and incubated for 30 min at RT. After washing the beads using a plate washer with a magnetic support, the phage clone (10¹¹ pfu mL⁻¹) and increasing concentrations of free MPA were added to the wells (in assay buffer, 60 μL per well) and incubated for 30 min at RT. The beads were washed again to remove the excess, followed by incubation with HRP-conjugated anti-M13 antibody (1:5000 dilution in assay buffer; 80 μL per well) for 30 min at RT. Finally, after washing, 80 μL of Amplex UltraRed solution was added to each well, and the fluorescence was monitored with a CLARIOstar microplate reader (BMG Labtech) (λex = 530 nm and λem = 590 nm). Construction of the NanoLuc Fusion Protein. The phage clone that showed the best response in the competition assay with free MPA was sequenced to identify the peptide sequence. To express the MPA peptide mimetic A2 in fusion with the NanoLuc protein, the latter was PCR-amplified from the commercial vector ATG 42 23 using the Phusion Hot Start II DNA Polymerase. The forward primer, RP043, (5′-GAA AAC CTG TAT TTT CAG GGC GTC TTC ACA CTC GAA GAT TTC G-3′) hybridized to the 5′-end of the NanoLuc, and the reverse primer, RP044, (5′-ATA CAG ACC CTC ACA ACT GCC ACC TCC AGA GCC GCC ACC CGC CAG AAT GCG TTC GC-3′) hybridized to the 3′-end. The hybridizing part of the sequence is underlined. The fusion of NanoLuc with the cyclic peptide was carried out in the pMAL vector. In order to amplify this vector, the forward primer, RP039, (5′-GT TGT GAG GGT CTG TAT GCG CAT TGG TGC GGA GGC TAG GGA TCC GAA TTC CCT-3′) included a 5′-overhang (in bold) for the DNA sequence encoding the peptide mimetic for MPA, whereas the reverse primer, RP040, (5′-G AAA ATA CAG GTT TTC ATG ATG ATG ATG ATG ATG CAT AAT CTA TGG TCC TTG TTG G-3′) contained a His-tag. For the assembly, the vector and the insert were incubated at +50°C for 15 min with the NEBbuilder Master Mix. Then, NEB 5-alpha competent Escherichia coli cells were transformed with 2 μL of the assembled product according to the manufacturer's instructions. 24 Successful cloning was proven by DNA sequencing analysis. Expression and Purification of the Fusion Protein. The A2-NanoLuc plasmid (Figure S1A, Supporting Information) was first transformed into E. coli SHuffle Express cells according to the manufacturer's instructions. A single colony was selected on LB agar plates with 100 μg mL⁻¹ ampicillin and grown on 15 mL of LB with 100 μg mL⁻¹ ampicillin overnight. The next day, an aliquot of the overnight preculture was added to a 200 mL culture of LB with 100 μg mL⁻¹ ampicillin and grown until an OD600 (optical density at 600 nm) of 0.6 was reached. To induce the protein expression, IPTG was added at a final concentration of 0.4 mmol L⁻¹, and the expression was continued at +37°C for 4 h.
The culture was then transferred to an ice bath for 10 min to stop the cell growth, and the cells were collected by centrifugation at 5000g for 10 min at +4°C and resuspended in NZY Bacterial Cell Lysis Buffer (approximately 5 mL of buffer per gram of cell paste) supplemented with a protease inhibitor cocktail, NZY Bacterial Cell Lysis Buffer supplemented with Lysozyme and DNase I according to the manufacturer's instructions. The cell lysis was carried out by sonication (VibraCell Ultrasonic Processor 130 W 20 kHz, Ampl 70%) for 10 s 5 times with 30 s breaks, and the insoluble cell debris was discarded by centrifugation at 15,000g for 15 min at +4°C. Finally, the cell lysate was purified with HisTrap purification columns according to the manufacturer's instructions, and the buffer was exchanged to PBS with Sephadex G-25 M columns. The purified proteins were aliquoted and stored at −20°C. The size and purity of the A2-NanoLuc fusion protein was confirmed by sodium dodecyl sulfate−polyacrylamide gel electrophoresis ( Figure S1B Supporting Information). The kinetic constants of the binding of the cyclic peptide (A2) and MPA were determined by Biacore T200 (GE Healthcare) (Supporting Information). Bioluminescent Immunoassay for MPA Detection. To detect MPA with the A2-NanoLuc fusion protein, a bead-based assay was carried out on a black microtiter well plate by immobilizing the biotinylated anti-MPA onto streptavidincoated magnetic beads ( Figure 1). Briefly, the wells were first blocked with assay buffer (SuperBlock with 0.05% Tween-20) for 1 h. Then, 60 μL of 5 μg mL −1 biotinylated anti-MPA in assay buffer and 20 μL of streptavidin beads (1:50 dilution from the stock) were added to the wells and incubated for 30 min at RT. After washing three times with PBST, 60 μL of a solution containing different concentrations of MPA and 77 μg mL −1 of the A2-NanoLuc in assay buffer was added to the wells and incubated 30 min at RT. Once the beads were washed, 60 μL of NanoGLO substrate in PBS were added and bioluminescence was measured after a 2 min incubation at 470 nm with a bandwidth of 80 nm using a CLARIOstar microplate reader. Sample Analysis. Volunteers donated whole blood samples with permission from the Ethics Committee from Hospital Clı́nico Universitario de Valladolid, Spain (no. PI 21-2245). The blood samples were kept at 20°C during transport and storage. The samples were treated following the procedure described previously (see the Supporting Information for details). 11 ■ RESULTS AND DISCUSSION Selection and Characterization of MPA Peptide Mimetics. To develop a competitive immunoassay for MPA detection, a peptide mimetic for MPA was selected from a cyclic 7-mer phage display peptide library (Ph.D.-C7C) in three consecutive panning rounds. Once the three panning rounds were carried out, a total of eight clones were isolated and tested using ELISA. One of the clones showed a very high signal-to-background ratio, as well as very low nonspecific binding when the assay was performed in the absence of anti-MPA ( Figure 2A); therefore, this clone (named A2) was selected for further analysis. Next, a competitive ELISA for A2 was carried out under the same assay conditions as before. However, in this case, 100 ng mL −1 of free MPA were added at the same time as the phage clone to test the competition between phage-displayed A2 and free MPA for the binding sites of the anti-MPA. 
A significant decrease in the signal was observed in the presence of MPA, demonstrating the success of the selection rounds and the excellent performance of clone A2 as a peptide mimetic (data not shown). A fluorescent bead-based assay was developed to further optimize the assay conditions and confirm the viability of the selected phage clone. Neutravidin-functionalized magnetic beads were incubated with the biotinylated anti-MPA, and the competition was then tested between free MPA in concentrations ranging from 0 to 1600 ng mL⁻¹ and clone A2. The results were similar to those obtained in the plate-based ELISA, confirming the successful selection of the peptide mimetic (Figure 2B). By DNA sequencing of clone A2, the peptide sequence of ACEGLYAHWC, with a disulfide bond between the two cysteines, was identified. A synthetic biotinylated peptide with this sequence was consequently tested in a competitive neutravidin bead-based assay, showing competition at the nanomolar level. Contrary to the phage-based assay, this time the biotinylated peptide was bound to the neutravidin beads, and the nonbiotinylated anti-MPA was added thereafter. This antibody was then recognized with an anti-IgG-HRP antibody, measuring the same fluorescent signal as before. Due to the absence of the whole phage in this assay, the results prove that the peptide sequence obtained can be considered an outstanding mimetic for MPA, since a similar response was obtained in comparison to the phage-based assay (Figure S2, Supporting Information). As can be seen, the phage-based assay showed a slightly lower limit of detection (LOD), calculated as the 10% inhibition, 25 (0.69 ng mL⁻¹) compared to the peptide-based assay (0.94 ng mL⁻¹). However, the dynamic range, taken as the 20−80% inhibition, 26 is wider in the case of the peptide-based assay (2.4−60 ng mL⁻¹) than in the phage-based assay (1.0−4.1 ng mL⁻¹). The assay time is the same in both cases, and the detection is done by adding the same fluorescent dye. Binding Properties of Cyclic Peptide. To compare the binding properties of the biotinylated cyclic peptide and MPA toward the anti-MPA antibody, label-free SPR technology was applied. In the binding experiments, previously identified, produced, and purified Fab antibodies recognizing either MPA or ochratoxin A were immobilized onto sensor chip surfaces. 22 The same experimental conditions were used to study the binding properties of the cyclic peptide (A2) and MPA. The results are presented in Figures S3 and S4 and summarized in Table S1 (Supporting Information). As expected, both the cyclic peptide (A2) and MPA showed binding to the anti-MPA Fab antibody surface, and the binding responses increased in a concentration-dependent manner. In agreement with our previous results from the SPR assay using the affinity-in-solution approach, the affinity constant for the MPA and anti-MPA Fab antibody interaction was ∼40 nmol L⁻¹. 22 The affinity of the interaction between the cyclic peptide (A2) and the anti-MPA Fab antibody is 2 orders of magnitude lower than the affinity of the MPA−anti-MPA Fab antibody interaction. This is due to the slower association and faster dissociation of the cyclic peptide (A2)−anti-MPA Fab antibody complex compared to the MPA−anti-MPA Fab antibody complex.
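The IC₅₀, LOD (taken as 10% inhibition), dynamic range (20−80% inhibition), and cross-reactivity figures quoted for the different assay formats are all derived from fitted competitive binding curves. The fitting step itself is not shown in this excerpt, so the sketch below assumes a four-parameter logistic model and uses made-up calibration data; all names and numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ic50, slope):
    """Four-parameter logistic (4PL) curve for a competitive immunoassay."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** slope)

def conc_at_inhibition(params, inhibition_fraction):
    """Analyte concentration giving a chosen fractional signal inhibition."""
    top, bottom, ic50, slope = params
    # Inverting the 4PL gives x = IC50 * (f / (1 - f))**(1/slope) for inhibition fraction f.
    f = inhibition_fraction
    return ic50 * (f / (1.0 - f)) ** (1.0 / slope)

# Made-up calibration points: MPA concentration (ng/mL) vs. bioluminescence signal (RLU).
mpa = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
rlu = np.array([980, 950, 800, 520, 260, 130, 90])
params, _ = curve_fit(four_pl, mpa, rlu, p0=[1000, 80, 3.0, 1.0])
ic50 = params[2]
lod = conc_at_inhibition(params, 0.10)            # 10% inhibition
dyn_range = (conc_at_inhibition(params, 0.20),    # 20-80% inhibition
             conc_at_inhibition(params, 0.80))
print(f"IC50 = {ic50:.2f} ng/mL, LOD = {lod:.2f} ng/mL, "
      f"dynamic range = {dyn_range[0]:.2f}-{dyn_range[1]:.2f} ng/mL")
# Cross-reactivity of a competitor (e.g., acyl-MPAG) from its own fitted IC50:
cr_percent = 100 * ic50 / 5.0   # 5.0 ng/mL is a hypothetical acyl-MPAG IC50
print(f"cross-reactivity = {cr_percent:.0f}%")
```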
Bioluminescent Bead-Based Immunoassay for MPA Detection. To improve the assay sensitivity and to provide a faster and cheaper assay, the peptide mimetic was fused to a bioluminescent enzyme, both at the N-terminus and at the C-terminus (A2-NanoLuc and NanoLuc-A2, respectively), and a simple immunoassay for MPA detection was established using the A2-NanoLuc fusion protein. The fusion protein was produced cost-effectively in bacteria, with the bioluminescent protein already incorporated during expression. After purification, both NanoLuc-A2 and A2-NanoLuc fusion proteins showed bright luminescence in the presence of the substrate, proving that the assay did not require a secondary antibody or any other chemical modification to obtain the analytical signal. Both fusion proteins also proved to recognize the anti-MPA and compete with free MPA at the nanomolar level for the binding sites of the antibody (Figure S5, Supporting Information); however, the A2-NanoLuc product showed a wider dynamic range and lower dispersion at low concentrations, and it was selected for further characterization (Figure 3). This confirmation was carried out with a bead-based assay, in which streptavidin-coated beads were incubated first with the biotinylated anti-MPA, and then A2-NanoLuc and free MPA were added simultaneously to the solution. This bead-based immunoassay improved both the dynamic range and the sensitivity compared to similar bead-based assays carried out with the phage-displayed A2 and with the synthetic peptide A2-bio (Figure S2, Supporting Information). The LOD was 0.26 ng mL⁻¹ and the IC₅₀ value was 2.9 ± 0.5 ng mL⁻¹. The dynamic range was 0.64-14 ng mL⁻¹. The intraday relative standard deviation was 12% on average (n = 3), whereas the value for assays performed on three different, nonconsecutive days (interday) was 9%. The A2-NanoLuc fusion protein proved to be stable for more than 6 months upon storage at −20°C in PBS. For comparison, this bioluminescent assay provided better sensitivity, a shorter analysis time, and greater simplicity (no secondary antibody is needed) than the formats described previously using HRP as the label and fluorometric detection. In addition, the sensitivity of this assay is better than that of other immunoassays described in the literature, as well as that of several commercially available kits for the analysis of MPA (Table S2, Supporting Information). Cross-Reactivity. To prove the selectivity of the method, the assay was performed in the presence of different MPA metabolites found in blood, such as mycophenolic acid glucuronide (MPAG) and acyl-mycophenolic acid glucuronide (acyl-MPAG), as well as other immunosuppressant drugs commonly co-administered to transplanted patients, tacrolimus and cyclosporin (Figure S6, Supporting Information). As can be observed in Figure 4, acyl-MPAG showed a very similar behavior to MPA in the assay (58% cross-reactivity, calculated as the IC₅₀ for MPA divided by the IC₅₀ of acyl-MPAG). This metabolite is an active form of MPA, contrary to MPAG; 4 therefore, the assay can be designed to detect the active forms of MPA in blood. Nevertheless, acyl-MPAG is found at lower concentrations than MPA, 27 and it was not detected by high-performance LC in any of the analyzed samples. Concerning MPAG, the cross-reactivity was negligible at 0.03%, and for the two other immunosuppressant drugs, it was lower than 0.03%. Matrix Effect.
Matrix Effect. The matrix effect was tested in the presence of different dilutions of the ultrafiltered serum samples (1/2, 1/6, and 1/8, v/v), treated following a previously described procedure,11 in PBST. Figure 5 shows that no significant differences (p > 0.05) were observed between the dose-response curves obtained in PBST and in ultrafiltered serum diluted 1/8 (v/v) with the buffer. Therefore, this dilution was used for further experiments. Sample Analysis. The optimized assay was applied to the analysis of blood samples from transplanted patients (T1−T5) and healthy control patients (H1−H3), and the results were validated by RRLC-DAD (Supporting Information) (Figure 6). Figure S7 (Supporting Information) shows a chromatogram of a standard mixture of the metabolites. As expected, no MPA was detected in the control samples. A statistical comparison of the results obtained by the two methods using a paired t-test demonstrated that there were no significant differences between them at the 95% confidence level. The RRLC-DAD results confirmed that the active metabolite, acyl-MPAG, was not present in any of the samples, and therefore the biosensor response was due only to free MPA. Furthermore, the MPAG levels found in the analyzed samples were below the limit of quantification of the biosensor; hence, the nonactive metabolite of MPA did not interfere with the analysis (Table S3, Supporting Information). The results show that patients T1 and T2 had the highest MPA concentrations, and in all cases the results correlated well with the administered doses (Table S4, Supporting Information). CONCLUSIONS In this work, we proved that phage display is a useful technique for the selection of MPA peptide mimetics for the development of immunoassays and biosensors. A bioluminescent bead-based assay using a luciferase enzyme as the reporter provided higher sensitivity, shorter analysis times, and lower cost than other formats using HRP as the label and fluorometric detection. The assay allows the analysis of the active forms of MPA in plasma, that is, free MPA and acyl-MPAG. No relevant cross-reactivity was observed with other nonactive forms of MPA in plasma or with other drugs jointly administered to transplanted patients. The results compared favorably with those of a reference RRLC-DAD-based method. Supporting Information: protocols for antibody coupling to magnetic beads and antibody biotinylation; details about the synthetic peptide-based ELISA; description of the SPR measurements; details about the RRLC-DAD method; blood sample treatment; construction of the NanoLuc-
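A brief sketch of the method-comparison statistics used in the sample analysis above; the paired concentrations are hypothetical placeholders, not the values reported in Table S4, and scipy is assumed to be available.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical paired MPA concentrations for samples T1-T5 (arbitrary units);
# the real values are those in Figure 6 / Table S4.
biosensor = np.array([3.8, 3.5, 1.9, 1.2, 0.9])
rrlc_dad  = np.array([3.6, 3.7, 2.0, 1.1, 1.0])

t_stat, p_value = ttest_rel(biosensor, rrlc_dad)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
# p > 0.05 -> no significant difference between the two methods at the 95% level
```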
Effects of HBT correlations on flow measurements The methods currently used to measure collective flow in nucleus-nucleus collisions assume that the only azimuthal correlations between particles are those arising from their correlation with the reaction plane. However, quantum HBT correlations also produce short-range azimuthal correlations between identical particles. This creates apparent azimuthal anisotropies of a few percent when pions are used to estimate the direction of the reaction plane. These should not be misinterpreted as originating from collective flow. In particular, we show that the peculiar behaviour of the directed and elliptic flow of pions observed by NA49 at low p_T can be entirely understood in terms of HBT correlations. Such correlations also produce apparent higher Fourier harmonics (of order n larger than 3) of the azimuthal distribution, with magnitudes of the order of 1%, which should be looked for in the data. I. INTRODUCTION In a heavy ion collision, the azimuthal distribution of particles with respect to the direction of impact (reaction plane) is not isotropic for non-central collisions. This phenomenon, referred to as collective flow, was first observed fifteen years ago at the Bevalac [1], and more recently at the higher AGS [2] and SPS [3] energies. Azimuthal anisotropies are very sensitive to nuclear matter properties [4,5]. It is therefore important to measure them accurately. Throughout this paper, we use the word "flow" in the restricted meaning of "azimuthal correlation between the directions of outgoing particles and the reaction plane". We do not consider radial flow [6], which is usually measured for central collisions only. Flow measurements are done in three steps (see [7] for a recent review of the methods): first, one estimates the direction of the reaction plane event by event from the directions of the outgoing particles; then, one measures the azimuthal distribution of particles with respect to this estimated reaction plane; finally, one corrects this distribution for the statistical error in the reaction plane determination. In performing this analysis, one usually assumes that the only azimuthal correlations between particles result from their correlations with the reaction plane, i.e. from flow. This implicit assumption is made, in particular, in the "subevent" method proposed by Danielewicz and Odyniec [8] in order to estimate the error in the reaction plane determination. This method is now used by most, if not all, heavy ion experiments. However, other sources of azimuthal correlations are known, which do not depend on the orientation of the reaction plane. For instance, there are quantum correlations between identical particles, due to the (anti)symmetry of the wave function: this is the so-called Hanbury-Brown and Twiss effect [9], hereafter denoted by HBT (see [10,11] for reviews). Azimuthal correlations due to the HBT effect have been studied recently in [12]. In the present paper, we show that if the standard flow analysis is performed, these correlations produce a spurious flow. This effect is important when pions are used to estimate the reaction plane, which is often the case at ultrarelativistic energies, in particular for the NA49 experiment at CERN [13]. We show that when these correlations are properly subtracted, the flow observables are considerably modified at low transverse momentum.
In section 2, we recall how the Fourier coefficients of the azimuthal distribution with respect to the reaction plane are extracted from the two-particle correlation function in the standard flow analysis. Then, in section 3, we apply this procedure to the measured two-particle HBT correlations, and calculate the spurious flow arising from these correlations. Finally, in section 4, we explain how to subtract HBT correlations in the flow analysis, and perform this subtraction on the NA49 data, using the HBT correlations measured by the same experiment. Conclusions are presented in section 5. II. STANDARD FLOW ANALYSIS In nucleus-nucleus collisions, the determination of the reaction plane event by event allows one, in principle, to measure the distribution of particles not only in transverse momentum p_T and rapidity y, but also in azimuth φ, where φ is the azimuthal angle with respect to the reaction plane. The φ distribution is conveniently characterized by its Fourier coefficients [14] v_n(p_T, y) ≡ ⟨cos nφ⟩ = [∫_0^{2π} cos(nφ) (dN/(dp_T dy dφ)) dφ] / [∫_0^{2π} (dN/(dp_T dy dφ)) dφ] (1), where the brackets denote an average value over many events. Since the system is symmetric with respect to the reaction plane for spherical nuclei, ⟨sin nφ⟩ vanishes. Most of the time, because of limited statistics, v_n is averaged over p_T and/or y. The average value of v_n(p_T, y) over a domain D of the (p_T, y) plane, corresponding to a detector, will be denoted by v_n(D). In practice, the published data are limited to the n = 1 (directed flow) and n = 2 (elliptic flow) coefficients. However, higher harmonics could reveal more detailed features of the φ distribution [7]. Since the orientation of the reaction plane is not known a priori, v_n must be extracted from the azimuthal correlations between the produced particles. We introduce the two-particle distribution, which is generally written as dN/(d³p_1 d³p_2) = (dN/d³p_1)(dN/d³p_2)[1 + C(p_1, p_2)] (2), where C(p_1, p_2) is the two-particle connected correlation function, which vanishes for independent particles. The Fourier coefficients of the relative azimuthal distribution are given by c_n(p_{T1}, y_1, p_{T2}, y_2) ≡ ⟨cos n(φ_1 − φ_2)⟩ (3). We denote the average value of c_n over (p_{T2}, y_2) in the domain D by c_n(p_{T1}, y_1, D), and the average over both (p_{T1}, y_1) and (p_{T2}, y_2) by c_n(D, D). Using the decomposition (2), one can write c_n as the sum of two terms: c_n(p_{T1}, y_1, p_{T2}, y_2) = c_n^flow(p_{T1}, y_1, p_{T2}, y_2) + c_n^non-flow(p_{T1}, y_1, p_{T2}, y_2) (4), where the first term is due to flow, c_n^flow(p_{T1}, y_1, p_{T2}, y_2) = v_n(p_{T1}, y_1) v_n(p_{T2}, y_2) (5), and the remaining term comes from two-particle correlations, c_n^non-flow(p_{T1}, y_1, p_{T2}, y_2) = [∫ cos n(φ_1 − φ_2) C(p_1, p_2) (dN/d³p_1)(dN/d³p_2) dφ_1 dφ_2] / [∫ (dN/d³p_1)(dN/d³p_2) dφ_1 dφ_2] (6). In writing Eq. (5), we have used the fact that ⟨sin nφ_1⟩ = ⟨sin nφ_2⟩ = 0 and neglected the correlation C(p_1, p_2) in the denominator. In the standard flow analysis, non-flow correlations are neglected [7,8], with a few exceptions: the correlations due to momentum conservation are taken into account at intermediate energies [15], and correlations between photons originating from π0 decays were considered in [16]. The effect of non-flow correlations on flow observables is considered from a general point of view in [17]. In the remainder of this section, we assume that c_n^non-flow = 0. Then, v_n can be calculated simply as a function of the measured correlation c_n using Eq. (5), as we now show. Note, however, that Eq. (5) is invariant under a global change of sign, v_n(p_T, y) → −v_n(p_T, y). Hence the sign of v_n cannot be determined from c_n. It is fixed either by physical considerations or by an independent measurement.
For instance, NA49 chooses the minus sign for the v_1 of charged pions, in order to make the v_1 of protons at forward rapidities come out positive [13]. Averaging Eq. (5) over (p_{T1}, y_1) and (p_{T2}, y_2) in the domain D, one obtains v_n(D) = ±√(c_n(D, D)) (7). This equation shows in particular that the average two-particle correlation c_n(D, D) due to flow is positive. Finally, integrating (5) over (p_{T2}, y_2) and using (7), one obtains the expression of v_n as a function of c_n, v_n(p_{T1}, y_1) = ± c_n(p_{T1}, y_1, D)/√(c_n(D, D)) (8). This formula serves as a basis for the standard flow analysis. Note that the actual experimental procedure is usually different: one first estimates, for a given Fourier harmonic m, the azimuth of the reaction plane (modulo 2π/m) by summing over many particles. Then one studies the correlation of another particle (in order to remove autocorrelations) with respect to the estimated reaction plane. One can then measure the coefficient v_n with respect to this reaction plane if n is a multiple of m. In this paper, we consider only the case n = m. Both procedures give the same result, since they start from the same assumption (the only azimuthal correlations are from flow). This equivalence was first pointed out in [18]. III. AZIMUTHAL CORRELATIONS DUE TO THE HBT EFFECT The HBT effect yields two-particle correlations, i.e. a non-zero C(p_1, p_2) in Eq. (2). According to Eq. (6), this gives rise to an azimuthal correlation c_n^non-flow, which contributes to the total, measured correlation c_n in Eq. (4). In particular, there will be a correlation between randomly chosen subevents when one particle of an HBT pair goes into each subevent. The contribution of HBT correlations to c_n^non-flow will be denoted by c_n^HBT. In the following, we shall consider only pions. Since they are bosons, their correlation is positive, i.e. of the same sign as the correlation due to flow. Therefore, if one applies the standard flow analysis to HBT correlations alone, i.e. if one replaces c_n by c_n^HBT in Eq. (8), they yield a spurious flow v_n^HBT, which we calculate in this section. First, let us estimate its order of magnitude. The HBT effect gives a correlation of order unity between two identical pions with momenta p_1 and p_2 if |p_2 − p_1| ≲ ħ/R, where R is a typical HBT radius, corresponding to the size of the interaction region. From now on, we take ħ = 1. In practice, R ∼ 4 fm for a semi-peripheral Pb-Pb collision at 158 GeV per nucleon, so that 1/R ∼ 50 MeV/c is much smaller than the average transverse momentum, which is close to 400 MeV/c: the HBT effect correlates only pairs with low relative momenta. In particular, the azimuthal correlation due to the HBT effect is short-ranged: it is significant only if φ_2 − φ_1 ≲ 1/(R p_T) ∼ 0.1. This localization in φ implies a delocalization in n of the Fourier coefficients, which are expected to be roughly constant up to n ≲ R p_T ∼ 10, as will be confirmed below. For small n and (p_{T1}, y_1) in D, the order of magnitude of c_n^HBT(p_{T1}, y_1, D) is the fraction of particles in D whose momentum lies in a circle of radius 1/R centered at p_1. This fraction is of order (R³ p_T² m_T Δy)^(−1), where p_T and m_T are typical magnitudes of the transverse momentum and transverse mass (m_T = √(p_T² + m²), where m is the mass of the particle), respectively, while Δy is the rapidity interval covered by the detector. Using Eq. (7), this gives a spurious flow of order v_n^HBT ∼ (R³ p_T² m_T Δy)^(−1/2) (9). The effect is therefore larger for the lightest particles, i.e. for pions.
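A toy Monte Carlo illustration (not taken from the paper) of the logic behind Eqs. (5)-(8): when particles are correlated only with the reaction plane, the pair average ⟨cos n(φ_1 − φ_2)⟩ factorizes into v_n², so the integrated v_n(D) is recovered as the square root of the measured pair correlation. All numbers below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_event(n, v2, psi):
    """Accept-reject sampling of dN/dphi proportional to 1 + 2*v2*cos(2*(phi - psi))."""
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
        keep = rng.uniform(0.0, 1.0 + 2.0 * v2, 4 * n) < 1.0 + 2.0 * v2 * np.cos(2.0 * (phi - psi))
        out = np.concatenate([out, phi[keep]])
    return out[:n]

v2_true, n_events, mult = 0.05, 2000, 100
num, n_pairs = 0.0, 0
for _ in range(n_events):
    phi = sample_event(mult, v2_true, rng.uniform(0.0, 2.0 * np.pi))
    cos_matrix = np.cos(2.0 * (phi[:, None] - phi[None, :]))
    num += cos_matrix.sum() - np.trace(cos_matrix)   # drop self-pairs
    n_pairs += mult * (mult - 1)

c2 = num / n_pairs                                   # integrated pair correlation c_2(D, D)
print(f"c2 = {c2:.5f}  ->  v2 = {np.sqrt(max(c2, 0.0)):.4f}  (true value {v2_true})")
```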
Taking R = 4 fm, p_T ∼ m_T ∼ 400 MeV/c and Δy = 2, one obtains |v_n(D)| ∼ 3%, which is of the same order of magnitude as the flow values measured at SPS. It is therefore a priori important to take HBT correlations into account in the flow analysis. We shall now turn to a more quantitative estimate of c_n^HBT. For this purpose, we use the standard gaussian parametrization of the correlation function (2) between two identical pions [19], C(p_1, p_2) = λ exp(−q_s² R_s² − q_o² R_o² − q_L² R_L²) (10). One chooses a frame boosted along the collision axis in such a way that p_{1z} + p_{2z} = 0 ("longitudinal comoving system", denoted by LCMS). In this frame, q_L, q_o and q_s denote the projections of p_2 − p_1 along the collision axis, the direction of p_1 + p_2 and the third direction, respectively. The corresponding radii R_L, R_o and R_s, as well as the parameter λ (0 ≤ λ ≤ 1), depend on p_1 + p_2. We neglect this dependence in the following calculation. Note that the parametrization (10) is valid for central collisions, for which the pion source is azimuthally symmetric. Therefore the azimuthal correlations studied in this section have nothing to do with flow. Note also that we neglect Coulomb correlations, which should be taken into account in a more careful study. We hope that repulsive Coulomb correlations between like-sign pairs will be compensated, at least partially, by attractive correlations between opposite-sign pairs. Since C(p_1, p_2) vanishes unless p_2 is very close to p_1, we may replace dN/d³p_2 by dN/d³p_1 in the numerator of Eq. (6), and then integrate over p_2. As we have already said, q_s, q_o and q_L are the components of p_2 − p_1 in the LCMS, and one can equivalently integrate over q_s, q_o and q_L. In this frame, y_1 ≃ 0 and one may also replace dN/d³p_1 by (1/m_{T1}) dN/(d²p_{T1} dy_1). The resulting formula is boost invariant and can also be used in the laboratory frame. The relative angle φ_2 − φ_1 can be expressed as a function of q_s and q_o. If p_{T1} ≫ 1/R, then to a good approximation φ_2 − φ_1 ≃ q_s/p_{T1} (11). If p_{T1} ∼ 1/R, Eq. (11) is no longer valid. We assume that R_s ≃ R_o and use, instead of (11), a more general relation between the relative angle and (q_s, q_o), Eq. (12). To calculate c_n^HBT(p_{T1}, y_1, D), we insert Eqs. (10) and (11) in the numerator of (6) and integrate over (q_s, q_o, q_L). The limits on q_o and q_L are deduced from the limits on (p_{T2}, y_2), using the relations q_o ≃ p_{T2} − p_{T1} and q_L ≃ m_{T1}(y_2 − y_1) (13), valid if p_{T1} ≫ 1/R. Since q_s is independent of p_{T2} and y_2 (see Eq. (11)), the integral over q_s extends from −∞ to +∞. Note that values of q_o and q_L much larger than 1/R do not contribute to the correlation (10), so that one can extend the integrals over q_o and q_L to ±∞ as soon as the point (p_{T1}, y_1) lies in D and is not too close to the boundary of D. By too close, we mean within an interval 1/R_o ∼ 50 MeV/c in p_T or 1/(R_L m_T) ∼ 0.3 in y. One then obtains, after integration, an explicit expression for c_n^HBT(p_{T1}, y_1, D), Eq. (14). At low p_T, Eq. (11) must be replaced by Eq. (12). Then, one must perform in Eq. (14) the substitution given by Eq. (15), where χ = R_s p_T and I_k is the modified Bessel function of order k. Let us discuss our result (14). First, the correlation depends on n only through the exponential factor, which suppresses c_n^HBT in the very low p_T region p_{T1} ≲ n/(2R_s). For n smaller than R_s p_T ≃ 10, the correlation depends weakly on n, as discussed above. Neglecting this n dependence, (14) reproduces the order of magnitude (9).
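The few-percent figure quoted above can be checked directly from the order-of-magnitude estimate (9), converting 1/R into momentum units; the short numerical check below uses the same inputs (R = 4 fm, p_T ∼ m_T ∼ 400 MeV/c, Δy = 2).

```python
import numpy as np

hbar_c = 197.327          # MeV fm
R = 4.0                   # fm, typical HBT radius
pT = mT = 400.0           # MeV/c, typical pion transverse momentum / transverse mass
dy = 2.0                  # rapidity coverage of the detector

# Eq. (9): v_n^HBT ~ (R^3 pT^2 mT dy)^(-1/2) in natural units, i.e. momenta
# measured in units of 1/R = hbar*c / R.
one_over_R = hbar_c / R                                  # ~49 MeV/c
c_hbt = 1.0 / ((pT / one_over_R) ** 2 * (mT / one_over_R) * dy)
print(f"1/R = {one_over_R:.0f} MeV/c, c_n^HBT ~ {c_hbt:.1e}, "
      f"v_n^HBT ~ {np.sqrt(c_hbt):.3f}")                 # ~0.03, i.e. a few percent
```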
To see this, we normalize the particle distribution in D in order to get rid of the denominator in (14); the numerator (1/m_{T1})(dN/(d²p_{T1} dy_1)) is then of order 1/(p_T² m_T Δy). However, Eq. (14) is more detailed, and shows in particular that the dependence of the correlation on p_{T1} and y_1 follows that of the momentum distribution in the LCMS (neglecting the m_T and y dependence of the HBT radii). This is because the correlation c_n^HBT is proportional to the number of particles surrounding p_1 in phase space. Let us now present numerical estimates for a Pb-Pb collision at SPS. We assume for simplicity that the p_T and y dependences of the particle distribution factorize, thereby neglecting the observed variation of p_T with rapidity [20]. The rapidity dependence of charged pions can be parametrized by a gaussian [20], dN/dy ∝ exp[−(y − ⟨y⟩)²/(2σ²)], with σ = 1.4 and ⟨y⟩ = 2.9. The normalized p_T distribution is parametrized by an exponential in the transverse mass, dN/d²p_T ∝ exp(−m_T/T), with T ≃ 190 MeV [20]. This parametrization underestimates the number of low-p_T pions. The values of R_o, R_s and R_L used in our computations, taking into account that the collisions are semi-peripheral, are respectively 4 fm, 4 fm and 5 fm [22]. The correlation strength λ is approximately 0.4 for pions [23]. Finally, we must define the domain D in Eq. (14). It is natural to choose different rapidity windows for odd and even harmonics, because odd harmonics have opposite signs in the target and projectile rapidity regions, by symmetry, and vanish at mid-rapidity (y = 2.9), while even harmonics are symmetric around mid-rapidity. Following the NA49 collaboration [21], we take 4 < y < 6 and 0.05 < p_T < 0.6 GeV/c for odd n, and 3.5 < y < 5 and 0.05 < p_T < 2 GeV/c for even n. We assume that the particles in D are 85% pions [13], half π+ and half π−. Then, for an identified charged pion (a π+, say) with p_T = p_{T1} and y = y_1, the right-hand side of Eq. (14) must be multiplied by 0.85 × 0.5, which is the probability that a particle in D is also a π+. Substituting the correlation calculated from Eq. (14) in Eq. (8), one obtains the value of the spurious flow v_n^HBT(p_T, y) due to the HBT effect. Fig. 1 displays v_n^HBT, integrated over 4 < y < 5 (as are the NA49 data), as a function of p_T. As expected, v_n^HBT depends on the order n only at low p_T, where it vanishes due to the exponential factor in Eq. (14). HBT correlations, which follow the momentum distribution, also vanish if p_T is much larger than the average transverse momentum. Assuming that 1/R_s ≪ m, T, we find from Eq. (14) the value of p_T at which the correlation is maximum, which reproduces approximately the maxima in Fig. 1. Although data on higher-order harmonics are still unpublished, they were shown at the Quark Matter '99 conference by the NA45 Collaboration [24], which reported values of v_3 and v_4 of the same order as v_1 and v_2, respectively, suggesting that most of the effect is due to HBT correlations. Similar results were found with NA49 data [25]. IV. SUBTRACTION OF HBT CORRELATIONS Now that we have evaluated the contribution of HBT correlations to c_n^non-flow, we can subtract this term from the measured correlation (the left-hand side of Eq. (4), which will be denoted by c_n^measured in this section) to isolate the correlation due to flow. Then, the flow v_n can be calculated using Eq. (8). In this section, we show the result of this modification on the directed and elliptic flow data published by NA49 for pions [13]. The published data do not give directly the two-particle correlation c_n^measured, but rather the measured flow v_n^measured.
Since these analyses assume that the correlation factorizes according to Eq. (5), we can reconstruct the measured correlation as a function of the measured v_n. In particular, c_n^measured(p_T, y, D) = v_n^measured(p_T, y) v_n^measured(D). We then perform the subtraction of HBT correlations in both the numerator and the denominator of Eq. (8). The behaviour of v_n(p_T) is constrained at low p_T: if the momentum distribution is regular at p_T = 0, then v_n(p_T) must vanish like p_T^n. One naturally expects this decrease to occur on a scale of the order of the average p_T. This is what is observed for protons [13]. However, the uncorrected v_1^measured and v_2^measured for pions remain large far below 400 MeV/c. In order to explain this behaviour, one would need to invoke a specific phenomenon occurring at low p_T. No such phenomenon is known. Even though resonance (mostly Δ) decays are known to populate the low-p_T pion spectrum, they are not expected to produce any spectacular increase in the flow. HBT correlations provide this low-p_T scale, since they are important down to 1/R ≃ 50 MeV/c. Once they are subtracted, the peculiar behaviour of the pion flow at low p_T disappears. v_1 and v_2 are now compatible with a variation of the type v_1 ∝ p_T and v_2 ∝ p_T², up to 400 MeV/c. V. CONCLUSIONS We have shown that the HBT effect produces correlations which can be misinterpreted as flow when pions are used to estimate the reaction plane. This effect is present only for pions, in the (p_T, y) window used to estimate the reaction plane. Azimuthal correlations due to the HBT effect depend on p_T and y like the momentum distribution in the LCMS, i.e. (1/m_T) dN/(dy d²p_T), and depend weakly on the order of the harmonic n. The pion flow observed by NA49 has peculiar features at low p_T: the rapidity dependence of v_1 is irregular, and both v_1 and v_2 remain large down to values of p_T much smaller than the average transverse momentum, while they should decrease with p_T as p_T and p_T², respectively. All these features disappear once HBT correlations are properly taken into account. Furthermore, we predict that HBT correlations should also produce spurious higher harmonics of the pion azimuthal distribution (v_n with n ≥ 3) at low p_T, weakly decreasing with n, with an average value of the order of 1%. The data on these higher harmonics should be published. This would provide a confirmation of the role played by HBT correlations. More generally, our study shows that although non-flow azimuthal correlations are neglected in most analyses, they may be significant.
Cost-effectiveness of dapagliflozin versus DPP-4 inhibitors as an add-on to Metformin in the Treatment of Type 2 Diabetes Mellitus from a UK Healthcare System Perspective Background Type 2 diabetes mellitus (T2DM) is a chronic, progressive condition where the primary treatment goal is to maintain control of glycated haemoglobin (HbA1c). In order for healthcare decision makers to ensure patients receive the highest standard of care within the available budget, the clinical benefits of each treatment option must be balanced against the economic consequences. The aim of this study was to assess the cost-effectiveness of dapagliflozin, the first-in-class sodium-glucose co-transporter 2 (SGLT2) inhibitor, compared with a dipeptidyl peptidase-4 inhibitor (DPP-4i), when added to metformin for the treatment of patients with T2DM inadequately controlled on metformin alone. Methods The previously published and validated Cardiff diabetes model was used as the basis for this economic evaluation, with treatment effect parameters sourced from a systematic review and network meta-analysis. Costs, derived from a UK healthcare system perspective, and quality-adjusted life years (QALYs), were used to present the final outcome as an incremental cost-effectiveness ratio (ICER) over a lifetime horizon. Univariate and probabilistic sensitivity analyses (PSA) were carried out to assess uncertainty in the model results. Results Compared with DPP-4i, dapagliflozin was associated with a mean incremental benefit of 0.032 QALYs (95 % confidence interval [CI]: −0.022, 0.140) and with an incremental cost of £216 (95 % CI: £-258, £795). This resulted in an ICER point estimate of £6,761 per QALY gained. Sensitivity analysis determined incremental costs to be insensitive to variation in most parameters, with only the treatment effect on weight having a notable impact on the incremental QALYs; however, there were no scenarios which raised the ICER above £15,000 per QALY. The PSA estimated that dapagliflozin had an 85 % probability of being cost-effective at a willingness-to-pay threshold of £20,000 per QALY gained. Conclusions Dapagliflozin in combination with metformin was shown to be a cost-effective treatment option from a UK healthcare system perspective for patients with T2DM who are inadequately controlled on metformin alone. Background Type 2 diabetes mellitus (T2DM) is a chronic condition characterised by elevated blood glucose levels as a result of resistance to the action of insulin. T2DM can lead to numerous micro-and macro-vascular complications and may cause substantial disability. It is increasingly prevalent, with the T2DM population in the UK expected to rise to 3 million by 2017 [1], and it is currently estimated to account for 7-12 % of the total UK National Health Service (NHS) expenditure [2,3]. Although drug costs are increasing [1], the greatest component of the economic burden of T2DM is the treatment of diabetic complications [2], which can be reduced with effective management of the disease. The primary treatment goal of T2DM management is to reduce glycated haemoglobin (HbA1c) levels to below 6.5 % for first line treatment or below 7.5 % for second line treatment. This is recommended in the UK by the National Institute for Health and Care Excellence (NICE) in order to effectively reduce diabetes-related complications [3]. 
The principles of the NICE guidelines are in line with those outlined in the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD) combined position statement, which support a target HbA1c goal for adults with T2DM of around 7 %, depending on individual patient characteristics [4]. However, T2DM represents a major clinical priority, as between 30 and 40 % of all patients receiving treatment fail to reach the blood glucose targets recommended by NICE and over three-quarters are overweight or obese [4,5]. Metformin is commonly used as a first-line treatment in diabetes; however, due to the progressive nature of T2DM, many patients at some point will require additional therapy to maintain glycaemic control. The selection of additional treatment options is often complex due to the number of factors that must be considered. Unintended sequelae such as hypoglycaemia, weight changes and side effects are important considerations as they can have a significant impact on patients' adherence and quality of life [4]. Dapagliflozin was the first in a new class of selective sodium-glucose co-transporter 2 (SGLT2) inhibitors licensed in Europe. Both dapagliflozin and dipeptidyl peptidase-4 inhibitors (DPP-4i) have been recommended by NICE in the UK as second-line therapies (dual therapy, add-on to metformin) in patients with T2DM, when diet and exercise plus metformin fail to achieve glycaemic targets. In order for healthcare decision makers to ensure patients receive the highest standard of care within the available budget, the clinical benefits of each treatment option must be balanced against the economic consequences. This study aimed to assess the long-term cost-effectiveness of dapagliflozin versus DPP-4i, as dual oral therapies in combination with metformin, in patients who were inadequately controlled on metformin alone, from the perspective of the UK NHS. An additional objective was to present the model here as it was reviewed and accepted by NICE. In addition to glycaemic control, key factors that may differ across therapies and therefore drive treatment decisions in clinical practice, such as weight and hypoglycaemic risk, were also considered in the analysis. Results of a previously published network meta-analysis (NMA), comparing the major clinical outcomes for dapagliflozin with DPP-4i as an add-on to metformin [6], acted as a key source of clinical inputs for this economic analysis. This reported a non-significant reduction in HbA1c (−0.08 % [95 % CI: −0.25, 0.10]) and a significant reduction in weight (−2.85 kg [95 % CI: −3.39, −2.30]) for dapagliflozin compared with DPP-4i [6]. Assessments of the cost-effectiveness of dapagliflozin versus other antidiabetic agents used as add-ons to metformin [7], and for indications other than as an add-on to metformin and in settings other than the UK [8], have been presented elsewhere. Methods The economic evaluation analysed the cost-effectiveness of dapagliflozin as an add-on to metformin (DAPA + MET) versus DPP-4i as an add-on to metformin (DPP-4i + MET) in adults aged 18 years and older with T2DM who were inadequately controlled on metformin alone. The main assessment metric was the incremental cost-effectiveness ratio for dapagliflozin compared with DPP-4i therapy, with effectiveness measured in quality-adjusted life years (QALYs). QALYs represent a composite measure of estimated post-treatment life years adjusted for the quality of life (or utility) of those life years.
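For reference, the headline metric reduces to simple arithmetic. The sketch below uses the rounded incremental values reported in this study purely for illustration; the published ICER of £6,761 was computed from unrounded model outputs, so the figure here differs slightly.

```python
# Deterministic cost-effectiveness summary from the reported (rounded) point estimates.
delta_cost = 216.0      # GBP, incremental cost of DAPA+MET vs DPP-4i+MET
delta_qaly = 0.032      # incremental QALYs
wtp = 20000.0           # GBP per QALY, willingness-to-pay threshold

icer = delta_cost / delta_qaly           # ~6,750 GBP/QALY (paper: 6,761 from unrounded inputs)
nmb = wtp * delta_qaly - delta_cost      # net monetary benefit at the threshold
print(f"ICER = {icer:,.0f} GBP/QALY, NMB at the 20k threshold = {nmb:,.0f} GBP")
```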
The economic evaluation was conducted from the perspective of the UK NHS, and a discount rate of 3.5 % was applied to both costs and health effects as recommended in the NICE Methods Guide [9]. Model structure The published Cardiff stochastic simulation diabetes model was used as the basis for this economic evaluation as this has previously been validated to accurately model important clinical outcomes for diabetic patients [10][11][12]. The model utilises risk equations from the UK Prospective Diabetes Study (UKPDS) 68 to estimate long-term micro- and macro-vascular complications, as well as diabetes-related mortality and non-diabetes-related mortality [13]. In total, seven micro- and macro-vascular complications were included in the model (ischaemic heart disease, myocardial infarction, congestive heart failure, stroke, amputation, blindness, and end-stage renal disease), along with cardiovascular (CV) death, non-CV death, drug-related hypoglycaemic events and additional adverse events associated with an SGLT2 inhibitor. The cumulative incidence of all complications depended on patients' baseline characteristics and the time- and treatment-dependent evolution of modifiable risk factors, including HbA1c, body mass index (BMI), the ratio of total to high-density lipoprotein (HDL) cholesterol and systolic blood pressure (SBP). This allowed the model to measure the extent to which age and gender affected the incidence of diabetes-related complications and to take into account factors such as the increased risk of stroke amongst smokers and the lower incidence of MI associated with the UK Afro-Caribbean population with T2DM [13]. In the base case analysis, 100 cohorts of 30,000 individual patients were modelled, and this was tested to ensure that stability in the simulation results had been reached. Patients were simulated through 6-monthly time intervals over a total period of 40 years, indicative of a lifetime horizon for an average T2DM patient. Six-monthly rather than annual cycles were chosen to allow more detailed transitions to be modelled and to reflect the common follow-up time in clinical practice. At the end of each 6-month cycle, the UKPDS risk equations determined the occurrence of the fatal and non-fatal complications. Annual UKPDS risk equations were adjusted to reflect 6-monthly risks by converting to a rate and then converting this to a 6-monthly time frame. All-cause mortality events were estimated using gender-specific life tables for the UK [14]. Once a fatal event occurred in the model, life years and QALYs were updated and the simulation ended for the patient. In the model, each treatment resulted in a one-year reduction in HbA1c; this timeframe reflected the length of data available from the NMA data source comparing dapagliflozin with oral antidiabetic therapies [6]. After this point, a continued rise was assumed due to disease progression, which was derived from a regression analysis of the UKPDS dataset [13]. Similar assumptions were used for SBP and cholesterol. In clinical studies, DAPA + MET has resulted in a statistically significant reduction in body weight compared with sulphonylurea + MET and with placebo [15,16]. Significant differences between DAPA + MET and DPP-4i + MET for this outcome were also demonstrated in the NMA [6]. Hence, the effect of patient weight in terms of risk of CV complications and the impact on patient health-related quality of life (HRQoL) was incorporated into the economic model.
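A minimal sketch of the annual-to-6-monthly risk adjustment described above, under the usual constant-rate assumption (the 10% input is an arbitrary example, not a UKPDS value):

```python
import math

def annual_to_six_month_probability(p_annual: float) -> float:
    """Convert an annual event probability to a 6-month probability via a
    constant-rate assumption: p = 1 - exp(-rate * t)."""
    rate = -math.log(1.0 - p_annual)      # annual event rate
    return 1.0 - math.exp(-rate * 0.5)    # probability over half a year

# Example: a 10% annual risk becomes ~5.1% per 6-month model cycle.
print(f"{annual_to_six_month_probability(0.10):.4f}")
```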
Initially, progression of weight was established from the impact of each treatment on weight over a 12-month period. In the dapagliflozin arm, patients' weight was assumed to be maintained in year 2 based on 2-year clinical extension data [17]. The same assumption of stable weight in year 2 was also made for the comparator arm. At the time this analysis was performed for NICE, no further long-term data on patient weight were available and therefore the assumption was made that the initial weight loss would be fully regained in a linear manner to a level that corresponded to the patients' baseline weight. However, a recent study has shown that the weight lowering effect of dapagliflozin is maintained for four years [18], suggesting that the assumptions used in the model were conservative. Patient population The model population was a cohort representative of UK T2DM patients and was designed to best illustrate where dapagliflozin would be used as part of a UK treatment strategy. In line with clinical trials, patients considered in this model had failed to achieve adequate control on prior metformin monotherapy and therefore required modification to their treatment regimen. Baseline characteristics for intervention and comparator arms (Table 1) were sourced from a systematic review and class-level NMA of relevant phase 3 randomised controlled trials (RCTs) [6]. Treatment sequence The first modelled treatment lines were DAPA + MET and DPP-4i + MET for the intervention and comparator groups respectively. Simulated patients received the allocated therapy until their HbA1c level increased towards a pre-specified threshold limit. This switching threshold was set equal to the mean baseline HbA1c value of patients entering the phase 3 clinical trials included in the NMA, (8.05 %). At this point, patients in the model switched therapy; first to insulin + MET and then to intensified insulin (simulated by increasing the dose by 50 %). Patients then remained on this latter treatment for the remainder of the time horizon and the HbA1c levels progressed according to the UKPDS regression analysis. The treatment duration in the model was determined by a combination of the HbA1c baseline value, the HbA1c treatment effect and the predefined HbA1c treatment switch threshold. Patients also had a separate risk of discontinuing therapy due to tolerability issues during the first cycle. Treatment effects For each treatment effect, a one-year reduction in HbA1c and weight was applied using data for the relative efficacy of DAPA + MET and DPP-4i + MET derived from the previously published NMA of dual therapy RCTs; further detailed information on the methods and results of the NMA has been described elsewhere [6]. The authors of the NMA deemed the included studies to be of good quality [6]. The NMA reports the relative effects of each agent in comparison with other agents whereas in this model we used the absolute changes from baseline for each agent (Table 1). Values presented for subsequent treatments (insulin + MET and intensified insulin) were sourced directly from previous studies ( Table 1). The probability of discontinuation in the first cycle after treatment initiation and the respective probabilities of hypoglycaemic events, urinary tract infections and genital infections for each therapy are also outlined in Table 1. Of the studies included in the NMA, only two reported the change in SBP as an outcome [19,20]. 
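As a brief aside before the cost inputs are described below, the treatment-duration logic summarized above (a first-year HbA1c effect, post-year-1 progression, and a switch when the 8.05 % threshold is reached) can be sketched schematically. This is not the Cardiff model itself: every number except the threshold is a placeholder rather than a model input.

```python
# Schematic sketch of the HbA1c-driven treatment-switching rule (placeholder numbers).
baseline_hba1c = 8.05       # %, mean baseline, also used as the switching threshold
threshold = 8.05            # % switching threshold
effect_year1 = -0.80        # assumed first-year HbA1c change on therapy (%)
annual_drift = 0.15         # assumed post-year-1 HbA1c progression (%/year)

hba1c = baseline_hba1c + effect_year1
years_on_therapy = 1.0
while hba1c < threshold:
    hba1c += annual_drift * 0.5          # 6-month cycles
    years_on_therapy += 0.5

print(f"switch to next treatment line after ~{years_on_therapy:.1f} years (HbA1c {hba1c:.2f}%)")
```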
Due to the limited evidence available for treatment effect on cholesterol and SBP, no difference in effect between dapagliflozin and DPP-4i was assumed, so these values were set to zero in the model. Costs Costs included in the model were assessed from a UK health service perspective. A systematic literature review covering economic evaluations of relevance to a UK context for drug interventions for T2DM was undertaken. An overview of cost inputs applied in the model is presented in Table 3. UKPDS 65 [21] cost data, indexed to 2011 (the year the analysis was conducted for NICE) using the Hospital and Community Health Services Pay and Price index, were utilised as a key source for model inputs. Acquisition costs were sourced from the NHS Drug Tariff and were regarded as representative of the actual costs paid by the NHS [22]. Health-related quality of life A systematic literature review was carried out to identify sources of utilities for those factors that most affect HRQoL in patients with T2DM, namely diabetes-related complications, hypoglycaemia, weight change and other adverse events. The UKPDS 62 study [23] identified from the systematic review was used to inform the majority of values as the utilities were derived from a UK population and this was the same cohort from which the risk equations were derived. Utility data for end-stage renal disease (ESRD), hypoglycaemic events and urinary tract infection were sourced from alternative studies identified in the review [24,25]. Body weight utilities were sourced from a Canadian study by Lane et al. as this was the only reference identified by the systematic review that made a distinction in terms of utility change for BMI increase and decrease, and the data were elicited specifically from T2DM patients [26] (Table 1). No utility decrement could be identified specifically for genital infections, so this was assumed to be equivalent to that for urinary tract infections. Approach to sensitivity analysis To assess the impact of uncertainty on the model results, both deterministic univariate sensitivity analysis (SA) and probabilistic sensitivity analysis (PSA) were carried out. Parameters selected for variation in the univariate SA were the risk factors known to influence outcomes in the UKPDS equations, as well as others where the uncertainty around the point estimate was high, such as the utilities and the cost of complications. These parameters were varied in the univariate SA around their 95 % confidence/credible intervals. Where data were unavailable, the standard error (SE) was assumed to be a percentage of the mean in line with the magnitude of SEs for other similar variables. As such, disutilities for T2DM complications were varied by ±10 %, and total non-drug costs were varied by ±25 %. PSA was conducted by simulating 1,000 cohorts of 30,000 patients in which values of key parameters (including those not varied in the deterministic analysis) were drawn randomly and independently from the parameter distributions. The impact of the HbA1c switching threshold was tested separately, as it helps to determine the treatment duration for each intervention in the model. As the treatment effect on clinical parameters was only applied during this treatment period, it was important to fully test the assumptions made in the calculation of this parameter.
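A stylized sketch of how PSA output of the kind described above is turned into a probability of cost-effectiveness. The published analysis samples the underlying model parameters and propagates them through the simulation, so the toy version below, which draws the incremental outcomes directly and independently from assumed distributions, will not reproduce the reported 85 % exactly; it only illustrates the decision rule (count iterations with positive net monetary benefit at the threshold).

```python
import numpy as np

rng = np.random.default_rng(1)
n_psa = 1000
wtp = 20000.0

# Assumed spreads, loosely based on the reported 95% CIs; illustrative only.
delta_qaly = rng.normal(0.032, 0.04, n_psa)
delta_cost = rng.normal(216.0, 270.0, n_psa)

nmb = wtp * delta_qaly - delta_cost
prob_cost_effective = (nmb > 0).mean()
print(f"P(cost-effective at 20k/QALY) = {prob_cost_effective:.2f}")
```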
The effect on weight was also expected to be an important driver of the cost-effectiveness results, given the impact that DAPA + MET has been reported to have from clinical trials on weight and the significant difference in this outcome between DAPA + MET and DPP-4i + MET reported in the indirect comparison [6,15,16]. As such, the utilities associated with this variable were investigated in scenario analysis. Results The treatment group, DAPA + MET, was associated with a mean incremental benefit of 0.032 QALYs (95 % CI: −0.022, 0.140) when compared with the DPP-4i + MET control arm (Table 2). This effect is largely explained by differences in patient weight, which has a significant impact on HRQoL [26]. The mean incremental cost was estimated to be £216 (95 % CI: £-258, £795) and was mainly driven by the higher acquisition cost of dapagliflozin ( Table 2). An ICER point estimate of £6,761 per QALY gained was calculated. The results of the univariate SA are presented as tornado graphs (Fig. 1). These highlight the range of both the incremental costs and incremental QALYs for the parameters that most affect these outcomes. The base case value is represented by the central vertical line and the outcome values are plotted for the maximum and minimum values of each selected parameter. As can be seen from Fig. 1, the point estimate for incremental costs was relatively insensitive to the variation applied to the model parameters. Improving the HbA1c lowering effect of dapagliflozin resulted in an incremental cost increase of £165 compared with the base case, due to the increased treatment duration, which eventually led to higher drug acquisition costs for the dapagliflozin strategy. However, due to the incremental gain in QALYs observed (0.06), the overall ICER decreased to £4,140. The point estimate for incremental QALYs was shown to be most sensitive to variation of the treatment effect on body weight of DPP-4i (Fig. 1). When varying the weight effect of DPP-4i between the outer limits of its 95 % CI, the incremental QALYs ranged from 0.015 to 0.169; changing this parameter for dapagliflozin resulted in a QALY range of −0.002 to 0.066. The assumption around the HbA1c switching threshold was investigated by increasing the value from 8.05 to 8.5 % as studies have shown that many patients will exceed this initial threshold in practice [27,28]. The resulting ICER decreased from £6,761 to £5,227 per QALY gained. However, by using alternative, lower estimates for BMI utility effect [29], the ICER increased to £12,763 per QALY gained. In both cases, the ICERs still fell below the lower limit of generally accepted ICER values in the UK for diabetes medicines (£20,000 per QALY). The distribution of the ICER estimates from the PSA shows that in most instances, DAPA + MET is both more effective and more costly than DPP-4i + MET (Fig. 2, top panel). Analysis of the PSA results demonstrated that DAPA + MET had an 85 % probability of being cost-effective compared with the DPP-4i + MET treatment strategy at a willingness-to-pay threshold of £20,000 per QALY gained (Fig. 2, bottom panel). Discussion The economic analysis of DAPA + MET versus DPP-4i + MET, as accepted by NICE in the UK, has shown dapagliflozin to be a cost-effective use of NHS resources and therefore a valuable treatment option for T2DM patients who are inadequately controlled on metformin monotherapy. 
Results from a previously published NMA showed dapagliflozin added on to metformin resulted in a superior weight reduction outcome compared with DPP-4i [6]. This outcome has been shown to be important to patients in terms of quality of life and was a key driver in the associated gain in QALYs observed with dapagliflozin in this economic evaluation [30]. Ultimately, the results show that the incremental QALYs come at an acceptable incremental cost when the commonly accepted payer willingness-to-pay threshold of £20,000 per additional QALY gained used in the UK by NICE is applied, indicating the cost-effectiveness of dapagliflozin versus DPP-4i as an add-on to metformin for the treatment of T2DM. [Fig. 1 caption: Univariate sensitivity analyses: tornado graphs of incremental costs (top) and incremental QALYs (bottom). Variations of selected parameters are displayed as a range from the base case value (y-axis). Parameters include HbA1c change from baseline (ΔHbA1c), weight change from baseline (ΔWeight), BMI utility values and total non-drug costs. *It can be observed from the tornado graph for incremental costs that assuming a larger/smaller effect of dapagliflozin on HbA1c reduction would result in increased incremental costs. This can be explained by the model structure: in the case of a larger HbA1c reduction, patients would remain on the more expensive treatment option longer, whereas for a smaller HbA1c effect, patients would switch sooner to the next treatment line, leading to increased costs associated with AEs. Abbreviations: Comp., comparator; DAPA + MET, dapagliflozin added on to metformin; QALY, quality-adjusted life year; BMI, body mass index.] The model has previously undergone peer review [11,12] and health technology assessments [31] and in terms of structure is very similar to a previously developed and validated cost-effectiveness model [32,33]. The same model has also been used to compare the cost-effectiveness of DAPA + MET with sulphonylurea + MET; this analysis produced a similar magnitude of costs and QALYs for DAPA + MET as estimated in the current analysis and also concluded that DAPA + MET was a cost-effective use of NHS resources in the UK setting [7]. In addition, the cost-effectiveness of dapagliflozin as an add-on to insulin has been investigated from a Dutch healthcare perspective, where it was also found to have an ICER within acceptable cost-effectiveness limits [8]. Although the model has been validated against UKPDS datasets and extensive scenario and sensitivity analyses have been performed, several assumptions exist which may have an impact on the robustness of the study outcomes. Firstly, the model does not include less severe health states, such as microalbuminuria and foot ulcers; however, the utility and cost impact of such health states would be expected to be minimal and therefore have a negligible impact on the final ICER. Secondly, the model treats the patients as a cohort with mean baseline values and mean treatment estimates, as data availability did not allow for a more sensitive approach. Heterogeneity in the population was tested in the model through the sensitivity analysis, but future modelling could simulate individual patients with different characteristics and link these to treatment effect if clinical trial data allowed. Thirdly, the assumption that the mean HbA1c at baseline is a valid representative of the switching threshold does not take into account the potentially skewed nature of this parameter.
Unfortunately, data were not available to allow a normality test; however, the switching threshold has been tested in one-way sensitivity analysis and was found not to change the conclusion regarding the cost-effectiveness of DAPA + MET over DPP-4i + MET. In terms of limitations in the clinical inputs, the main one was the lack of an RCT directly comparing DAPA + MET with DPP-4i + MET. In the absence of such a head-to-head trial, an NMA was conducted using Bayesian methodology, the methods and limitations of which have been previously discussed in the publication by Goring et al. [6]. The uncertainty around the efficacy of dapagliflozin was investigated through one-way sensitivity analysis, which showed that as the efficacy of dapagliflozin in lowering HbA1c levels was increased, the total costs also increased due to the longer estimated treatment period. The additional QALYs gained, however, meant that the resultant ICER still fell, as would be expected. Additionally, there was a lack of available long-term data regarding the effect of dapagliflozin or DPP-4i as an add-on to metformin on the development of diabetes-related micro- and macro-vascular complications; it was assumed instead that valid lifetime predictions of events can be made using the UKPDS 68 risk equations [13]. These risk equations have been widely used by researchers modelling diabetes treatments [10,11,33] and although they are not without limitations [34], there are no other sources for risk prediction that have been based on such a large number of T2DM patients. The UKPDS risk equations are also derived from a study of over 5,000 UK patients, making them highly applicable to the perspective of the current analysis. Since this analysis for NICE was performed, more recent UKPDS risk equations than the UKPDS 68 ones have been made available [35], but their use in health economic models has not yet been validated by health technology assessment agencies. Therefore we decided to maintain the use of the UKPDS 68 equations in the model, as these have been reviewed and accepted by NICE during the appraisal of dapagliflozin [36]. We also acknowledge that an alternative source of utility values has been published [37]; however, we think it is important that the model that was reviewed by NICE is published. Assumptions to extrapolate HbA1c beyond trial outcomes were designed to best represent the progressive nature of T2DM in clinical practice, and the time paths were shown to be in line with those reported in the UKPDS 68 study [13]. Weight change, which was associated with CV risk and a decreased HRQoL whilst on treatment, was extrapolated beyond the 2-year trial data and was shown to be a key risk factor in determining QALYs during the sensitivity analysis. However, as mentioned previously, a conservative approach was adopted when extrapolating these data, and as such, any beneficial effect associated with weight loss was likely underestimated. Recently published long-term data reporting a sustained weight effect over 4 years confirm the conservative nature of this assumption [18]. Key uncertainties within the economic evaluation arose around the BMI utilities, as there is some uncertainty over the precise relationship between change in BMI and disutility in T2DM. The values used to estimate the impact of increasing/decreasing BMI on utilities were seen to vary in the literature, although sensitivity analyses showed that this did not impact on the cost-effectiveness of dapagliflozin.
The outcomes from the NMA, which included international studies, can be considered to be generalisable to patients in a UK setting as the populations defined were representative of the treatment indication in the UK and the average baseline demographics were similar to the UK T2DM population recruited for the UKPDS studies [13]. Additionally, cost and resource use data were derived from UK sources, and utility data were largely sourced from the UKPDS studies. The results may also be of interest to other countries where DPP-4 inhibitors are commonly used in clinical practice as add-ons to metformin, especially as the treatment effect data are sourced from international trials. However, available treatments and advised strategies may differ between countries, and factors such as the risk equations, utilities and costs may be subject to change. As such, extrapolation of these results to countries outside of the UK should be performed with caution and local adaptation of the economic model would be advised. Conclusion Dapagliflozin represents the first-in-class selective SGLT2 inhibitor licensed in Europe, and has been shown in clinical trials to have an effect on HbA1c comparable with existing treatments and a superior outcome in terms of weight reduction [38]. This analysis confirmed that DAPA + MET, in comparison with DPP-4i + MET, is cost-effective within acceptable UK thresholds for the treatment of patients with T2DM. Availability of supporting data Supporting data used for the development of the model are available on request.
Unraveling the Role of microRNA and isomiRNA Networks in Multiple Primary Melanoma Pathogenesis Background Malignant cutaneous melanoma (CM) is a potentially lethal form of skin cancer whose worldwide incidence has been constantly increasing over the past decades. During their lifetime, about 8% of patients with CM will develop multiple primary melanomas (MPM). Patients affected by MPM could have a genetically determined susceptibility, though germline mutations in hereditary melanoma genes are rarely detected. Methods To better characterize the biology of this subset of melanomas, we explored the miRNome of 24 single and multiple primary melanomas, including multiple tumors from the same patient, using a smallRNA sequencing approach and bioinformatic detection of miRNA isoforms. The differential expression of specific miRNAs/isomiRs was confirmed using quantitative PCR. Results From a supervised analysis, 22 miRNAs were differentially expressed in MPM compared to single CM, including key miRNAs involved in epithelial-mesenchymal transition (EMT). Moreover, the first and second melanoma from the same patient presented a different miRNA profile. Ten miRNAs, including miR-25-3p, 149-5p, 92b-3p, 211-5p, 125a-5p, 125b-5p, 205-5p, 200b-3p, 21-5p and 146a-5p, were further validated in a larger cohort of single and multiple melanoma samples (N=47). Overall, the Pathway Enrichment Analysis revealed a more differentiated and less invasive status of MPMs. Analyzing our smallRNA-seq data, we detected a panel of melanoma-specific miRNA isoforms (isomiRs), which were validated in The Cancer Genome Atlas SKCM cohort. Specifically, we identified the hsa-miR-125a-5p|0|-2 isoform as 10-fold over-represented in melanoma and differentially expressed in MPMs. IsomiR-specific target analysis revealed that the miRNA shortening confers a novel pattern of target gene regulation, including genes implicated in melanocyte differentiation and cell adhesion. Conclusions Overall, we provided a comprehensive characterization of the miRNA/isomiRNA regulatory network of multiple primary melanomas, highlighting mechanisms of tumor development. Regarding genetic factors, somatic mutations of the BRAF gene have been found in almost 40-50% of sporadic CMs located in body sites with intermittent UV exposure; 15-20% of the other cases are associated with NRAS mutations and correlated with chronic UV exposure 24 . A small portion of melanomas occurs in acral or mucosal locations, and a subset of them is related to KIT and GNAQ mutations 7 . These findings have brought important therapeutic implications and changed the management of CM patients with the development of specific targeted therapies. Germline mutations, instead, can be found in multiple or familial cases of CM. The most frequently described germline mutation is in the CDKN2A (cyclin-dependent kinase inhibitor 2A) gene, occurring in 8-15% of subjects diagnosed with multiple primary melanomas (MPMs) without familial history and in up to 40% of patients with hereditary CM 6,25,40,44 .
Mutations in other susceptibility genes such as CDK4 (cyclin-dependent kinase 4), MITF (microphthalmia-associated transcription factor) and POT1 (protection of telomeres 1) are less frequently detected 4,14 . During their lifetime, about 8% of patients with cutaneous melanoma will develop multiple primary melanomas, usually at a young age and within 3 years from the first tumor/diagnosis 18 . The occurrence of MPMs in the same patient is thought to be related to a personal genetic susceptibility in association with environmental factors. These patients may represent a model of high-risk CM occurrence. As a matter of fact, it is estimated that a personal history of CM is a strong risk factor for the development of a subsequent primary CM 18,43 . The excision of a prior CM carries a risk of up to 8.5% of developing another CM, and the frequency of MPMs is reported to be between 0.2 and 10% 13,25,29 . The above-reported rates may underestimate the lifetime rates due to limited series of patients and different follow-up periods. Variability may also arise due to differences in environmental factors such as ultraviolet radiation exposure across geographical regions. Among the cases of MPMs, 13-40% of patients are diagnosed with synchronous lesions (i.e. a subsequent primary CM diagnosed within 3 months from the prior diagnosis), while the remainder develop metachronous lesions 1,29,41 . The risk of a subsequent CM is highest in the first year following the diagnosis of the primary CM; however, this risk remains increased for at least 20 years 1 . Moreover, the frequency of germline mutations in melanoma susceptibility genes (CDKN2A, CDK4, MITF, POT1/ACD/TERF2IP, TERT, BAP1) is lower than expected in MPM patients 5,6,9,25 . Therefore, a better characterization of MPM pathogenesis and biological features is of the utmost importance. The dysregulation of small noncoding RNAs, specifically microRNAs (miRNAs, 18-22 nucleotides in length), plays a significant role in tumorigenesis, including melanoma onset and progression 46 . MiRNAs regulate multiple and specific target genes, exerting an oncogenic or tumor-suppressive function and being implicated in proliferation, apoptosis and tumor progression. Moreover, the global miRNA expression profile faithfully reflects the overall expression profile of normal and pathological cells and tissues, with the advantage of being measurable also in formalin-fixed and paraffin-embedded (FFPE) tissues. In this study, we investigated the global miRNA and isomiRNA expression profile of multiple primary melanomas using an unbiased smallRNA sequencing approach. A comparison of the familial/non-familial MPM vs. single primary melanoma miRNome was performed in order to investigate possible similarities. Moreover, the evolution of the MPM miRNA profile was assessed by matching multiple tumors from the same patient. Clinical samples A retrospective series of 47 samples from 29 patients was collected. Patients were selected among those referred to the melanoma center of the Dermatology Unit at Bologna University Hospital. The study was approved by the Comitato Etico Indipendente di Area Vasta Emilia Centro (CE-AVEC), Emilia-Romagna Region (number 417/2018/Sper/AOUBo). Before study entry, all patients provided written and voluntary informed consent for inclusion, for the collection and use of clinical-pathological data and samples, and for data privacy. The specimens were classified into three groups: benign nevi, single primary cutaneous melanoma (CM) and multiple primary melanoma (MPM).
Group 1 (n = 3), benign nevi from 3 patients with no prior diagnosis of CM or non-melanoma skin cancer and a follow-up of at least 10 years. Group 2 (n = 35), MPM samples from 17 patients with a prior diagnosis of ≥ 2 CMs. Three of the 17 patients had a positive family history of CM (FAM). MPM patients were tested for CDKN2A, MITF and CDK4 genetic alterations, and only 1 patient had a mutation in the CDKN2A gene (c.249C > A p.His83Gln). Group 3 (n = 9), 9 samples from CM patients with no history of prior CMs and a follow-up of at least 10 years. Tumor and nevus samples were formalin-fixed and paraffin-embedded (FFPE). For each sample, five to six tissue sections on glass slides were obtained. One section was stained with hematoxylin-eosin (HE) and examined by an expert pathologist to select the tumor/nevus area, which was grossly dissected before RNA extraction. RNA extraction RNA was isolated from 10 µm-thick FFPE sections using the miRNeasy FFPE kit (Qiagen) according to the manufacturer's instructions. Deparaffinization was performed with xylene followed by an ethanol wash. RNA was eluted in 30 µL of RNase-free water and quantified by absorbance at 260 and 280 nm. SmallRNA sequencing We analyzed 3 benign nevi, 4 single CMs, and 17 multiple primary or familial melanomas from 8 different patients. The 24 smallRNA libraries were generated using the TruSeq Small RNA Library Prep Kit v2 (Illumina, RS-200-0012/24/36/48) according to the manufacturer's instructions. Briefly, 35 ng of purified RNA was ligated to RNA 3' and 5' adapters, converted into cDNA, and amplified using Illumina primers containing unique indexes for each sample. Libraries were quantified with the High Sensitivity DNA kit on an Agilent Bioanalyzer (Agilent Technologies, 5067-4626), and the 24 DNA libraries were combined in equal amounts to generate a library pool. Pooled libraries underwent size selection with magnetic beads (Agencourt), and amplicons with a length in the 130-160 bp range were recovered. Finally, 20 pM of pooled libraries, quantified using the HS-DNA kit (Agilent), were denatured, neutralized and combined with a PhiX control library (standard normalization library). A final concentration of 1.8 pM of pooled libraries (obtained by dilution with a dedicated buffer as described in the Illumina protocol guidelines) was sequenced using the NextSeq 500/550 High Output Kit v2 (75 cycles) (Illumina, FC-404-2005) on the Illumina NextSeq500 platform. Raw base-call data were demultiplexed using the Illumina BaseSpace Sequence Hub and converted to FASTQ format. After a quality check with the FastQC tool, adapter sequences were trimmed using Cutadapt, which was also used to remove sequences shorter than 16 nucleotides or longer than 30 nucleotides. Reads were mapped using the STAR algorithm. Only reads that mapped unambiguously to the genome (at least 16 nucleotides aligned, with a 10% mismatch rate allowed) were used for the analyses. The reference genome consisted of human miRNA sequences from the miRBase 21 database. Raw counts from mapped reads were obtained using the htseq-count script from the HTSeq tools 3 . Counts were normalized using the DESeq2 Bioconductor package 37 . 
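The count normalization just described can be made concrete with a short sketch. The study uses the DESeq2 Bioconductor package; the snippet below is not that package but a minimal numpy illustration of the median-of-ratios size-factor idea that DESeq2 implements, and the input layout (a miRNA-by-sample count matrix) is an assumption made for illustration.

```python
import numpy as np

def size_factors(counts):
    """Median-of-ratios size factors, the normalization idea behind DESeq2.

    counts: raw read counts, shape (n_miRNAs, n_samples).
    Returns one scaling factor per sample; dividing each column of
    `counts` by its factor puts the samples on a comparable scale.
    """
    counts = np.asarray(counts, dtype=float)
    # Build the pseudo-reference only from miRNAs detected in every sample.
    detected = np.all(counts > 0, axis=1)
    log_counts = np.log(counts[detected])
    log_ref = log_counts.mean(axis=1, keepdims=True)  # per-miRNA geometric mean, in log space
    # Size factor = median ratio of a sample's counts to the reference.
    return np.exp(np.median(log_counts - log_ref, axis=0))

# Usage: normalized = counts / size_factors(counts)
```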
Quantification of isomiRs IsomiRs were identified in our NGS dataset of 24 samples as described in Loher et al. 36 . Briefly, sequence reads were quality trimmed using the Cutadapt tool and mapped unambiguously with SHRiMP2 (PMID: 21278192) to the human genome assembly GRCh38. During the mapping, no insertions or deletions were allowed, and at most one mismatch was permitted. IsomiRs were identified as done previously 36,52 . For the TCGA isomiR analysis, short RNA-seq aligned BAM files were downloaded from the Genomic Data Commons Data Portal (https://portal.gdc.cancer.gov/) for all 32 cancer types. IsomiR profiles were generated using the same approach as described in Loher et al. 36 . To simplify the labeling of the isomiRs, we used the annotation system developed by Loher et al. This nomenclature specifies the name of the canonical miRNA, the start site (5' end) of the isomiR compared to the canonical miRNA sequence in miRBase, the end site (3' end), and any uracil insertions. In particular, to annotate the start and end sites of an isomiR, a negative (-) or positive (+) sign followed by a number of nucleotides indicates by how many nucleotides the isomiR terminus differs from the canonical miRNA sequence. Zero indicates the same terminus as the canonical miRNA sequence. We quantified isomiR abundances in reads per million (RPM). Only reads that passed quality trimming and filtering and could be aligned exactly to miRNA arms were used in the denominator of this calculation. IsomiR targets were predicted using the RNA22 algorithm 35 , and targets were allowed to be present in the 5' UTR, CDS, and 3' UTR of the candidate mRNA. We selected only those targets that had a p-value < 0.01 and a predicted binding energy < -16, while also allowing G:U wobbles and bulges within the seed region. Statistical analysis Normalized sequencing data were imported and analyzed in the GeneSpring GX software (Agilent Technologies). Differentially expressed miRNAs were identified using a fold change > 1.5 filter and a moderated t-test (FDR 5% with Benjamini-Hochberg correction) for the CM vs. MPM comparison, and a fold change > 1.2 filter and a paired t-test (p < 0.05) for the 1st vs. 2nd MPM comparison. Cluster analysis was performed using Manhattan correlation as a similarity measure. Principal component analysis was performed on 24 samples using all human miRNAs detected by the NGS analysis (n = 1629). GraphPad Prism 6 (GraphPad Software) was used for statistical analyses. Group comparisons were performed using the unpaired t-test when data had a normal distribution, with or without Welch's correction according to the significance of the variance test. Data that did not have a normal distribution were compared using the Mann-Whitney non-parametric test. The association of gene expression with overall survival in the TCGA SKCM cohort was obtained using the OncoLnc website (http://www.oncolnc.org); the log-rank test was used to calculate the p-value. Pathway Analysis Pathway and network analysis of differentially expressed miRNAs, miR-125a-5p isoforms and their targets was performed using the web-based software MetaCore (GeneGo, Thomson Reuters). A p-value of 0.05 was used as a cutoff to determine significant enrichment. 
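Before moving to the results, the isomiR labeling and RPM quantification described in the methods above can be sketched as follows; the function names and input structures are illustrative assumptions rather than the pipeline actually used.

```python
def isomir_label(mirna, start_offset, end_offset):
    """Build a Loher-style label such as 'hsa-miR-125a-5p|0|-2'.

    Offsets are relative to the canonical miRBase termini: 0 means the same
    terminus, negative means shorter, positive means longer.
    """
    fmt = lambda o: "0" if o == 0 else f"{o:+d}"
    return f"{mirna}|{fmt(start_offset)}|{fmt(end_offset)}"

def rpm(isomir_counts, total_mirna_reads):
    """Reads per million; the denominator counts only reads that passed
    filtering and aligned exactly to miRNA arms, as stated above."""
    return {label: 1e6 * n / total_mirna_reads for label, n in isomir_counts.items()}

# The isoform/canonical ratio used to flag over-represented isoforms:
# ratio = rpm_table['hsa-miR-125a-5p|0|-2'] / rpm_table['hsa-miR-125a-5p|0|0']
```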
Patient Characteristics Demographic, clinical and pathological features of the 29 patients are summarized in Table 1. A total of 16 males and 13 females were included, with a mean age at first diagnosis of 59 years for single primary melanomas and 53 years for multiple primary melanomas. Nine patients had a single cutaneous melanoma and 10 years of follow-up; 17 developed more than one primary melanoma in an average time of 33 months (range 3-98). MPM patients were tested for germline genetic alterations in the CDKN2A, CDK4 and MITF genes 15 , and only one patient was found to have a germline CDKN2A mutation (c.249C > A p.His83Gln, exon 2) of unknown clinical significance. Melanoma specimens were examined by two dermato-pathologists. The microRNA profile of multiple primary melanoma The global miRNA profile of 17 multiple primary melanomas, obtained from 8 patients, was analyzed using a smallRNA sequencing approach. For each MPM patient we analyzed the first and second primary tumor, and for one case also a third one. Three patients had a family history of melanoma. We compared the global miRNA profile of MPM with that of 4 single melanomas and 3 benign nevi. From the smallRNA sequencing data, we identified 1629 mature miRNAs expressed in melanoma and nevus cells. The unsupervised principal component analysis (PCA) of all miRNAs and all samples (n = 24) revealed that familial and non-familial multiple primary melanomas have a greatly overlapping miRNA profile (Fig. 1A), which is different from that of single cutaneous melanoma (CM) and benign nevi (BN). Indeed, a statistical comparison between familial and non-familial MPMs did not provide any significant result. Therefore, we considered familial and non-familial melanomas as a unique group in all subsequent analyses. From the PCA, we can already observe that MPMs displayed a miRNA profile more similar to benign nevi than to CMs. When we compared multiple and single melanoma tumors, we obtained a markedly different miRNA expression profile and a list of 22 differentially expressed miRNAs (adjusted p < 0.05, Table 2), which are represented with a volcano plot in Fig. 1B. Cluster analysis of these samples based on the expression of the 22 differentially expressed miRNAs confirmed the separation between single and multiple tumors (Fig. 1C). The MPM group consisted of the paired first and second tumors (and one additional tumor in one case) developed by the same patient over the years (Table 1). By comparing the miRNA profiles of these two groups using a paired statistical analysis, we identified miRNAs that characterize the second tumors, which are usually thinner and less aggressive than the first melanoma given their early diagnosis. Despite the similarities between the two matching MPMs, a variation in miRNA expression was observed (Fig. 1B). Specifically, thirty-seven miRNAs were differentially expressed between the first and second MPM (paired t-test, p < 0.05, Table 3), and a significant separation was obtained applying the cluster analysis (Fig. 1D). Validation of microRNA differential expression in single and multiple primary melanomas and paired primary tumors from the same patient Nine miRNAs were selected for an independent technical validation using quantitative RT-PCR in 47 novel samples including BN, CM, and 1st and 2nd MPM. Specifically, we included miRNAs differentially expressed between CM and MPM (miR-21-5p, miR-25-3p, miR-125b-5p, miR-146a-5p, miR-205-5p, miR-149-5p) and others differentially expressed between the first (MPM 1st) and second (MPM 2nd) melanoma within the same MPM patient (miR-149-5p, miR-92b-3p, miR-200b-3p, miR-125a-5p). According to the smallRNA NGS results, upregulation of miR-21-5p, miR-25-3p and miR-146a-5p, and downregulation of miR-125b-5p, miR-149-5p and miR-205-5p, were expected in CM compared to MPM. In MPM samples, all selected miRNAs are upregulated in the MPM 2nd compared to the MPM 1st. In the validation experiment, we also included miR-211-5p, considering that it is a melanocyte-specific miRNA whose genetic locus is located inside the melastatin gene and whose expression is particularly high in nevi. The expression of this miRNA was higher in BN, with borderline statistical significance when compared to CM or MPM, in our NGS data. The validation was performed in the cohort of 29 patients described in Table 1, using miR-16-5p as a reference gene due to its invariant expression in the NGS data. 
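A minimal sketch of how such reference-normalized qRT-PCR expression values can be computed, assuming the conventional 2^-ΔCt calculation with miR-16-5p as the reference (the text does not spell out the exact formula, so this is an assumption):

```python
def relative_expression(ct_target, ct_mir16):
    """Relative miRNA abundance by the 2^-dCt method.

    dCt = Ct(target miRNA) - Ct(miR-16-5p reference); lower Ct means more
    template, so a target detected 3 cycles after the reference gives 2**-3.
    """
    return 2.0 ** -(ct_target - ct_mir16)

# Example: relative_expression(28.0, 25.0) -> 0.125
```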
Expression distributions of the selected miRNAs in benign nevi, cutaneous melanoma and multiple primary melanoma samples are represented in Fig. 2. Upregulation of miR-200b-3p and miR-205-5p was observed in MPM. A similar trend can be observed for miR-149-5p. As expected, the melanocyte-specific, MITF-regulated miR-211-5p is progressively downregulated in multiple and single melanomas (Fig. 2). Functional annotation of the multiple primary melanoma miRNA signature The list of 22 miRNAs differentially expressed in multiple vs. single melanomas was uploaded into the MetaCore software (Clarivate Analytics) to identify both the pathways that are significantly regulated by these miRNAs (Supplementary Table 1, Additional file 1) and the most significant miRNA/target networks (Supplementary Fig. 1A, Additional file 2). Multiple primary melanomas were found to have a higher expression of the miR-200 family, miR-205-5p and miR-149-5p compared to single CM and even nevi (Fig. 2 and Fig. 4). These microRNAs target the ZEB1/TCF8 and ZEB2/SIP1 genes, and by doing so they inhibit the epithelial-mesenchymal transition (EMT) pathway. This pathway therefore appears to be specifically activated in single melanomas (Fig. 4). From the MetaCore network analysis, three hub genes (TLR4, ITGA6 and BTG2) were identified as targeted by multiple miRNAs, either up- or down-regulated in multiple melanomas. When we assessed the association of TLR4, ITGA6 and BTG2 gene expression with melanoma prognosis, we observed that their higher expression (median cutoff) was significantly associated with a worse overall survival in the TCGA SKCM cohort of 458 samples (Supplementary Fig. 1B, Additional file 2). IsomiRNA analysis revealed that miR-125a-5p isoforms are dysregulated in multiple primary melanoma Interestingly, the differential expression of miR-125a-5p in MPM was not confirmed by qPCR, and we wondered about a possible explanation. We observed that the reads generated by the smallRNA sequencing experiment and attributed to mature miR-125a-5p following the standard matching pipeline were actually shorter by 1 or, most frequently, 2 nucleotides (lacking GA at the 3' end) in all samples (Supplementary Fig. 2A, Additional file 3). Although the miRBase database reports a unique mature sequence for each miRNA, the so-called canonical form, much evidence from deep sequencing experiments suggests that miRNAs show frequent modifications in length and sequence in human tissues. These miRNA isoforms are called isomiRs 11 . We analyzed the isomiR expression levels in all single and multiple primary CMs from our NGS experiments. We found 90 miRNAs with sequence and length heterogeneity, generating 324 different isomiRs, and 40 canonical microRNAs without any isomiR. In addition, we found 40 isomiRs named "orphan", because their canonical miRNA sequences could not be detected. For each isomiR, we calculated the average expression in melanoma samples and the ratio between each isomiR and its canonical miRNA. Finally, we obtained a panel of 17 miRNAs whose isoforms are 3- to 10-fold more abundant in melanoma than their canonical form (Table 4). Among them, the hsa-miR-125a-5p|0|-2 isoform was differentially expressed in multiple vs. 
single primary melanomas and between the first and second tumor of the same patient (paired t-test, P = 0.0006). Unusually, the miR-125a-5p canonical and 3'-shorter isoforms show opposite expression trends in nevi, single and multiple primary melanomas (Fig. 5A). We evaluated two different technical approaches for miRNA quantification based on qPCR (miRCURY LNA and miSCRIPT, both by Qiagen) to selectively quantify the miR-125a-5p isoforms in all samples and validate the NGS data. Specifically, we used the miR-125a-5p miRCURY LNA assay (Exiqon/Qiagen) for the quantification of the canonical, 24 nt-long isoform (Supplementary Fig. 2B, Additional file 3). Results revealed a lack of variation of this mature isoform between single and multiple melanomas, and a higher expression in the first vs. second melanoma (Fig. 5B,C). To quantify the 22 nt-long miR-125a-5p isoform, we selected the miSCRIPT assay by Qiagen. This assay can quantify both the long and short isoforms of miR-125a-5p because it uses a universal 3' primer for miRNA amplification. Given the high predominance of the short isoform in our NGS data, we assumed this assay could provide a bona fide quantification of the short 22 nt-long isoform (Supplementary Fig. 2B, Additional file 3). As expected, an increase in miR-125a-5p levels in MPMs vs. CMs and in the second tumor from the same patient was observed (Fig. 5B,C). We examined the expression of the hsa-miR-125a-5p|0|-2 and 0|0 (WT) isoforms across TCGA tumor types and discovered an overall higher expression of the shorter form in human cancers and a specifically altered ratio of the two forms in SKCM (the cutaneous melanoma cohort), which shows the largest variation (Fig. 6). Discussion The risk of melanoma development is influenced by environmental and genetic factors. Families with a history of melanoma could harbor a germline mutation that confers hereditary susceptibility, and this is particularly evident in families in which several members develop multiple primary melanomas. In 1968, Lynch and Krush described the familial atypical multiple mole-melanoma (FAMMM) syndrome, which encompasses an association between pancreatic cancer, multiple nevi, and melanoma 38 . In the 1970s, Clark described a similar phenotype, the B-K mole syndrome, consisting of familial melanoma in the setting of numerous atypical nevi. In the early 1990s, germline mutations in the cell cycle gene p16 (CDKN2A) were reported among a subset of FAMMM kindreds. Nowadays, most studies report a very low prevalence of CDKN2A/CDK4 mutations in familial or multiple melanoma patients, especially in Southern European countries 9 . Though MPM patients often report similar sun exposure histories, the high percentage of atypical nevi in these patients and their family members, the frequent family history of melanoma, as well as the early onset of melanoma (young age), suggest that predisposing factors for the development of multiple melanomas are involved. Cases of multiple primary melanoma are also reported in individuals without a familial history of melanoma. In these cases, germline mutations in melanoma-predisposing genes are rarely detected. Therefore, it is evident that some other genetic or epigenetic factor is active in multiple primary melanoma to fuel multiple events of melanocytic transformation. In this study, we provide the first comprehensive molecular characterization of MPMs by assessing their miRNome with a smallRNA sequencing approach. 
The global microRNA expression reflects the mRNA expression of cells and tissues, with the advantage of being assessable in FFPE tissues. This analysis revealed a specific expression pattern of multiple melanoma tumors when compared to single cutaneous melanoma. The MPM miRNome is more similar to that of benign nevi, thus suggesting a less aggressive and more differentiated phenotype. We validated a panel of microRNAs in additional samples, including multiple tumors from the same patient, and obtained a panel of microRNAs differentially expressed between tumors from the same patient. We provide here evidence that MPMs, from a biological point of view, have a less invasive phenotype, as indicated by the main regulatory pathways activated in these tumors, thus providing further elements to support the less aggressive evolution of MPMs. It is worth mentioning that microRNAs known to inhibit epithelial-mesenchymal transition (e.g., the miR-200 family, miR-205, miR-149) are expressed at higher levels in multiple primary melanoma compared to single melanoma. Tumor cells promote EMT to escape from the microenvironment and migrate to a new location to develop metastasis 22 . The acquisition of a mesenchymal phenotype promotes the production of extracellular matrix proteins, resistance to apoptosis, invasiveness and migration 30 . EMT results from the loss of cell-to-cell junctions, induced by the loss of E-cadherin; the process is mediated by transcription factors, including SNAIL, SLUG, SIP1, and E2A, and affected by regulatory proteins such as TGFβ, EGF, PDGF, ERK/MAPK, PI3K/AKT, SMADs, RHOB, β-catenin, LEF, RAS, C-FOS, integrin β4 and integrin α5 53 . EMT has been reported in melanoma cells, despite their origin from neural crest-derived melanocytes. In fact, EMT promotes the metastatic phenotype of malignant melanocytes 28 . Moreover, melanocytes express E-cadherin, which mediates the adhesion between melanocytes and keratinocytes 51 . Many studies have described the loss of E-cadherin in melanoma 32,49 , and ectopic CDH1 expression was associated with the downregulation of adhesion receptors, such as MCAM/MUC18 and the β3 integrin subunit, resulting in suppression of melanoma cell invasion 26 . Hao et al. observed a switch from E-cadherin to N-cadherin expression in melanoma progression, a process regulated by PI3K and PTEN through TWIST and SNAIL 23 . Consistently, we examined the main cellular hubs regulated by MPM-specific miRNAs and discovered that they are centered on the TLR4, ITGA6 and BTG2 proteins. The microRNAs regulating these hubs are mostly downregulated in MPMs, and high expression of these three genes is associated with a more favorable prognosis in the TCGA SKCM cohort. Integrin α6 (ITGA6), also known as CD49f, is a transmembrane glycoprotein adhesion receptor that mediates cell-matrix and cell-cell interactions. ITGA6 has been identified and described as an important stem cell biomarker. Indeed, it is the only common gene expressed in embryonic stem cells, neural stem cells and hematopoietic stem cells 28,45 . It is also expressed in more than 30 stem cell populations, including cancer stem cells 31 . ITGA6 can combine with other integrins, such as integrin β1 and integrin β4, to form integrin VLA-6 and TSP180, respectively. The role of ITGA6 in melanoma is not clear, but our observations point toward its upregulation in MPMs upon miR-25 and miR-29 downregulation. BTG2 is part of the anti-proliferative BTG/TOB family and its expression is p53-dependent 47 . 
This protein is involved in several cellular processes, including cell cycle regulation, DNA damage repair, cell differentiation, proliferation and apoptosis. However, its role is often cell-type dependent 39 . In fact, BTG2 inhibits proliferation and migration, acting as a tumor suppressor protein, in gastric cancer cells 59 and in lung cancer cells 57 , while in bladder cancer it promotes cancer cell migration 55 . In B16 melanoma cells, it was shown that miR-21 promotes a metastatic behavior through the downregulation of many tumor suppressor proteins, including PTEN, PDCD4 and BTG2 58 . In MPMs, we observe the downregulation of several miRNAs targeting BTG2, including miR-21-5p, 146a-5p, 132-3p and 15a-5p. Therefore, an upregulation of BTG2 is to be expected. Toll-like receptor 4 (TLR4) belongs to the TLR family and plays an important role in inflammation and cancer. The TLR4 protein is expressed at very low levels in melanoma cells in vivo (Human Protein Atlas), but its activation has been reported to promote an inflammatory microenvironment and tumor progression in vitro 20 . In addition, TLR4 is associated with the induction of proliferation and migration of melanoma cells 50 . TLR4 plays an important role in melanoma also because it interacts with TRIM44, a negative prognostic factor in melanoma. In particular, TRIM44 binds and stabilizes TLR4, leading to the activation of AKT/mTOR signaling, which results in EMT promotion 56 . This biological role for TLR4 in melanoma is partially in contrast with our observation of a better survival in melanoma patients with higher TLR4 levels. Finally, we extended our molecular investigation to the miRNA isoforms that were most abundant in our samples. In line with the recent observation that miRNA isoforms can discriminate human cancers 52 , we detected a relevant number of miRNA variants in our dataset of single and multiple melanomas. A specific isoform of miR-125a-5p, lacking 2 nucleotides at the 3' end, was detected as differentially expressed in MPMs. This isoform is highly abundant in melanoma, as we confirmed by analyzing its levels across 32 tumor types from the TCGA database; the ratio between the miR-125a-5p isoform and its canonical form is the broadest in TCGA SKCM tumors (range 0.1-1100 times), and the isoform is 2-6 logs more abundant in nevi and melanomas in our study. Moreover, we detected a specific dysregulation of the isoform, but not of the canonical form, in multiple melanomas. Bioinformatic analyses revealed that the shortened miR-125a-5p isoform loses the ability to target and regulate a group of genes specifically involved in cell adhesion and cell differentiation. Particularly relevant seems to be the lack of regulation of genes involved in neuronal differentiation. Indeed, miR-125a is the human ortholog of lin-4, the very first miRNA identified in C. elegans in 1993 34 . In mammals, miR-125 is expressed in embryonic stem cells and promotes cell differentiation. Specifically, miR-125 has a specific role in adult nervous system development and neuronal differentiation 10,33 . The imbalance between the major miR-125 isoforms in melanocytes could reflect a major role for miR-125 in melanocyte development and differentiation from the neural crest 42 , distinguishing this lineage from other cells with a common ancestor, a role that is consequently reflected in melanoma development and progression. Conclusions Overall, we provide here a comprehensive characterization of the microRNA/isomiRNA dysregulation and regulatory network in single and multiple primary melanomas. 
The pattern of miRNA alterations supports a less aggressive phenotype of multiple primary melanomas, whilst isomiR-125a-5p levels proved to be specifically dysregulated in these tumors. Figure 3. Differential microRNA expression in the 1st vs. 2nd melanoma from the same patient. Before-after plot showing the paired expression of 4 selected microRNAs in 17 multiple primary melanoma (MPM) patients. miR-92b-3p, miR-205-5p, miR-200b-5p and miR-149-5p are significantly downregulated in the 1st melanoma compared to the 2nd melanoma. Each miRNA was tested in triplicate by quantitative RT-PCR. Relative miRNA expression was normalized to the invariant miR-16-5p. Paired P-values are reported.
Search for resonant pair production of Higgs bosons decaying to bottom quark-antiquark pairs in proton-proton collisions at 13 TeV : A search for a narrow-width resonance decaying into two Higgs bosons, each decaying into a bottom quark-antiquark pair, is presented. The search is performed using proton-proton collision data corresponding to an integrated luminosity of 35.9 fb −1 at √s = 13 TeV recorded by the CMS detector at the LHC. No evidence for such a signal is observed. Upper limits are set on the product of the production cross section for the resonance and the branching fraction for the selected decay mode in the resonance mass range from 260 to 1200 GeV. Introduction The discovery at the CERN LHC of a Higgs boson (H) [1][2][3] with a mass of 125 GeV [4,5] and properties consistent with the standard model (SM) of particle physics motivates searches for resonances via their decays into Higgs bosons. Several theories for physics beyond the SM posit narrow-width resonances decaying into pairs of Higgs bosons (HH). For instance, models with a warped extra dimension [6] predict the existence of new particles such as the spin-0 radion [7][8][9] and the spin-2 first Kaluza-Klein (KK) excitation of the graviton [10][11][12], which could decay to HH. These models have an extra warped spatial dimension compactified between two branes, with an exponentially warped metric characterized by κl, κ being the curvature and l the coordinate of the extra spatial dimension [13]. The benchmark parameter of the model is the ultraviolet cutoff of the theory, Λ ≡ √(8π) e^(−κl) M_Pl, M_Pl being the Planck scale. In proton-proton (pp) collisions at the LHC, the graviton and the radion are produced primarily through gluon-gluon fusion and are predicted to decay to HH with branching fractions of approximately 10 and 23%, respectively [14]. Previous searches for resonant HH production have been performed by the ATLAS and CMS Collaborations with pp collisions at √s = 8 and 13 TeV. The decay channels studied include bbbb [15][16][17], bbττ [18], bbγγ [19,20], γγWW [21], and bbWW [22]. This paper reports the results of a search for narrow-width resonances in the 260-1200 GeV mass range, decaying into a pair of Higgs bosons, each decaying into a pair of bottom quarks. The search is performed using pp collision data collected at √s = 13 TeV with the CMS detector at the CERN LHC, corresponding to an integrated luminosity of 35.9 fb −1 . The main challenge of this search is to discriminate the final-state signature of four bottom quark jets from the overwhelming multijet quantum chromodynamics (QCD) background. This is addressed by dedicated online selection criteria that include b jet identification and by a model of the multijet background that is tested in control regions of data. The analysis closely follows the approach adopted for the 8 TeV data [15], but the sensitivity for high resonance mass values is enhanced because of the significant increase in production cross section at 13 TeV, a new trigger strategy, and a more efficient algorithm for identifying jets originating from bottom quarks. 
Detector and simulated samples The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. A silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections, reside within the solenoid. Forward calorimeters extend the pseudorapidity (η) [23] coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in ref. [23]. Simulated samples of signal events are produced using various Monte Carlo (MC) event generators, with the CMS detector response modeled using the Geant4 [24] program. To model the production of a generic narrow-width spin-0 resonance, we use an MC simulation of the bulk radion produced through gluon fusion. The momenta and angular distributions for the decay products of a spin-2 resonance are distinct from those for a spin-0 resonance, and result in different kinematic distributions. Therefore, we evaluate the signal efficiencies for a narrow-width spin-2 resonance from a separate simulation of the first excitation of a bulk KK graviton produced through gluon fusion and forced to decay to a pair of Higgs bosons with the parameters reported in ref. [25]. Bulk graviton and radion signal events are simulated with masses in the range 260-1200 GeV and widths of 1 MeV (narrow-width approximation), using the MadGraph5_amc@nlo 2.3.3 [26] event generator at leading order (LO). The resonance is forced to decay into a pair of Higgs bosons, which in turn decay into bb. The parton distribution function (PDF) set NNPDF3.0 [27] with LO accuracy is used. The showering and hadronization of partons are simulated with pythia 8.212 [28]. During the 2016 data-taking period, the average number of pp interactions per bunch crossing was approximately 23. The simulated samples include these additional pp interactions, referred to as pileup interactions (or pileup), that overlap with the event of interest in the same bunch crossing. Simulated events are weighted to match the number of pp interactions per event in data. 
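The pileup reweighting just mentioned can be illustrated schematically: the weight for a simulated event is the data/simulation ratio of the normalized distributions of the number of interactions. The sketch below uses placeholder inputs (arrays of per-event interaction counts); in practice CMS derives the data profile from the measured luminosity, so this is an illustration of the idea rather than the actual procedure.

```python
import numpy as np

def pileup_weight_function(n_pu_data, n_pu_mc, max_pu=75):
    """Return a function giving the per-event pileup weight for simulation.

    n_pu_data, n_pu_mc: arrays with the number of pp interactions per event
    for data and simulation (placeholder inputs). The weight is the ratio of
    the normalized distributions, evaluated at the event's interaction count.
    """
    bins = np.arange(max_pu + 1)
    data_hist, _ = np.histogram(n_pu_data, bins=bins, density=True)
    mc_hist, _ = np.histogram(n_pu_mc, bins=bins, density=True)
    ratio = np.divide(data_hist, mc_hist,
                      out=np.ones_like(data_hist), where=mc_hist > 0)
    return lambda n: float(ratio[int(n)]) if 0 <= n < max_pu else 1.0
```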
Event reconstruction The particle-flow (PF) algorithm [29] is used to reconstruct and identify each individual particle in an event with an optimized combination of information from the various elements of the CMS detector. The algorithm identifies each reconstructed particle (PF candidate) as an electron, a muon, a photon, or a charged or neutral hadron. The reconstructed vertex with the largest value of summed physics-object transverse momentum squared (p_T^2) is taken to be the primary pp interaction vertex. The physics objects are the jets, clustered using the jet finding algorithm [30,31] with the tracks assigned to the vertex as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the p_T of those jets. This vertex is used for all the objects in the event reconstructed with the PF algorithm. Jets are reconstructed from PF candidates using the anti-k_T clustering algorithm [30], with a distance parameter of 0.4, as implemented in the FastJet package [31,32]. Jet identification criteria are also applied to reject jets originating from detector noise. The average neutral energy density from pileup interactions is evaluated from PF objects and subtracted from the reconstructed jets [33]. Jet energy corrections are derived from the simulation, and are confirmed with in situ measurements of the energy balance in dijet and photon+jet events [34]. Jets are identified as originating from b quarks ("b jets") using the DeepCSV [35] discriminator, a new b tagging algorithm based on a deep neural network with four hidden layers [36]. The DeepCSV discriminator employs the same set of observables as those used by the combined secondary vertex (CSV) algorithm [35,37], except that the track selection is expanded to include up to six tracks, further improving the b jet identification. The operating point chosen corresponds to a 1 (12)% rate for misidentifying a light-flavor (c-flavor) jet as a b jet. The b tagging efficiency for jets with p_T in the 30-150 GeV range is approximately 69% and gradually decreases for lower and higher jet p_T [35]. Event selection The search for a narrow-width X → H(bb)H(bb) resonance is performed for mass values in the 260-1200 GeV range. The angular distributions for the decay products of such a resonance vary substantially over this range. In order to increase the sensitivity of this search, different criteria are used for events in two distinct mass regions: the low-mass region (LMR), for resonance masses from 260 to 620 GeV, and the medium-mass region (MMR), for masses from 550 to 1200 GeV. The boundary between the LMR and the MMR is at 580 GeV. It has been chosen by optimizing for the expected sensitivity and takes into account the uncertainties associated with the background modeling. The mass range above 1200 GeV (high-mass region) is not covered by this search. Above 900 GeV, the Higgs bosons have a momentum considerably higher than their mass and the Higgs to bb decays are reconstructed more efficiently as single hadronic jets with a larger anti-k_T distance parameter (0.8) [38]. Events are selected online by combining two different trigger selections to identify b jets, both using the CSV algorithm. For the first trigger selection, four jets with p_T > 30 GeV and |η| < 2.4 are required. The latter requirement ensures that the jet lies within the tracker acceptance. Of those four jets, two are required to have p_T > 90 GeV and at least three jets are required to be tagged as b jets. The second trigger selection requires four jets with p_T > 45 GeV and at least three of those jets identified as b jets. Events are selected offline by requiring at least four b tagged jets with p_T > 30 GeV and |η| < 2.4. The selected jets are combined into pairs to form two Higgs boson candidates with masses m_H1 and m_H2. For the LMR, HH candidates are chosen from the four selected jets such that |m_H − 120 GeV| < 40 GeV for each candidate Higgs boson. For the MMR, the H candidates are selected by requiring ∆R = √((∆η)^2 + (∆φ)^2) < 1.5, where ∆η and ∆φ are the differences in the pseudorapidities and azimuthal angles (in radians) of the two jets. In the two-dimensional space defined by the reconstructed masses of the two Higgs boson candidates, H_1 and H_2, a circular signal region (SR) is defined by R < 1, where R = √((m_H1 − M)^2 + (m_H2 − M)^2) / r. The central mass value (M) is the average of the means of the m_H1 and m_H2 distributions for simulated signal events, and the parameter r is set to 20 GeV. 
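The two geometric quantities used in this selection, ΔR and the signal-region variable R (in the form reconstructed above), can be computed with small helpers; the function names and the default center value are illustrative, and the 120/125 GeV centers are quoted in the text that follows.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Delta R = sqrt(d_eta^2 + d_phi^2), with d_phi wrapped into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def sr_distance(m_h1, m_h2, center=120.0, r=20.0):
    """R for the circular signal region: distance of (m_H1, m_H2) from the SR
    center in units of r = 20 GeV; the event lies in the SR when R < 1.
    The 120 GeV default corresponds to the LMR center; the MMR uses 125 GeV."""
    return math.hypot(m_h1 - center, m_h2 - center) / r
```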
The centers of these circular regions have been determined separately for the LMR and MMR and found to be 120 and 125 GeV, respectively. If there are multiple HH candidates in an event, the combination that minimizes R^2 is used. After these event selection criteria are applied, the dijet invariant mass resolution for m_H1 and m_H2 is approximately 10-13%, depending on the p_T of the reconstructed Higgs boson, with a few percent shift in the value of the mass peak relative to 125 GeV. The Higgs boson mass resolution is further improved by applying multivariate regression techniques similar to those used in the searches for SM Higgs bosons decaying to bb in CMS [39,40]. The regression estimates a correction that is applied after the standard CMS jet energy corrections [34,41], and it is computed for individual b jets to improve the accuracy of the measured energy with respect to the b quark energy. To this end, a specialized boosted decision tree [42] is trained on simulated b jets from tt events, with inputs that include observables related to the jet structure and b tagging information. The average improvement in the Higgs boson mass resolution, measured with simulated signal samples, is 6-12%, depending on the p_T of the reconstructed Higgs boson. The use of the regression technique increases the sensitivity of the analysis by 5-20% depending on the mass hypothesis. The regression technique is validated with data samples of Z → (ee, µµ) events with two b tagged jets and in tt-enriched samples [39]. The cumulative selection efficiencies of the selection criteria described above for the graviton and radion signal benchmarks are reported in figure 1. The reconstructed resonance mass (m_X), computed as the invariant mass of H_1 and H_2, is displayed for simulated signal events with different mass hypotheses in figure 2. In order to improve the resolution for the resonance mass, the momenta of the reconstructed b quarks are corrected by constraining the invariant mass of the Higgs boson candidates to be 125 GeV. Since the jet direction is reconstructed with better resolution than the jet p_T, this constraint mainly benefits the latter. The improvement in resolution for the reconstructed signal resonance ranges from 20 to 40% depending on the mass hypothesis, resulting in an improvement of the sensitivity by 10-20%. The shift in the reconstructed m_X due to these corrections is also shown in figure 2. The mass shift is linear in m_X and is caused by the asymmetry of the corrections, due to the jet momentum resolution, across the p_T range considered. 
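A sketch of the "combination that minimizes R²" rule for pairing the four selected b jets into two Higgs candidates is given below; the four-vector interface and function names are simplifications for illustration, not the CMS reconstruction code.

```python
import math

def inv_mass(j1, j2):
    """Invariant mass of two jets given as (E, px, py, pz) four-vectors in GeV."""
    e = j1[0] + j2[0]
    px, py, pz = j1[1] + j2[1], j1[2] + j2[2], j1[3] + j2[3]
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def best_hh_pairing(jets, center=120.0, r=20.0):
    """Among the three ways to split four b jets into two pairs, return the
    pairing with the smallest R^2 = [(m_H1-center)^2 + (m_H2-center)^2] / r^2."""
    pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
    def r_squared(pairing):
        (a, b), (c, d) = pairing
        m1 = inv_mass(jets[a], jets[b])
        m2 = inv_mass(jets[c], jets[d])
        return ((m1 - center) ** 2 + (m2 - center) ** 2) / r ** 2
    return min(pairings, key=r_squared)
```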
This occurs in about 30% of the events for the lowest resonance mass hypothesis, decreasing to about 5% at the highest resonance mass hypothesis. Five parameters are required: the mean and width of the two Gaussian functions and the ratio of their integrals. Figure 2. The m_X distribution for simulated signal events (spin-2 bulk KK-graviton) after the event selection criteria for the 450, 750, and 1000 GeV mass hypotheses, with and without the correction obtained by constraining m_H (kinematic constraint) and the specific b jet energy corrections (regression). The distributions are normalized so that the area under the curve for each mass is the same. In the MMR, the signal is modeled with a function that has a Gaussian core smoothly extended on both sides via exponential tails. This requires two parameters for the mean and width of the Gaussian function, and two parameters for the exponential tails [15]. The parametric function used to model the background m_X distribution is obtained from control regions in data in which events are expected to have kinematic properties similar to events in the SR. These control regions are: (i) sideband regions (SB), defined as shown in figure 3, and (ii) events in the SR and both SB regions that do not qualify as signal events because one of the four jets used to reconstruct m_X is not identified as a b jet (anti-tag selection). For the LMR selection, the m_X distributions in all control regions exhibit a kinematic turn-on, due to the trigger requirements, followed by a smoothly falling distribution at larger m_X values. The m_X distributions in all control regions for the MMR selection exhibit a similar smoothly falling shape with increasing m_X. This feature allows the adoption of a common form of the parametric background model in the SR and control regions. The parameters of the background model are determined by the fit used to extract a possible signal. In the following we describe the derivation of the background model from the different control regions. For the LMR, it is difficult to model simultaneously the turn-on and the tail of the m_X distribution. Thus, the modeling of the background is split into two ranges, LMR1 and LMR2, with two different parametric models to accommodate either a turn-on shape or a falling distribution, respectively. An indirect method of determining the boundary between LMR1 and LMR2 is used to avoid revealing the m_X distribution in the SR before fitting for a possible signal. The method uses two selections in the SR and SB, as described in table 1. For the background, the shape of the m_X distribution in the SR (A) is predicted from the shape of the m_X distribution in the SR with the anti-tag selection (C), reshaped by the ratio, as a function of m_X, between the m_X distribution in the SB with the nominal selection (B) and the m_X distribution in the SB with the anti-tag selection (D), so that A = CB/D. To test the validity of this method, new SR and SB regions, centered at 150 GeV, are defined along the diagonal in figure 3 in a similar fashion to those centered at 120 GeV. In that case the prediction E = GF/H for the m_X distribution in the SR region centered at 150 GeV can be tested by directly comparing it to the observed m_X distribution in that region. Figure 4 shows the agreement between this prediction and the observed distribution, validating the use of this method for the SR distribution centered at 120 GeV. 
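The A = CB/D prediction can be written down directly for binned m_X distributions; the sketch below is a simplified histogram version of the reweighting (the analysis itself works with parametric fits), and the argument names are chosen here for illustration.

```python
import numpy as np

def predict_sr_shape(antitag_sr, nominal_sb, antitag_sb):
    """Predicted background m_X shape in the SR: A = C * (B / D).

    antitag_sr (C): binned m_X distribution in the SR with the anti-tag selection.
    nominal_sb (B): binned m_X distribution in the SB with the nominal selection.
    antitag_sb (D): binned m_X distribution in the SB with the anti-tag selection.
    All three histograms must share the same binning; bins with D = 0 get weight 0.
    """
    nominal_sb = np.asarray(nominal_sb, dtype=float)
    antitag_sb = np.asarray(antitag_sb, dtype=float)
    ratio = np.divide(nominal_sb, antitag_sb,
                      out=np.zeros_like(nominal_sb), where=antitag_sb > 0)
    return np.asarray(antitag_sr, dtype=float) * ratio
```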
The boundary between LMR1 and LMR2 is then set at m_X = 310 GeV, by optimizing for the expected sensitivity. The signal and background are evaluated independently in LMR1 = [260, 310] GeV, LMR2 = [310, 580] GeV and the MMR = [580, 1200] GeV. For LMR1, a function with a Gaussian core smoothly extended to an exponential tail on the high side is fitted to the m_X distribution [15], while the function defined in eq. (9) of ref. [43] is used in LMR2 and the MMR. This function was originally used to describe a Compton spectrum and has three free parameters describing the mean, the width and the extent of the right-side tail. In each case, the goodness of the fit, characterized by the χ^2 per degree of freedom, is found to be reasonable. The fit of the background model to the SB for the MMR is shown in figure 5. In the absence of a theoretical prediction for the background model, an alternative model based on the Crystal Ball function [44] has been studied, and the difference between the two drives the assessment of a systematic uncertainty due to the choice of the particular model. Pseudo-datasets are generated from the alternative function and fitted with the nominal functions to compute biases in the reconstructed signal strength. This procedure is performed for each mass hypothesis, and the corresponding biases range between 30 and 80 fb for the LMR and between 0.1 and 4 fb for the MMR. The SM top quark pair production in the SR is estimated from simulation to contribute up to 10 and 15% of the selected events in the LMR and the MMR, respectively. Since the tt contribution to the background exhibits a shape very similar to that of the multijet process, it is implicitly included in the data-driven estimate. Systematic uncertainties Sources of systematic uncertainties that affect the signal yields are listed in table 2. The signal yield for a given production cross section is affected by a 2.5% systematic uncertainty in the measurement of the integrated luminosity at CMS [45]. The uncertainty in the signal normalization caused by the choice of the PDF set [46] contributes up to 3.5%. The jet energy scale [34,41] is varied within one standard deviation as a function of jet p_T and η, and the efficiency of the selection criteria is recomputed. The signal efficiencies are found to vary by up to 2.9%. The effect of the uncertainty in the jet energy resolution [34,41] is evaluated by smearing the jet energies according to the measured uncertainty. This causes variations in the signal efficiency of between 0.9 and 2.1%. The trigger efficiency is evaluated in a tt-enriched control sample in which an isolated muon is required in addition to the four b tagged jets with p_T > 30 GeV and |η| < 2.4 demanded in the analysis. The efficiency of each online kinematic and b tagging requirement is evaluated separately, and then the efficiency of the full trigger selection is computed in terms of conditional probabilities. The associated systematic uncertainties are found to impact the signal efficiencies at the level of 5-9%. Signal yields are corrected to match the b tagging efficiency measured in data [35]. The associated uncertainty is evaluated to be about 6-8%. The systematic uncertainty associated with the choice of the parametric background model is evaluated as described in section 5. 
We account for the bias as a signal-shaped systematic uncertainty in the background model, with normalization centered at zero and a Gaussian uncertainty with standard deviation equal to the bias. The impact on the expected limit ranges between 0.3 and 1.5%. Table 2. Impact of systematic uncertainties (in %) on the signal efficiencies in the LMR and the MMR: jet energy resolution, 0.9-2.1 (LMR) and 1.0-1.5 (MMR); b tagging scale factor, 6.5-6.9 and 6.9-8.6; trigger efficiency, 6.4-9.0 and 5.3-7.0; PDF, 1.5-2.2 and 2.1-3.5. Results The m_X distribution for data in the SR and the results of the fit to the parametric background model are shown in figure 6 for the LMR1 and the LMR2, and in figure 7 for the MMR. Parameters controlling the shapes and yields of the signal are allowed to float within ranges determined by the systematic uncertainties. The parameters and normalization of the multijet background shape are left to float freely in the fit. The shapes of the background-only fit are found to adequately describe the data in each of the search regions. The observed and expected upper limits on the cross section times branching fraction for pp → X → H(bb)H(bb) at 95% confidence level (CL) are computed using the modified frequentist CL_s method [47][48][49][50]. These limits are shown in figures 8 and 9 for the spin-2 and spin-0 hypotheses, respectively. The green and yellow bands represent the ±1 and ±2 standard deviation confidence intervals around the expected limits. The observed limits in these figures exhibit several deviations from the expected ones beyond ±2 standard deviations. The local p-value of the most significant excess (deficit) is 2.6 (−3.6) standard deviations. In order to estimate the global significance of these deviations within the search range, we estimated, based on pseudo-experiments, the global probability to see three (four) positive or negative deviations from the expected limits, consistent with narrow positive or negative signal contributions, with the quadrature sum of the significances exceeding 5.1 (5.6) standard deviations, as observed in data. In both cases, the global probability was found to be at the percent level, consistent with a statistical origin of the observed deviations. In figure 8, the NLO theoretical cross section for the gluon fusion production of a bulk KK-graviton with κl = 35 and κ = 0.5 M_Pl is shown. This KK-graviton is excluded at 95% CL in the mass ranges of 320-450 and 480-720 GeV. In figure 9, the NLO theoretical cross section for the gluon fusion production of a radion with decay constant Λ = 3 TeV is shown. Such a radion is excluded at 95% CL in the mass ranges of 260-280, 300-450, and 480-1120 GeV. Figure 9. The observed and expected upper limits on the cross section for a spin-0 resonance X → H(bb)H(bb) at 95% CL, using the asymptotic CL_s method. The theoretical cross section for the production of a radion, with Λ = 3 TeV, κl = 35, and no radion-Higgs boson mixing, decaying to four b jets via Higgs bosons is overlaid. The transition between the LMR and the MMR is based on the expected sensitivity, resulting in the observed discontinuity. 
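The pseudo-experiment-based global probability quoted above can be illustrated with a deliberately simplified toy: draw independent standard-normal local significances for a set of mass points, keep those beyond a threshold, and count how often their quadrature sum exceeds the observed value. The number of mass points and the 2σ threshold below are placeholders, and the real procedure must also account for correlations between neighboring mass hypotheses, which this toy ignores.

```python
import numpy as np

def toy_global_p_value(observed_quad_sum, n_mass_points=50, threshold=2.0,
                       n_toys=100_000, seed=7):
    """Fraction of toy experiments whose quadrature sum of local significances
    beyond +/- threshold exceeds the observed value (placeholder settings)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_toys, n_mass_points))
    kept = np.where(np.abs(z) > threshold, z, 0.0)
    quad_sum = np.sqrt((kept ** 2).sum(axis=1))
    return float((quad_sum >= observed_quad_sum).mean())

# e.g. toy_global_p_value(5.1) or toy_global_p_value(5.6)
```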
Summary A search for a narrow-width resonance decaying into two Higgs bosons, each decaying into a bottom quark-antiquark pair, is presented. The search is performed using proton-proton collision data corresponding to an integrated luminosity of 35.9 fb −1 at √s = 13 TeV recorded by the CMS detector at the LHC. No evidence for a signal is observed, and upper limits at 95% confidence level on the production cross section for spin-0 and spin-2 resonances in the mass range from 260 to 1200 GeV are set. These cross-section limits are translated into an exclusion at 95% confidence level of a bulk KK-graviton (with κl = 35 and κ = 0.5 M_Pl) in the mass ranges of 320-450 GeV and 480-720 GeV. The corresponding excluded mass ranges for a radion (with decay constant Λ = 3 TeV) are 260-280 GeV, 300-450 GeV and 480-1120 GeV. This analysis outperforms a similar search by CMS using 17.9 fb −1 collected at 8 TeV [15] and extends the sensitivity to the gluon fusion production of a radion with decay constant Λ = 3 TeV and to a bulk graviton with κ set to 0.5 M_Pl. Acknowledgments We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria), among others. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Longitudinal Surveillance for SARS-CoV-2 Among Staff in Six Colorado Long-Term Care Facilities: Epidemiologic, Virologic and Sequence Analysis Background: SARS-CoV-2 emerged in 2019 and has become a major global pathogen. Its emergence is notable due to its impacts on individuals residing within long term care facilities (LTCFs) such as rehabilitation centers and nursing homes. LTCF residents tend to possess several risk factors for more severe SARS-CoV-2 outcomes, including advanced age and multiple comorbidities. Indeed, residents of LTCFs represent approximately 40% of SARS-CoV-2 deaths in the United States. Methods: To assess the prevalence and incidence of SARS-CoV-2 among LTCF workers, determine the extent of asymptomatic SARS-CoV-2 infection, and provide information on the genomic epidemiology of the virus within these unique care settings, we collected nasopharyngeal swabs from workers for 8–11 weeks at six Colorado LTCFs, determined the presence and level of viral RNA and infectious virus within these samples, and sequenced 54 nearly complete genomes. Findings: Our data reveal a strikingly high degree of asymptomatic/mildly symptomatic infection, a strong correlation between viral RNA and infectious virus, prolonged infections and persistent RNA in a subset of individuals, and declining incidence over time. Interpretation: Our data suggest that asymptomatic SARS-CoV-2 infected individuals contribute to virus persistence and transmission within the workplace, due to high levels of virus. Genetic epidemiology revealed that SARS-CoV-2 likely spreads between staff within an LTCF. Funding: Colorado State University Colleges of Health and Human Sciences, Veterinary Medicine and Biomedical Sciences, Natural Sciences, and Walter Scott, Jr. College of Engineering, the Columbine Health Systems Center for Healthy Aging, and the National Institute of Allergy and Infectious Diseases. Abstract. SARS-CoV-2 emerged in 2019 and has become a major global pathogen in an astonishingly short period of time. The emergence of SARS-CoV-2 also has been notable due to its impacts on individuals residing within skilled nursing facilities (SNFs) such as rehabilitation centers and nursing homes. SNF residents tend to possess several risk factors for the most severe outcomes of SARS-CoV-2 infection, including advanced age and the presence of multiple comorbidities. Indeed, residents of long-term care facilities represent approximately 40 percent of US SARS-CoV-2 deaths. To assess the prevalence and incidence of SARS-CoV-2 among SNF workers, determine the extent of asymptomatic infection by SARS-CoV-2, and provide information on the genomic epidemiology of the virus within these unique care settings, we sampled workers weekly at five SNFs in Colorado using nasopharyngeal swabs, determined the presence of viral RNA and infectious virus among these workers, and sequenced 48 nearly complete genomes. This manuscript reports results from the first five to six weeks of observation. Our data reveal a strikingly high degree of asymptomatic infection, a strong correlation between RNA detection and the presence of infectious virus in NP swabs, persistent RNA in a subset of individuals, and declining incidence over time. Our data suggests that asymptomatic individuals infected by SARS-CoV-2 may contribute to virus transmission within the workplace. Introduction The COVID-19 pandemic has resulted in disproportionally high morbidity and mortality among residents in skilled nursing facilities (SNFs). 
As of June 2, 2020, the Centers for Medicare and Medicaid Services reported over 30,000 deaths due to COVID-19 in long-term care facilities in the US, representing 42% of COVID-19-related US deaths (Nursing Home COVID-19 Public File, Data.CMS.gov). In six states (Delaware, Massachusetts, Oregon, Pennsylvania, Colorado, and Utah), deaths in long-term care facilities accounted for over 50% of all COVID-19 deaths. The high burden of COVID-19 within SNFs is principally due to the risk profile of many residents, which includes advanced age and the presence of severe comorbidities (1). Accordingly, strategies to mitigate SARS-CoV-2 transmission to SNF residents have included restrictions on visitation, cessation of group activities and dining, and confinement to individual living quarters. While SNF residents are largely isolated, SNF employees are permitted to enter resident rooms provided they have passed a daily screening process for fever, COVID-19 respiratory symptoms, or known exposure. However, a significant fraction of individuals infected with SARS-CoV-2, the causative agent of COVID-19, have a lengthy latency period prior to exhibiting COVID-19 symptoms, and many remain asymptomatic throughout the course of infection (2,3). Pre-symptomatic and asymptomatic SNF workers are a potential source of unrecognized transmission within SNFs and are thus an attractive focus for interventions directed at suppressing transmission within these facilities. To date, there have been no studies focused on longitudinal surveillance of asymptomatic workers within skilled nursing facilities. Therefore, we assessed SARS-CoV-2 infection among employees at five SNFs in Colorado. Workers were enrolled into the study and sampled by nasopharyngeal (NP) swab weekly for five or six consecutive weeks. Swabs were assayed for virus infection by qRT-PCR and plaque assay, and individuals with evidence of infection were instructed to self-quarantine for ten days. Using data on worker infection, site-specific prevalence at study onset and incidence rates over time were calculated. Viral genomes also were sequenced to assess viral genetic diversity within and between SNFs. Our results document a surprising degree of asymptomatic infection among healthy workers, and extreme variation in the prevalence and incidence of infections between different SNFs. We observed that the median number of consecutive positive weekly tests was two, indicating that RNA was present in the nasopharynx of most infected individuals for at least eight days; however, some individuals had viral RNA in their nasopharynx for over five weeks. A small number of individuals had RNA reappear in the nasopharynx after apparent clearance. Sequencing studies lend limited support to the observation that transmission may occur within SNFs and, combined with the epidemiologic and other data provided here, highlight the importance of testing and removing positive workers from contact with vulnerable SNF residents. Data obtained from longitudinal surveillance studies such as this ongoing effort will provide crucial information about infectious disease transmission dynamics within complex workforces and inform best practices for preventing or mitigating future COVID-19 outbreaks within SNFs. Materials and Methods. Study sites. Five SNFs in Colorado were chosen to participate in the SARS-CoV-2 surveillance project. Weekly nasopharyngeal (NP) swabs were collected for a five- to six-week period from 454 consented individuals. 
Participants were asked to provide their job code but were otherwise de-identified to the investigators. This study was reviewed and approved by the Colorado State University IRB under protocol number 20-10057H. Participants provided consent to participate in the study and were promptly informed of test results and, if positive, instructed to self-isolate for a period of ten days. Return to work also required absence of fever or other symptoms for the final three days of isolation. qRT-PCR. One-step reverse transcription (RT) and PCR reaction was performed using the EXPRESS One-Step SuperScript qRT-PCR Kit (ThermoFisher Scientific) in a 20ul final reaction volume per the manufacturer's instructions. Primer/probe sets for SARS-CoV-2 are as described elsewhere [(4) and CDC diagnostic testing guidelines: https://www.fda.gov/media/134922/download) and were obtained from IDT. A primer/probe set for human RNase P transcript served as a control for RNA quality (not shown). RNA standards for SARS-CoV-2 nucleocapsid (N) and envelope (E) were kindly provided by Dr. Nathan Grubaugh, and served as positive controls. 96-well PCR plates were prepared on ice and centrifuged at 1282 RCF for 2min at 4°C. Plates were run on a QuantStudio3 using the following cycling conditions: Reverse transcription at 50°C for 15 minutes, followed by a single inactivation step (95°C for 3 minutes); 40 cycles alternating between 95°C for 5 seconds and 60°C for 30 seconds completed the reaction. Specimens with a cycle threshold (CT) less than 38 were considered positive. Samples were initially screened with an N1 primer/probe set as described in the US CDC diagnostic guidelines. If a positive or inconclusive result was obtained, the sample was retested with both N2 and E primer/probe sets (4). Specimens positive by two or more primer sets were considered RNA positive for SARS-CoV-2. Plaque assay for infectious virus. Plaque assays were performed on African Green Monkey Kidney (Vero) cells (ATCC CCL-81) according to standard methods (5). Briefly, 250 uL of qRT-PCR positive specimen was inoculated into nearly confluent cell monolayers. After incubation, cells were provided with a tragacanth semi-solid overlay, and fixed and stained after two days of incubation with 30% ethanol and 0.1% crystal violet. Plaques were counted manually. Incidence Estimation. The rate at which workers acquired infections was estimated as the number of new infections per 100 workers per week at each facility and was estimated for weeks 2 through 6. A worker was classified as having an incident infection if they tested positive for the first time following a negative test one-week prior (or two weeks prior if they were not surveyed one-week prior) and if they had not previously tested positive for SARS-CoV-2 in our surveys. The population at risk includes all workers who had not yet been infected and with a negative test in the past week (or two weeks prior if not tested the prior week). Next-generation sequencing library preparation for positive samples. Viral RNA from positive patient samples was prepared for next-generation sequencing. Briefly, cDNA was generated using SuperScript IV Reverse Transcriptase enzyme (Invitrogen) with random hexamers. PCR amplification was performed using ARTIC network (https://artic.network/)V2 tiled amplicon primers in two separate reactions by Q5 High-fidelity polymerase (NEB) essentially as previously described (6). First-round PCR products were purified using Ampure XP bead (Beckman Coulter). 
Libraries were prepared using the Nextera XT Library Preparation Kit (Illumina) according to the manufacturer's protocol. Unique Nextera XT i7 and i5 indexes for each sample were incorporated to generate dual-indexed libraries. Indexed libraries were again purified using AMPure XP beads (Beckman Coulter). Final libraries were pooled and analyzed for size distribution using the Agilent High Sensitivity D1000 ScreenTape on the Agilent TapeStation 2200, and final quantification was performed using the NEBNext® Library Quant Kit for Illumina® (NEB) according to the manufacturer's protocol. Libraries were then sequenced on the Illumina MiSeq V2 using 2 x 250 paired-end reads. Deep sequencing analysis. Next-generation sequencing data were processed to generate consensus sequences for each viral sample. MiSeq reads were demultiplexed and quality checked with FastQC; paired-end reads were processed to remove Illumina adapters and quality trimmed with Cutadapt, and duplicate reads were removed. Remaining reads were aligned to the SARS-CoV-2 reference sequence (GenBank: MT020881.1) with Bowtie2. Alignments were further processed and quality checked using Geneious software; consensus sequences were determined, and any gaps in sequences were filled in with the reference sequence or a cohort-specific consensus sequence. Consensus sequences were aligned in Geneious, and a neighbor-joining tree was generated with the reference sequence as an outgroup and 1000 bootstrap replicates. Results. SARS-CoV-2 prevalence and incidence in five SNFs. Employees at five SNFs throughout Colorado were tested weekly for SARS-CoV-2 viral RNA (vRNA) for a total of five or six weeks via NP swab. Staff included nursing, administrative, maintenance, and other professions. A mean of 75 individuals per facility were tested weekly (range 29-115), with varying viral RNA levels within NP swabs (Fig. 1A). The percentage of NP swabs that tested positive for viral RNA each week varied considerably by facility but showed a general downward trend over the course of the study period (Fig. 1B). SARS-CoV-2 infection prevalence during the first week of testing and the incidence of infections in subsequent weeks also varied widely between facilities (Fig. 1C and Table A1). Staff at Site A remained uninfected throughout the entire six-week study period. In contrast, 22.5% of workers at Site D had prevalent infections at the start of the study, and incidence was high initially (12.2 per 100 workers per week), declining over time. At Site C, initial infection prevalence was lower (6.9%) and the incidence declined to zero by week 3. However, two facilities with low prevalence in week 1 (Sites B and E) saw an increase in cases, including, at Site B, incident infections detected after four weeks of no infections. Infections were observed in workers across all job types, including roles with typically high patient contact (e.g., nursing) and low patient contact (e.g., maintenance) (Table A2). Infectious SARS-CoV-2 in nasopharyngeal swabs. All NP swabs with detectable SARS-CoV-2 N1 vRNA were assayed for N2- and E-containing viral transcripts and evaluated for the presence of infectious virus by plaque assay (Fig. 2).
We observed high concordance between SARS-CoV-2 viral RNA levels regardless of the genome region assayed (N1, N2, or E) (Fig. 2A). The N1 viral RNA level was positively correlated with the amount of infectious virus in swab material (Fig. 2B; least-squares linear regression, R² = 0.7885), demonstrating that the virus within these individuals was infectious. Levels of viral RNA tend to decline over the duration of infection and correspond to low levels of infectious virus. Within the study period, incident infections varied in length from one to four weeks (Fig. 3A-D), as determined by detection of viral RNA via qRT-PCR for the SARS-CoV-2 N1 gene. Levels of viral RNA were generally highest during the first week of infection and declined in subsequent weeks (Fig. 3F). Infectious virus was detected in individuals with high levels of viral RNA and also declined over the course of infection. In general, infectious virus was not detected in individuals with less than 100,000 N1 RNA copies/swab (Figs. 2B and 3). Six individuals exhibited two positive tests separated by a period of negative tests (Fig. 3E). In these individuals, initial infections were typically followed by a period of 1-2 weeks during which viral RNA was undetectable. Viral RNA was then detected a second time, usually for just one week. These resurgences in viral RNA were normally associated with no, or very low, levels of infectious virus. RNA quality was evaluated for the interim negative tests and was found to be within acceptable parameters (not shown). SARS-CoV-2 sequencing. Forty-eight partial genome sequences were obtained over the first two weeks of observation. Mean genome coverage was 29,268 nt (range, 27,656 to 29,831) and mean coverage depth was 621 reads per position (range, 376 to 2,138). Gaps in sequencing alignment due to ARTIC V2 primer incompatibilities were filled in with the reference strain MT020881.1 pending additional sequencing. Once complete, these sequences will be deposited into NCBI. These 48 sequences were aligned to a reference strain from early in the US outbreak and to four strains collected from Colorado, and a neighbor-joining (NJ) tree was generated. The tree was reasonably clearly resolved into a number of clusters with moderate bootstrap support (i.e., greater than 50%). These included two major clusters that were composed exclusively of sequences obtained from individuals employed at the same SNF (Fig. 4). Thirty-six sequences derived from 31 individuals from Site D formed a single cluster apparent in the lower part of the tree. Five sequences from four individuals from Site C similarly clustered in our preliminary analysis. In contrast, the remaining seven sequences from six individuals did not tend to associate with others from the same facility. Three different facilities are represented in this group of sequences. Finally, we sequenced SARS-CoV-2 from ten individuals on two successive weeks. In general, sequences from the same individuals were identical to, or very closely related to, those collected previously from that individual (e.g., C2980_1 and C2980_2). Some evidence for mutation accumulation was detected in, for example, C2673_1 and C2573_2, as well as D1882_1 and D1882_2. Discussion.
SNFs, including nursing homes, residential treatment facilities, and other long-term care providers, are increasingly recognized as key venues for SARS-CoV-2 transmission due to the vulnerable populations that tend to inhabit them. Due to their disproportionate contribution to the burden of COVID-19 mortality, they also represent an attractive target for surveillance testing and interventions that may include removing SARS-CoV-2-positive staff from the workplace. Therefore, we longitudinally sampled asymptomatic workers at five SNFs in Colorado to determine the proportion of workers at these facilities who had SARS-CoV-2 RNA in their nasopharynx, and we continued weekly testing of positive individuals as they self-isolated for ten days. Return to work also required absence of fever for the final three days of isolation, without antipyretic use. Individuals who continued to test positive after two weeks were notified and recommended to continue self-isolation until a negative test result was returned. Our data clearly demonstrate the potential for large numbers of workers at SNFs to be asymptomatically infected and for the concentration of infections to vary widely across facilities. One facility never had a single worker test positive, while others had up to 22.5% of workers with SARS-CoV-2 RNA during the first week of surveillance. Infections varied considerably over time. The steady declines in the incidence of infections in staff in the two facilities with the highest initial infection prevalence are encouraging and hint at the potential impact of worker screening programs. However, the detection of incident infections at Facility B after four weeks of negative tests underscores the ongoing threat of infections in worker populations. Notably, participation in our sampling scheme was high, with approximately 85% of workers from each facility being sampled each week. These results clearly demonstrate that asymptomatically infected workers may be common in particular SNFs. Because qRT-PCR detects viral RNA, not infectious virus, it may be that RNA-positive workers are not infectious to others, despite high levels of viral RNA. This could be attributable to the presence of free RNA (i.e., RNA that is not packaged into virus particles) or to antibodies within the mucosa that neutralize virus infectivity. Therefore, we tested NP swab samples for the presence of infectious virus via plaque assay. Importantly, we found that viral RNA was strongly positively correlated with infectious virus. In samples with high levels of viral RNA (N1 CT < 30), infectious virus tended to be present, whereas samples with lower viral RNA levels often had undetectable levels of infectious virus. Because plaque assays have lower sensitivity than qRT-PCR, it is unsurprising that samples with fewer than ~1000 RNA copies tended to have undetectable levels of infectious virus. Moreover, our data support the observation that asymptomatic workers can harbor high levels of infectious virus within their mucosa and may therefore contribute to transmission of SARS-CoV-2.
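The RNA-infectivity relationship discussed above is summarized in the Results as a least-squares regression of infectious titer against N1 RNA level (R² = 0.7885). The sketch below shows one way to compute such a fit; whether the reported regression used raw or log-transformed values is not stated in this excerpt, so the log-log choice, the function name, and the example arrays are assumptions for illustration only.

```python
import numpy as np

def log_log_fit(rna_copies, pfu):
    """Least-squares linear fit of log10(PFU) against log10(N1 RNA copies),
    returning slope, intercept, and R^2 (cf. the R^2 ~ 0.79 reported above)."""
    x = np.log10(np.asarray(rna_copies, dtype=float))
    y = np.log10(np.asarray(pfu, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    r_squared = 1 - residuals.var() / y.var()
    return slope, intercept, r_squared

if __name__ == "__main__":
    # Placeholder values for qRT-PCR-positive swabs with detectable plaques
    rna = [2e5, 8e5, 3e6, 1e7, 6e7]
    pfu = [5, 30, 90, 400, 2500]
    print(log_log_fit(rna, pfu))
```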
The longitudinal design of this study permitted characterization of asymptomatic individuals over time, including several who were vRNA and/or plaque assay positive for one, two, three, or four consecutive weeks. We also observed individuals who were vRNA positive, then negative, then again became vRNA positive. While it is possible that these individuals were re-infected with SARS-CoV-2 after clearing their initial infection, we find that unlikely (7). Instead, this phenomenon may be due to host factors that led to suppression of viral replication in the nasopharynx, or to an NP swab that failed to capture virus. It is also unlikely that the intervening negative tests in these individuals were due to poor RNA quality, because all samples were tested for human RNase P (CDC diagnostic guidelines) and levels were comparable across all samples. Sequencing of the viruses from these individuals will help determine the likelihood of re-infection versus host factor activity. Sequencing of virus genomes also provided insights into SARS-CoV-2 transmission in our study population. Our data encompass a sample of 48 genomes obtained during the first two weeks of observation (Site D is most highly represented because it had the highest number of SARS-CoV-2 cases during the first two weeks). Sequences from our study were compared to a strain sequenced during the early phase of the COVID-19 outbreak in the US and to the four other SARS-CoV-2 sequences currently available from Colorado. The most notable feature of the phylogenetic tree is the fairly clear and consistent clustering of virus sequences by facility. This type of clustering could be due to transmission within staff at the facility or to a shared community source outside of the workplace. For example, it may be that workers at these facilities socialize frequently outside of work or reside in close proximity, and that transmission occurred during non-work-related activities. Sampling in the workplace would therefore represent the distribution of genomes in the community and not work-related transmission. While we cannot rule out this possibility, it seems more likely that transmission occurred within the workplace. Community transmission seems more likely to produce clusters that are not associated with a given facility, which is not what we have observed most prominently in these data thus far. Our sequencing results therefore are consistent with workplace transmission of SARS-CoV-2, but we cannot rule out the possibility that transmission occurred elsewhere. Additional data on the degree of viral genetic diversity in the larger community would add significant power to our ability to discriminate between these two non-mutually exclusive scenarios. Overall, our study highlights the high SARS-CoV-2 infection rates among asymptomatic individuals in a high-transmission-risk setting. Identifying and removing these infected and infectious individuals from the facility provides a way to reduce transmission and potential outbreaks.
While our work focused on skilled nursing facilities, this approach could be applied to other high-risk settings (correctional facilities, factories, etc.). Figure legends (fragments): Colorado-derived sequences were obtained from NCBI. Figure 1, week of sampling. *Incidence is estimated as the number of new infections per week per 100 workers; a worker was classified as having an incident infection if it was their first positive test and they had a negative test one week prior (or two weeks prior if not tested one week prior). *Analysis reports the percent of workers that tested positive at least once during the 5-6-week study period and is limited to the four sites where COVID-19 was detected (B, C, D, and E).
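The legend above restates the incidence definition used throughout: a worker contributes an incident case in a given week if it is their first positive test and they had a documented negative test one week earlier (or two weeks earlier if the prior week was missed), expressed per 100 at-risk workers. A minimal sketch of that calculation, assuming a simple per-worker list of weekly results; the data layout and toy values are illustrative, not study data.

```python
from typing import Dict, List, Optional

def weekly_incidence(results: Dict[str, List[Optional[bool]]],
                     week: int) -> float:
    """New infections per 100 at-risk workers in a given week (1-indexed).

    results maps worker ID -> weekly test results (True = positive,
    False = negative, None = not tested that week).
    """
    new_cases = 0
    at_risk = 0
    for tests in results.values():
        if any(tests[:week - 1]):                 # previously positive -> no longer at risk
            continue
        prior = tests[week - 2] if week >= 2 else None
        prior2 = tests[week - 3] if week >= 3 else None
        # at risk only with a documented negative in the prior week
        # (or two weeks prior if the prior week was missed)
        if prior is False or (prior is None and prior2 is False):
            at_risk += 1
            if tests[week - 1] is True:
                new_cases += 1
    return 100 * new_cases / at_risk if at_risk else 0.0

if __name__ == "__main__":
    toy = {
        "w1": [False, False, True, None, False],
        "w2": [False, False, False, False, False],
        "w3": [True, True, False, False, False],
    }
    print(weekly_incidence(toy, week=3))  # w1 is the only incident case -> 50.0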
Electrically driven photon emission from individual atomic defects in monolayer WS2 Electron-stimulated photon emission from individual point defects in monolayer WS2 could be visualized with atomic resolution. Previous observations of light emission by electron tunneling in heterostructures (50) and their many-body theoretical analysis (51) had already indicated that the photon emission yield was only a small fraction of the tunneling current (< 10^-3), so it could be described within first-order perturbation theory, in a way analogous to its elastic counterpart (42). To first order, light emission in STM involves one quantum of excitation present in the system at any given time (i.e., the electron transition from the initial to the final state, which can possibly couple to an intermediate excitation in the medium such as a plasmon, followed by subsequent decay into an emitted photon). Under these conditions, a full quantum treatment of both the radiation and the generally lossy tip-sample system is fully equivalent to a semi-classical formulation in which the initial and final electronic states (assuming a one-electron picture with wave functions ψ_i(r) and ψ_f(r), and energies ε_i and ε_f, respectively) define an inelastic tunneling current j(r, t) = j(r) e^{-iωt} + j*(r) e^{iωt}, which we can treat as a classical source at the transition frequency ℏω = ε_i - ε_f (44). We only need to describe the e^{-iωt} components and then use causality to obtain the full time dependence by taking twice the real part of the calculated complex field amplitudes. Dropping the overall e^{-iωt} factor for simplicity, we can write the electric field produced by the above current in terms of the electromagnetic 3 × 3 Green tensor G(r, r′, ω) as E(r) = (i/ω) ∫ d³r′ G(r, r′, ω) · j(r′). We further assume a local dielectric description of the system, which allows us to characterize it through a permittivity ε(r, ω) given by the frequency-dependent dielectric function of the material present at each position r (e.g., ε = 1 in vacuum). The Green tensor then satisfies the relation (52) ∇ × ∇ × G(r, r′, ω) - k² ε(r, ω) G(r, r′, ω) = 4πk² δ(r - r′) I_3, where k = ω/c is the free-space light wave vector and I_3 is the 3 × 3 unit matrix. As an illustration of the generality of this formalism, its application to transitions between the states of a fast electron in the beam of a transmission electron microscope readily leads to a widely used expression for the electron energy-loss probability (53). This approach is also useful to study the decay of excited atoms in the presence of arbitrarily shaped structures (54), where the small size of the associated current distribution j compared with the light wavelength allows us to condense it into a transition dipole. For STML, such a transition-dipole approach also constitutes a reasonable approximation (37). Emission and decay from a dipole. In the study of both atomic decay and STM-induced light emission we can generally exploit the fact that the extension of the involved electronic states is small compared with the light wavelength, thus reducing Eq. (S2) to E(r) ≈ G(r, r_0, ω) · p, where p = (i/ω) ∫ d³r j(r) is the transition dipole moment (an equivalent expression follows from inserting Eq. (S1) into the integral and integrating one of the terms by parts) and r_0 denotes the position of the dipole (e.g., the atom position).
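Because the display equations in this part of the supplement did not survive extraction, the relations just described are restated compactly below. This is a reconstruction consistent with the conventions stated in the text (Gaussian units, e^{-iωt} time dependence, physical field given by twice the real part of the complex amplitude), with ℏ restored where it was dropped; the original equation numbering (S1)-(S4) is not reproduced.

```latex
\begin{align*}
\mathbf{j}(\mathbf{r},t) &= \mathbf{j}(\mathbf{r})\,e^{-i\omega t}+\mathbf{j}^{*}(\mathbf{r})\,e^{i\omega t},
  \qquad \hbar\omega=\varepsilon_i-\varepsilon_f,\\
\mathbf{E}(\mathbf{r}) &= \frac{i}{\omega}\int d^{3}\mathbf{r}'\;
  G(\mathbf{r},\mathbf{r}',\omega)\cdot\mathbf{j}(\mathbf{r}'),\\
\nabla\times\nabla\times G(\mathbf{r},\mathbf{r}',\omega)
  -k^{2}\,\epsilon(\mathbf{r},\omega)\,G(\mathbf{r},\mathbf{r}',\omega)
  &= 4\pi k^{2}\,\delta(\mathbf{r}-\mathbf{r}')\,I_{3},
  \qquad k=\frac{\omega}{c},\\
\mathbf{E}(\mathbf{r}) &\approx G(\mathbf{r},\mathbf{r}_0,\omega)\cdot\mathbf{p},
  \qquad \mathbf{p}=\frac{i}{\omega}\int d^{3}\mathbf{r}\;\mathbf{j}(\mathbf{r}).
\end{align*}
```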
Incidentally, when the initial and final electron states are solutions of the Schrödinger equation with the same potential, the above expression for the transition dipole reduces to p = -e ∫ d³r ψ_f*(r) ψ_i(r) r, although this result might not be generally applicable in STM, where the two states can even have different effective masses. In particular, for an excited atom in vacuum (i.e., taking G(r, r_0, ω) = G_0(r, r_0, ω) = (k² I_3 + ∇ ⊗ ∇) e^{ik|r - r_0|}/|r - r_0| (52)), calculating the emitted power from the integral of the outgoing Poynting vector over a distant sphere centered at the atom, and dividing the result by the photon energy ℏω, we readily find the photon emission rate (i.e., the atom decay rate, since there are no other decay channels in this configuration) to be Γ_0 = 4k³|p|²/3ℏ, which agrees with the expression obtained from a more tedious procedure involving quantization of the electromagnetic field (55). The present formalism has also been extensively used to rigorously obtain the decay and emission rates of excited atoms near surfaces and nanostructures (54,56), with the Green tensor obtained analytically in simple geometries, or via numerical solution of Maxwell's equations in more involved structures. Like for a dipole in vacuum, the emission rate is also given by the integral of the Poynting vector over a distant spherical surface divided by ℏω. However, the emission rate is only one of the contributions to the total decay rate. The latter can be obtained from a simple general expression, obtained for example by integrating the outgoing Poynting vector over a small sphere surrounding the dipole (54): Γ = Γ_0 + (2/ℏ) Im{p* · G_ref(r_0, r_0, ω) · p}, where r_0 is the dipole position and G_ref = G - G_0 is the reflected (by the structure) component of the Green tensor. Dipole near a planar surface. For r and r′ near a planar surface, G_ref(r, r′, ω) admits an analytical expression in terms of plane waves by introducing reflection at the surface through the Fresnel coefficients r_p and r_s for p and s polarization. More precisely, using the identity e^{ik|r - r′|}/|r - r′| = (i/2π) ∫ d²Q (1/k_z) e^{iQ·(R - R′) + i k_z |z - z′|}, where R = (x, y) and k_z = √(k² - Q²) + i0⁺ with Im{k_z} > 0, operating with k² I_3 + ∇ ⊗ ∇ on the exponentials inside the integrand, and projecting onto the dyadic identity I_3 = ê_p^± ⊗ ê_p^± + ê_s ⊗ ê_s + k̂^± ⊗ k̂^±, defined in terms of polarization and propagation unit vectors in which the upper (lower) signs must be used for z > z′ (z < z′); likewise, taking z, z′ > 0 and the surface at z = 0, the reflection component becomes G_ref(r, r′, ω) = (ik²/2π) ∫ d²Q (1/k_z) e^{iQ·(R - R′) + i k_z (z + z′)} [r_p ê_p^+ ⊗ ê_p^- + r_s ê_s ⊗ ê_s], where downward waves emanating from z′ are converted into upward waves reaching z upon reflection at the surface. Introducing these expressions into Eq. (S3) and taking the atom to be placed at a distance z_0 above the surface (i.e., r_0 = (0, 0, z_0)), we obtain the field. In the far-field limit (kr ≫ 1), this expression yields E(r, ω) → f(kR/r) e^{ikr}/r, which allows us to evaluate the emitted power as the integral of the radial component of the Poynting vector over a distant upward hemisphere, (c/2π) ∫_{z>0} dΩ_r |f(kR/r)|², from which the photon emission rate Γ_em is obtained by once more dividing the result by ℏω. Finally, we find the emission rate given in Eq. (S7). In the absence of a surface (r_p = r_s = 0), that expression reassuringly yields Γ_0/2, indicating that half of the decay rate in free space is accounted for by upward photon emission (and the other half by downward emission). In the presence of the surface, the decay rate is given by Eq.
(S5); for our dipole near a planar surface, this leads to the expression in Eq. (S8). The Q > k part of this integral involves evanescent waves (i.e., an imaginary normal light wave vector k_z) that contribute to the decay through absorption (proportional to Im{r_p} and Im{r_s}), for example via the emission of plasmons (57); the Q < k part is a combination of absorption and photon emission. We first illustrate the application of Eq. (S7) to study the emission from a dipole on a planar surface in the absence of a tip. We consider an out-of-plane dipole, which will later be identified with the tip-sample current, as we argue below. Figure S1A shows the resulting spectra for different materials. A first observation is that the emission grows with photon energy, as expected from the k³ coefficient in front of the integral in Eq. (S7); in physical terms, the dipole appears larger relative to the photon wavelength as the energy increases, therefore undergoing better coupling to radiation. A second observation is that the emission is similar in magnitude in all cases, and in particular, the results for SiC are nearly indistinguishable when the material supports monolayers of graphene and WS2. A third observation is that the emission rate remains almost unchanged when the dipole is separated by 5 nm from the surface (note that the results are normalized to the dipole strength |p|²), as this distance is much smaller than the light wavelength in the spectral range under consideration. The emission rate is only a part of the decay rate, as the latter also receives contributions from absorption by the material [see Eq. (S8)]. At zero separation, the decay rate diverges as a result of the unphysical 1/r Coulomb interaction at small distances in the local response approximation used here to describe the materials (i.e., we use frequency-dependent dielectric functions). This divergence is however removed when incorporating spatial dispersion in the response, which is in general an effect that becomes important only at small separations below 1 nm (58), so for simplicity we ignore it in this discussion and continue using the local approximation. In Fig. S1B, we plot the upward emission and decay rates when the dipole is 2 nm above the surface, both of them normalized to the emission rate in vacuum Γ_0. The normalized emission rate near the material takes unity-order values, and therefore, we conclude that the presence of the surface does not significantly affect the emission relative to free space, as one could anticipate from the close resemblance of the results obtained for different materials in Fig. S1A. At low frequencies, gold and silver produce good screening (perfect-conductor limit), thus doubling the magnitude of the dipole (through its image contribution) and increasing the upward emission by a factor of 4 relative to free space. In contrast, the decay rate becomes 2-4 orders of magnitude larger than in free space, with silver giving the lowest values among the materials under consideration because of the relatively low losses in this noble metal. Again, the decay rate in SiC does not change significantly (in log scale) when decorating it with monolayers of graphene or WS2. The decay is thus dominated by non-radiative contributions [parallel wave vector Q > k in Eq. (S8)]. Although this is an inelastic effect, it should result in a dark tunneling current, which is still smaller than the elastic tunneling rate (see below).
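As a numerical anchor for the free-space rate Γ_0 = 4k³|p|²/3ℏ quoted above, the short script below evaluates the equivalent SI expression for an illustrative transition dipole of 1 e·Å at a photon energy of 2 eV; both numbers are assumptions chosen only to set the scale and are not parameters of this work.

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34      # J*s
C = 2.99792458e8            # m/s
EPS0 = 8.8541878128e-12     # F/m
E_CHARGE = 1.602176634e-19  # C

def free_space_decay_rate(photon_energy_eV: float, dipole_e_angstrom: float) -> float:
    """Spontaneous-emission rate of a transition dipole in vacuum.

    Uses the SI form Gamma_0 = k^3 |p|^2 / (3*pi*eps0*hbar), equivalent to the
    Gaussian-units expression 4 k^3 |p|^2 / (3*hbar) quoted in the text.
    """
    omega = photon_energy_eV * E_CHARGE / HBAR        # rad/s
    k = omega / C                                     # free-space wave vector (1/m)
    p = dipole_e_angstrom * E_CHARGE * 1e-10          # dipole moment (C*m)
    return k**3 * p**2 / (3 * math.pi * EPS0 * HBAR)  # 1/s

if __name__ == "__main__":
    gamma0 = free_space_decay_rate(2.0, 1.0)  # illustrative values only
    print(f"Gamma_0 ~ {gamma0:.2e} s^-1 (lifetime ~ {1e9 / gamma0:.0f} ns)")
```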
The radiationless absorption accompanying this process should additionally produce local heating, thus raising the interesting question of whether it can be detected through its bolometric effect, or perhaps via direct charge-carrier separation in an appropriately engineered sample. The angular dependence of the emission (Fig. S1C) does not depart significantly from the well-known cos²θ distribution in free space as a function of emission angle relative to the dipole orientation. This dependence should also change when a tip is present, although the bulk of the emission is directed sideways, and therefore, the effect of the tip should not be dramatic. Elastic and inelastic dipoles associated with tunneling from a tip into a defect of a 2D material. We now address the question of what is actually measured by recording maps of light emission in an STM while scanning a defect in a 2D material. We adopt the dipole approximation [Eqs. (S3) and (S4)] and consider a spherical evanescent electron wave ψ_i(r) = C e^{-κ_i|r - r_i|}/|r - r_i| emanating from the tip (43) (centered at r_i, see Fig. S2). Nevertheless, the conclusions drawn below should not be too sensitive to the exact details of the initial wave function, provided it has an atomic-scale origin. Here, C is a normalization constant that depends on the detailed atomic shape and composition of the tip, while κ_i gives the evanescent spill-out of the initial state outside the tip, which is determined by its binding energy relative to the vacuum threshold (see below). Figure S2. Schematic representation of the elements involved in the theoretical description of STML from a defect in a 2D material. A dipole moment (downward arrow) associated with the transition between initial and final electron states acts as an electromagnetic source and produces light emission away from the tip region, assisted by coupling to tip plasmons. The initial state originates in an atomic protuberance of the tip and, therefore, can be approximated as a spherical wave emanating from a tip position r_i. Incidentally, plasmon-photon coupling can take place after long plasmon propagation away from the apex region. Using the representation of this type of wave given by Eq. (S6) combined with Eq. (S4), we find a transition dipole in which κ_z = √(κ_i² + k_∥²); here we have renamed the integration variable as Q → k_∥ to distinguish the optical parallel wave vector Q (see above) from the electronic parallel wave vector k_∥, and we have used the fact that z < z_i in the region near the final state (see Fig. S2). We now argue that the initial electron evanescent wave has a decay length 1/κ_i ≈ ℏ/√(2m_e φ) dictated by the tip work function φ ∼ 5 eV; this leads to 1/κ_i ≈ 0.1 nm, which we have to compare with the lateral size of the 2D final state D ∼ 1 nm; we conclude that κ_i D ≫ 1, and therefore, the largest values of k_∥ ∼ 1/D needed in the above integral to obtain a good representation of the final state can be neglected in front of κ_i; therefore, we can approximate κ_z ≈ κ_i and disregard the in-plane components of p, which then reduces to the expression used below. As the emission is proportional to |p|² [see Eq.
(S7)], we conclude that the photon yield is probing the final-state wave function at the lateral position of the atomic-scale tip. We can obtain a more insightful result by noticing that the k_∥ in-plane Fourier component of the final state must decay with distance z above a plane at z = z_f that lies right outside the 2D layer, with a decay constant √(κ_f² + k_∥²) ≈ κ_f, where κ_f is determined by the final-state energy relative to the vacuum level, and in the rightmost part of this expression we have approximated k_∥ ≪ κ_f, similar to what we have done for κ_i in the initial state. Also, neglecting the photon energy and applied bias potential energy in front of the binding energy of initial and final states referred to vacuum, we further approximate κ_f ≈ κ_i, which allows us to work out the integral in Eq. (S9) to find a dipole proportional to ψ_f(R_i, z_f) e^{-κ_i d}, where d = z_i - z_f is the tip-sample distance. The emission probability is then obtained as 4k³|p|²/3ℏ (see expression for Γ_0 above) multiplied by the radiative Purcell factor (i.e., the ratio of the radiative component of the local density of optical states to its value in vacuum). This factor can be substantially enhanced due to coupling to plasmons (see below). We now recall that the elastic tunneling current is also proportional to |ψ_f(R_i, z_f)|² (43), and therefore, both the elastic current and the inelastic photon emission rate are proportional to the final-state electron probability right under the tip position. In more detail, specifying the Tersoff and Hamann (43) formalism to a final state with the characteristics considered above, we find the STM elastic current to be contributed by the initial state under consideration through the corresponding matrix element. Again, we conclude that both STM and STML intensities are proportional to the defect-orbital electron probability |ψ_f(R_i, z_f)|² at the sample surface, and both of them are attenuated by the same exponential factor e^{-2κ_i d}. In more rigorous terms, this is a consequence of the symmetry of the electromagnetic Green tensor, which under those conditions satisfies G(r, r′, ω) = transpose{G(r′, r, ω)}. A direct application of the reciprocity theorem to our STML geometry (i.e., with r at the tip and r′ at the light detector) allows us to state that the enhancement in the emission rate from the transition dipole of Fig. S2 along a given outgoing direction must be equal to the enhancement of the near-electric-field intensity at the position of that dipole for light incident from that same direction. This enhancement factor is also known as the Purcell factor P(ω) (59), which coincides with the variation of the emission rate from an optical emitter (e.g., a quantum dot or a fluorescent molecule) normalized to the rate in vacuum. We calculate P(ω) = |E/E_ext|² (i.e., the ratio of local to externally incident field intensities, which must then be understood as the emission enhancement through reciprocity) in Fig. S3 for a tip of 20 nm radius and 1 nm separation from a SiC surface. The enhancement reaches 4 orders of magnitude at energies below the gold plasmon (i.e., < 2.5 eV, see Fig. S3A) and is highly localized near the tip (Fig. S3B). Incidentally, the Purcell factor coincides with the so-called local density of optical states normalized to its value in vacuum (60), and just for clarity, we define P(ω) in this work as the contribution coming from radiative modes (i.e., P(ω) quantifies the effective number of radiative decay channels at frequency ω normalized to that number in vacuum). The spectral profile of the enhancement depends on the detailed tip morphology.
Upon examination of several tips (see below), it is unlikely that a spectrally narrow plasmon is supported by the tip, and therefore, P(ω) is expected to be generally characterized by a broad spectral distribution with a sharp cutoff at the plasmon energy. Additionally, like in the calculations of Fig. S3 within the field-enhancement picture, the tip is expected to act as a collector of light that couples to propagating plasmons, which in turn move toward the tip region, thus enhancing the field at the tip apex relative to the incident one. In this respect, there is room for improvement of the tip design. For emission around ℏω ∼ 2 eV with bandwidth ℏΔω ∼ 1 eV, a tip-sample distance d ∼ 1 nm, and a Purcell factor P ∼ 10⁴ (see Fig. S3), we find a photon-to-electron overall ratio of 10⁻⁴, in excellent agreement with our experimentally estimated yield. Incidentally, we are neglecting in this calculation inelastic tunneling associated with nonradiative processes (i.e., mediated by direct material absorption), which could produce a significant contribution (see Fig. S1B), although this should not change the order of magnitude of the estimated ratio obtained here. It is important to note that elastic tunneling (for electron injection into the sample) requires the energy E_2D of the final 2D defect state studied in this work to be below the Fermi level of the metallic tip E_F^tip, while in STML the photon energy ℏω compensates for the difference between tip and 2D states, which must then satisfy the condition 0 < ℏω < E_F^tip - E_2D, thus producing a correspondingly broad spectral emission. Obviously, the difference E_F^tip - E_2D depends on the applied potential energy V_bias, with the onset for emission determined by the condition V_bias > E_2D - E_F^2D (i.e., the tip Fermi energy must be above the 2D defect state energy). MONOLAYER AND BILAYER WS2 ON GR/SIC SAMPLE Mono- and bilayer WS2 islands were grown ex situ by chemical vapor deposition (20) on epitaxial graphene (GR) on (6H)-SiC substrates (62). Further details can be found in Refs. 19, 20. We identified several types of defects in as-grown samples, including transition metal substitutions and oxygen substituting sulfur (19,35). Sulfur vacancies, which are absent in as-grown samples, can be deliberately introduced by annealing the sample at 600 °C in vacuum (36). Figure S4. dI/dV spectroscopy of WS2 defects and various defect-free substrates. (A) dI/dV spectroscopy on different substrates and defect locations (see legend on the right). (B) dI/dV spectroscopy of the two unoccupied Vac_S top (red) and three Cr_W defect states (orange). ADDITIONAL STS MEASUREMENTS In Fig. S4, dI/dV spectra at different sample locations are shown: mono- and bilayer epitaxial graphene on SiC (light gray), mono- and bilayer WS2 on Gr(2ML)/SiC (black and dark gray), and a sulfur vacancy (Vac_S) and chromium substituent (Cr_W) in WS2(1ML)/Gr(2ML)/SiC. Vac_S features two unoccupied defect states in the band gap (36), and Cr_W hosts three defect states close to the conduction band minimum (19). STM LUMINESCENCE Optical spectra at different sites. The STM luminescence is specific to the atomic site at which electrons are injected. In Fig. S5, the STML spectra recorded on Vac_S top, Cr_W, and non-defective WS2 are compared; the defect emission appears below the WS2 bulk emission. The steps in the defect emission (white arrows) can be explained by the discrete defect states, which are the final states of the inelastic electron tunneling process. On both WS2(1ML) and WS2(2ML) only emission at higher tunneling biases is observed (top left corner in Fig.
S5F,J), which corresponds to inelastic electron tunneling into the WS2 conduction band. STML is also observed for Gr/SiC and Au(111). The detected photons have energies lower than or equal to the injected electron energy. We associate the oblique dashed lines in Fig. S5K with isoelectronic transitions in which the photon energy equals the injected electron energy. STML is also observed at negative sample bias (Fig. S7) (36). Under these conditions, the defect becomes negatively charged (Vac_S⁻) in the vicinity of the tip. Accordingly, STML for Vac_S at negative bias (Fig. S7C) is likely related to inelastic tunneling events out of the populated (formerly unoccupied) defect states. STML current dependence. The photon counts follow a linear relation as a function of tunneling current at least up to 50 nA. The linear dependence between injected electrons and emitted photons suggests a single-electron excitation process. No saturation behavior or current-dependent change in the emission spectrum was observed, in contrast to what has been reported for other systems (30). The extrinsic emission yield (detected photons per tunneling electron, Y; see above for an analytical estimate) is the product of the intrinsic quantum efficiency η_0 of the radiative tunneling process, the tip-mediated coupling efficiency κ_tip from the tunnel junction into the far field (plasmon enhancement), and the detection efficiency of the optical setup κ_setup: Y = η_0 κ_tip κ_setup. The setup detection efficiency is about 10⁻³, which accounts for the solid angle of collection, optical losses (fiber coupling, mirrors, and lenses), and the detector quantum efficiency. At larger tunneling biases of about 3.5 V, we detect ∼10⁻⁷ photons per electron in the far field. Hence, the intrinsic quantum efficiency times the tip enhancement is estimated to be 10⁻⁴ using a standard Au-coated W tip. As discussed in the next section, the tip shape has a decisive impact on the brightness and spectral shape of the emission. Here, we rely on stochastic tip changes by nanometer-deep indentations into a Au surface. The plasmonic coupling of the quantum emitter could easily be enhanced by choosing a more optimized plasmonic or optical cavity. A representative STM tip was analyzed using scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX). We used a ZEISS Ultra 55 FESEM setup equipped with a Bruker X-ray energy-dispersive spectrometer for elemental mapping. As shown in Fig. S9, the tip apex becomes morphologically less defined on the sub-micron scale after field emission and nano-indentations into the Au surface. However, the W tip wire is clearly coated with a Au film at the very apex of the tip, which results in a plasmonic enhancement effect. We also compared etched tungsten tips to etched silver tips. Like for the W tips, we used field emission and surface pokes in Au to sharpen the Ag tip. The bulk tip material has essentially no effect on the STML spectrum, as shown in Fig. S10. The STML spectra on Au(111) are very similar for both the W and Ag tips. This suggests that only the mesoscopic tip shape and material (Au in both cases) matter for the spectral emission properties. The shape of the tip, however, has a decisive effect on the STML emission spectrum. While the STML spectrum is only marginally changed after small pokes (≈1 nm approach from the tunneling set-point) at zero bias (Fig. S11A), the spectral profiles change substantially when 2.5 V are applied during the pokes (Fig. S11B).
The emission intensity can be dramatically changed, but also different spectral ranges become enhanced. In Fig. S12, a series of STML spectra is shown after consecutive big pokes (> 1 nm approach from the tunneling set-point) at zero bias. The spectrum is considerably modified after each poke, both in intensity and spectral shape. This shows that the spectral transfer function that modulates the STML spectra is dominated by the mesoscopic tip shape. It also hints at the potential for spectral enhancement by tailoring the nanocavity formed by the tip and the substrate. Figure S11. STML spectrum on Au(111) after small tip pokes. (A) After small indentations at 0 V: STML spectra taken on Au(111) after consecutive tip reshaping by small (< 1 nm) pokes into the Au surface at 0 V bias; the emission spectrum using intermediate-state tips is barely changed. (B) After small indentations at 2.5 V: STML spectra on Au(111) after consecutive small tip reshaping at 2.5 V bias; the STML is gradually modified in intensity and spectral weight. Figure S12. STML spectrum on Au(111) after big tip pokes (after large indentations, > 10 nm, at 0 V). STML spectra taken on Au(111) after consecutive tip reshaping by large (> 10 nm) pokes into the Au surface at 0 V bias. The emission spectrum is significantly changed after each poke in both intensity and spectral shape.
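Following the yield decomposition Y = η_0 κ_tip κ_setup given above, the detected far-field yield (~10⁻⁷ photons per electron at ~3.5 V) and the setup efficiency (~10⁻³) fix the product η_0 κ_tip at about 10⁻⁴. A trivial sketch of that bookkeeping is shown below; the values are taken from the text, while the function name is illustrative.

```python
def intrinsic_times_tip(yield_far_field: float, setup_efficiency: float) -> float:
    """Invert Y = eta_0 * kappa_tip * kappa_setup for the product eta_0 * kappa_tip."""
    return yield_far_field / setup_efficiency

if __name__ == "__main__":
    Y = 1e-7            # detected photons per tunneling electron (text, ~3.5 V bias)
    kappa_setup = 1e-3  # optical-setup detection efficiency (text)
    print(f"eta_0 * kappa_tip ~ {intrinsic_times_tip(Y, kappa_setup):.0e}")
    # -> ~1e-4, matching the estimate quoted in the text
```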
Plasmid DNA contaminant in molecular reagents Background noise in metagenomic studies is often of high importance and its removal requires extensive post-analytic, bioinformatics filtering. This is relevant as significant signals may be lost due to a low signal-to-noise ratio. The presence of plasmid residues, that are frequently present in reagents as contaminants, has not been investigated so far, but may pose a substantial bias. Here we show that plasmid sequences from different sources are omnipresent in molecular biology reagents. Using a metagenomic approach, we identified the presence of the (pol) of equine infectious anemia virus in human samples and traced it back to the expression plasmid used for generation of a commercial reverse transcriptase. We found fragments of multiple other expression plasmids in human samples as well as commercial polymerase preparations. Plasmid contamination sources included production chain of molecular biology reagents as well as contamination of reagents from environment or human handling of samples and reagents. Retrospective analyses of published metagenomic studies revealed an inaccurate signal-to-noise differentiation. Hence, the plasmid sequences that seem to be omnipresent in molecular biology reagents may misguide conclusions derived from genomic/metagenomics datasets and thus also clinical interpretations. Critical appraisal of metagenomic data sets for the possibility of plasmid background noise is required to identify reliable and significant signals. Results equine Infectious anemia virus pol sequences are derived from extrinsic plasmids. In a previous study, we detected contigs containing the polymerase (pol) gene of the retrovirus Equine infectious anemia virus (EIAV) in all evaluated human samples from healthy volunteers (n = 4) 48 . EIAV is a retrovirus infecting Equidae but not reportedly humans and also has not been reported as a zoonotic disease of humans so far 49 . A phylogenetic analysis of the sequences found in relation to those of other lentiviridae such as Human Immunodeficiency Virus-1 pol (HIV-1; NC_001802.1), Feline Immunodeficiency Virus pol (FIV; NC_001482.1) and Maedi/Visna pol strain kv1772 (NC_001452.1) showed a high similarity of the sequences detected with the pol gene of the EIAV clone CL 22 strain (ID: M87581.1; Fig. 1). Further alignment of sequences showed no genetic variation among the pol sequence we found, which is highly unusual for retroviruses with high mutation rates. Only when compared to the standard strain EIAV Wyoming, a small number of nucleotide differences had been identified. All fragments found, corresponded only to a part of the pol gene of EIAV reference strains (1.667 kb). Furthermore, the pol sequences identified were flanked by a CmR sequence (Chloramphenicol acetyltransferase; ID: EDS05563.1), and in the case of the longest contig available by an additional Bla Tem-1 resistance-encoding sequence (ID: WP_000027050.1, Fig. 2A). Further assembly of EIAV pol flanking sequences revealed additional genes indicative for the presence of an expression vector including a Histidine-Tag, a Ribosomal Binding Site (RBS), a lac operator, a T5 promoter and a lambda t0 as well as a rrnB T1 terminator (Fig. 2B). To validate the presence of a vector and to identify the source of contamination, we tested all laboratory consumables and clinical samples used previously by Thannesberger et al., with the use of a PCR assay that is specific for the EIAV pol sequences found. 
Surprisingly, all of these samples were negative for EIAV pol sequences (Fig. 3A). To exclude the presence of an RNA template of the EIAV pol sequences, samples were tested again after reverse transcription with the Omniscript RT Kit (Qiagen, Hildesheim, Germany). After that, all reverse-transcribed samples tested positive for EIAV pol sequences, including the non-template control of the reaction mix (Fig. 3B). Therefore, we suspected that the RT kit used (Omniscript RT Kit) was the source of the EIAV pol sequences. To validate this hypothesis, we treated all of these samples with a different reverse transcriptase (iScript cDNA Synthesis Kit, Bio-Rad, California, USA) and repeated the same experiment. These experiments yielded uniformly negative test results (data not shown), which further indicates that the Omniscript RT Kit was the source of the EIAV pol sequences. In order to quantify the overall genomic background noise present during the virome testing procedure, a qPCR was designed that is specific for the CmR resistance gene found frequently in the EIAV contigs. Three different time steps, reflecting the enzymatic treatments incorporated in the standard workflow of the VIPEP method, were tested and designated T0, T1, and T2. Time step T0 contained the reverse transcription mix (Omniscript RT Kit) without performing reverse transcription, T1 was after the reverse transcription, and T2 was after a multiple displacement amplification (MDA) of 1 µl of T1 with the REPLI-g Mini Kit (Qiagen, Hildesheim, Germany). The plasmid copy number increased from 39,249 per µl at T0 to 383,045 copies at T1 and 245,444,045 copies at T2. Characterization of omnipresent natural and artificial plasmid residues in NGS reagents. After that, all contigs available from the previous study were re-evaluated in silico for the presence of plasmid sequences such as selection markers and origins of replication to evaluate the possible presence of additional artificial expression vectors. We found multiple other sequences exhibiting characteristics of expression vectors (Fig. 4). Of 4956 contigs from twelve samples, 1.61% (n = 80) contained plasmid sequences. These sequences were found in samples as diverse as human urine (n = 4), pharyngeal lavages (n = 4), technical replicate groups (n = 2), and a non-template control (n = 1). Figure 2. Analysis of the EIAV plasmid. (A) A BLAST search revealed, for sequences above 2.5 kb, the presence of a CAT (chloramphenicol acetyltransferase). The longest sequence, UN_TR272_len_4326, carries a second bacterial resistance gene (AmpR) conferring resistance to ß-lactam antibiotics such as ampicillin. (B) Plasmid map of the predicted Omniscript RT Kit expression plasmid, which was identified as the source of the EIAV pol. Qiagen confirmed that such a plasmid is used for their Omniscript product. The EIAV pol sequence is in frame with a histidine tag, flanked by BamHI and HindIII restriction sites, and followed by a lambda t0 terminator. Further downstream lies an inactive CmR resistance gene followed by an rrnB T1 terminator. Further upstream, an AmpR promoter together with a ß-lactamase can be found. In front of the insert is a ribosomal binding site (RBS) with a T5 promoter to ensure strong transcription. The system is induced by a lac operator. The backbone of the plasmid appears to be pDS56/RBSII, and the origin of replication may therefore be pBR322. The whole plasmid, named p6EIAV-RT, was created by Dr. Stuart J. LeGrice in 1991.
The relative abundance of plasmid background ranged from 0.16% in the non-template control (NTC) up to 20.83% in one patient sample. Interestingly, the urine samples had a higher plasmid background, with a mean of 11.67% (max: 20.83%; min: 2.65%; SD: 8.97%), compared with the pharyngeal lavage samples, with a mean of 4.67% (max: 10.47%; min: 2.65%; SD: 4.42%). The urine technical replicates had higher plasmid residues compared with the pharyngeal lavage technical replicates (6.757% vs. 4.225%) (Fig. 5). Characterization of plasmid residues. Of the 80 contigs with plasmid signatures, 41% (n = 33) had an origin of replication, 63% (n = 51) a selection marker, and 52% (n = 42) an insert. Apart from the EIAV-coding expression vector, three other artificial expression vectors could be identified by their inserts. Of these inserts, 19% included a human-mouse chimera of the Bicaudal 1 gene (n = 8), 11% the UL-32 gene of Cytomegalovirus (n = 5), and 5% the leukemia fusion protein AML1-MTG8 (n = 2). All contigs with a specific insert were aligned, and the consensus sequence displayed in SnapGene Viewer gave a predicted plasmid map (Fig. 5). The plasmids coding for the Bicaudal 1 chimera and UL-32 genes were identical to those used for other studies in our laboratory and had, therefore, been identified as laboratory contaminants. BLAST of the 2268 bp long fragment of "Und_TR29_len2635", found in the Und sample (undetermined contigs), showed 99% query coverage with Homo sapiens mRNA for the AML1-MTG8 fusion protein (GenBank: D13979.1). The source of this plasmid remains unknown. Natural plasmid residues are derived from a variety of sources. Besides the presence of artificial plasmids, naturally occurring plasmids from different species were found in all twelve samples (n = 12). The most frequent plasmid was from Micrococcus spp. (Table 1). The plasmid sequences we found from Serratia marcescens pUO901 (ID: NG_047232.1) and Enterobacter cloacae pEC005 (ID: NG_050201.1) coded only for antibiotic resistances. The first one was identified as an aminoglycoside-(3)-N-acetyltransferase (AAC(3)s), whereas the latter coded for a class A extended-spectrum beta-lactamase, TEM-157 (Table 1). These plasmids are likely from natural sources. Detection of plasmid residues in commercially available polymerases. To evaluate whether plasmid residues are commonly present in commercially available polymerase preparations, we tested Taq polymerases (n = 4), high-fidelity polymerases (n = 2), and qPCR mastermixes (n = 7) for the presence of an origin of replication (pBM1/pUC19/pBR322/ColE1) and selection markers (bla TEM-1; CmR). An origin of replication and an ampicillin resistance gene were found in two polymerase preparations (HotStarTaq, EvaGreen). The complete definition for an artificial fragmented plasmid is as follows: "may contain several artificial sequences similar to a complete vector but is missing one criterion, which can be: ori (O), selection marker (SM), or promoter region with insert (I) regardless of length, and is not naturally occurring". Due to the nature of fragmented plasmids, they may have either one or two features and are further characterized by them (e.g., ori with selection marker = O + SM). Sequences containing neither an ori, a selection marker, nor an insert, but containing any other plasmid feature (e.g., histidine tags), were termed very short fragments (VSF). An origin of replication was found in only one further polymerase preparation (iTaq Universal Probes Supermix).
A chloramphenicol resistance gene was not found in any of the polymerase preparations tested. The methodology used did not incorporate a negative control to confirm whether a positive signal could be obtained. Therefore, possible laboratory cross-contamination could not be excluded entirely, although it is unlikely because the PCR mastermix was prepared in a CleneCab PCR Workstation (Herolab, Wiesloch, Germany) and highly specific primers were used. To confirm our findings, enzyme preparations that had tested positive for plasmid residues were used as templates and amplified with a previously plasmid-negative polymerase preparation (GoTaq G2 Hot Start Polymerase; Promega). The HotStarTaq remained positive for ori and ampicillin-resistance presence, and the EvaGreen 2X qPCR Express Mix-ROX remained positive only for ori presence, indicative of the possible presence of artificial expression plasmids. All Taq enzymes from Bio-Rad that had previously tested positive now tested negative and were, therefore, considered negative for plasmid presence (Table 2). Analysis of metagenomics studies. Finally, we analyzed previously published metagenomic data sets of human gut and plasma samples, as well as a data set using different whole genome amplification kits [50][51][52], for the presence of plasmid residues. In the retrospective analysis of these data sets, natural plasmid residues were found in most sets, most commonly with Acinetobacter sp. and Escherichia sp. as source organisms (Table 1 and Table 2). The highest diversity of plasmids was found in metagenomic data focusing on the fecal microbiome 53 . In particular, metagenomic studies analyzing high-biomass samples, such as microbiome studies, are expected to contain a higher amount and diversity of natural plasmids compared with low-biomass samples (e.g., plasma). Remarkably, a plasmid highly similar to that of Xuhuaishuia manganoxidans strain DY6-4 was detected in several samples of two unrelated metagenomic studies, although this bacterium has so far been found only in the Pacific Clarion-Clipperton Fracture Zone 51 (Table 3). Discussion The presence of bacterial DNA residues in commercially available enzymes, DNA extraction kits, and other molecular-grade reagents has been recognized recently 21,26,41,52 . The presence of plasmids in molecular biology reagents, however, has remained unnoticed so far. We found natural and artificial plasmid residues in most tested NGS reagents, including, in particular, recombinantly generated enzyme preparations. Sources of these plasmids included laboratory contaminants as well as bacteria and expression vectors used for the generation of recombinant proteins. Plasmid sequences have been identified frequently in NGS studies but may have been attributed erroneously to bacteria. Hence, plasmid sequences present in clinical and environmental samples may have far-reaching consequences. Metagenomic studies are increasingly used in addition to standard PCR assays to address clinical questions, as reviewed in Klymiuk & Steininger 54 . Enzymes used for these assays are generated recombinantly in prokaryotic systems. Plasmid sequences may misguide clinical treatment decisions and adversely affect patient outcomes. For example, antimicrobial resistance testing is increasingly complemented by testing bacterial isolates for the presence of genes that confer resistance 55 . In the studies analyzed, common antibiotic resistance gene sequences were found from Enterobacter cloacae and Serratia marcescens.
These two pathogens are increasingly resistant to multiple or most antimicrobial drug classes, and the presence of resistance genes in clinical samples would not be surprising or questioned 14,15,17,45. Consequently, the choice of antimicrobial treatment could be misguided towards reserve antimicrobials that are more toxic than standard ones. At least one patient death has been documented in association with a false-positive test result caused by a contaminated mastermix 56. Misguidance of clinical decisions may also be associated with false-positive PCR results. We found EIAV sequences in all human samples and could identify the plasmid used for the generation of the reverse transcriptase as the source of these sequences. The identification of a horse retrovirus in human samples was implausible, which guided our investigation in the right direction. In general, the presence of host-specific viral, genomic or plasmid DNA (e.g., Xuhuaishuia manganoxidans strain DY6-4) in samples derived from other hosts should be questioned for plausibility. Recombinant reverse transcriptase is, however, also used in PCR assays for the detection of EIAV in horse samples, and this pol sequence serves as the target in several detection assays 57. There, a positive test result would be plausible, and negative controls would test negative because they are usually not treated with a reverse transcriptase. In case of a single positive EIAV test result, however, all horses of the stable would be culled. Elimination of plasmid sequences from molecular biology reagents is difficult and costly. Natural plasmids from bacteria such as Ralstonia sp., Bradyrhizobium sp. and Legionella sp. are common contaminants in ultrapure water and are difficult to avoid 21. Contamination of reagents from the human body may remain unnoticed. In one of our recent metagenomic studies, we found plasmid fragments from Ralstonia sp., Burkholderia sp., Enterobacter sp., Acinetobacter sp., and Micrococcus sp. 48. The first two were likely introduced by water, whereas the latter three were likely introduced through human handling, as these microbes are part of the normal human skin flora 58. Previously, we found Bicaudal-1 and UL32 protein expression plasmids in human samples 48. These plasmids were very likely contaminations, as our research group used them in another research study. In addition, prokaryotic expression plasmids are commonly used to generate enzymes for molecular biology and are difficult to eliminate. For example, we identified the plasmid used for the generation of the EIAV reverse transcriptase, by its backbone, as a pDS56/RBSII-based expression vector 59. Nevertheless, we also found differences in the level of contamination between the enzyme preparations from different manufacturers, which indicates that reducing this background signal is feasible. A possible, inexpensive and practical solution to the problem of plasmid residues in metagenomics studies may be to test technical replicates of the samples as well as negative controls in parallel and to subtract, during bioinformatic analysis, the signals detectable in both. Databases that comprehensively annotate the different expression vectors used for the recombinant generation of proteins are important in this respect. Furthermore, specification of the type and sequences of the expression plasmids used, in the package insert of every molecular biology reagent, would be helpful.
Nevertheless, most production processes of enzymes are proprietary and, in our experience, companies are very hesitant to provide this information. Another solution, presented by de Goffau and colleagues, would be to use different isolation kits during sample preparation to check whether the results are reproducible 60. In conclusion, we found that plasmid sequences are frequently present in molecular biology reagents. The sources of this background noise in metagenomic studies are diverse and include contamination of reagents from the environment, cross-contamination in the laboratory from purposely generated plasmids, as well as plasmids used for the generation of enzymes. The amount and type of plasmids found in metagenomics studies may vary greatly with the pre-treatment of samples (e.g., use of different enzymes). The presence of these plasmids in samples may have far-reaching consequences, including the misguidance of therapeutic decisions in human and veterinary medicine, particularly when unexpected. Our observations open up new avenues to identifying and appropriately addressing these potential issues: background plasmid noise may be removed from signals by use of appropriate negative controls, manufacturers of enzymes and recombinant proteins may inform customers of the possible presence of plasmid traces, and metagenomic data may be interpreted even more cautiously. Methods. Urine and pharyngeal lavage samples from healthy human volunteers were collected in a sterile collection cup (Greiner Bio-One GmbH, Kremsmünster, Austria) as described previously 48. Lavages were collected by asking the volunteer to gargle 10 ml of sterile, physiologic sodium chloride solution (0.9% NaCl Mini-Plasco isotonic solution, B. Braun Austria GmbH, Maria Enzersdorf, Austria) for a minimum of one minute and collecting the lavage fluid in a sterile tube. Samples were kept on ice and processed immediately. Nucleic acids were enriched with Vivaspin 20 50,000 MWCO PES centrifugal ultrafiltration columns (Sartorius, Aubagne, France) at 4,000 g and 4 °C. Total DNA and RNA were then purified with the Roche High Pure Viral Nucleic Acid Kit (Roche, Mannheim, Germany) and reverse transcribed with either the iScript cDNA Synthesis Kit (Bio-Rad, Hercules, USA) or the Omniscript RT Kit (Qiagen, Hilden, Germany) according to the manufacturers' instructions. The samples were cryopreserved at −80 °C until testing. For quantitative analysis of plasmid copies, a qPCR assay amplifying part of the chloramphenicol acetyltransferase (CmR) encoding gene was designed with the online tool GenScript Real-time PCR (TaqMan) Primer Design (https://www.genscript.com/ssl-bin/app/primer). The 20 µl reaction mix contained 9 µl iTaq Universal Probes Supermix (Bio-Rad, Hercules, USA), 300 nM primers (Forward: 5′-GAC-GGT-GAG-CTG-GTG-ATA-TG-3′; Reverse: 5′-TGT-GTA-GAA-ACT-GCC-GGA-AA-3′), 200 nM of the CmR probe (5′-FAM-CGC-TCT-GGA-GTG-AAT-ACC-ACG-ACG-TAMRA-3′) and 5 µl template. The reaction was run in a 96-well optical microtiter plate (Life Technologies, Carlsbad, CA, USA) and amplified in a StepOnePlus Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA, USA). The reaction mix was pipetted into a MicroAmp Fast 96-Well Reaction Plate, 0.1 ml (Applied Biosystems, California, USA), and 5 µl of template was added afterwards.
The cycling conditions included an initial denaturation step at 95 °C for 2 minutes, followed by 40 cycles of denaturation for 15 seconds at 95 °C and 20 seconds extension at 60 °C. Every run of the CmR qPCR included a serial dilution of the plasmid pDONR221 from 3 × 10¹ to 3 × 10⁶ copies per well for calculation of a standard curve and quantification of target sequences. Each DNA sample was analyzed in triplicate, and at least 12 negative controls, containing only the reaction mix with 1 µl ddH₂O as template, were included in each run. To test commercially available polymerases for the presence of plasmid sequences, a specific pan-Ori primer pair (Forward: 5′-AGT-TCG-GTG-TAG-GTC-GTT-CG-3′; Reverse: 5′-GCC-TAC-ATA-CCT-CGC-TCT-GC-3′) was designed with the online primer design tool Primer3 v.0.4.0 (http://bioinfo.ut.ee/primer3-0.4.0/primer3/). This PCR assay allowed detection of pBM1, pBR322, ColE1 and pUC19 in one reaction. The commonly used penicillin resistance gene bla TEM-1 was detected by PCR using a primer pair designed by Lee and colleagues (Forward: 5′-CTA-CGA-TAC-GGG-AGG-GCT-TA-3′; Reverse: 5′-ATA-AAT-CTG-GAG-CCG-GTG-AG-3′) 53. For the detection of chloramphenicol resistance (CmR), the same primer pair was used as for the qPCR described above. Cycling conditions and setup of the reaction mixes followed the enclosed manufacturer's manual, except that no template was added. All PCR reactions consisted of 30 cycles with 30 seconds denaturation at 95 °C, 30 seconds annealing at 60 °C and 25 seconds extension at 72 °C. The time needed for initial denaturation and final extension, as well as the primer, MgCl₂ and dNTP concentrations, may vary with the polymerase or mastermix used. Cycling conditions for high-fidelity polymerases such as Q5 and iProof were shorter (10 seconds denaturation and 20 seconds extension). As positive control for ampicillin resistance and ori presence, 1 µl of a 1 ng/µl pcDNA3.1(+) dilution was used as template. The (RT-)qPCR mastermixes were pipetted according to each manufacturer's manual, and the same cycling conditions were used as for the PCR reactions. To exclude false-positive results, 0.125 µl to 0.2 µl of pure enzyme was used as template for amplification with the GoTaq G2 DNA Polymerase (Promega, Madison, Wisconsin, USA), which had no detectable plasmid residues; these cycling conditions included a 2-minute initial denaturation step at 95 °C followed by 30 cycles as described above. In order to evaluate contigs for further potential plasmid contamination, sequences were screened for the presence of common plasmid features, including origins of replication (F1, pBR322, pUC19, p15a, ColE1, SV40), selection markers (chloramphenicol, ampicillin (bla TEM-1), kanamycin (Tn5), streptomycin (aadA), puromycin (pac) and hygromycin (hph)), promoters (T7, T3, Sp6, AmpR, CMV, tet, LacI, polyhedrin, SV40), terminators (rrnB T1-T2, lambda), protein tags (histidine, HA, streptavidin) and primer binding sites (pBluescript SK, pBluescript KS, M13/pUC and other commonly used primer sites). All plasmid sequences were searched from 5′ to 3′ as well as from 3′ to 5′. Sequences with at least one of these characteristics were analyzed further with the SnapGene Viewer software (GSL Biotech LLC, Chicago, USA), which automatically annotates plasmid features. All sequences attributed to plasmids were analyzed via their annotated features and classified into artificial vectors or artificial plasmid fragments (see Fig. 5A).
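The feature screen and classification described above can be sketched as a motif scan over both strands. The following Python snippet is a minimal illustration under stated assumptions: the motif strings are hypothetical placeholders (the study relied on SnapGene Viewer's annotated feature sequences), and the classification mirrors the artificial vector / artificial plasmid fragment / very short fragment (VSF) scheme defined in the Results.

# Illustrative motif scan and classification of contigs by plasmid features.
# Motif sequences below are hypothetical placeholders, not those used in the study.

FEATURES = {
    "ori":    ["TTTCCATAGGCTCCGCC"],       # origin-of-replication fragment (placeholder)
    "marker": ["ATGAGTATTCAACATTTCCGT"],   # selection marker, e.g. bla TEM-1 (placeholder)
    "insert": ["TAATACGACTCACTATAGGG"],    # promoter region upstream of an insert (placeholder)
    "other":  ["CACCACCACCACCACCAC"],      # other feature, e.g. histidine tag (placeholder)
}

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMPLEMENT)[::-1]

def found_features(contig):
    """Return the feature classes whose motifs occur on either strand (5'->3' or 3'->5')."""
    hits = set()
    for label, motifs in FEATURES.items():
        if any(m in contig or revcomp(m) in contig for m in motifs):
            hits.add(label)
    return hits

def classify(contig):
    hits = found_features(contig)
    core = hits & {"ori", "marker", "insert"}
    if core == {"ori", "marker", "insert"}:
        return "artificial vector"
    if core:
        return "artificial plasmid fragment (" + "+".join(sorted(core)) + ")"
    if hits:
        return "very short fragment (VSF)"
    return "no plasmid signature"

In practice one would match full annotated feature sequences, as the SnapGene Viewer annotation does, rather than short exact motifs; the sketch only illustrates the decision logic.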
As a final step, known plasmid sequences were searched in the short-read metagenome sequence data of all samples, described earlier by Thannesberger and colleagues 48, as well as in published raw data from other metagenomics studies [50][51][52]. We used the previously described bioinformatic pipeline 48, which estimates the coverage along the plasmids and rejects short regions of unspecific coverage. All plasmid sequences from the NCBI RefSeq database, release 77, were used as reference 54. Abbreviated summary. Due to the increasing sequencing throughput enabled by next-generation sequencing (NGS), the analysis of all microbial genomes present in a single sample has become possible (metagenomics). The indiscriminate sequencing of all nucleic acid sequences present in a sample by metagenomics, however, poses the risk of attributing biological significance to contaminating sequences and of biasing the biological signal through a technical one. Research conclusions and clinical decisions may thus be misguided significantly. We found that background plasmid sequences are present in every biological sample and have previously been interpreted erroneously as clinically significant biological differences. Through recognition of this significant background in metagenomic studies, however, effective countermeasures can be devised, such as labelling commercial reagents for the presence of the plasmids used for the generation of recombinant proteins, and specifying their sequences.
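As a minimal illustration of the coverage-based filtering and the negative-control subtraction advocated above, the sketch below flags plasmids that are covered over a sufficiently long region of a sample and discards those equally detectable in the non-template control; the threshold and per-base depths are hypothetical and do not reproduce the published pipeline.

# Illustrative coverage-based plasmid calling with negative-control subtraction.
# Threshold and data below are hypothetical placeholders.

MIN_COVERED_FRACTION = 0.30   # reject plasmids with only short, unspecific coverage

def covered_fraction(per_base_depth):
    """Fraction of plasmid positions with at least one mapped read."""
    covered = sum(1 for d in per_base_depth if d > 0)
    return covered / len(per_base_depth)

def call_plasmids(sample_cov, ntc_cov):
    """Return plasmids detected in the sample but not in the non-template control."""
    calls = []
    for plasmid, depth in sample_cov.items():
        if covered_fraction(depth) < MIN_COVERED_FRACTION:
            continue  # short regions of unspecific coverage are rejected
        ntc_depth = ntc_cov.get(plasmid)
        if ntc_depth is not None and covered_fraction(ntc_depth) >= MIN_COVERED_FRACTION:
            continue  # also present in the negative control: treat as background
        calls.append(plasmid)
    return calls

# Hypothetical per-base depth vectors (10 positions each, for brevity)
sample = {"pUC19-like": [3, 4, 5, 2, 0, 1, 3, 4, 2, 1],
          "pDS56/RBSII-like": [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]}
ntc    = {"pUC19-like": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
print(call_plasmids(sample, ntc))   # -> ['pUC19-like']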
Fusion Bialgebras and Fourier Analysis. We introduce fusion bialgebras and their duals and systematically study their Fourier analysis. As an application, we discover new efficient analytic obstructions on the unitary categorification of fusion rings. We prove the Hausdorff-Young inequality and uncertainty principles for fusion bialgebras and their duals. We show that the Schur product property, Young's inequality and the sum-set estimate hold for fusion bialgebras, but not always on their duals. If the fusion ring is the Grothendieck ring of a unitary fusion category, then these inequalities hold on the duals. Therefore, these inequalities are analytic obstructions of categorification. We classify simple integral fusion rings of Frobenius type up to rank 8 and of Frobenius-Perron dimension less than 4080. We find 34 such fusion rings, of which 4 are group-like and 28 can be eliminated by applying the Schur product property on the dual. In general, these inequalities are obstructions to subfactorizing fusion bialgebras. Introduction. Lusztig introduced fusion rings in [26]. Etingof, Nikshych and Ostrik studied fusion categories [9] as a categorification of fusion rings, see also [8,6]. A central question is whether a fusion ring can be unitarily categorified, namely whether it is the Grothendieck ring of a unitary fusion category. Jones introduced subfactor planar algebras as an axiomatization of the standard invariant of a subfactor in [16]. Planar algebras and fusion categories have close connections, and there are various ways to construct one from the other. For example, if N ⊂ N ⋊ G is the group crossed product subfactor of a finite group G, then the 2-box space P 2,+ of its planar algebra captures the unitary fusion category Vec(G) and its Fourier dual P 2,− captures the unitary fusion category Rep(G). The Grothendieck ring of a unitary fusion category can be realized as the 2-box space of a subfactor planar algebra using the quantum double construction, such that the ring multiplication is implemented by the convolution of 2-boxes [28,22]. Recently, Jiang, the first author and the third author formalized and proved a number of quantum inequalities for subfactor planar algebras [21,14,13,25] inspired by Fourier analysis. These inequalities automatically hold for the Grothendieck rings of unitary fusion categories C, as explained in [22], through the well-known quantum double construction from unitary fusion categories to subfactors, see e.g. [28]. Moreover, the Fourier dual of a subfactor is still a subfactor, so these inequalities also hold on the Fourier dual of the Grothendieck ring, which can be regarded as the representations of the Grothendieck ring. This paper is inspired by three questions: • Vaughan Jones [17]: What are the applications of these inequalities on subfactors to other areas? • Zhenghan Wang [39]: Are these inequalities obstructions of categorification? • Pavel Etingof [5]: Do the inequalities on Grothendieck rings hold on fusion rings? In this paper, we prove that these quantum inequalities on subfactor planar algebras hold on fusion rings and partially, but not all, on the Fourier dual of fusion rings. Therefore, the inequalities that fail on the dual of the fusion rings are new analytic obstructions for the unitary categorification of fusion rings. For example, the quantum Schur product theorem [21, Theorem 4.1] holds on the Fourier dual of Grothendieck rings, but not on the Fourier dual of fusion rings.
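For later reference, the positivity property at stake can be written schematically as follows; this is only a restatement in the convolution notation introduced in Section 2, not a new claim, and the precise side (fusion bialgebra versus its dual) on which it holds or fails is exactly the point discussed below.

\[
x \ge 0,\quad y \ge 0 \;\Longrightarrow\; x * y := \mathcal{F}^{-1}\big(\mathcal{F}(x)\,\mathcal{F}(y)\big) \ge 0 .
\]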
It turns out to be a surprisingly efficient obstruction of unitary categorification of fusion rings. Moreover, it is easy to check the Schur product property on the dual of a commutative fusion ring in practice. In this way, we find many fusion rings which admit no unitary categorification, due to the Schur product property, and which cannot be ruled out by previous obstructions. In §2, we introduce fusion bialgebras as a generalization of fusion rings and their duals over the field C. The definition of fusion bialgebras is inspired by the 2-box spaces P 2,± of subfactor planar algebras. We show that if P 2,+ is commutative, then it is a fusion bialgebra. If a fusion bialgebra arises in this way, then we say that it is subfactorizable. We classify fusion bialgebras up to dimension three. The classification of the two dimensional subfactorizable fusion bialgebras is equivalent to the remarkable classification of the Jones index of subfactors [15]. It remains challenging to classify three dimensional subfactorizable fusion bialgebras. In §3- §6, we systematically study quantum Fourier analysis on fusion bialgebras. We show that the Hausdorff-Young inequalities, uncertainty principles hold for fusion bialgebras and their duals; Young's inequalities and the sum-set estimate hold for fusion bialgebras, but not necessarily on their duals. We characterize their extremizers in §6. In fact, for the dual of a fusion bialgebra, Young's inequality implies Schur product property, and Schur product property implies the sum-set estimate. Therefore, Young's inequality is also an obstruction to unitary categorify a fusion ring or to subfactorize a fusion bialgebra, and the sum-set estimate is a potential obstruction. It is worth mentioning that the Schur product property (or Young's inequality) holds on arbitrary n-box space of the Temperley-Lieb-Jones planar algebra if and only if it is a subfactor planar algebras, namely the circle parameter is the square root of the Jones index [15]. In §8, we reformulate Schur product property (on the dual) in terms of irreducible representations of the fusion ring/algebra, especially in terms of the character table for the commutative case. In the family of fusion algebras of rank 3 with every object self-dual, we observe that about 30% of over 10000 samples do not have the Schur product property (on the dual). So they cannot be subfactorized. We consider families of rank 4 or 5 fusion rings, and we compare (visually) Schur product criterion and Ostrik's criterion [32,Theorem 2.21]. Next, we give a classification of simple integral fusion rings of Frobenius type with the following bounds of Frobenius-Perron dimensions (with FPdim = p a q b , pqr, by [10]). rank ≤ 5 6 7 8 9 10 all FPdim < 1000000 150000 15000 4080 504 240 132 First, given a Frobenius-Perron dimension, we classify all possible types (the list of dimensions of the "simple objects"). Secondly, we classify the fusion matrices for a given type. We derive several inequalities from Fourier analysis on fusion rings which bound the fusion coefficients using the dimensions. These inequalities are efficient in the second step of the classification. For some specific types, the use of these inequalities reduced drastically the computation time (from 50 hours to 5 seconds). We end up with 34 simple integral fusion rings in the classification (all commutative), 4 of which are group-like and 28 of which cannot be unitarily categorified by showing that the Schur product property (on the dual) does not hold. 
It remains 2 ones. None of these 28+2 ones can be ruled out by already known methods. It has two motivations, first the categorification of a simple integral non group-like fusion ring would be non weaklygroup-theoretical and so would provide a positive answer to Etingof-Nikshych-Ostrik [10, Question 2], next there is no known non group-like examples of irreducible finite index maximal depth 2 subfactor [33,Problem 4.12], but its fusion category would be unitary, simple, integral (and of Frobenius type, assuming Kaplansky's 6th conjecture [19]). In summary, Fourier analysis on subfactors provides efficient analytic obstructions of unitary categorification or of subfactorization. Fusion Bialgebras In this section, we introduce fusion bialgebras which capture fusion algebras of fusion rings over C and their duals, namely representations. The definition of fusion bialgebras is motivated by a connection between subfactor planar algebras and unitary fusion categories based on the quantum double construction. Its algebraic aspects have been discussed in [22]. In this paper, we investigate its analytic aspects and study Fourier analysis on fusion bialgebras. The fusion bialgebra has a second multiplication and involution # on the fusion algebra. Several basic results on fusion rings, see for example [6], can be generalized to fusion bialgebras. Many examples of fusion bialgebras come from subfactor theory, and we say that they can be subfactorized. It is natural to ask whether a fusion bialgebra can be subfactorized. The question for the two dimensional case is equivalent to the classification of the Jones index. If a fusion ring has a unitary categorification, then the corresponding fusion bialgebra has a subfactorization. We introduce analytic obstructions of subfactorization from Fourier analysis on subfactors, so they are also obstructions of unitary categorification. We discuss their applications in §8. 2.1. Definitions. Let N = Z ≥0 be the set of all natural numbers. Let R ≥0 be the set of non-negative real numbers. Definition 2.1. Let B be a unital *-algebra over the complex field C. We say B has a R ≥0 -basis B = {x 1 = 1 B , x 2 , . . . , x m }, m ∈ Z ≥1 , if (1) {x 1 , . . . , x m } is a linear basis over C; (2) x j x k = m s=1 N s j,k x s , N s j,k ∈ R ≥0 ; (3) there exists an involution * on {1, 2, . . . , m} such that x * k := x k * and N 1 j,k = δ j,k * . We write the identity 1 B as 1 for short, if there is no confusion. When N s j,k ∈ N, B gives a fusion ring, and B is called a fusion algebra. The *-algebra B with a R ≥0 -basis B can be considered as a fusion algebra over the field C. Definition 2.2. For a unital *-algebra B with a R ≥0 -basis B, we define a linear functional τ : B → C by τ (x j ) = δ j,1 . Then τ (x j x k ) = N 1 j,k = δ j,k * and τ (xy) = τ (yx) for any x, y ∈ B. Moreover (1) N s * j,k = τ (x j x k x s ) = τ (x s x j x k ) = N k * s,j = τ (x k x s x j ) = N j * k,s . Note that x k * x j * = (x j x k ) * . We obtain Frobenius reciprocity (2) N s j,k = N s * k * ,j * = N k j * ,s . Therefore τ is a faithful tracial state on the *-algebra B. Following the Gelfand-Naimark-Segal construction, we obtain a Hilbert space H = L 2 (B, τ ) with the inner product x, y = τ (y * x), and a unital *-representation π of the * -algebra B on H. Moreover B forms an orthonormal basis of H. On this basis, we obtain a representation π B : B → M m (C). In particular, π B (x j ) k,s = N s j,k . We denote the matrix π B (x j ) by L j . Then and L * j = L j * . Remark 2.3. 
Under the Gelfand-Naimark-Segal construction, the * -algebra B forms a C * -algebra, which is also a von Neumann algebra. In this paper, we only consider the finite dimensional case, so we do not distinguish C * -algebras and von Neumann algebras. Definition 2.4. For a unital *-algebra B with a R ≥0 -basis B, we define a linear functional d : B → C by setting d(x j ) to be the operator norm L j ∞ of L j , below denoted by x j ∞,B . (1) A has a nonnegative real eigenvalue. The largest nonnegative real eigenvalue λ(A) of A dominates absolute values of all other eigenvalues of A. (2) If A has strictly positive entries then λ(A) is a simple positive eigenvalue, and the corresponding eigenvector can be normalized to have strictly positive entries. (3) If A has an eigenvector f with strictly positive entries, then the corresponding eigenvalue is λ(A). Proposition 2.6. Let B be a unital *-algebra with a R ≥0 -basis B. Then Proof. The right multiplication of x j on the orthonormal basis B defines a matrix R j . Then R = m j=1 R j has strictly positive entries. Let v = m j=1 λ j x j be the simple positive eigenvector of the right action R. By Theorem 2.5, we can normalize v, such that λ 1 = 1 and λ j > 0. As L j v is also a positive eigenvector, we have that Note that m j=1 d(x j )x j is an eigenvector for R by the equation above, we see that Finally, we see that d(x j ) ≥ 1. Definition 2.7 (An alternative C * -algebra A). We define an abelian C * -algebra A with the basis B, a multiplication and an involution #, The C * -norm on A is given by for any x ∈ A. Proposition 2.8. The linear functional d is a faithful state on A. Proof. Note that {d(x j )x j } are orthogonal minimal projections of A. By Proposition 2.6, d(x j ) ≥ 1, so d is faithful. Definition 2.9. For any 1 ≤ t ≤ ∞, the t-norms on A and B are defined as follows: Under the Fourier transform, the multiplication on B induces the convolution on A. We denote the convolution of x, y ∈ A by x * y := F −1 (F(x)F(y)). The C * -algebras A and B share the same vector spaces, but have different multiplications, convolutions and traces. These traces are non-commutative analogues of measures. We axiomatize the quintuple (A, B, F, d, τ ) as a fusion bialgebra in the following definition. To distinguish the multiplications and convolutions on A and B, we keep the notations as above. Definition 2.13 (Fusion bialgebras). Suppose A and B are two finite dimensional C * -algebras with faithful traces d and τ respectively, A is commutative, and F : A → B is a unitary transformation preserving 2-norms (i.e. τ (F(x) * F(y)) = d(x # y) for any x, y ∈ A). We call the quintuple (A, B, F, d, τ ) a fusion bialgebra, if the following conditions hold: Furthermore, if F −1 (1) is a minimal projection and d(F −1 (1)) = 1, then we call the fusion bialgebra canonical. Remark 2.15. We show that subfactors provide fruitful fusion bialgebras in §2.2. One can compare the three conditions in Definition 2.13 with the corresponding concepts in subfactor theory. is also a fusion bialgebra, with λ 1 , λ 2 > 0. Therefore, any fusion bialgebra is equivalent to a canonical one up to a gauge transformation. Proof. It follows from the definition of the fusion bialgebra in Definition 2.13. are multiples of minimal projections of A. Moreover, B is invariant under the gauge transformation. Conversely, any C * -algebra B with a R ≥0 -basis B can be extended to a canonical fusion bialgebra, such that F −1 (x j ) are multiples of minimal projections of A. 
On the other hand, suppose (A, B, F, d, τ ) is a fusion bialgebra. Let P j , 1 ≤ j ≤ m, be the minimal projections of A, and F −1 (1) = δ B P 1 , for some δ B > 0. The modular conjugation J is a *-isomorphism, so J(P j ) = P j * , for some 1 ≤ j * ≤ m. Then F(P j ) = F(P j * ) * and J(P 1 ) = P 1 . Moreover, By the Schur Product property, for someÑ s j,k ∈ R ≥0 . Since the functional d is faithful, d(P j ) > 0. Taking the inner product with P 1 on both sides of Equation (3), we have thatÑ In particular,Ñ 1 1,1 = δ −1 B . Take Therefore, {x j } 1≤j≤m forms a R ≥0 -basis of B. Moreover, it is the unique R ≥0 -basis of B such that F −1 (x j ) are positive multiples of minimal projections in A. Furthermore, applying the gauge transformation, we obtain a canonical fusion bialgebra In this fusion bialgebra, the minimal projections in A are still P j , 1 ≤ j ≤ m. Their convolution becomes The corresponding x j becomes Therefore, the R ≥0 -basis B is invariant under the gauge transformation. Proposition 2.21. Let (A, B, F, d, τ ) be a fusion ring. Then for any x, y, z ∈ A, we have This completes the proof of the proposition. Examples. Example 2.22. When the basis B forms a group under the multiplication of B, the C * -algebra B is the group algebra, H is its left regular representation Hilbert space, and τ is the normalized trace. On the other side, the C * -algebra A is L ∞ (B) and d is the unnormalized Haar measure. Example 2.23. When the basis B forms a fusion ring, the C * -algebra B is the fusion algebra. The quintuple (A, B, F, d, τ ) is a canonical fusion bialgebra. Proof. Let P j , j = 1, 2, . . . , m be the minimal projections of P 2,+ and P 1 be the Jones projection. Let T r be the unnormalized trace of P 2,+ , namely T r(P 1 ) = 1. Take x j = 1 √ T r(Pi) F(P j ) and x j * = 1 √ T r(Pi) F(P j ), where P j is the contragredient of P j . Then x j x k = N s j,k x s , x 1 is the identity, x * k = x k * , N s j,k ≥ 0, and N 1 j,k = δ j,k * . Remark 2.25. On the 2-box space P 2,± of a subfactor planar algebra, the Fourier transform is a 90 • rotation and the contragredient is a 180 • rotation, see e.g. §2.1 in [21]. Definition 2.26 (Subfactorization). We call (P 2,+ , P 2,− , F s , tr 2,+ , tr 2,− ) the fusion bialgebra of the subfactor N ⊂ M. We say a fusion bialgebra (A, B) can be subfactorized, if it comes from a subfactor N ⊂ M in this way. We call N ⊂ M a subfactorization of the fusion bialgebra. 2.3. Classifications. In this section, we classify fusion bialgebras up to dimension three. By the gauge transformation, it is enough to classify canonical fusion bialgebras, which reduces to classify the R ≥0 -basis of C * -algebra by Theorem 2.17. Recall in Theorem 2.24 that (P 2,+ , P 2,− , F s , tr 2,+ , tr 2,− ) of a subfactor planar algebra is a fusion bialgebra, if P 2,+ is abelian. We refer the readers to [1,2,3,23,36,4] for known examples of three dimensional fusion bialgebras from 2-box spaces of subfactors planar algebras. In these examples, different subfactor planar algebras produce different fusion bialgebras. In general, without assuming P 2,+ to be abelian, (P 2,+ , P 2,− , F s , tr 2,+ , tr 2,− ) has been studied as the structure of 2-boxes of a planar algebra, see Definition 2.25 in [21]. One may ask when a subfactor planar algebra (generated by its 2-boxes) is determined by its structure of 2-boxes, equivalently by its fusion bialgebra when P 2,+ is abelian. 
A positive answer is given in Theorem 2.26 in [21] for exchange relation planar algebras: exchange relation planar algebras are classified by its structure of 2-boxes. Classifying fusion bialgebras is a key step to classify exchange relation planar algebras. On the other hand, it would be interesting to find different subfactors planar algebras generated by 2-boxes with the same fusion bialgebras (or structures of 2-boxes). The one-parameter family of three dimensional canonical fusion bialgebras in the above classification can be realized as the 2-box spaces of a one-parameter family of planar algebras constructed in [23]. For each d 2 ≥ 1, there are a complex-conjugate pair of planar algebras to realize the fusion bialgebra as the 2-box spaces. So such a realization may not be unique. Moreover, these planar algebras arise from subfactors if and only if µ = cot 2 ( π 2N +2 ) for some N ∈ Z + . Inspired by this observation, we conjecture that: For any y ∈ B, we define its contragredient as y :=F F(y) . When B is commutative, it is natural to ask whether the dual (B, A, F −1 , τ, d) is also a fusion bialgebra. We need to check the three conditions in Definition 2.13. The conditions (2) and (3) always hold on the dual, but condition (1) may not hold. τ (e B ) . Proof. Since the gauge transformation only changes the global scaler, without loss of generality, we assume that (A, B, F, d, τ ) is a canonical fusion bialgebra. Then (1) and Proposition 2.6, We have τ (1) τ (e B ) = µ. 2.5. Self Duality. In this subsection, we will give the definition of the self-dual fusion bialgebra and study the S-matrix associated to it. The maps Φ A , Φ B implementing the self-duality may not be unique, even for finite abelian groups. Consequently, the contragredient maps on A and B are anti- * -isomorphisms. Proof. The statements follow from the fact that the contragredient map is linear and Proposition 2.41. A self-dual canonical fusion bialgebra is symmetrically self-dual if and only if S k j = S j k . In this case, Proof. For a self-dual canonical fusion bialgebra, we have that By Propositions 2.38 and 2.40, the fusion bialgebra is symmetrically self-dual if and only if (Φ B F) 2 = FF if and only if (S 2 ) k j = δ j,k * if and only if S k j = S j k . In this case, Remark 2.42. For the group case, S is a bicharacter, see [24] for the discussion on self-duality and symmetrically self-duality. This completes the proof of the theorem. Schur Product Property In this section, we will study Schur product property for the dual of a fusion bialgebra. x * B y : We say B has the Schur product property, if x * B y ≥ 0, for any x, y ≥ 0 in B. This completes the proof of the proposition. Proof. It follows from the definition of self-dual fusion bialgebras. We define a linear map ∆ : B → B ⊗ B such that Then ∆ is a * -preserving map. We say ∆ is positive if ∆(x) ≥ 0 for any x ≥ 0. Proof. We denote by ι the identity map. Note that for any Proof. By the Schur product property on A, Note that F(J(x) * x) = |F(x)| 2 ≥ 0, and any positive operator in B is of such form. Therefore, by Proposition 2.12 and Equation (4), if and only if the Schur product property holds on B. The Schur product property may not hold on the dual, even for a 3-dimensional fusion bialgebra. We give a counterexample. For this reason, Young's inequality do not hold on the dual as well, see §5 for further discussions. As a preparation, we first construct the minimal projections in B. where λ 2 , λ 3 are the solutions of and ν j = 1 + Proof. 
Note that µ = 1 + d 2 2 + d 2 3 . By Proposition 2.35, For j = 2, 3, d(Q 1 Q j ) = 0, so for some ν 2 , ν 3 > 0. As Q 2 j = Q j , we have that ν j = 1 + . Furthermore, Q 2 Q 3 = 0, so Take Solving the linear system, we have that Therefore, λ 2 , λ 3 are the solutions of Proof. Fix 0 < a < 1, and b = 1 − a. Take d 2 → ∞ and d 2 By Proposition 3.7, the Schur product property does not hold in general on the dual. Numerically, one can take Remark 3.9. Subsection 8.3 provides a complementary approach for the study of this family of rank 3 fusion bialgebras, leading to visualize the areas of parameters where Schur product property (on the dual) does not hold, and to a character table whose matrix (function of the fusion coefficients) is equal to the inverse of the one underlying Theorem 3.7 (function of the Frobenius-Perron dimensions). Hausdorff-Young Inequality and Uncertainty Principles In this section, we will recall some inequalities for general von Neumann algebras first and then we will prove the Hausdorff-Young inequalities and uncertainty principles for fusion bialgebras. [14]). Let M be a von Neumann algebra with a normal faithful tracial state τ and 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1. Then for any (2) for p = ∞, xy 1 ≤ x ∞ y 1 if and only if the spectral projection of |x| corresponding to x ∞ contains the projection R(y) as subprojection, where R(y) is the range projection of y. This proves the first inequality. Figure 1. The norms of the Fourier Transform. For the second inequality, we let This completes the proof of the proposition. We divide the first quadrant into three regions R T , R F , R T F . Recall that µ = m j=1 d(x j ) 2 is the Frobenius-Perron dimension of B. Let K be a function on [0, 1] 2 given by as illustrated in Figure 1. be a fusion bialgebra and x ∈ B Then for any 1 ≤ p, q ≤ ∞, we have Proof. It follows from the proof of Theorem 3.13 in [25]. We leave the details to the readers. Uncertainty Principles. We will prove the Donoho-Stark uncertainty principle, Hirschman-Beckner uncertainty principle and Rényi entropic uncertainty principle for fusion bialgebras. For any x ∈ A, we let R(x) be the range projection of x and S(x) = d(R(x)). For any x ∈ B, S(x) = τ (R(x)). Theorem 4.8 (Donoho-Stark uncertainty principle). Let (A, B, F, d, τ ) be a fusion bialgebra. Then for any 0 = x ∈ A, we have S(x)S(F(x)) ≥ 1; Proof. The second inequality is the reformualtion of the first one. We only have to prove the first one. In fact, i.e. S(x)S(F(x)) ≥ 1. This completes the proof of the theorem. For any x ∈ B, the von Neumann entropy H(|x| 2 ) is defined by H(|x| 2 ) = −τ (x * x log x * x) and for any x ∈ A the von Neumann entropy is defined by Proof. We assume that where p ≥ 2 and 1/p + 1/q = 1. By using the computations in the proof of Theorem 5.5 in [14], we have d dp . We obtain that By Proposition 2.12, we have that f (2) = 0. By Theorem 4.5, we have f (p) ≤ 0 for p ≥ 2. Hence f (2) ≤ 0 and Remark 4.10. Let (A, B, F, d, τ ) be a fusion bialgebra. The Hirschman-Beckner uncertainty principle is also true for x ∈ B with respect to the Fourier transform F. We give a second proof of Theorem 4.8: Proof. By using Theorem 4.9 and the inequality log S(x) ≥ H(|x| 2 ), for any x 2,A = 1, we see that Theorem 4.8 is true. For any x ∈ A or B and t ∈ (0, 1) ∪ (1, ∞), the Rényi entropy H t (x) is defined by Then H t (x) are decreasing function (see Lemma 4.3 in [25]) with respect to t for x ∞,A ≤ 1 and x ∞,B ≤ 1 respectively. Theorem 4.11 (Rényi entropic uncertainty principles). 
Let (A, B, F, d, τ ) be a fusion bialgebra, 1 ≤ t, s ≤ ∞. Then for any x ∈ A with x 2,A = 1, we have Proof. The proof is similar to the proof of Proposition 4.1 in [25], using Theorem 4.6. Young's Inequality In this section, we study Young's inequality for the dual of fusion bialgebra and the connections between Young's inequality and the Schur product property. Proposition 5.1. Let (A, B, F, d, τ ) be a fusion bialgebra. Then for any x, y ∈ A, we have This completes the proof of the proposition. Remark 5.2. It would be natural to ask whether the following Young's inequality for the dual (B, A, F, τ, d) holds in general, but it does not, because we will see that it implies the Schur product property on the dual, which does not hold on many examples provided by Theorem 3.8, Subsections 8.3 and 8.4. we actually have that Inequality (6) is true. Hence Inequality (7 follows directly by the fact that any element is a linear combination of four positive elements. Proof. As in the proof of Proposition 3.5, for any x, y ∈ B, we have that This completes the proof. Proposition 5.6. Let (A, B, F, d, τ ) be a fusion bialgebra. Then for any x, y ∈ A, we have Proposition 5.7. The following two statements are equivalent for C > 0: (2) ⇒ (1): Proposition 5.8. Let (A, B, F, d, τ ) be a fusion bialgebra. The following statements: (1) the Schur product property holds on the dual. Proposition 5.10. Let (A, B, F, d, τ ) be a fusion bialgebra. Then for any x, y ∈ A, 1 ≤ p ≤ ∞, Proof. We have This completes the proof of the proposition. Proposition 5.14. Let (A, B, F, d, τ ) be a fusion bialgebra. Then the following are equivalent: (1) x * B y r,B ≤ x p,B y q,B , 1 ≤ p, q, r ≤ ∞, 1/p + 1/q = 1 + 1/r for any x, y ∈ B; (2) x * B y 1,B ≤ x 1,B y 1,B for any x, y ∈ B; (3) x * B y ∞,B ≤ x ∞,B y 1,B for any x, y ∈ B. We say the dual has Young's property if one of the above statements is true. Proof. It follows the similar proof of Proposition 5.7 and Proposition 5.9 and 5.10. Remark 5.15. By Proposition 5.8, we have that for the dual, Young's property implies Schur product property. be a fusion bialgebra. Then for any x, y ∈ A, we have Proof. It follows from the Schur product property. Remark 5.17. Let (A, B, F, d, τ ) be a fusion bialgebra. Suppose that the dual has Schur product property. Then Proof. We have that Remark 5.20. We thank Pavel Etingof for noticing us another proof of Theorem 5.19 from an algebraic point of view [5]. Theorem 5.21 (Sum set estimate). Let (A, B, F, d, τ ) be a fusion bialgebra. Suppose that the dual has the Schur product property. Then for any x, y ∈ B, we have Proof. By Proposition 5.12 and 5.8, the proof is similar to the one of Theorem 5.19. Fusion Subalgebras and Bishifts of Biprojections In this section, we define fusion subalgebras, biprojections and bishifts of biprojections for fusion bialgebras. We prove a correspondence between fusion subalgebras and biprojections. We prove partially that bishifts of biprojections are the extremizers of the inequalities proved in the previous sections. Proposition 6.3. Let (A, B, F, d, τ ) be a fusion bialgebra and P a biprojection. Then there is a fusion subalgebra A 0 such that the range of P is A 0 . Proof. We write F(P ) = m j=1 λ j x j . By the fact that P is a projection and F(P ) is a multiple of a projection, we obtain that λ j = 0 or λ j = d(x j ),and Solving the Equation (8), we obtain that Let and By Equation (9) and (10), we have that the involution * is invariant on I A0 and Hence Then P is a biprojection. 
It indicates that R(F(B)) is a shift of R(F(B)) and B is a left (right) shift of B (2) and (3) can be followed by the property of J and J B . Definition 6.8 (Bishift of biprojection). Let (A, B, F, d, τ ) be a fusion bialgebra and B a biprojection. A nonzero element x is a bishift of the biprojection B if there is y ∈ A, a shift B g of R(F(B)) and a right shift B h of B such that Proof. By Proposition 5.16, we have Then we obtain that Hence the inequalities above are equalities, i.e. Definition 6.11. Let (A, B, F, d, τ ) be a fusion bialgebra. An element x ∈ A is a bi-partial isometry if x and F(x) are multiples of partial isometries. An element x ∈ A is an extremal bi-partial isometry if x is a bi-partial isometry and x, F(x) are extremal. Theorem 6.12. Let (A, B, F, d, τ ) be a fusion bialgebra. Then the following statements are equivalent: x is an extremal bi-partial isometry. Proof. The arguments are similar to the one of Theorem 6.4 in [14], since only the Hausdorff-Young inequality is involved. Proposition 6.13. Let (A, B, F, d, τ ) be a fusion bialgebra and w an extremal bi-partial isometry. Suppose that w is a projection. Thenw is a right shift of a biprojection. By the assumption, we have Let P = F(w)F(w) * . Then P is a multiple of a projection in B and We will show that F −1 (P ) is a multiple of partial isometry. By Corollary 4.2, we have to check (13) and Proposition 4.1 is a multiple of a partial isometry. By Schur product property, we have that F −1 (P ) > 0 and F −1 (P ) is a multiple of a projection. Hence By Equation (14), we have Hence w is a right shift of R(F −1 (P )). Corollary 6.14. Let (A, B, F, d, τ ) be a fusion bialgebra. Then a left shift of a biprojection is a right shift of a biprojection. Expanding the expression, we have 1 Hence S(P * Q) = S(P ). Remark 6.18. Let (A, B, F, d, τ ) be a fusion bialgebra. If the Schur product property holds on the dual, the results in Theorem 6.17 are true for projections in B. Theorem 6.20 (Extremizers of the Hausdorff-Young inequality). Let (A, B, F, d, τ ) be a fusion bialgebra. Suppose the dual has Young's property, Then the following are equivalent: x is a bishift of a biprojection. Quantum Schur Product Theorem on Unitary Fusion Categories In this section, we reformulate the quantum Schur product theorem (Theorem 4.1 in [21]) in categorical language. Planar algebras can be regarded as a topological axiomatization of pivotal categories (or 2-category in general). Subfactor planar algebras satisfy particular conditions designed for subfactor theory, see Page 9-13 of [16] for Jones' original motivation. A subfactor planar algebra is equivalent to a rigid C * -tensor category with a Frobenius *-algebra. The correspondence between subfactor planar algebras and unitary fusion categories was discussed by Müger, particularly for Frobenius algebras in [27] and for the quantum double in [28]. Let D be a unitary fusion category, (or a rigid C * -tensor category in general). Let (γ, m, η) be a Frobenius *-algebra of D, γ is an object of D, m ∈ hom D (γ ⊗ γ, γ), η ∈ hom D (1, γ), where 1 is the unit object of D, such that (γ, m, η) is a monoid object and (γ, m * , η * ) is a comonoid object. Let ∪ γ = η * m be the evaluation map and ∩ γ = m * η be the co-evaluation map. Then ∪ * γ = ∩ γ . We construct a quintuple (A, * , J, d, τ ) from the Frobenius algebra: Take the C * algebra with the ordinary multiplication and adjoint operation. 
For x, y ∈ A, their convolution is The modular conjugation J is the restriction of the dual map of D on A. The Haar measure is where 1 · is the identity map on the object ·. The Dirac measure is We reformulate the quantum Schur product theorem on subfactor planar algebras and its proof as follows: Theorem 7.1 (Theorem 4.1 in [21]). Given a Frobenius algebra (γ, m, η) of a rigid C * -tensor category, for any x, y ∈ A := hom D (γ, γ), x, y > 0, we have that Proof. Let √ x and √ y be the positive square roots of the positive operators x and y respectively. Then Note that d is a faithful state, so d(x), d(y) > 0. Moreover, d(x * y) is a positive multiple of d(x)d(y), so d(x * y) > 0 and x * y > 0. By the associativity of m and J(m) = m, the vector space hom D (γ, γ) forms another C * -algebra B, with a multiplication * and involution J. The identity map induces a unitary transformation F : A → B, due to the Plancherel's formula, Proposition 7.2. When A is commutative, the quintuple (A, B, F, d, τ ) is a canonical fusion bialgebra. Proof. The Schur product property follows from Theorem 7.1. The modulo conjugation property holds, as the duality map is an anti-linear *-isomorphism. The Jones projection property holds, as F(1 1 ) is the identity of B. Moreover, 1 1 is a minimal central projection and d(1 1 ) = 1, so (A, B, F, d, τ ) is a canonical fusion bialgebra. Following the well-known correspondence between subfactor planar algebras and a rigid C * tensor category with a Frobenius *-algebra, we reformulate the subfactorization of a fusion bialgebra as follows: Definition 7.3. A fusion bialgebra is subfactorizable if and only if it is the quintuple (A, B, F, d, τ ) arisen from a Frobenius *-algebra in a rigid C * tensor category constructed above. There is another way to construct the dual B using the the dual of D w.r.t. the Frobenius algebra γ, which is compatible with the Fourier duality of subfactor planar algebras. The dualD of D w.r.t. the Frobenius algebra (γ, m, η) is defined as the γ − γ bimodule category over D, with the unit object γ. The dual Frobenius *-algebra of (γ, m, η) is (γ,m,η),γ = γ ⊗ γ,m = 1 γ ⊗ ∪ γ ⊗ 1 γ ,η = m * . Then the quintuple from the Frobenius algebraγ ofD is dual to the quintuple from the Frobenius algebra γ of D. In particular, the C * -algebra B can be implemented by homD (γ,γ) with ordinary multiplication and adjoint operation. Let C be a unitary fusion category. Take D = C C , then D has a canonical Frobenius algebra (γ, m, η). Here . . , X m } is the set of irreducible (or simple) objects of C , X 1 is the unit and where FPdim(X j ) is the Frobenius-Perron dimension of X j , FPdim(C ) = m j=1 FPdim(X j ) 2 is the Frobenius-Perron dimension of C and ON B(X j , X k ; X s ) is an orthonormal basis of hom C (X j ⊗X k , X s ); and FPdim(C ) 1/4 η ∈ hom D (1, γ) is the canonical inclusion (in particular,γ is the image of the unit of C under the action of the adjoint functor of the forgetful functor from Z(C ) to C ). Its dualD is isomorphic to the Drinfeld center Z(C ) of C as a fusion category. This construction is well-known as the quantum double construction. Consequently, Proposition 7.4. Let R be the Grothendieck ring of a unitary fusion category C . Then the canonical fusion bialgebra associated to the fusion ring R is isomorphic to the one (A, B, F, d, τ ) associated to the canonical Frobenius algebra γ of C ⊗ C in the quantum double construction. So it is subfactorizable. Proof. Following the notations above, we take x j := FPdim(X) −1 1 Xj 1 Xj . 
Then where and # are the ordinary multiplication and adjoint operator on the commutative C * -algebra A = hom C (γ, γ) respectively, δ j,k is the Kronecker delta and N s j,k = hom C (X j ⊗ X k , X s ). Therefore, the fusion bialgebra associated to the Grothendieck ring is isomorphic to the fusion bialgebra arisen from the canonical Frobenius algebra (γ, m, η) of C C in the quantum double construction. So it is subfactorizable. Remark 7.5. To encode the fusion rule of C as the convolution on A exactly, our normalization of the Frobenius algebra (γ, m, η) is slightly different from the usual one identical to planar tangles in planar algebras, see e.g. Equation Proof. Applying the quantum Schur product theorem, Theorem 7.1, to the Frobenius algebra (γ,m,η) of the Drinfeld center Z(C ), we obtain the Schur product property on B. We obtain an equivalent statement on A as follows (see another equivalent statement in Proposition 8.3): Let (A, B, F, d, τ ) (or (A, * , J, d, τ ) as in Remark 2.14) be the canonical fusion bialgebra associated with the Grothendieck ring R of the unitary fusion category C . Then Proof. It follows from Propositions 3.6 and 7.6. We give a second proof without passing through the Drinfeld center Z(C ). The last inequality follows from reflection positivity of the horizontal reflection, namely the dual functor on C . Remark 7.8. Let us mention [11] which contains a reformulation of our first proof together with a discussion on some integrality properties of the numbers appearing in the Schur product criterion. In particular, if Grothendieck ring R is commutative, then B is commutative. There is a one-to-one correspondence between minimal projections P j in B and characters χ j of R, j = 1, 2, . . . , m: Take P j * B P k = m s=1N s j,k P s , thenN s j,k ≥ 0, due to the Schur product property on B. The dual of the fusion ring R is independent of its categorification. The Schur product property may not hold on the dual of a fusion ring in general. Therefore, the Schur product property is an analytic obstruction of unitary categorification of fusion rings. We discuss its applications in §8. Similarly, Young's inequality and sumset estimates are also analytic obstructions of unitary categorification of fusion rings. Applications and Conclusions In this section, we show that the Schur product property on the dual is an analytic obstruction for the unitary categorification of fusion rings. Furthermore, this obstruction is very efficient to rule out the fusion rings of high ranks (we apply it on simple integral fusion rings). The inequalities for the fusion coefficients (Proposition 8.1) in the next subsection are essential for finding new fusion rings more efficiently. Upper Bounds on the Fusion Coefficients. In this subsection, we obtain inequalities for fusion rings from the inequalities proved in previous sections. for any t ≥ 1; Let (A, B, F, d, τ ) be the fusion bialgebra arising from the fusion ring A. By Theorem 5.11, we have for any 1/r + 1 = 1/p + 1/q, If r < ∞, then we obtain that If r = ∞, then we have In Inequality (18), let r = 2, p = 1, q = 2, we have This proves (1). In Inequality (19), let p = t and q = t t−1 for any t ≥ 1. Then This shows (2) is true. Take p = q = 2 in Inequality (19), we have N j,k ≤ d(x ). By Equation (1), we have This indicates that (4) is true. By Theorem 5.11 again, we have Note that j, k, , t can be interchanged, we see (5) is true. Proposition 8.2. Let A be a fusion ring. Suppose that the fusion bialgebra arising from A is self-dual. 
Let S be the S-matrix associated to A. Then we have the following inequalities: Proof. It follows from the Hausdorff-Young inequalities. 8.2. Schur Product Property Reformulated. In this subsection we reformulate Schur product property (on the dual) using the irreducible complex representation of the fusion algebra, which in the commutative case, becomes a purely combinatorial property of the character table. Note that Proposition 3.3 states that if the fusion ring A is the Grothendieck ring of unitary fusion category, then Schur product property holds on the dual of A, so it can be seen as a criterion for unitary categorification. Proposition 8.3 (Non-Commutative Schur Product Criterion). The Schur product property holds on the dual of a fusion ring/algebra A with basis {x 1 = 1, . . . , x r } if and only if for all triple of irreducible unital *-representations (π s , V s ) s=1,2,3 of the fusion ring/algebra A over C, and for all v s ∈ V s , we have Proof. Let A be a fusion ring/algebra with basis {x 1 = 1, . . . , x r } and (A, B, F, d, τ ) the fusion bialgebra arising from A. By Proposition 3.6 and the fact that d is multiplicative, the Schur product property holds on B if and only if for all α s,i ∈ C. Now let M i be the matrix (N i k,j * ) which is also (N k i,j ) by Frobenius reciprocity, so that M i is the fusion matrix of x i . Let u s be the vector (α s,i ). Then Then the criterion is equivalent to have for all u s ∈ C r . Recall that the map π : x i → M i is a unital *-representation of A. So Equation (20) implies Equation (21). On the other hand, π is faithful, so Equation (21) implies Equation (20). Assume that the fusion ring/algebra A is commutative, then for all i, x i x i * = x i * x i , so that the fusion matrices M i are normal (so diagonalizable) and commuting, so they are simultaneously diagonalizable, i.e. there is an invertible matrix P such that P −1 M i P = diag(λ i,1 , . . . , λ i,r ), so that the maps π j : M i → λ i,j completely characterize the irreducible complex representations π j of A. We can assume that π 1 = d, so that λ i,1 = d(x i ) = M i . Proof. Immediate from Proposition 8.3, because here the irreducible representations are one-dimensional, so that there we have v * s π s (M i )v s = v s 2 π s (M i ). In order to test the efficiency of Schur product criterion, we wrote a code computing the character table of a commutative fusion ring/algebra and checking whether Schur product property holds (on the dual) using Corollary 8.5. The next two subsections presents the first results. 8.3. Fusion Algebras of Small Rank. Ostrik [32] already classified the pivotal fusion category of rank 3. In this section we would like to show how efficient is Schur product criterion in this case. We will next consider two families of rank 4 fusion rings/algebras found by David Penneys and his collaborators 1 [35], and finally look to a family of rank 5 fusion rings/algebras. Recall [32, Proposition 3.1] that a fusion ring A of rank 3 and basis {x 1 = 1, x 2 , x 3 } satisfies either x * 2 = x 3 and then is CC 3 , or x * i = x i and then is of the following form (extended to fusion algebras): with m, n, p, q ∈ R ≥0 and m 2 + n 2 = 1 + mq + np (given by associativity). Note that x 3 x 2 = x 2 x 3 by Frobenius reciprocity, so that the fusion algebra is commutative. 
We can assume (up to equivalence) that m ≤ n, and then n > 0 (because if n = 0 then m = 0 and the above associativity relation becomes 0 = 1, contradiction), so that p = (m 2 + n 2 − 1 − mq)/n; and it is a fusion ring if and only if in addition m, n, p, q ∈ Z ≥0 and n divides (m 2 − 1 − mq). x + m The matrix M i is self-adjoint thus its eigenvalues (and so the roots of χ i ) are real. By using [12, Theorem A.4], we can deduce the following character table: We observe that about 30% of over 10000 samples can be ruled out by Schur's criterion 2 . Note that Ostrik used the inequality in [32, Theorem 2.21] to rule out some fusion rings. See Figure 2 to visualize the efficienty of Schur product criterion and Ostrik's criterion for this family. Note that Ostrik's criterion works for the fusion rings only (not algebras 3 ) and is no more efficient for higher ranks, whereas Schur product criterion does (see Subsection 8.4). Then, let us mention two families (denoted K 3 and K 4 ) of fusion algebras of rank 4 with self-adjoint objects provided by David Penneys and his collaborators [35]. Visualize the obstructions on Figures 3 and 4. Finally, let us consider the family of fusion rings of rank 5 with exactly three self-adjoint simple objects. By Frobenius reciprocity, the fusion rules must be as follows (with 16 parameters): 2 It is nontrivial to characterize the set of all the triples (m, n, q) for which Schur product property (on the dual) does not hold. Using the above character table together with Theorem 8.5 and computer assistance, for q, n, m ∈ Z, 0 ≤ q ≤ 30, 1 ≤ n ≤ 30 and 0 ≤ m ≤ n, there are exactly 14509 fusion bialgebras (resp. 542 fusion rings), and among them, 4757 (resp. 198) ones can be ruled out from subfactorization (resp. unitary categorification) by Schur product criterion. 3 Consider the (δ 1 , δ 2 )-Bisch-Jones subfactor, its 2-box space provides a fusion algebra in this family with (m, n, q) = (0, (δ 2 2 − 1) which is often in the colored area of the figure for Ostrik's criterion, for example if (δ 1 , δ 2 ) = ( √ 2, 10/3) then (m, n, q) = (0, √ 91/3, 5). Let us also mention here that for a fusion ring, subfactorizable is strictly weaker than unitarily categorifiable, because if (δ 2 1 , δ 2 2 ) = (6 + 2 √ 6, 2) then (m, n, q) = (0, 1, 4), which is ruled out from pivotal categorification by Ostrik's paper [32]. Figure 2. Rank 3: for q = 5, the set of (m, n) such that Schur product property on the dual (resp. Ostrik's inequality) does not hold is (numerically) given by the right (resp. left) figure (where, for clarity, neither m ≤ n nor m 2 + n 2 − 1 − mq ≥ 0 is assumed). About the right one, there are two areas, one (at the bottom) is finite, the other infinite; moreover, the projection of these two areas on the m-axis overlap around m = q. Each area corresponds to the application of Theorem 8.5 on one column. The form appears for all the samples of q we tried, so it is not hard to believe that it is the generic form, and in particular that Schur product property (on the dual) does not hold if q + 1 ≤ m ≤ n and n ≥ 2q + 2, with m, n, q ∈ R ≥0 (so that the corresponding fusion bialgebras admit no subfactorization); it should be provable using the given character table (we did not make the computation). such that n k i,j ∈ Z ≥0 , and s n s i,j n t s,k = s n s j,k n t i,s (associativity). We found (up to equivalence) exactly 47 ones at multiplicity ≤ 4 (by brute-force computation), 4 of which are simple. The Schur product property on the dual (resp. 
The Schur product property on the dual (resp. Ostrik's inequality) does not hold for exactly 6 (resp. 1) of the 47, and for exactly 2 (resp. 1) of the 4 simple ones. The Schur product criterion may be more efficient at higher multiplicity. Here are the two simple ones on which the Schur product property on the dual (and Ostrik's inequality) holds (note that they are also of Frobenius type).

Figure caption: Same convention as above (to simplify, a is not assumed non-negative), with g = 10; it should be the generic shape for fixed g. Ostrik's inequality always holds, so the left figure is empty. In the figure for the Schur product criterion on the right, the structure is similar to Figure 2: one finite area at the bottom, one infinite area, and the projections of both on the b-axis should overlap around b = g.

Proof. Let G be a perfect group and let π be a one-dimensional representation of G. By assumption, every g ∈ G is a product of commutators, but π(G) is abelian (because π is one-dimensional), so that π(g) = π(1). It follows that π is trivial. Now assume that every one-dimensional representation is trivial, and consider the quotient map p : G → Z with Z = G/[G, G], which is abelian. Then p induces a representation π of G with π(G) abelian, so that π is a direct sum of one-dimensional representations. It follows by assumption that Z = π(G) is trivial, which means that G is perfect. This proposition leads us to call perfect a fusion ring with m_1 = 1. Note that a non-perfect simple fusion ring is given by a prime order cyclic group.

The fusion ring A is called of Frobenius type if FPdim(A)/d(x_i) is an algebraic integer for all i; if A is integral, this means that d(x_i) divides FPdim(A). Kaplansky's 6th conjecture [19] states that for every finite dimensional semisimple Hopf algebra H over C, the integral fusion category Rep(H) is of Frobenius type. If in addition H has a *-structure (i.e. is a Kac algebra), then Rep(H) is unitary. For a first step in the proof of this conjecture, see [18, Theorem 2]. Note that there exist simple integral fusion rings which are not of Frobenius type (see Subsection 9.2). The integral simple (and perfect) fusion rings of Frobenius type are classified in the following cases (with FPdim = p^a q^b or pqr, by [10]), with computer assistance, significantly boosted by Proposition 8.1: [..., [6,2], [10,1], [11,1], [15,2], [24,1]].

Question 8.12. Are there only finitely many simple integral fusion rings of a given rank (assuming Frobenius type and perfect)? Is the above list the full classification at rank ≤ 6? If the Schur product property (on the dual) is assumed to hold, is it full at rank ≤ 8?

Let us write here the fusion matrices and character tables for the first fusion ring ruled out above, and for the two which were not. Then the use of the inequalities in Proposition 8.1 boosted the computation, allowing us to extend the bounds significantly. Its character table is the following: It is possible to see why it was ruled out by the Schur product criterion by observing this character table (in particular its last column) together with Corollary 8.5:

Remark 8.13. Here we applied Corollary 8.5 by using three times the same block (i.e. irreducible representation, or column here), but it is not always possible. For example, the simple fusion ring of type [[1,1], [5,2], [8,2], [9,1], [10,1]] (the one not given by PSL(2,9)) required two blocks to be ruled out.
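For integral fusion rings, the Frobenius-type and perfectness conditions used above reduce to elementary checks on the Frobenius-Perron dimensions: d(x_i) must divide FPdim(A) = Σ_i d(x_i)², and "perfect" is read here as m_1 = 1, i.e. the unit is the only basis element of dimension 1 (as in the type notation above). A minimal sketch under those readings, again with our N[i][j][k] = n_{i,j}^k layout:

```python
import numpy as np

def fp_dimensions(N):
    """Perron-Frobenius eigenvalue of each matrix of structure constants."""
    N = np.asarray(N, dtype=float)
    return [max(np.linalg.eigvals(M).real) for M in N]

def is_frobenius_type_integral(N):
    """Integral case: does every d(x_i) divide FPdim(A) = sum_i d(x_i)^2 ?"""
    dims = [int(round(d)) for d in fp_dimensions(N)]
    fpdim = sum(d * d for d in dims)
    return all(fpdim % d == 0 for d in dims)

def is_perfect(N, tol=1e-6):
    """m_1 = 1: exactly one basis element of Frobenius-Perron dimension 1."""
    return sum(abs(d - 1.0) < tol for d in fp_dimensions(N)) == 1
```

For the type [[1,1], [5,2], [8,2], [9,1], [10,1]] mentioned in Remark 8.13, for instance, FPdim = 1 + 2·25 + 2·64 + 81 + 100 = 360 and every listed dimension divides 360, consistent with it appearing in a Frobenius-type classification.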
Screening for Mild Cognitive Impairment: Comparison of “MCI Specific” Screening Instruments Background: Sensitive and specific instruments are required to screen for cognitive impairment (CI) in busy clinical practice. The Montreal Cognitive Assessment (MoCA) is widely validated, but few studies compare it to tests designed specifically to detect mild cognitive impairment (MCI). Objective: Comparison of two “MCI specific” screens: the Quick Mild Cognitive Impairment screen (Qmci) and the MoCA. Methods: Patients with subjective memory complaints (SMC; n = 73), MCI (n = 103), or dementia (n = 274) were referred to a university hospital memory clinic and underwent comprehensive assessment. Caregivers without cognitive symptoms were recruited as normal controls (n = 101). Results: The Qmci was more accurate than the MoCA in differentiating MCI from controls, with an area under the curve (AUC) of 0.90 versus 0.80, p = 0.009. The Qmci had greater, albeit non-significant, accuracy than the MoCA in separating MCI from SMC (AUC 0.81 versus 0.73), p = 0.09. At its recommended cut-off (<62/100), the Qmci had a sensitivity of 90% and specificity of 87% for CI (MCI/dementia). Raising the cut-off to <65 optimized sensitivity (94%) while reducing specificity (80%). At <26/30 the MoCA had better sensitivity (96%) but poor specificity (58%). A MoCA cut-off of <24 provided the optimal balance. Median Qmci administration time was 4.5 (±1.3) minutes compared with 9.5 (±2.8) for the MoCA. Conclusions: Although both tests distinguish MCI from dementia, the Qmci is particularly accurate in separating MCI from normal cognition and has shorter administration times, suggesting it is more useful in busy hospital clinics. This study reaffirms the high sensitivity of the MoCA but suggests a lower cut-off (<24) in this setting. INTRODUCTION As society ages, the prevalence of cognitive impairment (CI) is expected to rise [1,2], resulting in increased numbers of older people presenting with memory complaints. Memory loss is a spectrum that ranges from subjective memory complaints (SMC), through MCI, to dementia. The Montreal Cognitive Assessment (MoCA) is a well-established cognitive screen, highly sensitive at differentiating MCI from normal cognition and dementia [9], and is widely validated against the most commonly used instrument, the Mini-Mental State Examination (MMSE) [10,11], in multiple settings [12][13][14], disorders [15][16][17] and languages [18][19][20][21]. Normative population data are also available [22,23]. The MoCA overcomes the high ceiling effects and educational bias associated with the MMSE [24], has fewer practice effects, and is available in multiple formats [24]. Although the MoCA is increasingly considered the short cognitive instrument of choice, its use as a screen presents some challenges. It is long, taking at least 10 minutes to complete [9], and its subtest scores are criticized for having low accuracy when predicting impairment in their respective cognitive domains [25]. Its specificity at its recommended cut-off (<26) is low, between 35% [12] and 50% [14], lower than that reported in the original validation cohort [9]. Recently, it has been suggested that lowering its cut-off will improve its specificity without adversely affecting its sensitivity [26]. The Quick Mild Cognitive Impairment screen (Qmci), presented in Supplementary Material 1, is a short screening test for CI that was developed as a rapid, valid, and reliable instrument for the early detection and differential diagnosis of MCI and dementia [27,28].
It correlates with the standardized Alzheimer's Disease Assessment Scalecognitive section, Clinical Dementia Rating scale and the Lawton-Brody activities of daily living scale [29]. Neither the MoCA nor the Qmci are usually compared to short screens designed specifically to detect MCI as well as dementia. Furthermore, little is known about the optimal cut-off scores for either instrument in patients referred to a clinic. Given this, we chose to compare the Qmci and MoCA, two "MCI specific" screening instruments, in a geriatric memory clinic population. Participants Patients referred for investigation of memory loss were recruited from a university hospital memory clinic in Cork City, Ireland, between March 2012 and December 2014. Alzheimer's disease and vascular type dementia were classified using the Diagnostic and Statistical Manual of Mental Disorders (4th-edition) [30]. Severity was correlated with the Reisberg FAST scale [31]. Early dementia was defined clinically as noticeable deficits with demanding organizational tasks, e.g., decreased job function (as opposed to 'prodromal Alzheimer's disease', which is synonymous with 'MCI due to Alzheimer's disease' and defined by biomarkers). Mild dementia was defined if assistance in complicated instrumental activities such as handling medications and finances etc. was required. MCI was diagnosed using Petersen's criteria [32] according to the National Institute on Aging-Alzheimer's Association workgroup diagnostic guidelines [6]. Frontotemporal dementia (FTD) was diagnosed clinically referencing the Lund-Manchester Criteria [33]. FTD MCI was diagnosed clinically with reference to proposed criteria [34]. Parkinson's disease dementia (PDD) and MCI were defined by the Movement Disorder Society Guidelines [35,36], Lewy body dementia (LBD) and MCI using the third report of the LBD Consortium [37]. SMC was defined as subjective non-progressive memory complaints in patients without objective cognitive deficits or functional decline, scoring 'poor' or 'fair' on a five-point Likert scale in response to the question "how is your memory?" [38]. Normal controls were recruited by convenience sampling from healthy participants, usually caregivers, without cognitive problems accompanying the patients. Those with active depression (n = 23), aged <45 years (n = 22), declining consent (n = 3), with an unclear diagnosis (n = 21), unable to communicate in English (n = 2), or with resolving delirium in patients recently discharged from hospital (n = 2), were excluded. Depression was excluded clinically and screened with the Geriatric Depression Scale short-form [39] (cut-off ≥7, to optimize specificity [40]). Functional level was measured clinically with the assistance of the Barthel Index [41]. Unless there was co-existing physical disability, all patients diagnosed with SMC or MCI had a normal Barthel Index score of 20/20. Outcome measures The Qmci has six subtests, covering five domains: orientation, registration, clock drawing, delayed recall, verbal fluency (VF) (a test of semantic verbal fluency, e.g., naming of animals within one minute) and logical memory (LM) (testing immediate verbal recall of a short story) [27,28]. Scored out of 100 points, it has a median administration time of 4.24 minutes [28]. The recommended cut-off score for CI (MCI or dementia) is <62 [42]. 
The MoCA is scored out of 30 points and has seven subtests, covering five cognitive domains: visuospatial/executive function, naming, memory, attention, language, abstraction, delayed recall and orientation [9,25]. For screening in clinics, where high sensitivity is required, the established MoCA threshold of <26 is suggested [9] although a lower threshold (<24) may have better predictive value [24]. Data collection Consecutive referrals underwent a comprehensive work-up including history, physical examination, laboratory testing, neuropsychological assessment, and neuroimaging, usually over two sessions, approximately six months apart, to maximize the accuracy of the final diagnosis. Two informant-rated assessments, the 8-item AD8 questionnaire [43,44] and the Informant Questionnaire on Cognitive Decline in the Elderly-short form [45], were used to inform the diagnosis. Cognitive screening with the Qmci and MoCA was performed in a random counterbalanced order, alternating which of the two tests was scored first to reduce learning or fatigue effects, approximately one hour before consultant review, by two independent trained raters, blind to each other and the final diagnosis. Alternate validated versions of VF and LM were used for the Qmci to reduce learning effects [46]. Normal controls underwent a similar comprehensive review but did not undergo laboratory testing or neuroimaging and few were available for a second evaluation. The study adhered to the tenets of the Declaration of Helsinki. Ethics approval was obtained from the Clinical Research Ethics Committee of the Cork Teaching Hospitals and where possible participants provided written consent; assent was obtained from the relatives or caregivers of individuals who were felt to lack capacity in accordance with current Irish law. Analysis Data were analyzed using SPSS 20.0. The Shapiro-Wilk test was used to test normality and found that the majority of data were non-parametric. These were compared using the Mann-Whitney U test. Analysis of covariance (ANCOVA) was used to control the results of analysis for age and education. Accuracy was assessed with receiver operating characteristics (ROC) curves, compared with the Hanley method [47]. Binary logistic regression was used to control ROC curves for the effects of age and education. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for all tests at different cut-off points and by age (≤75 and >75 to balance numbers between groups) and education (<12 and ≥12, mean education in the United Kingdom and Ireland [48]). RESULTS In total, 551 participants were included: 73 with SMC, 103 with MCI, 274 with dementia, and 101 normal controls. Of those with MCI, 79% (n = 81) were amnestic type MCI and 21% (n = 22) were non-amnestic; 60% (n = 62) had deficits in a single domain and 35% (n = 36) in multi-domains. Five could not be clearly identified as single or multidomain. The median age of participants was 76 y (interquartile range, IQR ± 12) and the majority were female (n = 363, 66%). Patients with dementia (median age of 77 ± 10 years) were significantly older than those with SMC (72 ± 11, p < 0.001), MCI (76 ± 13, p = 0.03) and normal controls (74 ± 14, p < 0.001). The median time in education was 12 ± 4 y. Those with dementia had spent less time in education (11 ± 3 y) compared to those with SMC (12 ± 4 y, p = 0.07), MCI (13 ± 5 y, p < 0.001), or controls (13 ± 4 y, p < 0.001). 
There was no significant difference in age (p = 0.82) or education (p = 0.16) between those with SMC and controls. Patients with MCI were of a similar age to those with dementia (p = 0.06) but were significantly older than controls (p = 0.01); they had also received more time in education than patients with dementia (p < 0.001) and normal controls (p < 0.001). To control for age and education, ANCOVA was used to test differences between participant groups. This confirmed a statistically significant difference in Qmci scores between all three groups (controls, MCI, and dementia), irrespective of age or education, F(2,333) = 311.96, p < 0.001, partial η 2 = 0.65. A similar effect was found for the MoCA: F(2,333) = 190.20, p < 0.001, partial η 2 = 0.53. The Qmci screen scores had a "stronger" difference between the three groups compared to the MoCA, based on a higher value of the partial etasquared (effect size). The majority of patients with dementia were classified as early to mild stage dementia (n = 201, 73%). Participant characteristics, median test scores according to diagnosis and the prevalence of MCI and dementia subtypes are presented in Table 1. Comparing the accuracy of the tests at differentiating normal controls from MCI showed that the Qmci had significantly greater accuracy, area under the curve (AUC) of 0.90 (95% confidence interval: 0.86-0.94) than the MoCA, AUC of 0.80 (95% confidence interval: 0.74-0.86), p = 0.009. The Qmci was also significantly more accurate than the MoCA in separating normal controls from patients with CI (i.e., MCI and dementia), an AUC of 0.94 versus 0.90 respectively, p = 0.04. In their ability to discriminate SMC from MCI, the Qmci had better (AUC 0.81) accuracy than the MoCA (AUC 0.73), p = 0.09, albeit a non-significant difference. Both instruments had similar, excellent accuracy at differentiating MCI from dementia (AUC of 0.95 versus 0.91 respectively, p = 0.2), and patients with SMC from CI (AUC of 0.97 versus 0.93 respectively, p = 0.23). Both were poor at discriminating normal controls from SMC, (p = 0.28). ROC curves are presented in Fig. 1. Correcting the ROC curves for the effects of age and education showed that the Qmci more accurately differentiated MCI from normal controls (AUC of 0.94; 95% confidence interval: 0.90-0.97) compared with the MoCA (AUC of 0.84; 95% confidence interval: 0.77-0.90), z = 2.76, p = 0.006, see Fig. 2a. The Qmci was also significantly better at separating MCI from dementia (AUC 1.00; 95% confidence interval: 0.998-1.00) than the MoCA (AUC of 0.978; 95% confidence interval: 0.96-0.99), a small but statistically significant difference, z = 2.69, p = 0.007, see Fig. 2b. The ability of both instruments to separate normal controls from CI was then assessed at different cut-off scores. Patients with SMC were analyzed separately. At their established cut-offs, <62 for the Qmci and <26 for the MoCA, the Qmci had a sensitivity of 90% and specificity of 87% (PPV of 0.96, NPV 0.70) for CI, compared to 96% sensitivity and 58% specificity (PPV of 0.89, NPV 0.80) for the MoCA (<26). At these cut-offs the MoCA had a false positive rate of 11% compared to 4% for the Qmci; the MoCA misclassified 42/101 (42%) of controls as having CI compared to 13/101 (13%) with the Qmci. Increasing the Qmci cut-off to <65 improved the sensitivity (94%) but reduced the specificity (80%). 
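Two different "false positive" figures appear in this paragraph, and they are consistent with two different denominators: the 11% and 4% values match the share of positive screens that are false (1 − PPV, i.e. 1 − 0.89 and 1 − 0.96), while 42/101 and 13/101 are the share of controls misclassified (1 − specificity). For illustration only, the sketch below shows how the AUC and these cut-off metrics could be computed for data of this kind; the function and variable names are ours, and the published analysis was run in SPSS, not Python.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def screening_accuracy(scores, impaired, cutoff):
    """scores: test scores (lower = worse); impaired: 1 = MCI/dementia, 0 = control.
    A screen is counted as positive when the score falls below `cutoff`."""
    scores = np.asarray(scores, dtype=float)
    impaired = np.asarray(impaired, dtype=int)
    auc = roc_auc_score(impaired, -scores)   # lower scores predict impairment
    positive = scores < cutoff
    tp = np.sum(positive & (impaired == 1))
    fp = np.sum(positive & (impaired == 0))
    tn = np.sum(~positive & (impaired == 0))
    fn = np.sum(~positive & (impaired == 1))
    return {"AUC": auc,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}
```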
Reducing the MoCA cut-off for CI to <24 yielded the optimum sensitivity and specificity for the test, 89% and 83% respectively, and reduced the false positive rate to 5%. Sensitivity, specificity, PPV, and NPV are presented in Table 2. Adjusting for age and education showed that the Qmci was most sensitive and specific for CI in patients with less education (<12 years), with a sensitivity and specificity of 99% and 85% for those aged ≤75, and 97% and 74% respectively for those aged >75 years. Sensitivity was lowest for younger patients aged ≤75 with ≥12 years of formal education (74%). The MoCA had similarly excellent sensitivity (99%) for older patients (>75) with less education (<12 years) but very low specificity (37.5%). The MoCA had greater sensitivity for those with more education. These values, adjusted for age and education, are presented in Table 3. Reanalysis of the data comparing the ability of the tests, at their established cut-offs, to differentiate SMC from MCI and dementia showed similar results; see Supplementary Material 2. DISCUSSION As populations age, more patients are presenting to increasingly busy outpatient clinics, many of which are under-resourced [49], necessitating the use of short instruments to identify MCI and monitor progression to dementia. Prompt diagnosis is particularly important as new management strategies emerge [50,51]. With multiple instruments available for MCI [52] and dementia [53], choosing one instrument is challenging. This paper explores the accuracy, sensitivity, and specificity of the Qmci and MoCA in their ability to detect CI (MCI and dementia) and differentiate normal controls from SMC, MCI, and dementia. The results suggest that the Qmci is briefer than the MoCA and particularly accurate in distinguishing MCI from controls. Both instruments had excellent accuracy in separating MCI from dementia, and in separating normal cognition and SMC from MCI and CI. The results reaffirm the high sensitivity of the MoCA but show that the Qmci has excellent sensitivity and specificity. The established cut-off scores did not provide the highest sensitivity and specificity for either instrument. At the widely used cut-off of <26 [9], the MoCA had 96% sensitivity but only 58% specificity for detecting CI, compared to 90% and 87% respectively for the Qmci at a cut-off of <62 [42]. The poor specificity of the MoCA at this cut-off is similar to results published elsewhere [12,15,17,24,26]. Specificity improved when the cut-off was lowered, and from these data the optimal cut-off appears to be that suggested by Damian et al., <24 [24]. The Qmci was associated with fewer false positive results for CI, 4% at <62, compared to 11% for the MoCA at <26. At this cut-off, a large percentage of controls (42%) screened positive using the MoCA. While high sensitivity is desirable for any screening test, false positive rates of this magnitude may result in large numbers undergoing unnecessary investigation, negating the purpose of screening. Similar results were found for identifying those with SMC. Indeed, neither instrument was accurate in distinguishing SMC from normal controls, which reflects the challenges in defining this condition [54]. Although all patients received interval assessment, this duration may not have been sufficient to see progression in those diagnosed with SMC. (Table footnotes: cut-off for cognitive impairment selected from [42]; * cut-off for cognitive impairment selected from Nasreddine et al. [9]; ** cut-off for cognitive impairment selected from Damian et al. [24]; *** cut-off for cognitive impairment selected from Luis et al. [12]; **** cut-off for mild cognitive impairment selected from Freitas et al. [26].) This study also reaffirms that cognitive instruments require adjustment for age and education. The Qmci had low sensitivity in those with more time in formal education (≥12 years) and the MoCA low specificity in those with less time in formal education (<12 years). This is similar to other studies demonstrating that established MoCA cut-offs lack accuracy, particularly specificity, among older adults and those with less time in formal education [22]. The study suggests several potential advantages of the Qmci over the MoCA in this clinic sample. The Qmci is more efficient because it takes half the time to complete compared with the MoCA. The MoCA had significant floor effects (a median score of two points in severe cases), making it particularly difficult for those with severe dementia to complete. As the Qmci is scored from 100 points, each subtest provides more information. This is exemplified by the scoring of the clock. Although requiring more interpretation, the scoring of the Qmci clock provides more detailed information and contributes more to the final test score. The Qmci also incorporates fewer subtests that require normal vision than the MoCA. Visual impairment affects the performance of subjects on cognitive testing, particularly in older adults [55]. Visual tasks account for 27% of the MoCA's overall score (visuospatial/executive and naming) compared with 15% for the Qmci (clock drawing). This said, both tests can be corrected to account for incomplete data, and recently a modified version of the MoCA for the visually impaired has been validated [56]. However, the elimination of subtests that require vision (naming, visuospatial and executive function) reduces the discriminating ability of the 'MoCA-BLIND', particularly its ability to differentiate MCI from controls [56]. Given that this is the principal advantage of the MoCA, this suggests it is overly weighted towards visual tests, an important consideration when assessing older adults [56]. The Qmci, on the other hand, derives its accuracy for identifying MCI from its delayed recall, VF, and LM subtests [28]. The main advantage of the MoCA is its sensitivity, the most important psychometric property for screening instruments [57], particularly in those with higher levels of education. The Qmci, however, provided an arguably better balance, particularly at a higher cut-off (<65). The MoCA is less weighted toward language, with 73% of the test requiring verbal skills versus 85% for the Qmci. The MoCA is validated widely, in different languages and clinical settings. Validation of the Qmci in other countries, languages, and cultures is now underway. The Qmci has been translated into several languages and is validated in Dutch [58]. Future validation should also include comparison with the MoCA and other screens for MCI in different clinical populations, such as the Addenbrooke's Cognitive Examination-III [59]. This study has a number of limitations. The sample size was small, underpowering the study, potentially causing bias, and limiting it to a non-inferiority validation study. A power calculation, performed a priori, suggested that approximately 300 participants (normal controls and patients with MCI) would be required to show superiority of one instrument over the other in separating MCI from normal controls. An attempt was made to classify patients with a diagnosis and subtype where possible.
However, as this was a study in clinical practice, some dementia subtypes may have been misclassified. Although participants were screened in random order and alternative versions of tests were used, learning effects may have occurred. Patients with SMC were diagnosed clinically and no specific screening test such as the MAC-Q was used [60]. This said, a short Likert scale in response to a single question could substitute as a valid screen [38]. However, the type of instrument selected may affect the diagnostic accuracy and it is suggested that age-anchored reference questions provide the most utility [61]. As participants were a homogenous sample (older Irish Caucasians), attending a single center (a university hospital memory clinic), there is the potential that spectrum bias may reduce the external validity. These effects have been found for patients with MCI and are particularly affected by age and education [62]. As presented in Table 3, participants' age and education were seen to affect the sensitivity and specificity of both the MoCA and Qmci screen. Future validation studies, using age and education specific cut-offs, could be used to minimize this potential source of bias. Finally, the high prevalence of cognitive impairment (68% of the total sample) affects the ability to interpret the accuracy of tests [57]. However, the high prevalence seen and the characteristics of patients with dementia (significantly older than normal controls) reflect clinical practice. In summary, this is the first study to compare a short cognitive screen, designed specifically to differentiate MCI from normal and dementia with the MoCA in a "real-life" outpatient setting. While the MoCA overcomes many of the difficulties associated with the MMSE, particularly in those with high levels of education [63], the MoCA is long and has suboptimal specificity among older adults attending a memory clinic, particularly at its established cutoff (<26). As older adults represent the majority of patients who have cognitive screening performed for symptoms of memory loss, the Qmci may be a shorter and more accurate alternative, especially when used with a higher cut-off score. The MoCA may be better with a lower cut-off than the established score, particularly in older adults with high levels of education. Further research is required to confirm these findings and compare the Qmci and MoCA with other "MCI specific" instruments and in different clinical settings, particularly in primary care, where the brevity and usability of the Qmci is likely to be of most benefit. ACKNOWLEDGMENTS The Centre for Gerontology and Rehabilitation is supported by Atlantic Philanthropies, the Health Service Executive of Ireland, the Health Research Board of Ireland and the Irish Hospice Foundation. The authors would like to thank Dr. Brian Daly and the nurses in the assessment and treatment center, St. Finbarr's Hospital for their assistance. SUPPLEMENTARY MATERIAL The supplementary material is available in the electronic version of this article: http://dx.doi.org/ 10.3233/JAD-150881.
NARCISSISM IN PAULA HAWKINS' NOVEL THE GIRL ON THE TRAIN Narcissism is one of the most common disorders in psychology, yet it receives the least attention. This research therefore focused on the narcissistic disorder of one of the main characters, Megan Hipwell, in the novel The Girl on the Train by Paula Hawkins. The analysis was done by applying psychoanalytic theory, specifically on narcissism, one of the most common mental disorders in society. The research method was qualitative, requiring an in-depth analysis of the literary work according to the selected theories. The analysis aimed to find illustrations and evidence that the main character of The Girl on the Train displays narcissistic disorder. The research found that the main character suffered from narcissistic disorder, shown in a big ego, over-self-confidence, exploitation of interpersonal relationships, arrogance, and a deficient social conscience. The triggers were rationalization and projection. Introduction Many personality problems arise at the present time. Nearly 1 billion people live with mental disorders, and one person dies every 40 seconds due to suicide. This reminds us of the importance of increased attention to neglected mental health, especially during the Covid-19 pandemic, when almost all age groups in various countries were forced to adopt new habits that can be bad for mental health. Humans do have defence mechanisms that naturally help them deal with things that each individual cannot accept (Ihsan & Tanaya, 2019). Like antibodies, these defence mechanisms have limitations, so humans cannot rely on them for a long time. People who have suffered long-term abuse are more likely to be diagnosed with certain personality disorders (Kirsten, 2012). Multiple tests (such as blood tests) can identify physical diseases such as heart disease and diabetes; identifying a personality disorder, however, requires much closer attention to each individual (Tambunan, 2018). Through the psychology of literature, readers can gain knowledge about psychology by reading a literary work, in this case a novel. One of the famous psychological novels of the 21st century is The Girl on the Train. The researchers chose The Girl on the Train, a novel written by Paula Hawkins in 2015, because its exploration of psychological power is fascinating. The novel covers unconscious mental conditions and discusses the reality that hides behind one's fantasies of love and ownership, and this study aims to analyze the narcissistic disorder of the main character, Megan Hipwell. Paula Hawkins, the author of The Girl on the Train, is well known as the winner of the Mystery and Thriller category at the Goodreads Choice Awards in 2015. Besides fiction, she has also worked as a freelancer for several publications and wrote a financial advice book for women entitled Money Goodness. After her first novel, The Girl on the Train, she wrote another novel, Into the Water, released in 2017. Hawkins' novels have unique characteristics; most of her stories have exciting plots and complicated questions that make the audience curious and eager to reach the story's end. In addition, all her literary works are fascinating to analyze. Megan Hipwell has an intriguing personality that needs to be analyzed. At first glance, Megan looks like a happy wife with a loyal husband.
She is an interesting figure filled with life fantasies that can never be satisfied, and she is also an art gallery artist in a small town. Megan is accustomed to seeking solace elsewhere to fulfil her desires that her husband could not give her. She has drugged Scott during their marriage, a compulsive liar; she has an affair with another man, and she always tries to have an affair with her therapist too. In her mind there is always a thought of pleasure. The character perfectly shows the social behavior disorder of narcissism. Matters on personality disorders have become more widespread, people are becoming more and more anxious about these mental problems. People begin to randomly guess and diagnose who has this disease or what disease they might have. Narcissism is a psychological disorder. Personality Disorder Personality is defined as a collection of behavioural, cognitive, and emotional patterns (developed by biological and environmental factors). Although there is no universally accepted definition of personality, most theories focus on motivation and psychological interaction with the human environment. At the same time, personality disorder refers to people with mental problems or people who behave abnormally. Many things can cause personality disorders; from the environment or genetics (Kjennerud, 2014). In other words, genes and the environment are both crucial factors in the development of human thinking. Feist and Feist (2008) believe that those psychologically disturbed people are incapable of love and have failed to establish a union with others. Psychologically impaired people refer to people with personality disorders. People with mental illness do not receive enough love and cannot socialize themselves with other people. Lenzenweger and Clark say in Feist and Feist (2008), "When they encounter situations in which their typical behaviour patterns do not work, they are likely to intensify their inappropriate ways of coping, their emotional controls may break down, and unresolved conflicts tend to reemerge". Like other humans with typical personalities, people with personality disorders think they are normal. However, when they feel their situation is overwhelmed, they will do their best to solve their problems. People with personality disorders usually live in harsh environments that are mentally unacceptable. Nonetheless, the environment is not a single factor; it could be started from poor treatment of genetics and the environment, making the people's mentality better. If something disturbs his inner peace, he usually uses defence mechanisms. But in the long run, these defence mechanisms will be ineffective, and personality disorders will develop in their place. Narcissism In his psychoanalytic theory, Freud explains that narcissism is a theory of libido or sexual needs; libido is directed towards both oneself (ego-libido) and others (object-libido) (Adams, 2014). When in love, individuals prioritize others they love, but narcissists prioritize themselves. In short, narcissism is when the ego is much more deeply involved than usual. In the social psychology view of personality, narcissism uses social relationships to regulate self-concept and self-esteem. Narcissists do not pay attention to interpersonal intimacy, warmth, or other positive long-term relationships. They are still very good at building relationships and use these relationships to show popularity, success, and high status in the short term (Campbell et al., 2010). 
From a clinical and social personality point of view, narcissism includes aspects of maintaining self-esteem or self-improvement. They try to achieve personal goals without wanting to empathize with the interests of others around them. It can be seen from the selfish attitude or the tendency to use anything to enhance his persona. Narcissists tend to blame other situations or people if what they want is not achieved (Campbell et al., 2010). Narcissism is a personality characterized by excessive fantasies or behavior towards power, beauty, success, or ideal love, a great need to be admired by others, and a lack of empathy based on Diagnosis and Statistical Manual of Mental Disorder IV-R. The psychological approach reveals the novel characters' pattern to determine the novel's narcissism. Five criteria of narcissistic personality disorder: 1. Inflated self-image (e.g., displays cocky self-assurance and exaggerates achievements; seen by others as self-centered, haughty, and arrogant). 2. Interpersonal exploitativeness (e.g., used to enhance self and indulge desires) 3. Cognitive expansiveness (e.g., used to exhibit immature fantasies and redeem selfillusions 4. Insouciant temperament (e.g., manifesting a general air of nonchalance and imperturbability). 5. Deficient social conscience (e.g., disregarding conventional rules of shared social living, viewing them as naïve or inapplicable to self; revealing a careless disregard for personal integrity and an indifference to the rights of others) (Weiner & Craighead, 2010) In social conditions, narcissism can create a need for power over others. This situation is forming because the narcissistic individual needs to be appreciated, recognized, praised, and seen as achieving. This need reflects the narcissistic individual's dependence on external sources of gratification but rejects those external sources' consequences or responsibility (Campbell et al., 2010). Even so, society generally rejects individuals like this. There are many reasons. The narcissistic individual exaggerates his accomplishments, only wants to befriend those who admire him, resists criticism, is arrogant, aggressive, self-promoting, and disliked. There are four types of narcissism: individuals who love themselves, individuals who love themselves in the past, individuals who love themselves in the future, and individuals who love individuals who used to be part of themselves (Adams, 2014). Most narcissists understood today are the self-loving type of all time. Even so, individuals who like themselves in the past -which means they don't love themselves now -and individuals who have huge aspirations about themselves in the future classified as narcissistic individuals. The narcissist may look solid, brimming with force and predominant. However, these people try to reduce their endurance by showing others the reality they cannot survive. Even if they lie when necessary, so that individuals applaud for them, in this way, they gather the energy for survival, and ultimately, they believe in their mistakes. Narcissism's role is practically something very similar to the vast majority of defence mechanism's roles: securing and serving the delicate self. It can be proved that they are worthy of attention, thus linking things they have or recognized by society. Narcissism has no specific knowledge of this problem; however, many young people and people in the mid-1920s are most prone to this problem. In any case, middle-age is when narcissism worsens. (Adams, 2014). 
Defence Mechanism We realize that humans have an instinct to always live like animals, but not only that, humans have something not only always alive, morals, loyalty, etc. Humans can consider morals and aesthetics, and it can be that only humans who enjoy moral and political status and dignity have rights (Ihsan & Tanaya, 2019). Humans somehow want morality, evaluation, etc., which are established by the citizens and make all the community veins agree with the truth and judgment. Citizens determine things based on human attitudes, which they impose as good attitudes and standardize human morality. However, different people are ignored by them. Those who are different do look at as bad people who are treated poorly by those around them. Therefore, humans have a defence mechanism to be free from mental destruction. It is what the writers intend to discuss in this research. The defence mechanism is an instrument made by the mind of the person that aims to make people feel comfortable in their environment, and the central defence is used to protect humans by keeping up unsatisfactory driving forces, emotions on the primary side of the human mind's consciousness (Cramer, 2000). Thus, the defence mechanism serves to control anxiety. Anxiety on a large scale can cause problems for these individuals, such as depression, and what is even worse is personality disorders (Ihsan & Tanaya, 2019). People have a great deal of anxiety, like uneasiness about their future, being separated from everyone else, being left, and tension for a vast scope can inconveniences the individual, like discouragement. The more terrible is a behavioral condition. Indeed, this defence mechanism either avoids or controls the human mind from being destroyed by all these anxieties. Every individual uses Defense mechanism instruments that are unknowingly performed by numerous individuals when they feel insecure about something. Individuals can't deliberately pick which Defense mechanism they will utilize or which Defense mechanism fits them better. The oblivious human perspective will figure out which Defense mechanism instrument will do with the person's character and what Defense mechanism the people need during that time. Researchers only took three mechanisms for this study of the ten mechanisms described by Freud. The defence mechanisms for the ego are as follows Projection and Rationalization. Projection Projection is a form of self-defense by dealing with disturbing anxiety by distorting the facts as if the guilty party is someone else, not himself. On the other hand, Projection is a defence mechanism that emerges when we share our weaknesses, problems, and mistakes with others (Cramer, 2000). Rationalization. A rationalization is a form of self-defence by making excuses to manipulate facts so that the actions taken make sense and can be accepted. We justify a thought or threatening action by persuading ourselves that there is a rational explanation for the view or activity. A psychoanalytic defence mechanism occurs when the ego does not accept the real motive of individuals' behaviour and replaces it with a hidden reason. Here the action is perceived, but the explanation that caused it is not. Behaviour reinterprets to look reasonable and acceptable (Weiner & Craighead, 2010). Research Method Megan Hipwells' The Girl on the Train is the object of the study. 
This analysis is done by applying psychoanalytic theory, especially in narcissism, which is the most common mental disorder in society, and people's consciousness is now lacking. The method is qualitative, requiring an in-depth analysis of the literary works of this study according to the selected theories. A descriptive qualitative research design has been carried out since the data are in words, phrases, sentences, and utterances. The data are of primary and secondary ones. The preliminary data are taken from the novel, and the auxiliary information is taken from other sources. The data for this study are collected by reading, identifying, interpreting, and counting citations in the novel. In addition, the data are analyzed based on the theory of narcissistic disorder. Results and Discussion Personality disorders are not another new issue in society. Regarding personality disorders, many people, for the most part, consider it a maniac or an odd individual. Indeed, individuals with Personality disorders do not generally appear to have issues with their minds. An individual with an ordinary appearance does not preclude that person is diagnosed with a psychological disorder. Therefore, it is harder to perceive mental illness than physical illness. Likewise, Personality Disorders are not identified with sociopaths, manslaughters, double personality, and so on. The minor simple things like over-self confidence can show a personality disorder with a specific classification, and individuals called that narcissism disorder. The following are the characteristics of the narcissistic disorder in the character Megan Hipwell. Over self-confident An individual has a high admiration for himself; he can be considered to have a narcissistic disorder. (Campbell et al., 2010) states that "Narcissism is associated with over self-confidence..…." The following citations represent her over self-confidence I find myself standing in front of my wardrobe, staring for the hundredth time at a rack of pretty clothes, the perfect wardrobe for a manager of a small but cutting-edge art gallery. Nothing in it says 'nanny' (Hawkin, 2015: 24). People who have narcissistic personality disorder feel that their social status is the highest. They feel special and always want to be privileged by others. The cause of a symptom is the level of confidence that is too high to maintain their existence. According to him, the clothes are not suitable for babysitters. Such evidence proves that Megan feels she is unique because she has a higher status than the nanny in dressing. It shows that she takes Nationally Accredited and indexed in DOAJ and Copernicus care of her appearance, makes herself look physically perfect, and becomes an outstanding individual compared to others. They further explain that individuals who like to preen, dress up, and want to admire themselves could be said to be narcissistic. The fact that Megan likes preening is shown in the following dialogue " I long for my days at the gallery, prettied up, hair done......" (Hawkin, 2015: 25). Narcissism is self-love, excessive concern for oneself, characterized by very extreme respect for oneself. Exploitation of the Interpersonal Relationships In narcissistic behaviour, interpersonal relationships mean exploiting others to achieve their own goals-women who are busy directing their narcissistic attitude to achieve the desired goal. The relation is considered as satisfying herself. 
"......jumble up all the men, the lovers, and the exes, but I tell myself that's OK because it doesn't matter who they are. it matters how they make me fell.....why can't they give it to me? " (Hawkin, 2015: 74). and "I was with a man who excited me, who adored me……I didn't need it to endure, or sustain. I just needed it for right then" (Hawkin, 2015: 221). She is an interesting figure filled with life fantasies that can never be satisfied. She is accustomed to seeking solace elsewhere to fulfil desires that her husband could not accomplish. She is just thinking about the pleasure. In normal conditions, almost all women choose to be faithful. Megan has a lot of faith in her husband's loyalty to their marriage. However, narcissistic individuals fail to build specific interpersonal relationships such as dating because they negatively impact weak commitment, infidelity, and high and unlimited sociosexuality (Campbell et al., 2010). Megan has indicated narcissism disorder seen through her affairs with several men in her life. "…. I saw him, and I wanted him, and I thought, why not? I don't see why I should have to restrict myself, lots of people don't. men don't." (Hawkin, 2015: 61). Even though Megan knows the man has already had a family, that does not stop her . Megan meets the man at a hotel. She is cautious because she knows if what she does is found out by Scott, bad things happen to her. It would be a disaster for the guy to cheat on. Significantly narcissistic is associated with dominating, vengeful behaviour. (Campbell, 2010). Whereas in the social psychology view of personality, narcissism uses social relationships to regulate self-esteem and self-concept. Narcissists do not focus on interpersonal intimacy, warmth, or other positive long-term relationships .... (Campbell et al., 2010) Big Ego Eugene states that "Anything other than the ego is narcissism…". When one thinks that nothing is more important than oneself, a big ego can lead to a narcissistic disorder: the bigger ego, the more difficult it is to become selfless. Most narcissists are people with big egos who try to impose their will on others. They usually do something necessary to meet their needs, such as feeling comfortable, happy, or anything that benefits them. The big ego of Megan seen on page 216 " I'm going to have to swallow my pride and my shame and go to him. He's going to have to listen. I'll make him" (Hawkin, 2015: 216). It means what Megan does to Kamal is one of the characteristics of a narcissist. Megan sees herself as a unique individual. She believes that her affairs are always more important than the other's affairs. Her attitude has a centered attitude towards her that ignores the people who are in the vicinity. It is caused by the perception from within themselves higher than others. She also hopes to be prioritized in terms of excellent and special treatment or unreasonable, meaning priority arises. Their demand are to be fulfilled automatically, and that is suitable for their expectations. Megan often feels she has the right to get good things that have advantages for her. Being Arrogant Women Megan is also an arrogant woman. The attitude is shown by Megan as she wants others to understand her suffering for Scott's behaviour. Still, she also wants to be seen as acceptable in her absence. This attitude gives rise to the thought that Megan feels excellent and capable even without Scott. "I can live without him, I can do without him just fine-but don't like to lose. It's not like me. 
None of this is like me. I don't get rejected. I'm the one who walks away" (Hawkin, 2015: 174). Megan's attitude in the above quote shows that she can go through life without Scott, which indirectly indicates she displays cocky self-assurance, making an Inflated self-image. According to Concini (Weiner & Craighead, 2010), inflated self-image is one of the five narcissistic characteristics. The narcissistic tendency will lead her to an extreme ego or me. In that condition, women are not easily conquered, defending their dignity, physically and psychologically. It is due to a stable level of consciousness in the appreciation of his weaknesses. Deficient Social Conscience Corsini mentions five criteria of narcissistic personality disorder, one of which is the deficient social conscience. It is found in the following excerpt. " I didn't want him to leave his wife, just wanted him to want to leave her. to want me that much " (Hawkin, 2015). Based on the data above quote, Megan cannot understand the feelings of others, especially understanding the sentiments of Anna, the wife of Tom. Narcissistic women have a centered attitude ignoring the people who are in the vicinity. Excessive confidence in the ability of self makes women feel narcissistic andhampered for sensitivity toward others. Megan wants Tom to leave his wife. Even though Megan knows Tom has already a child with Anna, that does not stop her from expecting Tom. Megan shows Deficient social conscience, disregards conventional rules of shared social living, revealing an indifference to the rights of others. Defence Mechanism Megan displays the typical narcissistic trait of repressing unwanted thoughts and memories. Narcissists have a variety of defence mechanisms at their disposal. There are two aspects in Defence Mechanism: projection and rationalization. Projection Megan's first projection is made towards Scott. Megan says that Scott is so tired all the time. She is not interested anymore. He cannot provide what she needs. Everything she thinks only about the baby (364-365). He is no longer available to him. That fact is the reason why she starts to find out another man who is known to him. It is Tom. Projections occur to protect the ego from guilt or fear/worry (Cramer, 2000). By projecting Scott, she tries to defend herself from Scott's judgment for her affair. Rationalization The rationalization is used by Megan when neurotic anxiety attacks her when Mac has realized that she is the person who kills Libby. Therefore, his rationalization is used by making several reasons why she kills her baby. She reveals that she does not mean this. A defence mechanism occurs when the ego does not accept the real motive of individuals' behaviour and replaces it with a hidden reason (Cramer, 2000). She hopes that Mac and Kamal do not blame her for this by doing a rationalization defence mechanism. Megan's thoughts indicate that she uses primary narcissistic defence mechanisms to cope with unwanted thoughts and memories. Conclusion Narcissism is a personality disorder caused by past mental abuse, and narcissists do their best to prove themselves superior. The narcissistic in the novel Girl On The Train can found in Megan Hipwell, an art gallery artist who ironically has a personality disorder. Her behaviours, such as her big ego, over-self confidence, Exploitation of Interpersonal Relationships, being arrogant woman, and deficient social conscience, clarify that Megan is a narcissist. 
Narcissism in the novel is also caused by the harassment of the main character by the environment. In real life, narcissists believe to be selfish, meaning they only care for them and put everyone after them. But it is these people who feel the most insecure about their existence. That is why these people try to protect their existence by rationalization and projection.
The improved and the unimproved: Factors influencing sanitation and diarrhoea in a peri-urban settlement of Lusaka, Zambia. Accounting for peri-urban sanitation poses a unique challenge because such settlements are high density and unplanned, with limited space and funding for conventional sanitation installation. To better understand users' needs and inform peri-urban sanitation policy, our study used multivariate stepwise logistic regression to assess the factors associated with use of improved (toilet) and unimproved (chamber) sanitation facilities among peri-urban residents. We analysed data from 205 household heads in 1 peri-urban settlement of Lusaka, Zambia on socio-demographics (economic status, education level, marital status, etc.), household sanitation characteristics (toilet facility, ownership and management) and household diarrhoea prevalence. Household water, sanitation and hygiene (WASH) facilities were assessed based on WHO-UNICEF criteria. Of particular interest was the simultaneous use of toilet facilities and chambers, an alternative form of unimproved sanitation, with a focus on all-in-one suitable alternatives. Findings revealed that having a regular income, a private toilet facility, improved drinking water and a handwashing facility were all positively correlated with having an improved toilet facility. Interestingly, both improved toilets and chambers showed increased odds of diarrhoea prevalence. Odds of chamber usage were also higher for females and for users of unimproved toilet facilities. Moreover, when toilets were owned by residents and hygiene was managed externally, use of chambers was more likely. Findings finally revealed higher diarrhoea prevalence for toilets with more users. Results highlight the need for a holistic, simultaneous approach to WASH for overall success in sanitation. To improve access to and increase peri-urban sanitation, this study recommends a separate sanitation ladder for high density areas which considers improved private and shared facilities, toilet management and all-inclusive usage (eliminating unimproved alternatives). It further calls for financial plans supporting access to basic sanitation for the urban poor, and for increased education on toilet facility models, hygiene, management and risk, to help residents choose and properly use facilities and so maximize the benefit of toilet use. Introduction Sustainable Development Goal (SDG) 6 focuses on universal access to improved drinking water and sanitation by the year 2030. Access to basic services such as water, sanitation and hygiene (WASH) is still low in high density peri-urban settlements, primarily because these are low-income, unplanned settlements with limited space and municipal provision [1]. Consequently, residents use a mix of improved and unimproved WASH facilities [2,3]. In the sub-Saharan nation of Zambia, WASH factors have been found to be responsible for 11.4% of all deaths [4]; only 67.7% and 40% of the population have access to improved drinking water and sanitation respectively [5]. In comparison to national statistics, peri-urban figures reveal that approximately 56% and as much as 90% of the peri-urban population lack access to safe water and sanitation facilities respectively [6]. Poor WASH has also been linked to the nation's annual cholera outbreaks, which usually emanate from rural fishing villages and peri-urban settlements [7].
During the 2017/2018 rain season, an outbreak of cholera emanating from the peri-urban resulted in 5,905 registered suspected cases, the majority of them (91.7%) from Zambia's capital city, Lusaka [8]. Approximately 70% of the city's population are peri-urban residents; the city is home to 37 peri-urban settlements [5]. Household WASH and sociodemographic data in one peri-urban settlement in Lusaka were collected in order to identify factors associated with household access to improved/unimproved WASH and inform future participatory action research among resident children and youth. As the peri-urban has been a common epicentre of diarrheal disease outbreaks, this article focuses on access to peri-urban sanitation. Key points of focus are commonalities, risk factors and plausible intervention areas. Of particular interest in this article is the nature of sanitation facility owned and/or used by the household, and the factors associated with the use of improved and/or unimproved sanitation facilities. Bearing in mind the 2030 target of universal access to basic/improved sanitation [9] rather than co-use between improved and unimproved facilities, the study took a unique assessment of the simultaneous use of improved toilet facilities and unimproved sanitation in the form of chambers: bucket, pan, plastic or other unsealed containers which are collected or disposed daily in toilets, by informal collectors, with solid waste or thrown as flying toilets [3,10]. As a major goal of meeting the SDG targets is the alleviation of disease risk, household diarrhoea prevalence was also assessed. Objectives of the study were therefore, to: (i) investigate peri-urban sanitation through determining the associations between household socio-demographic and WASH characteristics, and household sanitation facility, chamber use and diarrhoea prevalence; and (ii) narrow down and recommend plausible interventions focused towards attainment of SDG 6 in the peri-urban/ high density areas for the purpose of informing research, policy and WASH institutions. Methodology The study used an exploratory cross-sectional design, with data collected between September and October, 2018. A brief breakdown of research site selection and sampling procedure is given in Fig 1. A questionnaire and observation checklist were used for data collection (see S1 and S2 Appendices), and findings were analysed using multivariate logistical regression. The following is a detailed description of the research process. Research site A previous WASH assessment informed the selection of the research site (i.e., Stage 1 in Fig 1) [10]. The site was also 1 of 2 informal settlements cited as epicentres of the 2017/2018 cholera outbreak in Lusaka (i.e., Stage 2 in Fig 1) [8]. Within the settlement, 3 out of 13 health zones were selected for data collection (i.e., Stage 3 in Fig 1). The zones were selected in collaboration with a local youth group named Dziko Langa. The groups' decisions were informed by their findings from a photovoice exercise focused on assessing local WASH priorities. Photovoice required participants to take pictures and tell the story of local/peri-urban WASH [11]; the selected zones would also be sites for Dziko Langa's future WASH intervention through action research. Other than recommendations from group members, criteria for zone selection considered availability of WASH facilities, public services and distance from the main road. 
One of the zones housed the local hospital and several government schools, another housed the biggest market in the settlement and had the 2nd largest number of households among the 13 zones, and the final zone was further into the settlement, off the main road. This variation in development, facilities and population densities among the zones increased the likelihood of representativeness.

Sampling and sample size
The number of households in the settlement was 33,185 [10]; the selected zones housed 9,114 households (representing 27.5% of the settlement). Households were selected via systematic random sampling, with data collectors targeting every 5th house and marking each house after data collection to prevent duplication. Sampling commenced from an agreed intersection of the main road/boundary of each zone going into the interior, and zonal boundary markers were clearly defined to all data collectors. In cases where tenants lived in a cluster of houses with their landlords (a common occurrence in peri-urban Lusaka) [12], or where neighbours shared WASH facilities, the 5th household, regardless of who owned the WASH facilities, was the first priority for sampling, and the cluster sharing WASH facilities was considered as one household. This was done to avoid duplicating facilities. In several cluster cases, approached households referred data collectors to the landlord or neighbour in charge of the facility, stating that permission was needed in order to assess the facilities. The sampling goal was N = 500 for the overall WASH study; a sample size of N = 369 was achieved (i.e., Stage 4 in Fig 1). Purposive sampling was then applied to the collected data; the sampling criteria required households with toilets and information on all required variables (N = 205) (i.e., Stage 5 in Fig 1). Zambia's Fifth National Development Plan indicated that 10% of the peri-urban population had access to 'satisfactory' sanitation facilities [6]. More recent statistics, however, indicated that 99% of urban households had access to a facility (regardless of whether it was improved or unimproved as per the current study focus) [5]. Using a confidence level of 95% with our sample (N = 205), the latter proportion (99%) gave a confidence interval of ±1.36 while the former (10%) gave a confidence interval of ±4.09. Sociodemographic data were requested from household heads as they were deemed responsible for and/or knowledgeable about household WASH decision making. The study followed the definition of household head as per the Zambia Living Conditions Monitoring Survey, which categorised the household head as the person who normally made daily decisions concerning the running of the household irrespective of gender and/or marital status [5]. Where the household head was unavailable, data collectors either collected data from the eldest/responsible available adult if permitted (≥18 years), returned to the household at an alternative time to collect data from the household head directly, or skipped to the next house in the sequence. This was done to ensure that the diversity of household heads in the research area (employed and unemployed) was sampled. In most cases, individuals were not willing to give information without the consent of the household head, as it was the household head's sociodemographic information that was required. In some cases, individuals contacted the household head for permission or to clarify information. The split between household heads and non-household heads who provided data was 68% vs. 32%.
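For transparency, the margins of error quoted above follow from the standard normal-approximation formula for a proportion. The snippet below is a minimal illustration only (the 95% z-value of 1.96 is our assumption, and the small difference from the reported ±4.09 is likely rounding in the original calculation):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error (in percentage points) for a proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Proportions discussed in the text: 99% urban access vs. 10% 'satisfactory' peri-urban access.
for p in (0.99, 0.10):
    print(f"p = {p:.2f}, n = 205 -> +/- {margin_of_error(p, 205):.2f} percentage points")
# Prints roughly +/-1.36 and +/-4.11; the paper reports +/-1.36 and +/-4.09.
```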
Compliance with ethical standards
Prior to the commencement of the study, all processes, documentation and data collection tools underwent ethical screening and were approved by the ERES Converge Ethical Approval Board, Lusaka (Ref. No. 2017-Mar-012) and the Faculty of Health Sciences, Hokkaido University, Japan (Ref. No. ). In line with this, signed informed consent was collected from all participants and all participation was voluntary. Furthermore, data were only collected from persons 18 years and older. The research was conducted under the Sanitation Value Chain Project, registered with the Research Institute for Humanity and Nature based in Kyoto, Japan. The study design, data collection, analysis, article and all other aspects related to the research were fully under the discretion of the researchers.

Data collection
A questionnaire was used to collect sociodemographic data and household WASH information; questions on socio-demographic data, household sanitation, chamber use and diarrhoea prevalence were extracted for the purpose of this study (see S1 Appendix). Sociodemographic data were collected in alignment with criteria from the Zambia Demographic and Health Survey 2013-2014 [13]. Since Zambia is a signatory of the SDGs, household sanitation was assessed using the 2017 World Health Organization and United Nations Children's Fund Joint Monitoring Programme guidelines for WASH (hereinafter referred to as the WHO-UNICEF JMP) [9]. Questions relating to household WASH as per S1 Appendix followed the aforementioned guidelines; a WASH checklist was developed as an observation guide to determine household WASH service levels (see S2 Appendix). Both sociodemographic and WASH data were collected using Open Data Kit (ODK) Collect as the phone application for initial data collection and KoBoToolbox as the online data server post-collection. Data collectors had 4 days of training on how to use ODK Collect and fill in the questionnaire and checklist. Note that data collectors entered participant responses in the application, which they later verified for errors before upload to the online server. To reduce error, the researcher and research assistants shadowed different pairs of data collectors through the first half of the data collection period.

Household demographic and WASH questionnaire. Sociodemographic data collected from the household head were age, gender, marital status, education level, employment status, income, house ownership and number of household members. For a more in-depth look into peri-urban sanitation, questions were also asked on toilet ownership and management (cleaning and cleaning frequency, maintenance, hygiene). This would help determine internal and external matters of access, control and management of household sanitation. Data were also collected on the use of chambers and diarrhoea prevalence. Use of chambers is a relatively well-known practice in peri-urban settlements irrespective of an individual's access to sanitation [10]. According to an update of the WHO-UNICEF JMP, chambers fall into the category of unimproved sanitation as they present significant health risks. When disposed of in the open or with solid waste, they equate to open defecation [3]. Given how commonplace they are, an analysis of chamber use could show their impacts and expose barriers to toilet use in the peri-urban setting. Lastly, household diarrhoea prevalence was assessed as per previous studies: any household member having 3 or more watery stools within 24 hours in the last 2 weeks [14][15][16].
This information was also collected to gauge the relationship between peri-urban health (diarrhoea prevalence) and sanitation.

Household WASH checklist. WASH data were collected by viewing the household's water source, sanitation facility, faecal disposal site (e.g., septic tank) and handwashing station/location; where permitted, photographs of WASH facilities were also taken to assist later validation. GPS coordinates of all participating households were also taken for this purpose. Observations facilitated household WASH assessment via the 2017 WHO-UNICEF JMP, which categorises WASH facilities into improved (safely managed, basic and limited) vs. unimproved (unimproved and surface water/open defecation) for drinking water and sanitation, and facility (facility with soap and water, and facility without soap and/or water) vs. no facility for hygiene, i.e., handwashing [9]. Households with access to piped water, boreholes or tube wells, protected dug wells, protected springs, and packaged or delivered water sources were categorised as having 'improved' drinking water. 'Unimproved' drinking water was indicated for households that accessed water from unprotected sources (dug well or spring) and surface water (directly from a river, dam, lake, pond, stream, canal or irrigation canal). Having a handwashing facility, regardless of soap and/or water availability, was categorised as 'facility', whilst the absence of such facilities was categorised as 'no facility'. Of primary importance to this research was the categorisation of sanitation. Improved facility status was granted to households that accessed flush/pour-flush toilets connected to piped sewer systems, septic tanks or pit latrines, ventilated improved pit latrines, composting toilets or pit latrines with slabs. 'Unimproved' facility was used to categorise households using pit latrines without a slab or platform, bucket latrines, and disposal of faeces in fields, forests, bushes, open bodies of water or other open spaces, or with solid waste [9]. In cases where households had more than one toilet or type of sanitation, the one most used by the household was assessed.

Data analysis
Data were analysed using JMP Pro, Version 13.1.0 (SAS Institute Inc., Cary, NC, 2016) for Microsoft Windows 10 Pro. Descriptive statistics were used to analyse socio-demographic and household WASH characteristics. The association between household heads' socio-demographic details and household WASH characteristics was evaluated using multivariate stepwise logistic regression in order to identify a parsimonious set of predictors of toilet facility category, chamber use and diarrhoea prevalence. To select variables for stepwise regression, bivariate odds ratios were computed between each dependent and independent variable; only those resulting in p < 0.25 were included in the multivariate model. For toilet facility, the eligible predictor variables were employment, income, toilet ownership, private vs. shared facility, number of households using the toilet, toilet cleaning frequency, drinking water, handwashing, chamber use and diarrhoea prevalence. For chamber use, the eligible predictor variables were gender, number of household members, toilet ownership, number of households using the toilet, toilet cleaning responsibility, toilet hygiene, toilet facility and diarrhoea prevalence. Lastly, for diarrhoea prevalence, the eligible predictor variables were gender, education, private vs.
shared facility, number of households using the toilet, number of persons using the toilet, toilet cleaning responsibility, toilet cleaning frequency, toilet hygiene, toilet facility and chamber use. S3 Appendix shows the results of the bivariate odds ratios for each independent variable. The eligible factors were then entered into a backwards stepwise procedure guided by the Akaike Information Criterion to determine the factors that significantly contributed to sanitation facility (improved vs. unimproved), chamber use (yes vs. no) and diarrhoea prevalence (yes vs. no). The p-value thresholds for entry into and removal from the model used to determine adjusted odds ratios were 0.25 and 0.1, respectively. The level of significance was set at p < 0.05 with a confidence interval of 95%.

Sociodemographic characteristics
Participant sociodemographic characteristics are shown in Table 1. Whilst participant percentages were almost evenly divided by age group, education level, employment status and those owning or renting their residence, the majority were female (83.4%), married/living together (70.7%), receiving irregular income (74.6%) and housing a maximum of 5 persons in their households (62.4%). Because of the varying means and sources of income, several respondents were not able to state a specific or average amount of money they earned per month, so respondents were instead categorised as having regular (known average amount) or irregular (unknown average amount) income. Categorisation of regular income was irrespective of amount and focused on respondents who could state a known, consistent income pattern. Table 2 outlines information on the households' WASH status, sanitation characteristics and diarrhoea prevalence. The distribution of characteristics among persons using the toilet, toilet cleaning and hygiene responsibility, and handwashing facility status was relatively even. The majority of toilets were not owned by the household (resident) but externally (74.1%), which was also reflected in 80.5% of toilets being shared. The majority of shared toilets were used by ≤5 households (73.2%); the maximum number of households registered as using one toilet was 20 (median = 3), and the maximum number of persons using one toilet was 33 (median = 9.5). To incorporate the aspect of toilet sharing into the number of toilet users, we considered the sharing of 1 toilet by 2 average households (N = 9.4 persons). As such, toilet users were divided into ≤9 persons (49.3%) vs. ≥10 persons (50.7%).

Household WASH characteristics and diarrhoea prevalence
With multiple users and owners of sanitation facilities, the responsibilities for toilet cleaning, maintenance (in case of toilet damage, or emptying) and hygiene (the supply of hygiene materials such as toilet paper, cleaning materials and a handwashing station, for example) were divided into resident and external [12]. Resident management of toilet cleaning and hygiene was at 49.8% and 53.7% respectively. Most participants reported that toilet cleaning was done several times a day to daily (92.7%). The majority of toilets (89.8%) underwent a form of maintenance when damaged, malfunctioning or full (including emptying for pit latrines); of the sample, 29.6% of participants attested to use of a chamber. Access to an improved toilet facility was at 72.7%, and to improved drinking water at 84.9%. Having a handwashing facility was at 41.0%. Household diarrhoea prevalence within the past 2 weeks was at 8.3%.
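The two-stage variable selection described in the Data analysis section (bivariate screening at p < 0.25 followed by AIC-guided backwards elimination) can be sketched as follows. This is a rough illustration rather than the original JMP Pro analysis: the column names are hypothetical, and the loop below simply drops one term at a time as long as the AIC keeps improving.

```python
import statsmodels.formula.api as smf

def screen_predictors(df, outcome, candidates, p_cut=0.25):
    """Keep predictors whose bivariate logistic regression p-value is below p_cut."""
    kept = []
    for var in candidates:
        model = smf.logit(f"{outcome} ~ {var}", data=df).fit(disp=0)
        if model.pvalues.drop("Intercept").min() < p_cut:
            kept.append(var)
    return kept

def backward_stepwise_aic(df, outcome, predictors):
    """Drop one predictor at a time as long as doing so lowers the AIC."""
    current = list(predictors)
    best_aic = smf.logit(f"{outcome} ~ " + " + ".join(current), data=df).fit(disp=0).aic
    improved = True
    while improved and len(current) > 1:
        improved = False
        for var in list(current):
            trial = [v for v in current if v != var]
            aic = smf.logit(f"{outcome} ~ " + " + ".join(trial), data=df).fit(disp=0).aic
            if aic < best_aic:
                best_aic, current, improved = aic, trial, True
    return current

# Hypothetical usage with a household-level data frame:
# import pandas as pd
# df = pd.read_csv("households.csv")
# kept = screen_predictors(df, "improved_toilet", ["regular_income", "private_toilet", "handwashing"])
# final = backward_stepwise_aic(df, "improved_toilet", kept)
```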
Table 3 gives the results of the logistic regression analysis of factors associated with households having access to an improved toilet facility. The significant independent predictors that increased the odds of having an improved toilet facility were the household head having a regular income, the toilet being private, the presence of a handwashing facility, and access to improved drinking water (Table 3); improved toilets were also associated with higher odds of household diarrhoea and lower odds of chamber use. Table 4 shows the logistic regression analysis of factors associated with using a chamber. Independent predictors of using a chamber were being female (AOR = 3.41, 95% CI: 1.10-10.53), residents' ownership of the toilet (AOR = 4.14, 95% CI: 1.81-9.48), and toilet hygiene being handled externally (AOR = 3.36, 95% CI: 1.56-7.25). Additionally, chamber users had higher odds of having diarrhoea (AOR = 6.49, 95% CI: 1.99-21.11) and were more likely to have an unimproved toilet facility (AOR = 2.33, 95% CI: 1.12-4.87). Table 4 indicated that the odds of chamber use were higher for households with access to unimproved toilets. Additional data analysis further revealed that unimproved toilets were more likely to be owned by residents than by external toilet owners such as landlords (OR = 2.46, 95% CI: 1.26-4.80; p < .01). Moreover, resident/family house ownership also increased the odds of having access to private facilities (OR = 4.38, 95% CI: 2.04-9.39; p < .01). Table 5 shows the logistic regression analysis of factors associated with household member diarrhoea prevalence in the past 2 weeks. The number of households using a toilet and whether a toilet was private or shared were not significantly associated with diarrhoea prevalence. Higher odds of diarrhoea were found, however, for households with a toilet used by ≥10 people (AOR = 3.80, 95% CI: 1.11-13.08). The odds of diarrhoea were lower for households not using a chamber (AOR = 0.16, 95% CI: 0.05-0.48) and for those using an unimproved toilet facility (AOR = 0.18, 95% CI: 0.04-0.90). Access to improved drinking water and having a handwashing facility gave no significant results.

Sociodemographic characteristics
Participant socio-demographics (see Table 1) revealed some important characteristics to consider about peri-urban residents and their lifestyle. Consistent with a previous study [12] but inconsistent with government data [5], female-headed households were most common (83.4%), and most respondents were either married or living together (70.7%). The study also had 26% of household heads in the age range of 18-29 years. There could be several reasons for this finding. Firstly, the definition of household head is not linked to age, gender, marital or economic status; the primary focus is on normal daily decision making pertaining to the running of the household [5]. Secondly, national statistics show that women have higher poverty levels, possibly impacting female residential choices [13]. Thirdly, Zambia has a relatively young population: over 60% are under 25 years of age, with a life expectancy of 49 and 53 years for men and women respectively [13]. Diseases such as malaria and tuberculosis have shaped the Zambian population pyramid, leaving many young and elderly persons to fend for even younger family members, with attendant economic impacts. National statistics show that the largest age group of household heads is 18-29 years [5]. Of the overall sample, 58.0% were unemployed, higher than the 31% registered across peri-urban areas [17].
Only 25.4% received a regular income. Whilst income level has been noted to have an impact on sanitation [18], the study findings were linked to income consistency. In addition to the status of the household head, several studies have linked house ownership to WASH decision making [12,19,20], with landlords in most instances having more say on household WASH than their tenants (residents). The sample offered a good balance between participants who were renting houses (55.6%) and those staying in their own or family-owned households (44.4%). Ownership of the household by the resident or family meant more autonomy in WASH decision making [12,19,20]. Lastly, as this is a high density area, the number of household members was considered. According to the 2015 Living Conditions Monitoring Survey, the average household size in urban Lusaka is 4.7 persons [5].

Household WASH characteristics and diarrhoea prevalence
Just as household ownership has an impact on WASH decision making and management, toilet ownership has an impact on sanitation decision making and management (toilet type, cleaning, cleaning frequency, maintenance and hygiene), determining the responsible persons. In the peri-urban setting, where shared WASH is a commonality, these could be the resident, a neighbour, the landlord, a family member or a private/public patron [12,19]. The aspect of responsibility speaks to the level of autonomy over household sanitation and the subsequent bureaucracies that arise from having joint responsibility, having no responsibility, or being at the mercy of a second party's decision making. This raises questions like: how free do residents feel to use the facilities? To what extent can residents choose or make amendments to their sanitation? How quickly can/do external parties react to sanitation challenges? How much liability is placed on residents? Residents owning their own toilet stood at only 25.9%, with 80.5% of toilets being shared by 2 or more households. Toilet sharing is highly characteristic of peri-urban settlements due, for example, to insufficient space for toilet construction and land tenure [19,20], and has more recently been encouraged by WHO as an acceptable alternative to not owning a toilet in high density areas [1]. Toilet cleaning was at 49.8% for residents vs. 50.2% external; toilet hygiene was 53.7% for residents and 46.3% for external persons. Toilets were said to be cleaned at least daily (92.7%). Toilet maintenance, a less frequent need, was not done by 10.2% of the households. Access to an improved toilet facility was at 72.7%. Due to the facility focus of the study, however, this statistic is not easily comparable with government peri-urban data, which include non-facility sanitation under the unimproved bracket. For peri-urban access to improved drinking water, the current study findings were almost 2 times higher than government statistics (84.9% vs. 44%) [5]. This could be due to the location of 2 of the 3 zones where data collection was done (closer to the main road and public facilities), warranting easier access to basic services and facilities [21]. Despite all households having access to toilet facilities, use of chambers was at 29.8%. The multivariate stepwise logistic regression computed in this study offered insight into why chambers still maintained relevance among persons with toilet access, even at the improved level. Finally, household diarrhoea prevalence for the last 2 weeks was at 8.3%.
Data collection was done in September-October, Zambia's hot and dry season. During this time of year there is no rainfall and, therefore, diarrhoea prevalence is generally low [13,22]. As the point of assessing diarrhoea prevalence was to understand risk related to sanitation choices, assessing risk during low-prevalence periods would give better insights into those choices.

Factors contributing to improved toilet facility access
The household head having a regular income increased the odds of having an improved toilet more than six-fold (AOR = 6.29, 95% CI: 1.71-23.14). Several studies have linked sanitation choices to income, economic status and willingness to pay for services, amongst others [10,12,20,23]. Despite access to sanitation being declared a basic human right, however, it still comes at a cost which several governments and citizenries cannot afford [23]. This finding frames sanitation as an investment; with regular income supporting planning, peri-urban residents made the effort to access improved toilet facilities. It also supports the possible benefits of subsidies, payment and investment plans in the area of sanitation acquisition [18]. More often than not, private toilets proved to be improved facilities (AOR = 4.43, 95% CI: 1.42-13.87). With this result, it can be assumed that in addition to regular income, having private facilities gave more autonomy in choosing the type of sanitation procured [20]. With the more recent WHO Guidelines on Sanitation and Health considering shared toilets as a solution in densely populated areas [1], alongside the popularity of shared facilities in our sample (80.5%), collaborations between households for the procurement of improved shared toilet facilities might be a suitable solution. In a study by Tidwell et al. focused on shared facilities, findings indicated that toilet owners (predominantly landlords) worried about their tenants' ability to afford improved sanitation and thus opted for cheaper toilet models [12]. Their successful intervention towards improvement of peri-urban sanitation facilities, through creating dialogue between landlords and their tenants, allowed for joint autonomy, collaboration and decision making towards access to improved sanitation. It also opens the door to more communal and social sanitation opportunities. The availability of a household handwashing facility also increased the odds of having an improved toilet (AOR = 7.98, 95% CI: 2.90-21.95). Improved toilets were also significantly associated with having improved drinking water access (AOR = 4.80, 95% CI: 1.68-13.77). Knowledge of handwashing is often revealed through an analysis of WASH knowledge, attitudes and practices, or linked to education [24]. In this study, however, the household head's education level bore no significance. Rather, similar to having a handwashing facility, the availability of accessible water for toilet flushing, cleaning, handwashing and hygiene would be a plausible consideration in determining the type of sanitation facility selected by the household [25]. As such, improved water access would support greater investment in a toilet facility and a higher likelihood of access to a handwashing facility (both facilities requiring water availability). A seemingly unexpected result was that having an improved toilet facility increased the odds of household diarrhoea prevalence roughly eleven-fold (AOR = 10.89, 95% CI: 1.54-77.10).
It must be stressed at this juncture that diarrhoea, beyond being waterborne, is spread through faecal-oral transmission [1]. Toilets are therefore likely places for faecal contamination, particularly when proper toilet structure, maintenance, use and hygiene are not considered. This prompts sanitation recommendations that go beyond encouraging the procurement of improved toilet facilities to providing more education on toilet hygiene and maintenance. Blind recommendation of improved toilets without consideration of these factors may reduce open defecation but increase toilet users' exposure to faecal contamination, thereby escalating the risk of contamination and diarrhoea through toilet use [25,26]. It should be noted that though the result was significant (p < .05), the 95% CI range for diarrhoea prevalence was quite wide (95% CI: 1.54-77.10), indicating that though valid, this estimate may not be a precise reflection of this specific sample. Lastly, access to an improved sanitation facility reduced the odds of chamber use (AOR = 0.27, 95% CI: 0.12-0.64). With most improved facilities being private, having a handwashing facility and having access to an improved drinking water supply, it could be assumed that the level of convenience offered did not warrant the need for alternative sanitation. This is a positive result, indicating the suitability of the sanitation system for peri-urban residents, particularly when all WASH facilities were available and of improved status [25]. It also lends credence to SDG target 1.4 [1] relating to the need for universal acquisition of basic services (inclusive of basic WASH).

Factors contributing to chamber use
All chamber users attested to having access to a toilet. As such, the findings show chambers as complementary to the primary toilet facility, regardless of whether the toilet was improved (24.16%) or unimproved (44.64%). This chamber use despite access to toilet facilities indicates inefficiencies of the primary toilet facility for users. For successful intervention towards the eradication of open defecation and a complete move to improved sanitation, these inefficiencies must be explored. This requires looking at chambers as a chosen alternative to both open defecation and toilet facilities. Findings indicated that the odds of using a chamber were higher for those having an unimproved facility (AOR = 2.33, 95% CI: 1.12-4.87). Chamber use was also higher when residents owned their own toilet facility (AOR = 4.14, 95% CI: 1.81-9.48). Studies have found that toilet sharing, more common with externally owned toilets, had an impact on freedom of toilet use [23,27]; as such, residents owning their own toilet would be expected to offer more freedom of toilet use to the household. Whether a toilet was private or shared, however, did not yield a significant result. A further look indicated that unimproved toilets were more likely to be owned by residents (OR = 2.46, 95% CI: 1.26-4.80) and that resident/family-owned houses had increased odds of accessing private facilities (OR = 4.38, 95% CI: 2.04-9.39). With residents already owning private, unimproved toilet facilities, use of chambers would firstly be likely to create minimal tension for users, as there would be no major shift in sanitation level (both are unimproved forms of sanitation). Note also that there are several overlaps between the reasons for open defecation [27] and those for chamber use indicated in the current study, i.e., gender restrictions, toilet ownership and hygiene.
Secondly, chambers may in some instances carry more benefit for users in terms of comfort and/or ease of use when compared to their unimproved toilet facility. That residents would own private toilet facilities in itself indicates the household's will to have its own sanitation facilities. As much as the results indicated positive associations between having a private toilet and access to improved facilities, pairing this finding with the cost implications of having an improved toilet (see Table 3) may indicate some opportune benefits of toilet sharing for the acquisition of improved toilets amongst the urban poor who seek to own facilities but are limited by cost. A third possibility could be that residents' ownership of their own private facilities averted the social pressures for good sanitation practices that may come with the use of shared facilities, i.e., cleaning, maintenance and hygiene [28]. However, this possibility was not corroborated by the study results. Chamber use was actually more likely when toilet hygiene was handled externally (AOR = 3.36, 95% CI: 1.56-7.25). If responsible persons did not fulfil their duty, toilet users could find it more convenient to use chambers and make use of private hygiene materials, rendering the use of a toilet hygienically insignificant [26,27]. Studies covering toilet hygiene for shared facilities have indicated the challenges of shared facilities in comparison to private ones, citing the importance of duty rotas and accountability for improved toilet access and use [12]. There was, however, no significant result between private vs. shared toilets and toilet hygiene, cleaning, cleaning frequency or maintenance in the present study. Social pressure for the improvement of sanitation has been used successfully by a number of studies [12,29] and could be an avenue worthy of more research for shared facilities in high density areas. Findings revealed that gender also played a role in chamber use, with females having higher odds of use (AOR = 3.41, 95% CI: 1.10-10.53). In line with previous studies [10,26,30], chambers were often considered convenient, private and safe. With pit latrines being outdoor sanitation facilities, use at late hours carried risk, particularly for female toilet users who feared being attacked or harassed by male users. Chambers were also found convenient in times of illness, when constant journeying to the toilet would be strenuous, underscoring that the toilet model was not convenient for all toilet users. Lastly, chamber use increased the odds of household diarrhoea prevalence (AOR = 6.49, 95% CI: 1.99-21.11). This is most likely due to faecal management before and after disposal, which creates opportunities for faecal contact [30]. Chambers can be used inside or outside the house. As diarrhoea is spread through faecal contamination, poor storage of faeces within the house increases the risk of ingestion of faecal matter. With poor storage and usage, spillage, disposal, flies and other household insects, rodents and small children all become actors in increasing faecal contact within and around the household. If chambers are reusable, cleaning them also poses a health risk through increased faecal contact. If not, chambers can be disposed of in the toilet (depending on the toilet and chamber type, this could lead to blockage and/or failure to empty the facility), with solid waste, or thrown as a flying toilet (tossed in an open space) [30].
Disposal into open spaces or with solid waste is part of the definition of open defecation [9], which has been proven to be a health risk that increases diarrhoea prevalence.

Factors contributing to household diarrhoea prevalence
With both improved and unimproved sanitation bearing risk for household diarrhoea prevalence (see Tables 3-5), further analysis of peri-urban socio-demographics linked to diarrhoea prevalence and sanitation characteristics was made. Interestingly, there was no significant association between diarrhoea prevalence and drinking water or handwashing. There were also no significant findings linking household diarrhoea prevalence to the frequency of toilet cleaning or to whether toilet maintenance and emptying were conducted. The only significant result found, in addition to having an improved toilet facility and using a chamber, was the number of persons using the toilet. Toilets used by ≥10 persons were found to increase the risk of household diarrhoea prevalence (AOR = 3.80, 95% CI: 1.11-13.08). Rather than counting households, the focus on the number of persons using the toilet allows a more direct count of users, bearing in mind household dynamics, i.e., the extended family system and communal society. It takes into account both the formal and informal nature of toilet sharing, from which private toilets are not exempt, as some private toilets may see more usage than shared toilets owing to the number of household members and overall users. That said, the number of households using a toilet and whether a toilet was private or shared were not significantly associated with diarrhoea prevalence. Attention to and control of the number of users may help to tackle aspects of overuse, misuse and subsequent faecal contamination. With the status quo of the peri-urban setting, however, this may not be feasible: space for toilet construction may be lacking and the costs of managing additional toilet facilities would be considered high [23]. Nevertheless, the finding reiterates, firstly, that the call to end open defecation primarily through the use of toilet facilities shifts faecal contamination points from open-air locations to toilets, defeating the purpose of installing and using these facilities [23,25,26]. Secondly, in the promotion of toilet ownership and usage, education on how to use and maintain facilities should be considered a package deal, so that the reasons for promoting toilet use over open defecation retain their meaning [23,25,26]. An important point to be garnered from the results is the inability of sanitation facilities on their own, whether improved or unimproved, to alleviate the disease burden. Proper use and maintenance must be considered to allow safe use of facilities by multiple users.

Limitations of the study
The sufficient yet small sample size means that a larger, more spread out sample may grant more detail about the nature of peri-urban sanitation. It is also not possible to generalise these findings across all national and international peri-urban settlements. Cross-sectional studies conducted at a different time point may also give more information on household diarrhoea prevalence and its implications for chamber usage. Lastly, tenants opting out of the study in preference for their landlords' participation may have had an impact on the findings.

Conclusion
Key findings of the study indicate a duality of peri-urban sanitation, with households making use of both improved and unimproved sanitation.
The sociodemographic characteristics related to use of an improved toilet facility and of chambers were income and gender, respectively. The impact of income on sanitation is a reflection of the cost implications that hinder the right to sanitation for the urban poor, whilst the gender disparity in chamber use indicates the diverse needs of women and girls, and the often-overlooked social disparities relating to the adequate provision of peri-urban sanitation. Findings also highlighted an interlinkage between household WASH access and quality, with ownership of an improved toilet facility predicting improved drinking water and the presence of a hygiene facility, and lowering the odds of chamber use (an unimproved sanitation method), but, like chambers, carrying high odds of household diarrhoea prevalence. This indicates inefficiencies in the system that prompt the use of alternatives, and a failure of the facility to protect users from faecal contamination. The result prompts a shift towards education on proper toilet facility use and management to reduce health risk in high density areas, particularly as an increased number of users heightens risk. For unimproved toilet users (those more likely to use chambers), it indicates the ease of movement within service-level brackets (from unimproved facility to unimproved facility). With residents seeking to own private toilets regardless of service level, the quality of the facility owned could be attributed to cost. In summary, in order to truly meet the SDG targets and achieve their intended benefits in eradicating open defecation and improving health and well-being in the peri-urban setting, the duality of peri-urban sanitation must be addressed. Whilst improved sanitation facilities hold some benefit, the current sanitation systems used in peri-urban Lusaka, Zambia do not fully cater for the needs of the urban poor, women and girls, being inaccessible because of cost, and because of gender and social dynamics, respectively.

Recommendations for peri-urban sanitation
As indicated in the WHO-UNICEF core guidelines, sharing of toilets is a plausible solution for high density sanitation. Interventions focused on collaborations between households for the procurement of improved shared toilet facilities would aid a move towards improved sanitation access [12,31]. Creating collaborations would tackle aspects of improved toilet construction and maintenance for joint, landlord and public toilets. With results indicating a recognition of and willingness by residents to own toilets despite monetary constraints, financial strategies such as pooling of funds and payment plans can be considered and encouraged for the urban poor, aiding the procurement and construction of improved private, shared and public toilet facilities. Considering the high cost that current toilet models already carry despite their inability to cover all user requirements, greater value would be gained by users from a model that, despite its cost, can cover all required needs. Moreover, when used by neighbourhoods as public facilities, such models could become sources of communal income. Similar systems could also be trialled for communal drinking water and handwashing improvements. With peri-urban WASH proving to be quite communal (shared facilities) rather than private (per household), a WASH ladder for high density areas might prove beneficial, taking into account facility management and common cultural and demographic needs and differences.
As this study primarily focused on peri-urban sanitation, a High Density Sanitation Ladder (Fig 2) was created for consideration by amending the 2017 WHO-UNICEF JMP sanitation ladder (changes to the original ladder are indicated in bold) [9]. The ladder incorporates the unique sanitation needs of high density areas by taking note of universal use, complete access and sanitation management regardless of a toilet's private or shared status. That said, private/shared status has no impact on sanitation level in the suggested model. The upgrade from limited to basic is based on the limited facility being usable by all toilet users, at all times (no co-use of unimproved sanitation), with an available responsibility plan or rota. The upgrade from basic to safely managed contains all of these plus faecal disposal as per the original 2017 WHO-UNICEF JMP model. Further studies can be done to look at water and hygiene in high density areas. Additionally, more intervention studies can be done to look into the possible benefits of using social pressure for the improvement of shared sanitation. Based on the health impacts of chamber use and its similarities to open defecation, future assessments to determine progress on open defecation should consider all modes of household sanitation, including chamber use, regardless of the household's available sanitation facility. This will help in tackling all forms of unimproved sanitation simultaneously, to avoid shifting within sanitation ladder brackets and rather encourage upgrading.
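As a rough illustration only, the upgrade logic of the proposed High Density Sanitation Ladder could be encoded as below. The rung names follow the JMP ladder, the extra conditions (universal use with no chamber co-use, a responsibility plan or rota, and safe faecal disposal) follow the description above, the field names are hypothetical, and any detail of Fig 2 not described in the text is omitted.

```python
from dataclasses import dataclass

@dataclass
class Household:
    facility_improved: bool          # JMP improved facility type
    usable_by_all_at_all_times: bool
    chamber_co_use: bool             # any co-use of unimproved alternatives
    responsibility_plan: bool        # cleaning/hygiene rota or plan in place
    safe_faecal_disposal: bool       # as per the original JMP safely-managed rung

def high_density_ladder_rung(h: Household) -> str:
    """Place a household on the proposed High Density Sanitation Ladder (sketch only)."""
    if not h.facility_improved:
        return "unimproved"
    # Shared vs. private status does not change the rung in the proposed ladder.
    if not (h.usable_by_all_at_all_times and not h.chamber_co_use and h.responsibility_plan):
        return "limited"
    return "safely managed" if h.safe_faecal_disposal else "basic"

print(high_density_ladder_rung(Household(True, True, False, True, False)))  # -> "basic"
```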
DUDMap: 3D RGB-D mapping for dense, unstructured, and dynamic environment

The simultaneous localization and mapping (SLAM) problem has been extensively studied by researchers in the field of robotics; however, conventional approaches to mapping assume a static environment. The static assumption is valid only in a small region, and it limits the application of visual SLAM in dynamic environments. The recently proposed state-of-the-art SLAM solutions for dynamic environments use different semantic segmentation methods such as mask R-CNN and SegNet; however, these frameworks are based on a sparse mapping framework (ORB-SLAM). In addition, the segmentation process increases the computational load, which makes these SLAM algorithms unsuitable for real-time mapping. Therefore, there is no effective dense RGB-D SLAM method for real-world unstructured and dynamic environments. In this study, we propose a novel real-time dense SLAM method for dynamic environments, where the 3D reconstruction error is exploited to identify static and dynamic classes having generalized Gaussian distributions. Our proposed approach requires neither explicit object tracking nor an object classifier, which makes it robust to any type of moving object and suitable for real-time mapping. Our method eliminates repeated views and uses consistent data, which enhances the performance of volumetric fusion. For completeness, we compare our proposed method on different types of publicly available highly dynamic datasets to demonstrate the versatility and robustness of our approach. Experiments show that its tracking performance is better than that of other dense and dynamic SLAM approaches.

Introduction
Simultaneous localization and mapping (SLAM) aims to produce a consistent map of the environment and to estimate the pose within that map using noisy range sensor measurements. The SLAM problem has been extensively studied by researchers in the field of robotics. After the appearance of the Kinect, many solutions emerged that fuse the color image and depth map. Visual SLAM produces a sparse solution by relying on point matching, whereas direct methods can produce a dense reconstruction by minimizing the photometric error. However, none of the above methods addresses the problem of dynamic objects in the environment. Conventional approaches to mapping assume that the environment is static. Although the static assumption is valid in a small region, change is inevitable when dynamic elements exist or large-scale mapping is necessary. A small fraction of dynamic content can be managed by classifying it as outliers. However, the SLAM problem in highly dynamic scenes is still not solved completely because no established framework exists in the literature. Another major difficulty in robot navigation is the unstructured environment. In unstructured environments, it is not easy to find discrete geometries because edges and planes are noisy. Significant research has been carried out for unstructured environments, especially in the field of autonomous navigation, and a number of effective approaches have been developed using laser range finders. However, there is no effective RGB-D SLAM method for real-world unstructured and dynamic environments. In this study, we propose DUDMap (see https://www.dropbox.com/s/lsexrz82ewdzo0w/DUDMAP_sample.mp4?dl=0): dense, unstructured, and dynamic mapping. Our approach requires neither explicit object tracking and an object classifier nor a purely geometric method, in contrast to recent approaches discussed by Yu et al.
1 and Taneja et al.,2 which makes it robust to any type of moving object. Furthermore, we assume a dynamic environment consisting of static and dynamic classes having generalized Gaussian distributions in order to detect dynamics. We reconstruct scene geometry using a signed distance function (SDF) instead of surfels. This allows our method to easily create a dense final mesh, and such a representation is useful in robotic applications because it defines the distance to the surface. The main contribution of this article is a novel and effective SDF-based SLAM algorithm that is resistant to dynamics, along with the following: We identify dynamics using the image registration residual combined with a Gaussian mixture model. The number of dynamic objects does not limit our approach because we do not employ any type of moving object detection and tracking. Our method generates a final dense 3D mesh without using semantic information or an object classifier. We eliminate repeated views and use only consistent data to decrease the required computational power. We compare our method with other state-of-the-art systems using the TUM dataset, 3 together with other highly dynamic datasets including Bonn, 4 VolumeDeform, 5 and the CVSSP RGB-D dataset 41 (used with permission), which are publicly available, showing the superior performance of our approach. To evaluate the outdoor performance of our method, we use a commercially available ZED camera for map generation and dynamic filtering. Experiments illustrate that our method produces consistent results in both indoor and outdoor applications. These serve as demonstrations of our approach in real-world unstructured dynamic environments. The rest of this article is organized as follows. The second section reviews state-of-the-art visual SLAM methods that address the problem of dynamic environments. The third section is devoted to the overall structure of our system, giving details of the proposed approaches for local keyframe extraction and dynamics removal. The fourth section describes the experiments conducted and gives the evaluation results, comparing our method against other state-of-the-art methods, whereas the fifth section provides concluding remarks.

State-of-the-art methods
ORB-SLAM2 7 (latest version ORB-SLAM3 8), S-PTAM, 9 and RTAB-Map 10 are the best state-of-the-art feature-based visual SLAM approaches in static environments. To increase the performance of such feature-based methods in dynamic environments, dynamic objects are generally treated as spurious data and removed as outliers using RANdom SAmple Consensus (RANSAC) and robust cost functions. On the other hand, targeted attempts are still being made to increase performance in dynamic scenes. For instance, DVO-SLAM 11 uses photometric and depth errors instead of visual features. The joint visual odometry and scene flow method 12 proposes an efficient solution to estimate the camera motion. However, odometry-based methods either cannot recover from inaccurate image registration or lack a loop closure detection approach independent of the pose estimate. SDFs have long been studied to represent 3D volumes in computer graphics.[13][14][15] Newcombe et al. 38 proposed SDF-based RGB-D mapping, generating precise maps in static environments. Elastic fusion (EF) 16 is another dense reconstruction method, which can work in small scenarios.
CoFusion (CF) 17 is a contemporary method for reconstructing several moving objects; however, it works with slow camera motions only, and its performance deteriorates significantly with increasing camera speed. Static fusion (SF) 18 simultaneously estimates the camera motion together with a dynamic segmentation of the image. However, it works only on sequences without high dynamics at the beginning. Palazzolo et al. 4 propose ReFusion, where dynamics detection is done using the residuals obtained from the registration on the SDF. This approach can create a consistent mesh of the environment; however, highly dynamic changes deteriorate the mapping performance. Some methods use motion consistency to validate tracked points, where dynamic objects are generally segmented as spurious data since they conflict with the motion consistency of the background over consecutive frames. For instance, Wang and Huang 19 segment dynamic objects using RGB optical flow. Nevertheless, the algorithm is still not robust enough for the TUM high dynamic scenarios. Kim and Kim 20 propose to use the difference between depth images to eliminate the dynamics in the scene. However, this algorithm requires an optimized background estimator suitable for parallel processing. Azartash et al. 21 use image segmentation to discriminate the moving region from the static background. Experimental results show that the accuracy remains almost the same in low dynamic scenarios. Tan et al. 22 use an adaptive RANSAC for removing outliers. This method can work in dynamic situations with a limited number of slowly moving objects. Other methods use classifiers to identify dynamic objects. Kitt et al. 23 combine motion estimation with object detection; however, this method requires a classifier, which makes it inapplicable to online exploration. Bescos et al. 24 propose DynaSLAM, which combines prior learning by a mask region-based convolutional neural network (R-CNN) 25 and multiview geometry to segment dynamic content. The multiview geometry component relies on a region-growing algorithm, which makes it unsuitable for real-time operation even when running on an NVIDIA Titan GPU. Mask fusion 26 also uses mask R-CNN for semantic segmentation. DS-SLAM, 27 RDS-SLAM, 28 and semantic SLAM 29 are other semantic-based algorithms, which use SegNet. 1 Pose fusion, 30 implemented on EF, uses the OpenPose CNN 31 for human pose detection, which limits this method to scenes with human dynamic objects. Flow fusion 32 uses optical flow residuals with PWC-Net 33 for dynamic and static human objects. However, such approaches rely heavily on prior training. Therefore, if an unlearned dynamic object appears in the camera view, estimation errors grow. Furthermore, extracting learning-based semantic information is time-consuming and carries a heavy computational burden. In our work, we reconstruct the scene geometry using an SDF instead of surfels, in contrast to EF and SF, and therefore we can directly generate the mesh of the environment from this representation without using object tracking or an object classifier. Moreover, the number of dynamic objects and their speeds do not limit our approach.

Preliminaries and notations
In our approach, we denote a 3D point as $[X, Y, Z] \in \mathbb{R}^3$, and the rotation and translation of the camera as $R \in SO(3)$ and $T \in \mathbb{R}^3$, respectively. At time t, an RGB-D frame contains an RGB image and a depth map.
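As a concrete sketch of this notation (the intrinsic values below are placeholders and the helper names are ours), a pixel with a valid depth can be lifted to a homogeneous 3D point using the pinhole model detailed in the next subsection and then transformed by a camera pose (R, T):

```python
import numpy as np

# Placeholder pinhole intrinsics (focal lengths and optical centre).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def back_project(x: float, y: float, z: float) -> np.ndarray:
    """Lift pixel (x, y) with depth z to a homogeneous 3D point (X, Y, Z, 1)."""
    return np.array([(x - cx) / fx * z, (y - cy) / fy * z, z, 1.0])

def pose_matrix(R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Build the 4x4 rigid-body transform H = [R T; 0 1]."""
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, T
    return H

# Example: identity rotation, small translation.
H = pose_matrix(np.eye(3), np.array([0.1, 0.0, 0.0]))
X = back_project(320.0, 240.0, 1.5)   # point in the camera frame
print(H @ X)                          # point after the rigid-body motion
```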
The homogeneous point $\bar{X} = (x, y, z, 1)^T$ corresponding to a pixel can be computed by assuming a pinhole camera model with intrinsic parameters $f_x$, $f_y$, $c_x$, and $c_y$ (focal lengths and optical center). The 3D point corresponding to a pixel $(x, y)$ with depth $z$ is reconstructed as
$\bar{X} = \left( \frac{x - c_x}{f_x}\, z,\; \frac{y - c_y}{f_y}\, z,\; z,\; 1 \right)^T.$
In rigid-body motion, the common representation matrix H, consisting of a 3 × 3 rotation matrix R and a 3 × 1 translation vector T, is used to transform a point $\bar{X}$ under motion as $\bar{X}' = H\bar{X}$. The rotation matrix R has nine parameters, and if we were to estimate the camera motion directly, we would have to solve for these nine parameters by forming a constrained optimization problem, which can be very slow. The Lie algebra provides a lower-dimensional linear space for representing rigid-body motion, making it popular in computer vision problems. We use the Lie algebra of SE(3), represented by the twist coordinates $\xi = (\omega_1, \omega_2, \omega_3, \upsilon_1, \upsilon_2, \upsilon_3)$, as in the literature, 34 because a rigid motion has six degrees of freedom while the transformation matrix has 12 parameters. Using the Lie algebra representation, the rigid-body motion can be written as the matrix exponential of the twist. Figure 1 depicts the important steps of our proposed method. We first apply a depth filter to eliminate significant amounts of noise in the raw depth images. To eliminate redundant data in the fusion process, we trim repeated camera views by measuring the similarity ratio of the RGB images. We then perform pose estimation and continue the process by detecting the dynamic elements in the scene. The subsequent subsections provide the details of each block in our proposed system.

Depth smoothing and feature matching
Commercially available RGB-D cameras usually produce invalid depth measurements. In addition, there are significant amounts of noise in raw depth images. In this study, we use a depth-adaptive bilateral filtering 42 method because it modifies the weighting to account for variation of intensity. Figure 2 depicts the original depth image, the smoothed image, and the filtered image, respectively. In addition, during smoothing we replace zero values in the original depth images with the mean of the neighboring 5 × 5 pixels. SDF fusion is an averaging process; therefore, it is important not to use redundant data in the fusion process because small errors make the SDF model unclear. To eliminate redundant camera views, we perform a similarity ratio test based on feature matching. A typical feature matcher consists of the following steps: extracting local features, matching features using a nearest-neighbor approach, and selecting good correspondences. In the literature, the scale-invariant feature transform (SIFT) has been proposed for extracting keypoints and is widely used in different applications. SIFT feature matching works well for scaled images but fails in some cases such as faces with pose changes. 36 Applying the FLANN feature-matching method with the SIFT descriptor overcomes such disadvantages of SIFT. In the similarity analysis, we use FLANN-based feature matching with the SIFT descriptor, and we use the ratio test 37 to select good correspondences, comparing the lowest and the second-lowest feature distances to recognize good ones. The similarity ratio of the VolumeDeform "boxing" sequence is depicted in Figure 3. Since the ratio is not high, which indicates a low degree of similarity, all the frames are included in the mapping process. On the other hand, the Bonn dataset "crowd2" sequence is a highly dynamic sequence with 895 frames. If an 80% similarity threshold is utilized, 78 frames are skipped, which results in an 8.7% decrease in computational time.
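A minimal OpenCV sketch of this similarity check is given below; it is an illustration under our own assumptions (an OpenCV build that ships SIFT, the usual 0.7 Lowe ratio constant, and the 80% threshold mentioned above), not the authors' implementation:

```python
import cv2

sift = cv2.SIFT_create()
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})  # KD-tree index

def similarity_ratio(img_a, img_b, lowe=0.7):
    """Fraction of SIFT keypoints in img_a with a good FLANN match in img_b (Lowe ratio test)."""
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None or len(kp_a) == 0:
        return 0.0
    matches = flann.knnMatch(des_a, des_b, k=2)
    good = [m for m in matches if len(m) == 2 and m[0].distance < lowe * m[1].distance]
    return len(good) / len(kp_a)

def is_redundant(frame_gray, keyframe_gray, threshold=0.8):
    """Skip a frame when it is at least 80% similar to the previous keyframe."""
    return similarity_ratio(frame_gray, keyframe_gray) >= threshold
```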
The absolute translational error increases by only 2.2%, while the rotational relative pose error increases by 0.3%. In low dynamic sequences, the number of similar frames will be higher, which reduces unnecessary computation. This is the novel enhancement we provide over existing methods in the literature to improve performance. We use an 80% similarity threshold.

Pose estimation
We represent the geometry using the SDF. To reconstruct the scene, we incrementally fuse RGB-D data into the SDF, and the geometry is stored in a voxel grid (see Figure 4 for the SDF calculation). First, the camera pose is estimated using the SDF, and the SDF is then updated based on the newly computed camera pose. In the literature, most volumetric fusion techniques, for example KinectFusion, 13 use synthetic depth images and align them using iterative closest point. However, we use the camera pose directly on the SDF because the SDF encodes the 3D geometry of the environment. Assuming independent and identically distributed pixels with Gaussian noise in the depth values, the likelihood of observing a depth image factors into a product of per-pixel Gaussian terms. To find the camera poses that maximize this likelihood, we define a corresponding pose error function. A rigid-body motion can be described in the Lie algebra with the 6D twist coordinates $\xi = (\omega_1, \omega_2, \omega_3, \upsilon_1, \upsilon_2, \upsilon_3)$, and the error function (7) is rewritten in these coordinates. If the image registration is correct with respect to the 3D model, the projected colors should be consistent as well. We incorporate this condition by adding an extra term. Since there is no absolute reference image for comparison, the color values stored in the voxels are used. Using the color intensities of the pixels and the corresponding voxels, a color error function is obtained. The joint error function is given in equation (11), with u the intensity contribution relative to the depth term, and we start by linearizing around the initial pose estimate $\hat{\xi}$ using the Jacobian matrix in equation (12), which is computed numerically by evaluating the gradient. After computing the Jacobian $J_c(\xi)$ by following a similar procedure, we adopt the Levenberg-Marquardt algorithm, because Gauss-Newton cannot always compute the best optimal estimate, resulting in a non-minimum function value. The Levenberg-Marquardt algorithm handles this problem in the form of equation (14), where $\lambda$ is the non-negative correction factor updated at each iteration; in our case, the A matrix and b vector follow from this linearization. Algorithm 1 summarizes the pose estimation process.

Algorithm 1. Pose estimation.
Input: joint error function. Output: pose.
1: begin
2: Initialize parameters
3: Calculate the Jacobian
4: Initialize the non-negative correction factor from the Grammian of the Jacobian
5: while (pose difference) > 0.001 and iteration # < 5 do
6: Find the increment by solving the damped normal equations (14)
7: Update the pose with the increment
8: if the objective function is at a minimum
9: return pose
10: else
11: Update the correction factor
12: Increment the iteration number
13: end

We solve equation (14) iteratively until the difference $(\xi^{(k+1)} - \xi^{(k)})$ is small enough or the maximum iteration number is reached. To increase real-time performance, we conduct all calculations on the GPU in parallel, since the vectors b and matrices A are independent of each other.
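A generic sketch of the damped update behind equation (14) is shown below. It is not the authors' GPU implementation: the residual and Jacobian callables, the damping update rule, and the stopping constants mirroring Algorithm 1 (0.001 pose difference, 5 iterations) are illustrative assumptions.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, xi0, max_iter=5, tol=1e-3, lam=1e-2):
    """Minimise 0.5*||r(xi)||^2 over the 6D twist xi via damped Gauss-Newton steps."""
    xi = np.asarray(xi0, dtype=float)
    cost = 0.5 * np.sum(residual(xi) ** 2)
    for _ in range(max_iter):
        r, J = residual(xi), jacobian(xi)
        A = J.T @ J + lam * np.eye(xi.size)      # damped Grammian of the Jacobian
        b = -J.T @ r
        delta = np.linalg.solve(A, b)
        new_cost = 0.5 * np.sum(residual(xi + delta) ** 2)
        if new_cost < cost:                      # accept the step, relax the damping
            xi, cost, lam = xi + delta, new_cost, lam * 0.5
        else:                                    # reject the step, increase the damping
            lam *= 2.0
        if np.linalg.norm(delta) < tol:
            break
    return xi
```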
Input : Depth image, prior segmentation from residual error, initial label class Output : Segmented depth image with label 1: Initialize parameters 2: Find maximal cliques 3: Construct k-neighborhoods 4: Partition into parallel threads 5: do each EM iteration 6: for each neighborhood of the subgraph do in parallel 7: E-step Signed distance function representation and 3D reconstruction We use the discrete voxel grid to represent the SDF. Signed-distance value is calculated by trilinear interpolation of eight neighboring pixels. We project each voxel onto the image plane instead of ray casting because this process is suitable for parallel processing since each voxel is independent of its neighbors. Since the operation has to be carried out for each voxel, GPU is used for this operation. Finally, we implement marching cubes algorithm 35 to extract the triangle mesh. In RGB-D mapping approaches, storing the SDF in a 3D grid requires a large amount of memory. Therefore, we use a special memory allocation technique proposed by Nießner et al. 39 In this technique, we only allocate the voxels in required areas, which enable to scan the large areas with limited memory. Dynamic detection Let I m and I s be the instantaneous image of the generated model and source, respectively. The error in color map denoted by e c as e c ¼ I s m À I s j j If the images I m and I s are accurately registered and if there is no change in the geometry, the resulting error would be zero ( Figure 5). In general, minimizing equation (15) results in a sufficient image registration. SDF represents the distance to the nearest surface, and therefore, we select to use SDF as an error function. The error in the depth can be written as In equation (16), N is the pixel number, is the SDF, andx is the matrix exponential multiplied by the 3D point corresponding to the i'th pixel p i , computed using equation (1). After performing an initial registration using equation (15), we compute for each pixel and its residual as defined in equation (17) The residual obtained after image registration is used as for dynamic detection ( Figure 6). Our aim is to compute the binary labeling for each element according to occurred changes. For example, l i ¼ 0 indicates consistency and l i ¼ 1 shows the presence of change in corresponding voxel i. If h(d) be the histogram of the image, our problem is in the form of binary classification problem using a dynamic label threshold. Then, probability density function can be defined as the combination of two density functions related class label as using class conditional densities and prior probabilities. To calculate an estimate of dynamic change, we maximize p(l|D) where L Djl ð Þ is the log likelihood of the two-component mixture and it can be written as The final log-likelihood function is in the form of In equation (21), u(d) is the indication of static or dynamic component. After dynamic label identification and updating the label grid (Algorithm 2), a second pose estimation and registration are performed using newly obtained label set (Algorithm 3). However, we must filter out dynamic labels that originated from noise. We compare the SDF value of new observation with the previous static reconstruction and compute the difference d L . Applying a threshold q, we obtain the label grid such that Figure 7 shows the overall flowchart of our proposed methodology including RGB similarity check, pose estimation, and dynamic detection. 
Experiments Our proposed method is able to operate in dynamic environments without requiring any dynamic object detection and tracking. Our experiments support our main claims, which are as follows: Robustness to dynamic elements regardless of their quantity and speed of change in the environment. That approach requires no explicit object tracking, object classifier and generate a consistent a dense model of the environment. The experiments were conducted on a workstation computer Intel i7 running at 3.20 GHz and a GeForce 1070 GPU using Ubuntu 16.04. Our default parameters have been determined empirically so that a sensitivity analysis is performed on change of parameters. TUM RGB-D dataset In this dataset, walking sequences are highly dynamic and complex because moving objects cover almost all camera views. Sitting sequence is low dynamic and there exists a person sitting and moving their arms. In this dataset, the evaluation is performed through the metrics proposed by Sturm et al. 3 as translational, rotational relative pose error (RPE), and translational absolute trajectory error (ATE). Obtained results of dense visual SLAM methods are listed in Tables 1 to 3. In the TUM dataset, the ground-truth trajectory is obtained from a high-accuracy motioncapture system with eight high-speed tracking cameras (100 Hz). Therefore, quantitative evaluation is possible regarding the accuracy of pose estimation. However, TUM dataset has no exact 3D model of the environment, therefore, we can evaluate the 3D reconstruction performance results of our method qualitatively. Qualitative results are shown in Figures 9, 11, and 12. Figure 12 also shows the scene reconstruction result of fr3/walking xyz sequence obtained using EF, DynaSLAM, and DS-SLAM. As given in Table 1, our proposed scheme achieves an average translation RPE of 0.045 m/s, which is considerably lower than other dense methods such as VOSF, EF, SF, and mask fusion. Our aim is to develop a dense RGB-D SLAM algorithm without using high computational power in dynamic environments. According to Tables 1 to 3, our method achieves smaller relative and translational error than other dense methods. For all high dynamic sequences, our method reaches the lowest RPEs except for the "fr3/ walk stat" sequence. In a highly dynamic scene, our proposed method produces better results for the following reasons: EF is not capable of dynamics in the sequences. Hence, dynamic object deteriorates the 3D mesh and pose estimation. CF works well for slow camera motions but its performance deteriorates noticeably when the speed of the camera increases. SF works sequences with limited dynamics at the beginning, and therefore, it produces large errors on a highly dynamic environment. In general, existing high dynamics in the scene leads to blurry motion in the image, resulting inconsistent mesh. In addition, according to Tables 1 to 3, there is no doubt that semantic-based visual SLAM methods have better results in ATE and RPE criteria. However, such method does not provide a dense model and it is relying heavily on the prior result from the learning techniques. If an unlearned condition exists in the camera view, the estimation result is highly influenced. Table 4 compares the execution time of our proposed method with semantic-based SLAM algorithms. Most of the modern segmentation-based SLAM methods are built on ORBSLAM, therefore, it is included in timing analysis. The execution time data are obtained from the corresponding published papers. 
DynaSLAM has a good tracking performance, however, mask R-CNN makes this method unsuitable for real-time operation. If a lightweight semantic segmentation such as Seg.Net is used, as in DS-SLAM and RDS-SLAM, the required time for per frame for segmentation decreases from 200 ms to 30 ms. However, an unlearned dynamics in the camera fieldof-view results in pose error, leading to moving object to be mapped as a static object. Our method without using any semantic label criteria runs almost constant rate regardless of moving object type and speed. In addition, our method does not require high-end graphic units. Figure 12 shows that a person remains in the model because the model built has artifact in the "walking xyz" sequences. This situation also occurs in "walking halfsphere" (Figure 11) and "walking static" (Figure 9) sequences because the camera is tracking a person initially, and finally, the camera never looks again, hence, it is not possible to identify that the voxels are free. Figure 13 also confirms such a case. It is clear that translational error is higher at the beginning when the camera tracks the person. Figure 8 and 10 depict the ATE/RPE of TUM "fr3/walking static" and "fr3/walking halfsphere" sequences. In addition, Figure 14 shows the estimated trajectory result of fr3/ walking xyz sequence obtained by the state-of-the-art visual SLAM system. Trajectory results are consistent with Tables 1 to 3. Semantic-based visual SLAM methods except pose fusion and flow fusion have better results in ATE and RPE criteria. Our proposed method can compete with semantic SLAM and RDS-SLAM, however, Dyna-SLAM and DS-SLAM have the best estimate. However, our method has the best result among the dense and CNN-free methods. Bonn RGB-D dynamic dataset We compare methods on the dynamic scenes of Bonn dataset published by Palazzolo et al. 4 This dataset has a variety of sequences. For example, "moving_obstructing_box" scene assesses the kidnapped camera problem, where the camera is moved to a different location, whereas "balloon_tracking" has uniformly colored balloon having no features on it. Figure 16 shows the resulting mesh of BONN moving obstructing box sequence. Table 5 presents that DynaSLAM outperforms the other methods in balloon tracking. However, it has poor performance on the obstructing box scene. Since DynaSLAM is the combination of neural network and geometric approach, the available semantic information on scene helps to increase the performance. VolumeDeform dataset VolumeDeform is an RGB-D dataset for the purpose of real-time nonrigid reconstruction and is used for evaluation of the nonrigid object reconstruction algorithms at realtime rates. 5 Since dynamic datasets for evaluating RGB-D SLAM method with exact trajectory are limited, this dataset is used to measure the elimination capability of our method to handle dynamic parts in the scene. Figure 15 illustrates the moving object elimination capability of the proposed method by using VolumeDeform boxing sequence. In addition, results of pose error and trajectory error are listed in Table 6. CVSSP RGB-D dataset "CVSSP dynamic RGB-D dataset has RGB-D sequences of general dynamic scenes captured using the Kinect V1/V2 and two synthetic sequences." 6 This dataset is designed for nonrigid reconstruction. "Dog" sequence is selected because there exists little clearly distinct geometry in the environment with nonrigid dynamic object. In this sequence, the dynamic part is the movement of the arm of the person and the head of the dog. 
The exact value of the trajectory and reference 3D model of the environment are not provided, therefore, we evaluated the 3D mesh result qualitatively. As the frame number increases, our proposed method successfully eliminates dynamic in the frame ( Figure 17). Outdoor mapping performance We used the ZED camera in a hand-held setup for acquiring RGB-D images. We captured the frame in a resolution of 1280 Â 720 with a rate of 30 fps. To measure the 3D mapping performance of our proposed approach, default camera properties and standard settings are used without calibration or lens distortion correction. The voxel size of 0.01 m with a minimum of 0.3-m depth sensor setting is used. Our method successfully created the mesh of the environment with some distortions. For instance, 0.01mm voxel size results in coarse map especially in missing wire grid fence and part of the fence door ( Figure 18). Using smaller voxel size increases the mapping performance helps to maintain grid fence as in Figure 19. If an autonomous robot is flying around thin branches, telephone lines, or chain link fencing, a detailed map is required to avoid from the collision because those are the main collision areas for outdoor autonomous drones. In the second sequence, we captured the frame in a resolution of 1280 Â 720 with a rate of 10 fps using default camera properties. The voxel size of 0.02 m and maximum depth of 16 m settings are used in this sequence. As can be seen from Figure 20, the final mesh has no artifact of the walking person in the scene. However, the result of EF has traces of the walking person. Sensitivity analysis In this section, the sensitivity of the proposed methodology to the voxel size and the contribution weight of the intensity with respect to the depth are examined (see Table 6). In addition, the required time for per image is analyzed. The fr3/walking static xyz dataset is selected for the error and timing analysis because, in this sequence, camera is tracking a person at the beginning, and finally, camera never revisits again, which results in artifact in resulting mesh. In addition, most of the state-of-the-art system use this sequence for performance analysis. According to Table 7, in all cases, using larger voxel dramatically decreases the required calculation time, which makes that the proposed scheme is more suitable for realtime applications. However, using larger voxel increases the absolute translational error. Using larger ratio of the intensity information with respect to the depth information decreases the RMSE error, however, such situation is not valid for all cases. Therefore, utilization of applicationspecific constant increases the performance of the proposed scheme. Conclusion Visual SLAM has been studied over the last years. The research efforts have addressed SLAM problem. However, most of the approaches assume a stationary environment. Our proposed method, SDF-based dynamic mapping approach, can operate in environments, where high dynamics exist without depending on moving objects. In addition, a static object is moved, and the corresponding voxels are removed successfully from the mesh. After performing a complete evaluation of our proposed method for several sequences of the TUM, Bonn, and Volu-meDeform datasets, our method has an improved pose estimation capability even though there exist dynamic elements in the scene. 
SDF is generally straightforward to split into independent tasks that may run in parallel, however, memory requirements are used for storing a given SDF volume scales cubically with the grid resolution. Hence, special care has to be taken for efficient memory usage considering the performance. In addition, the SDF encodes surface interfaces at subvoxel accuracy through interpolation, however, sharp corners and edges are not straightforward to extract from an SDF representation. Improvement using adaptive variable voxel size and implementing featurepreserving surface extraction on sharp corners is left for future work. Given rising interest in developing visual odometry and SLAM algorithms for very dynamic environments, it is clear that a new RGB-D dataset containing fast and slow camera motions and varying degrees of dynamic elements would be greatly appreciated by researchers if made available. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by Roketsan Missiles Industries Inc. Supplemental material Supplemental material for this article is available online.
Building Blocks for Magnon Optics: Emission and Conversion of Short Spin Waves Magnons have proven to be a promising candidate for low-power wave-based computing. The ability to encode information not only in amplitude but also in phase allows for increased data transmission rates. However, efficiently exciting nanoscale spin waves for a functional device requires sophisticated lithography techniques and therefore, remains a challenge. Here, we report on a method to measure the full spin wave isofrequency contour for a given frequency and field. A single antidot within a continuous thin film excites wave vectors along all directions within a single excitation geometry. Varying structural parameters or introducing Dzyaloshinskii–Moriya interaction allows the manipulation and control of the isofrequency contour, which is desirable for the fabrication of future magnonic devices. Additionally, the same antidot structure is utilized as a multipurpose spin wave device. Depending on its position with respect to the microstrip antenna, it can either be an emitter for short spin waves or a directional converter for incoming plane waves. Using simulations we show that such a converter structure is capable of generating a coherent spin wave beam. By introducing a short wavelength spin wave beam into existing magnonic gate logic, it is conceivable to reduce the size of devices to the micrometer scale. This method gives access to short wavelength spin waves to a broad range of magnonic devices without the need for refined sample preparation techniques. The presented toolbox for spin wave manipulation, emission, and conversion is a crucial step for spin wave optics and gate logic. T he fundamental excitation of a spin precessing around its equilibrium position is known as a magnon, a magnetic quasi-particle. In magnetically ordered materials, the collective excitation of magnons manifests in a wave like behavior frequently called a spin wave. Over the past decade, the corresponding research field of magnonics has established itself as an indispensable part of magnetism research. 1 As is often the case in physics, the application of principles known from other disciplines such as quantum mechanics or electronics can result in fascinating phenomena including but not limited to spin wave tunneling, 2,3 spin wave manipulation by spin currents, 4,5 band gap tuning using magnonic crystals, 6−10 and Bose−Einstein condensation of magnons. 11,12 Additionally, magnon optics has attracted attention as an interdisciplinary research topic by transferring known optical wave phenomena to spin waves. 13−18 The magnon dispersion relation does not only account for the spin wave wavelength but also inherently depends on the respective orientation of magnetization and k-vector. 19 Although this makes it practically challenging to control spin waves, it is simultaneously enriching not only from a fundamental but also from an applicational point of view. To unlock the full potential of magnonic devices for applications it is crucial to have full control over the complex magnon dispersion relation. This allows for the development of a comprehensive toolbox of magnonic building blocks such as omnidirectional emitters, directional converters, and guiding channels to have access to for device fabrication. However, even the excitation of spin waves with applicational relevant length scales is challenging. There are several different approaches to excite propagating spin waves in magnetic materials. 
The most conventional method is the excitation by a microstrip antenna 20,21 or a coplanar waveguide. 22 Nevertheless, due to the challenges in lithography it is difficult to get sufficient excitation efficiencies for spin waves far below 400 nm. 23 Another possibility are spin torque devices. 24 However, these require high current densities and therefore are problematic for real-world applications. The fact that the size of the underlying excitation element needs to be of the order of the spin wavelength creates an additional challenge. Hence, some approaches to address this challenge exploit sophisticated magnetic textures to generate spin waves, 25 for example, by vortex core precession. 26−28 These methods are able to excite spin waves down to 80 nm. Nevertheless, the problem of vortex cores as spin wave emitters lies within the transition from the vortex structure into a continuous thin film while maintaining the omnidirectional character of the spin waves. 28 Alternative approaches make use of complex sample designs to excite short wavelength spin waves. 8,29−32 While the excitation mechanism exploited in spin wave grating couplers is able to excite a continuous spectrum of k-vectors, 33 their inherent structure allows for multiple but only discrete directions. 29 Hence, none of the presented methods is able to excite the full isofrequency contour 34 due to the directional limitations of the excitation structure which limits their versatility for magnonic applications. Within the last 70 years, magnetism has always been related to data storage and information technology. Although it is unlikely that magnonic devices will make up a large part of future processors, the integration of a magnonic circuit 35−37 as a specialized section of a chip is a realistic prospect. An essential task is to identify applications for which wave-based computing can outperform traditional approaches. 38 Encoding information not only in amplitude but also in phase allows magnonic devices to exploit the fact that multiple wavelengths can coexist at the same frequency potentially allowing for a multiplexing approach in magnonic devices. 39 In this work, we present a single magnetic antidot structure as a multipurpose spin wave emitter and converter. We use scanning transmission X-ray microscopy 39−41 (STXM) and micromagnetic simulations 42 to demonstrate the possibility to either create omnidirectional spin waves by exciting the antidot directly by a microstrip antenna or to use it as a directional spin wave converter. Although it has been shown that a single antidot is able to convert plane waves into caustic spin waves, 43−45 it has not been demonstrated experimentally that a single antidot and even more complex structures actually populate all possible k-vectors. In fact, different parts of the isofrequency contour can be populated selectively by varying excitation frequency and applied magnetic field. Concluding from the experimental results, we present a theoretical combination of a guiding channel and a single antidot, which can be utilized as a short spin wave beam generator. 46−54 We propose these simple structures as versatile and adaptable building blocks for a wide range of spin wave application devices which require spin wave emission of different k-vectors, spin wave conversion, or steering around corners. 
RESULTS AND DISCUSSION The first part of this section will elaborate on the possibility of using different magnetic structures to excite and measure spin wave wave vectors along all directions within a single field geometry and therefore the full isofrequency contour. Additionally, we propose methods to manipulate their intensity distribution and shape. Subsequently, we present measurements of spin wave converter and emitter structures which allow for the generation of short wavelength spin waves without the necessity of sophisticated lithography steps. Micromagnetic simulations reveal the underlying emission and conversion mechanism which gives rise to 100 nm spin waves. Measuring Isofrequency Contours. Antidot Structure. Measuring a full isofrequency contour requires a structure that is able to excite spin waves of all wave vectors. Therefore, we fabricated 50 nm thick permalloy samples with microstrip antennas structured on top. Afterward, two antidots of 800 nm in diameter are milled into the permalloy thin film, one of which is positioned halfway on the microstrip antenna. The other one is positioned 1.6 μm away from it (see Figure 1a). An example for a frequency filtered spin wave measurement of the antidot close to the antenna is shown in Figure 1b. Additionally, three spin wave movies are included in the Supporting Information. To make it easier to distinguish between experimental and simulated results, all experimental figures will be framed by a purple box while simulated results will be displayed in a green box throughout the article. The picture illustrates a 3D rendered snapshot of the m z component of the dynamic magnetization. The white circle in the middle denotes the position of the antidot. The colormap positioned below the spin wave is a 2D representation of the spin wave for which relative phase is encoded in color, amplitude in brightness. A reciprocal space representation of the measurement is displayed in Figure 1c. The intensity distribution reveals that although all possible k-vectors are populated, the spin waves along the backward volume direction are more emphasized. It is clearly visible that a single antidot structure by itself is a working device to excite and measure the full isofrequency contour. Magnon Zone Plate. It is challenging to focus X-rays using conventional single refractive lenses since the refractive index for X-rays in most materials is very close to unity. Therefore, diffractive type optics called Fresnel zone plates are commonly used. 55 The same effect can also be utilized to focus spin waves. 14 Although an antidot structure is arguably the simplest structure to excite all wave vectors, the full isofrequency contour can also be excited in nontrivial systems, for example, by a hole arrangement corresponding to a 1D Fresnel zone plate. A scanning electron microscope picture of the sample can be seen in Figure 2a. It consists of a magnetic zone plate and a microstrip antenna, which is positioned next to the zones. The illustration includes an example measurement of spin waves being focused by the zone plate. 14 The reciprocal space representation of two different measurements can be seen in Figure 2b,c. Figure 2b illustrates the isofrequency contour at f = 5.65 GHz and B = −36 mT. It is visible that these measurement parameters do not populate the entire isofrequency contour in reciprocal space. In this particular measurement, only waves with a maximum angle of 13°between k-vector and magnetization could be excited. 
At this frequency and field combination, pure backward volume spin waves are not detected in the system. Figure 2c displays the isofrequency contour measured at f = 3.59 GHz and B = −14 mT. It can be seen that the full isofrequency contour can be excited in more complex structures as well as in trivial antidot systems. For this particular frequency, backward volume waves can be excited due to multiple wavelengths fitting between the zones. Hence, the system allows for a selective population of the isofrequency contour depending on frequency and applied magnetic field. On the basis of the isofrequency contour, the effective field acting on the spin waves can be evaluated including all external and internal contributions such as applied, anisotropy, and demagnetizing fields. Because the size and orientation of the contour is very sensitive to changes in the direction and magnitude of the magnetization, it can be an indicator for changes of the magnetization vector within the region of interest. The rotation of the isofrequency contour in Figure 2b ACS Nano www.acsnano.org Article reveals that the local magnetization is slightly tilted by approximately −5°with respect to a horizontal measurement plane. Tuning of field and excitation frequency is not the only method for manipulating the size and shape of an isofrequency contour. The Dzyaloshinskii−Moriya interaction (DMI) is an antisymmetric exchange contribution which is well-known for stabilizing chiral spin textures in bulk 56 and multilayer materials. 57 However, its antisymmetric nature also manifests itself in its interaction with magnons. Similar distortions have also been observed for electric fields. 58 Figure 3 displays simulated isofrequency contours generated by an antidot structure including artificial interfacial DMI. The DMI vector points to out-of-plane with its effective influence on the spin waves along the y-direction. It is very visible that DMI does not only change the size but also the shape of the isofrequency contours. The antisymmetric interaction is reflected by a nonsymmetric distortion in reciprocal space along the k y -axis, which has been proposed to be a measure for the DMI constant. 59 With increasing DMI, the distortion of the contour gradually increases and hence affects the spin wave propagation in a nonsymmetrical manner. Therefore, wavelength as well as direction of the spin wave can be manipulated by introducing DMI into a material. Spin Wave Converter and Emitter. Emission and Conversion Mechanism. A close-up view of the antidot sample in Figure 1a is displayed in Figure 4a. In the following, it will be distinguished between spin wave converters and spin wave emitters. Both components differ in their position with respect to the microstrip antenna as well as their functionality in emitting spin waves or converting incoming plane waves. Both antidots are sufficiently separated from each other to avoid interaction. When running a high-frequency current through the microstrip antenna, spin waves are either directly emitted by the emitter or start to propagate from the antenna toward the converter. In literature, the conversion mechanism has been described as spin wave scattering 43 or as a consequence of the Schlomann effect 60 which describes the local coupling of an external field to local changes of the effective field. The coupling in in-plane magnetized films results in caustic beams propagating away from the antidot. 44,45 The underlying mechanism turns out to be the same for converter and emitter. 
In both cases, the local effective field changes necessary for spin wave emission are caused by the demagnetizing field of the antidot. For the emitter, the magnetization is excited by the plane wavefront and by the antenna itself. Propagating spin waves are even more amplified in the presence of the antenna. An illustration of the demagnetizing structures responsible for the emission can be found in the Supporting Information. For the converter, the incoming spin wave mimics the variation of an external field. Hence, the local magnetization starts oscillating subsequently converting the incoming spin waves in wavelength and direction. The difference is that the effective field variation which is needed to drive the local magnetization is either caused only by the spin waves (converter) or mainly by the antenna (emitter). Expanding on previous results from others, 43,44 the high resolution of STXM allows us to resolve these localized edge modes converting the incoming wave, as well as the full spectrum of the outgoing spin waves. In Figure 4b,c, the full spatial spin wave spectrum at an excitation frequency of f = 4.21 GHz and an applied field of B = 25 mT is presented which was obtained from a spatial Fourier transformation. Extending on other spin wave excitation mechanisms, 29 the emitter as well as the converter are capable of exciting all possible kvectors for a given frequency and field, that is, the full isofrequency contour, characterized by a horizontal eight in reciprocal space. In the presented case, propagation along x-direction represents backward volume modes while the y-direction denotes Damon-Eshbach geometry. 61,62 It is visible that converter and emitter spectrum are qualitatively equal with ACS Nano www.acsnano.org Article maximum k-vector magnitudes of |k ⃗ | = 8.31 μm −1 which corresponds to a wavelength of λ = 120.4 nm whereas the spin waves directly excited by the antenna have a wavelength of approximately λ = 6 μm. Because both mechanisms excite spin waves in multiple directions, this might be applicable for spin waves multiplexing applications in magnonic devices. 39 To showcase the versatility of the presented devices, the spin wave emitter was measured for different excitation frequencies and magnetic fields. The results are presented in Figure 5a. The second of the three panels displays the distribution of kvectors at an excitation frequency of f = 4.21 GHz and a magnetic field of B = 30 mT. It can be seen that the isofrequency contour is almost collapsed for these excitation parameters. Further decreasing the frequency or increasing the field will cause the isofrequency contour to collapse completely, drastically reducing the excitation efficiency. As it can be seen in the first and third panel, the isofrequency contour expands when decreasing the magnetic field to B = 20 mT or increasing the frequency to f = 4.71 GHz. Although the signal-to-noise ratio is slightly worse at higher excitation frequencies due to the synchrotron operation mode, the third panel shows that the spin wave emitter is able to directly excite spin waves with k-vector magnitudes of |k ⃗ | = 10.02 μm −1 corresponding to a wavelength of λ = 99.8 nm. It should be mentioned that the creation of these short wavelength spin waves did not rely on sophisticated lithography processes for sample preparation. Although approaches to reduce sample structure sizes to spin wave relevant length scales can be complex and challenging, adding an antidot into a sample is rather straightforward. 
In this case, the important length scale is not given by the structure itself but by the demagnetizing features. Micromagnetic Model. To complement our findings, we performed micromagnetic simulations of the antidot structures. The simulated results can be seen in Figure 5b. When comparing Figure 5a,b, it is obvious that the experiment is resembled well by the simulations. In particular, for the cases for which the experiment has a high signal-to-noise ratio the results are almost identical. A real space comparison of experiment and simulations can be found in the Supporting Information. To obtain a deeper insight into the spin wave emission and conversion mechanism, we performed simulations of the antidot at two different positions with respect to the antenna. The right column of Figure 6a Times are chosen such that they display four different states of the system. Figure 6a displays the antidot directly after turning on the excitation where no short wave spin waves are emitted yet. It can be seen that spin waves start to propagate away from the antenna and are slightly scattered by the demagnetizing structure of the antidot resulting in caustic beams. In the second frame (Figure 6b), short wavelength spin waves start to propagate away from the emitter caused by the coupling of the effective field to the demagnetization structures. Natural imperfections in its circularity ensure that small demagnetizing structures are present all around the antidot, even for a much larger antidot structure. This allows for a difference of at least 2 orders of magnitude between structural length scales and spin wave wavelength. Compared to CMOS technology, the smallest element carrying information is therefore not given or limited by the size of the structured element, which allows magnonics to operate far below the limits of lithography. The third illustration (Figure 6c) displays the system while emitting spin waves in all directions with wavelengths given by frequency and field. Looking at the spatial Fourier transformation displayed left of the three frames, it can be seen that the full isofrequency contour does not appear directly after turning on the excitation. For numerical reasons, it takes ACS Nano www.acsnano.org Article approximately 6 ns for it to be fully visible in reciprocal space. However, this is not the steady state of the system. Approximately 10 ns after the beginning of the excitation, demagnetizing structures located below the antenna start to oscillate and emit spin waves which are further amplified by the oscillating field. This steady state of the system is shown in Figure 1d. Hence, the simulations reveal that there are two distinct mechanisms emitting short wavelength spin waves: the excitation of localized demagnetizing structures near the antidot 63 and the local resonant oscillation of demagnetizing structures below the antenna. From the simulations, we can conclude that the short wavelength spin waves visible in Figure 1b are mainly caused by these demagnetizing structures below the antenna. In contrast to the emitter structure, the antenna has been moved 10 μm away from the antidot for the simulations presented in Figure 6e−h. In addition to Figure 4, these simulations further prove that the antidot cannot only be excited by the direct magnetic influence of the antenna but also by incoming propagating spin waves. 
Although the 2 ns frame already displays short wavelength spin waves for the emitter, they are hardly visible for the converter which is also reflected by the corresponding illustrations of reciprocal space. This is mainly due to the fact that the spin wave needs to first travel 10 μm, resulting in a time delay compared to the emitting structure. The main difference between simulated emitter and converter can be seen in Figure 6d and Figure 6h. The emitter has a strong tendency toward backward volume spin waves amplified by the presence of the antenna. In contrast, the converter equally excites all k-vectors in reciprocal space. Nevertheless, it is clearly visible that both structures are able to excite the entire isofrequency contour which allows for the generation of a full spin wave spectrum including very short backward volume modes. Spin Wave Beam Generation. Although it might be interesting for future applications to have multiple wavelengths within one spin wave channel, concepts for spin wave application need a coherent spin wave beam or package. Figure 7 illustrates a possible application for the spin wave converter by showcasing its steering and wavelength reduction capabilities. The following simulation highlights the capabilities of a converter structure as a spin wave beam emitter. The first graph in Figure 7a displays the position of the snapshots with respect to the beginning of the excitation at time t = 0. ACS Nano www.acsnano.org Article damping toward the antidot. Low damping can be achieved by using yttrium iron garnet which is widely known to have an exceptionally low damping coefficient 64−67 for magnons. The light blue region around the channel displays a region of increased damping which can be achieved by covering the magnetic layer with platinum for strong damping enhancement. 65 The different frames illustrate four different states of the system. In Figure 7b, the majority of the plane waves emitted by the antenna are still traveling along the Damon-Eshbach direction toward the antidot. No short wave conversion can be observed at that point in time. In the second frame, the converter starts to emit short wavelength spin waves along the backward volume direction which have only traveled approximately 1.5 μm. In the third frame, the converted backward volume waves have traveled half way into the channel and continue to travel until the system reaches its steady state, displayed in Figure 7e. As it can be seen from the illustrations the device is not only able to convert the long wavelength Damon-Eshbach spin waves into short wavelength backward volume spin waves but can simultaneously steer the incoming spin waves around a 90°a ngle converting it into a coherent short wavelength spin wave beam. These two characteristics are of interest for magnonic applications in integrated circuits. It not only allows for easy scalability of spin waves by reducing their size by 2 orders of magnitude but it also enables magnonic devices to function around corners further reducing their potential size. For example, it is conceivable to realize a spin wave majority gate 37,68 on the scale of a few micrometers using a coherent spin wave beam as input. Moreover, the anisotropic dispersion relation allows the device to work at two different wavelengths while maintaining equal frequencies. CONCLUSION Many articles report on various methods for the generation of short wavelength spin waves essential for the scalability of future magnonic devices and circuits. 
However, most of them require sophisticated structural or magnetic designs to achieve length scales relevant for applications. In this article, we presented a generation technique by means of a simple antidot. The structure can serve as a multipurpose object by either acting as emitter when placed next to a microstrip antenna or as a spin wave converter when positioned several micrometers away from it. We found that the emitter as well as the converter populate the same k-vectors in reciprocal space independent of their position with respect to the antenna, that is, all k-vectors allowed for a certain combination of field and frequency. By presenting results from a magnetic Fresnel zone plate system, it was shown that also nontrivial systems can be used to measure a full isofrequency contour. Moreover, the system can even be used to selectively excite distinct parts of the isofrequency contour depending on the applied field and frequency. Introducing DMI into the system allows for a nonsymmetric manipulation of reciprocal space. This technique provides a compelling approach to measure all wave vectors of a system within just one excitation geometry and allows for the evaluation of effective field changes caused by demagnetization or anisotropy fields. Using the emitter as a versatile tool for spin wave generation at various fields and frequencies, it is possible to create backward volume spin waves with wavelengths as small as 100 nm. This limit is not given by the generation mechanism itself but rather by the efficiency of the microstrip antenna at higher frequencies or lower fields. Compared to other generation techniques, a simple antidot is capable of either emitting or ACS Nano www.acsnano.org Article converting incoming spin waves and reducing their size by approximately 2 orders of magnitude. By performing micromagnetic simulations, it was confirmed that the obtained reciprocal emission spectra match well with theoretical predictions. Moreover, simulations gave insight into the emission and conversion mechanism, both of which consist of magnetization features being driven into oscillation by either the antenna or the incoming spin wave. On the basis of the findings of the spin wave converter, we simulated a system to potentially isolate spin wave beams. It could be seen that after 45 ns a continuous beam of small wavelength spin waves was well isolated from plane waves exciting the converter. This concept can be of impact for potential spin wave applications which need to steer spin waves around corners or applications at multiple different wavelengths operating at the same clock frequency. Additionally, a coherent spin wave beam of small wavelength reduces the size of existing magnonic devices down to the few micrometer length scale. We anticipate that the presented spin wave excitation and conversion method is especially useful for the fabrication of future spin wave devices by achieving application relevant spin wavelength scales without the need for nanometer-sized lithography. It is easily conceivable that the generation of a coherent spin wave beam not only allows for the production of exceptional magnonic devices but also eases the down-scaling of existing magnonic gate logic. METHODS The permalloy rectangle used as the basis for the antidot structure in this paper were patterned using photolithography and direct laser writing. Photo resists used were LOR 3A and AZ ECI 3027 by MicroChem and MicroChemicals, respectively. 
The UV exposure was done with KLOE's Dilase 250 laser writing system. A 100 μm × 200 μm × 50 nm permalloy (Ni 80 Fe 20 , Py) thin film was deposited on top of a X-ray transparent silicon nitride membrane Si 3 N 4 (100 nm)/ Si(100). As oxidation protection, a 2 nm thick Al layer was deposited on top of the Py. The 3 μm wide microstrip antenna was fabricated in a second lithography step and consists of 10 nm Cr/180 nm Cu/10 nm Al. The microstructures were deposited with ion beam sputtering at base pressures below 1 × 10 −7 mbar. After thin film deposition, a focused ion beam was used to mill two antidots into the permalloy. An illustration of the antidot sample is shown in Figure 4a. The zone plate samples consist of 50 nm thick Py with a 5 nm Al capping layer deposited on silicon nitride by evaporation at pressures below 1 × 10 −7 mbar. Zone plate structures were patterned using electron beam lithography. The microstrip antenna was isolated from the magnetic film by depositing 10 nm Al 2 O 3 with atomic layer deposition. The 1.6 μm wide antenna consists of 10 nm Cr/150 nm Cu/5 nm Al. An illustration of the zone plate sample can be seen in Figure 2a. All measurements presented in this article were conducted on a scanning transmission X-ray microscope (STXM) 40,41 at the MAXYMUS endstation at the BESSY II synchrotron radiation facility in Berlin. STXM allows for high resolution in space (20 nm) as well as time (35 ps). After acquisition, each spin wave movie was filtered in the frequency domain and subsequently transformed into reciprocal space by applying a spatial Fourier transformation. For a comprehensive elaboration on the analysis process, the reader is referred elsewhere. 40,41 If not stated differently, simulations 42 without Dzyaloshinskii− Moriya interaction were performed with a saturation magnetization of M s = 5.04 × 10 5 A/m and a damping coefficient α = 0.0067 both of which were obtained from ferromagnetic resonance measurements. The exchange constant was set to A ex = 5.5 × 10 −12 J/m. To reproduce a continuous thin film we set periodic boundary conditions in x-and y-direction. Spin wave interference between the simulation boxes was avoided by gradually increasing the damping coefficient close to the box boarders. Simulations with Dzyaloshinskii−Moriya interaction were performed with identical simulation parameters but contained an antenna which is able to excite all k-vectors equally. The damping was set to α = 0.0001. Illustration of the x-and y-component of the demagnetizing structures obtained from the simulation as well as a real space comparison between experiment and simulation (PDF) Emitter, f
Comparison of cemented and uncemented fixation in total hip replacement: a meta-analysis. Background The choice of optimal implant fixation in total hip replacement (THR)—fixation with or without cement—has been the subject of much debate. Methods We performed a systematic review and meta-analysis of the published literature comparing cemented and uncemented fixation in THR. Results No advantage was found for either procedure when failure was defined as either: (A) revision of either or both components, or (B) revision of a specific component. No difference was seen between estimates from registry and single-center studies, or between randomized and non-randomized studies. Subgroup analysis of type A studies showed superior survival with cemented fixation in studies including patients of all ages as compared to those that only studied patients 55 years of age or younger. Among type B studies, cemented titanium stems and threaded cups were associated with poor survival. An association was found between difference in survival and year of publication, with uncemented fixation showing relative superiority over time. Interpretation While the recent literature suggests that the performance of uncemented implants is improving, cemented fixation continues to outperform uncemented fixation in large subsets of study populations. Our findings summarize the best available evidence qualitatively and quantitatively and provide important information for future research. Background The choice of optimal implant fixation in total hip replacement (THR)-fixation with or without cement-has been the subject of much debate. Methods We performed a systematic review and meta-analysis of the published literature comparing cemented and uncemented fixation in THR. Results No advantage was found for either procedure when failure was defined as either: (A) revision of either or both components, or (B) revision of a specific component. No difference was seen between estimates from registry and single-center studies, or between randomized and non-randomized studies. Subgroup analysis of type A studies showed superior survival with cemented fixation in studies including patients of all ages as compared to those that only studied patients 55 years of age or younger. Among type B studies, cemented titanium stems and threaded cups were associated with poor survival. An association was found between difference in survival and year of publication, with uncemented fixation showing relative superiority over time. Interpretation While the recent literature suggests that the performance of uncemented implants is improving, cemented fixation continues to outperform uncemented fixation in large subsets of study populations. Our findings summarize the best available evidence qualitatively and quantitatively and provide important information for future research. ■ The success of total hip replacement (THR) and the frequency in which it is performed are largely due to the development of the cemented low-friction arthroplasty (Charnley 1960); its survival rate of 80% at 25 years (Berry et al. 2002) remains unsurpassed. The improved survival of circumferentially coated uncemented cups and stems that allow bone to grow into or onto the prosthesis (Zicat et al. 1995, Kim et al. 1999, Della Valle et al. 2004, Sinha et al. 2004 has supported their growing use in the United States, despite the higher costs (Agins et al. 1988, Barber and Healy 1993, Clark 1994, Mendenhall 2004. 
In 2003, an estimated two-thirds of all primary THRs were performed with uncemented fixation (Mendenhall 2004). This contrasts with some European countries such as Sweden, which have adopted these newer uncemented technologies more cautiously and have much lower revision rates (Malchau et al. 2002, Kurtz et al. 2005. Both cemented and uncemented implants are heterogeneous groups with many factors that can influence survivorship, such as geometry, materials, surface finishes, and bearings. Moreover, study-specific factors including surgical approach, expertise of the surgeon, and study design may add to baseline differences between studies. In order to summarize the best available evidence on the relative success of cemented and uncemented fixation in THR from comparative studies, we conducted a systematic review of the literature and a meta-anal-ysis. We concentrated specifically on the impact of cemented versus uncemented fixation on revision rates. Inclusion criteria were established a priori to minimize any possible selection bias. The objective was to identify all studies including information on: (1) THR performed for any reason other than acute fracture, (2) controlled comparison of cemented vs. uncemented fixation, and (3) outcome as measured by survival to time of revision surgery for any reason. All randomized controlled trials and comparative observational studies with a control group were included. The following were excluded: (1) studies that included revision cases, (2) studies including cancer or tumor cases, (3) animal studies, (4) studies containing previously published data, (5) studies that did not report any revision events, and (6) case reports. Initial screening of articles was performed by one of us (SM). Two reviewers (SM and KJB) then independently assessed each of the studies for eligibility for inclusion. If the title or the abstract was judged by either reviewer to be potentially eligible, the full article was examined. Any disagreements were resolved by consensus. Data extraction and synthesis Data were extracted by one of us (SM) and checked for accuracy by a second investigator (KJB). Information retrieved from each study included survivorship estimates, study design, participants, implants and methods of fixation employed, definition of outcome measures, study setting, number of surgeons, statistical methods employed, factors that were used to match or stratify patients, patient characteristics, sample size and follow-up duration, withdrawal or censorship data, and potential sources of conflict of interest. Failure events were described as any revision surgery for removal or exchange of (A) either cup, stem or both, or (B) one specific component. We performed stratified analysis on key components of study design (i.e. randomized vs. non-randomized studies, age range, and definition of failure event) and regression analysis (meta-regression) on aggregate measures of patient characteristics within studies, in assessing whether study outcomes varied systematically with these features (Colditz et al. 1995). Reporting was carried out in line with QUOROM (Moher et al. 1999) and MOOSE (Stroup et al. 2000) guidelines. Statistics Differences in survival and standard error were derived from reported survival analysis estimates or from reported differences in the proportion of revised THRs. We performed meta-analysis using inverse-variance weighting (Sharp and Sterne 1998) to calculate fixed and random effects summary estimates. 
The convention in reporting results here is that summary estimates greater than zero favor uncemented fixation and those less than zero favor cemented fixation. Between-study heterogeneity was assessed using a Chi-square statistic (Lau et al. 1997) and the more conservative random effects estimate was reported. Studies performing multiple comparisons on the same treatment group or not specifying whether there was patient overlap between such repeated comparisons could result in a potential loss of independence. In such cases, adjustments were made to the weighting of studies using a previously described method for conservatively inflating variance estimates (Jordan et al. 2002, Enanoria et al. 2004. We used subgroup analysis to explore heterogeneity potentially caused by discrete factors identified a priori. These included study design (randomized vs. non-randomized), study site (registry vs. single institution), component followed (cup versus stem), and patient age range (≤ 55 versus > 55 years of age). We also tested the hypothesis that certain groups of implants that have performed poorly in observational studies could influence summary estimates, such as titanium stems and screw-fit or macro-ingrowth cups (Robinson et al. 1989, Tompkins et al. 1994, Rorabeck et al. 1996b, Kubo et al. 2001, Aldinger et al. 2004, Fink et al. 2004, Grant and Nordsletten 2004. Sensitivity analysis was performed to assess the contribution of each individual comparison to the summary estimate. Meta-regression was used to evaluate the association between study results and year of publication, duration of follow-up, and characteristics of the study sample including sample sex ratios and average age. A p-value of less than 0.05 was considered significant. Potential for publication bias was evaluated with the use of Egger's test for funnel plot asymmetry (Egger et al. 1997). All analyses were performed using STATA 8.2 (Stata Corporation, College Station, TX). Results Of the 747 citations identified after literature searches, 20 studies (reporting 24 comparisons) met our inclusion criteria ( Figure 1). Study char-acteristics and survival estimates are summarized in Tables 1 and 2. When all 24 comparisons were pooled (Table 3), no significant benefit to either fixation method was found among subgroups defined by study setting (registry-or multiple center-based vs. those from single institutions), study design (randomized and non-randomized studies), or failure definition (type A: either component or both, vs. type B: specific component failure). All subsequent analyses were performed within subgroups defined by failure definition. Type A failure definition: revision of cup or stem, or both The forest plot ( Figure 2) represents the pooled estimate showing no significant overall advantage of one fixation method over the other. The seven comparisons that did not restrict analysis to patients less than or equal to 55 years of age favored cemented fixation by 4% and differed significantly from the group of two studies that did (Table 4). Sensitivity analysis did not show a significant result with omission of any single study. Meta-regression did not show any significant associations between duration of follow-up, year of publication, age, or sex ratio and the outcome estimate. The Egger test for funnel plot asymmetry did not reveal any evidence of publication bias (p = 0.2). 
Type B failure definition: revision of cup or stem specifically 10 studies compared cemented and uncemented stems, and 5 compared cemented and uncemented cups; all were non-randomized. From the Norwegian registry (Havelin et al. 2000), uncemented stem and cup survivorship estimates were calculated by combining data on both hydroxyapatitecoated and porous-coated designs. There was significant heterogeneity present and the pooled estimate shown in Figure 3 shows a difference in survival probability that does not significantly favor either fixation method. In the analysis of subgroups (Table 4), several important sources of heterogeneity were discovered. Subgroup analysis differentiating studies using a titanium stem in the cemented group from those reporting use of a stainless steel or cobalt chrome cemented stem demonstrated that the former favored uncemented fixation whereas the latter favored cemented fixation, and the dif- ference between the two was statistically significant. For comparisons of cups using a threaded or macro-ingrowth implant with those using a microingrowth or on-growth uncemented design, the former favored cemented fixation whereas the latter did not, and the difference between subgroups was significant. Sensitivity analysis revealed that omission from the pooled analysis of the study of cup survival by Gaffey et al. (2004) (Figure 4) resulted in a shifting of the pooled estimate towards favoring cemented fixation. Meta-regression showed year of publication to be associated with improved survival of uncemented implants relative to cemented implants ( Figure 5). The Egger test for funnel plot asymmetry did not reveal any evidence of publication bias (p = 0.5). Discussion We have summarized the best evidence from comparative studies on the use of cemented vs. uncemented fixation in THR. 20 studies comparing cemented and uncemented fixation in THR met the criteria for inclusion in this systematic review. While meta-analysis did not demonstrate overall superiority of either method of fixation as measured by a difference in survival, subgroup analysis of the type A comparisons not restricted to young patients (less than or equal to 55 years of age) demonstrated a statistically significant survival advantage with cemented fixation. Among type B studies, a linear association between survival difference and year of publication was found, with uncemented fixa- tion outlasting cemented comparators after 1995. Poor performance by cemented titanium stems and threaded and macro-ingrowth cups were found to lead subgroup estimates to favor uncemented stems and cemented cups in their respective subgroups. These findings offer important lessons for future investigations. This analysis suggests that cemented fixation gives favorable results at the population level, though some caution in drawing inferences is advisable. These results may have limited generalizability to the United States or other countries where cemented fixation is performed much less frequently, where THR is performed at an earlier mean age (Lucht 2000, Puolakka et al. 2000, CDC 2002, Malchau et al. 2002, or where the population is not as socially or demographically uniform. Moreover, young patients suffer from higher failure rates (Berry et al. 2002, Malchau et al. 2002 and pose a dilemma in the choice of implant and fixation method. Lower revision rates with uncemented fixation at 8-10 years in patients who are 50 years old or younger (Capello 1990, Xenos et al. 1995, Kronick et al. 1997, Fink et al. 
2004) encouraged optimism. The 7% difference between the population level (age unrestricted) and younger subgroup estimates (Table 4: -0.038 vs. 0.031) means that prospective studies should be designed to compare the best available cemented implants against the best available uncemented implants without pooling all age groups, because results are likely to differ between groups. Improvement in the relative performance of uncemented fixation in recent years was found among type B studies. This is consistent with data from numerous uncontrolled studies (Zicat et al. 1995, Kim et al. 1999, Della Valle et al. 2004, Sinha et al. 2004). A study on the survival of more modern uncemented cups by Gaffey and colleagues (2004), compared to the results from a historical cemented control group, has provided some of the strongest evidence to this effect at 15 years of follow-up. That study is the most current of the 5 specifically addressing cup survival, and the only one to favor uncemented fixation, which may explain why its omission in the sensitivity analysis (Figure 4) led to a significant shift in the summary estimate of survival difference to favor cemented fixation. The study by Gaffey et al. (2004) was designed to assess the importance of implant fixation with cemented vs. uncemented technique, and part of the difference in survival may be mediated through impact on wear rates. Uncemented fixation has been found to increase wear rates, which can lead to early failure (Tanzer et al. 1992, Xenos et al. 1995, McCombe et al. 2004). Improvements in polyethylene production, alternate bearing surfaces, and other design features may have contributed to the relatively improved survival of uncemented implants. Further studies will be necessary to confirm these assertions. Cemented stems of titanium and threaded macro-ingrowth cups explain some of the inconsistency in the results of studies that were included in the meta-analysis. For series of cemented titanium stems, numerous authors have reported loosening rates of 10-49% at 3-5 years (Robinson et al. 1989, Tompkins et al. 1994, Rorabeck et al. 1996a). We found cemented fixation to be inferior when titanium stems were used and superior when a stainless steel or cobalt-chrome stem was used. Similarly, threaded macro-ingrowth cups have performed poorly, with loosening rates of 25-55% at 10-15 years of follow-up (Kubo et al. 2001, Aldinger et al. 2004, Grant and Nordsletten 2004). When these implants were tested against cemented cups, cemented cups outperformed them by 5%, whereas studies comparing porous-coated Harris-Galante I/II cups to cemented polyethylene cups moved the difference in survival in the direction of favoring uncemented fixation by 9%. The World Medical Association Declaration of Helsinki (World Medical Association 1997) requires that new treatments be tested against the best known current standard. We found that control groups have not always been selected with regard to the best available treatment or standard of care. Future comparative trials should avoid these past mistakes and use systematic reviews and comprehensive summaries of implant performance from the implant registries, with long-term follow-up, in selecting comparator groups.
4 randomized controlled trials assessing hybrid fixation (cementation of one component and uncemented fixation of the other) were excluded because they either only focused on polyethylene wear rates and component loosening or had inadequate follow-up to detect any failures resulting in revision (Godsiff et al. 1992, Karrholm et al. 1994, Onsten et al. 1998, McCombe and Williams 2004). With respect to failure defined as revision of either or both components (type A), only the Danish and Swedish registries presented data on hybrid fixation as distinct from purely cemented or uncemented fixation, and this was judged inadequate for independent subgroup meta-analysis. Thus, the hybrid fixation method was only assessed indirectly through analysis of studies comparing individual component failures. While the majority of studies that were included were non-randomized and subject to significant bias and confounding, the potential for bias is not restricted to non-randomized studies. Of the 3 randomized controlled trials, only Laupacis et al. (2002) documented proper randomization techniques and concealment of allocation, and discussed reasons for exclusion or non-participation. Loss to follow-up or non-response during data collection are also important sources of selection bias. Lack of attention to this problem was seen among both randomized and non-randomized studies in this review. Of the 3 randomized studies that mentioned the reasons for their exclusion and censoring, only Laupacis et al. provided the type of flow chart and accounting for withdrawals that the CONSORT statement (Altman 1996) requires in documentation of randomized controlled trials. Such clear and transparent reporting of all features related to the validity of such trials ought to be enforced in orthopedic journals, as it is in many high-impact medical journals (Altman 1996, Moher et al. 2001a). Definition of a failure event in studies of implant survival is fraught with inconsistencies. While we attempted to use estimates based on revision undertaken for any reason, because this is less subjective than "aseptic loosening" or "mechanical failure", the propensity for differential misclassification and resulting bias is present. This is because the decision to undertake a revision is influenced by the opinions of the surgeon and the patient. Moreover, this is not an adequately sensitive definition of all clinical failures. Revisions are occasionally performed on well-fixed implants without evidence of infection or mechanical failure, and many radiographically loose or symptomatic implants never come to be revised.
Reporting of health-related quality of life and functional outcome, in addition to standardized reporting of failure events in survival analyses, will improve the accuracy and comparability of clinically relevant outcomes in future research. Randomized studies using radiostereometry (Mjöberg et al. 1986, Karrholm et al. 1994), a highly sensitive and specific computerized radiographic technique for quantifying implant migration and wear, may become useful surrogates in the future for detecting early failure while exposing fewer patients to new technologies that are potentially dangerous. The studies reviewed here have shown that failure events in THR are rare, and that long-term follow-up is required to generate meaningful estimates of difference in survival probability. It is not uncommon for an implant being studied to be removed from the market or replaced by a new version before the scheduled endpoint of a trial, as was the case for the Mallory-Head prosthesis (Biomet, Warsaw, IN) used by Laupacis et al. (2002). This can make clinical trials costly, logistically challenging, and, in the end, potentially irrelevant. Some authors assert that national registries ought to be the research study design of choice to provide timely and relevant outcomes data to guide clinical practice, as they have in Scandinavia (Maloney and Harris 1990, Maloney 2002, Howard et al. 2004), and the results of this study underscore the need for this powerful tool for improvement of patient outcomes. Randomized clinical trials will, however, continue to be valuable when: (1) the question of relative superiority has been narrowed down to a few seemingly equivalent choices of fixation or implants, and a specific target population has been identified under which the experiment could be undertaken with equipoise; or when (2) the development of validated surrogate markers for early failure (such as radiostereometry) allows smaller sample sizes and shorter duration in the testing of a new strategy against an established control. Several limitations in our work are important to note. In any systematic review or meta-analysis, there may be publication bias, incomplete ascertainment of studies, and errors in data extraction. The studies included in this review represent a diversity of designs, patient populations, surgical implants and approaches, and methods for assessing their efficacy. We believe that restricting our analysis to randomized studies alone would have ignored most of the comparative evidence on the subject. Also, certain potential predictors of outcome, such as race, rehabilitation program, and activity level, could not be explored, due to very limited information on these variables among the studies that were included. We did not find any statistical evidence of funnel plot asymmetry to suggest publication bias. We attempted to minimize errors in data extraction through cross-checking of all quantitative information by two of the authors. We used all sources of data that we could identify from a comprehensive literature search, without any restriction regarding language, to find studies for inclusion. Given the limitations in the published literature on this topic, the methods used in this systematic review and meta-analysis had limited bias and explored sources of heterogeneity to the greatest degree possible. In conclusion, the published evidence suggests that cemented fixation still has superior survival among large subgroups of the populations studied, and that survival of uncemented implants continues to improve. The effect on analyses of relative benefit from the use of suboptimal control groups (such as those with cemented stems of titanium and threaded cups) emphasizes the need for more uniform standards in the selection of control groups in future trials. Further research and improved methods are necessary to better define specific subgroups of patients in which the relative benefits of cemented and uncemented implant fixation can be more clearly demonstrated.
Ridge Formation and De-Spinning of Iapetus via an Impact-Generated Satellite We present a scenario for building the equatorial ridge and de-spinning Iapetus through an impact-generated disk and satellite. This impact puts debris into orbit, forming a ring inside the Roche limit and a satellite outside. This satellite rapidly pushes the ring material down to the surface of Iapetus, and then itself tidally evolves outward, thereby helping to de-spin Iapetus. This scenario can de-spin Iapetus an order of magnitude faster than when tides due to Saturn act alone, almost independently of its interior geophysical evolution. Eventually, the satellite is stripped from its orbit by Saturn. The range of satellite and impactor masses required is compatible with the estimated impact history of Iapetus. Introduction The surface and shape of Iapetus (with equatorial radius, R I =746 km, and bulk density, ρ = 1.09 g cm −3 ) are unlike those of any other icy moon (Jacobson et al. 2006). About half of Iapetus' ancient surface is dark, and the other half is bright (see Porco et al. 2005, for discussion). This asymmetry has been explained recently as the migration of water ice due to the deposition of darker material on the leading side of the body (Spencer and Denk 2010). Iapetus also has a ridge system near its equator, extending > 110 • in longitude (Porco et al. 2005), that rises to heights of ∼ 13 km in some locations (Giese et al. 2008). The ridge itself is heavily cratered, suggesting it originated during Iapetus' early history. Finally, Iapetus' present-day overall shape is consistent with a rapid 16-hour spin period rather than its present 79-day spin period (Thomas 2010;Castillo-Rogez et al. 2007;Thomas 2010). To some, the equatorial position of the ridge and Iapetus' odd shape suggest a causal relationship. Most current explanations invoke endogenic processes. For example, detailed models of Iapetus' early thermal evolution suggest that an early epoch of heating due to short-lived 26 Al and 60 Fe is required to close off primordial porosity in the object while simultaneously allowing it to rapidly de-spin, cool, and lock in a "fossil bulge" indicative of an early faster spin period (Castillo-Rogez et al. 2007;Robuchon et al. 2010). Recently, Sandwell & Schubert (2010) suggested a new and innovative mechanism for forming the bulge and ridge of Iapetus through contraction of primordial porosity and a thinned equatorial lithosphere. However, only a narrow range of parameters allows both a thick enough lithosphere on Iapetus to support the fossil bulge, while also being sufficiently dissipative to allow Iapetus to de-spin due to Saturn's influence on solar system timescales. In these scenarios, the ridge represents a large thrust fault arising from de-spinning. One difficulty faced by these ideas is that the stresses arising from de-spinning at the equator are perpendicular to the orientation required to create an equatorial ridge (Melosh 1977). Other interior processes, such as a convective upwelling (Czechowski and Leliwa-Kopystyński 2008), or convection coupled with tidal dissipation driven by the de-spinning (Roberts and Nimmo 2009) are required to focus and reorient de-spinning stresses on the equator. These latter models have difficulty reproducing the ridge topography because thermal buoyancy stresses are insufficient to push the ridge to its observed height (see Dombard and Cheng 2008). Alternatively, the ridge may be exogenic. 
One leading hypothesis is that the ridge represents a ring system deposited onto Iapetus' surface (Ip 2006; Dombard et al. 2010). This model has the benefit of providing a natural explanation for the mass, orientation, and continuity of the ridge, which present a challenge to endogenic models. Here we extend this idea to include a satellite that accretes out of the ring system beyond the Roche limit. As we show below, this can significantly aid in the de-spinning of Iapetus. In particular, we hypothesize that: 1) Iapetus suffered a large impact that produced a debris disk similar to what is believed to have formed Earth's Moon (Canup 2004; Ida et al. 1997; Kokubo et al. 2000). Like the proto-lunar disk, this disk straddled the Roche radius of Iapetus, and was quickly collisionally damped into a disk. As a result, a satellite accreted beyond the Roche radius, while a particulate disk remained on the inside. Also, the impact left Iapetus spinning with a period ≤ 16 hr, thereby causing the bulge to form (see footnote 1). 2) Gravitational interactions between the disk and Iapetus' satellite (hereafter known as the sub-satellite) pushed the disk onto Iapetus' surface, forming the ridge. As Ip (2006) first suggested, a collisionally damped disk, similar to Saturn's rings, will produce a linear feature precisely located along the equator. Thus, it naturally explains the most puzzling properties of the ridge system. The impact velocity of the disk particles would have been only ∼ 300 m s−1 and mainly tangential to the surface, so it is reasonable to assume that they would not have formed craters, but instead piled up on the surface. 3) Tidal interactions between Iapetus and the sub-satellite led to the de-spinning of Iapetus as the sub-satellite's orbit expanded. Eventually, the sub-satellite evolved far enough from Iapetus that Saturn stripped it away. Iapetus was partially de-spun and continued despinning under the influence of Saturn. Finally, the sub-satellite was either accreted by one of Saturn's regular satellites (most likely Iapetus itself) or was ejected to heliocentric orbit (cf. §5). The end-state is a de-spun Iapetus that has both a bulge and a ridge. Faster de-spinning aided by the presence of a sub-satellite likely relaxes constraints on the early thermal evolution of Iapetus determined by prior works (Castillo-Rogez et al. 2010; Robuchon et al. 2010). Because the results of one part of our story can be required by other parts, we begin our discussion in the middle and first find, through numerical simulations, the critical distance (a st) at which a sub-satellite of Iapetus will get stripped by Saturn. Knowing this distance, we integrate the equations governing the tidal interactions between both Saturn and Iapetus, and between Iapetus and the sub-satellite, to estimate limits on the mass of the sub-satellite. We then study the fate of the sub-satellite once it was stripped away from Iapetus by Saturn. Finally, using crater scaling relations we reconcile a sub-satellite impact with the topography of Iapetus.

1 It is important to note that the impact that we envision is in a region of parameter space that has yet to be studied. Such an investigation requires sophisticated hydrodynamic simulations and thus is beyond the scope of this paper. We leave it for future work. We emphasize, however, that the general geometry we envision has been seen in many hydrodynamic simulations of giant impacts (e.g., Canup 2004), so we believe that our assumed initial configuration is reasonable.
Satellites Stripped by Saturn The distance at which a satellite of Iapetus becomes unstable is important for calculating tidal evolution timescales. In systems containing the Sun, a planet, and a satellite, prograde satellites are not expected to be stable beyond ∼ R_H/2, where the Hill radius is defined as R_H = a(m/3M)^(1/3), with a as the planet's semi-major axis, m as its mass, and M as the total system mass (Hamilton and Burns 1991; Barnes and O'Brien 2002; Nesvorný et al. 2003). In our case, Iapetus plays the role of the planet, and Saturn the role of the Sun. However, the tidal evolution timescale depends strongly on semi-major axis (as the −13/2 power, Eq. 3) and thus the success of our model depends sensitively on the value of the critical distance, a st. Therefore, we performed a series of numerical simulations to determine a st. This experiment used the swift WHM integrator (Levison & Duncan 1994, which is based on Wisdom & Holman 1991) to integrate two sets of test particles, each consisting of 500 objects initially on orbits about Iapetus with semi-major axes, a, that ranged from 0.1-0.8 R_H. The particles in the first set were initially on circular orbits in the plane of Iapetus's equator. Particles in the second set had initial eccentricities, e, of 0.1, and inclinations, i, that were uniformly distributed in cos(i) between i = 0° and i = 15°. Saturn is by far the strongest perturber to the Iapetus-centered Kepler orbits and is the main source of the stripping. For completeness, we have also included the Sun and Titan. The effects of the other Saturnian satellites are at least two orders of magnitude smaller than those of Titan and thus can be ignored. The simulations were performed in an Iapetus-centered frame. The lifetime of particles dropped precipitously beyond 0.4 R_H, suggesting that any sub-satellite with a larger semi-major axis would very quickly go into orbit around Saturn (Fig. 1). Thus, we adopt this limit, which is equivalent to 21 R_I, in our calculations below. Tidal evolution of Iapetus The de-spinning of Iapetus by Saturn has long been considered problematic, because for nominal Q/|k2| (∼10^5), Iapetus should not have de-spun over the age of the solar system (Peale 1977). Starting with the assumption of constant Q/|k2|, we use the standard de-spinning timescale (Eq. 1; Murray & Dermott 1999, eq. 4.163), in which α ≤ 2/5 is the moment of inertia constant of Iapetus, m_I is its mass, Ω_I is its spin frequency, |k2| is the magnitude of the k2 Love number, Q is the tidal dissipation factor, m is the mass of Saturn, and a and n are the semi-major axis and mean motion of Iapetus. The |k2| and Q values used throughout are for Iapetus only. For the tidal interaction between Iapetus and Saturn, Ω_I > n, so the effect is always to decrease the spin of Iapetus. For these simple assumptions, the de-spinning from 16 h to a rate synchronous with the orbital period, 79.3 days, takes 3.6 × 10^5 (Q/|k2|) years, nominally 36 Gyr, for a density ρ = 1 g cm−3. Using detailed geophysical models, Castillo-Rogez et al. (2007) and Robuchon et al. (2010) showed Saturn can de-spin Iapetus on solar system timescales, although only for a narrow range of thermal histories. Our goal here is to investigate how the addition of the sub-satellite affects the de-spinning times. Given that detailed models of Castillo-Rogez et al. (2007) and Robuchon et al.
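As a quick check of the stripping limit quoted above, the sketch below evaluates the Hill-radius definition R_H = a(m/3M)^(1/3) for Iapetus orbiting Saturn. The Saturn and Iapetus masses and Iapetus' orbital distance are assumed standard values that are not given in the text; with them, 0.4 R_H comes out near the 19-21 R_I quoted here.

```python
# Assumed physical values (not from the text): standard Saturn/Iapetus parameters.
M_SATURN = 5.683e26      # kg
M_IAPETUS = 1.806e21     # kg
A_IAPETUS = 3.561e9      # m, Iapetus' semi-major axis about Saturn
R_IAPETUS = 7.46e5       # m, equatorial radius quoted in the text

def hill_radius(a, m, M_total):
    """Hill radius R_H = a * (m / (3 M))^(1/3), with M the total system mass."""
    return a * (m / (3.0 * M_total)) ** (1.0 / 3.0)

r_hill = hill_radius(A_IAPETUS, M_IAPETUS, M_SATURN + M_IAPETUS)
a_strip = 0.4 * r_hill   # empirical stripping limit found in the integrations
print(f"R_H  = {r_hill / R_IAPETUS:.1f} R_I")    # ~49 R_I with these constants
print(f"a_st = {a_strip / R_IAPETUS:.1f} R_I")   # ~19.5 R_I, close to the quoted 19-21 R_I
```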
(2010) used different methods, and that we are only interested in how the de-spinning timescale changes with the addition of a satellite, we take a simple approach of integrating a modified version of Eq. 1. Our first adjustment is to remove the assumption of constant Q/|k2|. This ratio is dependent on the tidal frequency, (Ω − n), and accounts for the manner in which a material or body reacts to tidal stresses. We start with a model of Iapetus consisting of a time-invariant 200-km thick lithosphere with a Maxwell viscoelastic rheology with rigidity µ = 3.6 × 10^9 Pa and viscosity η = 10^22 Pa s, which is strong enough to support the equatorial bulge and ridge (Castillo-Rogez et al. 2007), overlying a mixed ice/rock mantle with a lower viscosity, representative of an interior warmed by radiogenic heating. We performed two types of simulations. In the first, the viscosity of the mantle is held constant with time and has values from η = 10^15-10^18 Pa s (typical for the interior of an icy satellite at 240-270 K). In the second, we allow η of the inner ice/rock mantle to vary according to the thermal evolution models in Castillo-Rogez et al. (2007) and Robuchon et al. (2010). In particular, we employ the LLRI model of Castillo-Rogez et al. (2007), and the 0.04 and 72 ppb 26 Al cases from Robuchon et al. (2010). Love numbers are calculated for a spherically symmetric, uniform-density Iapetus using the SatStress software package (Wahr et al. 2009). We calculate the Love number k2 (which is a complex number for a viscoelastic body, see Wahr et al., 2009 for discussion) and estimate Q/|k2| = 1/Im(k2) (Segatz et al. 1988). The values of Q/|k2| vary over an order of magnitude for each value of η for the important range of tidal frequencies. An integration of equation (1) was performed using a Bulirsch-Stoer integrator for times up to 100 Gyr, incorporating the frequency-dependent Q/|k2| for different internal viscosities, which, in turn, is a function of temperature. Without the sub-satellite, the time for Iapetus to reach synchronous rotation ranged from 5 × 10^8 years (fixed η = 10^15 Pa s) to 2 × 10^12 years (0.04 ppb 26 Al case from Robuchon et al. 2010). We describe an investigation of the effect that a sub-satellite could have on the spin of Iapetus in the next subsection. Tidal interaction with a sub-satellite The sub-satellite raises a tidal bulge on Iapetus, causing Iapetus to de-spin and the sub-satellite's orbit to change. The change in spin rate for Iapetus due to a sub-satellite is given by Eq. 2 (Murray & Dermott 1999, eq. 4.161), and the change in the satellite's orbit by Eq. 3 (Murray & Dermott 1999, eq. 4.162). Together, Eqs. 2 and 3 describe the interaction between the sub-satellite and Iapetus, where m_ss is the mass of the sub-satellite. The term sign(Ω_I − n) is of great importance, determining whether the satellite evolves outward while decreasing the spin of Iapetus, or inwards while increasing the spin of Iapetus. At semi-major axis a_sync = (G(m_I + m_ss)/Ω_I^2)^(1/3), Ω_I = n, representing a synchronous state. If the sub-satellite has a < a_sync, it evolves inwards; if a > a_sync, it evolves outwards. Saturn is gradually decreasing the rotation rate of Iapetus, and thus the synchronous limit slowly grows larger, possibly catching and overtaking a slowly evolving sub-satellite. The orbital period of a sub-satellite at 21 R_I, the distance at which we consider a satellite stripped by Saturn, is ∼ 12.8 days.
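Equations (1)-(3) are not reproduced in this excerpt. As a sketch of the kind of coupled spin-orbit integration described above, the code below integrates constant-Q tidal rates of the form commonly quoted from Murray & Dermott (1999); the exact rate expressions, the physical constants, and the use of SciPy's solve_ivp (rather than the paper's Bulirsch-Stoer integrator with a frequency-dependent Q/|k2|) are all assumptions made for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed constants (not from the text): standard values for Saturn and Iapetus.
G = 6.674e-11
M_SAT, M_IAP, R_IAP = 5.683e26, 1.806e21, 7.46e5    # kg, kg, m
A_IAP = 3.561e9                                      # m, Iapetus about Saturn
N_IAP = np.sqrt(G * (M_SAT + M_IAP) / A_IAP**3)      # Iapetus' mean motion (79.3 d period)
ALPHA, Q_OVER_K2 = 0.4, 1e5                          # moment-of-inertia constant, nominal Q/|k2|

def rates(t, y, m_ss):
    """Coupled rates for Iapetus' spin Omega and the sub-satellite's semi-major axis a,
    using standard constant-Q tidal forms (assumed here to stand in for Eqs. 1-3)."""
    omega, a = y
    n_ss = np.sqrt(G * (M_IAP + m_ss) / a**3)
    s = np.sign(omega - n_ss)
    # De-spinning of Iapetus by Saturn's tide (Omega_I > n for this problem).
    dw_saturn = -(3.0 / (2.0 * ALPHA * Q_OVER_K2)) * (M_SAT**2 / (M_IAP * (M_IAP + M_SAT))) \
                * (R_IAP / A_IAP)**3 * N_IAP**2
    # Spin change of Iapetus due to the sub-satellite's tide.
    dw_ss = -s * (3.0 / (2.0 * ALPHA * Q_OVER_K2)) * (m_ss**2 / (M_IAP * (M_IAP + m_ss))) \
            * (R_IAP / a)**3 * n_ss**2
    # Outward (or inward) migration of the sub-satellite.
    da = s * (3.0 / Q_OVER_K2) * (m_ss / M_IAP) * (R_IAP / a)**5 * n_ss * a
    return [dw_saturn + dw_ss, da]

# Saturn-only de-spin estimate: ~38 Gyr with these constants, close to the "nominally 36 Gyr"
# quoted in the text (which assumes rho = 1 g cm^-3).
omega16 = 2.0 * np.pi / (16 * 3600.0)
t_saturn_only = (omega16 - N_IAP) / abs(rates(0.0, [omega16, 3 * R_IAP], 0.0)[0]) / 3.156e7
print(f"Saturn-only de-spin estimate: {t_saturn_only / 1e9:.0f} Gyr")

# Coupled evolution for mass ratio q = 0.018, starting at 3 R_I (cf. Fig. 3b).
sol = solve_ivp(rates, (0.0, 1e9 * 3.156e7), [omega16, 3 * R_IAP],
                args=(0.018 * M_IAP,), rtol=1e-8, dense_output=True)
```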
Thus, if Iapetus is de-spun to a period of 12.8 days before the sub-satellite reaches 21 R_I, it will be caught by the expanding synchronous limit. For the integrations of the sub-satellite's tidal evolution, the sub-satellite's mass is used as a free parameter, while the starting semi-major axis is set to 3 R_I. This distance is derived from the expected origin of the sub-satellite accreting from an impact-caused debris disk encircling Iapetus (Ida et al. 1997; Kokubo et al. 2000). However, tidal evolution timescales for a sub-satellite are largely insensitive to the initial semi-major axis, so this starting point only needs to be beyond the synchronous limit for the model to be accurate. With a rotation period of 16 h for Iapetus, a_sync would have been 2.94 R_I, which is outside the Roche limit defined to be at r_roche = 2.46 R_I (ρ_I/ρ_ss)^(1/3) ≈ 2.53 R_I for ρ_ss = 1 g cm−3. Thus, a satellite forming at 3.0 R_I would be above the synchronous limit and destined, initially, to evolve outward due to tidal interaction with Iapetus. Equations (2) and (3) were then integrated with equation (1), to follow the evolution of the spin of Iapetus due to Saturn and the sub-satellite. We studied the geophysical models for Iapetus described in the last section, along with sub-satellites with mass ratios, q ≡ m_ss/m_I, between 0.0001 and 0.04. We summarize the suite of simulations in Fig. 2, showing the time of de-spinning for Iapetus and the time at which the sub-satellite is stripped by Saturn or tidally evolves back to re-impact Iapetus. We present the data scaled to the de-spinning time due to Saturn alone to highlight the effect of the sub-satellite in accelerating the tidal evolution of the system. The fate of the system, with regard to the escape or re-impact of the sub-satellite and the de-spinning time of Iapetus, is separated into three distinct classes of outcomes based on q. 3.1.1. Synchronous lock and re-impact: q > 0.021 Above a mass ratio q > 0.021, the sub-satellite does not reach a st before becoming synchronous with the spin of Iapetus (see Fig. 3a). With both the sub-satellite and Saturn working to slow the spin of Iapetus, the synchronous limit grows to 21 R_I before the sub-satellite evolves to that semi-major axis. This result only varies mildly for the different geophysical models of Iapetus, as all three timescales depend linearly on Q/|k2|; therefore, the re-impact outcome only depends on mass ratio. However, the time to reach this outcome depends strongly on the geophysical model, and can be as long as ∼10^12 yr for the Robuchon et al. (2010) cases. Upon reaching synchronous rotation, the evolution does not stop because Saturn is still tidally interacting with Iapetus. As Saturn continues to slow the spin rate of Iapetus, the synchronous limit moves beyond the sub-satellite, which then begins to tidally evolve inwards. The sub-satellite is doomed to evolve inwards and hit Iapetus. Given that the sub-satellite started at 3 R_I and finishes by impacting Iapetus, it makes a net contribution to the angular momentum of Iapetus, and so Iapetus is left spinning faster than if the sub-satellite had never been there. Thus, the de-spinning of Iapetus (after re-impact) finishes later than it would have by Saturn tides alone (see Fig. 3a). 3.1.2. Satellite is stripped: 0.006 < q < 0.021 For 0.006 < q < 0.021, the sub-satellite evolves to 21 R_I and is stripped by Saturn before attaining a synchronous orbit. As it moves out, the sub-satellite carries away angular momentum from Iapetus, allowing it to rapidly de-spin.
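A minimal check of the two lengths quoted above (the synchronous radius for a 16 h spin and the Roche limit), assuming standard constants and treating Iapetus as a sphere at the quoted bulk density:

```python
import math

G = 6.674e-11
R_I = 7.46e5                          # m, Iapetus equatorial radius (from the text)
RHO_I = 1090.0                        # kg m^-3, bulk density quoted in the text
M_I = RHO_I * 4.0 / 3.0 * math.pi * R_I**3   # ~1.9e21 kg (sphere approximation)

def a_sync(spin_period_s, m_ss=0.0):
    """Synchronous semi-major axis: where the companion's mean motion equals Iapetus' spin."""
    omega = 2.0 * math.pi / spin_period_s
    return (G * (M_I + m_ss) / omega**2) ** (1.0 / 3.0)

def roche_limit(rho_ss=1000.0):
    """Roche limit as given in the text: r_roche = 2.46 R_I (rho_I / rho_ss)^(1/3)."""
    return 2.46 * R_I * (RHO_I / rho_ss) ** (1.0 / 3.0)

print(f"a_sync(16 h) = {a_sync(16 * 3600.0) / R_I:.2f} R_I")   # ~2.9 R_I (text: 2.94 R_I)
print(f"Roche limit  = {roche_limit() / R_I:.2f} R_I")         # ~2.53 R_I
```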
This angular momentum is then removed from the Iapetus system when the sub-satellite is stripped. The sample evolution for a system with a constant η = 10 16 Pa s and q = 0.018 (Fig. 3b) shows that the sub-satellite reaches an orbital period of ∼ 12 days (a = 21 R I ) before Iapetus reaches that spin period. It is important to note that the sub-satellite does not totally despin Iapetus. Instead, it slows Iapetus down enough that Saturnian tides (which are faster because Q/|k 2 | is a decreasing function of the spin rate) can finish the job. For all but one (see §3.1.4) of our geophysical models, the sub-satellite can de-spin Iapetus an order of magnitude faster than it is when de-spun by Saturn alone. This order of magnitude difference means that Iapetus could have de-spun in 500 Myr, for situations that would otherwise require the age of the solar system. 3.1.3. Slow evolution of a small satellite: q < 0.006 The tidal evolution timescale of the sub-satellite's orbit expansion depends on q, and for smaller mass ratios, the evolution takes longer. Below q < 0.006, the de-spinning of Iapetus due to Saturn is fast enough that the location of synchronous rotation sweeps past the subsatellite (see Fig. 3c). After this occurs, the sub-satellite is then below the synchronous limit and doomed to evolve back in towards Iapetus. In this scenario, the evolution of the sub-satellite back to the surface of Iapetus takes longer than it takes for Iapetus to de-spin. Iapetus de-spins faster than Saturn otherwise could do alone. In this case, however, the relevant time constraint becomes the sub-satellite impact time (the small open symbols in Fig. 2) rather than the de-spinning time because we currently do not see a satellite. We find that sub-satellite impact time can be shorter than the de-spinning time for q > 0.003. However, it should be noted that this dynamical pathway only helps by at most roughly a factor of 2 over Saturn acting alone. In addition, the sub-satellite is likely to tidally disrupt on its way in, forming a second, significantly fresher, ridge. This is probably inconsistent with the ridge's ancient appearance. Thus, we think that this particular dynamical pathway can probably be ruled out, but we include it for completeness. Discussion of sub-satellite tides The integrations have bracketed the possible behavior of the Saturn-Iapetus-sub-satellite system. At high and low mass ratios the sub-satellite is doomed to return to re-impact Iapetus, while for 0.006 < q < 0.021 the sub-satellite is stripped. As Fig. 2 shows, subsatellites with masses between 0.005 < q < 0.021 decrease the despinning time over that of Saturn alone. This effect can be as large as a factor of 10 for q ∼ 0.02. Consistent with prior work, we find that a low-viscosity interior (presumably warmed by radiogenic heating) is required to despin Iapetus over solar system history. We find that the age of the despinning event with and without a sub-satellite are similar if the thermal evolution of Iapetus follows the LLRI model of Castillo-Rogez et al. (2007). In this case, Iapetus' interior heats slowly during the first Gyr of its history. When the interior is warmed close to the melting point, tides drive rapid despinning. The presence of the sub-satellite can significantly shorten this period of time, but because the de-spinning time is short irrespective of whether the sub-satellite is present, it does not significantly alter the time in solar system history when Iapetus is de-spun. 
The ridge and sub-satellite formation Given the sub-satellite masses which can assist in de-spinning Iapetus, and estimates on the mass of the equatorial ridge, a constraint can be placed on the amount of mass placed into orbit by the original disk-forming impact. The ring of debris which collapses to form the ridge must do so rapidly, as a ring is not currently observed, and the ridge is one of the oldest features on the surface of Iapetus (Giese et al. 2008). Ridge mass The ridge has an unknown mass due to incomplete imaging and significant damage from cratering. Ip (2006) estimated its mass assuming that it had, at one time, completely encircled the equator with a height of 20 km and width of 50 km; m_ridge = 2π × R_I × 20 km × 50 km, which for a density of 1 g cm−3 equals a mass of 4.4 × 10^21 grams (Ip 2006 used a radius of 713 km for this mass estimate). Giese et al. (2008) found a maximum height of 13 km in digital terrain models (DTMs), though the shape model maximum height was 20 km. Comparing the dimensions given by Ip (2006) with the profiles in Giese et al. (2008), we set a lower limit by taking a factor of two in both vertical and horizontal extent and assuming that the cross section is a triangle rather than a rectangle, yielding an estimate ∼8 times lower, 5.5 × 10^20 grams (where we use a radius of 746 km; cf. Castillo-Rogez et al. 2007). We assume that the ridge consists of ring material that lands on the surface of Iapetus, accounting for the equatorial location. Given a ring of material interior to the Roche limit, there are two ways for it to land on the surface of Iapetus: the ring can tidally evolve down to the surface, or it can be pushed there by the newly formed sub-satellite. For reasons discussed below, we focus on the latter. Sub-satellite and ring interaction The tidal evolution of the ring down to the surface of Iapetus requires that the material be inside the synchronous rotation height. As described above, the synchronous height for a rotation period of 16 h is at a = 2.94 R_I, which is exterior to the Roche limit at r_roche = 2.53 R_I for ρ_ss = 1 g cm−3. The ring material evolves due to tides with Iapetus and the sub-satellite that accretes beyond the Roche limit. By comparing our estimates for the sub-satellite (§3.1) to those of the ridge (§4.1), we find that the sub-satellite is more massive than the ridge for the entire range of sub-satellite masses that assist in de-spinning. In this case, the ring spreading timescale is the time it takes a particle to random walk across a distance r (Goldreich and Tremaine 1982), where Σ is the surface density of the ring, and R_ss and a_ss are the radius and the semi-major axis of the sub-satellite, respectively. The surface density (Σ) of the ring is simply the ridge mass spread over the region interior to the Roche limit, ∼ 5.6-46.4 × 10^3 g cm−2. The possible range of sub-satellite masses is equivalent to the mass of a single body of radius, R_ss, of ∼131-211 km for a density of 1 g cm−3 (or 155-251 km for ρ_ss = 0.6 g cm−3). The semi-major axis of the sub-satellite is likely to be 1.3 R_roche initially (Kokubo et al. 2000), and so the range of possible times for the spreading of the ring is 9-286 years. Thus, even with many conservative approximations, this timescale is many orders of magnitude shorter than other timescales in the problem. For the sub-satellite masses of interest, the effect of the ring on the sub-satellite would be dwarfed by the much larger effect of Iapetus's tides.
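The two ridge-mass bounds quoted above follow from circumference times cross-sectional area times density; the sketch below reproduces them approximately (small differences from the quoted 4.4 × 10^21 g and 5.5 × 10^20 g presumably reflect rounding). The function name and unit handling are illustrative only.

```python
import math

RHO = 1.0  # g cm^-3, ridge density assumed in the text

def ridge_mass(radius_km, height_km, width_km, triangular=False):
    """Ridge mass as circumference x cross-sectional area x density (CGS units)."""
    area_km2 = height_km * width_km * (0.5 if triangular else 1.0)
    circumference_cm = 2.0 * math.pi * radius_km * 1e5      # 1 km = 1e5 cm
    return circumference_cm * area_km2 * 1e10 * RHO         # 1 km^2 = 1e10 cm^2

upper = ridge_mass(713.0, 20.0, 50.0)                       # Ip (2006) style estimate
lower = ridge_mass(746.0, 10.0, 25.0, triangular=True)      # halved dimensions, triangular section
print(f"upper ~ {upper:.1e} g, lower ~ {lower:.1e} g")      # ~4.5e21 g and ~5.9e20 g
```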
Impact scenarios In the vast majority of situations in which the addition of a sub-satellite aids de-spinning, the sub-satellite is stripped from its orbit around Iapetus. In these cases the stripped sub-satellite will still be bound in the Saturnian system, at least initially. In this section, we determine the possible fates for these objects and ask what effect this will have on Iapetus. The dynamical fate of a stripped sub-satellite To determine the probability of impact by a stripped sub-satellite we performed a numerical N-body experiment consisting of the orbital evolution of 50 massless test particles initially in orbit around Iapetus. We used SyMBA, a symplectic code which is capable of handling close encounters (Duncan et al. 1998). The simulations included Titan, Hyperion, Iapetus, and Phoebe. Particles were stopped if they hit a satellite, crossed Titan's orbit, became unbound from Saturn, or reached a distance of 0.4 AU from Saturn (roughly its Hill radius). The particles' initial semi-major axes ranged from 0.4 to 0.8 R_H (19-38 R_I) with eccentricities of 0.1. The particles' inclinations were between 0° and 15° with respect to Saturn's equator. After 6 Myr only 1 particle remained in orbit around Iapetus, while 3 particles hit Iapetus before becoming unbound. The remaining 46 particles became unbound from Iapetus, entering orbit about Saturn. These are the objects of interest here because our goal is to determine the fate of a sub-satellite once it becomes unbound. Five of them were ejected from the Saturn system by the satellites (entering heliocentric orbit). The remaining 41 objects impacted Iapetus; none hit Titan, Hyperion, or Phoebe. Thus, a stripped sub-satellite has a roughly 41/46 ∼ 90% chance of ending its existence by returning from orbit about Saturn and colliding with Iapetus. This fact means that we must consider the effects of such an impact in our scenario. Angular Momentum Budget In cases in which the stripped sub-satellite re-impacts Iapetus, we must consider the angular momentum it imparts. If the resulting spin rate is too fast it will cancel any advantage that was originally gained by the presence of the sub-satellite. We assume that our scenario is still viable if, after the impact, Ω_I < Ω* = 2 × 10^−5 s^−1, where Ω_I is the spin frequency of Iapetus. For spin rates < Ω*, corresponding to spin periods greater than ∼ 4 days, the de-spinning time after the impact will be less than ∼ 100 Myr for η = 10^16 Pa s. Assuming the sub-satellite is accreted completely, the magnitude of the angular momentum brought in by the sub-satellite is H = m_ss v_∞ b, where v_∞ is the satellite's velocity with respect to Iapetus at "infinity", and b is the impact parameter. Assuming that Iapetus's pre-impact rotation is slow, we find that Ω_I < Ω* requires that b < b* ≡ 2 R_I^2 Ω*/(5 q v_∞). The maximum value of b that allows a collision with Iapetus is b_max = R_I (1 + v_esc^2/v_∞^2)^(1/2), where v_esc is Iapetus' surface escape speed. If b* > b_max, all impacts leave Iapetus spinning more slowly than Ω*; otherwise the probability P that Iapetus will have Ω_I < Ω* (and thus our scenario will remain viable) is (b*/b_max)^2 = (2 R_I Ω*/(5 q v))^2, where v is the impact speed and we have used v^2 = v_esc^2 + v_∞^2. In §3.1, we found that satellites with 0.005 < q < 0.021 were most effective in de-spinning Iapetus. Most impacts occurred at velocities near the escape speed of Iapetus, 0.58 km/s. At that speed, P = 1 for q < 0.0103 and P = (0.0103/q)^2 for 0.0103 < q < 0.021.
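The viability probability defined above, P = min[1, (2 R_I Ω*/(5 q v))^2], is easy to evaluate at the quoted impact speed; the sketch below reproduces the stated thresholds (P = 1 up to q ≈ 0.0103, P ≈ 0.5 at q = 0.0146).

```python
R_I = 746e3        # m, Iapetus radius
OMEGA_STAR = 2e-5  # s^-1, spin-rate ceiling for a viable post-impact state
V_IMPACT = 580.0   # m/s, typical impact speed (~ Iapetus' escape speed)

def viability_probability(q, v=V_IMPACT):
    """P that a re-impacting sub-satellite of mass ratio q leaves Omega_I < Omega*."""
    ratio = 2.0 * R_I * OMEGA_STAR / (5.0 * q * v)
    return min(1.0, ratio ** 2)

for q in (0.006, 0.0103, 0.0146, 0.021):
    print(f"q = {q:.4f}: P = {viability_probability(q):.2f}")
# q <= 0.0103 gives P = 1; q = 0.0146 gives P ~ 0.5, matching the text.
```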
Thus, the lower-mass sub-satellites never produce spin rates faster than Ω*, while sub-satellites with q = 0.0146 yield viable scenarios 50% of the time. We therefore conclude that the re-impact of the sub-satellite can be consistent with Iapetus's spin state. Linking basins to the possible sub-satellite impact A large complex crater or basin would form if the sub-satellite were to impact Iapetus. In this section, we estimate the size of the impactors needed to produce the basins observed on Iapetus today and compare them to the size of the sub-satellite. Zahnle et al. (2003) estimate the diameter, D, of the final (collapsed) crater on a mid-sized icy satellite via a scaling relation in which v is the normal component of the impact velocity, g is the surface gravity of the target, ρ is the density of the target, and ρ_i is the density of the impactor. This scaling assumes that the incidence angle of the impact, measured from the normal to the target, is 45°. Substituting g = 22.3 cm s−2 for Iapetus and ρ_i = ρ = 1.0 g cm−3 gives the Iapetus-specific form used below (Eq. 6). Giese et al. (2008) found 7 basins, defined as craters with D > 300 km, on the leading face of Iapetus. The largest is stated to have D = 800 km. The Gazetteer of Planetary Nomenclature (http://planetarynames.wr.usgs.gov/) lists 5 basins on Iapetus, with the largest, Turgis, 580 km in diameter. In our scenario, one of Iapetus's basins may have been created by our escaped sub-satellite when it returns to meet its maker. In our simulations in §5.1, we find that the impact would likely occur at a velocity near Iapetus's escape velocity, 0.58 km/s; however, there is about a 10% chance that the impact speed would exceed 2 km/s. At 2 km/s, Eq. (6) indicates that 300-800 km basins are produced by impactors with radii between 32 and 96 km. At the more likely speed of 0.58 km/s, the corresponding impactor radii are 64 and 193 km. Recall that in §4.2, we found that our hypothetical sub-satellite should have a radius between 131 and 211 km for these densities. The ranges vary slightly for a sub-satellite of only 0.6 g cm−3, with impactor radii of 40-120 km at 2 km/s and 80-241 km at 0.58 km/s, where the hypothetical sub-satellite at this density would have a radius between 155 and 251 km. Therefore, it is quite possible that one of the observed basins was caused by our hypothetical sub-satellite. These calculations are highly uncertain because numerical simulations at the relevant scales and velocities have not, to our knowledge, been performed for icy targets. In particular, scaling relations in the literature, such as the Schmidt-Housen scaling we used above, are generally based on field data or simulations of hypervelocity impacts, i.e., impacts at velocities large compared with the speed of sound in the target, which is about 3 km/s for non-porous ice. Impacts by Iapetus's putative sub-satellite would have occurred at lower speeds. Furthermore, experiments and simulations generally deal with impactors that are much smaller than their targets. This condition is only marginally satisfied at typical speeds for the impacting sub-satellite in our scenario. Thus, the values we quote for the sub-satellite's size should be viewed as rough estimates. However, the calculations do support the possibility of basin formation caused by the impacting sub-satellite. Discussion and conclusions We have explored the scenario in which a sub-satellite forms from an impact-generated debris disk around Iapetus.
The remains of the disk fall to the surface of Iapetus to build the observed equatorial ridge, while the tidal evolution of the sub-satellite assists in de-spinning Iapetus. We find that this scenario can significantly shorten Iapetus's tidal de-spinning time if the mass ratio between the sub-satellite and Iapetus, q, is between 0.005 and 0.021. These results suggest that the presence of a sub-satellite can potentially loosen constraints on the geophysical history of Iapetus that have been implied by the timing and duration of despinning. The full implications of our results, however, cannot be realized until the sub-satellite scenario is investigated using a detailed thermal evolution model such as those described in prior works (e.g. Castillo-Rogez et al. (2007); Robuchon et al. (2010)). Iapetus has been of great interest due to its extremely old surface that records the cratering history of the outer solar system. Prior scenarios require that the ridge form after an epoch of early heating to de-spin Iapetus that drive the timing and duration of resurfacing (Castillo-Rogez et al. (2007); Robuchon et al. (2010)). In our model, the ridge forms only a few hundred years after the impact, and therefore the ridge is quite old. This matches the qualitative assessment that the ridge is one of the oldest features on the surface, along with the 800-km basin (Giese et al. 2008). Some of the sub-satellite evolutionary scenarios presented here end with a re-impact, requiring an associated basin forming much later. The ridge is only seen to extend roughly 110 • around Iapetus. This could be the result of its extreme age -much of it could have been destroyed by subsequent cratering. Thus, it is still unclear whether the ridge extends for the full extent of the equatorial circumference, but an infalling ring would preferentially deposit material on global topographic high terrain. Indeed, a ring that extends only 110 • in longitude could result if the center-of-figure of Iapetus were offset from its center-of-mass as is seen on the Moon (see Araki et al. 2009, and references therein). The stripped sub-satellite mass range, described above, corresponds to bodies with radii of between roughly 130 and 210 km (assuming ρ I =1 g cm −3 ), not all of which are below the estimated radius, ∼190 km, of the largest allowable impactor given the basins on Iapetus. This limiting size corresponds to a mass ratio q = 0.015. We arrive at a similar upper limit when considering the angular momentum of the impact. Thus, to account for the impact of the secondary, the mass ratio range 0.006< q <0.015 is required. As seen in Fig. 2, the sub-satellites are stripped on very similar timescales to the de-spinning of Iapetus. Thus, a sub-satellite returning to form a basin on Iapetus would do so after the ridge formed. This younger basin caused by the sub-satellite would then be expected to overlie the ridge if it was a near-equatorial impact. Turgis (the 580 km Basin II in Giese et al. 2008) fulfills these requirements as the equatorial ridge appears to stop when intersecting this basin (though there are no profiles presented in Giese et al. 2008 for this region of the surface). This basin would correspond to an impactor with a radius of 140 km, which is only slightly larger than the lower limit estimated above. However, we find this acceptable given the inherent uncertainties in calculating crater sizes. Meanwhile Basin I, the largest at 800 km, appears to have a stratigraphically similar age as the ridge (Giese et al. 2008). 
It is important to note, however, that our simulations predict only ∼90% of stripped sub-satellites return to impact, so it is not a certainty that one of the basins is associated with this evolution. A strength of our hypothesis is that the limits on impactor and sub-satellite masses (0.005 < q < 0.015) are in line with the estimated ridge mass and the number of 300-800 km basins. Thus, this work provides a complete story from original to final impact which may explain the ridge, shape, and basin population on Iapetus. We would like to thank Michelle Kirchoff and David Nesvorný for useful discussions. HFL and KJW are grateful for funding from NASA's Origins and OPR program. ACB and LD acknowledge support from NASA CDAP grants. Fig. 1.-The lifetime of each test particle is plotted as a function of their initial semimajor axis for two different intial eccentricities (black) 0.1 and (gray) 0.0. The simulations lasted for 1 Myr, which is shown as a horizontal line. Symbols for particles which survive for 1 Myr are smaller than those of particles with shorter lifetimes. The lifetime drops precipitously at a = 21 R I = 0.44 R H . In case (a) the sub-satellite is too large and eventually is caught in synchronous lock with the rotation of Iapetus. Saturn continues despinning Iapetus, and so the sub-satellite falls below synchronous height, returning to impact Iapetus. Iapetus then despins, again, due to Saturn, and finally reaches a despun state later than had it simply despun due to the effects of Saturn. In case (b) the sub-satellite assists in despinning Iapetus, and is then stripped by Saturn, allowing Iapetus to despin up to 10× faster than by Saturn alone. Finally, in (c), the small sub-satellite's orbit evolves very slowly, so that Iapetus is despun by Saturn fast enough for the synchronous limit to move beyond the sub-satellite, forcing the sub-satellite to tidally contract its orbit and return to impact Iapetus. For this case the despinning time is similar to that by Saturn's effect alone.
Using Genotype Abundance to Improve Phylogenetic Inference Abstract Modern biological techniques enable very dense genetic sampling of unfolding evolutionary histories, and thus frequently sample some genotypes multiple times. This motivates strategies to incorporate genotype abundance information in phylogenetic inference. In this article, we synthesize a stochastic process model with standard sequence-based phylogenetic optimality, and show that tree estimation is substantially improved by doing so. Our method is validated with extensive simulations and an experimental single-cell lineage tracing study of germinal center B cell receptor affinity maturation. Introduction Although phylogenetic inference methods were originally designed to elucidate the relationships between groups of organisms separated by eons of diversification, the last several decades have seen new phylogenetic methods for populations that are evolving on the timescale of experimental sampling (Drummond et al. 2003). This development is being spurred by new experimental techniques that enable deep sequencing at single-cell resolution, some of which enable quantification of original abundance. For bulk sequencing, random barcodes can be used to quantify PCR template abundance (Jabara et al. 2011;Kivioja et al. 2012;Brodin et al. 2015). More recently, cell isolation (Shapiro et al. 2013) or combinatorial techniques (Cusanovich et al. 2015;Howie et al. 2015;DeWitt et al. 2016) have provided sequence data at single-cell resolution. With such data, a given unique genotype-among many in the data-is represented in a measured number of cells. The abundance of a genotype can be read out as the number of cells bearing that genotype. Here, we demonstrate that incorporating genotype abundance improves phylogenetic inference for densely sampled evolutionary processes in which it is common to sample genotypes more than once. We are motivated by the setting of B cell development in germinal centers. B cells are the cells that make antibodies, or more generally immunoglobulins. Immunoglobulins are encoded by genes that undergo a stage of rapid Darwinian mutation and selection called affinity maturation . During affinity maturation, immunoglobulin is in its membrane-bound form, known as the B cell receptor (BCR). The biological function of this process is to develop BCRs with high-affinity for a pathogen-associated antigen molecule, and later excrete large quantities of the associated antibody. This affinity maturation process occurs in specialized sites called germinal centers in lymph nodes, which have specific cellular organization to enable B cells to compete among each other to bind a specific antigen (proliferating more readily if they do) while mutating their BCRs via a mechanism called somatic hypermutation (SHM). Using microdissection, researchers can extract germinal centers from model animals and sequence the genes encoding their BCR directly (Kuraoka et al. 2016;Tas et al. 2016). Lymph node samples are also available through autopsy (Stern et al. 2014) or fine needle aspirates from living subjects (Havenar-Daughton et al. 2016). Such samples provide a remarkable perspective on an ongoing evolutionary process. Indeed, these samples contain a population of cells with BCRs that differentiated via SHM at various times and have various cellular abundances. Because the natural selection process in germinal centers appears permissive to a variety of BCR-antigen affinities (Kuraoka et al. 2016;Tas et al. 
2016), earlier-appearing BCRs are present at the same time as later-appearing BCRs. The collection of descendants from a single founder cell in this process naturally form a phylogenetic tree. However, it is a tree in which each genotype is associated with a given abundance, and such that older ancestral genotypes are present along with more recent appearances. Reconstruction of phylogenetic trees from BCR data may benefit from methods designed to account for these distinctive features. Standard sequence-based methods for inferring phylogenies fall into several classes according to their optimality criteria. Maximum likelihood methods posit a probabilistic substitution model on a phylogeny and find the tree that maximizes the probability of the observed data under this model (Felsenstein 1973, 1981, 2003). Bayesian methods augment likelihood with a prior distribution over trees, branch lengths, and substitution model parameters, and approximate the posterior distribution of all the above variables by Markov chain Monte Carlo (MCMC) (Huelsenbeck et al. 2001; Drummond and Bouckaert 2015). Maximum parsimony methods use combinatorial optimization to find the tree that minimizes the number of evolutionary events (Eck and Dayhoff 1966; Kluge and Farris 1969; Fitch 1971). Parsimony methods often result in degenerate inference, in which multiple trees achieve the same minimal number of events (i.e., maximum parsimony) (Maddison 1991). Additional approaches include distance matrix methods, which summarize the data by the distances between sequence pairs, and phylogenetic invariants, which select topologies based on the value of polynomials calculated on character state pattern frequencies. None of the above methods incorporate genotype abundance information, and it is standard for data with duplicated genotypes to be reduced to a list of deduplicated unique genotypes before a phylogeny is inferred. In this article, we show that genotype abundance is a rich source of information that can be productively integrated into phylogenetic inference, and we provide an open-source implementation to do so. We incorporate abundance via a stochastic branching process with infinitely many types for which likelihoods are tractable, and show that it can be used to resolve degeneracy in parsimony-based optimality. We first validate the procedure against simulations of germinal center BCR diversification. We also empirically validate our method using an experimental lineage tracing approach combining multiphoton microscopy and single-cell BCR sequencing, allowing us to study individual germinal center B cell lineages from Brainbow mice. Beyond the setting of BCR development, we foresee direct application to tumor phylogenetics in single-cell studies of cancer evolution (reviewed by Schwartz and Schaffer 2017), and single-cell implementations of lineage tracing based on genome editing technology (McKenna et al. 2016). Genotype-Collapsed Trees Given sequence data obtained from a diversifying cellular lineage tree (fig. 1a), our goal is to infer the genotype-collapsed tree (GCtree) defining the lineage of distinct genotypes and their observed abundances (fig. 1b). The GCtree is constructed from the lineage tree by collapsing subtrees composed of cells with identical genotype to a single node annotated with its final cellular abundance. Our data consist of the genotypes sampled at least once in the GCtree, along with their associated abundances.
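As an illustration of the collapsing operation described above (not the authors' implementation), the sketch below flattens a toy cell lineage tree into genotype abundances and genotype parent-child links under the infinite-types assumption; the Cell class and function names are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Cell:
    """A node in the full cell lineage tree (hypothetical minimal structure)."""
    genotype: str
    children: list = field(default_factory=list)

def collapse_to_gctree(root: Cell):
    """Collapse clonal subtrees into one node per genotype (infinite-types assumption).
    Returns {genotype: abundance} for sampled leaf cells and {child_genotype: parent_genotype}."""
    abundance = Counter()
    parent_of = {}
    stack = [(root, None)]
    while stack:
        cell, parent_genotype = stack.pop()
        if parent_genotype is not None and cell.genotype != parent_genotype:
            parent_of.setdefault(cell.genotype, parent_genotype)  # mutant edge in the GCtree
        if not cell.children:
            abundance[cell.genotype] += 1        # a sampled cell of this genotype
        for child in cell.children:
            stack.append((child, cell.genotype))
    return dict(abundance), parent_of

# Tiny example: genotype "A" with one clonal daughter and one mutant "B" clade of two cells.
tree = Cell("A", [Cell("A"), Cell("B", [Cell("B"), Cell("B")])])
print(collapse_to_gctree(tree))   # ({'A': 1, 'B': 2}, {'B': 'A'})
```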
Under the infinite types assumption that every mutant daughter generates a novel genotype, each genotype can be identified with one subtree in the original lineage tree. We are not claiming any originality in the GCtree definition, but it is useful to have a word for this object. We note that, unlike standard phylogenetic trees where only leaf nodes represent observed genotypes, GCtree internal nodes represent observed genotypes if they are annotated with nonzero abundance. Although not leaves per se in the GCtree, a nonzero abundance represents a clonal sublineage that resulted in a nonzero number of leaves of that genotype in the lineage tree. A node in the GCtree, along with its descending edges, summarizes the lineage outcome for a given genotype as its mutant offspring clades and the number of its clonal leaves.

[Fig. 1 caption:] The corresponding genotype-collapsed tree (GCtree) describes the descent of distinct genotypes, and is our inferential goal. (c) Genotype abundance informs topology inference. Two hypothetical GCtrees, equally optimal with respect to the sequence data, propose two possible parents of the green genotype: the gray and yellow genotypes (the yellow genotype was not sampled and thus has a small circle with no number inside). Intuitively, the abundance information indicates that the tree on the left is preferable because the more abundant parent is more likely to have generated mutant descendants.

Because this summary does not completely specify the genotype's clonal lineage structure (fig. 2c), several branching structures may be consistent with a given node, and we have no information with which to distinguish between the various lineage trees consistent with a GCtree. Hence, our goal is to infer the GCtree topology. Parsimony with a Prior BCR sequence data from a germinal center sample have the following characteristics from the perspective of phylogenetics: genotypes have abundances, there is a limited amount of mutation between genotypes, and ancestral genotypes are present along with later ones. The latter two features suggest maximum parsimony as a useful tool because of the limited amount of mutation and because ancestral genotypes can be assigned to internal nodes of the tree (although recent Bayesian methods can do such assignment as well; Gavryushkina et al. 2014, 2017). For these reasons, parsimony has been used extensively in B cell sequence analysis (Barak et al. 2008; Stern et al. 2014). Because having many duplicate sequences inhibits efficient tree space traversal, these studies have inferred trees using the unique genotypes (BCR sequences). This ignores the varying cellular abundances of the observed genotypes. Here, we wish to use a branching process model to rank trees that are equally optimal according to sequence-level optimality criteria. Indeed, maximum parsimony often results in degenerate inference: there are many trees that are maximally optimal (Maddison 1991). We refer to these trees as a parsimony forest. In later sections, we show, using in silico and empirical data, that parsimony degeneracy is common and often severe for BCR sequencing data, and that parsimony forests exhibit substantial variation in phylogenetic accuracy. It is common practice to arbitrarily select one tree in the parsimony forest at random, without regard for this variability in inference accuracy. Instead, we will rank trees in the parsimony forest with an auxiliary likelihood that incorporates abundance information, thereby resolving this degeneracy.
Genotype abundance is an additional source of information for phylogenetics, using the simple intuition that more abundant genotypes are more likely to have more mutant descendant genotypes. This intuition makes sense because relative sample abundance is a reasonable estimator of relative total historical abundance, and total historical abundance is related to the number of mutant offspring; that is, genotypes with larger abundance are likely to have more mutant descendant genotypes simply because there are more individuals available to mutate. The number of mutant offspring genotypes is in turn related to the number of surviving mutant offspring sampled. Thus, given two equally parsimonious trees, this intuition would prefer the tree that has more mutant descendants of a frequently observed node (fig. 1c). We formalize this intuition using a stochastic process model for the phylogenetic development of germinal centers, and integrate this model with sequence-based tree optimality via empirical Bayes. In this stochastic process model, a GCtree node i has a random number T_i ∈ ℕ of mutant children (i.e., descending edges) and a random abundance A_i ∈ ℕ. We will index nodes in a "level order" as follows, which is well defined given an embedding of the tree into the plane. Index 1 refers to the root node, and 2 through 1 + T_1 refer to the children of the root node. The level order continues in order through all tree nodes of the same level before nodes at the next level. Adopting this level-ordering convention, a GCtree containing N nodes is specified by integer-valued random vectors giving the (planar) topology T = (T_1, ..., T_N) and abundances A = (A_1, ..., A_N). We also have the observed genotype sequences associated with each node, G = (G_1, ..., G_N). A complete diversification model would give a joint distribution on T, G, and A. As an approximation to such a model, facilitating use of existing sequence-based optimality methods, we propose a generative model containing conditional independences as follows (fig. 2a). First, we model abundances A and tree topology T as being drawn from a branching process likelihood, conditioned on parameters θ (characterizing birth, death, and mutation rates in the underlying lineage tree): P(A, T | θ). This stochastic process likelihood will capture the intuition (described earlier) that more abundant genotypes are likely to have more mutant descendant genotypes.
[FIG. 2. Modeling sequences equipped with abundances. (a) Both genotype sequence data G and genotype abundance data A inform tree topology T. As illustrated in this probabilistic graphical model, we assume independence between G and A conditioned on T rather than a fully joint model of G, A, and T. This facilitates using standard sequence-based phylogenetic optimality for G, augmented with a branching process (with parameters θ) for A. (b) For the binary infinite-type Galton-Watson process, θ = (p, q). Four possible branching events characterize the offspring distribution common to all nodes. A node may bifurcate (with probability p) or terminate, and upon bifurcating its descendants each may be a mutant (with probability q). (c) A GCtree node specifies a genotype's clonal leaf count and number of descendant genotypes, but not lineage details. The likelihood of a GCtree node marginalizes over consistent lineage branching outcomes. (d) GCtree likelihood factorizes into the product of likelihoods for each genotype.]
Next, we assume that genotype sequences G are generated by a mutation model conditioned on the fixed tree T, independent of A. This sequence-based optimality is captured by a distribution over G dependent only on T: P(G | T). The lack of direct dependence of G on A constitutes an approximation to a more realistic sequence-valued branching process. However, this formulation has the advantage that it allows us to leverage standard sequence-based phylogenetic optimality in the specification of P(G | T). In a later section (In Silico Validation), we validate this approximation with simulations that do not assume this conditional independence. In an empirical Bayes treatment (see Materials and Methods for details), a maximum likelihood estimate for the branching process parameters, θ̂, can be obtained by marginalizing over T, and this in turn can be used to approximate a posterior over T conditioned on the data G and A (as well as θ̂). Using parsimony as our sequence-based optimality, one can rank trees in the parsimony forest (denoted T_G) according to the GCtree likelihood. We encode the parsimony criteria in P(G | T) by assigning uniform weight to the trees in T_G, and zero to the other trees. This gives the following approximate maximum a posteriori tree:

T̂ = argmax_{T ∈ T_G} P(A, T | θ̂),   (1)

where the point estimate θ̂ is given by

θ̂ = argmax_{θ} Σ_{T ∈ T_G} P(A, T | θ).   (2)

Next we turn to explicitly defining the GCtree likelihood P(A, T | θ).
A Stochastic Process Model of Abundance
To compute likelihoods P(A, T | θ) for GCtrees (fig. 1b), we model the lineage tree (fig. 1a) as a subcritical infinite-type binary Galton-Watson (branching) process (Harris 2002) in which extinct leaf nodes correspond to observed cells. All mutations in an infinite-type process result in a novel genotype, embodying the assumption that each genotype can be identified with one subtree. Subcriticality ensures that the branching process terminates in finite time, so an explicit sampling time is not needed. The process is initiated with a single cell (a naive germinal center B cell before affinity maturation ensues), and runs to eventual extinction. This model is highly idealized and unable to capture many biological realisms of B cell affinity maturation and the sampling process. However, as we show in our validations, it is useful as a minimal model for leveraging genotype abundance information in a tractable likelihood. The offspring distribution for our process, governing reproduction and mutation for all lineage tree nodes at all time steps, is specified by two parameters: the binary branching probability p, and the mutation probability q. Because the offspring distribution is independent of type, subcriticality simply requires that the expected number of offspring of any node is <1, in this case equivalent to p < 0.5. In this case a "mutation" is an event that causes the evolving lineage to change to a novel genotype (under the infinite-types assumption). Thus the corresponding offspring distribution supports four distinct branching events (fig. 2b). Letting C and M denote the (random) number of clonal and mutant offspring of any given node in the lineage tree, respectively, the offspring distribution is

P(C = 0, M = 0) = 1 − p,
P(C = 2, M = 0) = p(1 − q)^2,
P(C = 1, M = 1) = 2pq(1 − q),
P(C = 0, M = 2) = pq^2.   (3)

We can compute the likelihood of a hypothetical binary lineage tree simply by evaluating equation (3) at each node in the tree and multiplying the results.
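To illustrate this last point, here is a minimal Python sketch (ours, not part of the GCtree package) of the offspring distribution in equation (3) as reconstructed above, and of the resulting likelihood of a fully specified binary lineage tree; the representation of a tree as a list of per-node (clonal, mutant) offspring counts is an assumption made for brevity.

    import math

    def offspring_prob(c: int, m: int, p: float, q: float) -> float:
        """P(C = c, M = m) for the four branching events of equation (3)."""
        table = {
            (0, 0): 1 - p,                 # terminate
            (2, 0): p * (1 - q) ** 2,      # bifurcate, both daughters clonal
            (1, 1): 2 * p * q * (1 - q),   # bifurcate, one clonal and one mutant daughter
            (0, 2): p * q ** 2,            # bifurcate, both daughters mutant
        }
        return table.get((c, m), 0.0)

    def lineage_tree_loglik(branching_outcomes, p: float, q: float) -> float:
        """Log-likelihood of a fully specified binary lineage tree, given as the
        list of (clonal, mutant) offspring counts of each of its nodes."""
        return sum(math.log(offspring_prob(c, m, p, q)) for c, m in branching_outcomes)

    # e.g., a founder that bifurcates into one clonal and one mutant daughter,
    # after which both daughter cells terminate:
    # lineage_tree_loglik([(1, 1), (0, 0), (0, 0)], p=0.4, q=0.1)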
The likelihood for a GCtree is then given by summing over all possible binary lineage trees that are consistent with that GCtree (i.e., that give the same GCtree when collapsing by genotype), thus marginalizing out the details of intragenotype branching events that give rise to the same abundance. Here, we show how to calculate the GCtree likelihood directly for the simple offspring distribution (eq. 3). Other work (Bertoin 2009) has described how to calculate statistics of the infinite-type branching process with a general subcritical offspring distribution. First consider the likelihood for an individual node in the GCtree, say the root node, in the context of the branching process described earlier. A GCtree node i is specified by its abundance A_i and the number of edges descending from it, T_i (both random variables). There are, in general, multiple distinct branching process realizations for genotype i that result in A_i = a clonal leaves and T_i = s mutations off the genotype i lineage subtree (fig. 2c). Determining the likelihood of node i in the GCtree under this process, which we denote by f_{a,s}(p, q) = P(A_i = a, T_i = s | θ = (p, q)), requires marginalizing over all such genotype lineage subtrees. In Materials and Methods, we derive a recurrence for f_{a,s}(p, q) by marginalizing over the outcome of the branching event at the root of the lineage subtree for genotype i, and show that the GCtree node likelihood f_{a,s}(p, q) can be computed by dynamic programming. A complete GCtree containing N nodes is specified by level-ordering the nodes as described earlier: T = (T_1, ..., T_N), A = (A_1, ..., A_N). Because the same offspring distribution generates the lineage branching of each genotype subtree, the same recurrence can be applied to all GCtree nodes. Specifically, we show in Materials and Methods that the joint distribution over all nodes in a GCtree factorizes by genotype (fig. 2d):

P(A, T | θ = (p, q)) = Π_{i=1}^{N} f_{A_i, T_i}(p, q).   (4)

Using dynamic programming and factorization by genotype, the computational complexity of the GCtree likelihood is O(max(A) max(T) + N). Ranking parsimony trees with GCtree requires a polynomial increase in runtime compared with finding the parsimony forest, which is itself NP-hard (Foulds and Graham 1982). Supplementary figure S1, Supplementary Material online, depicts runtime from simulations of various sizes, and shows that, in practice, this increased runtime is negligible. A computational implementation of the inference method above is available at http://github.com/matsengrp/gctree; last accessed February 23, 2018. The GCtree inference subprogram accepts sequence data in FASTA or PHYLIP format, determines a parsimony forest from the unique sequences using the dnapars program from the PHYLIP package (Felsenstein 2005), determines the genotype-collapsed form of these trees and outputs tree visualizations using the ETE package (Huerta-Cepas et al. 2016), and ranks them according to their GCtree likelihood using the sequence abundances. Bootstrap analysis is also implemented, providing confidence values of each split in the maximum likelihood GCtree. The GCtree maximizing the branching process likelihood (with optional bootstrap support) is the inference result. Next we show that resolving parsimony degeneracy using GCtree substantially increases both accuracy and precision of phylogenetic inference.
In Silico Validation
To explore the accuracy and robustness of GCtree inference, we developed a simulation subprogram to generate random lineages starting with a naive BCR sequence.
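As a concrete sketch of the computation described in the preceding section (our illustration, not the GCtree package code), the node likelihood f_{a,s}(p, q) can be computed by memoized recursion and the parsimony forest ranked by the factorized log-likelihood. The recurrence used below is the marginalization over the founder cell's branching outcome derived in Materials and Methods, and the per-tree summary as a list of (abundance, mutant-child count) pairs is an assumption for brevity.

    import math
    from functools import lru_cache
    from typing import List, Tuple

    Node = Tuple[int, int]   # (abundance a, number of mutant offspring genotypes s)

    def node_likelihood(a: int, s: int, p: float, q: float) -> float:
        """f_{a,s}(p, q), computed by dynamic programming over the recurrence
        obtained by marginalizing the branching outcome of the genotype's
        founder cell (see Materials and Methods)."""

        @lru_cache(maxsize=None)
        def f(a_: int, s_: int) -> float:
            if a_ < 0 or s_ < 0 or (a_ == 0 and s_ == 0):
                return 0.0          # impossible outcomes
            val = 0.0
            if (a_, s_) == (1, 0):
                val += 1 - p        # founder cell terminates
            if (a_, s_) == (0, 2):
                val += p * q ** 2   # both daughters are mutants
            val += 2 * p * q * (1 - q) * f(a_, s_ - 1)   # one clonal, one mutant daughter
            # both daughters clonal: convolution of f with itself; the excluded
            # endpoint terms involve f(0, 0) = 0, so skipping them is exact and
            # avoids self-reference
            val += p * (1 - q) ** 2 * sum(
                f(x, y) * f(a_ - x, s_ - y)
                for x in range(a_ + 1) for y in range(s_ + 1)
                if (x, y) not in {(0, 0), (a_, s_)})
            return val

        return f(a, s)

    def gctree_loglik(nodes: List[Node], p: float, q: float) -> float:
        """Log-likelihood of a GCtree: by the factorization by genotype, the sum
        of log f_{a,s}(p, q) over its nodes."""
        return sum(math.log(node_likelihood(a, s, p, q)) for a, s in nodes)

    def rank_parsimony_forest(forest: List[List[Node]], p: float, q: float) -> List[int]:
        """Indices of equally parsimonious trees, highest GCtree likelihood first."""
        scores = [gctree_loglik(nodes, p, q) for nodes in forest]
        return sorted(range(len(forest)), key=scores.__getitem__, reverse=True)

As noted in the Empirical Validation section, an unobserved root genotype with a single mutant descendant has probability zero under this process, so such a node needs special handling (there, a unit pseudocount).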
For simulated lineages, true trees can be compared against those inferred with the GCtree inference subprogram. The stochastic process model used in GCtree inference is intended as a minimal model (in terms of biological realism) that captures the intuition that genotype abundance is relevant to phylogenetic reconstruction. Experimental data need not obey our simplifying assumptions; thus, we set out to test GCtree's robustness to deviations of the data generating process from the inferential model. A simulation process was implemented that includes biological realisms of B cells undergoing SHM (and violates inferential assumptions). These realisms of simulation, detailed in Materials and Methods, include: branching process multifurcations (controlled by a parameter λ, the expected number of children of a node in the cell lineage tree), sequence context sensitive mutations (Dunn-Walters et al. 1998; Spencer and Dunn-Walters 2005) (with a baseline mutation rate λ₀ and a context-specific mutational model with 5-mer mutabilities taken from Yaari et al. 2013), explicit sampling time (t, or N representing the number of cells desired in the sampled generation), incomplete sampling (the number of cells to sample, n ≤ N), and repeated genotypes allowed (deviation from the infinite-type assumption). This constitutes a more challenging validation than simply simulating under the same assumptions that had been invoked for tractability of the inferential framework. Our in silico validation workflow is demonstrated in figure 3a for a small simulation that resulted in a parsimony forest with just two equally parsimonious trees. The output of the simulation software consists of FASTA data (sequences and their abundances), visualizations of the lineage tree and its GCtree equivalent, and a file containing the true GCtree structure. The GCtree inference subprogram can then be run on the FASTA data, and the resulting inferred GCtree compared with the true GCtree (in this case they were identical). To calibrate simulation parameters, we defined summary statistics on sequence data with abundance information, and tuned parameters to produce data similar to experimental BCR sequencing data under these statistics (see Materials and Methods). Our validation shows that using abundance information via a branching process likelihood can substantially improve inference results (fig. 3b). For each simulation, we ranked otherwise degenerately optimal parsimony trees using GCtree. For each parsimony forest, we compared the GCtrees in the forest to the true GCtree for that simulation using the Robinson-Foulds (RF) distance (Robinson and Foulds 1981) as a measure of tree reconstruction accuracy. The maximum likelihood GCtree tends to be closer to the true tree than other equally parsimonious trees, which vary widely in accuracy, showing that GCtree is able to leverage abundance data to resolve parsimony degeneracy and improve the accuracy of tree reconstruction in this simulation regime.
Empirical Validation
We next performed a biological validation by investigating whether GCtree improves inference according to biological criteria using real germinal center BCR sequence data. The BCR is a heterodimer encoded by the immunoglobulin heavy chain (IgH) and immunoglobulin light chain (IgL) loci. Both loci undergo V(D)J recombination, and then evolve in tandem during affinity maturation.
By obtaining matched sequences from both loci using single-cell isolation, we have two independent data sets to inform the same phylogeny of distinct cells (each of which is associated with a single IgH sequence and a single IgL sequence). Performing separate and independent IgH and IgL tree inference, we can then validate GCtree by comparing the inferred IgH tree to the inferred IgL tree. If the GCtree likelihood (eq. 4) meaningfully ranks equally parsimonious trees, then the two MLE trees (IgH and IgL) would be expected to be more correct reconstructions than the other parsimony trees. Thus, we expect the two MLE trees to be more similar to each other (in terms of the lineage of distinct cells) than other pairs of IgH and IgL parsimony trees (which, if they are more distorted phylogenies, should show less concordance in the partitioning of the distinct cells). Conversely, if the GCtree likelihood is not meaningfully ranking trees, we expect that the MLE IgH and IgL trees will not be significantly closer to each other than other pairs of IgH and IgL parsimony trees. We used data from a previously reported experiment in which multiphoton microscopy and BCR sequencing were combined to resolve individual germinal center B cell lineages from mouse lymph nodes 20 days after subcutaneous immunization with alum-adsorbed chicken gamma globulin (Tas et al. 2016) (see Materials and Methods). Brainbow mice were used for multicolor cell fate mapping, enabling B cells and their progeny to be permanently tagged with different fluorescent proteins. In situ photo-activation followed by fluorescence-activated cell sorting yielded B cells from a color-dominant germinal center (fig. 4a, left). BCR sequences were obtained for 48 cells in this lineage by single-cell mRNA sequencing of the IgH and IgL loci, resulting in 32 distinct IgH and 26 distinct IgL genotypes due to SHM mutations acquired through affinity maturation. The unmutated naive IgH and IgL V(D)J rearranged sequences (not observed) were inferred with partis using each set of 48 sequences (IgH and IgL) as a clonal family using germline genetic information (Ralph and Matsen 2016a, 2016b). These naive sequences were used as outgroups for rooting parsimony trees. GCtree results are depicted in figure 4b. Parsimony analysis resulted in degeneracy for both loci, with 13 equally parsimonious trees for IgH and 9 for IgL. Empirical Bayes point estimation according to equation (2) yielded p̂ = 0.495, q̂ = 0.388 (IgH) and p̂ = 0.495, q̂ = 0.304 (IgL). GCtree likelihoods (eq. 4) were computed to rank the equally parsimonious trees, and the MLE trees are shown with support values among 100 bootstrap samples (see Materials and Methods). Because the binary Galton-Watson process assigns probability zero to a GCtree node with frequency zero and one mutant descendant, the unobserved naive root node (which had one descendant after rerooting and collapsing identical genotypes in all parsimony trees) was given a unit pseudocount.
[FIG. 3. In silico validation of GCtree inference. (a) Demonstrating the simulation-inference-validation workflow, a small simulation resulted in two equally maximally parsimonious trees, and the one inferred using GCtree was correct. The initial sequence was a naive BCR V gene from the experimental data described in Materials and Methods. Branch lengths in the cell lineage tree (left) correspond to simulation time steps, while those in collapsed trees correspond to sequence edit distance. (b) About 100 simulations were performed with parameters calibrated using the BCR sequencing data and summary statistics described in Materials and Methods. Of 100 simulations, 66 resulted in parsimony degeneracy, with an average degeneracy of 12 and a maximum degeneracy of 124. For each of these 66, we show the distribution of Robinson-Foulds (RF) distance of trees in the parsimony forest to the true tree. "RF" denotes a modified Robinson-Foulds distance: since nonzero abundance internal nodes in GCtrees represent observed taxa, RF distance was computed as if all such nodes had an additional descendant leaf representing that taxon. GCtree MLEs (red) tend to be better reconstructions of the true tree than other parsimony trees (gray boxes). Four simulations resulted in a tie for the GCtree MLE, and the two tied trees in these cases are both displayed in red. Aggregated data across all simulations are depicted on the right, clearly indicating superior reconstructions from GCtree.]
[FIG. 4 caption, continued: These sequences were analyzed with partis (Ralph and Matsen 2016a, 2016b) to infer naive (pre-affinity-maturation) ancestor sequences using germline genetic information, and trees were inferred with GCtree. (b) GCtree inference was performed separately for IgH and IgL loci, resulting in parsimony degeneracies of 13 and 9, respectively. Maximum likelihood GCtrees for each locus are indicated in red and the GCtrees with annotated abundance are shown. Roots are labeled with the gene annotations of the naive state inferred using partis. Small unnumbered nodes indicate inferred unobserved ancestral genotypes. Numbered edges indicate support in 100 bootstrap samples. (c) All possible pairings of IgH and IgL parsimony trees were compared in terms of the Robinson-Foulds distance between the IgH and IgL trees, labeled by cell identity. IgH and IgL parsimony trees are ordered by GCtree likelihood rank in columns and rows, respectively. Grid values show RF distance between each IgH/IgL pair. MLE trees result in more consistent cell lineage reconstructions between IgH and IgL (smaller RF values). (d) For each locus, distributions of bootstrap support values are shown for the tree inferred by GCtree and for a majority rule consensus tree of all trees in the parsimony forest. The latter contain more partitions with very low support. (e) Using additional data from a second germinal center from the same lymph node that had the same naive BCR sequence, GCtree correctly resolves the two germinal centers as distinct clades (as did other lower ranked parsimony trees).]
We then compared the concordance between pairs of heavy and light trees. Since both IgH and IgL loci have been recorded from the same set of 48 cells, the units of cell abundance in an IgH GCtree map to the units of cell abundance from an IgL GCtree (i.e., each cell identity among the 48 is associated with an IgH genotype and an IgL genotype). We can then consider the consistency of a given IgH tree and a given IgL tree in terms of the lineage of the 48 cell identities. For each possible pairing of an IgH parsimony tree with an IgL parsimony tree, we computed the RF distance (Robinson and Foulds 1981) between the two trees using the cell identities (rather than the genotype sequences) to define splits.
We observed that the GCtree MLE based on IgH sequences and GCtree MLE based on IgL sequences form the most concordant pair among all pairs of parsimony trees ( fig. 4c). Moreover, pairs of parsimony trees that contained at least one GCtree MLE tree ranked consistently higher in terms of their similarity. We assessed confidence in GCtree partitions by comparing bootstrap support values in GCtree trees to those from the majority-rule consensus parsimony trees made using the consense program from the PHYLIP package (Felsenstein 2005). We observed the latter contained an excess of very low confidence partitions ( fig. 4d and supplementary fig. S4, Supplementary Material online). These results demonstrate that parsimony reconstructions for real BCR data sets suffer from degeneracy, and that GCtree likelihood can correctly resolve this degeneracy by incorporating abundance information ignored by previously published methods. Finally, using data collected from a second germinal center from the same lymph node, we tested GCtree's ability to correctly group cells from each germinal center into separate clades when run on combined data from both germinal centers. The two germinal center sequence data sets appeared to have the same naive BCR sequence (IgH and IgL), indicating they were both seeded from the same B cell lineage. Concatenating the IgH and IgL sequences for each cell in each germinal center, we used GCtree to infer a single tree for all cells from both germinal centers ( fig. 4e and supplementary fig. S5, Supplementary Material online). GCtree correctly resolved the two germinal centers as distinct clades (we note that all the parsimony trees had this feature, regardless of likelihood rank). This demonstrates the phylogenetic resolvability of germinal centers with the same naive BCR diversifying under selection for the same antigen specificity. Discussion We have shown that genotype abundance information can be productively incorporated in phylogenetic inference. By augmenting standard sequence-based phylogenetic optimality with a stochastic process likelihood, we were able to implement abundance-aware inference as a processing step downstream of results from an existing and widely used parsimony tree inference tool. We have shown that our method-implemented in the publicly available GCtree package-is useful for inferring B cell receptor affinity maturation lineages. Although branching processes have been used previously to infer parameters of BCR evolution (Kleinstein et al. 2003;Magori-Cohen et al. 2006) and construct SHM lineage trees from error-prone bulk sequencing reads (Sok et al. 2013), to our knowledge, we are the first to use branching processes to sharpen phylogenetic inference for BCRs sequenced at single-cell resolution from germinal centers. We believe GCtree will find use in other settings where sequence data from dense quantitative sampling of diversifying loci are available. Studies of cancer evolution are increasingly performed with single-cell resolved sequencing, however most tumor phylogenetics approaches use standard phylogenetic methods (reviewed by Schwartz and Schaffer 2017) that do not model genotype abundance. Exceptions include OncoNEM (Ross and Markowetz 2016) and SCITE (Jahn et al. 2016), both of which leverage single-cell data for tumor phylogenetic inference that is robust to genotyping errors and missing data, but do not aim to capture the intuition that genotype abundance and the number of direct mutant descendants are related. 
Single-cell implementations of lineage tracing based on genome editing technology (McKenna et al. 2016) may also benefit from reconstruction methods that model the abundance of observed editing target states, since cell types may vary widely in rates of proliferation. Using parsimony as our sequence-based optimality resulted in particularly simple results, as the tree space necessary to explore is exactly the degenerate parsimony forest. However, our empirical Bayes formulation is agnostic to the particular choice of sequence-based optimality, so in the future, we envision augmenting likelihood-based sequence optimality. This will require more computationally expensive tree space search and sampling schemes. In contrast to GCtree, a fully Bayesian approach to incorporate genotype abundance could use the full set of sequences (without deduplication) in a Bayesian phylogenetics package-such as BEAST (Drummond and Bouckaert 2015)-with a birth-death process prior. This would not enforce the infinite-type assumption, so a set of identical sequences could be placed in disjoint subtrees. However, such an approach will not scale well with many identical sequences: trees that only differ by exchange of identical sequences will create islands of constant posterior in tree space. Methods do not currently exist for tree space traversal that avoids moves within such islands. Even if such methods existed, they would need to be combined with algorithms to infer trees with sampled ancestors (Gavryushkina et al. 2014(Gavryushkina et al. , 2017 as well as multifurcations (Lewis et al. 2005(Lewis et al. , 2015; even just this combination is not currently available. Although our methods can be applied to other sequencebased optimality functions besides parsimony, it is important to recognize that GCtree (and indeed any tree inference procedure that deduplicates repeated sequences) contains an inherent weak parsimony assumption: that each unique genotype arose from mutation just once in the lineage and therefore corresponds to a single subtree in the lineage tree, and thus a single node in the GCtree. Thus it is important to continue to assess the impact of this weak parsimony assumption with simulation that does not make this assumption, as done here. The GCtree framework can also be extended to nonneutral models. For example, one could consider a model in which each genotype obtains a random fitness encoded by branching process parameters h that are fixed within a given genotype but randomly drawn by the genotype founder cell upon mutation from its parent. This will likely necessitate modeling genotype birth time explicitly, rather than restricting to extinct subcritical processes, since a genotype with small abundance may be a result of low fitness or just young age. One might also consider extending the offspring distribution to separately model synonymous and nonsynonymous mutations. Synonymous mutations do not change fitness, while nonsynonymous mutations change fitness as described earlier. Another direction of extension is to incorporate mutation models specialized to the case of BCR evolution, such as the S5F model (Yaari et al. 2013) used in our simulation study. An Empirical Bayes Framework for Incorporating Genotype Abundance in Phylogenetic Optimality Here, we more fully develop the empirical Bayes perspective on our estimator for the model depicted in figure 2a. 
This graphical model implies the factorization

P(G, A, T, θ) = P(G | T) P(A, T | θ) P(θ).

A hierarchical Bayes treatment would assign a prior P(θ) (such as uniform over the unit square for the model θ = (p, q)) and compute the posterior over trees conditioned on the data, marginalizing over θ:

P(T | G, A) ∝ P(G | T) ∫ P(A, T | θ) P(θ) dθ.   (5)

Rather than attempting this integral over P(A, T | θ), each evaluation of which requires dynamic programming, we first seek a maximum likelihood estimate for θ, marginalizing over T:

θ̂ = argmax_{θ} Σ_{T} P(A, T | θ).   (6)

Using this point estimate, an approximate posterior over trees is

P(T | G, A, θ̂) ∝ P(G | T) P(A, T | θ̂).   (7)

This formulation embodies an optimality over trees conditioned on both genotype sequence data G and genotype abundance data A. Evaluation of θ̂ with equation (6) in general requires summation over the space of all trees consistent with the data. A simple application of this formalism is to augment parsimony-based tree optimality with abundance data. Let T_G denote the degenerate set of maximally parsimonious trees given G (each of which has the same total genotype sequence distance over its edges). Encode parsimony optimality as a P(G | T) assigning uniform weight to each tree in T_G, and zero elsewhere. In this case, equation (6) reduces to equation (2),

θ̂ = argmax_{θ} Σ_{T ∈ T_G} P(A, T | θ),   (8)

and equation (7) becomes

P(T | G, A, θ̂) ∝ P(A, T | θ̂) if T ∈ T_G, and 0 if T ∉ T_G.   (9)

With equation (9), we have a framework using abundance information to distinguish among the otherwise equally optimal trees presented by a parsimony analysis. In our application, we use a subcritical infinite-type binary Galton-Watson branching process model for the lineage tree, and describe a recursion for computing GCtree likelihoods P(A, T | θ̂) by dynamic programming to marginalize over compatible lineage trees.
Dynamic Programming to Marginalize Lineage Tree Structure
We derive a recurrence for f_{a,s}(p, q) = P(A_i = a, T_i = s | θ = (p, q)) by marginalizing over the outcome {C, M} of the branching event at the root of the lineage subtree for genotype i (the first cell of type i). We will use that a and s are the sum over two iid processes for the left and right clonal branches. We temporarily suppress the parameters θ = (p, q), writing f_{a,s} for notational compactness. In the case {C = 2, M = 0}, the probability of the outcome (a, s) is

Σ_{a'=0}^{a} Σ_{s'=0}^{s} f_{a',s'} f_{a−a',s−s'}.   (10)

As this is the convolution of f_{a,s} with itself, we denote it as f^{*2}_{a,s}. Marginalizing over all outcomes {C, M}, we have

f_{a,s} = (1 − p) δ_{a,1} δ_{s,0} + pq^2 δ_{a,0} δ_{s,2} + 2pq(1 − q) f_{a,s−1} + p(1 − q)^2 f^{*2}_{a,s},   (11)

where δ denotes the Kronecker delta function and f_{a,s} is taken to be zero whenever a < 0, s < 0, or (a, s) = (0, 0). In light of this last case, f_{0,0} = 0, the convolutional square may be written as

f^{*2}_{a,s} = Σ f_{a',s'} f_{a−a',s−s'}, with the sum over 0 ≤ a' ≤ a, 0 ≤ s' ≤ s and (a', s') ∉ {(0, 0), (a, s)},

showing that there are no terms containing f_{a,s} on the RHS of equation (11). The GCtree node likelihood f_{a,s} is thus amenable to computation by straightforward dynamic programming.
The GCtree Likelihood Factorizes by Genotype
We argue that the joint distribution over all nodes in a GCtree factorizes by genotype (fig. 2d):

P(A_1 = a_1, T_1 = s_1, ..., A_N = a_N, T_N = s_N) = Π_{i=1}^{N} f_{a_i, s_i}.   (12)

Since s_1 is the number of children of node 1 (the root node), the children of the root node are indexed in level order by 2, ..., 1 + s_1. Let K_i denote the set of indices of the nodes of the subtree rooted at node i, so K_2, ..., K_{1+s_1} refer to sister subtrees rooted on each of the s_1 children of the root. Using the definition of conditional probability, and since sister subtrees are independent, we have

P({(a_j, s_j) : j = 1, ..., N}) = f_{a_1, s_1} Π_{i=2}^{1+s_1} P({(a_j, s_j) : j ∈ K_i}),

where random variable notation has been dropped for notational compactness. Now, within each subtree factor, we may reindex in level order (that is, level order in that subtree) starting from 1. We then pull out factors f_{a_2, s_2}, ..., f_{a_{1+s_1}, s_{1+s_1}} corresponding to the root nodes of the sister subtrees (children of the original root). We obtain equation (12) by applying this logic recursively. Restoring the offspring distribution parameters, we recognize this as the distribution needed in equations (1) and (2) to rank trees in a parsimony forest:

P(A, T | θ = (p, q)) = Π_{i=1}^{N} f_{a_i, s_i}(p, q),

where f_{a_i, s_i}(p, q) is computed by dynamic programming using equation (11). Numerical validation of the GCtree likelihood is summarized in supplementary figure S3, Supplementary Material online, using 10,000 Galton-Watson process simulations at each of several parameter values. The likelihood accurately recapitulates tree frequencies, and simulation parameters are recoverable by numerical maximum likelihood estimation.
Simulation Details
To provide for a more challenging in silico validation study, several biological realisms were built into our simulation that defied simplifying assumptions in the GCtree inference methodology.
Arbitrary Offspring Distribution
The recursion (eq. 11) used to compute GCtree likelihood components specifies a binary branching process, and such an approach would in general require an offspring distribution with bounded support on the natural numbers. Our simulation implements an arbitrary offspring distribution with no explicit bounding. In the results that follow, we used a Poisson distribution with parameter λ for the expected number of offspring of each node in the lineage tree.
Context Sensitive Mutation
To generate mutant offspring, all offspring sequences (drawn from a Poisson as described earlier) were subjected to a sequence-dependent mutation process. The SHM process is known to introduce mutations in a sequence context-dependent manner, with certain hot-spot and cold-spot motifs (Dunn-Walters et al. 1998; Spencer and Dunn-Walters 2005). We used a previously published 5-mer context model, S5F (Yaari et al. 2013), to compute the mutabilities μ_1, ..., μ_ℓ of each position 1, ..., ℓ within a sequence of length ℓ based on its local 5-mer context. This model also provided substitution preferences among alternative bases given the 5-mer context. To compute mutabilities for beginning and ending positions without a complete 5-mer context, we averaged over missing sequence context. Although existing code can simulate a mutational process parameterized by S5F on branches of a fixed tree with a prespecified number of mutations on each branch (Gupta et al. 2015), in our simulations, we wanted the number of mutations on the branches to be determined by the sequence mutability as it changes via mutation across the tree. For example, as an initial mutation hotspot motif acquires mutations down the tree, its mutability typically degrades as it diverges from the original motif. We defined the mutability of the sequence as a whole, μ_0, by the average over its positions, μ_0 = (1/ℓ) Σ_{i=1}^{ℓ} μ_i. We defined a baseline mutation expectation parameter λ₀ as a simulation parameter, and the number of mutations m any given offspring sequence received was drawn from a Poisson distribution. The Poisson parameter was modulated by the sequence's mutability, m ~ Pois(μ_0 λ₀), so that more mutable sequences tended to receive more mutations. Given m > 0, the positions in the sequence to apply mutations were chosen sequentially as follows. A site j to apply the first mutation was drawn from a categorical distribution using the site-wise mutabilities to define the relative probability of choosing each site, j ~ Cat(μ_1, ..., μ_ℓ).
We mutated the site using a categorical distribution over the three alternative bases parameterized by the substitution preferences defined by the site's context. We then updated the mutabilities μ_0 and μ_1, ..., μ_ℓ as necessary to account for contexts that had been altered by the mutation. This process was repeated m times. Since the mutability of each node in the lineage tree will depend on the mutation outcome of its parent, the GCtree likelihood components will not factorize by genotype. Because the probability of mutation is sequence-dependent, the topology of the GCtree will be sequence-dependent. Therefore, the generative assumptions of the empirical Bayes inference do not hold in this simulation scheme, nor does the offspring distribution equivalence across lineage tree nodes specified by equation (3).
Sampling Time
Our inference model specifies a subcritical branching process run until extinction, and sampling of all terminated nodes (leaves). Our simulation more realistically assigns a discrete time of sampling parameter t (number of time steps from root), and thus does not need to constrain the offspring distribution to achieve subcriticality. At the specified time, extant nodes can be sampled, so all genotypes that terminated or mutated at a prior time are not observed. Alternatively, a parameter N specifying the desired number of simulated observed sequences may be passed, in which case the simulation runs until a time such that at least N sequences exist (unless terminated). Genotypes born at different times will be sampled under a process with different effective sampling times since birth. Thus this sampling time parameter also increases dependence between genotypes, further distancing the simulation model from the inferential model.
Incomplete Sampling
We introduce imperfect sampling efficiency with a parameter n for the number of simulated sequences that end up in the simulated sample data (FASTA), requiring n ≤ N. This violates the inferential assumption of complete sampling, and renders the true genotype abundances latent variables (which a more complete likelihood approach might aim to marginalize out).
Repeated Genotypes
Our simulation is seeded with an initial naive BCR sequence, from which randomly mutated offspring are created. Because there is no built-in restriction that the same sequence cannot arise along different branches (or mutations could be reversed), the model assumption of infinite types (such that identical sequences can be associated with a single genotype subtree) does not necessarily hold. When this assumption is violated the tree must necessarily be incorrect.
Calibrating Simulation Parameters Using Summary Statistics
We defined several summary statistics on sequences equipped with abundances which were used to calibrate simulation parameters representative of a regime similar to experimental data. We chose these statistics to reflect information relevant to tree inference, but not actually require tree inference, so as to avoid circularity. Denote by g_0 ∈ G the naive BCR (root genotype) and by d_H(·, ·) the Hamming distance function between two sequences. Given simulation or experimental data G and A, we characterize the degree of mutation (from the naive BCR) in the lineage by the set of Hamming distances of the observed genotypes from the naive genotype: {d_H(g, g_0) : g ∈ G}. For a given genotype g_i ∈ G, we can compute its number of Hamming neighbors in the data, |{g_j ∈ G : d_H(g_i, g_j) = 1}|.
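These two calibration statistics are simple to compute; the following Python sketch uses illustrative variable names only and is not part of the GCtree package.

    from typing import Dict, List

    def hamming(x: str, y: str) -> int:
        assert len(x) == len(y)
        return sum(a != b for a, b in zip(x, y))

    def calibration_stats(genotypes: List[str], naive: str) -> Dict[str, List[int]]:
        """Hamming distances of observed genotypes from the naive (root) genotype,
        and each genotype's number of Hamming neighbors within the data."""
        dist_to_naive = [hamming(g, naive) for g in genotypes]
        n_neighbors = [sum(1 for j, h in enumerate(genotypes)
                           if j != i and hamming(g, h) == 1)
                       for i, g in enumerate(genotypes)]
        return {"dist_to_naive": dist_to_naive, "hamming_neighbors": n_neighbors}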
A simulation is specified by parameters (λ, λ₀, N (or t), n), a mutability model (here S5F; Yaari et al. 2013), and an initial sequence. We found that the parameters (λ = 1.5, λ₀ = 0.25, N = 100, n = 65) produced simulations that were comparable to experimental data under these statistics. The experimental data used for comparison, consisting of 65 total BCR V gene sequences from a single germinal center lineage, are described in the following section. Supplementary figure S2, Supplementary Material online, depicts these summary statistics for 100 simulations, compared with experimental BCR data.
Germinal Center BCR Sequencing
Germinal center B cell lineage tracing and B cell receptor sequencing were performed as previously described (Tas et al. 2016). Full length IgH and IgL sequences from lymph node 2, germinal centers 1 and 2, from this reference were used for empirical validation results, whereas V gene sequences only (which are not dependent on partis-inferred naive sequences) were used for calibrating simulation parameters.
Bootstrap Support
We computed bootstrap support values for edges on a GCtree extending the standard approach (Felsenstein 1985): we resampled columns from the alignment G 100 times with replacement, generating an inferred GCtree (maximum GCtree likelihood among equally parsimonious trees) for each. Each edge is equivalent to a bipartition of observed genotypes obtained by cutting the edge; such a bipartition is typically referred to as a split. We compute the number of bootstrapped trees that contain the same split, and annotate the edge with this number. Because resampling the alignment G can produce repeated genotypes, there can exist ambiguity about how to perform genotype collapse of a parsimony tree. We simply group genotypes in the bootstrap analysis that collapse to identical genotypes. For example, if two observed sister genotypes with resampled sequences are both identical in sequence to their mutual parent, both have a claim on collapsing into the parent. When collapsing this tree, both genotypes will be associated with this collapsed node, rather than just a single one.
Data Availability
Germinal center BCR sequence data can be found in supplementary Database S1 of Tas et al. (2016), lymph node 2 and germinal center 1.
Software Availability
The GCtree source code is available at github.com/matsengrp/gctree and accepts sequence alignments in FASTA or PHYLIP format as input. It is open-source software under the GPL v3.
Supplementary Material
Supplementary data are available at Molecular Biology and Evolution online.
Environmental opportunities facilitating cognitive development in preschoolers: development of a multicriteria index Access to environmental opportunities can favor children’s learning and cognitive development. The objectives is to construct an index that synthesizes environmental learning opportunities for preschoolers considering the home environment and verify whether the index can predict preschoolers’ cognitive development. A quantitative, cross-sectional, exploratory study was conducted with 51 preschoolers using a multi-attribute utility theory (MAUT). The criteria used for drawing up the index were supported by the literature and subdivided in Group A “Resources from the house” extracted from HOME Inventory including: (1) to have three or more puzzles; (2) have at least ten children’s books; (3) be encouraged to learn the alphabet; (4) take the family out at least every 2 weeks. Group B “Screens” (5) caution with using television; (6) total screen time in day/minutes. Group C “Parental Schooling” (7) maternal and paternal education. Pearson correlation analyses and univariate linear regression were performed to verify the relationship between the established index with cognitive test results. The index correlated with the total score of the mini-mental state exam (MMC) and verbal fluency test (VF) in the category of total word production and word production without errors. Multicriteria index explained 18% of the VF (total word production), 19% of the VF (total production of words without errors) and 17% of the MMC. The present multicriteria index has potential application as it synthesizes the preschooler’s environmental learning opportunities and predicts domains of child cognitive development. Introduction Bioecological theories of human development (Bronfenbrenner 2005;Sameroff 2010) emphasize the importance of positive environments conducive to individual well-being over time , given that the micro-system of the household (Bronfenbrenner 2005) has a direct effect on the child's cognitive development Morais et al. 2021;Daelmans et al. 2017;Richter et al. 2017). A growing body of evidence has focused on the impact of environmental factors that affect cognitive development in early childhood, a critical phase in which environmental stimuli have a significant impact on brain architecture and cognition ; Morais et al. 2021;Johnson et al. 2016;McCoy et al. 2018;Britto et al. 2017). Academic difficulties during preschool can reflect long-term personal and social problems in adulthood (Camara-Costa et al. 2015;Salamon 2020). Studies have shown that learning difficulties in the preschool phase, such as math and reading skills (Rabiner et al. 2016), have consequences on cognitive performance from preschool to higher education (Camara-Costa et al. 2015;Salamon 2020) and have a negative impact on an individual's ability to achieve high levels of education (Smart et al. 2017). Learning (defined as the acquisition of new knowledge and skills) is a complex human process primarily developed in early childhood when behaviors, skills, and knowledge are intensively acquired (Jirout et al. 2019). Strategies applied to reduce academic difficulty early in the educational trajectory tend to reduce educational inequalities (Salamon 2020), and studies have shown that environmental opportunities that favor cognitive improvement are strongly related to economic status Camara-Costa et al. 2015;Romeo et al. 2018). 
Parental education is a predictor of economic status, with the greatest education levels correlating to the highest wages and position levels (Christensen et al. 2014;Andrade et al. 2005;Krieger et al. 1997;Nahar et al. 2020). Maternal education is considered an important predictor of child development Vernon-Feagans et al. 2020). Mothers with higher education levels feel more co-responsible for their child's education than fathers and provide more activities that encourage child development (Christensen et al. 2014;Andrade et al. 2005). The home environment (Bronfenbrenner 2005) directly affects the child's cognitive development Morais et al. 2021;Daelmans et al. 2017;Richter et al. 2017). Studies have shown a positive association between higher parental education levels and a home environment with more opportunities for a child's learning Romeo et al. 2018;Christensen et al. 2014;Vernon-Feagans et al. 2020;Dickson et al. 2016). Thus, the home environment is crucial for a child's cognitive development Camara-Costa et al. 2015;Salamon 2020;Jirout et al. 2019). Participation in stimulating experiences for development (e.g., walking and travel), availability of toys and materials that present a challenge to thinking (e.g., books, puzzles), encouragement for learning (Christensen et al. 2014;Bradley and Corwyn 2019), and access to family outings offer distinct possibilities for a child's learning favoring its cognitive development Christensen et al. 2014;Rosen et al. 2018). The use of screens at home is part of the daily lives of families in the contemporary context (Strasburger 2015; Guedes et al. 2019;Nobre et al. 2021); however, evidence indicates that using some criteria is essential to favor child development . Excessive television exposure is associated with delays, for example, in language development (Valdivia Álvarez et al. 2014;Duch et al. 2013) and poorer performance on behavioral measures of executive function (EF) (Li et al. 2020). On the other hand, if used with caution Price et al. 2015), interactive media may contribute to child development (Price et al. 2015; Council on Communications Media. Media and young minds 2016; Radesky et al. 2015;Russo-Johnson et al. 2017;Anderson and Subrahmanyam 2017;Skaug et al. 2018), especially in the domains of language and fine motor (Souto et al. 2020) during early childhood . The Brazilian Society of Pediatrics (Eisenstein et al. 2019) recommends up to 1 h/day of exposure time to all screens for children aged 2-5 years, corroborating with other international guidelines (Council on Communications Media. Media and young minds 2016; World Health Organization 2019). However, recent studies demonstrate difficulties in complying with this recommendation Tamana et al. 2019), and the majority of preschoolers are exposed to screens for longer periods of time than is advised (Tamana et al. 2019), particularly after the onset of the COVID-19 pandemic (Eyimaya and Irmak 2021;Kracht et al. 2021). Given the difficulty of families in following the current recommendations on maximal daily exposure time to screens for children ; Council on Communications Media. Media and young minds 2016; Radesky and Christakis 2016), the risks and benefits of screens exposure to children's cognitive development have been a hot topic of debate (Gerwin et al. 2018). 
Heller's study (Heller 2021), for example, highlights the disparity between the current screen time recommendations and children's actual habits, pointing out the need to increase the use of interactive media to favor children's cognitive development (Heller 2021). Taking into account that the cognitive function is a multidimensional construction that reflects general cognitive functioning, executive functioning, learning, and memory (Assari 2020), difficulting the evaluation of learning environments (Munoz-Chereau et al. 2021); the present study aimed to construct an index that synthesizes environmental learning opportunities for preschoolers considering the home environment and verify whether the index can predict preschoolers' cognitive development. Study design This is a quantitative, exploratory, cross-sectional study with a Multi-Attribute Utility Theory (MAUT) analysis. The study was approved by the Research Ethics Committee of the Universidade Federal dos Vales do Jequitinhonha e Mucuri (UFVJM) (Protocol: 2.773.418). Parents provided written informed consent for children's participation. The data collection period took place from July to December 2019. Participants Preschool children (aged 3-5 years) from public schools in a Brazilian municipality were eligible. Children born preterm or with low birth weight, complications in pregnancy and childbirth, children with signs of malnutrition or diseases that interfere with growth and development were excluded from the study. The sample size was estimated using the OpenEpi software, version 3.01, following a study with a similar design . Initially, 1241 children were from public schools enrolled in the city (Viegas et al. 2021), with a prevalence of 4.58% of language alterations in Brazilian preschoolers from public schools (Melchiors Angst et al. 2015), with a target precision of 10%, a confidence interval of 90% and an effect size of 1 (Cordeiro 2001) would require 51 preschoolers. Instruments A questionnaire was created with data on the child's birth and health to characterize the participants. In addition, the education of parents and the economic level of the child's family were recorded. The Brazilian economic classification criterion from the Brazilian Association of Research Companies (ABEP) was applied to verify the economic level of the families. The questionnaire stratifies the general economic classification from A1 (high economic class) to E (class economic very low) (ABEP 2019), and considers the assets owned by the family, the head's education and housing conditions, such as running water and street paving. The environment in which the child lived was assessed through the Early Childhood Home Observation for Measurement of the Environment (EC_HOME) (Caldwell and Bradley 2003). The EC_HOME is standardized for children aged 3-5 years and analyzed through observations and semistructured interviews during home visits. The instrument contains 55 items divided into 8 scales: I-Learning Materials, II-Language Stimulation, III-Physical Environment, IV-Responsiveness, V-Academic Stimulation, VI-Modeling, VII-Variety, and VIII-Acceptance. The sum of the raw scores of the subscales generates the classification in an environment of low, medium and high stimulation. For the elaboration of the index, dichotomous variables (presence or absence) were used, including in subscales I (presence of 3 or more puzzles, 10 or more children's books), II (encouragement for learning) and III (walking with the family every 2 weeks). 
The HOME Inventory has been used in both international (Jones et al. 2017) and transcultural studies (Bradley 2015), and its psychometric characteristics have been investigated in Brazilian preschoolers (Cronbach's alpha = 0.84 for the 55 items) (Dias et al. 2017). Screen time was assessed using an adapted questionnaire originally developed to measure preschoolers' physical activity (PA), the "Outdoor playtime checklist" (Burdette et al. 2004b), which also includes a description of television exposure in minutes (Burdette et al. 2004a). The instrument was adapted to cover exposure to other media (smartphones and tablets). This questionnaire was validated for Brazilian preschoolers (Gonçalves et al. 2021). The time the child is exposed to television and other screens (cellular, smartphone, or similar) in the morning, afternoon, and evening was measured. The application of the questionnaire lasted an average of seven minutes. Each question was used to identify the day of the week and the period of the day (from waking up to noon; from noon to 6 PM; from 6 PM to bedtime) in which the child was exposed to screens (television and tablet/smartphone). The time of exposure to the screens was recorded by the parents considering five possible options (0, 1-15, 16-30, 31-60, or more than 90 min). Assessment of global cognitive function was performed through the mini-mental state exam (MMC), adapted for children according to Jain and Passi (Jain and Passi 2005) (Brazilian version by Moura and collaborators; Moura et al. 2017). The MMC consists of 13 items covering five domains of cognitive function (orientation, attention and working memory, episodic memory, language, and constructive praxis) with a maximum score of 37. The Brazilian validation and normalization of the MMC presented satisfactory psychometric properties, with 82% specificity and 87% sensitivity. The MMC can be applied in the age group from 3 to 14 years old. The MMC application lasts from 5 to 7 min, and the instrument has been used in several countries, including Brazil (Viegas et al. 2021; Jain and Passi 2005; Moura et al. 2017; Shoji et al. 2002; Rubial-Álvarez et al. 2007; Scarpa et al. 2017; Peviani et al. 2020). Cognitive function was assessed according to the total score. Overall, the MMC is an ideal instrument to track general cognitive function (Viegas et al. 2021). Verbal Fluency (VF) tests have been used to measure EF, vocabulary and mental processing speed (Heleno 2006; Mitrushina et al. 2005), working memory (Henry and Crawford 2004), inhibitory control (Hirshorn and Thompson-Schill 2006) and cognitive flexibility (Amunts et al. 2021). The score was calculated from the number of words produced and the number of wrong words in 60 s per category (toy, animal, body parts, food and color). For the present study, all categories were also summed to create the total word production and total word production without errors variables.
Procedures
Recruitment took place at the doors of the schools, with the invitation made to the children's guardians at the time they left the school. After acceptance and signing of the Informed Consent Form, the subsequent steps were scheduled. The first stage was carried out in the child's home by completing survey questionnaires to assess socioeconomic data (ABEP 2019), quality of the home environment (EC-HOME) (Caldwell and Bradley 2003), and data on learning opportunities, screen time, parental education and child medical history.
The second stage was carried out at the Centro Integrado de Pesquisa em Saúde (CIPq-Saúde) at the Universidade Federal dos Vales do Jequitinhonha e Mucuri (UFVJM), where the cognitive tests were applied (VF, MMC). Data analysis MAUT, known as Multicriteria Decision Support, was used. MAUT is a tool used in the context of the connection and existence of multiple factors in the evaluation process, such as child development, making it possible to identify, characterize and combine different variables (Keeney and Raiffa 1976), also presented in other studies with similar themes . The phases of MAUT are as follows: Phase 1: selection of criteria First, the selected criteria must faithfully represent what will be evaluated and were selected based on the literature (Adunlin et al. 2015). Thus, for learning opportunities, the selected criteria, based on the literature, were: Group A "Home Resources", containing the following items related to the child: (1) to have three or more puzzles; (2) have at least ten children's books; (3) be encouraged to learn the alphabet; (4) take the family out at least every 2 weeks. Group B "Screens", containing item (5) is television used judiciously?; (6) total screen time in day/minutes. Group C "Parental Schooling", containing: (7) maternal and paternal education. Phase 2: establishment of a utility scale for scoring each criterion After selecting the criteria, the subsequent phase aims to place the scores of the selected criteria on the same ordinal scale. In MAUT, it may happen that some selected criteria have different measurement units quantified through attributes (Adunlin et al. 2015). In this study, the selected criteria have answers quantified by attributes described in the fourth column of Table 1. In this phase, the answers were converted into numerical variables using an ordinal scale. For each answer, a positive value was attributed when the practice was considered favorable and null if the criterion does not characterize facilitating opportunities for learning. In Group A, "Resources of the house", the first criterion scores 0.25 for the child who has three or more puzzles (Christensen et al. 2014;Caldwell and Bradley 2003;Pereira et al. 2021); the second criterion scores 0.25 for the child who has at least ten children's books (Christensen et al. Caldwell and Bradley 2003;Bradley 2015); the third criterion scores 0.25 for the child who is encouraged to learn the alphabet (Christensen et al. 2014;Pereira et al. 2021) and the fourth criterion scores 0.25 for the child who walks with the family at least every 2 weeks Caldwell and Bradley 2003;Bradley 2015). The total sum of the criteria in this group makes a total of 1 point. In Group B, "Screens", the fifth criterion scores 0.25 for the child whose use of television is done judiciously, and in the sixth criterion Caldwell and Bradley 2003;Bradley 2015) (Table 1). In group C, 'Maternal and paternal education", containing the seventh criterion, scores with 0.25 per level of education considering the education of the father and mother (Andrade et al. 2005;Vernon-Feagans et al. 2020;Dickson et al. 2016). Table 1 shows the distribution of weights according to the criteria presented. Of note, the child with the highest multicriteria index, i.e., facilitating opportunities for learning, will be the one who has three or more puzzles (Christensen et al. 2014;Caldwell and Bradley 2003;Bradley 2015;Pereira et al. 2021;Defilipo et al. 2012); 10 children's books or more (Christensen et al. 
is encouraged to learn the alphabet; goes out with the family at least every 2 weeks (Christensen et al. 2014; Caldwell and Bradley 2003; Bradley 2015); uses television judiciously (Caldwell and Bradley 2003); and keeps the time of use of all media (tablets, smartphones, and television) within the recommended exposure limit.
Phase 3: determination of the weight of each criterion. The numerical measure of the importance of each criterion is its weight. Different weights may be assigned if the decision-maker understands that the criteria differ in relevance (supported by the literature or by expert opinion on the subject) (Adunlin et al. 2015). In this research, equal weights were used for the different criteria, assuming that each selected factor has the same degree of relevance for children's cognitive learning.
Phase 4: calculation of the multicriteria index. The weights considered for each criterion were those described in Phase 3 and, for the multicriteria index calculation, an average of the evaluations of all criteria was computed for each participating child. The multicriteria index therefore represents the weighted sum of the evaluations of the different criteria. Equation 1 shows how this calculation was performed (n = number of evaluated criteria): Multicriteria index of child i = (evaluation of criterion 1 for child i) × (weight of criterion 1) + … + (evaluation of criterion n for child i) × (weight of criterion n) (1).
Phase 5: validation of results. At this point, it is verified whether the multicriteria analysis meets the objective (Henry and Crawford 2004; Adunlin et al. 2015). In this study, the aim was to verify whether a higher multicriteria index was related to better performance on the VF and global cognitive function (MMC) tests. Therefore, a correlation analysis between the multicriteria index and the variables of those tests (VF and MMC) was performed. First, Excel (version 2010) was used to formulate the multicriteria model; then, for the validation step, the data were transferred to the Statistical Package for the Social Sciences (version 22.0). Normality was assessed with the Shapiro-Wilk test. Subsequently, Pearson's correlations between the multicriteria index and the cognitive tests were computed. With the variables that presented p < 0.05 in the correlation analysis, simple linear regression involving the multicriteria index was performed to verify how much the created index could explain the results in the MMC and VF tests; the variable age was adjusted for in the model (see the computational sketch after the participant description below).
Results Fifty-one preschoolers from public schools in a small town in southeastern Brazil participated in this study. More than half of the children were boys (52.9%), with a mean age of 5 years; the children's mothers had a mean age of 31 years (± 6), and the fathers a mean age of 45 years (± 25). Most of the children's families belonged to stratum C of the economic classification, which corresponds to the lower middle class. Of note, the children's parents had relatively high levels of schooling: 82.3% of the mothers and 52% of the fathers had 12 years of education. In addition, most children belonged to the middle quartiles of the EC-HOME scores, which characterizes medium-stimulation environments. More than half of the children had high screen time exposure (64.7%), and the average exposure time was 133.23 min/day (± 69.75).
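As referenced above, the following is a minimal computational sketch of Phases 1-5, assuming equal weights of 0.25 per criterion as in Table 1. The seven 0/1 criterion evaluations, the invented mini-dataset, and the simulated verbal-fluency scores are purely illustrative (the parental-schooling criterion, which the study scores per education level, is simplified here to a single 0/1 flag); the validation step mirrors Phase 5 with a Shapiro-Wilk check, a Pearson correlation, and a simple linear regression adjusted for age.

```python
# Illustrative sketch of the multicriteria index (Eq. 1) and its validation (Phase 5).
# All data below are invented; only the 0.25-per-criterion scoring follows Table 1.
import numpy as np
from scipy import stats
import statsmodels.api as sm

def multicriteria_index(evaluations, weight=0.25):
    """Eq. (1): weighted sum of the criterion evaluations for one child."""
    return sum(weight * e for e in evaluations)

rng = np.random.default_rng(0)
# 51 children x 7 criteria, each coded 1 (satisfied) or 0 (not satisfied).
criteria = rng.integers(0, 2, size=(51, 7))
index = np.array([multicriteria_index(row) for row in criteria])

# Simulated outcomes standing in for the cognitive scores and the children's ages.
vf_total = 20 + 15 * index + rng.normal(0, 5, size=51)   # VF total word production
age_months = rng.integers(36, 72, size=51)

# Phase 5: normality check, Pearson correlation, then regression adjusted for age.
print(stats.shapiro(index))
r, p = stats.pearsonr(index, vf_total)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

X = sm.add_constant(np.column_stack([index, age_months]))
ols = sm.OLS(vf_total, X).fit()
print(f"R^2 = {ols.rsquared:.2f}")   # analogous to the 17-19% of variance reported below
```

With equal weights, the index is simply 0.25 times the number of satisfied criteria, and the regression R² plays the role of the explained-variance figures reported in the Results.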
Table 2 presents the participants' characteristics and the correlation of the variables with the multicriteria index. Table 3 presents the correlation between the multicriteria index and the cognitive tests. The multicriteria index was correlated with the MMC test total score (p = 0.002). The multicriteria index also showed a positive correlation with the VF test in the subcategories toy (r = 0.333; p = 0.017), animals (r = 0.347; p = 0.013) and body parts (r = 0.325; p = 0.020), as well as with word production (p = 0.001) and word production without errors (p = 0.001) (Table 3). Figure 1 shows the correlation between the multicriteria index and the cognitive test variables (VF and MMC score). High scores in the index of cognitive stimulation opportunities correlated positively with high scores in the VF tests (for word production both with and without errors). High scores in the multicriteria index also meant high scores in the cognitive test (Fig. 1). Table 4 presents the simple linear regressions between the variables of the cognitive tests and the multicriteria index (p < 0.05). A high multicriteria index was linked to improved performance in both VF measures (production of total words and production of words without errors; p = 0.001). Of note, children who presented greater facilitating opportunities for learning (better results in the multicriteria index) also showed better global cognitive function. In addition, a high multicriteria index explained 18% of VF (total word production), 19% of VF (total word production without errors) and 17% of improved performance in global cognitive function (Table 4). This study presented a power of 0.95, with an effect size of 0.14 and an alpha error of 0.05, for total word production, and a power of 0.94, with an effect size of 0.21 and an alpha error of 0.05, for the MMC. The effect size was calculated using Cohen's d, with values of 0.20, 0.50, and 0.80 interpreted as small, medium, and large, respectively (Cohen 2013).
Discussion In summary, the multicriteria index showed the potential to synthesize environmental opportunities that facilitate learning in preschoolers, since it correlated positively with the MMC test and with VF in three categories. Similarly to the MMC test, the multicriteria index was accurate for screening global cognitive function (Viegas et al. 2021; Peviani et al. 2020); therefore, children with high scores in the multicriteria index also had high scores in the MMC test. In addition, we also verified whether the index could predict cognitive development. Child development is influenced by multifactorial aspects, including the child's reciprocal relationships with the environment (Daelmans et al. 2017). Our index sought to address the sphere of the home environment (Bronfenbrenner 2005), which exerts the greatest influence on a child's cognitive development. In the present study, we considered for the analyses the home resources (Christensen et al. 2014; Caldwell and Bradley 2003), screen time exposure (Heller 2021) and parental education (Andrade et al. 2005; Vernon-Feagans et al. 2020; Dickson et al. 2016; Hamadani et al. 2014). Our results corroborate recent findings showing a direct influence of the home environment on children's cognitive development. Overall, improvements in children's learning opportunities through home stimulation can be essential for promoting early learning (Caldwell and Bradley 2003; Bradley 2015), favoring child cognitive development (Christensen et al. 2014).
In a longitudinal study with Bangladeshi children, home environment, child growth, and parental education mediated 86% of the effects of poverty on child cognition in the first 5 years of life (Hamadani et al. 2014); thus, simple environmental interventions at home can positively impact children's cognitive development (Yang et al. 2021). The healthy use of interactive media as a learning resource has been discussed in the literature. Previous studies have pointed out that the use of interactive media can contribute positively to child development (Price et al. 2015; Radesky et al. 2015; Russo-Johnson et al. 2017; Anderson and Subrahmanyam 2017; Skaug et al. 2018; Souto et al. 2020) if used sparingly. Accordingly, a recent study showed positive results of using interactive media for domains of child development, especially language and fine motor coordination, in early childhood (Souto et al. 2020); thus, if used with caution, tablets and smartphones may improve preschoolers' knowledge (learning of numbers, the alphabet and colors). Moreover, animated e-books (e.g., with voice and interactive pictures) can awaken children's interest in reading and creating. Parents with higher schooling may therefore offer their children the use of media as a learning resource. Recent studies have shown that high parental education (Vernon-Feagans et al. 2020) is positively associated with the development of children's temporal cortical areas, which are related to reading ability (Assari 2020). According to previous studies, access to material resources and home stimulation may have contributed to the positive association between parental education and reading ability. Hence, the variables mentioned above were crucial in constructing the multicriteria index of this study (see Table 1). Children with the highest scores on the index also achieved the highest scores on the VF test in three categories, namely animals (r = 0.347; p = 0.013), toys (r = 0.333; p = 0.017) and body parts (r = 0.325; p = 0.020), and, consequently, on the total production of words with and without errors. We highlight that the VF test (or task) assesses language (lexical knowledge) and executive functions. Lin and colleagues (Lin et al. 2017) investigated brain activation in adults using VF tests; they emphasized the predictive potential of VF tests, which might be employed for executive function screening. These characteristics are related to the components of volition/choice, flexibility, and inhibition within executive functions (Anderson 2002). In addition, we believe that these characteristics are expanded by access to a repertoire of resources that encourage verbal communication and challenge children's thinking (Bornstein and Putnick 2012), facilitating, for example, literacy (Bornstein and Putnick 2012), an essential component for cognitive processes (Jeong et al. 2019), academic learning (Rodriguez and Tamis-LeMonda 2011) and language expansion (Bornstein and Putnick 2012). Our data also reinforce the importance of environmental opportunities that facilitate joint and simultaneous learning for academic performance (Rodriguez and Tamis-LeMonda 2011), since the multicriteria index explained 18% of the total word production, 19% of the total word production without errors, and 17% of the global cognitive function score. The impact of the children's home environment is enhanced by the facilitating learning opportunities provided in their early years (Munoz-Chereau et al. 2021).
Considering the evidence that academic difficulties can be accurately tracked in the preschool years and last throughout life (Camara-Costa et al. 2015), some of these contextual factors probably represent environmental characteristics that can be changed in early life through adequate support to families (Camara-Costa et al. 2015) and through efforts to build holistic learning opportunities in developing countries (Camara-Costa et al. 2015). Our study has some limitations. First, despite having been utilized in research with Brazilian preschoolers aged 3-6 years, the MMC test has not been formally validated for children in this age range (Viegas et al. 2021). Second, screen time exposure was calculated by adding up the use of interactive media and television; therefore, no questionnaire was used to assess cautious use. However, the criteria for time of exposure to television were measured using a validated instrument subscale (Caldwell and Bradley 2003) as well as current guidelines (Council on Communications and Media 2016; Radesky et al. 2015), with flexibility according to emerging studies (Heller 2021). To our knowledge, this is the first study to consider multiple factors in the home environment (including interactive media as a resource) to create an index that synthesizes environmental learning opportunities in preschoolers from a Brazilian urban area. In addition, we used MAUT, a robust methodology that considers multiple factors and has been used in similar studies in the areas of health (Nobre et al. 2022), cognitive development, and language.
Conclusion The present multicriteria index has potential application, as it synthesizes the preschooler's environmental learning opportunities and predicts domains of child cognitive development. A positive and significant relationship was found between the multicriteria index and performance in both VF tests and in global cognitive function: a higher index meant better performance. Our data point out the importance of family-based interventions to improve preschoolers' academic performance. Children who have access to books and puzzles, are stimulated to learn the alphabet, go on family outings, watch screens sparingly (respecting usage criteria and exposure time), and have parents with higher schooling probably have greater global cognitive function and VF.
Summary Child development is a product of the child's reciprocal relationships with the environment, so that access to environmental opportunities can favor learning and cognitive development. The objectives were to establish an index that synthesizes environmental learning opportunities, considering relevant factors of the domestic environment, and to verify how the index relates to domains of child cognitive development. This was a quantitative, cross-sectional, exploratory study with 51 preschoolers using multi-attribute utility theory (MAUT). The criteria used for drawing up the index were supported by the literature and subdivided into Group A, "Home Resources", including: (1) having three or more puzzles; (2) having at least ten children's books; (3) being encouraged to learn the alphabet; (4) going out with the family at least every 2 weeks; Group B, "Screens": (5) caution in using television; (6) total screen time per day in minutes; and Group C, "Parental Schooling": (7) maternal and paternal education. Pearson correlation analyses were used to verify the relationship between the established index and the cognitive tests, followed by univariate linear regression.
The index correlated with the total score of the mini-mental state exam (MMC) and with the verbal fluency (VF) test in the categories of total word production and word production without errors. A high multicriteria index explained 18% of the VF (total word production), 19% of the VF (total production of words without errors) and 17% of the MMC. The multicriteria index developed here therefore has potential for use, given the positive and significant associations between the environmental opportunities that facilitate cognitive development and better performance on the VF and MMC tests.
Compact model for Quarks and Leptons via flavored-Axions
We show how the scales responsible for Peccei-Quinn (PQ), seesaw, and Froggatt and Nielsen (FN) mechanisms can be fixed, by constructing a compact model for resolving rather recent, but fast-growing issues in astro-particle physics, including quark and leptonic mixings and CP violations, high-energy neutrinos, QCD axion, and axion cooling of stars. The model is motivated by the flavored PQ symmetry for unifying the flavor physics and string theory. The QCD axion decay constant congruent to the seesaw scale, through its connection to the astro-particle constraints of both the stellar evolution induced by the flavored-axion bremsstrahlung off electrons $e+Ze\rightarrow Ze+e+A_i$ and the rare flavor-changing decay process induced by the flavored-axion $K^+\rightarrow\pi^++A_i$, is shown to be fixed at $F_A=3.56^{+0.84}_{-0.84}\times10^{10}$ GeV (consequently, the QCD axion mass $m_a=1.54^{+0.48}_{-0.29}\times10^{-4}$ eV, Compton wavelength of its oscillation $\lambda_a=8.04^{+1.90}_{-1.90}\,{\rm mm}$, and axion to neutron coupling $g_{Ann}=2.14^{+0.66}_{-0.41}\times10^{-12}$, etc.). Subsequently, the scale associated to FN mechanism is dynamically fixed through its connection to the standard model fermion masses and mixings, $\Lambda=2.04^{\,+0.48}_{\,-0.48}\times10^{11}\,{\rm GeV}$, and such fundamental scale might give a hint where some string moduli are stabilized in type-IIB string vacua. In the near future, the NA62 experiment expected to reach the sensitivity of ${\rm Br}(K^+\rightarrow\pi^++A_i)<1.0\times10^{-12}$ will probe the flavored-axions or exclude the model, if the astrophysical constraint of star cooling is really responsible for the flavored-axion.
I. INTRODUCTION Many of the outstanding mysteries of astrophysics may be hidden from our sight at all wavelengths of the electromagnetic spectrum because of absorption by matter and radiation between us and the source. So, data from a variety of observational windows, especially through direct observations with neutrinos and axions, may be crucial. Hence, axions and neutrinos in astro-particle physics and cosmology could be powerful sources for a new extension of SM particle physics [1,2], in that they stand out for their convincing physics and the variety of experimental probes. Fortunately, the most recent analyses of our knowledge of neutrinos (low-energy neutrino oscillations [9] and high-energy neutrinos [10]) and axions (the QCD axion [11,12] and axion-like particles (ALPs) [13,14]) enter a new phase of model construction for quarks and leptons. In light of finding the fundamental scales, interestingly enough, there are two astro-particle constraints, coming from the star cooling induced by the flavored-axion bremsstrahlung off electrons e + Ze → Ze + e + A_i [13] and the rare flavor-changing decay process induced by the flavored-axion K+ → π+ + A_i [15], respectively; the former requires 6.7 × 10^−29 ≲ α_Aee ≲ 5.6 × 10^−27 at 3σ, where α_Aee is the fine-structure constant of the axion coupling to electrons. String theory, when compactified to four dimensions, can generically contain G_F = anomalous gauged U(1) plus non-Abelian (finite) symmetries. In this regard, in order to construct a model for the aforementioned fundamental issues, one needs more types of gauge symmetry besides the SM gauge theory.
One simple approach to a neat solution of these issues is to introduce a type of symmetry based on the seesaw [3] and Froggatt-Nielsen (FN) [8] frameworks, since it is widely believed that non-renormalizable operators in the effective theory should come from a more fundamental underlying renormalizable theory by integrating out heavy degrees of freedom. If so, one can anticipate that there may exist some correlations between low-energy and high-energy physics. As shown in Ref. [1], the FN mechanism formulated with a global U(1) flavor symmetry could be promoted from a string-inspired gauged U(1) symmetry. Such a flavored-PQ symmetry U(1) acts as a bridge between flavor physics and string theory [1,16]. Even though gravity (which is well described by Einstein's general theory of relativity) lies outside the purview of the SM, once the gauged U(1)s are introduced in an extended theory, the theory should be free of mixed gravitational anomalies. Flavor modeling based on non-Abelian finite groups has recently been singled out as a good candidate to describe the flavor mixing patterns, e.g., Refs. [1,17,18], since it is preferred by vacuum configurations and by string theory for flavor physics. In the so-called flavored PQ symmetry model, where the SM fermion fields as well as SM gauge singlet fields carry PQ charges but the electroweak Higgs doublet fields do not [1,7,17], the flavored-axions (one linear combination being the QCD axion and its orthogonal combination an ALP) couple to hadrons, photons and leptons, and the PQ symmetry breaking scale is congruent to the seesaw scale. Hence, flavored-PQ symmetry modeling extended to G_F could be a powerful tool to resolve the open questions of astro-particle physics and cosmology. Since astro-particle physics observations have increasingly placed tight constraints on the parameters of flavored-axions, it is timely for a compact model for quarks and leptons to take on the interesting challenge of fixing the fundamental scales, such as those of the seesaw, PQ, and FN mechanisms. The purpose of the present paper is to construct a flavored-PQ model along the lines of this challenge, which naturally extends to a compact symmetry G_F for new physics beyond the SM. Remark that the present construction closely follows the setup of Ref. [7]. The rest of this paper is organized as follows. In Sec. II we construct a compact model based on SL_2(F_3) × U(1)_X in a supersymmetric framework. Subsequently, we show that the model works well with the SM fermion mass spectra and their peculiar flavor mixing patterns. In Sec. III we show that the QCD axion decay constant (congruent to the seesaw scale) is well fixed through constraints coming from astro-particle physics, and that in turn the FN scale is dynamically determined via its connection to the SM fermion masses and mixings. And we show several properties of the flavored-axions. What we have done is summarized in Sec. IV, where we provide our conclusions. In the Appendix we consider possible next-to-leading order corrections to the vacuum expectation value (VEV). (Recently, studies of flavored-axions have gradually been gaining momentum [19].) As mentioned in the Introduction, finding the scales responsible for the seesaw [3], PQ [5], and FN [8] mechanisms, as a theoretical guideline to the aforementioned fundamental issues, could be one of the big challenges. To address this challenge, we construct a neat and economical model based on the flavored-PQ symmetry U(1)_X embedded in a non-Abelian finite group, which may provide a hint and/or a framework to accommodate all of the fundamental issues in astro-particle physics and cosmology.
Along this line, the G F quantum number of the field contents is assigned in a way that (a) the G F requires a desired vacuum configuration to compactly describe the quark and lepton masses and mixings, (b) the G F fits in well with the astro-particle constraints induced by the flavored-axions, and (c) the U(1) X mixed-gravitational anomaly-free condition with the SM flavor structure demands additional Majorana fermions as well as no axionic domain-wall problem. Similar to Ref. [7] it is followed by the model setup: Assume we have a SM gauge theory based on the G SM = SU(3) C × SU(2) L × U(1) Y gauge group, and that the theory has in addition a G F ≡ SL 2 (F 3 ) × U(1) X for a compact description of new physics beyond SM. Here we assume that the symmetry group of the double tetrahedron SL 2 (F 3 ) [18,20,21] 4 is realized in field theories on orbifolds and a subgroup of a gauge symmetry that can be protected from quantum-gravitational effects. Since chiral fermions are certainly a main ingredient of the SM, the gauge-and gravitational-anomalies of the gauged U(1) X are generically present, making the theory inconsistent. Hence some requirements needed for the extended theory are: anomalies should be cancelled by the Green-Schwarz (GS) mechanism [22] (see Ref. [1]). (ii) The non-vanishing anomaly coefficient of the quark sector constrains the quantity N f j X ψ j in the gravitational instanton backgrounds (with N f generations well defined in the non-Abelian discrete group), and in turn whose in the QCD instanton backgrounds, where the t a are the generators of the representation of SU(3) to which Dirac fermion ψ i belongs with X-charge. Thanks to the two QCD anomalous U(1) we have a relation [17] |δ indicating that the ratio of QCD anomaly coefficients is fixed by that of the decay constants f a i of the flavored-axions A i . Here f a i set the flavor symmetry breaking scales, and their ratios appear in expansion parameters of the quark and lepton mass spectra (see Eqs. (38), (39), and (40)). where k i (i = 1, 2) are nonzero integers, which is a conjectured relationship between two anomalous U(1)s. The U(1) X i is broken down to its discrete subgroup Z N i in the backgrounds of QCD instanton, and the quantities N i (nonzero integers) associated to the axionic domain-wall are given by (iv) The U(1) X invariance forbids renormalizable Yukawa couplings for the light families, but would allow them through effective non-renormalizable couplings suppressed by (F /Λ) n with a flavon field F and positive integer n. Then the SM gauge singlet flavon field F is activated to dimension-four(three) operators with different orders [1,8,17,23] where OP 4(3) is a dimension-4(3) operator, and all the coefficients c n and c ′ 1 are complex numbers with absolute value of order unity. Here the flavon field F is a scalar field which acquires a VEV and breaks spontaneously the flavored-PQ symmetry U(1) X . And the scale Λ, above which there exists unknown physics, is the scale of flavor dynamics, and is associated with heavy states which are integrated out. Such fundamental scale may come from where some string moduli are stabilized. The flavored-PQ symmetry U(1) X is composed of two anomalous symmetries U(1) X 1 × U(1) X 2 generated by the charges X 1 ≡ −2p and X 2 ≡ −q. Here the global U(1) symmetry 5 including U(1) R is remnants of the broken U(1) gauge symmetries which can connect string theory with flavor physics [1,16]. 
Hence, the spontaneous breaking of U(1) X realizes the existence of the Nambu-Goldstone (NG) mode (called axion) and provides an elegant solution to the strong CP problem. A. Vacuum configuration In this section we will review the fields contents responsible for the vacuum configuration since the scalar potential of the model is the same as in Ref. [7]. Apart from the usual two Higgs doublets H u,d responsible for electroweak symmetry breaking, which transform as (1, 0) under SL 2 (F 3 ) × U(1) X symmetry, the scalar sector is extended via two types of new scalar multiplets that are G SM -singlets: flavon fields Φ T , Φ S , Θ,Θ, η, Ψ,Ψ responsible for the spontaneous breaking of the flavor symmetry, and driving fields Φ T 0 , Φ S 0 , η 0 , Θ 0 , Ψ 0 that are to break the flavor group along required VEV directions and to allow the flavons to In addition, the superpotential W in the theory is uniquely determined by the U(1) R symmetry, containing the usual R-parity as a subgroup: {matter f ields → e iξ/2 matter f ields} and {driving f ields → e iξ driving f ields}, with W → e iξ W , whereas flavon and Higgs fields 5 It is likely that an exact continuous global symmetry is violated by quantum gravitational effects [24]. remain invariant under an U(1) R symmetry. As a consequence of the R symmetry, the other superpotential term κ α L α H u and the terms violating the lepton and baryon number symmetries are not allowed. In addition, dimension 6 supersymmetric operators like Q i Q j Q k L l (i, j, k must not all be the same) are not allowed either, and stabilizing proton. The superpotential dependent on the driving fields having U(1) R charge 2, which is , is given at leading order by [7] W where higher dimensional operators are neglected, and µ i=T,Ψ,η are dimensional parameters and g T,η , g 1,...,8 are dimensionless coupling constants. The fields Ψ andΨ charged by X 2 , respectively, are ensured by the U(1) X symmetry extended to a complex U(1) due to the holomorphy of the supepotential. So, the PQ scale µ Ψ = v Ψ vΨ/2 corresponds to the scale of spontaneous symmetry breaking of the U(1) X 2 symmetry. Since there is no fundamental distinction between the singlets Θ andΘ as indicated in Table I, we are free to defineΘ as the combination that couples to Φ S 0 Φ S in the superpotential W v [25]. At the leading order the usual superpotential term µH u H d is not allowed, while at the leading order the operator driven by Ψ 0 and at the next leading order the operators driven by Φ T 0 and η 0 are allowed which is to promote the effective µ-term Actually, in the model once the scale of breakdown of U(1) X symmetry is fixed by the constraints coming from astrophysics and particle physics, the other scales are automatically fixed by the flavored model structure. And it is clear that at the leading order the scalar supersymmetric W (Φ T Φ S ) terms are absent due to different U(1) X quantum numbers, which is crucial for relevant vacuum configuration in the model to produce compactly the present lepton and quark mixing angles. The vacuum configuration of the flavon fields, Φ T , Φ S , η,Θ, Ψ, andΨ, is obtained from the minimization conditions of the F -term scalar potential 6 . At the leading order the global minima of the potential are given [7] by where v Ψ = vΨ and κ = v S /v Θ in SUSY limit. B. 
Quarks, Leptons, and flavored-Axions Under SL 2 (F 3 ) × U(1) X with U(1) R = +1, the SM quark matter fields are sewed by the five (among seven) in-equivalent representations 1, 1 ′ , 1 ′′ , 2 ′ and 3 of SL 2 (F 3 ), and assigned as in Table II and III. Because of the chiral structure of weak interactions, bare fermion masses are not allowed in the SM. Fermion masses arise through Yukawa interactions 7 . Through 6 The vacuum configuration of the driving fields is not relevant in this work. And we will not consider seriously the corrections to the VEVs due to higher dimensional operators contributing to Eq. (7) since their effects are expected to be only few percents level, see Appendix B. 7 Since the right-handed neutrinos N c (S c ) having a mass scale much above (below) the weak interaction scale are complete singlets of the SM gauge symmetry, they can possess bare SM invariant mass terms. However, the flavored-PQ symmetry U (1) X guarantees the absence of bare mass terms M N c N c and M S S c S c . As discussed in Refs. [1,7,17] , the quantum numbers of the lepton fields are summarized as in Table III. The lepton Yukawa superpotential, similar to the quark sector, invariant under G SM ×G F ×U(1) R reads at leading order In the above charged-lepton Yukawa superpotential, W ℓ , it has three independent Yukawa terms at the leading: apart from the Yukawa couplings, each term involves flavon field y β which appears in the superpotentials (13) and (14). In the neutrino Yukawa superpotential 8 , W ν , two right-handed Majorana neutrinos S c and N c are introduced to make light neutrinos pseudo-Dirac particles and to realize an exact tri-bimaximal mixing (TBM) [26] at leading order, respectively. Such additional Majorana fermion S c plays a role of making no axionic domain-wall problem, which links low energy neutrino oscillations to astronomical-scale baseline neutrino oscillations. The different assignments of SL 2 (F 3 ) × U(1) X quantum number to Majorana neutrinos guarantee 8 We will study on neutrino in detail including numerical analysis in the next project. the absence of the Yukawa terms S c N c × F . Consequently, two Dirac neutrino mass terms are generated; one is associated with S c , and the other is N c . The right-handed neutrino SU(2) L singlet denoted as N c transforms as the (3, p) and additional Majorana neutrinos denoted as S c e , S c µ , and S c τ transform as (1, r − Q y s 1 ), (1 ′′ , r − Q y s 2 ) and (1 ′ , r − Q y s 3 ), respectively. Below the cutoff scale Λ, the mass term of the Majorana neutrinos N c comprises an exact TBM pattern. Imposing the U(1) X symmetry explains the absence of the Yukawa terms LN c Φ S and N c N c Φ T as well as does not allow the interchange between Φ T and Φ S , both of which transform differently under U(1) X , so that the exact TBM is obtained at leading order. With the desired VEV alignment in Eq. (9) it is expected that the leptonic Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing matrix at the leading order is exactly compatible with a TBM In order to explain the present terrestrial neutrino oscillation data, non-trivial next leading order corrections should be taken into account: for example, considering next leading order Yukawa superpotential in the Majorana neutrino sector triggered by the field Φ T are written (For neutrino phenomenology we will consider in detail in the next project. See also an interesting paper [27].) After including the higher dimensional operators there remain no residual symmetries 9 . 
Remark that, as in the SM quark fields since the U(1) X quantum numbers are arranged to lepton fields as in Table III with the condition (4) (or Eq. (19)) satisfied, it is expected that the SM gauge singlet flavon fields derive higher-dimensional operators, which are eventually where Similarly, the U(1) X quantum numbers associated to the neutrinos can be assigned by the anomaly-free condition of U(1) X -[gravity] 2 together with the measured active neutrino observables: This vanishing anomaly, however, does not restrict Q yν (or equivalently Q y ss i ), whose quantum numbers can be constrained by the new neutrino oscillations of astronomical-scale baseline, as shown in Refs. [1,7,28]. With the given above U(1) X quantum numbers, such whereQ y s i = Q y s 1 /X 2 . We choose k 2 = ±21 for the U(1) X i charges to be smallest making no axionic domain-wall problem, as in Ref. [1,7]. Hence, for the case-IQ y s 1 +Q y s 2 +Q y s 3 = −10 (32); for the case-II −20 (22); for the case-III −14 (28), respectively, for k 2 = 21(−21). Then, the color anomaly coefficients are given by δ G 1 = 2X 1 and δ G 2 = −3X 2 , and subsequently from Eq. (5) the axionic domain-wall condition as in Ref. [7] is expressed with the reduced Clearly, in the QCD instanton backgrounds since the N 1 and N 2 are relative prime there is no Z N DW discrete symmetry, and therefore no axionic domain-wall problem occurs. The model incorporates the SM gauge singlet flavon fields F A = Φ S , Θ, Ψ,Ψ with the following interactions invariant under the U(1) X ×SL 2 (F 3 ) and the resulting chiral symmetry, i.e., the kinetic and Yukawa terms, and the scalar potential V SUSY in SUSY limit 11 are of the form Here the V SUSY term is replaced by V total including soft SUSY breaking term when SUSY breaking effects are considered, and ψ stands for all Dirac fermions. The kinetic terms +higher order terms for canonically normalized fields are written as The scalar fields Φ S , Θ and Ψ(Ψ) have X-charges X 1 and X 2 (−X 2 ), respectively, that is where ξ k (k = 1, 2) are constants. So, the potential V SUSY has U(1) X global symmetry. In order to extract NG modes resulting from spontaneous breaking of U(1) X symmetry, we set the decomposition of complex scalar fields as follows 12 in which we have set Φ S1 = Φ S2 = Φ S3 ≡ Φ Si and h Ψ = hΨ in the SUSY limit, and v g = v 2 Ψ + v 2 Ψ . And the NG modes A 1 and A 2 are expressed as with the angular fields φ S , φ θ and φ Ψ . With Eqs. (23) and (25), the derivative couplings of A k arise from the kinetic terms 11 In our superpotential, the superfields Φ S , Θ and Ψ(Ψ) are the SM gauge singlets and have −2p and −q(q) X-charges, respectively. Given soft SUSY-breaking potential, the radial components of the X-fields |Φ S |, |Θ| |Ψ| and |Ψ| are stabilized. The X-fields contain the axion, saxion (the scalar partner of the axion), and axino (the fermionic superpartner of the axion). 12 Note that the massless modes are not contained in the fieldsΘ, where v F = v Θ (1 + κ 2 ) 1/2 and h F = (κh S + h Θ )/(1 + κ 2 ) 1/2 , and the dots stand for the orthogonal components h ⊥ F and A ⊥ 1 . Recalling that κ ≡ v S /v Θ . Clearly, the derivative interactions of A k (k = 1, 2) are suppressed by the VEVs v F and v Ψ . From Eq. (27), performing v F , v Ψ → ∞, the NG modes A 1,2 , whose interactions are determined by symmetry, are invariant under the symmetry and distinguished from the radial modes, h F and h Ψ . 
Quarks and CKM mixings, and flavored-Axions Now, let us move to discussion on the realization of quark masses and mixings, in which the physical mass hierarchies are directly responsible for the assignment of U(1) X quantum numbers. The axion coupling matrices to the up-and down-type quarks, respectively, are diagonalized through bi-unitary transformations: , and the mass eigenstates ψ ′ R = V ψ R ψ R and ψ ′ L = V ψ L ψ L . These transformation include, in particular, the chiral transformation necessary to make M u and M d real and positive. This induces a contribution to the QCD vacuum angle. Note here that under the chiral rotation of the quark fields the effective QCD vacuum angle is invariant, see Refs. [1,17]. With the desired VEV directions in Eq. (9), in the above Lagrangian (28) the mass matrices M u and M d for up-and down-type quarks, respectively, are expressed as where In the above mass matrices the corresponding Yukawa terms for up-and down-type quarks are given by One of the most interesting features observed by experiments on the quarks is that the mass spectrum of the up-type quarks exhibits a much stronger hierarchical pattern to that of the down-type quarks, which may indicate that the CKM matrix [29] is mainly generated by the mixing matrix of the down-type quark sector. Moreover, due to the diagonal form of the up-type quark mass matrix in Eq. (48) the CKM mixing matrix V CKM ≡ V u L V d † L coming from the charged quark-current term in Eq. (28) is generated from the down-type quark matrix in Eq. (30): in the Wolfenstein parametrization [30] and at higher precision [31], where λ = where P u and Q u are diagonal phase matrices, and V d L and V d R can be determined by diagonalizing the matrices for M † d M d and M d M † d , respectively. The physical structure of the upand down-type quark Lagrangian should match up with the empirical up-and down-type quark masses and their ratios calculated from the measured PDG values [29]: Then, the mixing matrix V d † L = V CKM is obtained by diagonalizing the Hermitian matrix The CKM mixing angles in the standard parametrization [33] can be roughly described as And with the quark fields redefinition the CKM CP phase is given as Subsequently, the up-and down-type quark masses are obtained as And the parameter of tan β ≡ v u /v d is given in terms of the PDG value in Eq. (36) by Since all the parameters in the quark sector are correlated with one another, it is very crucial for obtaining the values of the new expansion parameters to reproduce the empirical results of the CKM mixing angles and quark masses. Moreover, since such parameters are also closely correlated with those in the lepton sector, finding the value of parameters is crucial to produce the empirical results of the charged leptons (see below Eq. (48)) and the light active neutrino masses in our model. In the following subsequent subsection we will perform a numerical simulation for quark sector. Numerical analysis for Quark masses and CKM mixing angles We perform a numerical simulation 14 using the linear algebra tools of Ref. [34]. With the inputs tan β = 4.7 , κ = 0.33 , 14 Here, in numerical calculation, we only have considered the mass matrices in Eqs. (29) and (30) since it is expected that the corrections to the VEVs due to dimensional operators contributing to Eq. (7)) could be small enough below a few percent level, see Appendix B. III. 
SCALE OF PQ PHASE TRANSITION AND QCD AXION PROPERTIES The couplings of the flavored-axions and the mass of the QCD axion are inversely proportional to the PQ symmetry breaking scale. From the theoretical viewpoint of Refs. [1,7,17], the scale of PQ symmetry breakdown, congruent to that of the seesaw mechanism, can lie far beyond the electroweak scale, rendering the flavored-axions very weakly interacting particles. Since the weakly coupled flavored-axions (one linear combination being the QCD axion and its orthogonal combination an ALP) could carry away a large amount of energy from the interior of stars, according to the standard stellar evolution scenario their couplings to electrons (the second (µ) and third (τ) generation particles are absent in almost all astrophysical objects), photons, and nucleons should be bounded. Hence, such weakly coupled flavored-axions have a wealth of interesting phenomenological implications in the context of astro-particle physics [1,7], like the formation of a cosmic diffuse background of axions from the Sun [35,36]; from evolved low-mass stars, such as red giants and horizontal-branch stars in globular clusters [37,38], or white dwarfs [39,40]; from neutron stars [41]; and from the duration of the neutrino burst of the core-collapse supernova SN1987A [42]; as well as the rare flavor-changing decay processes induced by the flavored-axions, K+ → π+ + A_i [15,43] and µ → e + γ + A_i [43,45], etc. Such flavored-axions could be produced in hot astrophysical plasmas, thus transporting energy out of stars and other astrophysical objects, and they could also be produced in the rare flavor-changing decay processes. Actually, the coupling strength of these particles to normal matter and radiation is bounded by the constraint that stellar lifetimes and energy-loss rates [46], as well as the branching ratios of the µ and K flavor-changing decays [15,45], should not conflict with observations. Interestingly enough, recent observations also show a preference for extra energy losses in stars at different evolutionary stages (red giants, supergiants, helium core burning stars, white dwarfs, and neutron stars); see Ref. [13] for a summary of the extra cooling observations and Ref. [1] for the interpretation in terms of a bound on the QCD axion decay constant. The present experimental limit, Br(K+ → π+ A_i) < 7.3 × 10^−11 [15], puts a lower bound on the axion decay constant, and in the near future the NA62 experiment, expected to reach the sensitivity Br(K+ → π+ A_i) < 1.0 × 10^−12 [47], will probe the flavored-axions or put a severe bound on the QCD axion decay constant F_A (or the flavored-axion decay constants F_{a_i} = f_{a_i}/δ_{G_i}). According to the recent investigations in Refs. [1,7], the flavored-axions (the QCD axion and its orthogonal ALP) would provide very good hints for a new physics model for quarks and leptons. Fortunately, in a framework of the flavored-PQ symmetry, the cooling anomalies hint at an axion coupling to electrons, photons, and neutrons, which should not conflict with the current upper bound on the rare K+ → π+ A_i decay. Remark that once the scale of PQ symmetry breakdown is fixed, the others follow automatically, including the QCD axion decay constant and the mass scale of the heavy neutrinos associated to the seesaw mechanism.
In order to fix the QCD axion decay constant F A (or flavored-axion decay constants F a i = f a i /δ G i ), we will consider two tight constraints coming from astro-particle physics: axion cooling of stars via bremsstrahlung off electrons and flavor-violating processes induced by the flavored-axions. A. Flavored-Axion cooling of stars via bremsstrahlung off electrons In the so-called flavored-axion framework, generically, the SM charged lepton fields are nontrivially U(1) X -charged Dirac fermions, and thereby the flavored-axion coupling to electrons are added to the Lagrangian through a chiral rotation. In the present model since the flavored-axion A 2 couples directly to electrons, the axion can be emitted by Compton scattering, atomic axio-recombination and axio-deexcitation, and axio-bremsstrahlung in electron-ion or electron-electron collision [37]. The flavoredaxion A 2 coupling to electrons in the model reads where m e = 0.511 MeV, 2 and X e = −11X 2 . Indeed, the longstanding anomaly in the cooling of WDs (white dwarfs) and RGB (red giants branch) stars in globular clusters where bremsstrahlung off electrons is mainly efficient [39] could be explained by axions with the fine-structure constant of axion to electrons α Aee = (0.29 − 2.30) × 10 −27 [48] and α Aee = (0.41 − 3.70) × 10 −27 [40,49], indicating the clear systematic tendency of stars to cool faster than predicted. It is recently reexamined in Ref. [13] as Eq. (1) where α Aee = g 2 Aee /4π, which is interpreted in terms of the QCD axion decay constant in the present model as B. Flavor-Changing process K + → π + + A i induced by the flavored-axions Below the QCD scale (1 GeV≈ 4πf π ), the chiral symmetry is broken and π and K, and η are produced as pseudo-Goldstone bosons. Since a direct interaction of the SM gauge singlet flavon fields charged under U(1) X with the SM quarks charged under U(1) X can arise through Yukawa interaction, the flavor-changing process K + → π + + A i is induced by the flavored-axions A i . Then, the flavored-axion interactions with the flavor violating coupling to the s-and d-quark is given by where 17 V d † L = V CKM , f a 1 = |X 1 |v F , and f a 2 = |X 2 |v g are used. Then the decay width of K + → π + + A i is given by [43,44] where m K ± = 493.677 ± 0.013 MeV, m π ± = 139.57018(35) MeV [50], and where is used. From the present experimental upper bound in Eq. (1), , we obtain the lower limit on the QCD axion decay constant Hence, from Eqs. (51) and (55) we can obtain a strongest bound on the QCD axion decay constant F A = 3.56 +0.84 −0.84 × 10 10 GeV . Interestingly enough, from Eqs. (47) and (56) In the near future the NA62 experiment will be expected to reach the sensitivity of [47], which is interpreted as the flavored-axion decay constant and its corresponding QCD axion decay constant f a i > 9.86 × 10 11 GeV ⇔ F A > 2.32 × 10 11 GeV . Clearly, the NA62 experiment will probe the flavored-axions or exclude the present model. 17 In the standard parametrization the mixing elements of V d R are given by θ R , and θ R 12 ≃ 2 √ 2| sin φ d |λ 2 . Its effect to the flavor violating coupling to the s-and d-quark is negligible: 12 = 0 at leading order. C. 
QCD axion interactions with nucleons Below the chiral symmetry breaking scale, the axion-hadron interactions are meaningful (rather than the axion-quark interactions) for the axion production rate in the core of a star where the temperature is not as high as 1 GeV, which is given by [17] where a is the QCD axion, its decay constant is given by and ψ N is the nucleon doublet (p, n) T (here p and n correspond to the proton field and neutron field, respectively). Recently, the couplings of the axion to the nucleon are very precisely extracted as [52] where N = 2δ G 1 δ G 2 with δ G 1 = 2X 1 and δ G 2 = −3X 2 , andX q = δ G 2 X 1q + δ G 1 X 2q with q = u, d, s and X 1u = X 1 , X 1d = X 1 , X 1s = 0, X 1c = 0, X 1b = 0, X 1t = 0, X 2u = −4X 2 , X 2d = −X 2 , X 2s = X 2 , X 2c = −2X 2 , X 2b = 3X 2 , X 2t = 0. And the QCD axion coupling to the neutron is written as where the neutron mass m n = 939.6 MeV. The state-of-the-art upper limit on this coupling, g Ann < 8 × 10 −10 [53], from the neutron star cooling is interpreted as the lower bound of the QCD axion decay constant Clearly, the strongest bound on the QCD axion decay constant comes from the flavored-axion cooling of stars via bremsstrahlung off electrons in Eq. (51) as well as the flavor-changing process K + → π + + A i induced by the flavored-axions in Eq. (55). Using the state-of-the-art calculation in Eq. (61) and the QCD axion decay constant in Eq. (56), we can obtain g Ann = 2.14 +0.66 −0.41 × 10 −12 , which is incompatible with the hint for extra cooling from the neutron star in the supernova remnant "Cassiopeia A" by axion neutron bremsstrahlung, g Ann = 3.74 +0.62 −0.74 × 10 −10 [54]. This huge discrepancy may be explained by considering other means in the cooling of the superfluid core in the neutron star, for example, by neutrino emission in pair formation in a multicomponent superfluid state 3 P 2 (m j = 0, ±1, ±2) [55]. D. QCD axion mass and its interactions with photons With the well constrained QCD axion decay constant in Eq. (56) congruent to the seesaw scale we can predict the QCD axion mass and its corresponding axion-photon coupling. As in Refs. [1,17], the axion mass in terms of the pion mass and pion decay constant is obtained as where 18 f π = 92.21 (14) MeV [29] and (3) and ω = 0.315 z . (66) Note that the Weinberg value lies in 0.38 < z < 0.58 [29,56]. After integrating out the heavy π 0 and η at low energies, there is an effective low energy Lagrangian with an axion-photon coupling g aγγ : where E and B are the electromagnetic field components. And the axion-photon coupling can be expressed in terms of the QCD axion mass, pion mass, pion decay constant, z and w: 18 Here F (z, ω) can be replaced in high accuracy as in Ref. [52] by The upper bound on the axion-photon coupling is derived from the recent analysis of the horizontal branch (HB) stars in galactic globular clusters (GCs) [57], which translates into the lower bound of decay constant through Eq. (65), as The bounds of Eqs. (69) and (70) are much lower than that of Eq. (56) coming from the present experimental upper bound Br(K + → π + A i ) < 7.3 × 10 −11 [15] as well as the axion to electron coupling 6.7 × 10 −29 α Aee 5.6 × 10 −27 at 3σ [13]. Hence, from Eqs. (56) and (65) The QCD axion coupling to photon g aγγ divided by the QCD axion mass m a is dependent on E/N. Fig. 
1 shows the E/N dependence of (g_aγγ/m_a)^2, so that the experimental limit is independent of the axion mass m_a [17]: for 0.38 < z < 0.58, the values of (g_aγγ/m_a)^2 for case-II and case-III lie below the ADMX (Axion Dark Matter eXperiment) bound [12], while the value for case-I is marginally lower than the ADMX bound (in fact, this is the case for 0.54 ≲ z < 0.58). In Fig. 1, the uncertainties of (g_aγγ/m_a)^2 for case-II and case-III are larger than that of case-I for 0.38 < z < 0.58. Fig. 2 shows the plot of the axion-photon coupling |g_aγγ| as a function of the axion mass for E/N = 23/6, 1/2, and 5/2, which correspond to case-I, -II, and -III, respectively. As the upper bound on Br(K+ → π+ + A_i) gets tighter, the range of the QCD axion mass becomes narrower and narrower, and consequently the corresponding band width of |g_aγγ| in Fig. 2 shrinks. In Fig. 2 the top edge of the bands comes from the upper bound on Br(K+ → π+ + A_i), while the bottom of the bands comes from the astrophysical constraint of star cooling induced by the flavored-axion bremsstrahlung off electrons e + Ze → Ze + e + A_i. The model will be tested in the very near future through experiments such as CAPP (Center for Axion and Precision Physics research) [61], as well as the NA62 experiment, expected to reach a sensitivity of Br(K+ → π+ + A_i) < 1.0 × 10^−12 [47].
IV. SUMMARY AND CONCLUSION Motivated by the flavored PQ symmetry for unifying flavor physics and string theory [1,16], we have constructed a compact model based on the SL_2(F_3) × U(1)_X symmetry for resolving rather recent, but fast-growing issues in astro-particle physics, including quark and leptonic mixings and CP violations, high-energy neutrinos, the QCD axion, and axion cooling of stars. Since astro-particle physics observations have increasingly placed tight constraints on the parameters of flavored-axions, we have shown how the scale responsible for the PQ mechanism (congruent to that of the seesaw mechanism) can be fixed, and in turn the scale responsible for the FN mechanism through flavor physics. Concretely, the QCD axion decay constant congruent to the seesaw scale, through its connection to the astro-particle constraints of stellar evolution induced by the flavored-axion bremsstrahlung off electrons e + Ze → Ze + e + A_i and the rare flavor-changing decay process induced by the flavored-axion K+ → π+ + A_i, is shown to be fixed at F_A = 3.56 +0.84 −0.84 × 10^10 GeV (consequently, the QCD axion mass m_a = 1.54 +0.48 −0.29 × 10^−4 eV, the Compton wavelength of its oscillation λ_a = 8.04 +1.90 −1.90 mm, the axion to neutron coupling g_Ann = 2.14 +0.66 −0.41 × 10^−12, and the axion to photon coupling |g_aγγ| = 5.99 +1.85 −1.14 × 10^−14 GeV^−1 for E/N = 23/6 (case-I), 4.89 +1.51 −0.93 × 10^−14 GeV^−1 for E/N = 1/2 (case-II), and 1.64 +0.51 −0.31 × 10^−14 GeV^−1 for E/N = 5/2 (case-III), respectively, in the case z = 0.48). Subsequently, the scale associated to the FN mechanism is automatically fixed through its connection to the SM fermion masses and mixings, Λ = 2.04 +0.48 −0.48 × 10^11 GeV, and such a fundamental scale might give a hint as to where some string moduli are stabilized in type-IIB string vacua.
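As a rough numerical cross-check of the benchmark values quoted above, the short script below recomputes the axion mass, its Compton wavelength, and the axion-photon couplings from F_A = 3.56 × 10^10 GeV using generic leading-order chiral relations, m_a ≈ (f_π m_π/F_A)√z/(1+z) and g_aγγ ≈ (α/2πF_A)[E/N − (2/3)(4+z)/(1+z)]. These textbook approximations are not necessarily the exact expressions used in the paper, so the outputs reproduce the quoted numbers only at the 5-10% level.

```python
# Rough cross-check of the quoted QCD-axion benchmarks from F_A (natural units).
import math

F_A   = 3.56e10        # GeV, QCD axion decay constant fixed by the astro-particle constraints
f_pi  = 92.21e-3       # GeV, pion decay constant
m_pi  = 134.98e-3      # GeV, neutral pion mass
z     = 0.48           # m_u/m_d, central value of the range 0.38 < z < 0.58
alpha = 1.0 / 137.036  # electromagnetic fine-structure constant

# Axion mass and Compton wavelength.
m_a_eV = (f_pi * m_pi / F_A) * math.sqrt(z) / (1.0 + z) * 1e9
lam_mm = 1.23984e-6 / m_a_eV * 1e3          # lambda = h*c / (m_a c^2), with h*c = 1.23984e-6 eV*m
print(f"m_a ~ {m_a_eV:.2e} eV")             # ~1.6e-4 eV (quoted: 1.54e-4 eV)
print(f"lambda_a ~ {lam_mm:.1f} mm")        # ~7.6 mm   (quoted: 8.04 mm)

# Axion-photon coupling for the three E/N cases of the model.
for label, e_over_n in [("case-I", 23 / 6), ("case-II", 1 / 2), ("case-III", 5 / 2)]:
    g = (alpha / (2 * math.pi * F_A)) * abs(e_over_n - (2 / 3) * (4 + z) / (1 + z))
    print(f"{label}: |g_agamma| ~ {g:.2e} GeV^-1")   # ~5.9e-14, 5.0e-14, 1.6e-14
```

The rough agreement with the quoted central values supports the internal consistency of the fixed F_A, even though the model-specific two-axion mixing is not included in this sketch.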
We may conclude that in an extended SM framework by a compact symmetry G F = SL 2 (F 3 ) × U(1) X , if the scale responsible for the FN mechanism (whose scale is associated to some string moduli stabilization) is fixed, the scales responsible for seesaw and PQ mechanisms are dynamically determined in way that the SM fermion (including neutrino) masses and mixings are well delineated, which in turn provides predictions on several properties of the flavored-axions. In the very near future, the NA62 experiment expected to reach the sensitivity of Br(K + → π + + A i ) < 1.0 × 10 −12 will probe the flavored-axions or exclude the model. The SL 2 (F 3 ) is the double covering of the tetrahedral group A 4 [18,20,21]. It contains 24 elements and has three kinds of representations: one triplet 3 and three singlets 1, 1 ′ and 1 ′′ , and three doublets 2, 2 ′ and 2 ′′ . The representations 1 ′ , 1 ′′ and 2 ′ , 2 ′′ are complex conjugated to each other. Note that A 4 is not a subgroup of SL 2 (F 3 ), since the two-dimensional representations cannot be decomposed into representations of A 4 . The generators S and T satisfy the required conditions S 2 = R, T 3 = 1, (ST ) 3 = 1, and R 2 = 1, where R = 1 in case of the odd-dimensional representation and R = −1 for 2, 2 ′ and 2 ′′ such that R commutes with all elements of the group. The matrices S and T representing the generators depend on the representations of the group [21]: where we have used the matrices The following multiplication rules between the various representations are calculated in Ref. [21], where α i indicate the elements of the first representation of the product and β i indicate those of the second representation. Moreover a, b = 0, ±1 and we denote 1 0 ≡ 1, 1 1 ≡ 1 ′ , 1 −1 ≡ 1 ′′ and similarly for the doublet representations. On the right-hand side the sum a + b is modulo 3. The multiplication rule with the 3-dimensional representations is
The effective fraction isolated from Radix Astragali alleviates glucose intolerance, insulin resistance and hypertriglyceridemia in db/db diabetic mice through its anti-inflammatory activity Background Macrophage infiltration in adipose tissue together with the aberrant production of pro-inflammatory cytokines has been identified as the key link between obesity and its related metabolic disorders. This study aims to isolate bioactive ingredients from the traditional Chinese herb Radix Astragali (Huangqi) that alleviate obesity-induced metabolic damage through inhibiting inflammation. Methods Active fraction (Rx) that inhibits pro-inflammatory cytokine production was identified from Radix Astragali by repeated bioactivity-guided high-throughput screening. Major constituents in Rx were identified by column chromatography followed by high-performance liquid chromatography (HPLC) and mass-spectrometry. Anti-diabetic activity of Rx was evaluated in db/db mice. Results Treatment with Rx, which included calycosin-7-β-D-glucoside (0.9%), ononin (1.2%), calycosin (4.53%) and formononetin (1.1%), significantly reduced the secretion of pro-inflammatory cytokines (TNF-α, IL-6 and MCP-1) in human THP-1 macrophages and lipopolysaccharide (LPS)-induced activation of NF-κB in mouse RAW-Blue macrophages in a dose-dependent manner. Chronic administration of Rx in db/db obese mice markedly decreased the levels of both fed and fasting glucose, reduced serum triglyceride, and also alleviated insulin resistance and glucose intolerance when compared to vehicle-treated controls. The mRNA expression levels of inflammatory cell markers CD68 and F4/80, and cytokines MCP-1, TNF-α and IL-6 were significantly reduced in epididymal adipose tissue while the alternatively activated macrophage marker arginase I was markedly increased in the Rx-treated mice. Conclusion These findings suggest that suppression of the inflammation pathways in macrophages represents a valid strategy for high-throughput screening of lead compounds with anti-diabetic and insulin sensitizing properties, and further support the etiological role of inflammation in the pathogenesis of obesity-related metabolic disorders. Background Obesity is a major risk factor for a cluster of cardiometabolic disorders including insulin resistance, fatty liver, dyslipidemia, type 2 diabetes and cardiovascular diseases [1,2]. Mounting evidence suggests that lowgrade systemic chronic inflammation plays an important role in linking obesity to its associated pathologies. Adipose tissue, originally regarded as an inert energy storage compartment, is now found to be an endocrine organ that secretes a large number of adipokines to regulate energy balance, food intake, lipid and glucose metabolism, insulin sensitivity and vascular tone [3]. It also plays a pivotal role in the development of systemic inflammation in obese subjects [4]. In obese humans and rodents, increased infiltration of activated macrophages or mast cells into adipose tissues is clearly evident [5]. Enlarged adipocytes, together with the infiltrated macrophages, act in a synergistic manner to cause aberrant production of pro-inflammatory molecules including inducible nitric oxide, cytokines such as tumor necrosis factor-alpha (TNF-α), interleukine-6 (IL-6) and the chemokine monocyte chemoattractant protein-1 (MCP-1). Both TNF-α and IL-6 can impede insulin sensitivity by triggering different key steps in the insulin signaling pathway [6][7][8]. 
Meanwhile, chronic inflammation affects fat storage in adipose tissue, resulting in excess free fatty acids and triglycerides in the bloodstream and the induction of insulin resistance in muscle and liver, in part via ectopic fat deposition in these tissues [1,9,10]. Weight loss has been shown to reduce macrophage accumulation and to ameliorate the up-regulated inflammatory status in humans [11,12]. Radix Astragali, known as Huangqi in Chinese, is a flowering plant in the family Fabaceae. It is one of the 50 fundamental herbs used in traditional Chinese medicine and is traditionally used for the treatment of diabetes, for wound healing [13] and for strengthening the immune system [14,15]. Recently, a herbal formulation containing Radix Astragali was shown to exert anti-hyperglycemic and anti-oxidant effects in the db/db diabetic mouse model [16]. In addition, our laboratory has demonstrated the anti-diabetic effect of two natural compounds from Radix Astragali, astragaloside II and isoastragaloside I, which can enhance the expression of adiponectin, an insulin-sensitizing adipokine [17]. However, the detailed mechanism underlying the anti-diabetic effects of Radix Astragali remains poorly understood. Since the dysregulated production of cytokines and the activation of inflammatory signaling pathways are closely associated with obesity-related metabolic diseases, we postulated that the beneficial metabolic effect of Radix Astragali may be mediated through anti-inflammatory actions. In this study, Radix Astragali was fractionated and its constituents were repeatedly screened for their activity in inhibiting pro-inflammatory cytokine production and lipopolysaccharide (LPS)-induced activation of the NF-κB signaling pathway in macrophages. The therapeutic potential of the selected active fraction Rx on obesity-related metabolic disorders was validated in db/db diabetic mice.

Preparation of Bioactive Extracts from the Plant Materials

Radix Astragali, the dried root of Astragalus membranaceus Bge. var. mongholicus (Bge.) Hsiao, was collected from the Rui-Bao Good Agriculture Practice (GAP) base at Baotou City, Inner Mongolia, China, in April 2008. A voucher specimen has been deposited in the Chinese Medicine Laboratory of the Hong Kong Jockey Club Institute of Chinese Medicine. Radix Astragali was powdered and extracted with ethanol (20%, 50%, 80% and 95%) and water. The extracts were evaporated under vacuum and used for high-throughput screening. The effective fractions that inhibited pro-inflammatory cytokine production in macrophages were further extracted with various organic solvents (butanol and ethyl acetate), and their anti-inflammatory activities were further determined. To identify the major constituents in the selected active fraction Rx, HPLC was performed on an Agilent 1100 system comprising a quaternary pump, an online degasser, an auto-sampler, a column heater and a variable-wavelength detector. Separation was achieved on a 4.6 × 250 mm, 5 μm particle, Alltima C18 reversed-phase analytical column. The mobile phase was acetonitrile and 0.1% aqueous formic acid; the amount of acetonitrile was changed linearly from 10 to 60% in 40 min. The flow rate was 1.0 ml/min and the detection wavelength was 254 nm. The proposed method for quantitative analysis of the major constituents in the bioactive extracts was validated in terms of linearity, limits of detection and quantification, reproducibility and recovery. The identified active fraction was used for the treatment of db/db diabetic mice.
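As a minimal sketch of the gradient program just described (a linear ramp of acetonitrile from 10% to 60% over 40 min; the values are taken from the text, while the helper function is ours), the mobile-phase composition at any time point can be computed as follows:

# Mobile-phase composition for the linear HPLC gradient described above;
# the balance of the mobile phase is 0.1% aqueous formic acid.
def acetonitrile_percent(t_min, start=10.0, end=60.0, duration_min=40.0):
    """Percent acetonitrile at time t_min, clamped to the 0-40 min gradient window."""
    t = max(0.0, min(t_min, duration_min))
    return start + (end - start) * t / duration_min

for t in (0, 10, 20, 30, 40):
    print(f"t = {t:2d} min: {acetonitrile_percent(t):.1f}% acetonitrile")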
Cell Culture and Macrophage Differentiation

Human THP-1 macrophage cells were maintained as sub-confluent cultures in RPMI-1640 supplemented with 10% fetal bovine serum and were induced to differentiate by incubation with 100 nM phorbol 12-myristate 13-acetate (PMA) for 3 days. After differentiation, the medium was replaced with RPMI-1640 containing 10% fetal bovine serum for 1 day before drug treatment. RAW-Blue cells (InvivoGen) are murine macrophages derived from RAW 264.7 cells. RAW-Blue cells were maintained as sub-confluent cultures in DMEM supplemented with 10% fetal bovine serum (Invitrogen). Cells were seeded in 24-well plates 1 day before drug treatment.

Quantification of TNF-α, MCP-1 and IL-6 Production in Rx-treated Human THP-1 Macrophage Culture Medium

Conditioned THP-1 medium was collected after Rx (5 μg/ml or 10 μg/ml) or Rosiglitazone (10 μg/ml) treatment for 48 hours. The concentrations of TNF-α, MCP-1 and IL-6 were determined using in-house sandwich ELISAs. The capture and detection antibodies for human TNF-α, MCP-1 and IL-6 were purchased from R&D Systems (Minneapolis, MN).

Detection and Quantification of LPS-induced Secreted Alkaline Phosphatase (SEAP) Activity in the Supernatants of RAW-Blue Cells

RAW-Blue cells were treated with LPS (100 ng/ml) alone or together with Rx (10 μg/ml or 20 μg/ml) for 24 or 48 hours. SEAP activity in the conditioned medium was determined using QUANTI-Blue medium (InvivoGen) following the manufacturer's manual. Briefly, 20 μl samples were added to 200 μl of QUANTI-Blue assay buffer (InvivoGen) and incubated at 37 °C for 15 to 30 minutes. Absorbance was measured at 620 nm, and the fold change in SEAP activity in each sample relative to the LPS-induced sample was calculated.

Animal Studies

C57BL/KsJ db/db diabetic mice were propagated in the laboratory animal unit at the University of Hong Kong. The mice were housed in a room under controlled temperature (23 ± 1 °C), with free access to water and standard mouse chow. Mice at the age of 10 weeks were treated with either Rx (2 g/kg/day) or PBS with 4% Tween 80, as control, by daily oral gavage for 12 weeks. Physical parameters such as body weight and food intake were measured weekly. Glucose tolerance tests and insulin tolerance tests were performed as previously described [18]. All of the experiments were conducted according to institutional guidelines for the humane treatment of laboratory animals.

Analysis of Serum Glucose, Free Fatty Acid, Triglyceride and Insulin Levels

Fed or fasting blood was collected from the tail-tip of the mice. Trunk blood was collected by cardiac puncture under anesthesia before the mice were sacrificed. Serum glucose levels were measured using the ACCU-CHEK Advantage II glucometer (Roche, USA). The levels of serum triglyceride and free fatty acid were determined using the commercial assay kits Stanbio Liquicolor Triglycerides (Stanbio, USA) and the half-micro test (Roche, USA), respectively. The level of insulin was measured with an ultrasensitive mouse insulin ELISA kit (Mercodia, Sweden).

Quantification of Inflammatory Marker Expression by Real-time Polymerase Chain Reaction (PCR)

Total RNA was extracted from mouse epididymal fat pads using Trizol reagent and transcribed into cDNA with a Superscript first-strand cDNA synthesis system (Promega, Madison, WI, USA). The relative gene abundance was quantified by real-time PCR using Assays-on-Demand TaqMan primers and probes from Applied Biosystems (Foster City, CA).
The reactions were performed in an ABI 7000 sequence detection system.

Statistical Analysis

Statistical analyses were performed using GraphPad Prism 3 software (San Diego, CA). Data are expressed as means ± S.E.M. Statistical significance was determined by one-way ANOVA with Dunnett's post hoc test. In all statistical comparisons, a P value < 0.05 was considered statistically significant.

High-Throughput Screening of the Active Fractions from Radix Astragali and HPLC Analysis of Four Main Constituents in the Selected Fraction Rx

We selected about 20 ethanol and water extracts from three traditional Chinese herbs, Radix Astragali (Huang Qi), Rhizoma Coptidis (Huang Lian) and Lonicera japonica Thunb. (Jin Yin Hua), to screen for compounds with anti-inflammatory properties. High-throughput screening, in which the various fractions were incubated with human THP-1 macrophages for 24-48 hours followed by ELISA assays, allowed for the identification of components that reduced basal cytokine secretion from macrophages. After repeated screening, we identified an active fraction (Rx) from Radix Astragali that reproducibly inhibited pro-inflammatory cytokine secretion from macrophages. Its anti-inflammatory activity, as evidenced by inhibition of the NF-κB signaling pathway, was further confirmed by SEAP assay using the mouse RAW-Blue cell system. A schematic diagram of the purification and identification of the active fraction Rx from Radix Astragali is shown in figure 1.

Effect of the Fraction Rx on Secretion of Pro-inflammatory Cytokines and LPS-induced NF-κB Activation in Macrophages

Two macrophage cell systems, human THP-1 macrophages and mouse RAW-Blue macrophages, were employed in the high-throughput screening platform to determine the effects of the active fractions. Treatment with the selected active fraction Rx caused a drastic, dose-dependent reduction in the secretion of pro-inflammatory cytokines, including MCP-1 and IL-6, into the conditioned medium of THP-1 macrophages (figure 3a, b). The inhibitory effect of Rx (5 μg/ml and 10 μg/ml) on TNF-α secretion was comparable to that of Rosiglitazone (10 μg/ml) (figure 3c). On the other hand, Rx had a stronger inhibitory effect than Rosiglitazone on the secretion of MCP-1 and IL-6 (figure 3a, b). The anti-inflammatory effect of Rx was further confirmed by secondary screening using mouse RAW-Blue cells. RAW-Blue cells are derived from RAW 264.7 macrophages and stably express a secreted embryonic alkaline phosphatase (SEAP) gene that can be induced by NF-κB or AP-1 transcription factors. Treatment of RAW-Blue cells with Rx inhibited the LPS-induced NF-κB activity in a time- and dose-dependent manner, as indicated by the reduced SEAP activity (figure 4). The effect of Rx was also comparable to that of Rosiglitazone (10 μg/ml). Rx was then manufactured in a large quantity for the in vivo study.

Effect of the Fraction Rx on Metabolic Parameters of db/db Diabetic Mice

Having established the action of Rx in inhibiting cytokine secretion from macrophages, we next validated the therapeutic potential of the active fraction Rx in db/db mice, a well-established genetic obesity model with typical symptoms of type 2 diabetes. Male db/db mice were administered the Rx fraction (2 g/kg/day dissolved in 4% Tween 80 in PBS) or vehicle by daily oral gavage for a period of 12 weeks. The effects of Rx on insulin sensitivity and glucose metabolism were investigated.
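Before turning to the in vivo results, a minimal sketch of the quantification and statistics workflow described in the Methods (fold change of SEAP activity relative to the LPS-only control, followed by one-way ANOVA with Dunnett's post hoc comparison against the control) is given below; the absorbance values are hypothetical, and the Dunnett test requires SciPy 1.11 or later.

# Sketch of the SEAP fold-change calculation and statistics described in Methods.
# Absorbance readings at 620 nm are hypothetical, for illustration only.
import numpy as np
from scipy import stats

lps_only = np.array([1.02, 0.98, 1.05, 1.01, 0.97, 1.00])  # LPS 100 ng/ml
lps_rx10 = np.array([0.71, 0.76, 0.69, 0.73, 0.70, 0.74])  # LPS + Rx 10 ug/ml
lps_rx20 = np.array([0.52, 0.55, 0.49, 0.53, 0.51, 0.54])  # LPS + Rx 20 ug/ml

# Fold change in SEAP activity relative to the mean of the LPS-induced control.
fold = {name: grp / lps_only.mean()
        for name, grp in (("Rx10", lps_rx10), ("Rx20", lps_rx20))}

# One-way ANOVA across the three groups, then Dunnett's test vs. the LPS-only control.
f_stat, p_anova = stats.f_oneway(lps_only, lps_rx10, lps_rx20)
dunnett = stats.dunnett(lps_rx10, lps_rx20, control=lps_only)

print(f"ANOVA: F = {f_stat:.1f}, P = {p_anova:.2g}")
for name, p in zip(("Rx10", "Rx20"), dunnett.pvalue):
    print(f"{name} vs LPS control: mean fold change = {fold[name].mean():.2f}, P = {p:.2g}")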
Rx treatment markedly improved glycemic control in db/db mice. Within 3 weeks of treatment, the fed glucose levels were significantly reduced, and this effect persisted throughout the treatment period (until week 7). In order to determine whether the therapeutic effect of Rx would persist after drug withdrawal, we stopped the administration of Rx for 2 weeks (weeks 8-10) (figure 5a). The fed blood glucose level remained lowered for one week (week 8) after drug withdrawal, then gradually increased from week 8 to week 10, but was reduced again when Rx was re-administered to the mice (weeks 11-12). Similarly, the fasting glucose levels and serum insulin concentrations were significantly reduced in Rx-treated db/db mice (figure 5b, c). Rx treatment also dramatically improved the hypertriglyceridemia in db/db mice within 2 weeks of treatment, with the effect lasting throughout the treatment period and persisting through the two weeks of drug withdrawal (figure 5d). A trend toward decreased serum free fatty acids (FFA) was also observed (figure 5e). These data suggested that Rx treatment could improve glycemic and lipid control in db/db mice. Glucose tolerance and insulin tolerance tests were performed to examine systemic glucose metabolism and insulin sensitivity in db/db mice. Rx-treated db/db mice displayed more efficient clearance of systemic glucose and significantly greater insulin sensitivity than the vehicle-treated db/db mice (figure 6a-d).

Figure legend: Different dosages of Rx (10 μg/ml or 20 μg/ml) or Rosiglitazone (10 μg/ml) together with 100 ng/ml LPS were incubated with RAW-Blue cells for 24 hours or 48 hours. DMSO was used as the vehicle control. The SEAP activities in the conditioned medium of mouse RAW-Blue cells were measured using the QUANTI-Blue assay as described in Methods. Each bar represents the relative mean fold change ± SEM (n = 6). Data were statistically analyzed using one-way ANOVA with Dunnett's post hoc test. **P < 0.01.

Effect of Rx Treatment on the Adipose Tissue of the db/db Diabetic Mice

Quantitative real-time PCR analysis showed that chronic treatment with Rx resulted in a significant reduction in the mRNA levels of two macrophage markers, CD68 and F4/80, in the epididymal adipose tissue of db/db mice when compared with the vehicle-treated mice (figure 7a, b). The mRNA levels of the pro-inflammatory cytokines TNF-α and MCP-1 were also markedly reduced in the adipose tissue of the Rx-treated mice (figure 7c, d). In contrast, the mRNA level of arginase I, a marker of anti-inflammatory or "alternatively activated" macrophages, was significantly increased in the Rx-treated mice (figure 7e). These findings suggest that Rx treatment reduced the inflammatory status of the adipose tissue.

Physical Parameters in Experimental Animals

Treatment of db/db mice with Rx did not affect daily food intake (figure 8a). However, body weight was increased in the Rx-treated group (figure 8b), which may be due to the increased liver weight (figure 8c) and adipocyte size (figure 9). Biochemical analysis of the hepatic lipid profile showed no significant changes in hepatic triglyceride, cholesterol or free fatty acid levels in the Rx-treated mice (figure 10) compared with vehicle-treated mice.

Discussion

The prevalence of diabetes has become a major public health concern worldwide and is known to be closely associated with obesity.
Accumulating evidence suggests that chronic inflammation in white adipose tissue (WAT), which is characterized by infiltrated macrophages and aberrant production of pro-inflammatory cytokines, plays an essential role in linking obesity with diabetes and its complications [18,19]. Insulin resistance precedes impaired glucose tolerance and the onset of type 2 diabetes. In obese subjects, infiltrated macrophages together with enlarged adipocytes secrete numerous pro-inflammatory mediators such as IL-6 and TNF-α, which contribute to insulin resistance [6,20]. The cytokine-stimulated adipocytes secrete chemokines such as MCP-1, which further enhance the recruitment of macrophages into the adipose tissue, thereby amplifying the inflammatory response [21-23]. Although there are existing synthetic anti-diabetic drugs that possess anti-inflammatory properties, such as the thiazolidinediones (TZDs), an increasing number of studies have documented their undesirable effects, including an increased risk of heart failure and bone fractures [24]. Therefore, it is important to identify new anti-diabetic drugs with anti-inflammatory actions that are associated with a better safety profile. With the increased popularity of herbal medicine, many Chinese herbs are now frequently used as natural and presumably safer sources for drug discovery. Metformin, a widely used anti-diabetic drug, was identified from the herb French lilac (Galega officinalis) [25,26]. Berberine, identified from the traditional herbs Hydrastis canadensis or Coptis chinensis, which have been used for treating diabetes in China for more than 1400 years, has recently been shown to have glucose-lowering, insulin-sensitizing and incretin actions [27-29]. In the present study, we developed a high-throughput screening platform for the selection of active fractions from traditional Chinese herbs that possess anti-inflammatory properties, based on the ability of a fraction to inhibit cytokine production in macrophages. An effective fraction, Rx, from Radix Astragali was identified after repeated screening, and its anti-diabetic effect was confirmed in the db/db mouse model. Radix Astragali is an edible herb that has been used as one of the primary tonic herbs in China for many centuries [30]. Recently, various studies on the effects of Radix Astragali on obesity-related metabolic disorders have been carried out [31,32]. Extracts of Radix Astragali have been shown to be anti-inflammatory [33], hepatoprotective [34], cardioprotective [35], neuroprotective [36,37] and anti-diabetic [38]. The use of Radix Astragali to treat diabetic symptoms (Xiao Ke, or wasting-thirst syndrome, in Chinese medicine) was well documented in the Compendium of Materia Medica (Ben Cao Gang Mu), compiled during the Ming Dynasty (1368-1644 AD). Notably, Radix Astragali is now used as a major ingredient in six of the seven anti-diabetic herb formulae approved by the State Drug Administration in China [38]. Radix Astragali contains various isoflavones and isoflavonoids, including formononetin, calycosin and ononin, and many saponins, including astragaloside I, astragaloside II, astragaloside VI and acetylastragaloside I [39]. Our laboratory has demonstrated that astragaloside II and isoastragaloside I from Radix Astragali alleviate insulin resistance, glucose intolerance and hyperglycemia by increasing the secretion of the insulin-sensitizing hormone adiponectin from adipocytes [17].
The effective fraction Rx is a mixture of four isoflavonoids, including calycosin-7-β-D-glucoside (0.9%), ononin (1.2%), calycosin (4.53%) and formononetin (1.1%). Formononetin and calycosin have been shown to be activators of the peroxisome proliferator-activated receptors PPAR-α and PPAR-γ [40], the major therapeutic targets of the fibrate group of lipid-lowering drugs and of the TZDs, respectively. It should be noted that PPAR-γ is highly expressed in adipose tissue, where it regulates insulin sensitivity, adipocyte differentiation and lipid storage, and is responsible for regulating the alternative activation of macrophages [41]. Formononetin and calycosin have also been reported to potentiate the anti-hyperglycemic action of fangchinoline [42] and to inhibit the LPS-induced production of TNF-α, nitric oxide and superoxide in mesencephalic neuron-glia cultures [43]. On the other hand, calycosin has a protective effect in endothelial cells against hypoxia-induced barrier impairment [44], while calycosin-7-O-β-D-glucoside and ononin have been reported to be free-radical-scavenging antioxidants [45]. All these studies suggest that the four major components of Rx have potential anti-inflammatory, glucose-lowering and anti-oxidant effects, which may contribute to the improved insulin resistance and glycemic and lipid control in the Rx-treated db/db mice. However, there is as yet no direct evidence that these components are the bioactive ingredients; further experiments are needed to verify this. The Food and Drug Administration (FDA) is now recommending a mixture of multiple active ingredients rather than a single active compound for herbal alternative medicine. This is because the constituents can work together to enhance effectiveness and potentiate a synergistic effect, so that dosage and toxicity can be reduced [46]. Our present study demonstrated that Rx significantly reduced plasma glucose and triglyceride levels, with the effect persisting throughout the experimental period (figure 5a, b, d). Rx treatment also reduced the expression of macrophage markers and pro-inflammatory cytokines in the adipose tissue, indicating an ameliorating effect on chronic inflammation in the db/db diabetic mice.

Figure 8. Effects of Rx on the basic metabolic parameters of db/db diabetic mice. Panels show (a) daily food intake, (b) body weight and (c) liver weight of the treated db/db diabetic mice. Data were statistically analyzed using one-way ANOVA with Dunnett's test and are presented as mean ± S.E.M. (n = 5-6). *P < 0.05; **P < 0.01.

The increased arginase I expression implied an increased alternative activation of macrophages in the adipose tissue, which could lead to the secretion of the anti-inflammatory cytokines IL-10 and IL-8 [47], indicating that Rx treatment may lead to polarization of resident macrophages towards the alternatively activated state. Although increased body weight was observed in db/db mice after Rx treatment, this is also a common effect of the well-established PPAR-γ agonists, the TZDs, despite their anti-diabetic effect. In summary, the present study raises the possibility that measurement of cytokine production in macrophages is a viable activity-guided high-throughput screening method for identifying lead compounds with anti-diabetic potential.
Using this platform, an effective fraction (Rx) was identified from the traditional Chinese herbal medicine Radix Astragali and shown to exert anti-diabetic and lipid-lowering effects, at least in part via the suppression of inflammation in adipose tissue.

HX designed the study and helped with the data analysis of the active fraction purification. KL conceived the study. All authors read and approved the final manuscript.
Waveguide coupled III-V photodiodes monolithically integrated on Si

The seamless integration of III-V nanostructures on silicon is a long-standing goal and an important step towards integrated optical links. In the present work, we demonstrate scaled and waveguide coupled III-V photodiodes monolithically integrated on Si, implemented as InP/In0.5Ga0.5As/InP p-i-n heterostructures. The waveguide coupled devices show a dark current down to 0.048 A/cm² at −1 V and a responsivity up to 0.2 A/W at −2 V. Using grating couplers centered around 1320 nm, we demonstrate high-speed detection with a cutoff frequency f_3dB exceeding 70 GHz and data reception at 50 GBd with OOK and 4PAM. When operated in forward bias as a light-emitting diode, the devices emit light centered at 1550 nm. Furthermore, we also investigate the self-heating of the devices using scanning thermal microscopy and find a temperature increase of only ~15 K during device operation as an emitter, in accordance with thermal simulation results.

As the amount of data generated by modern communication applications such as cloud computing, analytics, and storage systems increases rapidly, silicon electronic integrated circuits (ICs) are suffering from a bottleneck at the interconnection level resulting from the resistive interconnect 1,2. Electrons are ideal for computation because they allow for ultimately scaled logic gates that can be integrated in a massively parallel fashion using modern CMOS technologies. Photons, on the other hand, are ideal for transmission because this can be done almost loss-less on chip-size length scales. To get the best of both worlds, it has therefore been a long-standing goal to combine electronics and photonics on a silicon chip, and the distances over which optical on-chip signal transmission may become favorable are also slowly decreasing 3, bringing on-chip optical communication schemes closer. At the length scales of optical chips, cross-talk may be avoided and the coherence of optical signals may be exploited to allow for transmitting signals of different wavelengths without interference, as in wavelength-division multiplexing (WDM), to increase bandwidth. State-of-the-art high-speed germanium (Ge) photodetectors with a high bandwidth of 100 GHz have been demonstrated 4. However, the relatively high dark currents of Ge detectors may lead to a low signal-to-dark-current ratio 5, and more importantly, the indirect band gap of Ge prevents efficient light emission. Thus, for an on-chip integrated photonic link there is a need to integrate alternative materials such as III-Vs to provide the active gain needed for emission. Beyond classical on-chip optical interconnects, there is also a rising interest in highly scaled integrated photonic components, notably single-photon detectors and emitters for applications in quantum computing 6,7. High-performance on-chip detectors and lasers have been demonstrated based on bonding of a III-V laser stack including quantum wells on top of a silicon wafer with pre-fabricated waveguides and passives 8-10. The beauty of wafer bonding is that the full III-V stack is grown lattice-matched on an InP substrate and then transferred onto the silicon photonics wafer either as individual chiplets or as a full wafer 11,12; this allows for perfect material quality.
However, the active III-V material is generally integrated on top of the photonic integrated circuit (PIC) and therefore requires coupling the light evanescently back and forth between the III-V active material on top and the silicon underneath. Whereas photonic devices will inevitably be larger than electronic ones, today's state-of-the-art high-performance integrated photonic components tend to be orders of magnitude larger, measuring hundreds of micrometers. To reduce the RC time constant, which is an important factor for power consumption, devices need to be scaled down. In the past few years, research efforts have therefore been focused on the development of scaled hybrid III-V/Si nanophotonic devices, including nanowire photodetectors 13,14, nanowire light sources 15-17, and photoconductors acting as both detectors and emitters coupled by a polymer waveguide 18. Although high-crystal-quality III-V nanowires can be achieved and used for devices, the vertical geometry requires, from a device fabrication perspective, additional engineering and pick-and-place methods for on-chip integration and waveguide coupled solutions 19,20. These limitations may be overcome with template-assisted selective epitaxy (TASE) 21-23. When growing III-V directly on Si, defects will arise at the hetero-interface due to the lattice mismatch. Traditionally, they can be gradually filtered out by buffer layers and defect-stopping layers, as is common in direct InP-based epitaxy on Si 24, or they can be mediated by growth from trenches exposing (111) facets 25. Using these methods excellent devices have been demonstrated, but integration with waveguides remains difficult. In nanowire growth one relies on a small interface for nucleation between the III-V and Si 26,27, where defects remain confined near the interface and no propagating dislocations are formed, whereas stacking faults and twins are quite common in nanowire growth. TASE growth is similar to nanowire growth in that we limit the nucleation site to avoid dislocations, whereas the geometry is determined by the template design rather than the growth conditions. The defects in our TASE-grown structures can be localized to the small interface between the Si seed and the III-V, resulting in high-quality III-V elsewhere in the template. This is also confirmed by extensive transmission electron microscopy (TEM) investigations, where we generally do not observe dislocations in the studied p-i-n structures. The grown materials are mono-crystalline with an epitaxial relationship to the Si seed. Stacking faults are common, but these should have a minor impact on the electrical and optical properties. The high quality of the material was demonstrated originally for electronics applications, where we could measure mobilities comparable with those of other III-V films 28,29, and more recently for monolithic optically pumped InP emitters with performance comparable to that of identical devices fabricated by direct wafer bonding 30. IBM colleagues previously demonstrated the successful transfer of the TASE concept to an advanced 200 mm process line within IBM, where it was used to demonstrate nanosheet InGaAs FinFETs on Si with a 10-nm channel thickness and state-of-the-art performance 31. The successful integration of logic devices based on the same technology at large wafer scale is promising for also achieving large-scale integration of photonic integrated circuits in the future.
In the present work, we demonstrate waveguide coupled devices with grating couplers centered at 1320 nm. We study two different device architectures, based on either a straight or a T-shape geometry. In addition, we implemented a double heterostructure (n-InP/i-InGaAs/p-InP/p-InGaAs) to improve carrier confinement. The improved electrostatics enables the investigation of the emission properties when the device is operated as a light-emitting diode (LED). The thermal characteristics of the devices are important factors for performance and reliability evaluation. We investigate the in-situ temperature profile by scanning thermal microscopy (SThM) and establish that the associated temperature increase is within the acceptable range for device operation 32. We demonstrate, to the best of our knowledge, the first monolithic heterostructure photodetector directly coupled in-plane to a Si waveguide, demonstrating high-speed performance with a 3 dB frequency of 70 GHz. Using grating couplers centered around 1320 nm, we evaluate the detector performance under various signal encoding schemes and observe data reception at 100 Gbps.

Results

Device fabrication and material characterization. The devices are fabricated on a conventional silicon-on-insulator (SOI) substrate using TASE; the process is illustrated in Fig. 1a. First, we pattern the top silicon layer by a combination of e-beam lithography and dry etching of silicon. The features of the future detectors and all silicon passives, such as waveguides and grating couplers, are etched simultaneously in this step (Fig. 1a(1-2)), which provides for inherent self-alignment of the III-V and silicon features. The silicon features are then embedded in a uniform SiO2 layer, deposited as a combination of atomic layer deposited (ALD) and plasma-enhanced chemical vapor deposited (PECVD) SiO2, which is thinned down and planarized by chemical-mechanical polishing (CMP). An opening in the oxide is made to expose the silicon in areas where the Si will be replaced with III-V material. The Si is then selectively etched back using tetramethylammonium hydroxide (TMAH) to form a hollow oxide template with a Si seed at one extremity (Fig. 1a(3)). TMAH is an anisotropic wet etchant that results in a smooth but tilted Si (111) facet. In the next step (Fig. 1a(4)) the desired III-V profile is grown within the template by metal-organic chemical vapor deposition (MOCVD). In this work we grow an n-InP/i-InGaAs/p-InP/p-InGaAs sandwich structure. The role of the two wider-bandgap InP regions is to improve carrier confinement in the i-InGaAs region. We believe that the presence of the heterostructure significantly improves performance compared to our earlier work with pure InGaAs 33. Note that the InGaAs region is not completely intrinsic but will have a slight n-type doping as a result of parasitic carbon doping in the MOCVD. The p-InGaAs growth is intended to improve contacting, as it can be difficult to obtain a good Ohmic contact on p-type InP. Details on the growth conditions can be found in Supplementary Note 2 and Supplementary Table 1. Following growth, the top oxide is uniformly thinned further by reactive ion etching (RIE) to a thickness of ~10 nm to enable the thermal measurements and facilitate contacting. Top-view SEM and energy dispersive X-ray spectroscopy (EDS) analyses are used to distinguish the regions of the n-InP/i-InGaAs/p-InP/p-InGaAs sandwich structure, and Ni-Au metal contacts are implemented by e-beam lithography and lift-off (Fig. 1a(5)).
A series of devices was fabricated on an SOI wafer with a top silicon thickness of 220 nm and with varying width (W_PD) of the III-V region: W_PD = 200 nm, W_PD = 350 nm, and W_PD = 500 nm. At the opposite end of the waveguide, a focusing grating coupler is implemented to diffract light at around 1320 nm for a targeted incident angle of 10°. As illustrated in Fig. 1a, we focused on two different device architectures. In the first, the III-V material is grown as an extension of the waveguide (Fig. 1b); we refer to this as the "straight" device. In the second, the III-V material is grown from a nucleation seed on a separate Si structure orthogonal to the optical waveguide with the grating coupler; we refer to this as the "T-shape" device (Fig. 1c). Figure 1d shows the cross-section SEM of a T-shape device, with the focused ion beam (FIB) cutting line shown in Fig. 1c: the Si waveguide is separated from the orthogonally grown III-V structure by a small oxide-filled gap (~50 nm). The motivation behind the two device architectures is that each structure provides different benefits and challenges. The straight structure is the simplest from a conceptual point of view, and the propagating mode will be coupled directly from the silicon waveguide to the III-V region. The drawback is that contacts will inevitably need to be in the path of the optical mode, which will most likely result in a higher absorption loss from the metal, and if there are localized defects at the Si/III-V interface these will also be in the path of the propagating light. In the T-shape structure, the III-V is grown orthogonal to the waveguide; therefore, no contacts need to be placed on top of the waveguide and the waveguide may directly end at the i-InGaAs region. Any defects associated with the Si/III-V interface will also no longer be in the optical path. However, the coupling efficiency across the thin gap is unknown, and this might lead to back-reflections in the silicon waveguide. In this proof-of-concept work we demonstrate the feasibility of such a coupling scheme, but we expect significant coupling losses. These could most likely be improved by a more sophisticated waveguide design. Conceptually there are advantages and challenges to both approaches, so investigating this trade-off was one of the objectives of the present work. To investigate the device architecture and material quality, a TEM lamella was prepared from a straight device using FIB, with the FIB cutting line shown in Fig. 1b. Figure 2a shows an overview STEM image of the sample. A tilted Si (111) crystal facet is observed at the III-V/Si interface where the growth initiated. This is due to the anisotropic TMAH etch, which terminates on a (111) plane. EDS was performed along the device, as presented in Fig. 2b. From left to right we can identify the following regions: p-InGaAs (in red), p-InP (in blue), i-InGaAs (in red), and n-InP (in blue). The observed shape and width of the sections stem from the individual crystal facets formed during the growth sequence, which can lead to device-to-device variability. Further studies to fine-tune the epitaxial processes are ongoing. We note that, using a similar geometry on an InP substrate, the growth of sharp and vertical quantum wells has been demonstrated 34,35. Figure 2c presents a line profile acquired along "Line 1" as indicated in Fig. 2a.
Pronounced transition regions with apparently graded compositions are visible and are attributed to the non-orthogonal alignment of the crystal growth facets to the beam direction, with the exception of the starting Si/InP interface. We also observe that the intrinsic InGaAs region appears more In-rich (75% In) compared to the p-region (53% In), which can be explained by the effect of the introduced doping precursor on composition and growth 22,36. The high-resolution (HR) bright-field (BF) STEM image in Fig. 2d shows an overall sharp Si/InP interface, with projection effects appearing as a blurred area. Similarly, the HR BF-STEM image in Fig. 2e shows a sharp interface between the i-InGaAs and the n-InP with some projection effects. The inset shows a representative HR BF-STEM image of the i-InGaAs with high crystalline quality, which enables electrically stimulated light emission from the device. No dislocations are observed in this or in other similar cross-sections, whereas we do observe regions with stacking faults.

Thermal effects during device operation. To reach a thorough understanding of the thermal effects on the nanometer scale, and hence their impact on device performance, we performed thermal simulations using ANSYS Parametric Design Language (APDL) and experimental measurements using SThM 37 on a T-shape device (W_PD = 500 nm). SThM allows the surface temperature to be measured quantitatively with about 10 nm lateral resolution while a bias is applied to the device. Specifics of the SThM setup as well as the other setups used can be found in Supplementary Note 1. Figure 3a shows the simulated temperature increase as a function of applied forward bias (LED operation) along with a measured SThM result. The resulting temperature rise compares well with the SThM data, where an AC-modulated bias of about 3 V amplitude is applied to the device. Figure 3b, c shows the simulated and measured temperature distribution of the device operating at 3 V, from which we observe very good agreement between experimental and simulation data in terms of temperature increase. Figure 3d shows the temperature increase along the black and red dashed lines in Fig. 3c. Except for a local high-temperature region at the contact edge, which might be caused by a locally high resistance at the contact, the overall temperature increase in the III-V is only around 15 K. The temperature increase of the device while operating as a detector in reverse bias depends on the light injected into the III-V region. In addition to creating an electron-hole pair, the absorption of a photon will also lead to the creation of phonons and heating of the device. While light injection is not possible in the SThM setup, we can measure any temperature increase associated with the reverse-bias drift current, but we expect this to be negligible in comparison. Therefore, we performed thermal simulations for the reverse-bias case, and the result shows a temperature increase of ~50 K when assuming a light injection of 3.16 mW (corresponding to the maximum laser power used for detection). Hence, we conclude that we do not expect any catastrophic thermal breakdown in these devices under the measurement conditions used, which is in agreement with our experimental observations. The detailed simulation results for the detector can be found in Supplementary Fig. 4.

Electroluminescence as emitter. In forward bias the device can be used as an LED, where the undoped InGaAs region acts as the active region of the emitter.
However, the grating couplers cannot be used in this mode, as their transmission is optimized for 1320 nm while the emission from the InGaAs region is centered at 1550 nm. The choice of wavelength of the grating couplers was motivated by the availability of existing designs developed for on-chip photodetectors for data communication applications, which were centered around 1320 nm 8. Emission measurements are therefore performed in a free-space-coupled optical setup in reflection mode. More details on the electrical/optical measurements can be found in Supplementary Note 3. The electroluminescence (EL) measurements were performed under continuous wave (CW) operation from 80 K to 300 K. Figure 4a shows the EL spectra of the T-shape device with W_PD = 350 nm (T3). An EL peak centered at 1550 nm is observed at room temperature when a forward bias of 2.5 V is applied to the device. As illustrated in Fig. 4a, a blueshift of the EL peak is observed upon increasing the applied bias. The reason for the different biasing regimes at different temperatures is the temperature-dependent threshold voltage shift, which is illustrated in Fig. 4b. A significant forward current increase with temperature is observed when comparing the same bias voltage, which results in an increase of the EL intensity as the temperature rises. The reverse current stays constant as the temperature increases from 80 K to 150 K and then increases as the temperature increases from 200 K to 300 K. This is likely due to the activation of defect centers acting as current paths. When the temperature increases from 80 K to 150 K, the defects are "frozen" and the reverse current stays constant. As the temperature further increases, the defects start to be activated and we see a reverse current increase from 200 K to 300 K. Additionally, the voltage corresponding to the lowest current shifted from 0 V to −1 V at 250 K. This voltage shift is correlated with trapped carriers within the structure, which induce extra current. For detailed comparison, the EL peak wavelength dependence on injection current at various temperatures from 80 K to 300 K is shown in Fig. 4c. The injection-current-dependent blueshift of the EL peak is likely due to the band-filling effect of the carrier injection in the active region. The injected carriers first fill the states with lower energy and emit light at longer wavelengths. As the injection current or bias voltage increases, the states with lower energy are occupied and the carriers fill the higher-energy states, which results in EL at shorter wavelengths. Increased carrier injection is also predicted to influence the refractive index (plasma dispersion effect), which was observed in our microdisk lasers 38, but as we here consider an LED without a resonant cavity, the impact of a change in refractive index should be minimal. In addition to the EL blueshift, a temperature-dependent redshift of the EL peak was observed when comparing the same injection current, which is due to the bandgap shrinkage of InGaAs as the temperature increases. This result correlates with the thermal measurements, where we observed a 15 K temperature increase at 3 V, indicating that device self-heating was not a limiting factor for LED operation.

Responsivity as photodetector. In order to evaluate the detector performance of the devices, we measured them in a fiber-coupled setup. For the dynamic measurements, light is coupled from a single-mode optical fiber into the silicon waveguide via the grating coupler.
More details on the transmission characteristics of the grating coupler can be found in Supplementary Fig. 6, where the coupling efficiency and waveguide losses are also presented. As the transmission spectrum of the grating coupler is quite narrow, all the dynamic detection measurements are performed at a wavelength of 1320 nm. Our earlier work on smaller-form-factor detectors 33 showed a non-linear spectral dependence; therefore, we evaluated this for these waveguide-coupled structures as well. As expected, we did not observe any unusual trends. More information on the spectral and free-space power dependence can be found in Supplementary Figs. 7 and 8. We investigated waveguide coupled devices with different architectures and dimensions. Figure 5a shows the responsivity of two T-shape devices (T1, T3) and one straight device (S4), excluding the 6 dB coupling loss of the waveguide and the coupler. As we increase the reverse bias, the responsivity of all devices increases with the reverse voltage with two distinct slopes, which can be attributed to different carrier transport mechanisms: carrier trapping at the heterojunction interfaces and carrier drift in the intrinsic region of the junction, as shown in the inset of Fig. 5a. Different slope values and turning points between the two slopes can be observed for devices with various band offsets and depletion layer lengths formed during MOCVD growth. T1, T3, and S4 show responsivities of 0.18, 0.08, and 0.21 A/W at −2 V, respectively. This corresponds to external quantum efficiencies (η_EQE) of 19%, 8%, and 22% (η_EQE = Rhc/(λe), where R is the responsivity, λ is the wavelength, h is the Planck constant, c is the speed of light in vacuum, and e is the elementary charge). We also expect there to be significant additional absorption losses resulting either from the metal contacts placed directly on top of the III-V region (straight shape) or from the coupling from the Si waveguide to the III-V absorption region (T-shape); hence the responsivity values presented herein should be considered a lower boundary.

Fig. 4 Electrically pumped light emission. a Electroluminescence (EL) spectra of a 350 nm wide T-shape device (T3) under continuous wave (CW) operation, measured at 100 K, 200 K, and 300 K. b Current-voltage (I-V) curves measured from 80 K to 300 K. c EL peak wavelength dependence on injection current at various temperatures from 80 K to 300 K. EL spectra and peak energy plots in photon energy can be found in Supplementary Fig. 5.

Electromagnetic simulations were performed to obtain theoretical values of the responsivity (see Supplementary Note 4). The simulation parameters and results for the straight and T-shape devices are described in more detail in Supplementary Table 2 and Supplementary Fig. 9. Depending on the geometry, the amount of light absorbed in the i-region lies between 10% and 20%, which is in good agreement with the measured values. The simulation results show higher metal loss and slightly more absorption in the p-InGaAs for the straight device. The reason for the slightly different scaling behavior is that for the straight devices increasing the width of the detector also means increasing the width of the waveguide. This will impact the mode distribution and the current density. In the T-shape photodetectors, by contrast, the width of the detector is independent of the waveguide dimensions, so increasing the width of the photodetector increases the length of the region over which the light impinging from the waveguide is absorbed.
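As a quick consistency check of the responsivity-to-quantum-efficiency conversion quoted above, a short sketch (using the reported responsivity of 0.2 A/W at the 1320 nm operating wavelength as an illustrative input) is:

# External quantum efficiency from responsivity: EQE = R * h * c / (lambda * e).
# Illustrative check at 1320 nm for R = 0.2 A/W (roughly the best reported value at -2 V).
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light in vacuum, m/s
e = 1.602176634e-19  # elementary charge, C

def eqe(responsivity_a_per_w, wavelength_m):
    """Convert responsivity (A/W) to external quantum efficiency (dimensionless)."""
    return responsivity_a_per_w * h * c / (wavelength_m * e)

print(f"EQE = {eqe(0.2, 1320e-9):.1%}")  # ~19%, consistent with the reported range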
Figure 5b shows the current-voltage (I-V) curves without light (dark green) and under illumination with a 1320 nm laser coupled from the Si waveguide. The dashed green I-V curve was measured at a probe station with high resolution, displaying a dark current of around 0.048 A/cm² at −1 V (normalized to the device cross-section), which is two orders of magnitude lower than that of our previously reported pure InGaAs devices 33 and comparable to high-speed bonded membrane III-V photodetectors 8. We believe this improvement stems from the use of the double heterostructure. Figure 5c, d, e shows the normalized dark current at −1 V, the responsivity at −2 V, and the f_3dB, respectively. Comparing these parameters, a clear inverse trend between the responsivity and the f_3dB is observed: devices showing a higher responsivity tend to register a lower f_3dB when compared within the same device width. We believe this trade-off is mainly due to the longer time required to extract the carriers in a device with higher responsivity. For example, if contacts are (unintentionally) positioned to absorb a significant fraction of the light, the result will be a smaller responsivity, as the light absorbed by the contacts is lost, but it might also make the device faster if the contacts absorb the light at the edges of the i-region. Figure 5f, g shows the schematics of the straight and T-shape devices. In the straight device, the Si waveguide also serves as the Si seed for the MOCVD growth of the p-i-n structure, and in this case the photodetector width is always equal to the waveguide width (W_WG). For the dimensions considered in this structure, the mode should be well confined in the Si waveguide; hence, a change of the detector/waveguide width should not directly influence the measured absolute current. However, it would change the current density, as this is calculated over a larger cross-section area. In this structure, we would possibly expect increased absorption if we instead varied the length of the i-region, but as this is determined by the duration of epitaxial growth, it cannot be varied on a device-to-device level. A significant metal loss is expected in this structure, as the light is absorbed by the Au contact on top of the waveguide. In the T-shape structures, the propagating light impinges on the i-region orthogonally from the Si waveguide; in this case the detector width is independent of the waveguide width. If we make the detector wider, we may expect more light to be absorbed, so that we should see an increase in the absolute current, and possibly also in the current density, depending on the magnitude of such an increase. Another advantage is that the Au contacts, as well as any inherent defects at the Si/III-V interface, are not directly in the path of the light. However, this device is most likely limited by the coupling from the Si waveguide to the III-V region over the SiO2 gap.

High-speed operation. To achieve high data rates and a high signal-to-noise ratio (SNR), not only the cut-off frequency but also the saturation photocurrent is of interest. In addition to the maximum current, higher-order modulation formats, such as the 4-level pulse-amplitude modulation demonstrated here, also place increased requirements on linearity. Following the discussion in Williams and Esman 39, we identify three components that limit linearity: thermal effects, voltage drop in the series resistance, and carrier screening.
High optical and electrical power densities ultimately cause catastrophic thermal failure. We concluded from our thermal characterization and simulation results that a reverse voltage of −2 V together with an optical power level of 7 dBm was usually safe in this regard. The other two effects mentioned above also limit the linearity, but do not cause catastrophic failure. High photocurrents lead to a voltage drop across the series resistance and therefore reduce the reverse voltage applied across the p-i-n junction. An estimate of the maximum photocurrent can be made from this voltage drop. In addition, photocarriers in the intrinsic region form a screening field proportional to the optical power 40, which also limits the optical power. Since this screening field depends on the excess carrier density, the effect is expected to be more pronounced in small detectors. To investigate the impact of these effects, we measured the linearity of a 200 nm wide device (T2) at different bias voltages and small-signal frequencies. Figure 6a, b shows the resulting linearity curves. The measurements were fitted with a saturation expression in which α contains the responsivity R as well as the system RF losses and the 50 Ω load resistance. The saturation power P_sat is the 3 dB compression point and is given in the figure legend. A clear bias dependence can be seen, whereas no clear frequency dependence was observed. The linearity measurements suggest that, under safe operating conditions, an input power of up to 10 dBm still results in a fairly linear response. Due to the scaling of the power limitations, we expect wider devices to perform even better. Figure 6c shows a bandwidth measurement of device T3 (W_PD = 350 nm), corrected for the system losses; this is the same device as measured for the emission. Consistent with the DC responsivity, the RF power at zero bias is 12 dB lower than the saturated value at −1 V. At −1.5 V, the device shows no clear cut-off up to the setup-limited frequency of 70 GHz. The ripples in the frequency response are most likely caused by RF reflections at the unterminated photodiode. In the present devices the contacts have not been optimized for RF performance; we believe that optimization of the device and contact design could lead to further gains in performance, in line with what was observed for detectors based on wafer bonding 8. We performed a data transmission experiment on the same device to show the capability of the fabricated photodiodes. Figure 6d, e shows the digitally interpolated eye diagrams of 50 Gbit/s on-off keying (OOK) and 100 Gbit/s four-level pulse-amplitude modulation (4PAM) measured on T3. We use a non-return-to-zero (NRZ) signaling scheme for both rates. For the 50 Gbit/s transmission we achieve a bit-error rate (BER) of 3.21 × 10^−5, which is below the hard-decision forward-error correction (FEC) limit of 3.8 × 10^−3 (ref. 41). In contrast, we achieve a BER of 1.17 × 10^−2 for the 100 Gbit/s transmission. This BER is below the soft-decision FEC limit of 4.2 × 10^−2 (ref. 42). A subset of the data-transmission results of a single device has been presented at the Optical Fiber Communication (OFC) conference 43. Table 1 shows the performance metrics of other state-of-the-art III-V-on-Si near-infrared photodetectors compared with the performance of the detectors shown in this work. For our own work we include the highest-speed device; this device has a lower responsivity than some of the others we measured.
For non-waveguide-coupled devices the responsivity is a calculated value based on various estimates.

Discussion

In conclusion, we demonstrated waveguide coupled III-V heterostructure photodiodes monolithically integrated on Si with sub-micron dimensions. The devices show light emission centered at 1550 nm when operated in forward bias as an LED. A blueshift was observed with increasing bias, which we attribute to the band-filling effect, and the threshold voltage of the diodes also showed a strong temperature dependence. In photodetection mode the devices show a dark current down to 0.048 A/cm² at −1 V and a responsivity up to 0.2 A/W at −2 V. This value is not corrected for additional losses due to coupling from the Si waveguide to the III-V active region and should be understood as a lower boundary. With the grating couplers centered around 1320 nm, high-speed detection with a bandwidth exceeding 70 GHz was demonstrated, which enables data transmission at 50 GBd with OOK and 4PAM. A trade-off was observed among different devices in terms of responsivity and f_3dB. We believe that there is significant potential to further optimize the device width and the length of the i-region, or to improve the coupling from Si to III-V for the T-shape devices and the contacting scheme for the straight devices, which could lead to further improvements in responsivity and detection beyond 100 Gbps. Due to the many different trade-offs in terms of contact placement, coupling losses and different geometrical dependencies, a one-to-one comparison between straight and T-shape devices is not possible in this work. However, the T-shape device provides much more freedom in terms of design and possibilities for further performance optimization. Therefore, we believe this is the most desirable architecture for future work. Thermal effects were evaluated by simulation and SThM for both emission and detection operation, and in both cases we find an acceptable temperature increase for stable operation. These findings also correlate with the optical measurements and the measured device linearity at high frequencies. The presented in-plane integration of a III-V heterostructure p-i-n diode self-aligned to a Si waveguide represents a new paradigm for mass production of densely integrated hybrid III-V/Si photonics schemes. By using the same approach for the integration of the detector and the emitter, together with an integration technique that enables heterojunctions along the growth direction, this scheme can also be extended to an all-optical high-speed link on Si without the need for evanescent coupling. Compared to the previous demonstrations in Table 1, we do not rely on pick-and-place methods, multi-level coupling, regrowth or diffusion for the integration of doping profiles. We can leverage the self-alignment, with nm precision, of passive and active components and the in-situ growth of heterojunctions. On the same chip we implemented optically pumped photonic crystal (PhC) emitters covering the entire telecom band 44. The coupling from the 1D PhC emitters to the waveguide coupled photodetectors demonstrated here should be straightforward, as they are implemented in the same plane and integrated in the same MOCVD run. What remains is the optimization towards electrically actuated lasing. This is naturally far from trivial, but we believe it might be possible to achieve in the future based on the in-plane epitaxial growth provided by TASE.

Methods

Device fabrication.
First, a conventional SOI substrate with a top silicon thickness of 220 nm is prepared, which defines the thickness of the III-V device (Fig. 1a(1)). Then, the top silicon layer was patterned by a combination of e-beam lithography using HSQ resist and HBr dry etching of silicon, forming the features of the future waveguides and grating couplers (Fig. 1a(2)). The silicon features were then embedded in a uniform SiO2 layer, which in the following steps serves as the oxide template for the III-V growth. An opening was then made to expose the silicon where the selective back-etching using TMAH starts, exposing a silicon seed at one extremity (Fig. 1a(3)). In the next step, the III-V profile was grown within the template by MOCVD (Fig. 1a(4)). Following the growth, the top oxide was etched further down and the metal contacts were implemented by sputtering and evaporation of Ni-Au metal (Fig. 1a(5)).

Material characterization. To investigate the device architecture and evaluate the material quality, a TEM lamella was prepared using an FEI Helios Nanolab 450S FIB. The cut was conducted along the growth direction on a 350 nm straight-type device. The lamella was then investigated by STEM with a double spherical-aberration-corrected JEOL JEM-ARM200F microscope operated at 200 kV, which permitted assessment of the crystalline quality of the various III-V regions and the Si seed. Using a liquid-nitrogen-free silicon drift EDS detector, element mapping and species quantification were carried out with the commercial Gatan Micrograph Suite® (GMS 3) software, assuming a lamella thickness of ~100 nm and using the theoretical k-factors for the quantification.

Thermal characterization. Thermal effects of the photodiodes were investigated by scanning thermal microscopy (SThM), performed in a high-vacuum (<10^−6 mbar) chamber at room temperature in the noise-free labs at IBM Research Europe - Zurich 45. The SThM-based technique relies on a microcantilever with an integrated resistive sensor coupled to the silicon tip, which enables temperature measurements with a spatial resolution down to a few nanometers and a temperature resolution below 10 mK 46. The tip temperature of T = 267 °C is known by detecting the lever voltage and was calibrated before the scan with the tip out of contact. The scan is operated in contact mode, whereby the contact force is monitored and controlled by a laser deflection system. The temperature of the sample was modulated by applying an AC voltage with a frequency of f = 1 kHz to the device. A series resistance of 10 kΩ was used during the measurement. The local temperature of the sample is thermally coupled through the tip to a resistive sensor integrated in the silicon MEMS cantilever. The change of the sensor temperature leads to a change of the electrical resistance of the cantilever, which is tracked using a Wheatstone bridge circuit. Details on the setup can be found in Supplementary Fig. 1.

Thermal simulation. Thermal simulations were carried out using a commercial finite element method with APDL, which uses Fourier's law of heat conduction to calculate the heat flow. When simulating device operation as an emitter, a uniform heat generation was applied to the III-V region with a total heat power equal to the electrical power applied during the SThM measurement. In the simulation, the back side of the Si substrate is assumed to be at a constant temperature of 300 K.
For device operation as a detector, a Gaussian-distributed heat generation was applied on the III-V region, which simulates a laser spot with a spot size of 1 µm. Only heat conduction was accounted for in the simulation, since heat convection and radiation can be neglected considering the measurement conditions and temperature increase. Electroluminescence spectroscopy. The EL measurements were performed under CW operation from 80 K to 300 K. The device was placed in a cryostat with four probes and the capability of cooling down to 10 K. An Agilent 1500 A was used as both the device parameter analyzer and the power supply. The light emission was collected in free space by an objective with a magnification of ×100 and a numerical aperture of 0.6 and detected by an InGaAs line array detector (Princeton Instruments, PyLoN-IR 1.7) combined with a diffraction grating spectrometer (Princeton SP-2500i). An integration time of 30 s was used to obtain a high signal-to-noise ratio. The layout of the setup can be found in Supplementary Fig. 2. High-speed detection. Responsivity and high-speed measurements were performed in a fiber-coupled optical setup. For bandwidth measurements, a CW tone was generated using a 70 GHz Keysight synthesizer and modulated onto a 1320-nm optical carrier. The system frequency response was calibrated using a commercial 67 GHz u2t photodiode. For the data transmission, an electrical data signal was first generated using a Micram 100 GSa/s digital-to-analog converter (DAC). The output of the DAC was then amplified using an SHF driver amplifier (DA) with a 3 dB bandwidth of 55 GHz. A u2t Mach-Zehnder modulator was used to transfer the electrical signal to the 1320 nm optical carrier generated with a Keysight tunable laser source (TLS). In the next step, the optical signal was amplified to the optimum power using a FiberLabs praseodymium-doped fiber amplifier (PDFA). High-speed RF probes were used to extract the RF signal from the device under test (DUT), and a reverse bias voltage of -1.5 V was supplied to the DUT using an SHF bias tee. At the receiver, an Agilent 160 GSa/s digital storage oscilloscope (DSO) was used to record the generated RF signal. A 30 cm RF cable was used to connect the DSO to the bias tee, limiting the frequency response of the full system. Offline digital signal processing was used to process the recorded signal. The digital signal processing chain comprises signal normalization, timing recovery, linear equalization, and non-linear equalization (see details in Supplementary Fig. 3).
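To illustrate the kind of offline processing listed above, here is a minimal Python sketch of normalization followed by a data-aided LMS linear equalizer applied to a synthetic 4-PAM sequence distorted by a short, made-up ISI channel. It is not the receiver code used in this work; timing recovery and the non-linear equalizer stage are omitted, and the channel, noise level and tap count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])          # 4-PAM symbol alphabet
symbols = rng.choice(levels, size=20000)

channel = np.array([0.1, 1.0, 0.35, -0.1])         # illustrative ISI response
rx = np.convolve(symbols, channel, mode="same")
rx += rng.normal(scale=0.2, size=rx.size)          # additive noise
rx *= 0.01                                         # arbitrary receiver gain

# 1) normalization: remove DC offset and rescale to the nominal symbol power
rx = rx - rx.mean()
rx = rx * np.sqrt(np.mean(levels**2) / np.mean(rx**2))

# 2) linear equalization: LMS-adapted FIR feed-forward equalizer (data-aided / training mode)
n_taps, mu = 11, 1e-3
w = np.zeros(n_taps)
w[n_taps // 2] = 1.0                               # centre-spike initialization
delay = n_taps // 2
out = np.zeros_like(rx)
for n in range(n_taps, rx.size):
    x = rx[n - n_taps:n][::-1]                     # most recent samples first
    y = w @ x
    out[n] = y
    err = symbols[n - delay] - y                   # error against the known training symbol
    w += mu * err * x                              # LMS tap update

def decide(y):
    # slice to the nearest 4-PAM level
    return levels[np.argmin(np.abs(levels[None, :] - y[:, None]), axis=1)]

span = slice(5000, rx.size)                        # skip the adaptation transient
ser_raw = np.mean(decide(rx[span]) != symbols[span])
ser_eq = np.mean(decide(out[span]) != symbols[np.arange(rx.size)[span] - delay])
print(f"symbol error rate before/after equalization: {ser_raw:.3f} / {ser_eq:.4f}")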
Traumatic Events and Vaccination Decisions: A Systematic Review Despite the apparent relationship between past experiences and subsequent vaccination decisions, the role of traumatic events has been overlooked when understanding vaccination intention and behaviour. We conducted a systematic review to synthesize what is known about the relationship between traumatic events and subsequent vaccination decisions. MEDLINE, PsycINFO and CINHAL electronic databases were searched, and 1551 articles were screened for eligibility. Of the 52 articles included in full-text assessment, five met the eligibility criteria. Findings suggest that the experience of trauma is associated with individual vaccination decisions. Social and practical factors related to both trauma and vaccination may mediate this relationship. As this is a relatively new field of inquiry, future research may help to clarify the nuances of the relationship. This review finds that the experience of psychological trauma is associated with vaccination intention and behaviour and points to the potential importance of a trauma-informed approach to vaccination interventions during the current global effort to achieve high COVID-19 vaccine coverage. Introduction Past experiences are an influential factor in the decision to vaccinate [1,2]. More generally, an individual's thoughts and behaviour can be affected by their experience of a traumatic event [3]. Traumatic events are experiences that place an individual or someone close to them at risk of serious harm or death [3]. An intensely distressing event that does not pose the risk of serious harm or death is a stressful event. While the prevalence of specific experiences of trauma varies globally, many individuals will experience a traumatic event at some point in their lives [4,5]. Thus, exploring the role of trauma in influencing vaccination decisions is of potential importance when understanding vaccination intention and behaviour. The decision to vaccinate and the consequences of a traumatic experience are each related to cognitive appraisals, social factors and control beliefs, suggestive of a relationship between the two. Psychological distress following exposure to a traumatic event is variable [3,6,7]. Posttraumatic stress disorder (PTSD) is a psychiatric disorder that may occur in people who have experienced or witnessed a traumatic event, so long as they also experience (for over a month): (a) intrusive symptoms associated with the traumatic event (e.g., flashbacks, nightmares, recurrent memories); (b) persistent avoidance; (c) alterations in mood; and d) increased arousal/reactivity (e.g., outbursts of anger or irritability, lack of concentration, sleep disturbances) [3]. The experience of traumatic events may affect threat appraisal and outcome predictions, which are also utilised when making vaccination decisions. Appraisals that underlie vaccination decisions are based on outcome predictions and perceived threats. Namely, this involves the perceived susceptibility to a vaccine preventable disease (VPD), the perceived severity of the VPD [8] and the anticipated regret of contracting a VPD following vaccine refusal [9]. Similarly, traumatic events have consequences for decision-making via alterations in threat appraisal mechanisms [10] by which maladaptive interpretations and memories of trauma induce a sense of threat in everyday situations [10]. Social factors and control beliefs underpin both the decision to vaccinate and the consequences of trauma. 
Several studies show a relationship between social norms and vaccination intentions [11][12][13][14]. Norms within social networks are evidenced through the geographical clustering of vaccine objectors [15,16] and congruence of vaccine attitudes within families [17,18]. Similarly, social factors such as low social support may heighten the consequences of trauma [19,20] and, inversely, relationships may be affected by traumatic events [21]. The interaction between subjective norms and perceived behavioural control shows a strong association with vaccination intention. Since trauma is often associated with feelings of loss of control and/or helplessness, which may impact volitional control [22], this suggests yet another way in which vaccination decisions may relate to the experience of psychological trauma. Current global vaccination efforts demand an understanding of the drivers of vaccination as well as reasons for under-vaccination in the face of the COVID-19 pandemic. Despite suggestion of an association between psychological trauma and vaccination decisions, the details of this relationship have been overlooked by vaccination interventions. The objective of this systematic review is to synthesize the literature regarding the relationship between traumatic events and vaccination decisions. This may inform whether tailored approaches to address vaccine hesitancy are warranted for trauma-affected individuals. Search Strategies A review was conducted in accordance with PRISMA criteria [23]. The online databases MEDLINE, PsycINFO and CINHAL were searched. As this review aimed to understand vaccination decisions following trauma, exemplars of the latter listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [3] served as a reference point for included terms. The terms 'fear' and 'anxiety' were added to broaden the search to include studies that investigated related stressors. Terms pertaining to vaccination were few, in order to capture the unique decision to vaccinate as opposed to other related concepts. However, the more general search term "needle *" was included to broaden the search to include effects that may impact vaccination decisions in cases where it was not a primary outcome variable of the research. The Medical Subject Headings (MeSH) terms used to search for articles in Medline were: 'Psychological Trauma' or 'Sexual Trauma' or 'Domestic Violence' or 'Child Abuse' or 'Child Abuse, Sexual' or 'Elder Abuse' or 'Spouse Abuse' or 'Gun Violence' or 'Intimate Partner Violence' or 'Physical Abuse' or 'Rape' or 'Terrorism' or 'Anxiety' or 'Fear' or 'Combat Disorders'. Text word terms were: 'Psychological Trauma' or 'Sexual Trauma' or 'Violence' or 'war' or 'abuse *' or 'assault' or 'rape' or 'terroris *'or 'accident *' or 'disaster *' or 'anxiety' or 'fear *' and 'vaccin *' or 'immuniz *' or 'immunis *' or 'needle *'. Inclusion and Exclusion Criteria Inclusion criteria were for peer reviewed studies involving human subjects, published in English between 1980 and 2021 which reported on vaccination of individuals who had experienced trauma. This timeframe was selected since 1980 marked the introduction of post-traumatic stress disorder as a diagnosis in the third iteration of the DSM [24], and thus was the year that trauma was officially recognised for its potential to clinically affect the individual, as it did not appear in the International Classification of Diseases (ICD) until 1992 [25]. 
The review excluded studies for which data collection commenced before 1980, in which subjects were not human, and those that were not written in English. Studies were included if they explored both (i) a traumatic event and (ii) vaccination decisions. The search strategy excluded studies concerning general psychological outcomes (e.g., depression or anxiety) without reference to trauma and studies investigating concepts related to other needle procedures (e.g., phlebotomy, injected medication or other injection paraphernalia). In the few cases where there was ambiguity regarding a study meeting these criteria, two additional reviewers (J.L. and K.E.W.) assessed the study for deliberation until consensus was reached. Screening Following the removal of duplicates, studies were screened for inclusion based on review of the title and abstract. Remaining studies underwent a full-text assessment against the inclusion and exclusion criteria, with references within relevant articles also screened in the same manner: first by title and abstract, and then by full-text assessment. Following screening (Figure 1), five studies were included in the review. Study Characteristics Five studies were included in this review. Table 1 summarises the study type, location by country, and sample, as well as the traumatic events referenced in relation to vaccination decisions, population affected by the events, vaccine being studied, vaccination decision agent (i.e., the person responsible for the vaccination decision), and the key findings relating to trauma and vaccination. Findings The review identified few studies that comment on a traumatic experience in reference to vaccination decisions. Two studies found that vaccine acceptance was associated with perceived likelihood of VPD infection amplified by a traumatic event.
A cross-sectional survey of 461 females [26] reported an increase in acceptance of the HPV vaccine associated with an increased experience of violence (91.1% vs. 80%, p < 0.021), including emotional (91.9% vs. 83.7%, p < 0.027) or physical violence (90.6% vs. 84.8%, p < 0.05). Another qualitative study [27] found that a heightened fear of cholera during a humanitarian crisis was related to increased acceptance of a cholera vaccine. Trust was of particular importance in the latter study, whereby distrust in institutions was associated with hesitancy, while inversely, increased trust was associated with acceptance. However, unlike all other studies in this review, which measured vaccine uptake, these two studies measured vaccination intention. As intentions and behaviours have been found to differ within populations [31], the comparison of these findings with others in this review is tentative. Three studies examined a traumatic event and vaccine uptake [28][29][30]. A cross-sectional survey of 124,385 women [28] found that maternal experience of physical and/or sexual interpersonal violence was associated with a decreased likelihood of full immunization among their children (RR = 0.90; 95% CI = 0.83-0.98), although it did not explore the reason for this relationship. Similarly, vaccine refusal was evident in a study of pediatric survivors of sexual assault [29] in which 48% of vaccine-eligible patients did not receive the HPV vaccine during the intervention. This study identified limited social support, via the absence of a consenting and guiding caregiver, as a key barrier to vaccine uptake. A population-level cross-sectional survey examined vaccination uptake following a ferry disaster with far-reaching, pervasive effects on mental health and social disruption [30]. Residents of a comparison city received more vaccinations than residents of the city affected by the disaster (AOR = 1.10; 95% CI = 1.04-1.17; p = 0.002). While there was no difference in the vaccination rates between depressed individuals in the two cities (AOR = 0.90; 95% CI = 0.75-1.09; p = 0.281), non-depressed individuals residing in the same locality as the disaster victims received fewer vaccinations following the disaster (AOR = 1.12; 95% CI = 1.05-1.20; p < 0.001) compared with non-depressed individuals in the comparison city. Discussion Overall, there is limited research investigating the specific relationship between the experience of a traumatic event and vaccination. The aim of this review is to consolidate what is known. The studies included in this review are relatively recent, with the oldest published in 2012. This suggests that the exploration of the subject is new and that more research is needed to make definitive conclusions. Nevertheless, the findings of this review suggest that the experience of traumatic events is associated with vaccination decisions, and that decisions may be influenced by social and practical factors that are related to both the traumatic event and the vaccination experience. While three of the studies reviewed [28][29][30] found that the experience of a traumatic event was associated with decreased vaccination, two studies [26,27] found that interpersonal violence and a humanitarian crisis were associated with increased vaccination against HPV and cholera, respectively. This suggests that risk appraisals may depend on the type of trauma and vaccine in question, along with other potential factors that require further investigation.
Interestingly, the two studies that found that a traumatic experience was associated with increased vaccine acceptance were the only two studies that measured vaccination intention rather than uptake. Thus, vaccination intention and behaviour may differ following a traumatic experience, and while individuals may be very motivated to receive a vaccine, practical barriers may affect vaccine uptake. Studies included in this review were conducted in India (n = 1), South Korea (n = 1), South Sudan (n = 1), and the United States of America (n = 2), and thus were diverse in cultural scope. As such, the influence of contextual factors may be relevant when drawing conclusions from this review. Vaccination decisions are influenced by social and cultural factors [32], as are the consequences of traumatic events that can be shaped by cultural [33] and gender norms [34]. The mediating effect of these variables in relation to trauma and vaccination may be an appropriate avenue for future research. It is difficult to compare vaccination decisions in this review, as decision agents and vaccine target groups differ among the samples. While most studies of traumatic experiences in this review report on personal vaccination decisions, two studies focused on parents' vaccination decisions for their children. Generally, surrogate decision-making may alter risk appraisals so that they differ from those underlying decisions made for oneself [35,36]. The experience of trauma notwithstanding, the decision to vaccinate a child may be made by more than one parent or caregiver, thus adding a social element to the decision-making process. The social role of decision agents in the context of a traumatic event requires further evaluation in future research. At an individual level, the mechanism underlying the effect of traumatic experiences on vaccination decisions has not been explicitly considered. Although only explicitly mentioned by one qualitative study in this review [27], vaccination research indicates that confidence, underpinned by trust, is a moderate correlate of vaccine acceptance [37]. Since the experience of trauma may affect an individual's capacity for trust [22], this is a potentially important consideration for vaccination decisions following trauma. While most studies considered traumatic events experienced by individuals directly, one study investigated the vaccination behaviour of individuals following death or harm caused to others by a disaster [31]. While this might be due to practical challenges imposed on the community, this might also be due to vicarious trauma acquisition which can also affect an individual's decision making [3]. Indeed, experiencing first-person narratives of victims are found to influence behaviour [38]. The impact of vicarious trauma on vaccination decisions may be important when seeking to understand vaccine hesitancy. This may be especially important in the face of anti-vaccination rhetoric that uses personal anecdotes of traumatic vaccination experiences as evidence of alleged vaccine harms [39][40][41]. The small number of heterogenous studies conducted, limits the generalizability of our conclusions, as does our limited scope of inquiry. Our search terms pertaining to trauma were not exhaustive and focused on exemplars of discrete traumatic events listed by the DSM-V which are thus most likely to have clinical implications. This was done to provide the most pointed findings. 
Accordingly, studies relating to trauma that are not defined as such, but may nonetheless have subjective psychological implications for individuals, were excluded. We note that constructs were not always well defined by the studies that were screened and that conceptual clarity is an issue in the literature. There is a large body of literature concerning medically distressing events. Medical experiences that qualify as traumatic events involve sudden, catastrophic events (e.g., waking during surgery, anaphylactic shock) [3]. Many studies concerning negative vaccination procedures were excluded. While a painful or negative vaccination experience may be a stressor, it may be considered traumatic if it causes the individual serious physical harm and is accompanied by other factors (see above). Broadening the search to encompass stressful events may add to our conclusions; however, this is reliant on the reporting of the impact of trauma on individual subjects. The included studies did not measure symptoms associated with PTSD in individuals who experienced trauma. People who have experienced a broad range of traumatic events follow different trajectories in their subsequent functioning in that they are either (i) resilient, (ii) gradually recovering after an initial period of distress, (iii) worse as time progresses, or (iv) chronically distressed [42][43][44][45]. Moreover, traumatic events that are pervasive or experienced in childhood can have greater effects on daily functioning [46]. Thus, understanding the symptoms currently experienced by individuals and the timing of the traumatic event relative to vaccination may help to draw conclusions about how the consequences of traumatic events may be associated with vaccination decisions. Moreover, there is a need for studies to clearly operationalize traumatic events, experienced symptoms, and vaccination outcomes for a more complete understanding of this relationship. Future research should pay close consideration to the person making the vaccination decision, the vaccine target group, and the trauma-affected group of interest to better understand the mechanism at an individual level. Finally, studies in this review reveal associations between traumatic experiences and vaccination decisions but do not attempt to understand the underlying mechanisms. Our work adds to the growing recognition of the relationship between traumatic experiences and medical encounters [47]. Various trauma-informed approaches to communication within primary health care have facilitated an individualized approach [48][49][50], and qualitative vaccination studies suggest that vaccine-hesitant individuals value individualized management and communication [51,52]. This review highlights the potential importance of considering trauma history prior to vaccination. While various psychometric tools exist to screen for trauma exposure, further research may help determine the best screening method to employ in conjunction with immunization delivery services. Moreover, future research should investigate the efficacy of interventions that use a trauma-informed approach on vaccination intention and uptake. While there are many models of trauma-informed healthcare, there are no prescribed actions for providing a trauma-informed service. Rather, these encompass a shift in the way providers think about trauma and interact with patients [53].
Generally, trauma-informed approaches recognize that trauma is a widespread experience that can affect all levels of the medical context, and they respond to this by applying trauma knowledge in practice and endeavoring to prevent further trauma [54]. Future research could consider the most cost- and time-effective actions to implement alongside vaccination procedures under such an approach. Conclusions This review makes apparent the overlooked and potentially important role of psychological trauma in shaping vaccination intention and behaviour. This is a relatively new field of inquiry and, as such, more research is needed to explore this relationship and understand its mechanism. This review suggests that vaccination interventions may benefit from understanding the unique experiences and perspectives of trauma-affected individuals. Research that focuses on the efficacy of a trauma-informed approach to vaccination delivery services may be helpful for guiding efforts to address vaccine hesitancy.
Contrast-Enhanced Ultrasound Imaging Quantification of Adventitial Vasa Vasorum in a Rabbit Model of Varying Degrees of Atherosclerosis This study used an atherosclerotic rabbit model to investigate the feasibility of quantifying adventitial vasa vasorum (VV) via contrast-enhanced ultrasound (CEUS) imaging to identify early atherosclerosis. Recent evidence has linked adventitial VV with atherosclerotic plaque progression and vulnerability. A growth in VV density has been detected preceding intimal thickening and even endothelial dysfunction. In our study, carotid atherosclerosis rabbit models were used, and animals underwent CEUS imaging at the end of the atherosclerotic induction period. Normalized maximal video-intensity enhancement (MVE) was calculated to quantify VV density. After CEUS imaging, animals were euthanized, and their carotids were processed for histopathological analysis following staining for CD31 and VEGF. Adventitial normalized MVE increased as atherosclerosis progressed (p < 0.001) and showed a linear correlation with the histological findings (r = 0.634, p < 0.001 for VEGF-positive; r = 0.538, p < 0.001 for CD31-positive). Thus, we histologically validated that CEUS imaging can be used to quantify the development of adventitial VV associated with atherosclerosis progression. This method can be used for monitoring the VV to detect early atherosclerosis. Because specimens do not include the entire arterial wall and because there is a possibility of high-risk lesions being missed during sample collection, histology cannot entirely confirm the imaging results, despite the promising potential application of CEUS imaging in the clinical setting. In addition, further validation of CEUS imaging to quantitate VV density over the time course of plaque evolution is indispensable to its clinical application. The current study was designed to investigate the feasibility of quantitative CEUS imaging for the in vivo visualization of intraplaque and adventitial neovascularization during atherosclerosis progression. Serial CEUS imaging was performed in an experimental carotid atherosclerotic rabbit model, with systematic histological assessment as the reference standard. Methods Animals. All rabbits (New Zealand white male rabbits) were obtained from the model animal centre of the 2nd Affiliated Hospital of Harbin Medical University. The principles of laboratory animal care were followed, and all procedures were conducted according to the guidelines established by the National Institutes of Health, with efforts made to minimize suffering. The study protocol was approved by the Medical Ethics Committee on Animal Research of the 2nd Affiliated Hospital of Harbin Medical University (Ethics No. KY2016-090). Carotid atherosclerotic animal model. Carotid atherosclerosis was induced in New Zealand white adult male rabbits (2.5-3.5 kg) (n = 30) by feeding the rabbits a high-fat diet [1% cholesterol (Shanghai Lanji Technology), 10% lard (Shandong Shiyuantianjiaji Factory), and 3% yolk powder (Shandong Shiyuantianjiaji Factory)] for 4 weeks, 8 weeks, or 12 weeks, and these animals were assigned to groups 1, 2 and 3, respectively. An accelerated atherosclerotic rabbit model (n = 20) was generated by combining a high-fat diet (1% cholesterol, 10% lard, and 3% yolk powder) with an endothelial injury caused by a 2 F Fogarty balloon catheter (Boston Scientific, Temecula, California).
The progression of plaques in these rabbits was observed by weekly 2D-ultrasound examinations, and the rabbits were divided into two additional groups (group 4: small plaques with luminal stenosis <50%; group 5: large plaques with near-complete occlusion of the lumen). Age-matched rabbits (n = 10) maintained on normal chow served as the control group (group 0) (Fig. 1). In this combined model, the endothelial injury was performed after one week of the high-fat diet. Rabbits were anaesthetized with ketamine (35 mg/kg intramuscularly), xylazine (5 mg/kg intramuscularly) and acepromazine (0.75 mg/kg intramuscularly). Anaesthesia was maintained during the procedure through isoflurane inhalation. The right carotid arteries were injured with a balloon catheter, as described in a previous study 21 . Briefly, the balloon catheter was gently advanced into the right common carotid artery through the external carotid artery. The balloon was gently inflated at 2 atm and retracted. This procedure was repeated three times in each rabbit. The balloon catheter was then removed, the incision was closed with a suture, and the rabbits were allowed to recover. Contrast ultrasound imaging. CEUS examinations were performed using an advanced ultrasound system (HITACHI HI VISION Preirus, Hitachi Ltd., Tokyo, Japan) and ultrasound contrast software. Contrast pulse sequencing is a multi-pulse imaging method utilizing phase and amplitude modulation of the transmitted ultrasound combined with cancellation algorithms to detect microbubble-specific signals. A preliminary study was performed to optimize the contrast agent and image settings. CEUS imaging of the carotid artery was performed using USphere™ (phospholipid-coated microbubbles with a fluorinated carbon gas core and an average diameter of 1.2 µm; TRUST BIO SONICS) as the contrast agent, with vibration activation every 30 s by a dedicated agitation instrument (for USphere™ series use only, TRUST BIO SONICS) before use. Rabbits were anaesthetized as described above, and a 50 µL contrast bolus was injected through an ear vein, followed by a 1 mL normal saline flush. An EUP-L74M non-linear probe (5-13 MHz) with real-time ultrasound imaging (MI 0.15) was used to capture microbubble reflow into the carotid artery lumen and the VV. To obtain satisfactory CEUS images, B-mode ultrasound scanning was set to reveal the details of the adventitia interface of the deeper wall, avoiding US noise in the vessel lumen, and to allow similar average greyscale levels in the deep and superficial regions 22 . The contrast-enhanced ultrasound images relied on 2D long-axis imaging of the vessels and fully displayed the largest view of the carotid atherosclerotic plaque. Digitally acquired ultrasound images were analysed off-line using customized CEUS imaging software by an observer blinded to the experimental conditions. Video-intensity in the regions of interest (ROIs) drawn in the plaque and adventitia of the injured vessel segment was measured over time using the CEUS imaging software, and time-intensity curves were generated. Video-intensity data, which relate to the concentration of microbubbles in tissue, were plotted against the time elapsed from the destruction pulse. The maximal video-intensity enhancement (MVE) was the peak video-intensity from the time-intensity curves minus the background video-intensity.
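As a schematic illustration of this quantification step (not the vendor's CEUS analysis software), the short Python sketch below derives the MVE of an ROI from a synthetic time-intensity curve as the peak intensity minus the pre-contrast background, and then normalizes the adventitial MVE to that of a luminal reference ROI; the curve shapes and all numerical values are made up for illustration only.

import numpy as np

def mve(intensity, background):
    # maximal video-intensity enhancement: peak of the time-intensity curve minus background
    return float(np.max(intensity) - background)

t = np.linspace(0, 30, 301)                          # s after the destruction pulse
lumen = 5.0 + 80.0 * (1 - np.exp(-t / 2.0))          # synthetic fast, strong luminal refill
adventitia = 5.0 + 12.0 * (1 - np.exp(-t / 6.0))     # synthetic slower, weaker VV refill
background = 5.0                                     # pre-contrast video-intensity

mve_lumen = mve(lumen, background)
mve_adventitia = mve(adventitia, background)
normalized_mve = mve_adventitia / mve_lumen          # adventitial MVE / luminal MVE
print(f"adventitial MVE {mve_adventitia:.1f}, luminal MVE {mve_lumen:.1f}, "
      f"normalized MVE {normalized_mve:.3f}")

Normalizing to a luminal ROI in this way is what makes values comparable across animals and imaging sessions, since it removes the dependence on injected dose and overall gain.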
The normalized maximal video-intensity was calculated as the maximal video-intensity in the adventitia divided by the maximal video-intensity of a luminal region of interest drawn proximal to the lesion 23 . We traced different ROIs for their MVE; an ROI could be located in the carotid lumen, the atherosclerotic plaque, or the adventitia. ROIs of the same area were used within each carotid sample to obtain the MVE. Histology and immunohistochemistry. The carotid atherosclerotic rabbits (n = 22) at weeks 4, 8, and 12 were sacrificed by an overdose of intravenous sodium pentobarbital. The accelerated atherosclerotic rabbits with small atherosclerotic plaques, characterized by luminal stenosis of less than 50%, and those with large atherosclerotic plaques, characterized by full occlusion of the lumen, were sacrificed by an overdose of intravenous sodium pentobarbital after confirmation by B-mode ultrasound imaging. The right carotid arteries were swiftly removed. Each specimen was fixed with 4% paraformaldehyde fixative and embedded in paraffin for haematoxylin and eosin (H&E) staining and immunostaining. Serial cross-sections with a thickness of 3 µm were stained with H&E and observed by light microscopy (Olympus, BX41, Tokyo, Japan). Specimens were immunostained with an anti-CD31 antibody (1:100 dilution; Abcam, Cambridge Science Park, Cambridge, UK) and an anti-VEGF antibody (1:800 dilution; Abcam, Cambridge Science Park, Cambridge, UK) for the characterization and quantification of neovessels. Immunohistochemical reagents and secondary antibodies were from Maixin Bio, and specimens were visualized using a DAKO EnVision System. The VV number was quantified by counting the total number of CD31-positive and VEGF-positive microvessels per carotid artery cross-section. The counting was performed by two independent observers, and the means of the two values were used for analysis. Plasma lipid profile. Apolipoprotein A (APOA), apolipoprotein B (APOB), C-reactive protein (CRP), plasma total cholesterol (TC), triglyceride (TG), high-density lipoprotein (HDL), and low-density lipoprotein (LDL) levels were determined by enzymatic assays of blood samples, which were collected through an ear vein; serum was separated by centrifugation for 15 min at 4 °C. Statistical analysis. All data analyses were performed using PASW 18.0 (IBM, New York, United States). P < 0.05 was considered statistically significant. The CEUS imaging parameters, serum lipid levels, and histological data are presented as the mean ± SD. The significance of differences in all of the parameters between two groups was tested using Student's t-test. Differences among multiple groups were analysed using Friedman one-way ANOVA. The significance of differences between two groups was tested using Dunnett's T3 test. The relationships between the CEUS imaging and histological data were analysed by linear correlation analysis. Results All rabbits successfully underwent CEUS imaging examinations except four that died during the study (one from group 2 on the combination diet, two from group 3 because of fatty diarrhoea, and one from group 5 because of severe carotid stenosis leading to stroke symptoms). Group 5 was designed for direct observation of the adventitial and plaque VV; therefore, we could not calculate the normalized maximal video-intensity for this group. Plasma lipid profile. As expected, experimental groups 1-5 in our study had higher CRP levels than the control group. The ApoA levels in groups 2, 3, 4, and 5 were higher than the control.
The ApoB levels in groups 3, 4, and 5 were higher than the control. The TC and LDL levels in groups 1-5 were all higher than the control. The TG levels in groups 3 and 5 were higher than the control. It is interesting to note that there were no significant differences among groups 1, 2, or 4 and the control for HDL levels. Furthermore, the HDL levels in groups 3 and 4 were higher than the control. However, the HDL/TC ratios in groups 1-5 were all lower than the control. The results are shown in Table 1. In addition, we also compared a group of early-atherosclerosis animals fed a high-fat diet without endothelial injury (confirmed by pathology) with the control group, and there was an increase in normalized MVE (0.146 ± 0.099 vs 0.352 ± 0.293, p = 0.01). Figure 3 shows representative consecutive carotid CEUS and two-dimensional ultrasound images from rabbits with varying degrees of atherosclerosis before and after contrast injection. The two-dimensional ultrasound image of the carotid artery demonstrated the interface between the adventitia and the arterial lumen clearly (Fig. 3A). The arterial intima was thin and smooth. No plaque was detected. Before injection of contrast, the right carotid artery lumen was dark due to tissue signal suppression (Fig. 3B). After contrast injection, the carotid lumen was immediately enhanced, and the adventitial VV of the high-fat diet model animals also showed enhanced signals (Fig. 3C). Foam cells in the subintima and adventitial neovascularization proliferation were later confirmed by haematoxylin and eosin staining. Figure 3D-F shows images taken from a rabbit of group 4, which modelled accelerated atherosclerosis. We can see a small plaque on the intima of the posterior wall of the carotid artery (Fig. 3D) and a small filling defect (Fig. 3F) (for real-time CEUS imaging, see Video 1). More enhanced signals were detected in Fig. 3F than in Fig. 3E. The final three images (Fig. 3G-I) were taken from an accelerated atherosclerotic rabbit in group 5. The lumen was almost occluded by a large plaque (Fig. 3G). The arterial lumen was dark before the injection of contrast agents (Fig. 3H). After the injection, the outer membrane was enhanced and much more adventitial contrast was visible than in the other groups (Fig. 3I). Online videos show the same findings in the form of higher-resolution movie files. Histological findings. A gradual increase in VEGF-positive microvessels was observed in groups 1-5 compared to the controls, and the increase in group 5, with large plaques that almost occluded the lumen, was very abrupt (p < 0.001) (Fig. 4A). Furthermore, differences in the number of VEGF-positive microvessels between the two accelerated atherosclerotic groups, groups 4 and 5, were significant (p = 0.023). The number of CD31-positive microvessels showed a similar trend to the VEGF-positive microvessels (Fig. 4B). A significant increase in CD31-positive microvessels was observed (p < 0.001). Compared to group 4 (the accelerated atherosclerotic rabbit model with small plaques), a higher number of CD31-positive microvessels was observed in group 5 (p = 0.002). The number of VEGF-positive adventitial microvessels in the early-atherosclerosis group that received a high-fat diet without endothelial injury showed a significant increase compared to the control group (3.58 ± 1.71 vs 1.3 ± 0.48, p < 0.001).
Similarly, the number of CD31-positive adventitial microvessels in the early-atherosclerosis group that received a high-fat diet without endothelial injury was also higher compared to the control group (6.21 ± 3.52 vs 1.90 ± 0.10, p < 0.001). Representative histological cross-sections from the groups with early atherosclerosis (Fig. 5A-C), small plaques (Fig. 5D-F) and large plaques (Fig. 5G-I) are shown in Fig. 5. The plaque in Fig. 5D, the VEGF-positive microvessels in Fig. 5B, E and H, and the CD31-positive microvessels in Fig. 5C, F and I are highlighted by small red arrows. It is obvious that the number of VEGF-positive and CD31-positive microvessels increased with the progression of atherosclerosis. A linear correlation between the number of VEGF-positive microvessels and the normalized MVE was observed (r = 0.634, p < 0.001) (Fig. 6A). Similarly, a linear correlation between the number of CD31-positive microvessels and the normalized MVE was observed (r = 0.538, p < 0.001) (Fig. 6B). Figure 3. Two-dimensional ultrasound and contrast ultrasound images of the carotid before and after contrast injection. The two-dimensional ultrasound image of the carotid artery clearly shows the adventitia interface and lumen (A). The carotid artery lumen was dark before contrast injection, and the outer membrane is clearly visible, as indicated by the yellow arrows (B). The carotid lumen became immediately visible, and the adventitia also showed an enhanced signal after contrast injection, as indicated by the yellow arrows (C). There was a small plaque on the wall of the deep region, as indicated by the blue arrows (D). The carotid artery lumen was dark before contrast injection (E). The plaque appeared as a small filling defect, and the outer membrane was visible with more adventitial contrast, as indicated by the arrows (F). There was a large mixed-echo plaque almost filling the lumen (G). Before injection of contrast, the carotid artery lumen was dark (H). After contrast injection, the outer membrane was obviously enhanced and much more adventitial contrast was visible. There is contrast enhancement inside the plaque, as indicated by the yellow arrows (I). Discussion In this study, we explored the feasibility of CEUS imaging for visualizing the neovascularization in the adventitial VV in rabbit models with different degrees of atherosclerosis caused by a high-fat diet with or without balloon injury. The study further confirmed the utility of quantitative measures of contrast enhancement for measuring VV density, especially in early atherosclerosis. Our findings demonstrated that CEUS imaging could be used as a quantitative approach to demonstrate adventitial VV in early atherosclerosis before intimal changes. The contrast-enhanced VV density increased as atherosclerosis progressed. This finding was confirmed by histological analysis. The results supported the hypothesis that the adventitial VV network may play an important role in plaque progression. In addition, we designed a model of advanced atherosclerosis with large plaques to observe the adventitial VV and its relationship with the new vessels in plaques, and we found that the enhancement in the plaque followed and derived from the outer membrane enhancement (Video 2). Identifying the adventitial VV and plaque neovascularization is undoubtedly very important for understanding atherosclerosis progression and plaque vulnerability. Unfortunately, methods for identifying and quantifying the VV are limited.
Techniques such as coronary angiography and intravascular ultrasound have limitations, including low far-field resolution, low molecular sensitivity, interference by blood, lack of structural definition, and motion and flow artefacts 15 . Nevertheless, CEUS imaging is a non-invasive technique, and because microbubbles could act as the tracer of red blood cells, the microcirculation in the plaque neovascularization and adventitial VV can be clearly visualized. Therefore, CEUS imaging appears to be an emerging technique serving as a valuable method for the early detection of premature atherosclerosis and for the detection of vulnerable plaques in at-risk populations. Some studies have shown that CEUS imaging can be clinically useful in the carotid artery, aorta abdominalis and femoral artery [24][25][26] . Most of the previously published studies used a qualitative scale to score the presence and amount of intraplaque neovascularization 16,20,27 . Quantitative measurements of the contrast enhancement should further improve the reproducibility of the results and reduce observer variability. Recently, a study conducted by Moguillansky et al. showed that CEUS imaging can be used for quantifying plaque neovascularization 15 . In the current study, we demonstrated that it can be used for assessing plaque-specific atherosclerosis and the degree of atherosclerosis. We also depicted a relationship between VV neovascularization and atherosclerosis progression in vivo. The VV is a network of small blood vessels with thin walls that are located in the adventitia of large arteries. VV are supposed to carry nutrients to parts of the vascular walls that are distant from the lumen 28 . In the absence of atherosclerosis, the VV is limited to the adventitia and outer media 29 . Arterial regions with low VV density are prone to forming initial plaques, whereas advanced lesions develop more rapidly in regions with high intraplaque VV 30,31 . VV density is dynamic, increasing with hypercholesterolaemia 32 and decreasing with cholesterol-lowering strategies like statins 23 . Therefore, the present study focused on adventitial VV, a potential marker of plaque vulnerability. A growth in VV density has been reported to precede intimal thickening and even endothelial dysfunction in animal models 11 and humans 10 , suggesting that neoangiogenesis could occur at the earliest stage of atherogenesis. However, there has been no study of using CEUS imaging as a marker of early atherosclerosis. In our study, we designed a model of feeding only a high-fat diet to rabbits without injuring the endothelium, and the data suggest that the outer membrane was enhanced in the high-fat diet early atherosclerotic rabbit model, and an increase in adventitial VV without endothelium injury was confirmed by histological staining and immunohistochemistry. In the present study, we observed adventitial VV in early atherosclerosis without endothelial injury to assess the application and reliability of CEUS imaging to quantify neovessels. In addition, contrast ultrasound was performed to measure adventitial VV and neovessels in plaques of rabbit models with different degrees of atherosclerosis. We found that peak video-intensity is linearly correlated with the histologic indices of VV density. The highlight of our study was that we verified that formation of atherosclerotic plaques was preceded by neoangiogenesis in the adventitia. 
This conclusion was first drawn by visualizing the increased adventitial VV density in rabbit models generated using a high-fat diet without arterial endothelial injury, and it was then further confirmed by histological staining and immunohistochemical examination. Undoubtedly, this information is good news for patients with early-stage carotid atherosclerosis. CEUS imaging may be used as a non-invasive instrument for identifying early atherosclerosis in humans. Further studies in larger cohorts will be needed to confirm the diagnostic value of CEUS imaging in human early-stage carotid atherosclerosis. Another unique feature of the current study was that we designed a model of atherosclerotic rabbits with lumen occlusion to observe the relationship between the adventitial VV and the neovessels in plaques. We found that the enhancement of the outer membrane occurred earlier than that of the plaques and that the enhancement spread from the outer membrane into the plaques. This finding may support the view that intraplaque neovascularization comes from the sprouting of the existing VV network in the adventitia 33-35 . The limitations of this study need to be mentioned. First, no animal model completely mimics human atherosclerosis; thus, our results should be extrapolated to humans with caution. Second, our image acquisition relied on 2D long-axis imaging of the vessels, which does not fully capture the spatial heterogeneity and asymmetry of atherogenesis, and this issue may lead to imperfect correlations between the imaging and histology data. Conclusion The present study demonstrated the feasibility of CEUS imaging for quantifying the VV in early atherosclerosis. The CEUS peak video-intensity predicts the extent of neovascularization, and it was histologically confirmed that the progression of atherosclerotic plaques was related to the VV. Our study also showed that CEUS imaging can be used as a non-invasive quantification tool for the VV. Early atherosclerosis could be identified through this method, and it may be helpful for guiding clinical treatment.
Elucidation of the Role of Peptide Linker in Calcium-sensing Receptor Activation Process* Family 3 G-protein-coupled receptors (GPCRs), which include the metabotropic glutamate receptors (mGluRs), the sweet and "umami" taste receptors (T1Rs), and the extracellular calcium-sensing receptor (CaR), represent a distinct group among the superfamily of GPCRs characterized by large amino-terminal extracellular ligand-binding domains (ECD) with homology to bacterial periplasmic amino acid-binding proteins that are responsible for signal detection and receptor activation through as yet unresolved mechanism(s) via the seven-transmembrane helical domain (7TMD) common to all GPCRs. To address the mechanism(s) by which ligand-induced conformational changes are conveyed from the ECD to the 7TMD for G-protein activation, we altered the length and composition of a 14-amino acid linker segment, common to all family 3 GPCRs except the GABAB receptor, in the CaR by insertion, deletion, and site-directed mutagenesis of specific highly conserved residues. Small alterations in the length and composition of the linker impaired cell surface expression and abrogated signaling of the chimeric receptors. The exchange of nine amino acids within the linker of the CaR with the homologous sequence of mGluR1, however, preserved receptor function. Ala substitution for the four highly conserved residues within this amino acid sequence identified a Leu at position 606 of the CaR as critical for cell surface expression and signaling. Substitution of Leu606 with Ala resulted in impaired cell surface expression; however, Ile and Val substitutions displayed strong activating phenotypes. Disruption of the linker by insertion of nine amino acids of a random-coil structure uncoupled the ECD from regulating the 7TMD. These data are consistent with a model of receptor activation in which the peptide linker, and particularly Leu606, provides a critical interaction for CaR signal transmission, a finding likely to be relevant for all family 3 GPCRs containing this conserved motif. The human extracellular calcium-sensing receptor (CaR) 2 is a novel cation-sensing G-protein-coupled receptor (GPCR) in parathyroid cells and plays a central role in the regulation of extracellular [Ca2+]o homeostasis by controlling the rate of parathyroid hormone secretion (1). The CaR may also be involved in other physiological regulation in organs such as bone, brain, kidney, and intestine. Activation of the CaR by elevated levels of [Ca2+]o stimulates phospholipase C via the Gq subfamily of G-proteins, resulting in an increase in phosphoinositide (PI) hydrolysis and subsequently in the release of intracellular Ca2+ from stores in the endoplasmic reticulum. The CaR is a member of the family 3 GPCR gene family that includes eight metabotropic glutamate receptors (mGluR1-8), two γ-aminobutyric acid receptor subunits (GABAB1 and GABAB2), three sweet and umami taste receptors (T1R1, T1R2, and T1R3), several putative rodent pheromone receptors (V2Rs), and orphan receptors (GPRC6A, GPRC5B-5D) (2). All family 3 GPCRs possess a large amino-terminal extracellular ligand-binding domain (ECD) that shares structural similarity with the bi-lobed Venus flytrap domain motif (VFTM) of bacterial periplasmic binding proteins, connected to a seven-transmembrane helical domain (7TMD) prototypical for all GPCRs responsible for G-protein activation (1,2).
Many of the GPCRs in this family are covalently joined homodimers, with the two monomers being linked by one or more disulfide bridges in the VFTMs. This has been rigorously demonstrated for the CaR and mGluRs (3)(4)(5). The GABAB receptor, in contrast, is an obligate heterodimer composed of GABAB1 and GABAB2 subunits stabilized by a carboxyl-terminal coiled-coil interaction (6). Another major difference between the structure of the GABAB receptor and many other family 3 GPCRs, including the mGluRs, CaR, sweet/umami taste receptors, and putative pheromone receptors, is the presence in the latter receptors of a distinct, highly conserved nine-cysteine domain after the VFTM (called the NCD), with a carboxyl-terminal extension of a 14-amino acid linker after the ninth cysteine connecting to the first transmembrane helix of the 7TMD. Structural predictions suggest that the NCD may possess four β-strands and three disulfide bridges, and for the CaR this domain seems to be essential for transmission of signals from the ECD to the 7TMD (7,8). Although the linker connecting the NCD with the first transmembrane helical domain contains no cysteines for disulfide linkages, the length of this 14-residue linker is highly conserved in NCD-containing receptors, indicating a stringent structural constraint. Thus, we and others hypothesized that this peptide linker might contribute to the signal transmission of family 3 GPCRs (9,10). To address how the ligand-induced conformational changes of the VFTM might be transmitted for G-protein coupling, the peptide linkers of the GABAB receptor heterodimer were examined (11). Modifications of the two GABAB receptor subunit linkers by changes in sequence and/or length were mostly tolerated, and thus the linker regions in GABAB receptors were predicted to act only as tethers for the VFTMs to the 7TMD, supporting a direct contact model of receptor activation. In this model, illustrated in Fig. 1A, a, receptor activation occurs predominantly through contacts between the ligand-bound VFTM and exo-loops of the 7TMD, and the linker acts solely to keep the VFTM in proximity to the 7TMD. Alternatively, x-ray crystallographic analyses revealed that the distance between the VFTM carboxyl termini in the dimers of mGluR1 decreases upon glutamate binding in the closed state of the two VFTMs, suggesting that the two 7TMDs may be drawn closer to the VFTMs through the linkers upon ligand activation (12). As depicted in Fig. 1A, b, this alternative model, referred to as the peptide-linker model of receptor activation, predicts that the structure and conformational change of the linker are important for receptor activation. In a combination model, a direct contact of the VFTM and 7TMD is mediated by a linker conformational change (Fig. 1A, c). To identify the role of the peptide linker in the activation process of the CaR and to discriminate among the different proposed receptor linker activation models, we created deletions and insertions to modify the length of this linker of the CaR, changed its composition by chimeric substitution of homologous sequences between the CaR and mGluR1, and introduced residue changes by point mutation. Our analysis of the expression and signaling properties of these receptor constructs suggests an essential role for highly conserved amino acids within this structure. Site-directed Mutagenesis of the Chimeric and Point Mutant Receptors-The coding sequence for the human CaR was inserted into the pCR3.1 expression plasmid.
Mutations were introduced in the sequences encoding the linker region of the CaR using the QuikChange site-directed mutagenesis kit (Stratagene) as described previously (3). Briefly, a pair of complementary primers of 65-70 bases was designed for each chimeric receptor construct, with the targeted residues placed in the middle of the primers. In the case of point mutations, a pair of complementary primers of 30-35 bases was similarly constructed, with the residue change placed in the middle of the primers. The overlapping oligonucleotide sequences used for site-directed mutagenesis and construction details of the chimeric and point mutant constructs are available upon request. The mutations were confirmed by automated DNA sequencing using a Taq Dye Deoxy terminator cycle sequencing kit and an ABI Prism 377 DNA sequencer (Applied Biosystems). All five chimeric constructs were completely sequenced to confirm the absence of mutations in other regions of these mutant constructs. For each point mutant construct sequenced, we analyzed at least two independent clones and confirmed that they had identical expression and functional characteristics. Transient and Stable Expression of Receptors in Mammalian Cells-HEK293 and HeLa cells were transfected with the pCR3.1 expression plasmid encoding the CaR receptor constructs using Lipofectamine (Invitrogen). To achieve optimal expression, 90% confluent cells in 80-cm² flasks were transfected with an optimized amount of plasmid DNA (8 µg/flask) that was diluted in Dulbecco's modified Eagle's medium, mixed with diluted Lipofectamine (Invitrogen), incubated at room temperature for 30 min, and added to the cells. After 5 h of incubation, the transfection medium was replaced with complete Dulbecco's modified Eagle's medium containing 10% fetal bovine serum, and this medium was replaced again 24 h after transfection with complete Dulbecco's modified Eagle's medium containing 10% fetal bovine serum. This amount of DNA was determined to generate optimal transfection efficiency and the highest expression level of the CaR. Clonal HEK293 cell lines expressing the Gly-link9ins chimeric CaR were selected by isolating Geneticin (G418; 800 µg/ml)-resistant independent clonal lines 3 weeks after transfection. The clonal cell line (Gly-link9insC10) was selected for functional studies on the basis of the highest cell surface expression, determined by immunoblot analysis with the monoclonal antibody ADD, clone 5C10 (Affinity BioReagents), raised against a synthetic peptide from the CaR ECD, and was maintained in media containing 400 µg/ml G418. Immunoblot Analyses with Detergent-solubilized Whole Cell Extracts-Whole cell extracts were prepared as described previously (3). Confluent cells in 80-cm² flasks were rinsed with ice-cold phosphate-buffered saline, scraped, and solubilized in solution B (20 mM Tris-HCl, pH 6.8, 150 mM NaCl, 10 mM EDTA, 1 mM EGTA, and 1% Triton X-100) with freshly added protease inhibitor mixture (Roche Applied Science) and 10 mM iodoacetamide. The protein content of each sample was determined by a modified Bradford method (Bio-Rad). An equal amount of protein (30 µg) was loaded in each lane and electrophoretically separated on 6% Tris-glycine gels by SDS-PAGE.
Samples were then electrophoretically transferred to nitrocellulose membranes, incubated with the human CaR-specific monoclonal ADD antibody and a secondary goat anti-mouse antibody conjugated to horseradish peroxidase (Kirkegaard & Perry Laboratories), and immunoreactivity was detected with an enhanced chemiluminescence system (Amersham Biosciences). For cleavage with endoglycosidase H (Endo-H), Triton X-100-solubilized proteins (~60 µg) were incubated with 0.5 milliunits of Endo-H for 1 h at 30°C before loading 30 µg of sample per lane on 6% SDS-PAGE for immunoblotting. PI Hydrolysis Assay-Stimulation of PI hydrolysis in intact HEK293 cells was performed as previously reported (3). Briefly, confluent cells in 24-well plates were incubated overnight in complete Dulbecco's modified Eagle's medium containing 3.0 µCi/ml of myo-[3H]inositol (PerkinElmer Life Sciences). Cells were then depleted of extracellular Ca2+ ([Ca2+]o) as follows: attached cells were first washed with Ca2+-free phosphate-buffered saline, followed by a 30-min incubation and wash with Ca2+-free PI buffer (120 mM NaCl, 5 mM KCl, 5.6 mM glucose, 0.4 mM MgCl2, 20 mM LiCl, and 25 mM PIPES, pH 7.2). This solution was removed, and cells were then treated for 30 min with test agents in PI buffer. The reactions were terminated by addition of 1 ml of HCl/methanol (1:1000, v/v). Total inositol phosphates (IP) were isolated by chromatography on Dowex 1-X8 columns. Data Analysis-The [Ca2+]o saturation profiles of PI hydrolysis were analyzed using GraphPad Prism software (version 3). Curve fits for single-site and other ligand-binding models were compared for minimum residual errors. The coefficients of the best-fit model for each individual experiment were used to compute the EC50 values reported in Table 1. For the L606A mutant, the data were best fit by a single-site, non-cooperative model.

Expression and Function of CaR Peptide Linker Chimeric Receptors-This study was designed to determine the impact of the length and composition of the peptide linker connecting the VFTM/NCD and the 7TMD on processing/cell surface expression and signaling of the CaR. Amino acid sequence alignment among divergent members of family 3 GPCRs showed that the 14-amino acid chain length between the last cysteine of the NCD (Cys598 in CaR) and the first amino acid of the transmembrane domain (Gly613 in CaR) is strictly maintained among the CaR, mGluR1, mGluR5, the sweet and umami taste receptors (T1R1-3), and a putative pheromone receptor (V2R2), whereas there is significant divergence in the amino acid compositions of the linkers (Fig. 1B). We modified the linker length and composition of the CaR as shown in Fig. 1C. As described in this figure, two chimeric CaR constructs were generated in which the linker length was increased by inserting either the corresponding nine-residue linker sequence of rat mGluR1 (IPVRYLEWS) or GGGASASGG, a Gly-rich sequence predicted to form a random coil, into the middle of the 14-amino acid CaR linker; these two chimeric constructs were named R1-link9ins and Gly-link9ins, respectively. Two other chimeras (R1-link9 and Gly-link9, respectively) were generated that changed the composition of the linker sequence without changing the CaR linker peptide length. In these constructs, nine residues of the 14-amino acid CaR linker were changed either to the corresponding nine-residue mGluR1 sequence or to the Gly sequence described above. A fifth construct (Δ-link9), in which these nine residues were deleted, shortened the CaR linker peptide.

FIGURE 1. A, models of family 3 receptor activation. The direct contact model (a) is shown here for the CaR as initially proposed for the GABA B1 and GABA B2 heterodimer (11). In contrast to the GABA B receptor subunits, the CaR, like the mGluRs, is a disulfide-linked homodimer, shown schematically here as two subunits in black. The Ca2+ ligand is shown as light gray spheres bound to the VFTM, stabilizing the closed conformation of the two VFTM protomers as seen in the glutamate-bound crystal structures of the mGluR1 VFTM. The binding of ligand, associated with the closed conformational change in the VFTM, is transmitted to the 7TMD of the receptor through direct contacts of the ECD and 7TMD. In the CaR and mGluRs, but not in the GABA B receptor, the linker is connected to the VFTM through an NCD, shown as a dotted sphere between the VFTM and the linker. In this direct contact model, the linkers act only as tethers between the VFTM/NCD and the first transmembrane segments and undergo no conformational changes. In the peptide-linker model (b), the ligand-bound closed conformation of the VFTM protomers decreases the distance between the protomer carboxyl termini, and the linkers draw the VFTM/NCD and 7TMDs closer to trigger G-protein signaling. For completeness, we provide a combination model (c) in which the ligand-bound conformational changes in the VFTM/NCD draw the 7TMDs closer, as in the peptide-linker model, with movements of the linkers required for altered contacts of the VFTM/NCD and 7TMDs underlying receptor activation. B, sequence alignment of the linker regions of family 3 GPCRs. The sequences of the 14-amino acid linkers of 10 members of family 3 GPCRs are shown. The alignment is based on the CaR linker sequence from Cys598, the last cysteine in the NCD, to Gly613, the first residue in the 7TMD. The underlined amino acids of the CaR and mGluR1 are the nine residues exchanged between these receptors within the linker peptide. Identical and highly conserved residues are boxed. C, five CaR mutants constructed to change the length and composition of the linker peptide. In the R1-link9ins and Gly-link9ins chimeric constructs, the homologous nine amino acids from mGluR1 (underlined in B) or a nine-amino acid random-coil peptide, GGGASASGG, respectively, was inserted between Phe605 and Leu606, resulting in linker extensions. In the R1-link9 chimera, the homologous nine amino acids from mGluR1 (underlined in B) replaced the corresponding nine residues in the CaR, keeping the linker length constant. Similarly, in the Gly-link9 chimera, the corresponding nine amino acid residues of the CaR were deleted and replaced with the GGGASASGG peptide so that the length of the linker was kept intact. A deletion mutant (Δ-link9) lacking these nine CaR residues (underlined in B) resulted in a shortened peptide linker.

The analysis of the expression patterns of the wild type CaR and these five chimeric mutants is shown in Fig. 2A. Immunoblotting experiments for chimeric receptors transiently expressed in HEK293 cells showed that both the wild type CaR and the R1-link9 receptor were expressed efficiently at the cell surface, as assessed by immunoblotting with the monoclonal antibody ADD (Fig. 2A). As seen in this figure, under reducing conditions the ADD antibody detected two major monomeric bands of ~150 and 130 kDa in wild type receptor-expressing cells.
We and others (13,14) have previously reported that the 150-kDa band identifies CaR forms expressed at the cell surface, modified with complex carbohydrates by N-glycosylation of several Asp residues in the ECD and is resistant to Endo-H digestion. The 130-kDa band contains high mannose-modified forms, trapped intracellularly, and Endo-H digestion reduces these high mannose-modified receptor forms to non-glycosylated forms shifted to a 120-kDa band on immunoblots. Similarly, the ADD antibody detected both bands in R1-link9 chimeric receptor-expressing cells and the upper 150-kDa band showed resistance to Endo-H digestion, whereas the lower 130-kDa band was sensitive to Endo-H. HEK293 cells consistently expressed a lower intensity of sig-nal for the 150-kDa forms of R1-link9 compared with the wild type CaR, probably indicating a lower number of this chimeric receptor at the cell surface. Surprisingly, other chimeric mutant receptors showed no detectable 150-kDa band with mostly a single Endo-H-sensitive 130-kDa band. However, the Gly-link9ins mutant in several immunoblotting experiments showed the presence of a faint upper 150-kDa band, indicating some forms of this mutant receptor may reach the cell surface. Next, each mutant receptor was expressed transiently in HEK293 cells and the capacity of each mutant receptor to generate intracellular signals for IP accumulation in response to [Ca 2ϩ ] o was compared with that of a wild type CaR. With the exception of R1-link9, all of the chimeric receptors failed to show any response even at a saturating concentration (10 mM) of Ca 2ϩ . The maximal responses (E max ) for these mutants were indistinguishable from the basal level and none of the mutants displayed constitutive activity (data not shown). Among these chimeric receptors, the Gly-link9ins mutant that showed some cell surface-expressed receptor forms also failed to respond to [Ca 2ϩ ] o in the PI assay (Fig. 2B). The saturation curves for wild type CaR and R1-link9 receptor are shown in Fig. 2B. The R1-link9 receptor exhibited similar sigmoidal saturation as the wild type CaR but with somewhat reduced sensitivity to [Ca 2ϩ ] o (EC 50 of 5.6 mM versus wild type CaR EC 50 4.7 mM, Table 1). Signaling Properties of Chimeric Receptor Gly-link9ins-Because of the low level of cell surface expression in HEK293 cells transiently expressing Gly-link9ins receptor, we tested varying amounts of the Gly-link9ins plasmid DNA during transient transfection (1, 2, and 3 g of DNA/well of a six-well plate) and decreasing the wild type CaR plasmid DNA amount (0.25 and 1 g of DNA/well) to achieve comparable cell surface receptor levels of the two receptors. Based on the intensity of the 150-kDa band in immunoblots, comparable cell surface expression was achieved by transfecting cells with 0.25 g of DNA/well for wild type CaR and 3 g of DNA/well for Gly-link9ins receptor. Under these conditions, the wild type CaR showed 3-4-fold increases in IP accumulation in response to 10 mM Ca 2ϩ , whereas Gly-link9ins receptor showed no increase in inositol phosphate formation (data not shown). 
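The saturation analysis just described was performed in GraphPad Prism; purely as an illustration of that kind of fit, the sketch below fits a generic Hill (sigmoidal) model to made-up [Ca2+]o-response points with SciPy and reports the EC50, along with a single-site (slope fixed at 1) variant of the sort used for the L606A data. The data values and parameter guesses are placeholders, not the authors' measurements.

```python
# Illustrative only: synthetic [Ca2+]o vs. IP data, not values from this study.
import numpy as np
from scipy.optimize import curve_fit

def hill(ca, bottom, top, ec50, slope):
    """Sigmoidal concentration-response curve for [Ca2+]o-stimulated IP."""
    return bottom + (top - bottom) / (1.0 + (ec50 / ca) ** slope)

def single_site(ca, bottom, top, ec50):
    """Non-cooperative (single-site) variant: Hill slope fixed at 1."""
    return hill(ca, bottom, top, ec50, 1.0)

ca_mm = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 7.5, 10.0])   # mM [Ca2+]o
ip_fold = np.array([1.0, 1.1, 1.4, 2.1, 2.9, 3.4, 3.8, 4.0])  # IP, fold over basal

popt, _ = curve_fit(hill, ca_mm, ip_fold, p0=[1.0, 4.0, 4.0, 2.0], maxfev=5000)
bottom, top, ec50, slope = popt
print(f"sigmoidal fit: EC50 = {ec50:.1f} mM, slope = {slope:.1f}, Emax = {top:.1f}-fold")

popt1, _ = curve_fit(single_site, ca_mm, ip_fold, p0=[1.0, 4.0, 4.0])
print(f"single-site fit: EC50 = {popt1[2]:.1f} mM")
```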
TABLE 1. Maximal response and EC50 values of [Ca2+]o on the CaR and its mutant receptors. The maximal IP production measured with 10 mM Ca2+ in cells transiently expressing the indicated receptor is expressed as a percentage of the maximal wild type (wt) CaR response, and (n) designates the number of independent experiments performed in triplicate or duplicate.

To confirm unambiguously that the Gly-link9ins chimeric receptor expressed at the cell surface did not respond to [Ca2+]o, we created several stable HEK293 clonal cell lines expressing the Gly-link9ins receptor. The clonal cell line with the highest cell surface expression (Gly-link9insC10) showed a 2-fold increase in basal response compared with non-transfected cells but no IP formation at up to 10 mM Ca2+. However, application of 1 µM NPS-R568, a positive allosteric modulator of the CaR, with or without 10 mM Ca2+, resulted in greatly enhanced IP formation (Fig. 3A). Similarly, we observed in an HEK293 cell line stably expressing the ECD-deleted CaR mutant receptor (T903-Rhoc) that Ca2+ or NPS-R568 alone stimulated little or no significant increase in IP accumulation, but co-application of both resulted in a significant increase in IP formation (Fig. 3B). Also, as shown in Fig. 3C, the [Ca2+]o saturation curve of the Gly-link9ins receptor in the presence of 1 µM NPS-R568 was similar to what we have observed previously for the T903-Rhoc receptor (15). These results are consistent with the reported binding sites of NPS-R568 and Ca2+ within the 7TMD of the CaR, and Ca2+ binding in the 7TMD appears to act synergistically to enhance NPS-R568 activation of the receptor (15-17). The results revealed that the cell surface forms of the Gly-link9ins receptor were indeed capable of generating cellular responses, probably upon NPS-R568 and Ca2+ binding at the 7TMD sites. Requirement for Leu606 within the CaR Linker Region for Cell Surface Expression and Function-The alignment of the linker region amino acid sequences of several family 3 GPCRs, including CaR, mGluR1, mGluR5, V2R2, T1R1, T1R2, and T1R3, identifies only four identical and/or highly conserved amino acid residues within the nine-amino acid linker sequences of these receptors (Fig. 1B). To evaluate the properties of these conserved residues, Ile603, Phe605, Leu606, and Trp608 within the CaR linker peptide were individually mutated to alanine. These single point mutants probed the importance of each of these conserved residues within the nine amino acids of the CaR linker sequence. The [Ca2+]o saturation curves of the I603A mutant receptor showed a somewhat reduced maximal response, and the F605A and W608A mutant receptors yielded similar maximal responses, with calculated EC50 values for [Ca2+]o ranging from 4.2 to 5.7 mM (Fig. 4, A-C, Table 1). In contrast, the L606A mutant displayed a markedly attenuated [Ca2+]o-elicited response, with a maximal response diminished to 31% of the wild type response at 10 mM Ca2+ (Fig. 4B, Table 1). The shape of the [Ca2+]o saturation curve of L606A changed to hyperbolic from the steeply sigmoidal saturation profile of the wild type CaR. Immunoblot experiments confirmed that the highly reduced functional response of L606A was due to deficient cell surface expression of this mutant, as seen by the presence of little or none of the cell surface-expressed 150-kDa forms compared with the wild type and other mutant receptors tested (Fig. 4D).
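Because cell surface levels were judged from the intensity of the Endo-H-resistant ~150-kDa band relative to the Endo-H-sensitive ~130-kDa intracellular form, a summary of such densitometry readouts reduces to simple ratios. The sketch below illustrates one way this could be tabulated; the band intensities and construct list are arbitrary placeholders, not values taken from the blots.

```python
# Placeholder densitometry values (arbitrary units), not measurements from this study.
bands = {
    #            (I_150kDa, I_130kDa)
    "wild type": (100.0, 80.0),
    "R1-link9":  (45.0, 85.0),
    "L606A":     (5.0, 90.0),
}

wt_surface = bands["wild type"][0]
for name, (i150, i130) in bands.items():
    surface_fraction = i150 / (i150 + i130)   # mature (Endo-H-resistant) fraction
    relative_surface = i150 / wt_surface      # surface signal relative to wild type
    print(f"{name:10s} surface fraction {surface_fraction:.2f}, "
          f"{relative_surface:.0%} of wild-type surface signal")
```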
These immunoblot observations indicate that a severe constraint on the amino acid side chain at position 606 may be critical for cell surface expression of the CaR. Alanine at this position provides a less bulky side chain and may fail to make a critical van der Waals contact normally provided by the leucine residue at position 606. To evaluate the importance of Leu606 further, we next engineered three point mutants, L606F, L606I, and L606V, in which the replacement amino acids (Phe, Ile, and Val) preserved the hydrophobicity but changed the length of the side chain. Because of the similarities between the side chains of the aliphatic amino acids Leu, Ile, and Val, we expected the Ile and Val substitution mutants to fold and function normally at the cell surface. Phe has a rigid aromatic side chain but, like Leu, is hydrophobic with a similar side chain length, so we predicted that this mutant might also be functional. In contrast to the L606A receptor, the mutants replacing Leu at position 606 with Phe, Ile, or Val exhibited increased sensitivity to [Ca2+]o with maximal responses similar to the wild type CaR (Fig. 5, A-C, Table 1). Both the L606I and L606V mutant receptors exhibited substantial increases in sensitivity to [Ca2+]o (EC50 values of 3.0 and 2.7 mM, respectively, versus the wild type CaR EC50 value of 4.7 mM), with left-shifted dose-response curves. The response of the L606F receptor was also left-shifted but less activating, with an EC50 value of 4.4 mM (Table 1). Immunoblot analysis and Endo-H treatment, which qualitatively determined the cell surface level of each receptor based on the intensity of the Endo-H-resistant 150-kDa band (Fig. 5D), indicated that the gain-of-function activities of the L606V, L606F, and L606I mutants were not due to significantly higher cell surface expression of these receptors compared with the wild type CaR but rather to changes in signal transmission. We next evaluated the functional responses of the L606I, L606V, and L606A mutants in HeLa cells by transiently expressing the receptor constructs in these cells. Both the L606I and L606V mutant receptors showed activating effects, and the L606A receptor showed a highly reduced response in the PI assay (provided as supplemental data). These findings confirm that the signaling phenotypes of these mutant receptors are determined by structural changes within the receptors and not by an HEK293 cell-specific phenomenon. DISCUSSION The transmission of an activating signal from VFTM recognition of agonist ligands in a family 3 GPCR presents a constraint not present in the major family 1 GPCR structures. Whereas studies of ECD-deleted CaR and mGluR5 receptor constructs have revealed 7TMD-contained activating site(s) for so-called allosteric ligands (15,18), indicating that these receptors may retain the activation mechanism(s) found in the homologous 7TMD structures of the family 1 GPCRs, the intact structures must undergo additional conformational transitions involving the ECD structures. In this report, we examined the peptide linker region connecting the ECD to the 7TMD of a prototypical member of the family 3 GPCRs, the CaR, to explore whether this sequence plays a role in the activation of the 7TMD upon binding of ligand to the extracellular VFTM of this receptor. Two general mechanisms have been proposed for the transmission of this signal, illustrated in Fig.
1A: 1) through direct contacts of the ECD with exo-loops of the 7TMD with the linker acting solely to constrain the spatial separation of ECD and 7TMD or 2) through the peptide linker between the ECD and the 7TMD with the transmission of the signal dependent on the conformational transitions transmitted through the linker. Results obtained from experiments with the GABA B receptor in which a random-coiled sequence was introduced into the linker region suggested that the conformation of the linker was not essential to GABA B receptor activation (11). Our results for the CaR would appear to be entirely discrepant implicating a more fundamental role for the CaR linker region. Either deletion of the linker sequence or replacing it with an unrelated sequence abolished cell surface expression of the CaR receptor constructs. Similarly, insertion of an additional sequence, including the homologous sequence from mGluR1, abrogated receptor processing and, hence, [Ca 2ϩ ] o -mediated signaling. The one alteration that was well tolerated was replacement of the nine-amino acid CaR linker sequence with the corresponding sequence from mGluR1, resulting in a receptor with diminished cell surface expression but which retained [Ca 2ϩ ] o -induced response via the VFTM. This result echoes findings from the exchange of ECD structures among family 3 GPCRs with successful expression and signaling of regulated chimeric receptors only obtained for CaR ECD/mGluR1 7TMD exchanges (19,20). Of the poorly or non-expressing constructs, the Gly-link9ins receptor bears particular scrutiny. Whereas the chimeric receptor Gly-link9ins with an insertion of a nine-amino acid long random coil Gly-peptide expressed poorly at the cell surface, by selecting a stably transformed clone of HEK293 that expressed sufficient Gly-link9ins receptor, we found no response to [Ca 2ϩ ] o but activation by the allosteric agonist NPS-R568 alone or in combination with Ca 2ϩ . This phenotype mimics what we have previously reported for an ECD-deleted construct, T903-Rhoc, which revealed the 7TMD sites for two positive allosteric agonists NPS-R568 and Calindol and also for Ca 2ϩ (15,21). These results suggest that [Ca 2ϩ ] o -activated VFTM of the Gly-link9ins mutant receptor did not transmit the activation signal to the 7TMD for G-protein signaling. These observations would seem to refute the role of the linker sequence as only a "tether" and they reveal a significant structural constraint upon this region of the CaR. Interestingly, replacement of the nine residues of the CaR peptide linker with the mGluR1 linker sequence without changing linker length produced a receptor with very similar phenotype as the wild type CaR. Our examination of the four conserved amino acid residues within this sequence identified Leu 606 as essential for CaR signaling. Replacing Leu with hydrophobic amino acids Val and Ile produced functional receptors at the cell surface with enhanced [Ca 2ϩ ] o sensitivity, whereas Phe replacement at this position produced only a modestly activating phenotype. Identification of these activating mutations at Leu 606 of the CaR eliminates a passive tether model and implies a critical role for the conformation of the linker region. In contrast to these activating mutations, transfection of HEK293 cells with a CaR construct with Ala substitution at this position produced cells with dramatically decreased surfaceexpressed receptors and impaired [Ca 2ϩ ] o -induced signaling response. 
Whereas it seems likely that the L606A mutant failed to fold correctly and remained intracellularly trapped, we cannot rule out the alternative that this mutation generated a constitutively active CaR mutant form that is internalized rapidly leading to significant loss of receptors at the cell surface. It is interesting to note that a true constitutively active CaR mutant receptor has not been reported yet and it is quite possible that such CaR mutants may rapidly internalize and degrade or that their expression may be toxic to the cells. Our data then suggest that the analysis of the family 3 GPCR signaling must consider at least two distinct types of regulation based upon the receptor structures. Signaling of GABA B receptor subunits appear to be mediated mostly by direct contacts between the ECD and 7TMD (11). These receptor subunits, in contrast to CaR and mGluR1, form heterodimers with the predominant dimeric interface encoded in a extended carboxylterminal coiled-coil intracellular structure but the ECDs do not contain the NCD (6). CaR and mGluRs form homodimers structures with disulfide linkages in the extracellular ECD structures. Our data for the CaR strongly favors a peptidelinker mechanism of activation of this family 3 GPCR that may apply to other VFTM/NCD containing family 3 GPCRs. Our data cannot discriminate between a mechanism in which a direct interaction of the ligand-bound ECD with the exo-loops of the 7TMD requires a conformational rearrangement in the peptide linker during the activation process from one in which the conformational transition of the ECD produced upon ligand binding is transmitted by the linker region conformational rearrangement. The three-dimensional x-ray crystallographic structural analyses of the mGluR1 VFTM indicate that the closed conformation of this VFTM dimeric structure brings the carboxyl-terminal portions of the monomer VFTMs into closer apposition, which may produce the conformational rearrangement of the dimeric 7TMD structures via the linker regions (12). Whether this produces a torsional transmission of the conformation or specific binding contacts between the linker sequence and the ECD and/or 7TMD require more refined structures and/or models for the intact receptors. We believe these differences seen in the linker properties between the GABA B receptor and the CaR in the activation processes may be applicable to other members of this receptor family and further research to test the generality of these results will be revealing.
Loss of Hepatic Carcinoembryonic Antigen‐Related Cell Adhesion Molecule 1 Links Nonalcoholic Steatohepatitis to Atherosclerosis Patients with nonalcoholic fatty liver disease/steatohepatitis (NAFLD/NASH) commonly develop atherosclerosis through a mechanism that is not well delineated. These diseases are associated with steatosis, inflammation, oxidative stress, and fibrosis. The role of insulin resistance in their pathogenesis remains controversial. Albumin (Alb)Cre+ Cc1flox ( fl ) /fl mice with the liver‐specific null deletion of the carcinoembryonic antigen‐related cell adhesion molecule 1 (Ceacam1; alias Cc1) gene display hyperinsulinemia resulting from impaired insulin clearance followed by hepatic insulin resistance, elevated de novo lipogenesis, and ultimately visceral obesity and systemic insulin resistance. We therefore tested whether this mutation causes NAFLD/NASH and atherosclerosis. To this end, mice were propagated on a low‐density lipoprotein receptor (Ldlr)−/− background and at 4 months of age were fed a high‐cholesterol diet for 2 months. We then assessed the biochemical and histopathologic changes in liver and aortae. Ldlr−/−AlbCre+Cc1fl/fl mice developed chronic hyperinsulinemia with proatherogenic hypercholesterolemia, a robust proinflammatory state associated with visceral obesity, elevated oxidative stress (reduced NO production), and an increase in plasma and tissue endothelin‐1 levels. In parallel, they developed NASH (steatohepatitis, apoptosis, and fibrosis) and atherosclerotic plaque lesions. Mechanistically, hyperinsulinemia caused down‐regulation of the insulin receptor followed by inactivation of the insulin receptor substrate 1–protein kinase B–endothelial NO synthase pathway in aortae, lowering the NO level. This also limited CEACAM1 phosphorylation and its sequestration of Shc‐transforming protein (Shc), activating the Shc–mitogen‐activated protein kinase–nuclear factor kappa B pathway and stimulating endothelin‐1 production. Thus, in the presence of proatherogenic dyslipidemia, hyperinsulinemia and hepatic insulin resistance driven by liver‐specific deletion of Ceacam1 caused metabolic and vascular alterations reminiscent of NASH and atherosclerosis. Conclusion: Altered CEACAM1‐dependent hepatic insulin clearance pathways constitute a molecular link between NASH and atherosclerosis. liver adenocarcinoma has become a global concern. (1) To date, there has been no U.S. Food and Drug Administration-approved drug targeting NASH owing to our limited knowledge of the pathogenesis of the initiation and progression of the disease. This is in part due to the paucity of animal models that replicate faithfully the human condition. Despite controversy, (2) insulin resistance constitutes a risk factor for the early stages of NAFLD/NASH (steatosis). (3) Thus, treatment has commonly implicated a combinational therapy aimed at ameliorating insulin sensitivity to stop disease progression. (4) In addition to liver dysfunction, patients with NAFLD/NASH are at a higher risk of developing cardiovascular diseases, including atherosclerosis. (4)(5)(6)(7) This has stimulated interest in identifying shared molecular mechanisms underlying cardiometabolic diseases. Several common risk factors have been identified, including visceral obesity, insulin resistance, dyslipidemia, oxidative stress, and inflammation. (3,(8)(9)(10)(11) Of these, the role of insulin resistance in linking NAFLD/NASH to atherosclerosis remains debatable. 
This is in part because of the heterogeneity of insulin resistance implicating altered metabolism and differential regulation of insulin signaling in a cell-and tissue-dependent manner. (12) Additionally, the causeeffect relationship between these anomalies has not been mechanistically fully resolved. (13,14) While hyperinsulinemia compensates for insulin resistance, some studies propose that hyperinsulinemia driven by impaired hepatic insulin clearance causes insulin resistance. (15,16) The latter has been mechanistically demonstrated in mice with altered expression/function of carcinoembryonic antigen-related cell adhesion molecule 1 (CEACAM1; alias Cc1). (16) Consistent with its permissive effect on insulin clearance, mice with liver-specific deletion of the Ceacam1 gene (albumin [Alb]Cre + Cc1 flox [fl]/fl ) on C57BL/6J (B6) background exhibited impaired insulin clearance leading to chronic hyperinsulinemia at 2 months of age. (17) This was followed by hepatic insulin resistance and steatohepatitis at 6-7 months of age as well as hypothalamic insulin resistance, which caused hyperphagia contributing to systemic insulin resistance with concomitant release of nonesterified fatty acids (NEFAs) and adipokines from white adipose tissue (WAT). (17) Similarly, global null mutation of Ceacam1 (Cc1 −/− ) impaired insulin clearance and caused hyperinsulinemia followed by systemic insulin resistance, steatohepatitis, and visceral obesity, which were all reversed with liver-specific rescuing of Ceacam1. (18) In parallel, Cc1 −/− developed hypertension with endothelial and cardiac dysfunction in relation to increased oxidative stress, which were all reversed by liver-specific rescuing of Ceacam1. (19) Mechanistically, this was mediated by hyperinsulinemia-driven downregulation of cellular insulin receptors and subsequent reduction in plasma NO, with a reciprocal increase in endothelin-1 (ET-1) production (19) favoring vasoconstriction over vasodilation. (20) In addition to an altered cardiometabolic system, global Cc1 −/− mice also uniquely develop fatty streaks and plaque-like lesions in aortae despite the absence of proatherogenic hypercholesterolemia and hypertriglyceridemia. (21) These loss-and gain-of-function models highlight the important role that impaired CEACAM1-dependent insulin clearance pathways play in hyperinsulinemia-driven insulin resistance, NAFLD, and cardiovascular anomalies. Given that Cc1 −/− mice develop more progressive NASH features when fed a high-fat diet, including fibrosis and apoptosis, (22,23) we investigated whether liver-specific AlbCre + Cc1 fl/fl nulls develop NASH and atherosclerosis when propagated on a B6.low-density lipoprotein receptor (Ldlr) −/− background, with an overarching goal to investigate the role of hepatic insulin resistance in the mechanistic link between NAFLD/NASH and atherosclerosis. miCe maintenanCe All animals were housed in a 12-hour dark-light cycle at the Division of Laboratory Animal Resources at each institution. Starting at 4 months of age, male mice were fed a high-cholesterol (HC) atherogenic diet ad libitum (Harlan Teklad, TD.88137; Harlan, Haslett, MI) containing 0.2% total cholesterol (42% kcal from fat; 42.7% kcal from carbohydrate [high sucrose 34% by weight]) for 2 months (unless otherwise noted). All procedures were approved by the institutional animal care and use committees at each institution. metaBoliC pHenotyping Body composition was assessed by nuclear magnetic resonance (Bruker Minispec, Billerica, MA), as described. 
(19) For insulin or glucose tolerance, awake mice were fasted for 6 hours and injected intraperitoneally with either insulin (0.75 units/kg body weight [BW] human regular insulin [Novo Nordisk, Princeton, NJ] or 1.5 g/kg BW dextrose solution). Blood glucose was measured from the tail at 0-180 minutes. Retroorbital venous blood was drawn at 1100 hours from overnight-fasted mice into heparinized microhematocrit capillary tubes (Fisherbrand, Waltham, MA). Plasma and tissue biochemistry parameters were assessed as detailed in the Supporting Information. EX-VIVO palmitate oXiDation This assay was carried out in the presence of [ 1-14 C]palmitate (0.5 mCi/mL; American Radiolabeled Chemicals Inc., St Louis, MO)-2 mM adenosine triphosphate, terminated with perchloric acid to recover trapped CO 2 radioactivity. The partial oxidation products were then measured by liquid scintillation using CytoCint (MP Biomedicals, Solon, OH). The oxidation rate was expressed as the sum of total and partial fatty acid oxidation (nmoles/g/minute). (18) liVeR Histology Formalin-fixed paraffin-embedded sections were stained with hematoxylin and eosin (H&E). Sections were deparaffinized and rehydrated before being stained with 0.1% sirius red stain (Direct Red80; Sigma-Aldrich), as described. (23) en-FaCe anD lesion analysis Aortae were dissected from the root to the abdominal area and then formalin fixed. Connective tissues were removed from the longitudinally opened aortae, stained with Oil-Red-O (ORO) (#O0625; Sigma-Aldrich), fixed on a coverslip, and photographed with an Olympus CKX41 (Tokyo, Japan). The total surface and ORO-positive areas were determined using CellSens Standard software in area of pixel 2 . The extent of atherosclerotic lesions was defined as the percentage of total ORO-positive lesion area/total surface area. aoRtiC Root seCtioning anD plaQue analysis Hearts were perfused through the left ventricle with 1X phosphate-buffered saline (PBS; Thermo Scientific, Waltham, MA), followed by 4% paraformaldehyde (Sigma-Aldrich) and cut and embedded in optimal cutting temperature (OCT) compound (Tissue-Tek 4583). This was followed by frozen sectioning on a microtome-cryostat (10 µm) starting from where the aorta exits the ventricle and moving toward the aortic sinus. Sections were stained with ORO or trichrome (Gomori's Trichrome Stain Kit, 87020; Thermo Scientific) or immunostained with monocyte and macrophage 2 (MOMA-2) antibody (1:500; Abcam, Cambridge, United Kindom) overnight at 4°C before incubating with secondary antibody for 1 hour (immunohistochemistrylabeled streptavidin biotin kit; Dako). Images were taken at 4× using an Olympus SZX7-TR30 microscope and quantified with the CellSens Standard program. intRaVital miCRosCopy oF leuKoCyte aDHesion on tHe CaRotiD aRteRy Mice were anesthetized with ketamine and xylazine (100/10 mg/kg) and fixed on a 15-cm cellculture lid in a supine position. The right jugular vein and left carotid artery were exposed through a middle incision, as described. (25) We injected 100 µL of 0.5 mg/mL rhodamine 6G (R4127; Sigma-Aldrich) through a jugular vein puncture to label cells having mitochondria, including leukocytes. The carotid artery was carefully isolated from the surrounding tissue, and one piece of small, U-shaped, black plastic was placed under the vessel to block background fluorescence. 
The carotid artery (~4-5 mm length) was observed in real time using an intravital microscope (Leica DM6 FS), and video images were captured with a 14-bit RetigaR1 charge-coupled device color digital camera (Teledyne QImaging, Surrey, Canada) and STP7-S-STDT Streampix7 software (Norpix, Montreal, Canada). Video images were analyzed offline for leukocyte adhesion. Cells that adhered to the vessel wall without rolling or moving for at least 3 seconds were counted over the vessel observed. Total numbers were used for statistical analysis. isolation oF peRitoneal maCRopHages Mice were injected intraperitoneally with 1 mL thioglycollate (T9032; Sigma-Aldrich). Four days later, 6-8 mL of cold 1X PBS was injected into the peritoneal cavity; the peritoneal liquid was aspirated into conical centrifuge tubes and centrifuged to collect and culture the pellet in Roswell Park Memorial Institute (RPMI) medium (Thermo Scientific). Cells were treated overnight with 100 µg/mL of native low-density lipoprotein (LDL) (human plasma, 99% #J65039; Alfa Aesar, United Kingdom) or oxidized LDL (ox-LDL) (human plasma, Hi-TBAR #J65261; Alfa Aesar), fixed with 10% formalin for 10 minutes, and rinsed in PBS once (1 minute) and then with 60% isopropanol for 15 seconds. Cells were stained with filtered ORO (#O0625; Sigma-Aldrich) at 37°C for 1 minute in the dark, washed with 60% isopropanol for 15 seconds, then rinsed with PBS 3 times for 3 minutes each. Images were taken at 20× magnification. Bone maRRoW isolation FRom tiBia anD FemuR Mice were euthanized and their entire leg dissected. Skin was peeled off, and the tibia and femur were separated and placed in RPMI. The ends of both tibias and femurs were cut to flush out the bone marrow by using a syringe. Cells were collected after centrifugation and cultured in RPMI supplemented with 10-20 ng/mL recombinant macrophage colonystimulating factor (Thermo Scientific). Total RNA was isolated from cells using NucleoSpin RNA (740955.50; Macherey-Nagel, Bethlehem, PA). Complementary DNA (cDNA) was synthesized with the iScript cDNA Synthesis Kit (Bio-Rad), using 1 μg of total RNA and oligo deoxythymine primers. cDNA for total and Ceacam1-long (L) and short (S) isoforms were evaluated by quantitative reversetranscription (qRT)-PCR (StepOne Plus; Applied Biosystems, Foster City, CA) and normalized against 18S, using primers listed in Supporting Table S1. WesteRn anD qRt-pCR analyses Western blots and qRT-PCR analyses were carried out as routinely done and as detailed in the Supporting Information. statistiCal analysis Data were analyzed using one-way analysis of variance (ANOVA) with Tukey's test for multiple comparisons, using GraphPad Prism6 software. Data were presented as mean ± SEM. P < 0.05 was considered statistically significant. Ldlr −/− AlbCre + Cc1 fl/fl miCe maniFesteD insulin ResistanCe Starting at 5 weeks of HC intake (Table 1; Supporting Fig. S3), Ldlr −/− AlbCre + Cc1 fl/fl mice manifested a higher body weight gain and exhibited an increase in fat and visceral mass with a reciprocal decrease in lean mass relative to the three control groups after 2 months of HC (Table 1). They maintained fasting hyperinsulinemia relative to their littermate controls (Table 1), with impaired insulin clearance, which was measured by steady-state C-peptide/insulin molar ratio (Table 1). Null mice exhibited higher postprandial blood glucose levels ( Table 1) and intolerance to exogenous insulin and glucose relative to controls (Fig. 1A,B). 
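As a rough illustration of the group comparisons described above (one-way ANOVA with Tukey's post hoc test, run by the authors in GraphPad Prism6) applied to a tolerance-test readout, the sketch below computes a trapezoidal area under the glucose curve per mouse and compares genotypes with SciPy and statsmodels. All numbers are simulated; the genotype labels follow the paper, but the values are not the study's data.

```python
# Simulated example only: fabricated glucose-tolerance curves for four genotypes.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

time_min = np.array([0, 15, 30, 60, 120, 180])

def gtt_auc(glucose_mg_dl):
    """Trapezoidal area under the blood-glucose curve (mg/dL x min)."""
    return np.trapz(glucose_mg_dl, time_min)

rng = np.random.default_rng(0)
groups = ["Alb-Cc1+/+", "Alb+Cc1+/+", "Alb-Cc1fl/fl", "Alb+Cc1fl/fl"]
mean_glucose = {"Alb-Cc1+/+": 160, "Alb+Cc1+/+": 165,
                "Alb-Cc1fl/fl": 170, "Alb+Cc1fl/fl": 230}   # made-up group means

auc, labels = [], []
for g in groups:
    for _ in range(8):                                      # 8 simulated mice/genotype
        curve = mean_glucose[g] + rng.normal(0, 15, size=time_min.size)
        auc.append(gtt_auc(curve))
        labels.append(g)

auc = np.array(auc)
print("one-way ANOVA:", f_oneway(*[auc[np.array(labels) == g] for g in groups]))
print(pairwise_tukeyhsd(auc, labels, alpha=0.05))
```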
Consistent with hyperinsulinemia repressing insulin receptor (IR) expression, (26) immunoblotting with α-IR β and normalizing against α-tubulin showed a ~50% reduction in IR level in liver and WAT of mutants (Fig. 1Cb,c). Insulin release (Fig. 1Ca) in control but not null mice refed (RF) for 7 hours following an overnight fast induced IR β phosphorylation relative to fasted (F) mice (Fig. 1Cb,c). Together, this demonstrated systemic insulin resistance with elevated fasting hyperglycemia in Ldlr −/− AlbCre + Cc1 fl/fl mice (Table 1). Ldlr −/− AlbCre + Cc1 fl/fl mice exhibited higher levels of hepatic and plasma total and free cholesterol. Plasma LDL cholesterol (LDL-C) and very low-density lipoprotein cholesterol (VLDL-C) were increased with a reciprocal decrease in high-density lipoprotein cholesterol (HDL-C) content in null mice (Table 1). This is consistent with increased hepatic cholesterol production in liver-specific inactivation of CEACAM1 (L-SACC1) mice with liver-specific CEACAM1 inactivation. (27) Accordingly, hepatic Srebp2 mRNA levels Male mice (4 months of age, n > 7/genotype) were fed an HC diet for 2 months before being killed. Except for fed blood glucose level that was assessed in blood drawn at 10 pm, mice were fasted overnight from 5 pm until 11 am the next day when retro-orbital blood was drawn and tissues were collected. Visceral adiposity was calculated as % of gonadal plus inguinal WAT per body mass; insulin clearance as steady-state plasma C/I molar ratio; and plasma VLDL-C was calculated as triacylglycerol × 0.2. Data were analyzed by one-way ANOVA with Tukey's test for multiple comparisons, and values are expressed as mean ± SEM. *P < 0.05 vs. Alb -Cc1 +/+ . † P < 0.05 vs. Alb + Cc1 +/+ . ‡ P < 0.05 vs. Alb -Cc1 fl/fl . Abbreviations: ALT, alanine aminotransferase; AST, aspartate aminotransferase; C/I, C-peptide/insulin; NMR, nuclear magnetic resonance; PCSK9, proprotein convertase subtilisin/kexin type 9. (Supporting Table S2), a master regulator of genes involved in cholesterol synthesis, including HMG-CoA reductase (Hmgcr) (Supporting Table S2), were elevated in parallel to its increased enzymatic activity (Fig. 3C) and an increase in plasma apolipoprotein B (ApoB) levels in mutants ( Table 1). The increase in hepatic Srebp1c and Srebp2 mRNA was not associated with changes in hepatic insulin-induced gene 1 (Insig-1) and Insig-2a/b mRNA levels (Supporting Table S2). Whether the regulatory effect of these endoplasmic reticulum membrane proteins on cholesterol synthesis (28) were altered by liver-specific Ceacam1 deletion remains to be examined. Nevertheless, it is possible that, like Srebp1c, increased transcriptional activity of Srebp2 in Ldlr −/− AlbCre + Cc1 fl/fl mice was driven by chronic hyperinsulinemia. (29) Fig. 2. Histologic and western analysis of NASH. Mice were fed HC for 2 months, as in Fig. 1. (A) H&E-stained liver sections in mice (n > 5/genotype), as in Fig. 1. Yellow arrows highlight the infiltration of inflammatory foci. Magnification x20. (B) Sirius red to detect bridging fibrosis (n > 5/genotype). (C) Western analysis was performed on liver lysates by immunoblotting with antibodies (α-) against pNF-kB, pStat3, and pSmad3 normalized against total proteins in parallel gels immunoblotted with α-NF-κB, Stat3, and Smad 3 antibodies, respectively. Similarly, immunoblots of CHOP and SMA were normalized against tubulin in parallel gels to account for total protein loaded. Gels represent 2 mice/genotype performed on different sets of mice/protein. 
Abbreviation: CHOP, CCAAT/enhancer binding protein homologous protein. Elevated LDL-C and non-HDL-C in Ldlr −/− AlbCre + Cc1 fl/fl mice could also be caused by reduced clearance, as supported by elevated plasma levels of ApoB and proprotein convertase subtilisin/ kexin type 9 (PCSK9) ( Table 1). Also supporting this finding is reduced expression of LDL receptor-related protein 1 (Supporting Table S2), which is involved in hepatocytic clearance of chylomicron remnants. Whether CEACAM1 modulates LDL clearance is unclear, but mRNA levels of hepatic Niemann-Pick type C1 protein (Npc1), a late endosomal protein involved in exporting LDL-C to other cellular compartments, was reduced by ~3-fold in nulls (Supporting Table S2). This suggests partitioning of free cholesterol to mitochondria where it could contribute to the ~3-fold to 4-fold reduction in plasma glutathione (GSH) in nulls (Fig. 4Aa). (30) Together, this demonstrates that Ldlr −/− AlbCre + Cc1 fl/fl mice developed the dyslipidemia profile of NAFLD and atherosclerosis. (8,31) Consistent with oxidative stress causing apoptosis in the presence of high TNFα levels, (10) CCAAT/ enhancer binding protein homologous protein (CHOP) levels were elevated in null livers (Fig. 2C). This was supported by a ~2-fold increase in mRNA levels of markers of hepatocyte injury (Supporting Table S2), without noticeable ballooning ( Fig. 2A) ORO staining of peritoneal macrophages treated with native or ox-LDL indicated a shared susceptibility to Ox-LDL for lipid-loaded macrophage formation by all mouse groups (Fig. 6E). This ruled out differential macrophage preactivation (priming), assigning a major role for systemic factors in the development of atherosclerosis in Ldlr −/− AlbCre + Cc1 fl/fl mice. Discussion Fed a regular chow diet, mice with global Ceacam1 deletion (Cc1 −/− ) and liver-specific deletion (AlbCre + Cc1 fl/fl ) and inactivation (L-SACC1) manifested hyperinsulinemia-driven insulin resistance, early stage NASH (steatohepatitis with hepatic fibrosis), and visceral obesity. (17,35,36) In addition to this cluster of metabolic abnormalities, Cc1 −/− mice developed several cardiovascular features of metabolic syndrome: endothelial dysfunction, hypertension, kidney dysfunction, cardiac dysfunction with myocardial hypertrophy, and small intimal plaque-like lesions with fat and macrophage deposition in aortae. The restricted size of these aortic lesions (~5-fold smaller than ApoE −/− ) (21) was attributed to their limited lipidemia (elevated plasma NEFA in the absence of hypercholesterolemia or hypertriglycedemia). These metabolic factors, driven by impaired insulin clearance owing to Ceacam1 loss in liver and kidney combined with increased endothelial permeability and defective vascular remodeling due to the removal of Ceacam1 in endothelial cells (37) and a robust proinflammatory response to the loss of Ceacam1 in immune cells, (38) could have all contributed to the development of plaque-like lesions in Cc1 −/− mice. By specifically deleting Ceacam1 from the liver, the current studies demonstrated that on the Ldlr −/− background, an HC diet-induced increase in hepatic cholesterol and triglyceride synthesis and their defective clearance synergized with hepatic insulin resistance to cause progressive NASH and atheroma. 
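The en-face readout defined earlier (percentage of ORO-positive lesion area over total aortic surface area, both measured in pixels) is, computationally, a pixel-counting step. The sketch below shows that calculation on a synthetic image array; the red-channel threshold and mask are arbitrary assumptions, not part of the CellSens workflow used in the study.

```python
# Synthetic stand-in for an ORO-stained, en-face aorta image; thresholds are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
red_channel = rng.uniform(0.0, 1.0, size=(400, 120))   # fake 400x120-px image
aorta_mask = np.ones_like(red_channel, dtype=bool)     # pixels inside the opened aorta
aorta_mask[:, :10] = aorta_mask[:, -10:] = False       # crude background margins

oro_positive = (red_channel > 0.8) & aorta_mask        # "stained" pixels above threshold

lesion_area_px = int(oro_positive.sum())
total_area_px = int(aorta_mask.sum())
lesion_extent = 100.0 * lesion_area_px / total_area_px
print(f"ORO-positive: {lesion_area_px} px of {total_area_px} px "
      f"({lesion_extent:.1f}% of aortic surface)")
```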
As we have previously shown, (17) loss of CEACAM1 in liver impaired hepatic insulin clearance to cause chronic hyperinsulinemia and subsequent down-regulation of the insulin receptor and blunting of insulin signaling in hepatic and extrahepatic tissues (such as WAT and aortae). This led to increased hepatic FASN activity and de novo lipogenesis, which, together with compromised fatty acid β-oxidation, led to hepatic steatosis. In addition to hyperphagia driven by reduced insulin receptor in the hypothalamus, (17) repartitioning of hepatic VLDL-triglycerides to WAT induced visceral adiposity and the subsequent release of NEFA and adipokines (such as IL-6 and TNFα) that triggered systemic insulin resistance and activated several proinflammatory pathways in Ldlr−/−AlbCre+Cc1fl/fl livers and aortae. The current studies provide in vivo evidence that hepatic insulin resistance acts as an inciting factor in the pathogenesis of NASH and atherosclerosis. Mice with liver-specific deletion of the insulin receptor developed extreme insulin resistance and hyperinsulinemia (in part due to impaired insulin clearance); on an atherogenic diet, they developed atherosclerosis. (39) In contrast, Ldlr−/− mice with preserved hepatic insulin sensitivity but total loss of the insulin receptor in extrahepatic peripheral tissues, including endothelial cells, showed partial protection against atherosclerosis. (40) Similarly, liver-specific rescuing of Ceacam1 restored insulin clearance and subsequently curbed hyperinsulinemia, systemic insulin resistance, and all cardiometabolic abnormalities of Cc1−/− nulls. (18,19) Together with the current studies, this highlights the proatherogenic role of hepatic insulin resistance in association with dyslipidemia (low HDL, high VLDL and small dense LDL-ApoB particles, and high triglyceride secretion). (31)

Fig. 6. (A) The right jugular vein and left carotid artery were exposed through a middle incision. Carotid arteries were isolated from the surrounding tissues, and intravital microscopy of leukocyte adhesion on the carotid artery was assessed in controls (Alb−Cc1+/+, Alb+Cc1+/+, Alb−Cc1fl/fl) and Alb+Cc1fl/fl mutants fed HC for 3 months (n > 5/genotype). Cells that adhered to the vessel wall without rolling or moving for at least 3 seconds were counted over the observed vessel by using an intravital microscope. Video images were analyzed offline for leukocyte adhesion. Total numbers were used for statistical analysis. Values are expressed as mean ± SEM. *P < 0.05 vs. Alb−Cc1+/+ (white); †P < 0.05 vs. Alb+Cc1+/+ (gray); §P < 0.05 vs. Alb−Cc1fl/fl (black). (B) Western analysis was performed on aorta lysates by immunoblotting with antibodies (α-) against phosphorylated NF-κB, STAT3, and Smad3, normalized against parallel gels immunoblotted with antibodies against total NF-κB, STAT3, and Smad3, respectively. Gels represent analysis of 2 mice/group performed on different sets of mice/protein. (C) qRT-PCR analysis of total, long isoform (Cc1-4L), and short isoform (Cc1-4S) Ceacam1 mRNA levels performed in triplicate relative to 18S (n = 5/genotype). Values are expressed as mean ± SEM. *P < 0.05 vs. Alb−Cc1+/+ (white); †P < 0.05 vs. Alb+Cc1+/+ (gray); §P < 0.05 vs. Alb−Cc1fl/fl (black). (D) Bone marrow macrophages were isolated from the tibia and femur of Alb−Cc1+/+ (white), Alb+Cc1+/+ (gray), Alb−Cc1fl/fl (black), and Alb+Cc1fl/fl (hatched) mice and grown in RPMI medium supplemented with recombinant M-CSF. Cells were analyzed by qRT-PCR in triplicate to assess total Ceacam1, Cc1-4L, and Cc1-4S isoforms against 18S. Values are expressed as mean ± SEM. (E) Mice (n = 5/group) were fed HC for 2 months and then injected with thioglycollate into the peritoneal cavity. Their peritoneal macrophages were isolated, cultured in RPMI medium, and treated in vitro with native LDL (100 µg/mL) or ox-LDL (100 µg/mL). Cells were fixed with 10% formalin, stained with filtered ORO, and counterstained with hematoxylin. Images were taken at 20× magnification. Abbreviation: M-CSF, macrophage colony-stimulating factor.

Fig. 7. Insulin signaling in aortae. Aortae were removed from F and RF mice (n > 6/genotype/treatment) fed HC for 2 months. Western blot analysis was carried out to assess (A) IRβ phosphorylation (α-pIRβ), normalized to loaded IRβ protein levels, which were in turn normalized by immunoblotting parallel gels with α-tubulin. (B) Aliquots were subjected to immunoprecipitation with α-IRS1 antibody followed by immunoblotting with α-pIRS1 antibody (top gel), normalized to total α-IRS1 (lower gel). The immunopellet was also immunoblotted with α-pIRβ antibody to detect binding between IRS1 and IRβ (middle gel). (C) Western analysis was performed by immunoblotting with α-pAkt and (D) α-peNOS in parallel with immunoblotting with antibodies against Akt and eNOS, respectively, for normalization. (E) Coimmunoprecipitation was carried out to detect pCC1 (top gel) or IRβ (middle gel) in the Shc immunopellet, as in (B). (F) Immunoblotting with α-pMAPK in parallel with MAPK for normalization. The apparent molecular weight (kDa) is indicated on the right side of each gel. Analysis was performed on two different mice/genotype using different sets of mice/protein.

The regulation of insulin signaling by CEACAM1 provides a molecular basis for the "dual insulin signaling hypothesis" (6) that links impaired hepatic insulin clearance to common features of NAFLD/NASH and atherosclerosis, such as low-grade inflammation, oxidative stress, endothelial dysfunction, and fibrosis. As we have shown, (16,21,41) insulin-stimulated phosphorylated CEACAM1 binds to Shc to sequester it, reduce coupling of the ras-MAPK pathway to the insulin receptor, and restrict NF-κB activation and the transcription of profibrogenic ET-1. By sequestering Shp2, CEACAM1 provides positive feedback on the IRS1/2-phosphoinositide 3-kinase (PI3K)-Akt-eNOS pathway, promoting NO synthesis and maintaining endothelial function. Thus, insulin-stimulated phosphorylation of CEACAM1 mediates the anti-oxidative and vasodilation-permissive effects of insulin. (20) In hepatocytes, CEACAM1 binding to Shc stabilizes the insulin-insulin receptor complex and targets it to the endocytosis pathways, while its Shp2 binding facilitates insulin translocation between the lysosomal and endosomal compartments to mediate insulin degradation and receptor recycling to the plasma membrane. These shared mechanisms of the regulatory effect of CEACAM1 on insulin metabolism and signaling support our findings that liver-specific deletion of Ceacam1 impairs insulin clearance to cause chronic hyperinsulinemia and modulates insulin signaling in liver and extrahepatic tissues (such as aortae).
This tips the balance toward oxidative stress and endothelial dysfunction on one hand (altered metabolic arm of signaling) and inflammation and fibrosis (altered proliferative arm of signaling). Endothelial dysfunction in aortae induces VCAM-1 and ICAM-1 expression to capture leukocytes on the endothelium. That altered insulin signaling is required to mediate the metabolic basis of atherosclerosis is supported by findings that hyperinsulinemia in the absence of impaired insulin signaling and lipid homeostasis did not cause atherosclerosis in ApoE −/− haploinsufficient insulin-receptor mice. (42) Consistent with NAFLD/NASH (43) and atherosclerosis (12) being linked to insulin resistance by their shared low-grade proinflammatory state, Ldlr −/− AlbCre + Cc1 fl/fl mice manifested a remarkable increase in sera and tissues levels of IL-6, TNFα, IL-1β, MCP1, and others in association with ectopic fat accumulation and an increase in visceral obesity. This points to activation of macrophages in liver and aortae, likely by IFNγ released from the atherogenic CD4 + Th1 helper cells. Activation of NF-κB by TNFα can increase production of ET-1 from hepatic and aortic macrophages (44) to contribute to fibrosis in both tissues. The rise in TNFα can also suppress Smad7 and its inhibitory effect on the TGFβ pathway, which together with IL-6 would lead to fibrosis. (45) In light of intact expression of CEACAM1 isoforms and absence of preactivation of Ldlr −/− AlbCre + Cc1 fl/fl macrophages and lack of macrophage priming or preactivation, this altered response must be related to a cell-autonomous effect brought about by Ceacam1 deletion in liver, likely due to an increase in fat deposition and macrophage recruitment. Moreover, the activated IL-6/STAT3 pathway up-regulated Mcp-1/ Ccl2 expression in monocytes/macrophages and repressed Irf-8 to induce Cd11b expression in macrophages to contribute to their elevated transendothelial migration in aortae (46) and to steatohepatitis (47) in Ldlr −/− AlbCre + Cc1 fl/fl mice. Collectively, the cardiometabolic phenotype of Ldlr −/− AlbCre + Cc1 fl/fl mice provides in vivo evidence that NAFLD/NASH and atherosclerosis do not simply develop in parallel but that insulin resistance connects them mechanistically within the overall lipidemic-inflammatory microenvironment of metabolic syndrome. The current studies assign a key role for altered hepatic CEACAM1-dependent insulin clearance pathways in insulin resistance characterized by an imbalance of insulin signaling favoring ectopic fat accumulation, visceral obesity, endothelial dysfunction, oxidative stress, inflammation, and fibrogenesis. Thus, the current studies demonstrate a causative role for NAFLD in the pathogenesis of atherosclerosis mediated by impaired insulin clearance driving chronic hyperinsulinemia and followed by hepatic insulin resistance and increased lipogenesis and lipid secretion. The importance of these findings is supported by the reported lower hepatic CEACAM1 levels in patients with insulin-resistant obesity with NAFLD (48) and the attribution of hyperinsulinemia to impaired insulin clearance in these patients. (49) With Ceacam1 expression being induced (50) by most of the drugs that are used to treat these patients, (4) the current studies provide impetus to test whether CEACAM1 is a potential target for drug development against the cardiometabolic anomalies of metabolic syndrome.
The Relationship of CSR and Financial Performance: New Evidence from Indonesian Companies The research objectives of the study are to investigate whether there are any positive relationships between CFP and CSR under the slack resource theory and to investigate whether there are any positive relationships between CSR and CFP under good management theory by integrating concept of strategic management into the definition of CSR as sustainable corporate performance including economy, social, and environment. To answer the research questions of this study, questionnaire-based survey research design was used. The questionnaires that include items representing variables in this study (corporate social performance, corporate financial performance, business environment, strategy, organization structure, and control system) were sent to the respondents who are managers of state-owned companies (BUMN) and private-owned companies using post and e-mail services. There is a positive relationship between CFP and CSP under the slack resource theory and under good management theory. Introduction The phenomenon of management's low understanding of the CSR (corporate social responsibility)-CFP (corporate financial performance) link and the perceived CSR across the companies in Indonesia economy can raise some problems on the social and environmental performance. Even though, some attempts have been conducted to improve the social and environmental performance in Indonesian business practice, the performance has so far not indicated satisfactorily. There is no specific study explaining the phenomena. Some studies (Fauzi et al., 2007;Fauzi, 2008;Fauzi et al., 2009) on CSR in Indonesia have been conducted, but they focus on CSR disclosure in companies' annual report and do not touch managerial perception that is considered important approach in the literature (Cochran and Wood, 1984;Orlizky et al., 2003). In addition, studies of the CSR-CFP link using contingency factors have also been done, but the contingency factors used in the studies focus on common factor affecting the CSR such the size company and type of industry and not related to important factors affecting corporate performance (for example, Russo and Fouts, 1997;Rowley and Berman;McWilliam and Siegel, 2001;Husted, 2000;Brammer and Pavelin, 2006;Fauzi et al., 2007a and2007b). Based on understanding of the concept TBL (triple bottom line) coined by Elkington (1994), the three factors need to be considered as the CSR concept is an extended corporate performance. The approach is also a redefined concept of CSR concept as suggested by Fauzi (2009). This study is exactly the first attempt considering the important factors of corporate performance in affecting CSR under two theories: slack resource and good management theory. The demand for business considering the interest of stakeholder groups has recently become increasingly common across the world. The demand has emerged ever since the notion of corporate social responsibility (CSR), with other synonymous names, among others, sustainability, corporate accountability, social performance, and triple bottom line (TBL), has been introduced three decades ago. As a result, the term corporate performance has been extended to include not only financial aspect, but also social and environmental dimensions. Indonesia is not exceptional for the demand for the implementation of CSR and its various synonyms in the business practices. 
The demand has been met. Based on a review of the accounting and strategic management literatures, corporate performance can be seen as the result of matching business environment, strategy, internal structure, and control system (Lenz, 1980; Gupta and Govindarajan, 1984; Govindarajan and Gupta, 1985; Govindarajan, 1988; Tan and Lischert, 1994; Langfield-Smit, 1997). Thus it can be argued that corporate performance, understood in the sense of the TBL, should be affected by several important variables: business environment, strategy, structure, and control system. Therefore, a better attempt to explain the relationship between CSP and CFP needs to be made using the integrated model suggested in the accounting and strategic management literatures. The research objectives of this study are to investigate whether there is a positive relationship between CFP and CSR under slack resource theory and whether there is a positive relationship between CSR and CFP under good management theory, by integrating concepts of strategic management into a definition of CSR as sustainable corporate performance encompassing economic, social, and environmental dimensions. The study also addresses the methodological problems that are the source of the conflicting results on the CSP-CFP link. In explaining the relationship between CSP and CFP, two theories from the management literature may be adapted: (1) slack resource theory and (2) good management theory, or the resource-based perspective of competitive advantage (Miles et al., 2000). Slack resource theory is built on the view that a company is able to carry out its activities because of the resources it owns, which have normally been dedicated to predefined activities. The function of these resources is to enable the company to successfully adapt to internal pressures for adjustment or to external pressures for change (Buchholtz et al., 1999). The resources the company needs in order to adapt successfully are slack in nature, defined as any available or free resources (financial and other organizational resources) used to attain a particular company goal (see, for example, Bourgeois, 1981; Jensen, 1986). According to Waddock and Graves (1997), when a company's financial performance improves, slack resources become available that enable the company to invest in corporate social performance such as community relations, employee relations, and environmental performance. Some activities conducted by the company in the domain of corporate social performance are meant to develop and enhance the company's competitive advantage through image, reputation, segmentation, and long-term cost savings (Miles & Covin, 2000; Miles & Russel, 1997). McGuire et al. (1988, 1990) have provided some empirical support for the theory. Good Management Theory Good management theory, taken up by Waddock and Graves (1997) in explaining the CSP-CFP link, is a further articulation of stakeholder theory (Donaldson & Preston, 1995). The proposition developed under good management theory is that a company should try to satisfy its stakeholders without presupposing its financial condition. In so doing, the company will gain a good image and reputation. From the resource-based perspective, these attributes are intangible assets that contribute to the company's competitive advantage (Barney, 1991).
Essentially, the theory encourages managers to continuously seek better ways to improve the company's competitive advantage, which ultimately can enhance the company's financial performance. According to Miles and Covin (2000), environmental performance is an alternative way to satisfy stakeholders and can be a distinct layer of advantage that intensifies competitive power. Proponents of good management theory also suggest that good management practice is highly related to CSP because it can improve a company's relationships with its stakeholders, and this in turn will improve the company's financial performance (Donaldson & Preston, 1995; Freeman, 1994; Waddock & Graves, 1997) and its competitive advantage (Prahalad & Hamel, 1994; Waddock & Graves, 1997). Good management theory has received some empirical support (McGuire et al., 1988, 1990; Waddock & Graves, 1997). CSP-CFP Relationship Based on the literature review, the relationship between CSP and CFP could be positive, negative, or neutral. Griffin and Mahon (1997) reviewed studies of the relationship between CSP and CFP for the 1970s (16 studies), the 1980s (27 studies), and the 1990s (8 studies), a total of 51 articles, and mapped the direction of the relationship for each period. In the 1970s, 16 studies were reviewed, with 12 findings showing positive relationships. For the 1980s and 1990s, positive findings accounted for 14 of the 27 studies and 7 of the 8 studies, respectively. Negative findings were reported by 1 study in the 1970s, 17 studies in the 1980s, and 3 studies in the 1990s. Inconclusive findings were provided by 4 studies in the 1970s, 5 studies in the 1980s, and none in the 1990s. It should be noted that a single study could yield more than one finding, because one study may use one approach to measuring CSP and one or more approaches to measuring CFP, producing mixed results within a study. Some studies report both a positive finding and a no-effect/inconclusive finding, for example Anderson et al. (1980) and Fry et al. (2001). Others report both positive and negative relationships, for example Cochran and Wood (1984), Cofrey and Fryxell (1991), and McGuire et al. (1988). This is in line with the suggestion of Wood and Jones (1995) that mismatched measurement of CSP and CFP can contribute to the inconsistent results on the relationship between CSP and CFP. The work of Griffin and Mahon (1997) was not all-inclusive; some additional studies contributed to the direction of the CSP-CFP relation in the 1990s. In that period, a positive direction was also reported by Worrell, Davidson III, and Sharma (1991), Preston and O'Bannon (1997), Waddock and Graves (1997), and Roman et al. (1999). A negative result was revealed by Wright and Ferris (1997), and a subsequent study showed an inconclusive result. Mahoney and Roberts (2007), based on Canadian companies and excluding the environmental aspect from the CSR variable to treat it as a separate variable, examined the effects of corporate social and environmental performance on financial performance and institutional ownership, using company size, financial leverage, and type of industry as control variables. The results indicated that while environmental performance significantly and positively affected financial performance, the corporate social performance variable did not.
In addition, Mahoney and Roberts (2007) found that although a positive relation existed between corporate social performance and institutional ownership, no such relation existed for the environmental performance variable. Beyond providing results on the direction of the relationship that differ from the work of Griffin and Mahon (1997), Roman et al. (1999) corrected the table in Griffin and Mahon's (1997) work for erroneous conclusions, moving some results from negative to positive and others from positive or negative to inconclusive, and for the invalidity of the CSP or CFP measures used by the authors of the studies Griffin and Mahon (1997) reviewed. The corrections may reflect the invalidity of some research results included in Griffin and Mahon's (1997) list. For results that Griffin and Mahon (1997) generalized erroneously, Roman et al. (1999) reclassified the list from negative to positive direction and from positive or negative to inconclusive. In their new table summarizing the direction of the CSP-CFP relation, Roman et al. (1999) removed articles with the measurement validity problems mentioned above and replaced studies supplanted by later work in Griffin and Mahon's (1997) table with the newer studies. Roman et al. (1999) reviewed 46 studies comprising 51 research results (findings), of which 33 (65%) showed positive associations. In more recent work, Margolis and Walsh (2003) also mapped studies investigating the CSP-CFP relation, as Griffin and Mahon (1997) had done, using a wider period (1972-2002) and 127 published studies. Of these, 70 studies (55%) reported a positive direction, only 7 studies showed a negative direction, 27 studies were inconclusive, and 23 studies found results in both directions. Gray (2006), in his review of studies investigating the relationship between CSP and CFP, argued that no sound theory exists that would plausibly produce the relationship, in addition to the methodological problems in previous studies; both can lead to inconclusive results. This argument was also supported by Murray et al. (2006) in their cross-sectional data analysis; however, using longitudinal data analysis, they found different results. In the most recent study, Hill et al. (2007) investigated the effect of corporate social responsibility on financial performance in terms of market-based measures and found a positive result over a long-term horizon. Another issue Griffin and Mahon raised about the relationship between CSP and CFP concerns causality. In an effort to meet stakeholders' expectations, every company should try to improve corporate social performance over time while also improving economic/financial performance. One question that arises is which comes first: corporate social performance or financial performance. Waddock and Graves (1997) and Dean (1999) put forward two theories to address the question: slack resource theory and good management theory. Under slack resource theory, a company needs a good financial position in order to contribute to corporate social performance, because conducting social performance requires funds generated by financial success; according to this theory, financial performance comes first. Good management theory holds that social performance comes first.
Under this theory, a company perceived by its stakeholders as having a good reputation will find it easier to achieve good financial performance (through market mechanisms). Based on the findings of previous studies, especially the works of Griffin and Mahon (1997), Roman et al. (1999), and Margolis and Walsh (2003), and following the theories used by Waddock and Graves (1997), the hypotheses of the current study are formulated as follows: H1a: there is a positive relationship between CFP and CSP based on slack resource theory; H1b: there is a positive relationship between CSP and CFP based on good management theory. Research Method Several variables are used in this study: corporate social performance, corporate financial performance, business environment, strategy, organization structure, and control system as main variables; and company size and type of company (in terms of ownership: state-owned versus non-state-owned) as control variables. The CSP variable was measured using MJRA's dimensions of CSP, with some indicators deleted to fit the Indonesian environment. The CFP variable was measured using a perceptual method to match the CSP measure (Wood and Jones, 1995). In this approach, respondents provided subjective judgments on 8 (eight) indicators developed by Venkatraman (1989), comprising two dimensions: growth and profitability. Business environment was measured using managers' perceptions of the level of hostility, dynamism, and complexity in each environmental dimension on a 7-point scale (Tan and Lischert, 1994). The business strategy variable was measured by strategic orientation. Following the decision-focused approach developed by Mintzberg (1973), strategic orientation was broken down into several dimensions: (1) analysis, (2) defensiveness, (3) futurity, (4) proactiveness, and (5) riskiness. Organization structure was measured using three dimensions: formalization, decentralization, and specialization. Control system was defined using Simons' (1994) typology of control, comprising the belief system, boundary system, diagnostic control system, and interactive control system. Company size followed the measure used by Mahoney and Roberts (2007), total assets, on the argument that total assets are the "money machine" that generates sales and income. Type of company was measured using a dummy variable, coded 1 for state-owned companies and 0 for non-state-owned companies. The unit of analysis in this study is Indonesian managers. The population is all Indonesian managers working in companies listed on the Jakarta Stock Exchange and in state-owned companies. A data set covering the manufacturing sector among publicly traded companies (private-owned companies) and the directory of state companies in the State Ministry of State-Owned Companies (state-owned companies, BUMN) was used with the intention of reducing the mismatching problem identified by Wood and Jones (1995). Several techniques were used to analyze the data: (1) psychometric analysis, (2) factor analysis, and (3) multiple regression analysis. The psychometric analysis is used to determine the consistency or reliability of the measures. Exploratory factor analyses, together with coefficient alpha and item-to-total correlations, were estimated to assess the psychometric characteristics of the scales for each variable.
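The psychometric step described above (coefficient alpha and item-to-total correlations) can be illustrated with a short sketch. This is not the authors' code; the item names, the 7-point responses, and the 4-item scale are hypothetical placeholders used only to show how such reliability checks are commonly computed.

```python
# Hypothetical sketch of the scale-reliability step (coefficient alpha and
# corrected item-to-total correlations). Item names and the random responses
# are placeholders, not the study's actual questionnaire data.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """Corrected item-to-total correlation: each item vs the sum of the other items."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items},
        name="item_total_r",
    )

# Illustrative 7-point Likert responses for a hypothetical 4-item CSP scale.
rng = np.random.default_rng(0)
csp_items = pd.DataFrame(rng.integers(1, 8, size=(100, 4)),
                         columns=["csp_1", "csp_2", "csp_3", "csp_4"])
print(cronbach_alpha(csp_items))
print(item_total_correlations(csp_items))
```

With real survey data, items whose corrected item-to-total correlation is very low would typically be candidates for deletion before the factor-analysis step.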
Because the latent variables in this study come from constructs developed from several conceptual dimensions, factor analysis was needed to reduce those dimensions to a single measure for each latent variable. Two models are used in this study: (1) Model 1 and (2) Model 2. The result of Model 1 (see Table 1) shows that the model is significant at the 0.01 level with an R² of 67%. The β coefficient for company size (β=0.000, p=0.621) and for type of company (β=2.482, p=0.177) indicated that these variables had no impact on the variance of the dependent variable, corporate social performance (CSP). The result of Model 1 also shows that the β coefficient for the independent variable corporate financial performance (CFP) (β=0.655, p=0.000) had a significant positive impact on the variance of the dependent variable, CSP. In addition, the model shows the regression results for the contextual variables (business environment, strategy, formalization, decentralization, the combination of belief and boundary systems, the combination of diagnostic and interactive control systems, and the interactive control system) on the dependent variable, corporate social performance. The β coefficients for business environment (β=0.25721, p=0.000), decentralization (β=0.243, p=0.004), the combination of belief and boundary systems (β=0.829, p=0.000), the combination of diagnostic and interactive control systems (β=0.653, p=0.000), and the interactive control system (β=0.352, p=0.000) demonstrated significant positive impacts on CSP, while strategy (β=-0.122, p=0.185) and formalization (β=0.239, p=0.537) clearly had no significant impact on the variance of CSP. Therefore, based on this model, with the contextual variables included as independent variables, the study accepted hypothesis H1a and concluded that H1a has been empirically supported. The CSP-CFP link under the two models is based on two theories, namely slack resource theory and good management theory. The findings are inconsistent with numerous previous studies (Wright & Ferris, 1997; Moore, 2001; McWilliams & Siegel, 2001; McWilliams & Siegel, 2000; Mahoney & Roberts, 2007). (Mahoney and Roberts (2007) exclude the environmental aspect from the CSP construct and treat it as a separate variable alongside CSP itself; their finding for CSP itself is not consistent.) The inconsistency is evident in the results of this study. There are some reasons that explain the difference in results: (1) the previous studies used disclosure data to measure CSP; (2) the previous studies only measured CSP and never related it to an extended corporate performance model, which in this study is called sustainable corporate performance; and (3) the model used in this study has never been considered by previous studies. In addition, this finding is also inconsistent with those of Fauzi et al. (2007), Fauzi (2008), and Fauzi et al. (2009a). Although those studies were conducted in the same setting, i.e. Indonesia, different methods were utilized: the measurement of CSP in Fauzi et al. (2007), Fauzi (2008), and Fauzi et al. (2009a) followed a disclosure approach, while in this study a perceptual approach was used for both CSP and CFP, as suggested by Wood and Jones (1995).
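To make the structure of the two regressions concrete, the following hedged sketch specifies Model 1 (CSP regressed on CFP, the contextual variables, and the controls) and Model 2 (CFP regressed on CSP and the same covariates) with ordinary least squares. The column names (csp, cfp, bus_env, and so on) are assumed stand-ins for the study's factor scores, not the actual data set.

```python
# Hedged sketch of the two regression models described in the text, assuming a
# data frame of factor scores with hypothetical column names. Not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

def fit_models(df: pd.DataFrame):
    controls = "size + state_owned"
    context = ("bus_env + strategy + formalization + decentralization + "
               "belief_boundary + diag_interactive + interactive")

    # Model 1 (slack resource theory): CSP as dependent variable, CFP as predictor.
    model1 = smf.ols(f"csp ~ cfp + {context} + {controls}", data=df).fit()

    # Model 2 (good management theory): CFP as dependent variable, CSP as predictor.
    model2 = smf.ols(f"cfp ~ csp + {context} + {controls}", data=df).fit()

    return model1, model2

# Usage (df would hold one row per responding manager):
# m1, m2 = fit_models(df)
# print(m1.rsquared, m2.rsquared)              # compare explanatory power, as in the text
# print(m1.params["cfp"], m2.params["csp"])    # compare the two key coefficients
```

Comparing the R² values and the cfp/csp coefficients of the two fitted models mirrors the comparison the paper makes between the slack resource and good management specifications.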
Furthermore, the difference between these findings and those of other studies may be explained by the contextual variables that are important determinants of corporate performance (business environment, strategy, organization structure, and control system), which were included in the model to explain the relationship between CFP and CSP. The reason for including the contextual variables in the model is that the CSP construct is considered an extended form of corporate performance that includes the triple bottom line aspects: (1) social performance, (2) environmental performance, and (3) financial performance. These aspects have not been considered together in previous studies. The findings on the CFP-CSP link and the CSP-CFP link are consistent with those of Waddock and Graves (1997), even though they used a different measurement of CSP. They used a CSR index produced by a third party, while this study uses a perceptual approach developed through questionnaires. The index data are not a purely perceptual approach; rather, they combine perceptual and content analysis, as done by rating companies such as KLD (USA) and MJRA (Canada). Mahoney and Roberts (2007) followed the approach of Waddock and Graves (1997), using the CSP index data issued by MJRA. The regression result of Model 1 supporting hypothesis H1a indicated that CFP is the most important variable in promoting CSR in manufacturing firms in Indonesia. This finding may be partially explained by the fact that in Indonesia the strength of a company's financial position affects the implementation of CSR. The finding is consistent with those of McGuire et al. (1988, 1990) and Waddock and Graves (1997). In contrast, the finding of this study conflicts with Fauzi (2007), which used content analysis of more than 300 companies listed on the BEJ (Jakarta Stock Exchange), in both manufacturing and nonmanufacturing sectors. In addition, the objections to Law No. 40/2007, passed by Indonesian lawmakers to make CSR implementation compulsory (Fauzi, 2009), underline the inconsistency of this study with other previous studies conducted in Indonesia. Business people's concern is the lack of resources to conduct CSR; they are somewhat apprehensive about profitability when they are obligated to conduct CSR. Under the two theories, this study found that the relationship between CSP and CFP is positive, which should ease the concern that conducting CSR can impair profitability. Business environment, decentralization, the combination of belief and boundary systems, the combination of diagnostic and interactive control systems, and the interactive control system also contributed to the variance in CSR. The condition of the business environment will shape CSR: under high business environment uncertainty, CSR will be correspondingly high in order to maintain good relationships with customers. This finding is consistent with the study of Higgin and Currie (2004). In addition, decentralization has a positive impact on CSP and CFP: more decentralization will improve CSR. Decentralization is defined as the delegation of power from higher-level to lower-level managers. Given the power to make decisions, managers can make efforts to conduct CSR and improve CFP. This finding is consistent with the proposition of the Centre for Business Ethics (1986). In Indonesia, the commitment of top management is important to make CSR a success.
The commitment of top management also means using inducements such as the implementation of Law No. 40/2007 and Law No. 19/2003 (the latter for state-owned companies only). The control system also has an impact on CSR. Under slack resource theory, a company has a greater chance of conducting CSR (CSP), but in the Indonesian context a redefinition of CSR is needed (Fauzi, 2009). The regression result of Model 2 (under good management theory) indicated that CSP is also an important variable in improving financial performance in manufacturing companies in Indonesia. However, further inspection of the regression results shows that the R² of the model is relatively low (51%) compared to the R² of Model 1 (67%); the predictive power of Model 2 is lower than that of Model 1. In addition, the regression coefficient (β) of CSP (0.114) is lower than that of CFP (0.655). This means that the CFP-CSP link (under slack resource theory) is stronger than the CSP-CFP link (under good management theory). The situation is similar to that in Waddock and Graves (1997), who found that under slack resource theory the regression coefficient of CFP is greater than 1, while under good management theory the regression coefficient of CSR is far less than 1. This may be explained by the implementation of CSR being driven more by the availability of a firm's resources than by a commitment to conduct CSR regardless of the resources the firm has. On the other hand, Friedman's (1962/1970) assertion that the social responsibility of business is to increase profit has dominated the view of the business community all over the world, including in Indonesia. That is why the CSP-CFP link has produced conflicting results. The low regression coefficient in Waddock and Graves' (1997) study concerning good management theory supports Friedman's assertion. A similar situation occurs in Indonesia in the case of Law No. 40/2007: Indonesian companies were highly reactive in responding to the implementation of the law (article 74) that obligated them to conduct CSR. Conclusion The research questions of this study have been answered: there is a positive relationship between CFP and CSP under slack resource theory and under good management theory. Based on the findings, there is a need for further study of the impact of the contextual variables of corporate performance on CSR as a basis for developing TBL-based CSR reporting in Indonesia. This suggestion for future research is important for the following reasons: (1) the stakeholder theory used in this and other studies may undergo some modification given deeper study of the impact of contextual variables of corporate performance on CSR, (2) as suggested in the managerial decision implications, CSR needs to be redefined in the Indonesian context, and (3). It should be pointed out that this study has several limitations, which may be especially important for researchers who are less familiar with Indonesian culture and its business environment. The first limitation of the study is the timing of the survey: for the last two years, compulsory implementation of CSR in Indonesia based on Law No. 40/2007 has been in process, and most Indonesian companies objected to the compulsory implementation of the law. The second limitation relates to the questionnaire procedure. The questionnaire exceeds eleven pages in length; such length, according to Dillman (1978), may reduce the expected response rate.
In addition, nonrandom, nonprobability methods were used in selecting the sample; these techniques may influence the findings of the study and their applicability to businesses other than manufacturing. The third limitation is that the population of the study for non-BUMN companies consisted of manufacturing companies listed on the ISE (Indonesian Stock Exchange). Thus, other big manufacturing companies, including mining companies such as Freeport, are not included in the sample because they are not listed on the exchange, even though such companies may have substantial environmental impacts. The fourth limitation is that no prior study has examined the constructs of this research (integrating contextual variables affecting corporate performance into CSR as an extended form of corporate performance), either in Indonesia or elsewhere. Therefore, the researcher had to proceed without the advantage of an established model to refer to or prior findings for comparison.
Thriving at Low pH: Adaptation Mechanisms of Acidophiles The acid resistance of acidophiles is the result of long-term co-evolution and natural selection between acidophiles and their natural habitats, which has produced a relatively optimal acid-resistance network in these organisms. The acid tolerance network of acidophiles can be classified into active and passive mechanisms. The active mechanisms mainly include proton efflux and consumption systems, generation of a reversed transmembrane electrical potential, and adjustment of cell membrane composition; the passive mechanisms mainly include DNA and protein repair systems, chemotaxis and cell motility, and the quorum sensing system. The maintenance of pH homeostasis is a cell-wide physiological process that adopts different adjustment strategies, deployment modules, and integration networks depending on the cell's own potential and its habitat. However, because of long-term evolution, acidophiles exhibit obvious similarities in their acid-resistance strategies and modules. Therefore, a comprehensive understanding of the acid tolerance network of acidophiles would be helpful for the intelligent manufacturing and industrial application of acidophiles. Introduction Both natural and man-made acidic habitats are widely distributed in global land and ocean ecosystems, such as acidic sulfur-rich thermal springs, marine volcanic vents, and acid mine drainage (AMD) [1]. However, these unique habitats harbor active acidophilic organisms that are well adapted to acidic environments. Acidophiles are distributed throughout the tree of life and are prevalent in acidic or extremely acidic habitats, archaea and bacteria in particular, and they represent an extreme life-form [2-4]. Generally, acidophilic archaea and bacteria mainly include members of the phyla Euryarchaeota, Crenarchaeota, Proteobacteria, Acidobacteria, Nitrospira, Firmicutes, Actinobacteria, and Aquificae, such as Ferroplasma, Acidiplasma, Sulfolobus, Acidianus, Acidiphilum, Acidithiobacillus, Acidihalobacter, Ferrovum, Acidiferrobacter, Acidobacterium, Leptospirillum, Sulfobacillus, Acidibacillus, Acidimicrobium, and Hydrogenobaculum [5-7]. More importantly, acidophiles, as an important group of microorganisms, are closely related to biogeochemical cycles, the eco-environment, and human development, for example by driving the elemental sulfur and iron cycles [8], contributing to water and soil pollution by acidic effluents [9], and underpinning biomining-bioleaching techniques and bioremediation technologies [9-11]. Thus, a comprehensive understanding of the acid-resistance networks and modules of acidophiles would be helpful for their industrial application. Acidophiles maintain a near-neutral cytoplasmic pH while growing in environments whose pH can be several units lower (that is, a differential proton concentration of 4-6 orders of magnitude). The ΔpH across the membrane is a major part of the proton motive force (PMF), and this ΔpH is linked to cellular bioenergetics. Acidophiles such as Acidithiobacillus ferrooxidans and Acidithiobacillus caldus are capable of using the ΔpH to generate a large quantity of ATP [16,17]. However, this process would lead to rapid acidification of the cytoplasm of living cells. Because a high proton concentration would destroy essential molecules in the cell, such as DNA and proteins, acidophiles have evolved the capability to pump protons out of their cells at a relatively high rate.
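The link between ΔpH and bioenergetics mentioned above follows the standard chemiosmotic relation for the proton motive force, PMF = Δψ − (2.303RT/F)·ΔpH, which is roughly Δψ − 59·ΔpH mV at 25 °C. The short sketch below evaluates this relation numerically; the chapter does not provide this code, and the Δψ and ΔpH values in the example are illustrative assumptions, not measurements.

```python
# Minimal sketch (not from the chapter): the chemiosmotic relation
# PMF = delta_psi - (2.303 * R * T / F) * delta_pH
# The example values are illustrative assumptions, not measured data.

R = 8.314      # gas constant, J / (mol K)
F = 96485.0    # Faraday constant, C / mol
T = 298.15     # temperature, K (25 degC)

def proton_motive_force(delta_psi_mV: float, delta_pH: float) -> float:
    """Return PMF in mV for a given membrane potential (mV) and pH gradient (pH_in - pH_out)."""
    z = 2.303 * R * T / F * 1000.0  # ~59 mV per pH unit at 25 degC
    return delta_psi_mV - z * delta_pH

# Example: an acidophile with an inside-positive membrane potential (+20 mV)
# and a cytoplasm ~4 pH units above its environment (delta_pH = 4).
print(proton_motive_force(delta_psi_mV=20.0, delta_pH=4.0))  # approx. -217 mV, an inward-directed PMF
```

The example shows, under one common sign convention, how an inside-positive Δψ partly offsets a very large ΔpH while still leaving an inward-directed proton motive force available for ATP synthesis.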
The F1Fo-ATPase consists of a hydrophilic part (F1) composed of α, β, γ, δ, and ε subunits and a hydrophobic membrane channel (Fo) composed of a, b, and c subunits; F1 catalyzes ATP hydrolysis or synthesis, and Fo translocates protons. This mechanism pumps protons out of cells by hydrolyzing ATP (Figure 1), thereby efficiently protecting cells from acidic environments. In several microorganisms, including A. caldus, Acidithiobacillus thiooxidans, and Lactobacillus acidophilus, the transcriptional level of the atp operon is upregulated by exposure to acidic environments [18-20], suggesting its critical role in the acid resistance of the cell. Several proton efflux proteins have also been identified in the sequenced genomes of A. ferrooxidans, A. thiooxidans, A. caldus, Ferroplasma acidarmanus, and Leptospirillum group II [21,22]. The H+-ATPase activity and the NAD+/NADH ratio were upregulated in A. thiooxidans under acid stress [19]. Cells also actively pump out protons via the respiratory chain; for example, under acid stress, A. caldus increases its expression of respiratory chain complexes that can pump protons out of the cell [20]. Meanwhile, NAD+ is involved in glycolysis as the coenzyme of dehydrogenases, generating a large amount of ATP and contributing to pumping protons out of the cell through ATP hydrolysis. Among the active mechanisms, proton consumption systems are necessary to remove excess intracellular protons. Once protons enter the cytoplasm, mechanisms are required to mitigate the effects caused by a high proton concentration in the cell. Under acidic conditions, there is increased expression of amino acid decarboxylase enzymes (such as glutamate decarboxylase-β (GadB)) that can consume cytoplasmic protons through their catalytic reactions [23]. GadB, coupled with a glutamate/gamma-aminobutyrate antiporter (GadC), converts glutamate to γ-aminobutyric acid (GABA), which is exchanged for glutamate substrate to sustain continued decarboxylation reactions (Figure 1) [24]. A proton is consumed during each decarboxylation reaction, which supports intracellular pH homeostasis and would also contribute to a reversed Δψ in most bacteria. Similarly, the gadB gene was found in Ferroplasma spp., and its transcription was upregulated under acid shock conditions in A. caldus [20,22]. Therefore, in order to maintain cellular pH homeostasis, acidophiles need to be able to consume excess protons in the cytoplasm. A second major active strategy used by acidophiles to reduce the influx of protons is the generation of an inside-positive Δψ produced by a Donnan potential of positively charged ions. An inside-positive transmembrane potential contributes to a reversed Δψ that can prevent proton leakage into the cells. Acidophiles might use similar strategies, Na+/K+ transporters in particular, to generate a reversed membrane potential that resists the inward flow of protons (Figure 1) [25]. Previous data showed that some acidophile genomes (A. thiooxidans, F. acidarmanus, Sulfolobus solfataricus, etc.) contain a high number of cation transporter genes, and these transporters are probably involved in generating the Donnan potential that inhibits proton influx [21,22,25,26]. The genome of Picrophilus torridus also encodes a large number of proton-driven secondary transporters, which represents an adaptation to more extremely acidic environments [27].
Furthermore, we found that the maintenance of Δψ in A. thiooxidans was directly related to the uptake of cations, especially the influx of potassium ions [25]. Further evidence that a chemiosmotic gradient created by a Donnan potential supports acid resistance comes from a passive observation: a small residual inside-positive Δψ and ΔpH are maintained even in inactive cells of A. caldus, A. ferrooxidans, Acidiphilium acidophilum, and Thermoplasma acidophilum [28-30]. These residual Δψ and ΔpH studies have been criticized on methodological grounds [31]; however, subsequent data showed that energy-dependent cation pumps play an important role in generating an inside-positive Δψ. In addition, acidophilic bacteria are highly tolerant of cations and more sensitive to anions. In summary, the inside-positive Δψ is a ubiquitous and significant strategy for maintaining cellular pH homeostasis. Although improving the efflux and consumption of protons and increasing the expression of secondary transporters are common strategies, the most effective strategy is to reduce the proton permeability of the cell membrane [32,33]. Acidophiles can synthesize a highly impermeable membrane in response to proton attack (Figure 1). These physiologically adapted membranes are composed of high levels of iso/anteiso-BCFAs (branched-chain fatty acids), both saturated and mono-unsaturated fatty acids, β-hydroxy, ω-cyclohexyl, and cyclopropane fatty acids (CFAs) [34]. In some bacteria, such as A. ferrooxidans and Escherichia coli, the cell membrane was found to resist acid stress by increasing the proportion of unsaturated fatty acids and CFAs [35-37]. Although the cytoplasmic membrane is the main barrier to proton influx, membrane damage caused by protons may cause this barrier to break down. The key membrane component preventing acid damage appears to be the CFAs, which contribute to membrane compactness. Supporting this mechanism, E. coli with a mutation in the cfa gene becomes quite sensitive to low pH, and this sensitivity can be overcome by providing an exogenous cfa gene [36]. Meanwhile, transcription of the cfa gene was upregulated under acid stress in A. caldus [20], suggesting that changing the fatty acid content of the cell membrane is an adaptive response to acid stress. In brief, CFAs are important for maintaining membrane integrity and compactness under acidic conditions. To maintain cellular pH homeostasis, acidophilic archaeal cells have a highly impermeable cell membrane that restricts proton influx into the cytoplasm. One of the key characteristics of acidophilic archaea is a monolayer membrane typically composed of large amounts of GDGTs (glycerol dialkyl glycerol tetraethers), which are extremely impermeable to protons [38-40]. Although acidophilic bacteria have a variety of acid-resistance strategies, they have not been found to exhibit the excellent growth of acidophilic archaea below pH 1. The special tetraether lipid is closely related to acid tolerance because ether linkages are less sensitive to acid hydrolysis than ester linkages [41], and studies on acidophilic archaea indicate that tetraether lipids may be even more resistant to acid than previously thought [42]. Therefore, the contribution of tetraether lipids to the adaptation of archaea to extremely low pH is enormous.
To a certain extent, this also explains why archaea dominate extremely acidic environments. Similarly, the extreme acid tolerance of archaea can be attributed to cyclopentane rings and extensive methyl branching [43]. In addition, it was found that a lower phosphorus content in the lipoprotein layer of acidophile cells can contribute to higher hydrophobicity, which is beneficial for resisting extreme acid shock [13]. Irrespective of the basic composition of their cell membranes, bacteria and archaea have extensively reshaped their membrane components to cope with extremely acidic environments. In summary, an impermeable cell membrane is an important strategy for pH homeostasis in acidophiles, achieved by restricting the influx of protons into the cells. Passive strategies for acidophile survival When cells are attacked or stressed by higher concentrations of protons, the passive mechanisms of pH homeostasis support the active mechanisms. If protons penetrate the acidophilic cell membrane, a range of intracellular repair systems helps to repair the damage to macromolecules [13]. The DNA and protein repair systems play a central role in coping with acid stress (Figure 1). Because DNA carries the genetic information of the cell and proteins play an important role in the physiological activities of cells, DNA or protein damage caused by protons would bring irreversible harm to cells. When cells are exposed to a high-proton environment or protons flow into the cells, a great number of DNA repair proteins and chaperones (such as Dps, GrpE, MolR, and DnaK) repair the damaged DNA and proteins [19,44,45]. Previously reported studies showed that the presence of a great number of DNA and protein repair genes in a wide range of extreme acidophile genomes might be related to acid resistance; for example, a large number of DNA repair protein genes are found in the P. torridus genome [27,46]. Indeed, the transcription and expression of these repair systems are upregulated under extreme acid stress; for example, transcription of the molecular chaperone repair systems MolR and DnaK was enhanced in A. thiooxidans [19]. In addition, expression of the GrpE and DnaK proteins was significantly increased in Acetobacter pasteurianus coping with acetic acid stress [47]. Similarly, molecular chaperones involved in protein refolding were highly expressed in L. ferriphilum in AMD biofilm communities [48], and chaperones were also highly expressed in F. acidarmanus during aerobic culture [49]. The quorum sensing (QS) system is a ubiquitous phenomenon that establishes cell-to-cell communication in a population through the production, secretion, and detection of signal molecules. The QS system is also widely involved in various physiological processes in the cell, such as biofilm formation, exopolysaccharide production, motility, and bacterial virulence [50-52]. Moreover, the QS system can help bacteria tolerate extreme environmental conditions by regulating biofilm formation; for example, bacteria grown in a biofilm showed strong resistance to extremely low pH [53]. In the case of acidophiles, a QS system has been reported in A. ferrooxidans, which produces stable acylated homoserine lactone (AHL) signal molecules under acidic conditions, and overexpression strains promoted cell growth through regulated gene expression [54,55].
Flagella are important cell structures for motility and chemotaxis in most bacteria and are also involved in biofilm formation [56]. Flagella-mediated chemotaxis is essential for cells to respond to environmental stimuli (pH, temperature, osmotic pressure, etc.) and to find nutrients for growth. The chemotaxis and motility of cells is a complex physiological behavior regulated by diverse transcription factors, such as the σ factor RpoF (σ28 or FliA) and the global regulator ferric uptake regulator (Fur), and it has strictly spatiotemporal characteristics [20,56]. For example, an A. caldus fur mutant strain significantly upregulated several genes related to chemotaxis and motility (cheY, cheV, flhF, flhA, fliP, fliG, etc.) under acid shock conditions [20]. Similarly, F. acidarmanus is capable of motility and biofilm formation [57]. This indicates that although the chemotaxis and motility of acidophiles are not directly involved in acid resistance and the maintenance of pH homeostasis, they give cells the ability to avoid extremely unfavorable acidic environments and so improve survival. Altogether, we suggest that the QS system, together with chemotaxis and cell motility, is an essential part of escaping extremely acidic environments among the passive mechanisms (Figure 1). It can be seen from the classification above that there are a variety of mechanisms and strategies by which acidophiles tolerate or resist acidic or extremely acidic environments. However, some possible mechanisms are imperfectly understood or classified, for example the distinctive structural and functional characteristics of extremely acidophilic microorganisms (Figure 1) [13,15]. First, iron may act as a "rivet" at low pH and play an important role in maintaining protein activity, as illustrated by the high proportion of iron proteins in F. acidiphilum; removal of iron from such proteins can result in the loss of protein activity [58,59]. Second, there is the strategy of cell surface charge: the surface proteins of acidophiles have high pI values (a positive surface charge), which can act as a transient proton repellent at the cell surface. For example, the isoelectric point (pI) of the OmpA-like protein in the outer membrane of A. ferrooxidans is 9.4, whereas that of E. coli OmpA is 6.2 [60]. This positive surface charge may be a functional requirement that reduces the permeability of A. ferrooxidans cells to protons. Third, adjustment of the pore size of membrane channels is also used to minimize inward proton leakage under acid stress; for example, under acid shock, expression of the outer membrane porin Omp40 of A. ferrooxidans was upregulated [61], which could control the size and ion selectivity of the entrance to the pore. Finally, since organic acids can diffuse into cells in protonated form in low-pH environments, after which proton dissociation quickly acidifies the cytoplasm, the degradation of organic acids might be a potential mechanism for maintaining pH homeostasis, especially in heterotrophic acidophiles. Although genes that degrade organic acids have been identified in some acidophiles (such as F. acidarmanus and P. torridus), it is unclear whether the degradation of organic acids actually contributes to pH homeostasis [27,62]. In summary, these possible mechanisms remain to be confirmed, but the existence and identification of the relevant genes point to mechanisms that could be associated with low-pH tolerance.
Evolution of low pH fitness of acidophiles In the past few decades, studies have confirmed that acidophilic microorganisms are widely present in the three domains of bacteria, archaea, and eukarya, indicating that acidophiles have arisen gradually during the evolution of life on Earth rather than from a single adaptation event. Although extremely acidic environments are toxic to most organisms, a large number of indigenous microorganisms still thrive in these habitats. The generally accepted view is that acidophiles can be divided into moderate acidophiles, with pH optima between 3 and 5; extreme acidophiles, with growth pH optima below 3; and hyperacidophiles, with growth pH optima below 1 [1]. Generally, as acidity becomes more extreme, biodiversity gradually decreases. Accordingly, as would be anticipated, the most extremely acidic environments hold the least biodiversity; for example, the hyperacidophiles include relatively few species (e.g., F. acidarmanus and Picrophilus oshimae) [1]. Acidophiles can survive in acidic or extremely acidic environments and are themselves a source of that acidity [1,63,64]; thus, the ability to resist acidic environments evolved over their history. Acidic hydrothermal ecosystems, such as the Tengchong hot springs, Crater Lake, and Yellowstone National Park, are dominated by archaea [40,65], suggesting that acidophilic archaea evolved in extremely acidic hydrothermal environments after the emergence of oxygenic photosynthesis [66]. Based on the niche similarity and physiological adaptations among archaea, long-term acidity stress appears to be the main selection pressure controlling the evolution of archaea and leading to the co-evolution of acid-resistance modules [66]. Although species diversity decreases significantly as pH decreases, a high abundance of acidophilic taxa, such as Gammaproteobacteria and Nitrospira, has been detected in acidic habitats. Indeed, for dominant lineages such as Acidithiobacillus spp. and Leptospirillum spp., this pH-specific niche partitioning is obvious [67]. Consistent with this, Ferrovum is more acid-sensitive than A. ferrooxidans and L. ferrooxidans and prefers to grow at near-moderate pH [68]. Interestingly, the majority of acidophiles growing at extremely acidic pH (i.e., pH < 1) are heterotrophic acidophiles capable of utilizing organic matter for growth, such as T. acidophilum and P. torridus. In addition, although Acidiplasma spp. and Ferroplasma spp. can oxidize ferrous iron in biomining, they can also use organic carbon for growth, and their relative abundance increases as other bioleaching microorganisms die off [69,70]. Therefore, they can be regarded as scavengers of dead microorganisms that help drive the material and energy cycles in acidic habitats. To sum up, coexisting species may occupy different niches that are affected by pH changes, resulting in changes in their distribution patterns. The dominance of these particular microorganisms in acidic ecosystems is presumed to reflect their adaptive capabilities. Adaptations to acid stress dictate the ecology and evolution of the acidophiles. Acidic ecosystems are a unique ecological niche for acid-adapted microorganisms.
These relatively low-complexity ecosystems offer a special opportunity to analyze the evolutionary processes and ecological behaviors of acidophilic microorganisms. In the last decade, the use of high-throughput sequencing technology and post-genomic methods has significantly advanced our understanding of microbial diversity and evolution in acidic environments [68]. Metagenomic studies have revealed various acidophilic microorganisms in environments such as AMD and acidic geothermal areas and have shown that these microorganisms play an important role in acid generation and in adaptation to these environments [71,72]. For example, because comparative metagenomics and metatranscriptomics directly recover and reveal microbial genome information from the environment, they have the potential to provide insights into the acid-resistance mechanisms of uncultivated bacteria, such as the clpX, clpP, and sqhC genes implicated in resistance to acid stress. In addition, metatranscriptomic and metaproteomic analyses have further uncovered the major metabolic and adaptive capabilities in situ [71], indicating the mechanisms of response and adaptation to extremely acidic environments. The continued exploration of acidic habitats and acidophilic microorganisms is the basis for comprehending the evolution of acidophilic microbial acid-tolerance modules, strategies, and networks. First, methods based on transcriptomics and proteomics are key to understanding the global acid-tolerance network of individual organisms under acid stress [19,73]. Second, comparative genomics plays a vital role in exploring the acid adaptation mechanisms of acidophiles and in studying the evolution of acidophile genomes [74]. Finally, the emerging metagenomic technologies play an important role in evaluating and predicting microbial communities and their adaptability to acidic environments [75]. Moreover, metagenomic approaches can also provide a large amount of knowledge and functional module analysis concerning the acid tolerance of acidophiles, helping to develop their full potential in the study of acid tolerance evolution [76]. With the publication of large amounts of metagenomic data, the evolution of the acid-tolerance components of these extremophiles will be better illustrated in the future. Conclusions Understanding the maintenance of pH homeostasis in acidophiles is of great significance for comprehending the mechanisms of cell growth and survival, as well as for eco-remediation and biotechnological applications; thus, it is essential to fully understand the acid-tolerance networks and strategies of acidophilic microorganisms. This chapter presents the acid-resistance modules and strategies of acidophiles in detail, including proton efflux and consumption, reversed membrane potential, an impermeable cell membrane, DNA and protein repair systems, and the QS system (Figure 1). However, at present, several of the pH homeostatic mechanisms still lack clear and rigorous experimental evidence to support their proposed functions, in my view. In addition, we also discuss the evolution of acidophiles and their acid-resistance modules. In brief, the true purpose of acidophilic microorganisms evolving these mechanisms is to tolerate extremely acidic environments or to reduce their harmful effects for cell survival. Acidophiles are known for their remarkable acid resistance.
Over the last decades, the combination of molecular and biochemical analyses of acidophiles with genome, transcriptome, and proteome data has provided new insights into the acid-resistance mechanisms and evolution of individual acidophiles. Using these genome sequences in a functional context, through the application of high-throughput transcriptomic and proteomic tools to scrutinize acid stress, might elucidate further potential pH homeostasis mechanisms. However, a disadvantage of genomics, transcriptomics, and proteomics is that the data are descriptive, and more work, such as mutational analyses and the use of genetic markers, is required to verify the resulting hypotheses. One of the main obstacles in current research on the acid tolerance of acidophiles is the lack of genetic tools for in-depth analysis. Therefore, the development of genetic tools and biochemical methods for acidophiles would facilitate elucidating the molecular mechanisms by which acidophiles adapt to extremely acidic environments; areas such as vector development remain largely unexplored. In addition, as most acidophiles are difficult to isolate and culture, our ability to understand the acid resistance of acidophiles is limited. The emerging omics technologies will be a crucial step in exploring the spatiotemporal transformation patterns of acidophilic microbial communities, microbial ecophysiology, and evolution in the future. © The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Optimization of the ultrasound-assisted extraction of flavonoids and the antioxidant activity of Ruby S apple peel using the response surface method A Box-Behnken design (BBD) was employed to optimize the extraction of antioxidants from Ruby S apple peel by ultrasound-assisted extraction (UAE). The effects of extraction temperature (20-40 °C), extraction time (15-45 min), and ethanol concentration in water (50-90%) on the extraction yield, total phenol content (TPC), total flavonoid content (TFC), and DPPH radical scavenging activity of Ruby S peel extracts (RPEs) were investigated. The optimized extraction conditions that maximized extraction yield, TPC, TFC, and DPPH radical scavenging ability were a temperature of 20 °C, an extraction time of 25.30 min, and an ethanol concentration of 50%. The validity of the designed model was verified, and experimental values obtained under the optimum conditions concurred with predicted values. Hyperoside, isoquercitrin, and phloridzin were among the major flavonoids extracted. Our findings demonstrate the suitability of UAE and RSM for optimizing Ruby S peel extraction and suggest the potential use of RPEs as bioactive functional materials. Introduction The apple is a perennial woody plant belonging to the Rosaceae family and cultivated worldwide; it is also one of the representative fruits of Korea, accounting for about 76% of total fruit production in 2020 (Yoon, 2021). In line with recent consumption trends centered on convenience and preference, the cultivation of recently developed, small-sized apples is increasing (Yoon, 2021). Furthermore, it has been reported that these apples are rich in phenolics. Small-to-medium-sized apples are not only convenient and easily consumed, but their peels also have bioactive effects, which increases their nutritional value. Ruby S (Malus domestica Borkh.) is a new apple variety developed in 2014 by the National Institute of Horticultural Research of the Korean Rural Development Administration. It has an average weight of 86 g and excellent storage properties. In addition, studies have reported that Ruby S extracts have antioxidant, anti-inflammatory, gout-inhibitory, anti-diabetic, and whitening effects (Lee et al., 2018), which suggests their potential use as a functional material. Interest in functional materials continues to grow, and many studies have been performed on natural antioxidants (Gulcin, 2020) with the objective of targeting reactive oxygen species in vivo. Phenolic compounds are representative natural antioxidants and are known to be present in large amounts in plants (Dai and Mumper, 2010). Flavonoids are found in all plant parts and are characterized by a C6-C3-C6 ring system (A, B, and C rings) and a 15-carbon skeleton. The flavonoid family is composed of several subgroups, which include flavonols, flavones, flavanones, chalcones, and isoflavones (Raffa et al., 2017). Apples are rich in these phenolics, and several studies have reported that the phenolic compounds found in apples, which include quercetin, phloridzin, catechin, procyanidin, and rutin, are effective at preventing various cancers, degenerative diseases, and free radical-induced aging. The extraction method used is important for obtaining these phenolic compounds from plants, and ultrasound-assisted extraction (UAE) is faster and cheaper and gives better extraction yields than other methods; it is considered a suitable method for extracting phenolics and antioxidants from plants (Park et al., 2020).
The extraction parameters must be optimized for different situations, and the response surface method (RSM) is widely used for this purpose in the food industry because it reduces the amount of experimental work required and allows complex interactions between factors to be evaluated (Zulkifli et al., 2020). Many studies have been conducted on the antioxidant components of apples, but no study has addressed the optimal extraction conditions that maximize extract yields for Ruby S. This study aimed to establish optimal UAE conditions for Ruby S peel that maximize yield, TPC, TFC, and DPPH radical scavenging activity using RSM. After optimization, the flavonoids in Ruby S peel extracts were identified by UPLC-ESI-QTOF-MS, and major compounds were quantified by HPLC-DAD. Sample preparation and chemicals Ruby S apples were harvested in Andong-si, Gyeongsangbuk-do, Korea in November 2020. Suitable apples were selected for experiments and stored at 4 °C on the day of harvesting. Apples were washed and treated with 1% ascorbic acid and then peeled to separate peel and pulp. The apple peel was freeze-dried (FD8512, Ilshinbiobase, Yangju, Korea) and ground to a uniform size using a multiprocessor. The powdered sample was kept in a freezer at −70 °C for UAE experiments. Ultrasound-assisted extraction process The powdered Ruby S apple peel was extracted using an ultrasound-assisted extraction system (Branson 8510, Branson, USA) operating at 40 kHz to determine the optimal extraction conditions. Powdered apple peel (0.5 g) was added to 10 mL of ethanol (Kim et al., 2020), and extracts were obtained under 15 different combinations of 3 levels each of extraction temperature (°C), extraction time (min), and ethanol concentration (%) according to the experimental design. The extracts were centrifuged at 2700 rpm for 15 min, and the supernatants were collected and filtered through Whatman No. 1 filter paper. The filtrate was concentrated using a rotary vacuum evaporator (EYELA Co., Tokyo, Japan) and water bath (EYELA Co., Tokyo, Japan) at the corresponding extraction temperature and then freeze-dried. The extracts were stored at −70 °C until required for analysis. Experimental design The experiment used to optimize the UAE conditions for extracting antioxidants from Ruby S peel was designed using a BBD involving 15 experimental runs. The following independent extraction variables (Xn) were varied (Table 1): extraction temperature (X1: 20, 30, and 40 °C), extraction time (X2: 15, 30, and 45 min), and solvent concentration (X3: 50, 70, and 90%). Extraction yield (Y1), total phenol content (Y2), total flavonoid content (Y3), and DPPH radical scavenging activity (Y4) were designated dependent variables. The ranges of the extraction variables were determined through preliminary experiments and a literature review. Experiments were conducted in random order. The relationships between the independent and dependent variables from the response surface analysis were fitted to the following second-order polynomial model [Eq. (1)]: Y = β0 + Σ βi Xi + Σ βii Xi² + ΣΣ βij Xi Xj (1), where Y is a dependent variable (Y1, Y2, Y3, Y4), Xi and Xj are the independent variables, β0 is a constant, and βi, βii, and βij are the linear, quadratic, and interaction coefficients, respectively.
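As a hedged illustration of how the second-order model in Eq. (1) can be fitted and then used to locate an optimum within the design ranges, the sketch below assumes a data frame holding the 15 Box-Behnken runs of Table 1 with columns X1, X2, X3, and a response y (for example, TPC). It is not the authors' code, and no values from the study are embedded in it.

```python
# Minimal sketch (assumed data layout, not the study's actual Table 1 values):
# fit the second-order polynomial of Eq. (1) to Box-Behnken runs and search for
# the factor settings that maximize the predicted response within the design ranges.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.optimize import minimize

# df columns: X1 = temperature (degC), X2 = time (min), X3 = ethanol (%), y = response
def fit_quadratic(df: pd.DataFrame):
    formula = ("y ~ X1 + X2 + X3 + I(X1**2) + I(X2**2) + I(X3**2) "
               "+ X1:X2 + X1:X3 + X2:X3")
    return smf.ols(formula, data=df).fit()

def maximize_response(model, bounds=((20, 40), (15, 45), (50, 90))):
    """Search the design space for the factor levels with the highest predicted response."""
    def neg_pred(x):
        point = pd.DataFrame({"X1": [x[0]], "X2": [x[1]], "X3": [x[2]]})
        return -model.predict(point).iloc[0]
    x0 = [sum(b) / 2 for b in bounds]            # start at the design center point
    res = minimize(neg_pred, x0, bounds=bounds, method="L-BFGS-B")
    return res.x, -res.fun

# Usage:
# model = fit_quadratic(df)
# best_factors, best_prediction = maximize_response(model)
```

In practice a desirability-type approach combining all four responses would be used, as the study does; the sketch shows the single-response case for brevity.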
Predictions of the optimal UAE conditions required to extract antioxidants from Ruby S peel were made within the range in which the extraction yield, total phenol content, total flavonoid content, and DPPH radical scavenging activity values were maximized, as determined by RSM. After setting an arbitrary point within the predicted range, optimum values were predicted by substitution into the regression equations, and the determined optimal conditions were then verified by comparing predicted and experimental values.

Extraction yield of sample extracts

The extraction yield was determined as the percentage of the weight of the freeze-dried extract relative to the total weight of the dried raw sample and was calculated using Eq. (2):

$$\text{Extraction yield (\%)} = \frac{\text{weight of freeze-dried extract (g)}}{\text{weight of dried raw sample (g)}} \times 100 \qquad (2)$$

Total phenolic contents

The total phenolic contents (TPC) in the extracts were determined by colorimetric analysis using Folin–Ciocalteu reagent, as previously described (Stratil et al., 2006), with several modifications. Extracts (50 μL) were reacted with 50 μL of Folin–Ciocalteu's phenol reagent for 3 min in a 96-well plate, treated with 150 μL of 2% sodium carbonate (w/v) per well, and then incubated for 2 h in the dark. Total phenolic contents are expressed as gallic acid equivalents (GAE) in milligrams per gram of dried sample.

Total flavonoid contents

The total flavonoid contents (TFC) in the extracts were determined by a spectrophotometric method as previously described (Shi et al., 2019) with some modifications. Extracts (20 μL) were mixed with 200 μL of diethylene glycol and 20 μL of 1 N NaOH in a 96-well plate and then incubated for 1 h at 37 °C. Absorbances were measured using a microplate reader at 420 nm, and concentrations were determined using a naringin standard calibration curve. Total flavonoid contents are expressed as naringin equivalents (NAE) in milligrams per gram of dried sample.

DPPH radical scavenging activities

DPPH (1,1-diphenyl-2-picrylhydrazyl) radical scavenging activities were used to evaluate the antioxidant activities of the samples, as previously described (Ramos et al., 2003) with minor modifications. Extracts (50 μL) were mixed with 150 μL of 0.3 mM DPPH dissolved in ethanol and then reacted for 30 min at room temperature. Decreases in absorbance were measured using a microplate reader at 515 nm, and DPPH radical scavenging activity was defined by Eq. (3):

$$\text{DPPH radical scavenging activity (\% inhibition)} = \left(1 - \frac{A}{B}\right) \times 100 \qquad (3)$$

where A is the absorbance of a sample treated with DPPH radical, and B is the absorbance of the DPPH blank.

Determination of flavonoids using UPLC-ESI-QTOF-MS and HPLC-DAD

Optimally extracted samples were concentrated on a rotary evaporator and reconstituted with distilled water to 100,000 ppm. The dissolved concentrates were diluted and filtered through a Whatman 0.45 μm PVDF filter (Whatman Inc., Piscataway, NJ, USA) to determine the phenolic composition. The analysis was performed as previously described (Kim et al., 2020) with modifications. Phenolics in RPEs were identified by UPLC-ESI-QTOF-MS. LC analysis was performed using a Waters ACQUITY Ultra Performance LC system. A Waters Acquity BEH C18 column (1.7 μm, 2.1 mm × 100 mm) was used with a mobile phase consisting of 0.1% formic acid in water (mobile phase A) and 0.1% formic acid in acetonitrile (mobile phase B) at a flow rate of 0.2 mL/min using the following gradient conditions: 5% to 10% B (0–10 min), then 10% to 36% B (10–45 min). Detection was performed at 280 nm. The injection volume was 5 μL, and the column oven temperature was 40 °C.
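Eqs. (2) and (3) above amount to two one-line calculations. A minimal sketch is given here, before the MS conditions are described (Python is an assumption, and the weights and absorbances are illustrative values, not measurements from this study).

```python
# Illustrative helpers for Eq. (2) and Eq. (3); the numbers below are examples only.

def extraction_yield(extract_weight_g: float, dried_sample_weight_g: float) -> float:
    """Eq. (2): freeze-dried extract weight as a percentage of the dried raw sample weight."""
    return extract_weight_g / dried_sample_weight_g * 100.0

def dpph_inhibition(sample_abs_515: float, blank_abs_515: float) -> float:
    """Eq. (3): percent inhibition = (1 - A/B) * 100, with A = sample and B = DPPH blank."""
    return (1.0 - sample_abs_515 / blank_abs_515) * 100.0

if __name__ == "__main__":
    print(f"Yield: {extraction_yield(0.19, 0.5):.1f} %")           # e.g., 0.19 g extract from 0.5 g peel
    print(f"DPPH inhibition: {dpph_inhibition(0.21, 0.88):.1f} %")  # absorbances read at 515 nm
```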
MS analysis was conducted on a Waters SYNAPT G2 system with an electrospray ionization (ESI) source operating in negative ionization mode from 100 to 1000 m/z. The MS conditions were cone voltage 40 V, capillary voltage −2.5 kV, ion source temperature 120 °C, and desolvation gas flow 800 L/h at a temperature of 350 °C.

Statistical analysis

All experiments were carried out in triplicate, and results are expressed as mean ± standard deviation (SD). Statistical analysis was performed using one-way analysis of variance (ANOVA) in SPSS ver. 20.0 (SPSS Inc., IL, USA). The significance of differences between means was determined using Duncan's multiple range test, and statistical significance was accepted at p < 0.05. Minitab 19 (Minitab Inc., PA, USA) was used to generate surface plots for the optimization experiments.

Results and discussion

In the UAE process, extraction temperature, time, and solvent type significantly influence the extraction yield, as well as the release of phenolic compounds from the solid matrix and the antioxidant activities of the extracts (Chemat et al., 2017). Generally, heating enhances the solubility of compounds and the diffusion coefficient of the solvent; however, some flavonoids, such as procyanidins, which are abundant in apples, can be degraded at temperatures above 50 °C (Escribano-Bailón and Santos-Buelga, 2004). In preliminary experiments for this study, TPC showed no significant difference between extraction at 4 °C and at room temperature (1.47 and 1.46 mg GAE/g DW, respectively), whereas TPC was lower after extraction at 50 °C. Generally, mixtures of organic solvents and water are more efficient than mono-solvents for phenolic extraction. In the preliminary experiments, TPC was 1.94-fold higher with 60% ethanol extraction than with distilled water extraction, and 20% and 40% ethanol extractions showed more than 20% lower TPC and TFC than 60% ethanol extraction. Considering the results of the preliminary experiments and the literature review, the extraction parameters and response variables were set as described above, and 15 experimental runs were performed according to the Box–Behnken design to identify the optimal conditions for extracting antioxidants from Ruby S apple peel using UAE. Table 1 shows the mean values of the extraction yield, TPC, TFC, and DPPH radical scavenging activity of RPEs obtained at the different extraction temperatures, extraction times, and ethanol concentrations.

Effect of UAE factors on extraction yield

As shown in Table 1, the extraction yield of RPEs obtained under the various experimental conditions varied significantly (p < 0.05) from 31.5 ± 0.17% (run 2) to 39.0 ± 0.20% (run 13). The maximum extraction yield was obtained at an extraction temperature of 30 °C, an extraction time of 15 min, and an ethanol concentration of 50% (abbreviated to 30 °C/15 min/50% EtOH hereafter). The minimum was observed at 20 °C/30 min/90% EtOH and was 7.5% lower than the maximum. These yields exceeded those obtained in another study on optimizing an extraction method (Nakamura et al., 2019), in which the extraction yield of plant extracts obtained by ultrasonic extraction with 0–100% ethanol ranged from 3.82 to 27.62% and decreased to 3.82% with 100% ethanol extraction. The predicted model for extraction yield was described in terms of coded factors by a second-order regression equation of the form of Eq. (1). The 3-dimensional response surfaces and 2-dimensional contours obtained from the prediction model are shown in Fig. 1.
In the response surface graphs, the fixed values were 30 °C/30 min/70% EtOH. According to the response surface analysis, the predicted stationary point was a maximum, and the maximum value was predicted to be 40.16% at an extraction temperature of 20 °C, an extraction time of 15 min, and an ethanol concentration of 50%. The higher the extraction temperature and the shorter the extraction time, the higher the extraction yield of RPEs, but these effects were not statistically significant. Generally, an increase in temperature leads to a higher extraction yield, which concurs with a previous study on optimizing UAE conditions for the extraction of hazelnut oil (Geow et al., 2018). Meanwhile, the extraction yield was significantly higher when the ethanol concentration was reduced (p < 0.001). Therefore, ethanol concentration, the only factor that had a significant linear effect on extraction yield in the response surface model, was also predicted to be the most significant factor.

Effect of UAE factors on TPC

The TPC of RPEs obtained under the different experimental UAE conditions ranged significantly (p < 0.05) from 1.38 ± 0.11 mg GAE/g DW (run 2) to 2.51 ± 0.06 mg GAE/g DW (run 1), as shown in Table 1. The maximum TPC was recorded at 40 °C/30 min/50% EtOH, and the minimum was observed at 20 °C/30 min/90% EtOH, which was significantly (1.8-fold) lower than the maximum (p < 0.05). This finding is similar to a previous study reporting a decreased TPC with 90% ethanol extraction compared with 50% ethanol when 'Picnic' apples were extracted with various concentrations of ethanol. The predicted model for TPC could be described in terms of coded factors using the following regression equation:

$$\begin{aligned} Y_2 = {} & 2.124 - 0.0629X_1 + 0.0100X_2 + 0.0413X_3 + 0.000507X_1^2 - 0.000137X_2^2 - 0.000494X_3^2 \\ & + 0.000501X_1X_2 + 0.000377X_1X_3 - 0.000197X_2X_3 \end{aligned}$$

The 3-dimensional response surfaces and 2-dimensional contours obtained using the predicted model are shown in Fig. 1.

Fig. 1 Response surface plots and 2-dimensional contour lines for the effects of extraction temperature (°C), extraction time (min), and ethanol concentration (%) on the extraction yield (%, Y1), TPC (mg GAE/g DW, Y2), TFC (mg NAE/g DW, Y3), and DPPH radical scavenging activity (% inhibition, Y4) of RPEs.

In the response surface graphs, the fixed values were 30 °C/30 min/70% EtOH. The predicted stationary point according to the response surface analysis was a saddle point, and the maximum value was predicted to be 2.63 mg GAE/g DW at an extraction temperature of 40 °C, an extraction time of 45 min, and an ethanol concentration of 50%. As the extraction temperature and time increased and the ethanol concentration decreased, the TPC of RPEs increased, especially with respect to temperature (p < 0.05) and ethanol concentration (p < 0.001). On the other hand, only the quadratic term of ethanol concentration (X3²) had a significant effect on TPC at the p < 0.01 level. As a result, ethanol concentration was found to be the most significant factor, owing to increased phenolic solubility (Prgomet et al., 2019). Extraction temperature also had a significant effect on TPC.

Effect of UAE factors on TFC

The mean TFC values of RPEs obtained under the various experimental UAE conditions are shown in Table 1. Values varied significantly (p < 0.05) from 2.72 ± 0.06 mg NAE/g DW (run 4) to 4.07 ± 0.02 mg NAE/g DW (run 3). The maximum and minimum TFC were observed at 20 °C/30 min/50% EtOH and 40 °C/30 min/90% EtOH, respectively, and the minimum TFC value was 1.5-fold lower than the maximum.
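The statements about the nature of the stationary points (maximum vs. saddle) can be checked directly from a fitted quadratic model: the stationary point solves ∇Y = 0, and its character follows from the signs of the eigenvalues of the quadratic-form matrix. The sketch below does this for the TPC model Y2 reported above (Python/NumPy is an assumption; and because the reported optimum lies on the boundary of the experimental region, the unconstrained stationary point is a diagnostic rather than the recommended setting).

```python
import numpy as np

# Coefficients of the fitted TPC model Y2, taken from the regression equation above.
b0 = 2.124
b_lin = np.array([-0.0629, 0.0100, 0.0413])                  # X1, X2, X3
b_quad = np.array([0.000507, -0.000137, -0.000494])          # X1^2, X2^2, X3^2
b_int = {"12": 0.000501, "13": 0.000377, "23": -0.000197}    # X1X2, X1X3, X2X3

# Symmetric matrix Q so that Y = b0 + b_lin . x + x^T Q x.
Q = np.diag(b_quad)
Q[0, 1] = Q[1, 0] = b_int["12"] / 2
Q[0, 2] = Q[2, 0] = b_int["13"] / 2
Q[1, 2] = Q[2, 1] = b_int["23"] / 2

x_stat = -0.5 * np.linalg.solve(Q, b_lin)   # stationary point: gradient b_lin + 2 Q x = 0
eigvals = np.linalg.eigvalsh(Q)             # all < 0: maximum; all > 0: minimum; mixed signs: saddle

y_stat = b0 + b_lin @ x_stat + x_stat @ Q @ x_stat
print("stationary point (X1, X2, X3):", x_stat.round(2))
print("eigenvalue signs of the quadratic form:", np.sign(eigvals))
print("predicted response at the stationary point:", round(float(y_stat), 3))
```

With the coefficients above the eigenvalues have mixed signs, consistent with the saddle point reported for the TPC model.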
The predicted model for the total flavonoid content of RPEs can likewise be described in terms of coded factors by a second-order regression equation of the form of Eq. (1). The 3-dimensional response surfaces and 2-dimensional contours obtained from the predicted model are shown in Fig. 1. In the response surface graphs, the fixed values were 30 °C/30 min/70% EtOH. The predicted stationary point according to the response surface analysis was a saddle point, and the maximum value was predicted to be 4.08 mg NAE/g DW at an extraction temperature of 20 °C, an extraction time of 18.03 min, and an ethanol concentration of 50%. An inverse relationship was observed between all extraction parameters and TFC, and this was significant at the p < 0.001 level for temperature and ethanol concentration. Meanwhile, all the quadratic terms of the extraction parameters (Xn²), excluding that of extraction time, were significant (p < 0.01), which indicated that the quadratic terms of extraction temperature (X1²) and ethanol concentration (X3²) influenced the total flavonoid content. Moreover, the interaction between extraction temperature and ethanol concentration (X1X3) was significant at the p < 0.001 level. Therefore, all parameters influenced the total flavonoid content, but extraction temperature and ethanol concentration were the major factors. Furthermore, these results agree with previously reported results (Alberti et al., 2014) that high temperatures degrade flavonoids in apples.

Effect of UAE factors on DPPH radical scavenging activity

The DPPH radical scavenging activity of RPEs obtained under the various experimental UAE conditions ranged significantly (p < 0.05) from 56.15 ± 2.12% (run 2) to 76.34 ± 0.03% (run 3), as shown in Table 1. The highest DPPH radical scavenging activity, like the highest TFC, was observed at 20 °C/30 min/50% EtOH. On the other hand, the lowest DPPH radical scavenging activity was obtained at 20 °C/30 min/90% EtOH, the same conditions at which the minimum extraction yield and TPC were observed, and was 20.19% lower than the maximum. This finding indicates that DPPH radical scavenging activity correlates not only with the extraction conditions but also with extraction yield, TPC, and TFC. In ultrasonic extraction, particle size, solvent-to-solid ratio, solvent type, ethanol concentration, sonication amplitude, and extraction time affect the extraction yield, antioxidant effects, TPC, and TFC. In addition, extraction yield has a significant correlation with TPC, TFC, and antioxidant effect (Lim et al., 2019). The predicted model for DPPH radical scavenging activity can also be described in terms of coded factors by a second-order regression equation of the form of Eq. (1). The 3-dimensional response surfaces and 2-dimensional contours obtained using the predicted model are shown in Fig. 1. In the response surface graphs, the fixed values were 30 °C/30 min/70% EtOH. According to the response surface analysis, the predicted stationary point was a saddle point, and the maximum value was predicted to be 76.26% at an extraction temperature of 20 °C, an extraction time of 22.58 min, and an ethanol concentration of 50%. DPPH radical scavenging activity tended to decrease as extraction time and ethanol concentration increased, with ethanol concentration having the most significant effect (p < 0.001). Only the quadratic term of extraction time (X2²) was significant, but the interaction between extraction temperature and ethanol concentration (X1X3) also had a significant effect on DPPH radical scavenging activity.
Therefore, all parameters affected the DPPH radical scavenging activity, with ethanol concentration having the strongest effect, which agrees with a previous report (Prgomet et al., 2019) on the factors affecting the antiradical power of extracts.

Model fitting and analysis of variance (ANOVA)

Analysis of variance (ANOVA) and multiple regression analysis were conducted (Table 2). The obtained models showed highly significant probability values. The models for TPC, TFC, and DPPH were highly significant at the p < 0.001 level, and the model for yield was significant at the p < 0.01 level. The regression coefficients for yield were significant only for the linear term of X3, whereas the other responses had significant coefficients for two or more sources among the linear, quadratic, and interaction terms. The linear and quadratic regression coefficients of X1, X3, and X3² for TPC had significantly low p-values. A similar trend was observed in a previous study (Alberti et al., 2014), in which the quadratic regression coefficient of methanol concentration was significant. The p-values of the linear and interaction terms X2, X3, and X1X3 were also significantly low for DPPH radical scavenging activity. Unlike the other responses, the regression coefficients for TFC were significant at the p < 0.001 level for all linear terms (X1, X2, and X3), for the quadratic terms X1² and X3², and for the interaction term X1X3. These findings indicate that extraction yield, TPC, TFC, and DPPH are affected by temperature, time, and solvent concentration singly or through their interactions, and this trend agrees with other studies on the optimization of UAE conditions (Alberti et al., 2014; Mohamed Ahmed et al., 2020). The results of the analysis performed to assess the fitness of the models are also summarized in Table 2. The regression coefficients of determination (R²) used to evaluate the quality of the models were 0.9502, 0.9811, 0.9996, and 0.9890 for yield, TPC, TFC, and DPPH, respectively. These results indicate that only 4.98%, 1.89%, 0.04%, and 1.1% of the total variability of the responses for yield, TPC, TFC, and DPPH, respectively, could not be explained by the models. The adjusted R² values of the models were 0.8606 for yield, 0.9469 for TPC, 0.9989 for TFC, and 0.9692 for DPPH. These models showed better results than an earlier study (Alberti et al., 2014) on optimizing the UAE of phenolic compounds from apples using methanol, which reported adjusted R² values of 0.80 for the TPC model, 0.82 for TFC, and 0.94 for DPPH radical scavenging activity. The lack-of-fit test (p > 0.05) indicated the suitability of each model for accurately predicting the variation (Jibril et al., 2019). All response models were suitable; the p-values obtained for the lack-of-fit test were 0.529 for yield, 0.618 for TPC, 0.320 for TFC, and 0.072 for DPPH radical scavenging activity. The prediction error sum of squares (PRESS) provides a measure of the deviation between fitted and observed values. In general, a smaller PRESS value indicates better model predictive ability (Kumar et al., 2019). The PRESS values were 31.23 for yield, 0.35 for TPC, 0.01 for TFC, and 65.25 for DPPH, which showed that the models for yield, TPC, TFC, and DPPH were suitable.

Optimization and verification of UAE conditions

Optimization was performed to determine the extraction conditions that simultaneously maximize the UAE extraction yield, total phenolic content, total flavonoid content, and DPPH radical scavenging activity of RPEs, as sketched below.
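This multi-response step corresponds to the desirability-function approach (Derringer–Suich) used by Minitab's response optimizer: each predicted response is mapped to an individual desirability between 0 and 1 ('larger is better' here), and the settings that maximize the geometric-mean composite desirability are selected, matching the individual and composite desirability values discussed below. The following sketch is a minimal illustration under those assumptions (Python is an assumption, and the `toy_models` placeholders stand in for the four fitted regression models; they are not the paper's equations).

```python
import numpy as np
from itertools import product

def d_larger_is_better(y, low, target):
    """Individual desirability for a 'maximize' goal (linear Derringer-Suich form)."""
    return float(np.clip((y - low) / (target - low), 0.0, 1.0))

def composite_desirability(x, models):
    """Geometric mean of individual desirabilities; models = [(predict_fn, low, target), ...]."""
    d = [d_larger_is_better(fn(x), low, target) for fn, low, target in models]
    return float(np.prod(d) ** (1.0 / len(d)))

# Placeholder predictors standing in for the four fitted response models (NOT the study's equations);
# the (low, target) pairs are the observed response ranges reported from Table 1.
toy_models = [
    (lambda x: 39.0 - 0.05 * (x[0] - 20) - 0.02 * abs(x[1] - 25) - 0.08 * (x[2] - 50), 31.5, 39.0),  # yield (%)
    (lambda x: 2.5 - 0.005 * abs(x[1] - 30) - 0.02 * (x[2] - 50), 1.38, 2.51),                       # TPC
    (lambda x: 4.1 - 0.02 * (x[0] - 20) - 0.025 * (x[2] - 50), 2.72, 4.07),                          # TFC
    (lambda x: 76.0 - 0.1 * (x[0] - 20) - 0.3 * (x[2] - 50), 56.15, 76.34),                          # DPPH (%)
]

# Coarse grid search over the experimental region: temperature (degC), time (min), ethanol (%).
grid = product(np.linspace(20, 40, 21), np.linspace(15, 45, 31), np.linspace(50, 90, 21))
best = max(grid, key=lambda x: composite_desirability(x, toy_models))
D = composite_desirability(best, toy_models)
print("best settings (degC, min, % EtOH):", tuple(round(v, 1) for v in best), "composite D =", round(D, 3))
```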
The polynomial models established in this study were used to obtain the optimal UAE conditions and to predict the values that simultaneously maximize the four responses. The optimal conditions for simultaneously maximizing all four responses (extraction yield, TPC, TFC, and DPPH radical scavenging activity) were 20 °C/25.3 min/50% EtOH. Under these extraction conditions, the predicted values were 39.00% for extraction yield, 2.44 mg GAE/g DW for TPC, 4.07 mg NAE/g DW for TFC, and 76.20% for DPPH radical scavenging activity. The 95% confidence intervals (CI) of the predicted values were 37.25–40.76% for extraction yield, 2.25–2.64 mg GAE/g DW for TPC, 4.03–4.10 mg NAE/g DW for TFC, and 74.10–78.31% inhibition for DPPH radical scavenging activity. In a previous study (Lee et al., 2018) in which Ruby S apple peels were extracted by a conventional method (low-temperature extraction for 24 h), the TPC was 8.76 mg GAE/g, which was higher than in this study, but the DPPH radical scavenging activity was higher in this study. The difference in TPC is attributed to the fact that, even within the same variety, the polyphenol content of the fruit varies depending on factors such as maturity, harvest time, color, and processing method (Rice-Evans et al., 1997). Individual and composite desirability are indicators that range from 0 (undesirable response) to 1 (desirable response) and are used to assess how well a combination of variables satisfies the goals (Maran et al., 2015); they indicate how well the chosen settings optimize a single response or a series of responses. In the present study, the individual desirability of all responses was close to 1 (yield: 1.00, TPC: 0.94, TFC: 1.00, DPPH: 0.99), and the composite desirability was 0.983, a near-ideal result. Experiments comparing the mean experimental and predicted values, to verify the suitability of the model, were performed in triplicate under the optimized conditions. The experimental results obtained under the optimal conditions were 38.17 ± 1.04% for extraction yield, 2.45 ± 0.01 mg GAE/g DW for TPC, 4.09 ± 0.05 mg NAE/g DW for TFC, and 77.52 ± 2.23% for DPPH radical scavenging activity. These results matched the predicted values well and fell within the 95% CIs of the predicted values, which confirmed the suitability of the devised model. In addition, the optimal conditions obtained here differed from those of an earlier study (Alberti et al., 2014) on the optimization of the extraction of phenolic compounds from apple that used methanol and acetone as solvents, but the absolute errors between predicted and observed values were similar or smaller in this study.

Identification and quantification of flavonoids in RPEs

The flavonoids detected in the RPEs are shown in a chromatogram (Fig. 2A) and are listed in Table 3. The phenolic compounds were identified by comparing retention times, UV spectra, and MS data with theoretical fragmentation data reported in the literature. Fourteen peaks were detected, including 4 flavonols, 2 flavan-3-ols, 2 dihydrochalcones, 1 flavone, and five unknowns (peaks 1, 2, 4, and 5). The four flavonols were detected at retention times from 17.82 to 21.38 min, namely quercetin-3-O-galactoside (hyperoside) at m/z 463.1078 (17.82 min), quercetin-3-O-glucoside (isoquercitrin) at m/z 463.1071 (18.44 min), quercetin pentoside at m/z 433.1034 (19.34 min), and kaempferol hexoside at m/z 447.1161 (21.38 min).
Hyperoside and isoquercitrin are isomeric forms and are indistinguishable by MS1 alone; they were therefore identified by referring to literature MS2 data obtained under identical conditions and by comparing retention times with authentic standards. Previous studies have reported that hyperoside and isoquercitrin are predominant in apple peel, including Ruby S peel. In agreement with earlier studies (Raffa et al., 2017; Stefova et al., 2019) that identified phenolic compounds in apple peel, pulp, and leaves, quercetin pentoside and kaempferol hexoside were detected in Ruby S apple peel. Quercetin pentoside is believed to be avicularin (quercetin 3-α-L-arabinofuranoside) or guajaverin (quercetin-3-O-arabinopyranoside) and is known to be present in apples (Sánchez-Rabaneda et al., 2004). All detected flavonols consisted of a quercetin or kaempferol aglycone bound to a pentose or hexose. Flavonol glycosides were the most frequently detected of the four subclasses, which agrees with a previous report that flavonol glycosides are predominant in apples (Raudone et al., 2017). (Epi)gallocatechin gallate (m/z 457.1934) and (epi)catechin gallate (m/z 441.2241) were the flavan-3-ols detected, at retention times of 6.99 and 15.26 min, respectively (Zhang et al., 2014). Epigallocatechin gallate and epicatechin gallate are isomeric with gallocatechin gallate and catechin gallate, respectively, and cannot be positively identified by conventional LC-MS/MS, although they are distinguishable by MS with hydrogen/deuterium exchange (Susanti et al., 2015); these identifications were therefore considered tentative. Furthermore, our identification of epicatechin and catechin derivatives in apple peel agrees with previous studies showing that total catechins are abundant in apple peel and are among its major phenolic compounds. Only one flavone derivative was found, at a retention time of 12.96 min, corresponding to a deprotonated molecular ion at m/z 563.2297 assigned to apigenin-7-(2-O-apiosyl-glucoside) (Tang et al., 2020). Apigenin and apigenin-7-glucoside have been reported in apple leaves, and the detected apigenin-7-(2-O-apiosylglucoside) (apiin) is a diglycoside of the flavone apigenin (Petkovska et al., 2017; Stefova et al., 2019). Two compounds were identified as phloretin derivatives. Peak 13 (phloretin pentosylhexoside) was detected at a retention time of 22.16 min and had a deprotonated molecular ion at m/z 567.1691; it was tentatively identified as phloretin-2-xylosylglucoside, which has been reported in apples (He et al., 2022; Montero et al., 2013). Peak 14 had a retention time of 24.62 min and an [M-H]− peak at m/z 435.1548, which was confirmed to be phloridzin using an authentic standard. Both compounds have been reported to be present in apple peel, flesh, and leaves in analytical studies of phenolic compounds in apples (Montero et al., 2013; Stefova et al., 2019). HPLC-DAD at 280 nm was performed to quantify hyperoside, isoquercitrin, and phloridzin, which were identified as major components by UPLC-QTOF-MS. The major flavonoid contents of RPEs extracted under the optimized conditions were compared with those of extracts obtained under a previously reported condition (P-RPEs; 25 °C/15 min/60% EtOH) to evaluate the efficiency of extracting the major components. The UV-Vis chromatograms are shown in Fig. 2B and C. The three major flavonoids were detected at relatively high concentrations in RPEs, and the HPLC-DAD peaks concurred with those in Fig. 2A.
These results are in line with the tendency for TFC to be higher under the optimum conditions (4.09 mg NAE/g DW) than under the previously reported conditions described above (3.83 mg NAE/g DW). For RPEs extracted under the optimized conditions, hyperoside was 1.8-fold higher (121.50 ± 3.17 μg/g) and isoquercitrin 2.8-fold higher (22.27 ± 2.06 μg/g) than in P-RPEs (hyperoside: 67.62 ± 3.17 μg/g; isoquercitrin: 7.84 ± 2.06 μg/g), and these differences were significant (p < 0.01). In particular, a large amount of phloridzin was extracted under the optimized conditions, 10 times higher (52.27 ± 2.81 μg/g) than in P-RPEs (5.24 ± 3.72 μg/g; p < 0.001), which concurs with Choi and Chung (2019), who reported that green apple extracted with 50% ethanol showed the highest contents of polyphenols and phloridzin. These results indicate that the optimal UAE conditions are effective for extracting antioxidants, especially flavonoids, and could serve as a useful basis for the large-scale optimization of extraction methods in the food industry.
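The HPLC-DAD quantification of hyperoside, isoquercitrin, and phloridzin against authentic standards is, in essence, external calibration: peak areas of standard dilutions are regressed against concentration, and the fitted line is inverted for the sample peaks. A generic sketch of that calculation follows (Python/NumPy is an assumption; the concentrations and peak areas are illustrative, not the study's calibration data).

```python
import numpy as np

def external_calibration(std_conc_ug_ml, std_area, sample_area):
    """Fit area = slope*conc + intercept on the standards, then back-calculate sample concentrations."""
    slope, intercept = np.polyfit(std_conc_ug_ml, std_area, 1)
    return (np.asarray(sample_area) - intercept) / slope

# Illustrative phloridzin calibration (ug/mL vs. peak area at 280 nm) -- example values only.
std_conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0])
std_area = np.array([1520.0, 7610.0, 15190.0, 38010.0, 76150.0])

sample_conc = external_calibration(std_conc, std_area, sample_area=[7950.0])
print(f"phloridzin in the injected solution: {sample_conc[0]:.2f} ug/mL")
# Converting to ug/g of dried extract would further use the reconstitution volume and extract mass.
```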
A note on Shimura subvarieties in the hyperelliptic Torelli locus We prove the non-existence of Shimura subvarieties of positive dimension contained generically in the hyperelliptic Torelli locus for curves of genus at least 8, which is an analogue of Oort's conjecture in the hyperelliptic case. Introduction Let M g (resp. A g ) be the fine moduli scheme of smooth projective curves of genus g (resp. of principally polarized abelian varieties of dimension g) with level-N structures, N being a fixed integer at least 3 so that the corresponding moduli problems are representable. We have the Torelli map j • : M g → A g , whose image T • g is called the open Torelli locus. The closure T g of T • g is called the Torelli locus, and T • g is known to be an open subscheme of T g . Note that A g is a connected Shimura variety, in which we can talk about Shimura subvarieties (cf. section 2). A Shimura subvariety M ⊂ A g is said to be contained generically in T g if M ⊂ T g and M ∩ T • g = ∅. It was conjectured that: Conjecture 1.1 (Oort). For g sufficiently large, the Torelli locus T g contains NO Shimura subvarieties of positive dimension generically. We refer to the recent survey [MO13] of Moonen-Oort and the references there for the history, motivation, applications and further discussion of this conjecture. There has been much progress towards the above conjecture, see for example [CLZ14,dJN91,dJZ07,GM13,Hai99,LZ14,Moo10], etc. Inside T g there is the hyperelliptic Torelli locus T H g corresponding to Jacobians of hyperelliptic curves (including the non-smooth ones) with T H • g := T H g ∩ T • g = j • (H g ) open in T H g , where H g ⊂ M g is the locus of smooth hyperelliptic curves. In this paper we study the following hyperelliptic analogue of Oort's conjecture: Theorem 1.2 (hyperelliptic Oort conjecture). For g > 7, the hyperelliptic Torelli locus T H g does not contain any Shimura subvariety of positive dimension generically. Similar to the Torelli case, here a Shimura subvariety M of A g is contained generically in T H g if and only if M is contained in T H g and the intersection M ∩ T H • g is non-empty. It is known that when g is small, there indeed exist Shimura subvarieties of positive dimension contained generically in the hyperelliptic Torelli locus, see for instance [GM13,Moo10,LZ14]. In particular, Grushevsky and Möller constructed in [GM13] infinitely many Shimura curves contained in T H 3 . Assuming the André-Oort conjecture for A g , we deduce from the theorem above the following finiteness result on CM points in the open Torelli locus T H • g . Corollary 1.3. For g > 7, if the André-Oort conjecture for A g is true, then there exists at most finitely many smooth hyperelliptic curves of genus g (up to isomorphism) with complex multiplication. Coleman's conjecture (cf. [Col87]) predicts that for g sufficiently large, there exists at most finitely many smooth curves of genus g (up to isomorphism) whose Jacobians are CM abelian varieties. The corollary above gives a partial answer to the hyperelliptic analogue of this conjecture. The main idea of the proof is as follows: Step 1 We reduce the problem to the case when M ⊂ A g is a simple Shimura variety, in the sense that it is defined by a connected Shimura datum (G, X; X + ) with G der a Q-simple semi-simple Q-group. The case of Shimura curves have been studied in [LZ14], and we assume that G der is not isomorphic to SL 2 over Q, hence the boundary components in the Baily-Borel compactification of M are of codimension at least 2. 
In particular, the closure M of M in the Baily-Borel compactification of A g is obtained by joining boundary components of codimension at least 2, using functorial properties of the Baily-Borel compactification. Step 2 Assume that M is a Shimura subvariety of A g contained generically in T H g . Let C be a generic curve in the closure M of M as above. Then we may take C meeting M \M trivially due to the codimension condition in Step 1. If T H sing g := T H g \ T H • g meets M also in codimension at least 2, then we may take C meeting T H sing g trivially, which contradicts the affineness of the hyperelliptic Torelli locus. Hence the intersection T H sing g ∩ M contains a divisor of M . Step 3 The locus of decomposable principally polarized abelian varieties A dec g is a finite union of Shimura subvarieties of A g , and A dec g ∩ T H g = T H sing g . Hence the intersection T H sing g ∩ M contains a divisor M ′ which is also a Shimura subvariety. We may then apply dimensional induction to M ′ , and use the result of [LZ14] to obtain the bound g > 7. In section 2 we collect briefly some facts about Shimura subvarieties, part of which is reproduced from [CLZ14]. In section 3 we prove the main result by completing Step 2 and Step 3 introduced above. Convention and notations. Denote by S the Deligne torus Res C/R G m,C . For k a commutative ring, linear k-groups stand for affine algebraic k-groups. For G a linear Q-group, write G(R) + for the neutral connected component of the Lie group G(R), and G(Q) + for the intersection G(Q) ∩ G(R) + . Preliminaries on Shimura varieties In this section we recall some facts about Shimura (sub)varieties, functorial properties of Baily-Borel compactification, and the notion of decomposable locus in A g . We follow [CLZ14,LZ14] closely for the basic notions of connected Shimura data and Shimura subvarieties. Definition 2.1 (connected Shimura data, cf. [De79], [Mil05]). (1) A Shimura datum is a pair (G, X) subject to the following constraints: SD1 G is a connected reductive Q-group, and X is a G(R)-orbit in Hom R−Gr (S, G R ) with S = Res C/R G mC the Deligne torus. We also require that G ad admits no compact Qfactors. SD2 For any x ∈ X, the composition Ad • x : S → G R → GL g,R induces on g = LieG a rational Hodge structure of type {(−1, 1), (0, 0), (1, −1)}. SD3 For any x ∈ X, the conjugation by x( √ −1) induces a Cartan involution on G ad R . It is known that X is a finite union of Hermitian symmetric domains, each connected component of which is homogeneous under G der (R) + . A morphism between Shimura data is a pair (f, When f is an inclusion of a Q-subgroup, the push-forward f * is injective, and we get the notion of Shimura subdata. In is a morphism of Shimura data, then it is easily verified that the pair (f (G), f * (X)) is a subdatum of (G ′ , X ′ ), called the image subdatum of the morphism. When (G, X) is a subdatum of (G ′ , X ′ ) with G a Q-torus, then X consists of a single point {x} and one writes (G, x) for simplicity. (2) A connected Shimura datum is of the form (G, X; X + ) where (G, X) is a Shimura datum and X + is a connected component of X. Notions like morphisms between connected Shimura data, connected Shimura subdata, etc. are defined in the evident way. Definition 2.2 (Shimura varieties and Shimura subvarieties). (1) A (connected) Shimura variety is a quotient of the form Γ\X + where X + is a connected component from some connected Shimura datum (G, X; X + ) and Γ ⊂ G der (R) + is an arithmetic subgroup. 
We write ℘ Γ : X + → Γ\X + for the uniformization map x → Γx. (2) For M = Γ\X + a Shimura variety as in (1), a Shimura subvariety of M is of the form ℘ Γ (X ′+ ) where X ′+ comes from some connected Shimura subdatum (G ′ , X ′ ; X ′+ ) ⊂ (G, X; X + ). If we choose an arithmetic subgroup Γ ′ of G ′der (R) + which is also contained in Γ, then ℘ Γ (X ′+ ) is the same as the image of In particular, if (T, x) is a connected subdatum with T a Q-torus in G, the Shimura subvariety we obtained is a point. In the literature it is often referred to as special points or CM points, because in the Siegel case, i.e. when (G, X) = (GSp V , H V ) and M = A g cf.Example 2.4 below, they correspond to CM abelian varieties via the modular interpretation. Remark 2.3. In standard references on Shimura varieties, like [De79] and [Mil05], a complex Shimura variety is defined adelically as the double quotient which acts on X + through its image in G ad (Q) + , which in turn is an arithmetic subgroup of G ad (Q) + by [Bor69] 8.9 and 8.11. The adelic setting is convenient for discussion of arithmetic properties like canonical models. However in our study it suffices to treat Shimura varieties as complex algebraic varieties, and from the viewpoint of Baily-Borel compactification the definition of connected Shimura varieties as Γ\X + given above is sufficient, because an arithmetic subgroup Γ of G der (R) + acts on X + through its image in G ad (R) + which is again an arithmetic subgroup of G ad (R) + by [Bor69]. Example 2.4 (Siegel modular varieties, cf. [CLZ14, Example 2.1.7]). Let (V Z , ψ) be a symplectic space over Z with ψ : V Z × V Z → Z an symplectic pairing of discriminant ±1. Writing (V, ψ) for the symplectic Q-space obtained by base change Z → Q, we get the connected reductive Q-group of simplectic similitude GSp V together with a homomorphism of Q-groups λ : We put H V for the set of polarizations of (V, ψ), i.e., the set of R-group homomorphisms h : S → GSp V,R such that h induces a C-structure on V R and ψ(h( √ −1)v, v ′ ) is symmetric and definite (positive or negative). The set H V is naturally identified with the Siegel double half-space of genus g = 1 2 dim Q V , and the pair (GSp V , H V ) is a Shimura datum. Let H + V be the connected component of H V corresponding to positive definite polarizations, and let Γ ⊂ Sp V (R) be an arithmetic subgroup. Then the quotient Γ\H + V is referred to as a Siegel modular variety of level Γ. This is motivated by the case when Γ = Γ(N ) = Ker(Sp V Z (Z) → Sp V Z (Z/N )) is the N -th principal congruence subgroup using the integral structure V Z , which gives Γ(N )\H + V as the moduli space of principally polarized abelian varieties of dimension g with level-N structure, constructed by Mumford in [FKM94]. As we have mentioned, in this paper we fix N ≥ 3 and put A g = A g,N to be the Siegel modular variety associated to the standard symplectic space on V Z = Z 2g . The condition N ≥ 3 assures the representability of the moduli problem. The Shimura datum is also written as (GSp 2g , H g ). For simplicity we only consider arithmetic subgroups that are torsion-free. The quotients Γ\X + are therefore smooth complex manifolds. Theorem 2.5 (Baily-Borel compactification, [BB66]. [Bor72]). Let M = Γ\X + be a Shimura variety. 
Then the following hold: (1) M is a normal quasi-projective algebraic variety over C, and it admits a compactification, called the Baily-Borel compactification M BB , which is universal in the sense that if M → Z is a morphism of complex algebraic varieties with Z projective, then it admits a unique factorization M ֒→ M BB → Z. (2) The boundary components of M , i.e., irreducible components of M BB \ M are of codimension at least 2, unless G der admits a Q-factor isogeneous to SL 2,Q . Corollary 2.6 (boundary components). Let M = Γ\X + be a Shimura variety defined by (G, X; X + ) and an arithmetic subgroup Γ ⊂ G der (R) + . Let M ′ ⊂ M be a Shimura subvariety defined by (G ′ , X ′ ; X ′+ ) ⊂ (G, X; X + ), and assume that G ′der admits no Q-factor isogeneous to Definition 2.7 (decomposable locus). A principal polarized abelian variety A over C is said to be decomposable if it is isomorphic to a product A = A 1 × A 2 with A 1 and A 2 both principally polarized of dimension > 0 such that the polarization of A is isomorphic to the one induced by the two polarizations on A 1 and A 2 respectively. We thus get the locus A dec g ⊂ A g of decomposable principal polarized abelian varieties. Example 2.8 (Shimura subvarieties of decomposable locus). Given (U, ψ U ) and (W, ψ W ) two symplectic Q-spaces of dimension 2m and 2n respectively, the direct sum V = U ⊕ W naturally carries a symplectic structure This gives rise to the following Q-group homomorphism which is an inclusion: the fibred product GSp U,W is defined by the two homomorphisms and it is the Q-subgroup of GSp V whose elements can be written as pairs (g U , g W ) with g U ∈ GSp U and g W ∈ GSp W acting on U and on W respectively with the same scalar of similitude We proceed to show that the Q-group homomorphism f U,W above extends to a morphism of Shimura data (GSp U,W , and h ′ W only differs by the conjugation of some element of Sp W (R) = Ker(GSp W (R) → G m (R)), and there exists . If (U, ψ U ) and (W, ψ W ) are given by standard integral symplectic structures U Z ≃ Z 2m and W Z ≃ Z 2n , then we naturally have V given by the standard integral one V Z ≃ Z 2g with g = m + n. The N -th principal congruence subgroup Γ V (N ) = Ker(Sp V Z (Z → Sp V Z (Z/N ))) naturally restricts to the congruence subgroup Γ U (N ) × Γ W (N ) of GSp der U,W (R) + via Sp U × Sp W = GSp der U,W ֒→ Sp V , and we get Γ U (N ) × Γ W (N )\H V U + × H + W as a Shimura subvariety of A g = Γ V (N )\H + V , which we denote as A m,n with m, n > 0 and m + n = g. Lemma 2.9. The decomposition locus A dec g is a finite union of Shimura subvarieties in A g . Proof. If A is a principally polarized abelian variety decomposed as A ≃ A 1 × A 2 with A 1 of dimension m and A 2 of dimension n = g − m, where we assume for simplicity m ≤ n, then the point in A g parameterizing A 1 × A 2 naturally lies in A m,g−m ⊂ A g in the sense of Example 2.8. Hence we get which is a finite union of Shimura subvarieties. Finally we mention the following useful fact: Lemma 2.10 (intersection of Shimura subvarieties). Let M ′ and M ′′ be Shimura subvarieties of an ambient Shimura variety M = Γ\X + defined by (G, X; X + ). Then M ′ ∩ M ′′ is a finite union of Shimura subvarieties of A g if the intersection is non-empty. Proof. Write ℘ : X + → M for the uniformization map. Let M ′ and M ′′ be defined by connected subdata (G ′ , X ′ ; X ′+ ) and (G ′′ , X ′′ ; X ′′+ ) respectively. Then the non-empty intersection is non-empty, and we can find γ ∈ Γ such that X ′+ ∩ γX ′′+ = ∅. 
Since (G ′′ , X ′′ ; X ′′+ ) and (γG ′′ γ −1 , γX ′′ ; γX ′′+ ) defines the same Shimura subvariety M ′′ , we may assume for simplicity that X ′+ ∩ X ′′+ = ∅. We thus take x ∈ X ′+ ∩ X ′′+ . Then the homomorphism x : S → G R factors through H R , with H being the neutral component of the intersection G ′ ∩ G ′′ . We claim that: (b) H admits an almost direct product H = H 0 H 1 H 2 where H 1 is generated by non-compact Q-simple normal semi-simple Q-subgroups of H, H 2 is generated by the compact ones, and H 0 is the connected center. Note that H 1 H 2 equals H der , and we put H ′ := H 0 H 1 . To show that x(S) is contained in H ′ R , it suffices to show that the intersection x(S) ∩ H 2,R is zero-dimensional. But the inclusion x(S) ⊂ H R implies that the conjugation by x( √ −1) induces a Cartan involution on H der R = H 1,R H 2,R , which fixes the compact part H 2,R , hence H 2,R is centralized by x(S), which is essentially the same arguments used in [UY14] (right before Lemma 3.6). (c) The connected reductive Q-group H ′ of G admits no compact semi-simple Q-factors. The inclusion x(S) ⊂ H ′ R implies that LieH ′ is a rational Hodge substructure of LieG as LieH ′ R is stabilized by the adjoint action of x(S) on LieG R . Hence the condition on Hodge types and on Cartan involution are both satisfied, and we get a Shimura subdatum (H ′ , Y ) with Y = H ′ (R)x being the orbit of x under H ′ (R) inside X. We further have a connected subdatum (H ′ , Y ; Y + ), with Y + = H ′ (R) + x the connected component of Y containing x, and the Shimura subvariety it defines is contained in M ′ and M ′′ passing through ℘(x). Proof of the main result As we have explained in section 1 we prove the main result by induction on the dimension of a given Shimura subvariety M contained generically T H g . The bound g > 7 comes from the following theorem proved in [LZ14, Theorem E]: Theorem 3.1 (Lu-Zuo). For g > 7, the hyperelliptic Torelli locus T H g does not contain generically any totally geodesic curves of A g . Here totally geodesic subvarieties (including the one-dimensional case, namely totally geodesic curves) are closed algebraic subvarieties in A g which are totally geodesic for the Kähler structure. Shimura subvarieties are always totally geodesic. See [LZ14] for further details. We start with the following property on non-simple Shimura data. Proof. Take any CM subdatum (T 2 , x 2 ) of (G 2 , X 2 ; X + 2 ). The pre-image of G 1 × T 2 under G → G ad ≃ G 1 × G 2 is a Q-subgroup H of G, which is mapped onto G 1 × T 2 . The kernel of H → G → G ad is central in G, whose connected component is a Q-subtorus of the center of G. Hence H is reductive. Write G ′ for the neutral component of H, and take x = (x 1 , x 2 ) ∈ X + ≃ X + 1 × X + 2 for some x 1 ∈ X + 1 . Viewing x as a point in X + for the datum (G, X; X + ), we see that x(S) ⊂ H R because when viewing x i as a point of X + i of the datum (G i , X i ; X + i ) we have x 1 (S) ⊂ G 1,R and x 2 (S) ⊂ T 2,R . In particular x(S) is contained in G ′ R the neutral component of H R . The Q-group G ′ is a reductive Q-subgroup of G, whose adjoint quotient is G 1 , admitting no compact Q-factors. 
We claim that the pair (G ′ , X ′ = G ′ (R)x) is a Shimura subdatum of (G, X): first of all the action of S on LieG ′ R by the adjoint action coincides with the action of S on LieG R , which stabilizes LieG ′ R because x(S) ⊂ G ′ R ; the remaining conditions on Hodge types and Cartan involutions are valid for x, and clearly invariant when we conjugate x by any element g ∈ G ′ (R).
Subsequent Actions Engendered by the Absence of an Immediate Response to the Proposal in Mandarin Mundane Talk When there is no immediate response after a proposal and normally the silence is longer than 0.2 s, the proposer would take subsequent actions to pursue a preferred response or mobilize at least an articulated one from the recipient. These actions modulate the prior deontic stance embedded in the original proposal into four trends as follows: (1) maintaining the prior deontic stance with a self-repair or by seeking confirmation; (2) making the prior deontic stance more tentative by making a revised other-attentiveness proposal, providing an account, pursuing with a tag question, or requesting with an intimate address term; (3) making the prior deontic stance more decisive by making a further arrangement (for the original proposal), closing the local sequence, or providing a candidate unwillingness account (for the recipient's potential rejection); and (4) canceling the prior deontic stance by doing a counter-like action. Additionally, these trends inherently embody a decisive-to-tentative gradient. This study would penetrate into the phenomena occurring in Mandarin mundane talk with the methodology of Conversation Analysis to uncover the underflow of deontic stance. INTRODUCTION In talk-in-interaction, participants take turns to talk with minimal gap and overlap. Operations of the turn-taking follow such rules as Rule 1a, 1b, and 1c, and Rule 2 1 (Sacks et al., 1974). However, it is accountable if Rule 1a fails, or the recipient fails to take the turn to give a coherent response (Pomerantz, 1984). For instance, the recipient may look blank or questioning, or make hesitating noises such as Uhs, Ums, and Wells. The recipient's failure would be treated as having some problem in responding or as projecting a high probability of a dispreferred response since a preferred one will normally be produced immediately (Pomerantz and Heritage, 2013). At this moment, the speaker would pursue a response by "clarifying, reviewing the assumed common knowledge, and 1 Levinson (1983: 298) concluded that "At each initial recognizable end of a turn-constructional unit (TCU) or transition relevance place (TRP), current speaker can select a recipient to talk next (Rule 1a); or current speaker doesn't select a recipient as next, then any (other) party may self-select, and the first one gaining the rights to the next turn (Rule 1b); or current speaker has not selected a next and no other party self-selects under Rule 1b, then current speaker may (but need not) continue (Rule 1c). Rule 2 means when rule 1c has been applied by current speaker, then at the next TRP rule 1a-c apply, and recursively at the next TRP, until speaker change is effected". modifying one's position" (Pomerantz, 1984: 153). This study focuses on similar phenomena in proposal sequences in Mandarin mundane talk. It is found that in most cases when the recipient fails to make "an immediate response" (Lee, 2013: 417) to a proposal and generally the duration of silence 2 is longer than 0.2 s (Stivers et al., 2009;Roberts et al., 2015), the proposer would take subsequent actions to pursue a preferred response or mobilize at least an articulated one from the recipient. 
As a social action, proposing is different from other social actions such as requesting, offering, inviting, or suggesting, and "proposing invokes both speaker and recipient in (a) the decision task and (b) the ensuing activity in a way that is mutually beneficial" (Stivers and Sidnell, 2016: 148). Prior studies on proposal sequences generally focus on (1) actions prior to the proposing turn (Drew, 1984;Couper-Kuhlen, 2014;Robinson and Kevoe-Feldman, 2016); (2) the initial actions of proposing (Drew, 2013;Stevanovic, 2013;Toerien et al., 2013;Couper-Kuhlen, 2014;Kushida and Yamakawa, 2015;Robinson and Kevoe-Feldman, 2016;Stevanovic and Monzoni, 2016;Stivers and Sidnell, 2016;Stevanovic et al., 2017;Stivers et al., 2017;Yu and Hao, 2020;Thompson et al., 2021); (3) responses to a proposal (Davidson, 1984;Heritage, 1984a;Stevanovic, 2012b;Stevanovic and Peräkylä, 2012;Ekberg and LeCouteur, 2015;Stevanovic and Monzoni, 2016); and (4) subsequent actions after a response to a proposal (Stevanovic, 2012a;Maynard, 2016). For example, Drew (1984: 146) concluded that if a speaker wishes to invite a recipient to come over or do something together, one of the options available is "to hint at an opportunity for some sociability, and leave it to the recipient to propose an arrangement explicitly." Stivers and Sidnell (2016: 148) examined two common ways that speakers propose a new joint activity with "Let's X" and "How about X, " in which "Let's constructions treat the proposed activity as disjunctive with the prior, while How about constructions treat the proposed activity as modifying the ongoing activity." Additionally, besides an affirmative response token, "a second unit of talk is required where the recipient indexes her stance toward the fulfillment of the remote proposal" (Lindström, 2017: 142). Although when encountering a potential or an actual rejection, a proposer "may then display an attempt to deal with this possibility or potentiality through the doing of some subsequent version" (Davidson, 1984: 124, 125). Moreover, it is observed that the subsequent actions or versions conducted by the proposer in this study are highly related to the deontic stance, which refers to the display of "the capacity of an individual to determine action" (Stevanovic, 2018: 1). In addition, these actions modulate the prior deontic stance embedded in the original proposal into four trends as follows: (1) maintaining the prior deontic stance with a self-repair or by seeking confirmation; (2) making the prior deontic stance more tentative by making a revised other-attentiveness proposal, providing an account, pursuing with a tag question, or requesting with an intimate address term; (3) making the prior deontic stance more decisive by making a further arrangement (for the original proposal), closing the local sequence, or providing a candidate unwillingness account (for the recipient's potential rejection); and (4) canceling the prior deontic stance by doing a counter-like action. Additionally, these trends inherently embody a decisive-to-tentative gradient and are analyzed in the following sections through related sequences. By examining the actual production of proposers' subsequent actions, we hope to uncover the underflow of deontic stance modulated in and through proposal sequences. 
MATERIALS AND METHODS Using everyday telephone talks in Mandarin Chinese as research materials, this study adopts the method of Conversation Analysis (hereafter CA) to investigate the occurrence of subsequent actions in proposal sequences. "The central domain of data with which conversation analysts are concerned is everyday, mundane conversations" (Heritage, 1984a: 238). The whole database from which targeted proposal sequences are selected consists of 662 intact Mandarin mundane telephone talks (33 h, 35 min, 44 s) collected during 2014-2022 among classmates, friends, lovers, couples, relatives, and parentchild, out of which 112 intact telephone calls (4 h, 59 min, 36 s) contain 226 proposal sequences. Then, 34 targeted proposal sequences have been selected, which include the phenomena under investigation. All the data are transcribed according to CA conventions (Hepburn and Bolden, 2013). One important research focus of CA is social action (Drew, 2013), which is implemented on a turn-by-turn basis in conversation. Most basically, an action sequence consists of a first pair part (FPP) and a second pair part (SPP), and the action enacted by an FPP normatively requires one of the alternative types of responsive actions by an SPP (Schegloff, 2007). For example, the recipient may accept or reject a proposal, or fail to respond to it. This study examines the recipient's failure to respond to a proposal. In facing the recipient's absence of an immediate response to a proposal, the proposer would take subsequent actions to solicit or mobilize a preferred or vocal response from the recipient. These actions modulate the prior deontic stance. Table 1 shows the modulated deontic trends and the distribution of these subsequent actions. SUBSEQUENT ACTIONS ENGENDERED BY THE ABSENCE OF AN IMMEDIATE RESPONSE The proposer would do a self-repair or seek confirmation to clarify the original proposal or reexamine the assumed common knowledge in the first trend, or make a revised otherattentiveness proposal to modify his/her position in the second trend, to pursue a preferred response or at least mobilize an articulated one in this research. These solutions to the absence of an immediate response are identical to the findings of Pomerantz (1984) on assertions. However, more subsequent actions have been identified in proposal sequences in this research. In addition, they will be illustrated with examples in the following sections. Maintaining the Prior Deontic Stance There are usually two ways to maintain the prior deontic stance, which are doing a self-repair and seeking confirmation. They are commonly conducted by the proposer to fix the interactional problems in terms of the speaker's or the recipient's epistemic domain (Heritage, 2013). By doing so, the proposer not only deals with the potential interactional problems but also provides the recipient with another chance to respond in the face of a growing silence, which may indicate an impending rejecting and disaffiliating response (Sacks et al., 1974). Doing a Self-Repair When engaging in a conversation, interactants frequently encounter problems in hearing, speaking, and understanding. Under such circumstances, the conversational repair is resorted to by interactants to ensure "that the interaction does not freeze in its place when trouble arises, that intersubjectivity is maintained or restored, and that the turn and sequence and activity can progress to possible completion" (Schegloff, 2007: xiv). 
In addition, there is "a strong empirical skewing" (Schegloff et al., 1977: 362) toward self-repair than other-repair. In example (1), Liang and Li are friends and college students. Liang has promised to lend her library card to Li, and now they are making an arrangement to transfer the card. (1) 14LJ_JKJM 39 Liang: → yaobu-yaoburan ni gen women yikuai chifan ba me. Otherwise-Otherwise you and us together eat PRT PRT. Or you can have a meal with us together. 40 (1.0) After they decide to meet each other at Liang's dormitory, from which Li is distant (data not shown), Liang proposes having a meal together and transferring the card passingly, thus reducing Li's cost. However, no verbal response is produced but occurs a noticeably long silence (1.0 s) in line 40 that may indicate a certain difficulty for Li. Then, in line 41, Liang selfrepairs the pronoun "women" in line 39 with "wo:. e:, Liu he Shi." (Schegloff et al., 1977), which indicates that Liang treats the occurrence of no immediate response as the result of her ambiguous expression in line 39 since the identity of "women" should be established "at the time the pronoun is used" (Li and Thompson, 1989: 132). Therefore, what Liang is doing with the self-repair is to inform Li of the specific ones having a meal together to address the possible interactional problems, instead of modulating the prior deontic stance embedded in the original proposal. In this regard, doing a self-repair does not impose Li to accept the proposal, and the prior deontic stance is maintained. Seeking Confirmation By seeking confirmation, the proposer displays his/her relatively low epistemic stance compared with the recipient (Heritage, 2013). In this way, the recipient has been involved to reexamine the assumed common knowledge related to the original proposal. In the following example, the husband (Wei) and his wife (Jiu) are discussing how to accomplish the wife's eye-brow shaping and their lunch arrangement with their child. 29 → Yaobu jiu shi zanmen dai shang haizi, Otherwise just be we take up kid, Or we could take our kid, 30 → wo:, wo he haizi dengde ni. I:, I and kid wait you. The kid and I will wait for you. Yeah. In the talk before line 28 (data not shown), they have already talked about other arrangements but not fully agreed with each other. Then in lines 28-33, the husband makes another proposal, yet the wife does not respond immediately. Instead, a silence of 1.6 s in line 34 occurs. Through confirmation seeking in line 35 to check if his proposal's premise is valid or if the wife's eye-brow shaping is still on her "wish list, " the husband treats the silence as an interactional problem. Therefore, this action serves to make the husband see if the original proposal is appropriate, instead of modulating the prior deontic stance. In this regard, seeking confirmation does not impose the wife to accept the proposal, and the prior deontic stance is maintained. In summary, by doing self-repair or seeking confirmation, the proposer creates another opportunity for the recipient to provide a preferred response or at least an articulated one, thereby solving the discontinuity in the ongoing talk and maintaining the prior deontic stance. Making the Prior Deontic Stance More Tentative According to the data, the proposer produces four kinds of subsequent actions to make the prior deontic stance more tentative. 
These actions include making a revised otherattentiveness proposal, providing an account, pursuing with a tag question, and requesting with an intimate address term. Making a Revised Other-Attentiveness Proposal Speakers could conduct a self-repair with "I-mean utterance" (Maynard, 2016: 74) to display other-attentiveness. For example, in a proposal sequence, the speaker produces a repairformatted utterance "to shift attention from the speaker and his desires, to the recipient and his or her needs or experiences" (Maynard, 2016: 88, 89). Similarly, in dealing with an absence of an immediate response in an assertion sequence, the speaker would change his/her position from the one (s)he had just asserted with the remedy that "apparently is directed toward a problem caused by the speaker having said something that was wrong" (Pomerantz, 1984: 162). In proposal sequences in this research, the proposer would revise his/her original proposal when there occurs a noticeably long silence. In addition, the above three findings all focus on the similar phenomenon of other-attentiveness occasioned by the silence or the delay which "is a general device which permits potentially 'face-threatening' rejections to be forestalled by means of revised proposals, offers and the like" (Heritage, 1984a: 275, 276). In example (3), Yao and Li are friends. They have both taken part in a Special Offer launched by a bank, which is supposed to return a certain amount of money to their phone bills in 15 days. However, they have not received any money after waiting for more than 15 days. Since Yao has called the head office of the bank, Li asks for more information about it. Then, Yao tells Li the solution given by a bank staff, which obviously cannot explain the delayed date or provide an exact date of returning the money (data not shown). (3) 14DY_HFDY In lines 176-178, Li complains about the delaying date and related issues since "part of how a complaint is formed is to provide for the recognizability of the offender's wrongdoings" (Pomerantz, 1986: 221). In responding to Li's complaint, Yao acknowledges it merely with a minimal response in line 179 without doing any other action. In addition, there is a silence of 0.8 s in line 180, which probably projects disaffiliation from Li. Then, Yao continues to provide an account or a disclaimer in line 181 informing that he cannot figure it out. Yet after another silence of 0.3 s in line 182, Yao proposes waiting to solve the problem in line 183, which follows a silence of 0.9 s in line 184 indicating that Li is not satisfied with the proposal. After the silence, Yao revises his proposal to call the branch some day in lines 185 and 188. In addition, Li gives his response in line 190, which indicates that the revised proposal in line 188 is more acceptable. Therefore, when the recipient fails to provide an immediate response, the proposer would revise the original proposal to make it more other-attentive. In doing so, the recipient is provided with a new opportunity to respond and also faces less pressure or imposition to accept the revised proposal. Thus, the prior deontic stance is modulated to be more tentative. Providing an Account An account can be provided by speakers to "modify (e.g., change, explain, justify, clarify, interpret, rationalize, (re)characterize, etc.), either prospectively or retrospectively, other interlocutors' understandings or assessments of conduct-in-interaction in terms of its 'possible' breach of relevance rules" (Robinson, 2016: 15, 16). 
Therefore, when there is a dispreferred response or projection of a dispreferred response, the speaker would provide an account to make his/her proposal more justified and easier to be accepted. Thus, the prior deontic stance is modulated to be more tentative, and the imposition embedded in the original proposal is downgraded. In example (4), Wang and Han are classmates and friends. Wang is calling Han to talk about the driving license test. After Han confirms that the test will be arranged only if they are sure to have a score of 95 (a full score is 100) in the practice test, Wang expresses her worry about it (data not shown). (4) 15HB_MNKS After asking if Wang has reviewed what is related to the test in line 218, Han proposes in line 219 that they can have a look at the driving license school. In addition, immediately after, there occurs a silence of 0.5 s in line 220. What Han is doing after the silence is providing an account in line 221, which makes her proposal more convincible and more explicitly otherattentiveness, since Han mentions that they would better let the officers know their presence at the driving license school, which would probably indicate a positive attitude toward the test from the perspective of the officers. In doing so, the proposer attempts to make the original proposal easier to be accepted, which makes the prior deontic stance more tentative rather than more decisive. Pursuing With a Tag Question "A statement can become a question by the addition of a short A-not-A question form of certain verbs as a tag to that statement" (Li and Thompson, 1989: 546). Tag questions can not only transform a statement into an interrogative in a proposal sequence but also decrease the deontic stance embedded in the original proposal in the meantime. In example (5), Xing and Ying are friends, and Xing is calling Ying to meet her at the hotel as they have planned since Ying has booked the hotel rooms for Xing and another friend. However, Ying is still at home right now when Xing and the other friend are already on the way to the hotel in a taxi. Then comes the following talk. Oh. Oh. After making sure that Xing is in a taxi right now through lines 21 and 23, Ying proposes handing over the hotel receipts in a place, which is convenient to them in lines 24-26. However, there occurs a silence of 0.5 s in line 27, after which Ying pursues Xing's response with a tag question "xing bu xing?" in line 28. This tag question is meant to mobilize the addressee to make a preferred response or at least an articulated one. After the production of the other-initiated self-repair sequence (partial repeat) (Robinson and Kevoe-Feldman, 2010) in lines 29-30 and the silence (0.8 s) in line 31, Xing responds to Ying with merely an acknowledgment token (Heritage, 1984b) in line 32. The tag question in line 28 makes a potential inter-turn gap into an intraturn pause (Schegloff, 2016). The inter-turn gap indicates that the recipient fails to produce an immediate response where (s)he is interactionally expected to give one. Thus, by pursuing with a tag question, the proposer makes the silence something that is caused by him/herself, and the recipient acquires a new chance to respond. Besides, the tag question in line 28 works to display more contingency (Curl and Drew, 2008) to the recipient's acceptance compared with the original proposal without a tag question, thus making the prior deontic stance more tentative. 
Besides the employment of "xing bu xing (similar to 'ok')" in line 28 in example (5), there are other tag questions or similar tags occurring in the data, which include "zen me yang (what do you think), " "ha (ok), " "ke yi wa (is that ok), " "shi bu shi (similar to 'right'), " "a (right), " and "hao ba (ok)". In addition, they all function to make the prior deontic stance more tentative and more or less downgrade the imposition displayed in the original proposal. Requesting With an Intimate Address Term As put forward by Sacks (1964) in his lecture, the Membership Inference-Rich Representative Device (M.I.R.) means that "a great deal of the knowledge that members of a society have about the society is stored in terms of these categories" (Jefferson, 1989: 90). Also, "When people recognize someone as an incumbent of a category such as student, mother, or friend, they make inferences regarding the rights and responsibilities, typical conduct and motives, and possibly personal characteristics of the incumbent" (Pomerantz and Mandelbaum, 2005: 152). In the following example, the speaker initiates a proposal in lines 06-08. When there is no verbal response in line 09, the proposer produces an intimate address term, which seems to transform the original proposal into a request to some extent, thus invoking the responsibility of a brother to accept the original proposal produced by his little sister. Therefore, the prior deontic stance has been decreased when the original proposal has been altered into one that needs the recipient's permission, rather than something that can be jointly decided by both sides. .h We now just go PRT. Now go get. Let's go now. Go and get it now. 07 → qu Rencai Shichang nashang wa. ni shuo le. Go Talent Market get PRT. You say PRT. Go to Talent Market and get it. How do you think. 08 → wo-xiawu huilai hai neng shuishangyijiao le. I-afternoon back still can have a nap PRT. Having a nap is still possible after we are back from Talent Market this afternoon. Yeah. Can. Yeah. This can be done. In example (6), the wife (Jing) is calling her husband (Quan) to propose going out and retrieving an archive right now in lines 06-07. The proposal is performed with an account in line 08 which includes a self-repair transforming a possible self-attentiveness account (abandoned as the cut-off in line 08 indicates) into an other-attentiveness account using a nonperson reference design, which could invoke the speaker and the recipient as the beneficiaries of the proposal (as they could both have a rest after retrieving the archive). After a long silence of 1.2 s in line 09, the wife produces an intimate address term "gege." in line 10, which indicates their relationship in terms of the rights and responsibilities between them. Thus, the husband is supposed to indulge his wife by doing what his wife asks him to do. Then immediately in the next turn (line 12), the husband acknowledges that address term and grants what the wife pursues with "keyi.", which indicates his higher deontic rights. Therefore, the intimate address term used by the wife transforms the action from proposing to requesting, making the original proposal sound like something that requires the husband's permission, thus giving the husband more rights to decide. In the meanwhile, the wife's prior deontic stance has been made more tentative. In conclusion, the proposer would make a revised otherattentiveness proposal to try to satisfy both participants. 
Also, by providing an account, the proposer can make the original proposal more justified, more convincing, and easier to be accepted by the recipient. Additionally, the proposer can also lower his/her prior deontic stance by pursuing with a tag question or can grant the recipient more deontic rights by requesting with an intimate address term. These subsequent actions all make the proposer's prior deontic stance more tentative, helping to pursue a preferred response or at least to mobilize an articulated one from the recipient. Making the Prior Deontic Stance More Decisive It is observed that the prior deontic stance can be made more decisive by making a further arrangement, closing the local sequence, and providing a candidate unwillingness account. Making a Further Arrangement People make arrangements to complete agreed decisions, such as a request, an offer, or a proposal. Thus, the whole sequence, e.g., a whole proposal sequence, is "achieved across a series of adjacency pairs, which are nonetheless being managed as a coordinated series that overarches its component pairs" (Heritage and Sorjonen, 1994: 4). Nevertheless, when there is no such agreed decision, the proposer would still propose a further arrangement. In example (7), the husband (Wei) is calling his wife (Jiu) to ask if she wants to go out with him to get their car washed and to buy some fish food. The wife agrees but she has to put the baby to bed first. Then, the husband asks how long it will take. The wife does not give a direct answer. Instead, she tells her husband she is still feeding the baby right now, indicating that she cannot provide a precise time as the answer to his question. However, the husband still pursues by providing a candidate time and asks for his wife's confirmation in line 20, and then comes the following talk. After the husband gets her answer in line 22, which is somewhat vague: "wo chi le fan.", there is a silence of 0.3 s in line 23. Then, the wife proposes that they can meet each other in approximately 20 min in line 24. The husband does not immediately respond to the proposal, since there is a silence of 0.4 s in line 25. Nevertheless, the wife continues to make a further new proposal about the meeting location, using an imperative ending with the particle "ha." in line 26 to solicit the recipient's agreement or confirmation (Cui, 2011). After another 0.3 s in line 27, the husband accepts his wife's two proposals in line 28. Despite the projection of a dispreferred response by the silence (Pomerantz and Heritage, 2013) in line 25, probably because the time that the wife proposes is doubled as we can know from line 20, the wife continues to make a further new proposal without acquiring the husband's agreement, thus making her first proposal as an accepted one or one that does not need the husband's agreement. Therefore, the deontic stance displayed in the first proposal has been made much more decisive by leaving almost no space for the husband to turn it down. In a word, making a further arrangement without the recipient's agreement to the original proposal enables the proposer to make the prior deontic stance more decisive, thus adding an imposition to the recipient than the original proposal does. 
Closing the Local Sequence A proposal needs to be accepted or agreed on by the recipient before it can be fully realized, and recipients "typically make action declarations ('yeah, let's take it') and/or positive evaluations ('yeah, that's good')" (Stevanovic, 2012b: 843) to indicate the acceptance of a proposal. In addition, when there is no further arrangement after the recipient's acceptance of a proposal, or in other words, after the second pair part to its prior action, a sequence-closing third (SCT) such as "oh", "okay", and assessment would be delivered to "move for, or to propose, sequence closing" (Schegloff, 2007: 118). However, if the local sequence has been closed by the speaker before the recipient's explicit acceptance or rejection, then the recipient's rights to decide have been partially deprived at that very moment. In the meanwhile, the proposer's higher decisive deontic stance is highlighted, compared with the prior deontic stance embedded in the original proposal. In example (8) Jun proposes talking with Yun in the afternoon in line 99, after abandoning something in the middle of a telling in lines 96-97, as indicated by "ranhou" in line 96. Yet the silence (0.6 s) occurs right after the proposal in line 100. Then without receiving any agreement or rejection from his wife, the husband closes the local sequence with " • en. • " in a low voice in line 101, and it sounds like the husband is acknowledging the wife's acceptance (which obviously does not occur) of his original proposal, making it an established one without the wife's agreement. Therefore, the wife's rights to decide have been deprived at that moment. In addition, in doing so, the husband makes the prior deontic stance embedded in the original proposal more decisive, leaving no space for his wife to decide at that very moment. Providing a Candidate Unwillingness Account First actions, such as requests and invitations, inherently prefer affiliative responses. When disaffiliative responses are produced, some remedial work will be performed, in that these responses will threaten or damage interpersonal relationships between participants. Also, accounts are frequently offered as a remedy for doing disaffiliative actions (Heritage, 1984a;Schegloff, 2007;Pomerantz and Heritage, 2013). Regularly, recipients tend to provide a "no fault" account (Heritage, 1984a: 272) instead of an unwillingness account. For instance, such accounts as (s)he has already got an appointment, or (s)he is under the weather, etc. are treated as no-fault ones to reject or disagree with the speakers' first actions. However, in the following example, it is the first speaker (the proposer) rather than the recipient that provides a candidate unwillingness account when the silence occurs. In example (9), a couple is discussing dinner arrangements. (9) 19H_HGYY In line 22, the husband (Quan) proposes eating hotpot with his wife (Jing). However, Jing does not give an immediate response, and a significantly long silence (3.2 s) occurs in line 23, which may be a harbinger of a dispreferred response. At this moment, the husband provides a candidate unwillingness account in line 24. Grammatically, the question "bu xiang chi?" is designed as a polar question, which projects a response of a particular lexical item like "yes" or "no, " or other equivalents or a type-conforming response (Raymond, 2000(Raymond, , 2003. 
If the wife does acknowledge the candidate unwillingness account provided by the husband, then disaffiliativeness would arise, in that the wife's unwillingness will hinder the achievement of the proposal. On the contrary, the wife would probably not agree with the candidate unwillingness account so as not to indicate disaffiliativeness. In this regard, the action conducted by the husband may impose the wife to accept the proposal, making the prior deontic stance embedded in the original proposal more decisive. Observably, the wife denies the husband's inquiry about the candidate unwillingness account in line 26, thus indicating her acceptance of the husband's imposition. Therefore, to make the recipient accept the original proposal, the proposer would make a further arrangement or close the local sequence to ignore the silence or the absence of an immediate response from the recipient, or would provide a candidate unwillingness account to impose the recipient to accept. Also, these actions make the prior deontic stance embedded in the original proposal more decisive. Canceling the Prior Deontic Stance In a proposal sequence, when there is silence or the absence of an immediate response, the proposer may withdraw his/her proposal with a counter-like action, and at the same time, cancels his/her prior deontic stance. In interaction, the SPP of an adjacency pair is not always produced after the FPP (Schegloff, 2007). Between them, there often occur insert expansions or even counters that are a special kind of alternative to the SPP. Counters "do not serve to defer the answering of the question; they replace it with a question of their own. They thus reverse the direction of the sequence and its flow; they reverse the direction of constraint" (Schegloff, 2007: 17). However, the following example suggests that the proposer rather than the recipient would do the so-called "counter" or counter-like action to address the absence of an immediate response. In example (10), the husband (Wei) calls to tell his wife (Jiu) to meet in a while. After they have made the arrangement, the wife proposes eating dumplings at home in line 29. Yeah. Ok. Then just call it a day. Receiving a silence of 0.9 s in line 30, the husband responds merely with an acknowledgment token "ao." in line 31, which is not enough to indicate a full acceptance of a proposal. Then, in the same turn, the husband proposes discussing this issue when they meet. But the wife does not make an immediate response. Then, after a silence of 0.2 s in line 32, the husband withdraws his proposal with the employment of "sha dou xing." in line 33 designed as an Extreme Case Formulation (Pomerantz, 1986), which literally indicates that the husband would accept whatever the wife proposes. Thus, the husband abandons his deontic rights to decide and reverses the rights to his wife with a counter-like action. In doing so, his prior deontic stance is canceled. Thus, giving up his/her own deontic rights to decide by withdrawing the original proposal, the proposer actually transfers the rights as well as the obligations originally shared and fulfilled by both participants to the recipient. RESULTS The above examples illustrate that the relatively long silence or the absence of an immediate response is ascribed by the proposer as a harbinger of either interactional problems or potential dispreferred responses. Generally, the proposer would take subsequent actions to address them. 
These actions serve to pursue a preferred response or at least to mobilize an articulated one. Meanwhile, the accomplishment of these actions modulates the prior deontic stance, and the resulting shifts in deontic stance form a decisive-to-tentative gradient, as reflected in Figure 1. Figure 1 shows that the prior deontic stance can be maintained, made more tentative, made more decisive, or canceled [lines (1)-(4)]. Moreover, it should be highlighted that the present investigation focuses on how the prior deontic stance is modulated by subsequent actions, not on the deontic stance displayed by the subsequent actions themselves. Additionally, the prior deontic stance shown on the left side of Figure 1 is not invariable but is dynamically constructed through various interactional resources. For example, yàobù-TCUs are found to be "a conversational practice which enables the proposer to tentatively make a proposal with minimal imposition on the recipient" (Yu and Hao, 2020: 18). However, for convenience, the starting point of the figure is kept fixed. DISCUSSION Sequentially, the proposer takes subsequent actions to address the absence of an immediate response. Interactionally, the consequence these actions bring about is the modulation of the prior deontic stance, which bears on face-saving and face-threatening. "Face," including positive face and negative face, has been identified and elaborated as a basic human desire and a characteristic of all competent adults (Goffman, 1967; Brown and Levinson, 1987). Specifically, "negative face refers to the desire to be free from imposition and to have one's autonomy and prerogatives honored and respected. Positive face refers to the desire to have a favorable self-image that is validated by others" (Clayman, 2002: 231). This study argues that maintaining the prior deontic stance damages neither the proposer's nor the recipient's face; making the prior deontic stance more tentative weakens the proposer's deontic rights to decide, thus highlighting his/her damaged negative face; in contrast, making the prior deontic stance more decisive constrains the recipient's rights to make a decision and thereby damages his/her negative face; and canceling the prior deontic stance actually transfers the rights, as well as the obligations originally shared and fulfilled by both participants, to the recipient, thus damaging the proposer's positive face as a responsible participant as well as the recipient's negative face as a participant willing to shoulder the obligations. In our data, the proposer cancels the prior deontic stance in only 1 of the 34 cases, which indicates that proposers normally do not do so and suggests that canceling may not be an appropriate action. It is also worth mentioning that the relationship between a proposer and his/her recipient is not fixed but interactionally constructed. In addition, the recipient's response plays a vital role: the modulated deontic stance, whether tentative or decisive, needs to be acknowledged by the recipient in reaching an agreement as well as in building social solidarity between the two interlocutors. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author(s).
The Impact of Phenocopy on the Genetic Analysis of Complex Traits A consistent debate is ongoing on genome-wide association studies (GWAs). A key point is the capability to identify low-penetrance variations across the human genome. Among the phenomena reducing the power of these analyses, phenocopy level (PE) hampers very seriously the investigation of complex diseases, as well known in neurological disorders, cancer, and likely of primary importance in human ageing. PE seems to be the norm, rather than the exception, especially when considering the role of epigenetics and environmental factors towards phenotype. Despite some attempts, no recognized solution has been proposed, particularly to estimate the effects of phenocopies on the study planning or its analysis design. We present a simulation, where we attempt to define more precisely how phenocopy impacts on different analytical methods under different scenarios. With our approach the critical role of phenocopy emerges, and the more the PE level increases the more the initial difficulty in detecting gene-gene interactions is amplified. In particular, our results show that strong main effects are not hampered by the presence of an increasing amount of phenocopy in the study sample, despite progressively reducing the significance of the association, if the study is sufficiently powered. On the opposite, when purely epistatic effects are simulated, the capability of identifying the association depends on several parameters, such as the strength of the interaction between the polymorphic variants, the penetrance of the polymorphism and the alleles (minor or major) which produce the combined effect and their frequency in the population. We conclude that the neglect of the possible presence of phenocopies in complex traits heavily affects the analysis of their genetic data. The most widely used statistical tests are single point statistics (chi-square, or Cochrane-Armitage test) along the genome; these tests can be integrated with haplotype (or multi-marker) analysis once the linkage disequilibrium (LD) structure is drawn and thus haplotype blocks have been identified. All these tests can be performed under different assumptions and with slightly different approaches, and multivariate analyses are generally performed. Two main obstacles can be envisaged as: N the false positive rates, and consequently the efficacy of the corrections adopted; N the capability to identify low-penetrance variations across the human genome. As for false positives, many different approaches have been proposed and, provided the sample collection to be large enough, a multi-stage design has been shown to be very effective in detecting key leads in the genome, often replicated in other populations. It's not the purpose of this paper to address this area [7,19]. As for the identification of low-penetrance polymorphisms, the area is of a major consideration when disentangling the picture of any complex trait. Indeed, it's quite realistic for complex phenotypes to be determined by a combination of many different polymorphic loci each of them accounting for a minor part of the total variance [20], hence very difficult to be detected when a genome-wide genotyping is performed and when GWA significance rates are applied [20]. 
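To make the single-point statistics mentioned above concrete, the sketch below implements two widely used per-marker tests, the genotypic chi-square and the Cochran-Armitage trend test, on a 2 x 3 case/control genotype table. This is a minimal illustration, not the pipeline used in this study; the genotype counts and the marker are invented for the example.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

def cochran_armitage_trend(table, scores=(0, 1, 2)):
    """Cochran-Armitage trend test on a 2x3 case/control genotype table."""
    table = np.asarray(table, dtype=float)
    t = np.asarray(scores, dtype=float)
    cases, controls = table[0], table[1]
    n_i = cases + controls                  # per-genotype totals
    R, N = cases.sum(), table.sum()         # total cases, total sample size
    p_bar = R / N
    score = np.sum(t * (cases - n_i * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(t ** 2 * n_i) - np.sum(t * n_i) ** 2 / N)
    stat = score ** 2 / var
    return stat, chi2.sf(stat, df=1)

# Invented counts for one SNP (rows: cases, controls; columns: 0/1/2 minor alleles).
snp_table = [[60, 100, 40],
             [90, 85, 25]]

trend_stat, trend_p = cochran_armitage_trend(snp_table)
geno_stat, geno_p, _, _ = chi2_contingency(snp_table)   # 2 df genotypic test
print(f"trend test:     chi2 = {trend_stat:.2f}, p = {trend_p:.3g}")
print(f"genotypic test: chi2 = {geno_stat:.2f}, p = {geno_p:.3g}")
```

Tests of this per-marker form are exactly those whose power is eroded as phenocopies dilute the causal signal in the case group.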
Despite this issue being of key importance, most of the papers reporting GWA studies have applied single-point statistics, multi-marker analysis and haplotype analyses, performed LD mapping, and adopted different false-positive rate corrections [21,22,23,24,25]. Few of them actually included interaction analysis or other approaches capable of capturing the effect of interactions and across-genome combinations, rather than the main effect of single markers or (perhaps more importantly) the major contribution of a specific haplotype at a locus [26,27,28]. Among the phenomena reducing the power of these analyses, phenocopy seriously hampers the investigation of complex diseases, a well-known issue in neurological disorders [29,30] and cancer [31], and likely of primary importance in the study of human ageing [32]. However, the concept of phenocopy is quite old in genetics and has assumed different meanings for different authors: for the purpose of this paper, we mainly refer to a definition adopted in linkage studies, where ''phenocopy'' indicates affected individuals who acquired the disease by means other than the ones segregating in the rest of the family [33]. Moreover, the term needs to be focused further here, owing to the characteristics of the simulation algorithm adopted in this study to generate the disease model and subsequently the datasets: we consider a ''phenocopy'' to be an individual marked as affected, but for whom the underlying genetic markers associated with the disease are different from those of the other cases in the dataset. We also acknowledge that the classical definition of phenocopy takes on a looser and wider meaning when we consider the most important complex traits: in this scenario its importance appears to be even higher, because phenocopy is intrinsically present when the interplay of multiple genetic loci determines a disease. Phenocopy (indicated as PE, ''phenocopy error'', following the terminology of the genomeSIMLA software) seems to be the norm rather than the exception, especially when considering the role that epigenetics and environmental factors exert on the phenotype [34]. Considering the scenario we are dealing with, some additional terminology needs to be clarified. As previously mentioned, one of the hot topics geneticists are currently debating is whether the so-called ''missing heritability'' issue will find an answer in very rare and highly penetrant mutations (detectable only with exome sequencing or whole-genome next-generation sequencing [35]), or in a multitude of polymorphisms with no effect when considered alone (main effect) but with a more significant effect when their statistical interaction is considered [36,37]. As far as this latter point is concerned, several models have been proposed over the years [38] which define ''epistasis'' (yet another term used with different meanings in genetics) as the interaction between different loci, and call ''purely epistatic'' those interactions between loci that do not display any single-locus main effect [37,38,39]. This model has been proposed and widely debated [34,40,41]: some authors consider the widely used additive model sufficient to incorporate these effects [42], or argue that such a scenario has little impact, but few papers address this topic specifically [43,44]. Despite some attempts [45,46,47], no widely recognized solution has therefore been proposed, particularly for estimating the effects that phenocopies could exert either on study planning or on analysis design.
At present, the most of the analysis strategies do not take into account the intrinsic presence of phenocopy in complex traits. We present a simulation [48,49,50,51], where we attempt to define more precisely how phenocopy impacts on different analytical methods under different scenarios. Simulation of the datasets Two disease models have been simulated. In the first model, i.e ''model ME'', standing for ''Main Effect'', the marker RL0-855 was simulated, having a main effect and an OR = 2.225. Three additional SNPs (Table 1) have been simulated with a very small marginal effect, and an interaction associated with the disease, according to the mixed model offered by the logistic function of genomeSIMLA. In the second model, i.e. ''model EPI'', standing for ''purely epistatic'', the second disease model (model EPI), three markers (RL0-75 RL0-153 and RL0-272, Table 2) have been simulated in order not to display any main effect and associate with the disease with a purely epistatic penetrance table, with target OR = 4. For each disease model, the following datasets have been extracted from the population: a) 6 different case-control datasets with increasing phenocopy level generated with the method implemented within the software (PM1); b) 6 different case-control datasets with increasing phenocopy level generated with an alternative method (PM2) develop in our lab, as described in materials and methods; c) 6 pedigree datasets with increasing phenocopy level generated as implemented in genomeSIMLA. Main effect model As far as the model ME is concerned, the results show that strong main effects are not hampered by higher levels of PE, despite an inflation of the significance (figure 1). A very similar behaviour appears to happen on the pedigrees dataset, with TDT analysis, even if the overall significance level is a bit lower (2log10(p) = 40 at 0%PE and 2log10(p) = 8.63, see Supplementary Figure S2). Among the other markers where only an interaction was simulated, only the marker RL0-245 appeared among the top ten significant at 0%PE (2log10(p) = 11.47) but it was no more on the top 10 when the phenocopy level reached 10%. The same happened on the TDT analysis. Purely epistatic model When we analyzed the EPI model on the case control dataset, none of the three markers ranked among the top list of significant markers. Moreover if we had to correct for multiple testing, none of the markers would reach a 0.05 level of significance neither at 0% PE level, nor at 45%. Despite some fluctuations on the data, mainly due to sampling and data extraction, a positive but no significant trend in the number of falsely significant markers could be observed according to the increase of phenocopy error percentage (figure 2). The same pattern was observable when analyzing the case-control dataset generated with the PM2 phenocopy method (see Supplementary Figure S3). When applying PM2 we observed the appearence of a single progressively significant marker (RL0-255), which was borderline for the Hardy-Weinberg equilibrium in the main dataset and therefore was unbalanced when affected individuals from different dataset suffering the same simulation phenomenon were added. This SNP can be considered a false positive, as it was not simulated in association of the disease in none of the additional datasets. A similar behaviour of the markers with a purely epistatic effect was observable in the pedigree dataset with a TDT analysis: again none of them ranked as significant (Supplementary Figure S4). 
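For readers who want a concrete picture of the ''model ME'' set-up and the case-control extraction described at the beginning of this section, the following sketch re-creates it in simplified form. It is not the genomeSIMLA code: only the per-allele OR of 2.225 and the 200-case/200-control dataset size come from the text, while the MAF, baseline prevalence and population size are assumed values chosen for illustration.

```python
"""Toy re-creation of the 'model ME' set-up (illustrative, not genomeSIMLA)."""
import numpy as np

rng = np.random.default_rng(0)

def simulate_main_effect(n, maf=0.30, odds_ratio=2.225, baseline_prev=0.05):
    """Return genotype dosages (0/1/2) and affection status under a log-additive model."""
    g = rng.binomial(2, maf, size=n)                  # HWE genotypes
    beta0 = np.log(baseline_prev / (1 - baseline_prev))
    logit = beta0 + np.log(odds_ratio) * g            # main effect on the log-odds scale
    p_affected = 1 / (1 + np.exp(-logit))
    return g, rng.binomial(1, p_affected)

def sample_case_control(g, y, n_cases=200, n_controls=200):
    """Draw a balanced case-control dataset from the simulated population."""
    cases = rng.choice(np.where(y == 1)[0], n_cases, replace=False)
    controls = rng.choice(np.where(y == 0)[0], n_controls, replace=False)
    idx = np.concatenate([cases, controls])
    return g[idx], y[idx]

g_pop, y_pop = simulate_main_effect(n=200_000)        # assumed population size
g_cc, y_cc = sample_case_control(g_pop, y_pop)
print("cases:", int(y_cc.sum()), "controls:", int(len(y_cc) - y_cc.sum()))
```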
In order to check the correctness of the model we generated, we performed a logistic regression on the interaction term between the three markers simulated to be associated with a purely epistatic effect. The p value of the logistic regression was highly significant both at 0% PE (p = 7.8 × 10^-21) and at 45% PE (p = 4.17 × 10^-6). We therefore decided to analyze the data using a logic regression approach. Logic regression is an adaptive regression methodology mainly developed to explore high-order interactions in genomic data; its goal is to find predictors that are Boolean (logical) combinations of the original predictors. With this methodology, the analysis was able in most cases to identify two of the three interacting SNPs among the top-ranking interactions (Figure 3). As the phenocopy error increased, these interactions ranked progressively lower, although at least one of the three markers (RL0-153) was always present among the top five. As a purely epistatic model is in itself a challenge for the analysis, we adopted a further analysis method, the multifactor dimensionality reduction (MDR) [44,52]. MDR analysis was performed on the EPI model with PM2 phenocopy levels. As with the logic regression analysis, the MDR method, performed with random non-exhaustive explorations, was unable to capture all the interactions efficiently, and this became more evident with increasing PE levels (Supplementary Table S1). When the interacting SNPs were tested directly, the efficiency and the OR of the MDR outcome were very close to the modelled values, but these values progressively decreased as the PE level increased: at 0% PE the predicted OR was 3.80 (compared with a target OR of 4.0) and at 45% PE the predicted OR decreased to 2.39 (Table 3, Supplementary Figure S5 and Supplementary References S1). Discussion Investigating the genetic determinants of complex traits confronts researchers with obstacles that are not yet completely resolved. We can argue that the genetic scenario of the most important complex traits cannot be explained in black and white, i.e., only by the presence of very rare variants yet to be discovered with sequencing, or only by the presence of purely epistatic effects. Complex traits are likely determined by a contribution of both causes, with proportions that can differ from one phenotype to another. In this paper we chose to address the second aspect, which deserves specific attention. The characterization of the phenotypes is of extreme importance in this regard, and in our work we focused our simulations of genetic data on the effect that phenocopy levels can have on the capability of different methodologies to uncover the genetic determinants of a disease. We would like to stress that the concept of ''phenocopy'' can be interpreted in several ways, as we pointed out in the introduction, and that the classical definitions of phenocopy should be largely revisited in the context of complex traits, where multilocus genotypes could play a decisive role. Yet this aspect plays a major role in the discovery of genetic determinants: if to a certain extent complex traits can be considered phenocopies by definition, and if purely epistatic interactions play an important role in the missing heritability (perhaps alongside undiscovered rare variants), then future analysis methods will have to take this scenario into account and model not only interactions but also phenocopy within their statistical framework.
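The model-correctness check described above, a logistic regression that includes the three-way interaction term between the simulated markers, can be sketched as follows. This is illustrative only: the original analysis used the DGCgenetics R package, whereas this sketch uses statsmodels, and the data frame is filled with placeholder dosages and phenotypes rather than the simulated datasets (only the marker names echo the text).

```python
"""Sketch of a 3-way SNP interaction test with logistic regression (placeholder data)."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "status": rng.integers(0, 2, n),      # placeholder 0/1 phenotype
    "RL0_75": rng.binomial(2, 0.3, n),    # placeholder 0/1/2 dosages
    "RL0_153": rng.binomial(2, 0.3, n),
    "RL0_272": rng.binomial(2, 0.3, n),
})

# 'a * b * c' expands to all main effects plus every 2- and 3-way product term.
fit = smf.logit("status ~ RL0_75 * RL0_153 * RL0_272", data=df).fit(disp=False)
print("3-way interaction p-value:", fit.pvalues["RL0_75:RL0_153:RL0_272"])
```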
In our simulation we decided to verify the impact of the phenocopy level by testing two methods for the generation of phenocopies: the PM2 method we developed specifically produces phenocopies by introducing affected individuals in which different genetic markers are associated with the affection status. Our results show that strong main effects are not hampered by the presence of an increasing amount of phenocopy in the study sample, despite a progressive reduction in the significance of the association, provided the study is sufficiently powered. Conversely, when purely epistatic effects are simulated, the capability of identifying the association depends on several parameters, such as the strength of the interaction between the polymorphic variants, the penetrance of the polymorphism, the alleles (minor or major) which produce the combined effect, and their frequency in the population. The influence of these parameters has been partially discussed in the literature for 0% PE datasets. In our simulation the critical role of phenocopy emerges: the more the PE level increases, the more the initial difficulty in detecting these gene-gene interactions is amplified, even with methodologies better suited to the discovery of epistatic models. Classical analytical methodologies are very sensitive to this error, and new statistical methods have to be developed that address SNP-SNP interactions in a less computing-intensive way and that account for, or adjust their results on, estimates of the phenocopy error. Since the presence of phenocopy can be a characteristic intrinsic to the phenotyping of complex traits, we conclude that neglecting the possible presence of phenocopies in these scenarios heavily affects the analysis of their genetic data. Simulations We performed simulations using the software genomeSIMLA [50], which simulates large-scale genomic data both in population-based case-control samples and in families. It is a forward-time population simulation algorithm that allows the user to specify many evolutionary parameters, to control evolutionary processes, and to specify varying levels of both linkage and LD among and between markers and disease loci [48,49,53]. Particular SNPs may be chosen to represent disease loci according to desired location, correlation with nearby SNPs, and allele frequency. Up to six loci may be selected for main effects and for all possible 2- and 3-way interactions. Disease-susceptibility effects of multiple genetic variables can be modeled using either the SIMLA logistic function [49,53] or a purely epistatic multi-locus penetrance function [41] found using a genetic algorithm to assign affected status (for program configuration files see Supplementary Model S1). Disease models We generated two different disease models. In the first one (referred to as ''model ME'', standing for ''Main Effect''), a single SNP (RL0-855, Figure 4) was simulated to have a main effect on disease, with an OR = 2.225; at the same time, the disease model also included three other SNPs (RL0-75, RL0-245, RL0-457) with no main effect and an interaction associated with the affection status. We simulated this model on a single chromosome with 1,362 markers. In the second model (referred to as ''model EPI'', standing for ''purely epistatic''), we performed a simulation on a smaller chromosome (401 markers), where no main effect was present and three SNPs (RL0-75, RL0-153, RL0-272) affected the disease only through a purely epistatic disease model, generated using SIMPEN [49].
The penetrance table was generated with a target OR = 4. In both simulations the SNPs chosen to be associated with the disease had a MAF > 0.30, in order to allow us to simulate the so-called ''common variant, common disease'' condition [54,55,56]. Table 1 and Table 2 provide information on the associated markers and their target OR. Supplementary Figure S6 gives additional details on the disease model generation. For each of the two models, case-control data and pedigree data were generated. In each case, six different large pooled datasets were extracted, with an increasing level of phenocopy error (i.e., 0%, 5%, 10%, 20%, 30% and 45%). In order to avoid biases due to data extraction and fluctuation, each dataset was obtained by sampling and then pooling 50 different datasets at each PE level. The case-control simulation included datasets of 200 cases and 200 controls each, i.e., 20,000 individuals in each PE-level dataset. Each family simulation included 25 families with 1 affected sib and 2 unaffected, 25 families with 3 affected and 1 unaffected, and 25 families with 2 affected sibs, 2 unaffected sibs and 3 random extra sibs; the total number of individuals for each PE-level dataset was 25,000 samples. Supplementary Figure S7 gives additional details on the dataset generation. Generation of the phenocopies The genomeSIMLA software version used (1.0.7w32) currently implements a method for generating phenocopies designed as follows. The software generates cases and controls using the penetrance function and the marker specified by the user. Then, in case-control datasets, it removes a user-specified percentage of cases, replaces them with individuals sampled from the control individuals in the full population, and assigns them the affected status. In family datasets, the software determines the total number of affected individuals to modify as phenocopies, identifies the pedigrees to be modified and redraws the families according to the new requirements. Pedigrees with the required number of affected and unaffected members are selected, and the unaffected phenocopies are then marked as affected, according to the initial design specified by the user (personal communication). In order to verify the correspondence of this phenocopy generation method with what we defined as ''phenocopy'' (see introduction), we also developed another methodology, applied to the case-control datasets only. According to this second algorithm (referred to in the article as ''phenocopy method two'', PM2), five additional datasets were generated, with different markers associated with the affected status. To generate the required phenocopy level, affected individuals were sampled uniformly at random from the five additional datasets and substituted for affected individuals randomly selected from the original dataset. This method generates five datasets with the same phenocopy percentages as PM1. Supplementary Figure S8 provides a more detailed explanation and Supplementary Box S1 reports the R code used to generate these datasets. Table 2 provides information about the markers associated with the affection status in the additional datasets and the target OR used. Statistical analysis The analyses were conducted using the R software (www.r-project.org) and PLINK. In particular, whole-chromosome case-control analysis and TDT analysis were performed with PLINK and visualized with R.
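The PM1-style phenocopy injection for case-control data described above (remove a fraction of the cases and replace them with controls relabelled as affected) can be sketched as follows. This is our reading of the procedure, not the genomeSIMLA implementation or the Supplementary Box S1 code, and the genotype arrays and marker count are placeholders.

```python
"""Minimal sketch of PM1-style phenocopy injection into a case-control dataset."""
import numpy as np

rng = np.random.default_rng(2)

def inject_phenocopies(genotypes, status, pop_controls, pe_level):
    """Replace a fraction `pe_level` of cases with controls relabelled as affected."""
    geno = genotypes.copy()
    stat = status.copy()
    case_idx = np.where(stat == 1)[0]
    n_swap = int(round(pe_level * len(case_idx)))
    swap_idx = rng.choice(case_idx, n_swap, replace=False)
    donors = pop_controls[rng.choice(len(pop_controls), n_swap, replace=False)]
    geno[swap_idx] = donors   # these rows no longer carry the causal genotypes
    # status stays 1 for the swapped rows: they are now phenocopies
    return geno, stat

# Placeholder data: 200 cases + 200 controls, 50 markers, plus a control pool.
genotypes = rng.binomial(2, 0.3, size=(400, 50))
status = np.r_[np.ones(200, dtype=int), np.zeros(200, dtype=int)]
population_controls = rng.binomial(2, 0.3, size=(5000, 50))

for pe in (0.0, 0.05, 0.10, 0.20, 0.30, 0.45):
    g_pe, s_pe = inject_phenocopies(genotypes, status, population_controls, pe)
    print(f"PE={pe:.0%}: {int(s_pe.sum())} affected, "
          f"{int(round(pe * 200))} of them phenocopies")
```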
The calculation of genetic contrasts and the logistic regression on single markers, markers' interaction analysis with logistic regression where performed according to Clayton as developed in the ''DGCgenetics'' package. Interaction analysis by using a logic regression approach was performed by using the R package ''logicFS'' by Schwender, according to the developer's specifications. [27] The MDR analysis has been conducted by using the MDR java package (www.epistasis.org) [57] and performing 5.000 random explorations in the model discovery of attributes ranging from 2 to 4-way interactions, as implemented in the software. Supporting Information Model S1 Model Configuration files. Table S1 The table summarizes the 10 best models for each phenocopy level identified during the MDR analysis. It has to be stressed that the MDR analysis has been conducted by performing 5.000 evaluations of possible interactions. An exhaustive analysis as implemented in the software would be computationally very intensive, as pointed out by the authors in a recent paper (see Pattin K. A. et al. [4]). In bold the correct SNPs as modelled in the purely epistatic penetrance function. Found at: doi:10.1371/journal.pone.0011876.s004 (0.08 MB DOC) Figure S1 For the case-control dataset generated with the main effect disease model (see SF6), an alternative method of producing phenocopies has been applied (see SF8). The method displays the same performance of the internally implemented one, with the only exception of few markers which progressely fall outside the equilibrium of Hardy-Weinberg, thus resulting in a false-positive association (indicated by the arrow). The red circle indicates the marker associated with the disease in the main dataset. Figure S6 Two disease models have been applied. In the first model a single SNP displays a main effect (target OR = 2.225) and three additional SNPs do not have a main effect and interact with each other with a modest effect; this model is implemented as part of the SIMLA logistic function [1]. In the second model instead, three SNPs have been simulated as having no main effect, and a purely espistatic effect on the disease (with a target OR = 4); this model has been implemented in genomeSIMLA and it has been proposed by Culverhouse [2] and discussed by Moore [2,3]. Figure S8 The method has been developed by using the R software (code provided) in order to perform a random sampling from five additional datasets where different SNPs have been associated in the disease model with the affected individuals. A uniform and random sampling, followed by a random substitution of the individuals in the original dataset produced different levels of phenocopies in the sample, thus generating six dataset with increasing phenocopy percentage. This method ensures the effective substitution of individuals generated as affected but with completely different causative markers. The method has been developed as a further analysis of possible effect generated by the ''phenocopying'' method implemented in the genomeSIMLA software. Found at: doi:10.1371/journal.pone.0011876.s012 (1.67 MB EPS)
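As a rough illustration of the MDR idea referred to above (pooling multilocus genotype cells into high-risk and low-risk groups and scoring the resulting binary attribute), the following sketch scans all SNP pairs in a small placeholder dataset. It omits the cross-validation and permutation machinery of the actual MDR software and is not the epistasis.org implementation; all data and dimensions are invented.

```python
"""Simplified two-locus MDR-style scan (conceptual sketch only)."""
from itertools import combinations
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_snps = 400, 10
genotypes = rng.binomial(2, 0.3, size=(n_samples, n_snps))   # placeholder dosages
status = rng.integers(0, 2, n_samples)                       # placeholder phenotype

def mdr_accuracy(g1, g2, y):
    """Pool 3x3 genotype cells into high/low risk and return balanced accuracy."""
    threshold = y.sum() / (len(y) - y.sum())      # overall case:control ratio
    pred = np.zeros_like(y)
    for a in range(3):
        for b in range(3):
            cell = (g1 == a) & (g2 == b)
            cases = y[cell].sum()
            controls = cell.sum() - cases
            ratio = cases / controls if controls else np.inf
            pred[cell] = 1 if ratio > threshold else 0
    sensitivity = pred[y == 1].mean()
    specificity = 1 - pred[y == 0].mean()
    return 0.5 * (sensitivity + specificity)

scores = {(i, j): mdr_accuracy(genotypes[:, i], genotypes[:, j], status)
          for i, j in combinations(range(n_snps), 2)}
best = sorted(scores, key=scores.get, reverse=True)[:5]
for pair in best:
    print(f"SNP pair {pair}: balanced accuracy = {scores[pair]:.3f}")
```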
The ROSAT Deep Survey: VI. X-ray sources and Optical identifications of the Ultra Deep Survey We describe in this paper the ROSAT Ultra Deep Survey (UDS), an extension of the ROSAT Deep Survey (RDS) in the Lockman Hole. The UDS reaches a flux level of 1.2 x 10E-15 erg/cm2/s in 0.5-2.0 keV energy band, a level ~4.6 times fainter than the RDS. We present nearly complete spectroscopic identifications (90%) of the sample of 94 X-ray sources based on low-resolution Keck spectra. The majority of the sources (57) are broad emission line AGNs (type I), whereas a further 13 AGNs show only narrow emission lines or broad Balmer emission lines with a large Balmer decrement (type II AGNs) indicating significant optical absorption. The second most abundant class of objects (10) are groups and clusters of galaxies (~11%). Further we found five galactic stars and one ''normal'' emission line galaxy. Eight X-ray sources remain spectroscopically unidentified. The photometric redshift determination indicates in three out of the eight sources the presence of an obscured AGN in the range of 1.2<z<2.7. These objects could belong to the long-sought population of type 2 QSOs, which are predicted by the AGN synthesis models of the X-ray background. Finally, we discuss the optical and soft X-ray properties of the type I AGN, type II AGN, and groups and clusters of galaxies, and the implications to the X-ray backround. Introduction The X-ray backgound has been a matter of intense study since its discovery about 40 years ago by Giacconi et al. (1962). Several deep X-ray surveys using ROSAT, BeppoSAX, ASCA and recently Chandra and XMM-Newton have resolved a large fraction (60-80 %) of the soft and hard X-ray background into discrete sources , Mushotzky et al. 2000, Giacconi et al. 2000 The optical/infrared identification of faint X-ray sources from deep surveys is the key to understanding the nature of the X-ray background. A significant advance was the nearly complete optical identification of the ROSAT Deep Survey (RDS), which contains a sample of 50 PSPC X-ray sources with fluxes above 5.5 · 10 −15 erg cm −2 s −1 in the 0.5-2.0 keV energy band (Hasinger et al. 1998, hereafter Paper I). The spectroscopy has revealed that ∼80% of the optical counterparts are AGNs (Schmidt et al. 1998, hereafter Paper II). The very red colour (R − K ′ > 5.0) of the only two optically unidentified RDS sources indicates either high redshift clusters of galaxies (z > 1.0) or obscured AGNs (Lehmann et al. 2000a, hereafter Paper III). This study found a much larger fraction of AGNs than did any previous X-ray survey (Boyle et al. 1995, Georgantopoulos et al. 1996, Bower et al. 1996and McHardy et al. 1998. Several deep hard X-ray surveys (>2 keV) have been started with Chandra and XMM-Newton to date (e.g., Brandt et al. 2000, Mushotzky et al. 2000, Giacconi et al. 2000, but most of their optical identification is at an early stage or their total number of sources is still relatively small (Barger et al. 2001). In this paper we present the X-ray sources and their spectroscopic identification of the Ultra Deep Survey (UDS) in the region of the Lockman Hole. The UDS contains 94 X-ray sources with 0.5-2.0 keV fluxes larger than 1.2 · 10 −15 erg cm −2 s −1 , which is about 4.6 times fainter than our previously optically identified RDS survey. The scope of the paper is as follows. In Sect. 2 we define the X-ray sample and present its X-ray properties. 
The optical imaging, photometry and spectroscopy of their optical counterparts are described in Sect. 3. The optical identification of the new X-ray sources and the catalogue of optical counterparts of the 94 X-ray sources are presented in Sect. 4. A K′ survey covering half of the X-ray survey area is presented in Sect. 5. The photometric redshift determination of three very red sources (R − K′ > 5.0) is shown in Sect. 6. The implications of our results with respect to the X-ray background are discussed in Sect. 7. The X-ray observations The ROSAT Deep Surveys consist of ROSAT PSPC and HRI observations in the direction of the Lockman Hole, a region of extremely low Galactic hydrogen column density, N_H = 5.7 × 10^19 cm^-2 (Lockman et al. 1986). About 207 ksec were accumulated with the PSPC detector (Pfeffermann & Briel 1992), centered at the direction α2000 = 10h 52m 00s, δ2000 = +57° 21′ 36″. The most sensitive area of the PSPC field of view is within a radius of ∼20 arcmin from its center. The ROSAT PSPC image is the basis for the ROSAT Deep Survey (RDS), which includes a statistically complete sample of 50 X-ray sources with 0.5-2.0 keV fluxes larger than 1.1 × 10^-14 erg cm^-2 s^-1 at off-axis angles smaller than 18.5 arcmin and fluxes brighter than 5.5 × 10^-15 erg cm^-2 s^-1 at off-axis angles smaller than 12.5 arcmin (see Paper I). A pointing of 1112 ksec (net exposure time) with the ROSAT HRI (David et al. 1996) has been obtained, centered at the direction α2000 = 10h 52m 43s, δ2000 = +57° 28′ 48″. The HRI pointing is shifted about 10 arcmin to the North-East of the PSPC center (see Fig. 1 in Paper I). This shift was chosen to allow a more accurate wobble correction, using some bright optically identified X-ray sources, in order to increase the sensitivity and spatial resolution. The HRI field of view is about 36 × 36 arcmin. The main advantage of the HRI compared to the PSPC is its higher angular resolution (∼5 arcsec versus ∼25 arcsec for the PSPC), but the HRI has practically no energy resolution. Fig. 1 presents the combined PSPC/HRI image of the overlap region covered by both pointings. In addition to the HRI and PSPC pointings we have obtained a raster scan with the HRI, for a total exposure time of 205 ksec, which covers nearly the entire field of the PSPC pointing. The 1112 ksec HRI exposure is the main basis for the Ultra Deep ROSAT Survey (UDS) in the Lockman Hole region. The sample selection and the complete X-ray catalogue are presented in the following sections. For a detailed description of the X-ray observations, the detection algorithm, the astrometric correction and the verification of the analysis procedure by simulation, see Paper I. Sample definition The Ultra Deep Survey sample combines three statistically independent samples of HRI and of PSPC sources, whose defining characteristics, in terms of both area and flux limits, are given in Table 1 (notes to Table 1: off-axis angles are given with respect to the center of the pointing of the instrument defining each sample; limiting fluxes refer to the 0.5-2.0 keV energy band; areas exclude the area already included in the HRI sample where applicable). The total number of sources is 94. Forty-seven of the 68 sources in the HRI sample are also detected with the PSPC, while seven of the 26 sources in the PSPC samples are also detected with the HRI (but outside of the central HRI survey region).
For the 54 sources detected with both the HRI and the PSPC we adopt the flux (see Table 3) measured with the instrument which defines the sample to which each source belongs. This corresponds to assuming that each sample is defined at the epoch at which its observation was obtained. The HRI detector has practically no energy resolution in the 0.1-2.4 keV band, which leads to a significant model dependence in the count-rate-to-flux conversion. The conversion of the PSPC count rates in the PSPC hard band to fluxes in the 0.5-2.0 keV band is straightforward in comparison to the HRI because of the similar band passes (see Paper I). The HRI fluxes in the 0.5-2.0 keV band were determined from the HRI count rates using the corresponding exposure time, a vignetting correction, an energy-to-flux conversion factor (ECF) and a point-spread-function loss factor determined from simulations (see Table 2; there, the ECF is given as the count rate in cts s^-1 expected for a source with a 0.5-2.0 keV flux of 10^-11 erg cm^-2 s^-1, and the PSF loss correction factor is taken from simulations, see Paper I). To derive the ECF factor we have assumed a power-law spectrum with photon index 2 and Galactic absorption (using N_H = 5.7 × 10^19 cm^-2), folded through the instrument response. 2.2. The X-ray source catalogue Table 3 provides the complete catalogue of the 94 X-ray sources from the Ultra Deep ROSAT Survey in the Lockman Hole region. The first two columns give the source name and an internal source number. The capital letters in parentheses mark whether the source belongs to the HRI sample (H) or to the PSPC samples (P1 or P2). The weighted coordinates of the X-ray sources for an equinox of J2000.0 are shown in columns 3 and 4. The 1σ error of their position (in arcsec), including statistical and systematic errors, is given in column 5. A capital P indicates that the X-ray position is based on the 207 ksec PSPC exposure. Positions based on the 1112 ksec HRI pointing and the 205 ksec HRI raster scan are marked with the capital letters H and R, respectively. Columns 6 and 7 show the distance of the sources from the center of the HRI and PSPC fields (in arcmin). Columns 8 and 9 contain the 0.5-2.0 keV HRI and PSPC fluxes of the sources in units of 10^-14 erg cm^-2 s^-1. PSPC fluxes given in parentheses are lower than the flux limits of the PSPC samples (see Table 1); these sources would not belong to the PSPC samples without an HRI detection. The hardness ratio HR1 = (H - S)/(H + S), derived from the PSPC hard and soft bands (see Table 2), is given in column 10. The differences between the UDS and the RDS samples The UDS sample contains all X-ray sources from the RDS sample (see Papers I and II), except sources 36 and 116, which are in the HRI area (HRI off-axis angle smaller than 12 arcmin) and have not been detected with the HRI. The re-analysis of the PSPC data has led to slightly different PSPC fluxes compared to Paper I. The new PSPC fluxes are statistically consistent with the Paper I fluxes. The flux limit of the PSPC subsample P2 (see Table 1) is marginally lower than the limit for the same off-axis angles in the RDS sample (1.1 × 10^-14 erg cm^-2 s^-1). The PSPC fluxes differ from the HRI fluxes for several sources (see Table 3). (Caption of Fig. 2: the error bar in the upper right gives the mean error of the data points; the lines show the theoretical HRI to PSPC flux ratio as a function of an increasing hydrogen column density, starting from N_H = 0.) Fig.
2 shows the HRI to PSPC flux ratio plotted versus the PSPC hardness ratio HR1 for all sources detected both with the HRI and the PSPC. The flux ratio of the softer X-ray sources (HR1 ≤ −0.5) scatters around one, whereas the HRI flux of the hard sources (HR1 ≥ 0.5) is clearly lower than the PSPC flux, indicated by an average flux ratio of 0.6. All spectroscopically identified type II AGNs including the very red sources (see Sect. 4 and 5) are relatively hard sources. Their lower HRI to PSPC flux ratio can be due to a harder spectrum than the assumed unabsorbed powerlaw spectrum with photon index 2, leading to lower HRI fluxes compared to PSPC fluxes for intrinsically absorbed sources. We have performed simulations with XSPEC to determine the influence of intrinsic absorption to the HRI to PSPC flux ratio using three different photon indices (Γ = 1, 2, 3). The models predict a decrease of the flux ratio ( ≤ 0.8) at apparent column densities (i.e., not corrected for redshift) larger than 1.2 × 10 21 cm −2 (see Fig. 2). The lower HRI to PSPC flux ratio of the harder UDS sources (HR1 ≥ 0.5) results probably from intrinsic absorption. The ASCA Deep Survey in the Lockman Hole by Ishisaki et al. (2001) has confirmed that the X-ray luminous UDS type II AGN, detected with ASCA in the 1-7 keV energy band, show harder spectra than type I AGN, which can be explained by intrinsic absorption with N H ∼ 10 22−23 cm −2 . Several groups and clusters of galaxies (e.g., 41C and 228Z) show significant lower HRI fluxes compared to their PSPC fluxes. This is probably due to the lower sensitivity of the HRI to detect extended X-ray emission. The 0.5-2.0 keV X-ray flux of the UDS groups and clusters of galaxies can be at least a factor of 2 larger than those values, which are based on the HRI, given in Table 3. However, the difference between the PSPC and HRI fluxes for several sources can be also due to X-ray variability, because the fluxes are defined by the epoch of the observation. Optical spectroscopy Optical images and optical spectra of 50 of the 94 UDS sources are published in Paper III and in Hasinger et al. (1999, hereafter Paper IV). Finding charts and, when available, optical spectra are given in Fig. A.1 for the remaining 44 objects. The optical identifications of most of the new X-ray sources is based on their highly accurate HRI positions and Keck R band images with a limiting magnitude ranging from 23.5 to 26.0. We have found for nearly all sources only one optical counterpart inside the HRI error circle (see Fig. A.1). In some cases the HRI detection of PSPC sources (e.g., 15 and 18) is essential to identify the correct optical counterpart. For the new sources, presented here for the first time, three (24, 70, and 151) are only covered by the PSPC exposure. For these sources, due to the lower positional accuracy of the PSPC detector (25 ′′ ), we considered all objects inside the PSPC error circle as possible optical counterparts. For sources 5,24,70,131,151,477,804,827,832, and 840 we used Palomar 5-m V band images taken with the 4-Shooter (Gunn et al. 1987) to locate the optical counterparts, as for these cases Keck R images are not available. The optical imaging and photometry of the X-ray sources has been previously described in Papers II and III. However, we have to complete the R-band coverage (not the entire UDS survey area is covered by Keck LRIS frames). The optical photometry should still be regarded as somewhat uncertain for those objects, not covered by the Keck R images. 
For instance, the R magnitudes of the objects 9A, 17A, 29A, and 54A are significantly lower than those given in Paper II due to the more reliable Keck photometry compared to the R band photometry based on imaging with the University of Hawaii 2.2-m telescope (see Paper II). A more complete and reliable photometry will be published in a future paper. Optical spectroscopy of nearly all new UDS sources was performed with the Low Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) at the Keck I and II 10-m telescopes during observing runs in February and December 1995, April 1996, April 1997, March 1998, and March 1999. The spectra were taken either through multislit masks using 1.4 ′′ and 1.5 ′′ wide slits or through 0.7 ′′ and 1.0 ′′ wide long slits. The detector is a back-illuminated 2048 × 2048 Tektronics CCD. A 300 lines mm −1 grating produces a spectral resolution of ∼10 Å for the long slit spectra and of ∼15 Å for the mask spectra. The wavelength coverage of the spectra varies within 3800-8900 Å. In addition, long slit spectra of some relatively bright counterparts (R < 21) were obtained with LRIS at the Keck II telescope during service observing in February, March and April 1998. The exposure times for long slit spectra range from 120 to 3600 sec, and for mask spectra from 1800 to 3600 sec. The optical spectrum of the bright star 232A (R = 11.7) was taken in March 1998 at the Calar Alto 3.5-m telescope using the Boller & Chivens Cass Twin spectrograph. The detector is a 800 × 2000 SITe CCD. A 600 lines mm −1 transmission grating and a slit width of 1.5 ′′ produce spectra from 3500-5500 Å at a spectral resolution of 4.4 Å in the blue channel. The spectra were processed with the MIDAS package involving standard procedures (e.g., bias subtraction, flat-fielding, wavelength calibration using He-Ar or Hg-Kr lamp spectra, flux calibration and atmospheric correction for broad molecular bands). The optimal extraction algorithm of Horne (1986) was used to extract the one-dimensional spectra. A relative flux calibration was performed using secondary standards for spectrophotometry from Oke and Gunn (1983). Several spectra show residuals from night sky line subtraction (e.g., 15A and 33A). In some cases the atmospheric band correction was only partially successful (see 131Z). The optical images (R or V ) of the new UDS objects and their optical spectra are presented in Fig. A.1.

The optical identification of UDS X-ray sources

To identify the 46 new X-ray sources from the UDS not contained in the previously optically identified RDS sample, we apply the identification scheme described in Paper II. The scheme involves identification (ID) classes that characterize the spectroscopic information as detailed below. The existence of broad UV/optical emission lines with FWHM > 1500 km s −1 in the optical spectra of 27 of the 46 new UDS X-ray sources reveals broad emission line AGNs (type I), which are subclassified in the ID classes a-c. The 15 new objects of ID class a show at least two of the high-redshifted broad emission lines: Mg II λ2798, C III] λ1908, C IV λ1548, Si IV λ1397, Lyα λ1216. Object 817A, a quasar at z = 4.45 (Schneider et al. 1998), is the most distant X-ray selected quasar found to date, and object 832A is a broad absorption line quasar (BALQSO) at z = 2.735. Ten new objects belong to the ID class b. Their optical spectra show only a broad Mg II emission line and usually several narrow emission lines (e.g., [Ne V] λ3426 and [O II] λ3727).
The broad Mg II lines of 513 (34O) and of 426A are marginally seen in the low S/N spectra (see Fig. A.1), but the Gaussian fit to the lines reveals 3σ detections in both cases. In addition, source 513 (34O) has been detected at 6 cm wavelength with the VLA (Ciliegi, private communication), confirming its AGN nature. There is no ID class c object (showing only broad Balmer emission lines) among the 46 new sources. Four X-ray sources are identified with narrow emission line AGNs (type II). One of these AGNs (901A) is identified with ID class d, where high ionization [Ne V] λ3426 emission lines indicate an AGN. Three narrow line AGNs belong to the ID class e, which contains 13 new UDS sources. The optical spectra of these sources show neither broad emission lines nor high ionization [Ne V] emission lines. The identification of the ID class e objects is based on the ratio of X-ray to optical flux f x /f v as defined by Stocke et al. (1991), the angular extent of the X-ray source and its high 0.5-2.0 keV X-ray luminosity (L x ≥ 10 43 erg s −1 ), indicating an AGN or a group/cluster of galaxies. Seven of the new class e objects are spectroscopically confirmed as groups or clusters of galaxies (see notes on individual sources below). In two cases we found stars as optical counterparts of the X-ray sources (see Table 4). Six new UDS sources (434B, 486A, 607, 828A, 866A, and 905A), which belong to class e, still lack an optical identification.

[Fig. 3: 0.5-2.0 keV X-ray flux versus the R magnitude of the optical counterparts. Open squares are groups and clusters of galaxies. The only galaxy in our sample is shown by the hexagon. Asterisks are stars. The crosses mark the spectroscopically unidentified sources. The dotted lines give the typical f x /f opt ratios for stars (left) and for AGNs (right) as defined by Stocke et al. (1991).]

Fig. 3 shows the 0.5-2.0 keV X-ray flux of all UDS sources as a function of the R magnitude of their optical counterparts. Nearly all AGNs are located around a well-defined line with f x /f v = 1. Six of the eight unidentified UDS sources have R magnitudes fainter than the spectroscopic limit of our survey, which is approximately R ∼ 23.5. The photometric redshift determination of these sources is presented in Sect. 6. The identification of several new UDS sources is discussed in more detail below.

Notes on individual objects

Source 24 is detected as a point source with the PSPC and is not covered by the HRI exposure. The Palomar V image shows one object (24Z) inside the PSPC error circle. The optical spectrum contains narrow emission lines of Oxygen and a Hβ emission line at z = 0.480. The Gaussian fit to the Hβ line reveals a significantly broader line (FWHM = 1440 km s −1 ) compared to the width of the Oxygen emission lines (see Table B.2). The log f x /f v ratio and the X-ray luminosity of L x > 10 43 erg s −1 in the 0.5-2.0 keV band are rather high for a normal galaxy. We classify the object as an AGN. Source 34 is extended in the HRI exposure. It was not possible to determine the morphology with the PSPC because of the confusion with source 513 at a distance of about 35 ′′ . There is no optical or even K ′ band counterpart inside the HRI error circle. Two galaxies, 34L and 34M, are located at distances of ∼ 5 ′′ and ∼ 18 ′′ from the HRI position. The object 34M is a narrow emission line galaxy at z = 0.263. A 1 hour long slit exposure of the object 34L has not yielded a definitive redshift. Another emission line galaxy (34F) at z = 0.262 is at a distance of ∼ 28 ′′ from the HRI position.
The ratio log f x /f v = −0.87 for 34F is consistent with those of groups or clusters of galaxies. We tentatively classify source 34 as a group of galaxies. Source 104 is consistent with a point source in the PSPC and the HRI exposures. The relatively bright galaxy 104A is located inside the HRI circle and shows several narrow emission lines at z = 0.137. The flux ratios of the emission lines log ([O III] λ5007/Hβ λ4861) = 0.86, log ([N II] λ6584/Hα λ6563) = −0.24 and log ([S II] λ6716 + λ6731/ Hα λ6563) = −0.50 reveal an AGN using the diagnostic diagrams by Veilleux & Osterbrock (1987). The ratio log f x /f v = −1.05 is at the lower limit of the EMSS AGN (Stocke et al. 1991). This object has a 0.5-2.0 keV X-ray luminosity of log L x = 41.65 (erg s −1 ) and is one of the four low luminosity AGN found in the survey (see Fig. 6). A second galaxy (104C) at z = 0.134 is at a dis-tance of 25 ′′ from the X-ray source. Although the AGN is probably the source of the X-ray radiation, object 104A could belong to a group of galaxies. Source 128 appears significantly extended in the HRI image, but it is not detected in the PSPC exposure. However, we are not able to exclude that the extended emission results from source confusion. The galaxy 128E inside the HRI error circle shows a narrow [O II] λ3727 emission line at redshift z = 0.478. The ratio log f x /f v = −0.08 agrees with the range of galaxy groups and clusters of galaxies. The X-ray extent of the source cannot originate from the galaxy 128E at such a high redshift. A second galaxy (128D) at a redshift z = 0.031 is located near the edge of the HRI error circle. The spectrum of the object shows several narrow emission lines of Oxygen and the Balmer series. The flux ratios log ([O III] λ5007/Hβ λ4861) = 0.46 and log ([S II] λ6716 + λ6731/Hα λ6563) = −1.13 allow a secure classification as a starburst galaxy using the Veilleux & Osterbrock diagnostic diagrams (1987). The galaxy 128D is probably a field object. There are three additional faint galaxies within a distance of 20 ′′ from the galaxy 128E, which seems to indicate the presence of a poor group of galaxies with an X-ray luminosity of 3.1 × 10 42 (see, e.g, Mulchaey et al. 1996) or a fossil galaxy group similar to that object found by Ponman et al. (1994). We tentatively classify source 128 as a galaxy group. Source 131 shows extended X-ray emission in the HRI exposure. The relatively bright elliptical galaxy 131Z (z = 0.205) is located at the edge of the HRI error circle and is surrounded by at least 10 fainter galaxies. A similar object is the elliptical cD galaxy in A2218 at z = 0.175 (Le Borgne et al. 1992). We classify source 131 as a cluster of galaxies. However, the ratio log f x /f v = −1.95 is rather low for a galaxy cluster. Sources 228/229, the most extended X-ray sources of our sample, are separated only by about 1 ′ . The Keck R-image shows a marginal excess of galaxies brighter than R = 24.5, which indicates a high-redshift cluster of galaxies. The optical spectrum of the brightest galaxy close to the center of source 229 (left peak in Fig.1 of Paper IV) revealed a gravitationally lensed arc at z = 2.570 (Hasinger et al. 1999, Paper IV). Near-infrared images have shown a number of relatively bright galaxies with R−K ′ = 5.5−5.7 (Lehmann et al. 2000b). Recently, Thompson et al. (2001) have detected a broad Hα emission line at z = 1.263 in the near-infrared spectrum of one of the galaxies. 
The X-ray and optical morphologies suggest the presence of either a bimodal cluster or two clusters in interaction. This would be one of the two most distant X-ray selected clusters of galaxies found to date. Another cluster at z = 1.267 was discovered in the ROSAT Deep Cluster Survey (Rosati et al. 1999). Source 815 appears point-like in the HRI exposure. The optically brightest galaxy 815C inside the HRI error circle is an elliptical galaxy at z = 0.700. Two further narrow emission line galaxies (815D and 815F) have been found at the same redshift. The ratio log f x /f v = −0.87 for 815C is consistent with a cluster of galaxies or an AGN. The galaxy 815C has been detected in the VLA 6 cm survey of the Lockman Hole (Ciliegi, private communication), identifying it as a radio galaxy. At least 7 additional faint galaxies are within a distance of 20 ′′ from the center of the X-ray position, which seems to indicate a rich galaxy cluster around the radio galaxy. We cannot decide whether the detected emission is due to a single AGN or to a cluster of galaxies. Extended X-ray emission is probably not detected because of the faintness of this source. On the basis of the existence of three galaxies at z = 0.700, we classify source 815 as a cluster of galaxies. Source 827 is consistent with a point source in the HRI exposure. The HRI error circle contains a galaxy (827B) at z = 0.249. The optical spectrum of 827B shows only a very strong Hα emission line, but no further emission lines, resulting in a large Balmer decrement. The Hα line width of 1200 km s −1 (FWHM) is significantly broader than typical narrow emission lines, which indicates an AGN. The source is detected in the VLA 20 cm survey of the Lockman Hole region (DeRuiter et al. 1997). The ratio log f x /f v = −0.35 of 827B agrees with the range of AGN. We classify source 827 as an AGN. It is one of the four low-luminosity AGN (log L x ≤ 42) of our sample (see Fig. 6). Source 840 is extended in the HRI pointing and not covered by the inner PSPC field of view. The HRI error circle contains a bright star and a faint galaxy (840C) at z = 0.074. A brighter galaxy (840D) at the same redshift is located at a distance of 20 ′′ from the center of the HRI position. The ratio log f x /f v = −2.55 for 840D and the X-ray luminosity (log L x = 40.7) are very low for a group of galaxies. Due to the lower sensitivity of the HRI to detect extended X-ray emission, the X-ray flux could be underestimated by a factor of 2-8 (see, e.g., sources 41 and 228 in Table 3). Source 840 is classified as a group of galaxies due to the extended X-ray emission. However, we cannot exclude that the X-ray emission is due to a single galaxy or to the sum of the emission of a few galaxies in the group. Source 901 is identified with ID class d. A high ionization [Ne V] λ3426 emission line indicates an AGN in this source. The flux ratios of the emission lines log ([O III] λ5007/Hβ λ4861) > 1.31 and log ([N II] λ6584/Hα λ6563) = 0.17 confirm the optical identification as an AGN (see the diagnostic diagrams by Veilleux & Osterbrock 1987). Its X-ray luminosity of 2.3 · 10 41 erg s −1 in the 0.5-2.0 keV energy band and its ratio log f x /f v = −3.09 are very low for an AGN, which could be due to intrinsic absorption. The object is one of the four low luminosity AGNs (log L x < 42) of the total sample of 69 UDS AGNs (see Fig. 6). The following six new UDS X-ray sources are still spectroscopically unidentified. Source 434 is a faint X-ray source detected in the HRI and the PSPC pointings.
The HRI error circle contains two objects (434A and 434B). The optical spectrum of 434A reveals an elliptical galaxy at z = 0.762. The ratio log f x /f v = −0.15 is rather high for a galaxy, but consistent with an AGN or a group or cluster of galaxies. We have no optical spectrum of 434B (R = 22.4), which has R − K ′ = 4.1. The galaxy 434A is not detected in the K ′ image. An optical spectrum of 434B is needed to identify source 434. Source 486 is detected as a point source in the HRI and the PSPC exposures. The R images (see Fig. A.1) show the faint object 486A (R = 23.8) inside the HRI error circle. This object has a K ′ magnitude of 18.5; R − K ′ = 5.3. Source 486 is one of the three unidentified sources with R − K ′ colours larger than 4.5 (14Z and 84Z being the other two). We argue that 486A is probably an obscured AGN (see the photometric redshift determination in Sect. 6). The ratio log f x /f v = 0.65 is too high for a galaxy. Source 607 has been detected in the HRI and the PSPC pointings. The R images show a very faint counterpart (R = 24.1) inside the HRI error circle. Due to the high accuracy of the HRI position we argue that 36Z is the optical counterpart. We were unable to determine the spectroscopic redshift of 36Z with a one hour Keck exposure. The high ratio log f x /f v = 0.78 is consistent with an AGN. The photometric redshift estimation of the object 607 (36Z) is described in Sect. 6. Source 825 is a point source in the HRI image. The HRI error circle contains a faint object 825A (R = 22.8). We were not able to determine the redshift of 825A with a one hour Keck exposure. The high ratio log f x /f v = 0.35 would be consistent with an AGN or with a group/cluster of galaxies. Source 866 is an HRI point source. The faint object 866A (R = 24.2) is located inside the HRI error circle, but hard to see in Fig. A.1. The high ratio log f x /f v = 0.95 is consistent with an AGN or a group/cluster of galaxies. However, due to its X-ray faintness source 866 could be a spurious detection. We expect about 1-2 spurious detections in our sample of 94 X-ray sources down to a limiting flux of 1.2 · 10 −15 erg cm −2 s −1 in the 0.5-2.0 keV band. Source 905 is detected as a point source in the HRI exposure. The faint optical counterpart 905A (R = 25.0) shows a very red colour (R − K ′ = 6.3), which could indicate an obscured AGN similar to 14Z or 84Z. The high ratio log f x /f v = 0.86 is consistent with an AGN or a group/cluster of galaxies.

The catalogue of optical counterparts

Table 4 contains the optical properties of the 94 X-ray sources in the UDS sample as defined in Sect. 2.1. The table is sorted by increasing internal X-ray source number (see Table 3), which is included in the name of the optical object identified with the X-ray source. The first four columns give the name of the object, the R magnitude of the object and its right ascension and declination. Column 5 gives the distance of the optical counterpart from the position of the X-ray source in arcsec. The capital letters (H, P and R) mark whether the source position is mainly derived from the 1112 ksec HRI exposures, the 207 ksec PSPC exposure or the 205 ksec HRI raster scan. The HRI or PSPC flux in units of 10 −14 erg cm −2 s −1 (0.5-2.0 keV), depending on whether the source belongs to the HRI sample or to the PSPC samples, is given in column 6. The next column shows the X-ray to optical flux ratio f x /f v as defined by Stocke et al. (1991).
Columns 8 and 9 give an optical morphological parameter of the object (s = star-like, g = extended, galaxy-like) and the redshift. The morphological parameter has changed for several objects (e.g., 9A and 60B) compared to that of Paper II due to the availability of higher-quality Keck images. The absolute magnitude M v (using the assumption V − R = +0.22, corresponding to a power law spectral index of -0.5) and the logarithm of the X-ray luminosity in the 0.5-2.0 keV energy band in units of erg s −1 (assuming an energy spectral index of -1.0) are shown in columns 10 and 11. For the three very red sources with photometric redshifts we have determined the absolute magnitude M v using their K magnitudes. The R − K ′ colour of the counterparts is given, when available, in column 12. Column 13 gives the optical classification: broad emission line AGNs (AGN I), narrow emission line AGNs (AGN II), galaxies (GAL), and groups/clusters of galaxies (GRP/CLUS). The large Balmer decrement of some AGNs (e.g., 28B, 59A), indicated by the large ratio of the Hα to the Hβ emission line equivalent widths (> 9), reveals a large amount of optical absorption. We therefore classify these objects as AGN type II. The ID class of the optical counterparts is given in the last column.

AGN emission lines and galaxy absorption lines

The optical identification of the UDS X-ray sources depends on the accurate measurement of the optical emission line properties. Nearly all identified X-ray sources in the UDS have high S/N Keck spectra, which would allow detection of faint high-ionization [Ne V] λλ3346/3426 emission lines, if they were present. To help assign a reliable identification we have derived the emission line properties, the FWHM in km s −1 corrected for instrumental resolution and the rest-frame EW in Å. For the determination of the line parameters we fitted single or double Gaussian profiles using the Levenberg-Marquardt algorithm (Press et al. 1992); see Paper III for details. The parameter set was accepted if the reduced χ 2 was less than 3.0. We distinguish between narrow and broad emission lines at a FWHM of 1500 km s −1 . In some cases we found very broad components with FWHM > 8000 km s −1 . Tables A.1-B.3 (see Appendix) contain the FWHM and the EW values of those emission lines which were detected at the 3σ level. The UDS AGNs cover a redshift range from 0.080 to 4.45 with a median redshift z = 1.2, which is only slightly larger than that of the RDS AGN sample (z = 1.1). In Paper III we compared the FWHM and the EW distributions of the RDS AGNs with those from several X-ray selected samples (e.g., the RIXOS sample of Puchnarewicz et al. 1997, the CRSS sample of Boyle et al. 1997) and from optical/UV selected AGN/quasar samples (e.g., the AGN samples of Steidel & Sargent 1991, of Brotherton et al. 1994, and of Green 1996). We have found good agreement for the distributions of both broad and narrow emission lines. Slightly smaller mean EW values of the narrow emission lines are probably due to significant continuum emission from the host galaxies of several RDS AGNs. To compare with the results obtained from the RDS sample, we have derived the mean FWHM and the mean EW for the UDS AGN sample, which contains in total 70 objects.
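As an illustration of the line measurements described above (Gaussian profile fits, instrumental-resolution correction of the FWHM, rest-frame equivalent widths), the following Python sketch uses scipy's Levenberg-Marquardt least-squares fitting. It is a schematic stand-in for the actual pipeline; the instrumental resolution, continuum level and fake data are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit  # Levenberg-Marquardt for unbounded fits

C_KMS = 2.998e5
SIG2FWHM = 2.3548  # FWHM = 2*sqrt(2*ln 2) * sigma

def gauss_plus_continuum(lam, amp, lam0, sigma, cont):
    return cont + amp * np.exp(-0.5 * ((lam - lam0) / sigma) ** 2)

def measure_line(lam, flux, z, fwhm_instr_aa):
    """Fit one Gaussian emission line; return rest-frame FWHM (km/s) and EW (A)."""
    p0 = [flux.max() - np.median(flux), lam[np.argmax(flux)], 5.0, np.median(flux)]
    (amp, lam0, sigma, cont), _ = curve_fit(gauss_plus_continuum, lam, flux, p0=p0)
    fwhm_obs_aa = SIG2FWHM * abs(sigma)
    # quadratic correction for the instrumental resolution (assumed Gaussian LSF)
    fwhm_aa = np.sqrt(max(fwhm_obs_aa**2 - fwhm_instr_aa**2, 0.0))
    fwhm_kms = fwhm_aa / lam0 * C_KMS
    ew_rest_aa = amp * np.sqrt(2 * np.pi) * abs(sigma) / cont / (1 + z)
    return fwhm_kms, ew_rest_aa

# Illustrative fake data: a broad Mg II line at z = 1.0 on a flat continuum
z = 1.0
lam = np.linspace(5500.0, 5700.0, 400)
flux = gauss_plus_continuum(lam, 4.0, 2798.0 * (1 + z), 15.0, 1.0)
flux += np.random.default_rng(0).normal(0.0, 0.05, lam.size)
print(measure_line(lam, flux, z, fwhm_instr_aa=15.0))
```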
Table 5 shows the comparison of emission line properties of the UDS AGN sample and the RDS AGN sample, which is a subsample of the UDS at higher X-ray flux. The first column gives the name of the most prominent emission lines. Column 2 marks the line component. The columns 3 and 4 (5 and 6) give the mean FWHM (EW), its 1σ error, and the number of lines found in the RDS AGNs. The same data are shown for the UDS AGNs in columns 7 to 10. The very broad line components (FWHM > 8000 km s −1 ) have not been considered here. The mean EW and FWHM of the two samples agree very well with each other, confirming the results from the comparison of the RDS AGN sample with other AGN/quasar samples in Paper III. For completeness, Table 6 shows the galaxy absorption lines of those UDS objects that do not belong to the RDS sample (see Table 1 in Paper III for similar data for the objects in the RDS sample). Objects with z > 1.1 are not listed, because the mentioned galaxy absorption lines lie outside the covered wavelength range. The columns 1 and 2 contain the name and the class of the objects (AGN I = type I AGN, AGN II = type II AGN, GRP/CLUS = galaxy group/cluster of galaxies). The redshift of the objects and the typical galaxy absorption lines found in the optical spectra are given in the columns 3 and 4. [Table 6 notes: A) Ca H+K λ3934/3968, B) CH G λ4304, C) Mg I λ5175, D) Na I λ5890; *) lines not covered by the spectrum (see Fig. A.1).] The entry "no" means that the absorption lines are not detected. The next column shows the 4000 Å break index, as defined by Bruzual (1983), and its 1σ error. Seven out of the 11 newly identified AGNs with z < 1.1 show typical galaxy absorption lines in their optical spectra. Four of those AGNs have D(4000) values clearly larger than 1.0, which is an indication of a large continuum contribution from their host galaxies (see Paper III for a detailed discussion). This is consistent with the fact that 10 of these 11 AGNs have an absolute magnitude fainter than -22.0, well in the range covered by normal galaxies.

Near-infrared photometry

A deep broad-band K ′ (1.9244-2.292 µm) survey of the Lockman Hole region has been performed with the Omega-Prime camera (Bizenberger et al. 1998) on the Calar Alto 3.5-m telescope in 1997 and in 1998. Some of the data were kindly provided to us by M. McCaughrean. About half of the ultradeep HRI pointing area is already covered. We planned to observe the remaining area in spring 2000, but weather conditions did not allow us to finish the K ′ survey. The camera uses a 1024 × 1024 pixel HgCdTe HAWAII array with an image scale of 0.396 arcsec pixel −1 , and covers a field of view of 6.7 × 6.7 arcmin. A large number of background-limited images were taken at slightly dithered positions. The total accumulated integration time of the combined images ranges from 45 to 70 minutes. The image reduction involves the usual standard techniques. The SExtractor package (Bertin & Arnouts 1996) was used to detect the sources and to measure their fluxes. Point sources are well detected at the 5σ level at K ′ = 19.7 mag in a 45 min (net) exposure. Four of the spectroscopically unidentified sources in this paper, 14Z, 84Z, 486A, and 607-36Z, were observed on UT 1999 December 15 and 16 under good seeing and photometric conditions using the facility Near-Infrared
Imaging Camera (NIRC, Matthews & Soifer 1994) on the Keck I telescope. NIRC has an image scale of 0.15 ′′ per pixel, imaging onto a 256 × 256 InSb detector for a 38.4 ′′ square field of view. Objects 14Z, 84Z, and 486A were observed for 10 minutes in each of the zJHK bands; 607-36Z was observed for 5 minutes in the JHK filters (the z band data were saturated because the data were obtained near dawn). The telescope was dithered in a random pattern every one minute of integration. The images were sky-subtracted and flat-fielded, then stacked using integer pixel offsets. Calibration onto the Vega flux scale was done using the Persson et al. (1998) infrared standard stars.

[Fig. 4: R − K ′ colour versus optical R magnitude for all objects in the Lockman Hole. Same symbols as for the X-ray sources in Fig. 3. Plus signs mark the remaining optically unidentified sources, for which we show the brightest optical counterparts in the 80% confidence level circle. All X-ray sources not covered by the K ′ survey so far are plotted at R − K ′ = 0 or 0.2. Small dots show field objects not detected in X-rays.]

Fig. 4 presents the R − K ′ colour versus the R magnitude for almost half of the objects of the UDS field. We have detected in the K ′ images all X-ray sources which are covered by the deep K ′ band survey in the Lockman Hole to date. There is a trend for the R − K ′ colour of the X-ray counterparts to increase towards fainter R magnitudes. Four of the optically unidentified X-ray sources have only very faint optical counterparts (R > 24). The K ′ images of these sources (see Fig. 3 in Paper III) show relatively bright counterparts, resulting in very red colours (R − K ′ > 5.0). We have already argued in Paper III that such red counterparts of X-ray sources suggest either obscured AGNs or high-z clusters of galaxies. The spectroscopic identification of five counterparts with R − K ′ > 4.5 confirms this so far (see Table 4). However, we cannot exclude that the very red objects are high-z quasars at z > 4.0. The most distant X-ray selected quasar (817A) found to date (Schneider et al. 1998) also shows a relatively red colour (R − K ′ ∼ 4).

Photometric redshift determination

We have estimated photometric redshifts for the four unidentified sources 14Z, 84Z, 486A and 607, using broad-band photometry in several filters, as described in Sect. 4. Due to the hardness of 14Z, 84Z and 486A (see Table 3) and their large R − K ′ colours we argue that they are probably obscured AGNs. The photometric redshift of these objects is based on the assumption that their spectral energy distribution (SED) in the optical/near-infrared is due to stellar processes. If emission from an obscured AGN contributes significantly at some wavelength, the following results should be taken with caution. We used a standard photometric redshift technique (see, e.g., Cimatti et al. 1999 and Bolzonella et al. 2000). In our version of the software the templates consist of a set of synthetic spectra (Bruzual & Charlot 1993), with different star formation histories and spanning a wide range of ages (from 10 5 to 2 × 10 10 yrs); the basic set of templates includes only solar metallicity and Salpeter's IMF. The effect of IGM attenuation (Madau 1995), extremely important at high redshift, is included, along with the effect of internal dust attenuation, using a dust-screen model and the SMC extinction law with E(B−V) ranging from 0 to 0.5. In total 768 synthetic spectra have been used. The "best" photometric redshift (z phot. ) for each galaxy is computed by applying a standard, error-weighted χ 2 minimization procedure. Moreover, we have computed error bars on z phot. corresponding to 90% confidence levels, by means of the ∆χ 2 increment for a single parameter (Avni 1976).
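A minimal sketch of this error-weighted χ 2 template-fitting procedure and of the ∆χ 2 confidence interval is given below. The template grid, filters, fluxes and errors are random placeholders for illustration; they are not the Bruzual & Charlot templates or the survey photometry.

```python
import numpy as np

# Schematic chi^2 photometric-redshift fit: compare observed broad-band fluxes
# with a grid of template fluxes precomputed on a redshift grid.
# Everything below (templates, filters, fluxes, errors) is a toy placeholder.

def best_photoz(f_obs, f_err, template_grid, z_grid):
    """template_grid[i, j, k]: flux of template i at redshift z_grid[j] in filter k."""
    chi2 = np.empty(template_grid.shape[:2])
    for i in range(template_grid.shape[0]):
        for j in range(template_grid.shape[1]):
            model = template_grid[i, j]
            # free normalisation: analytic minimisation over the scale factor
            a = np.sum(f_obs * model / f_err**2) / np.sum(model**2 / f_err**2)
            chi2[i, j] = np.sum(((f_obs - a * model) / f_err) ** 2)
    chi2_z = chi2.min(axis=0)              # best template at each redshift
    j_best = int(np.argmin(chi2_z))
    # 90% confidence for one interesting parameter: delta chi^2 <= 2.71 (Avni 1976)
    ok = chi2_z <= chi2_z[j_best] + 2.71
    return z_grid[j_best], z_grid[ok].min(), z_grid[ok].max()

# Toy example
rng = np.random.default_rng(1)
z_grid = np.linspace(0.0, 3.0, 61)
templates = rng.uniform(0.5, 2.0, size=(5, z_grid.size, 7))  # 5 templates, 7 filters
truth = 1.4 * templates[2, 30]
z_best, z_lo, z_hi = best_photoz(truth + rng.normal(0.0, 0.05, 7),
                                 np.full(7, 0.05), templates, z_grid)
print(z_best, z_lo, z_hi)
```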
The observed spectral energy distribution (SED) of each object, obtained from broad-band photometry in several filters (V, I from 8K UH, R from Keck+LRIS and z, J, H, K from Keck+NIRC), is compared to our set of template spectra. The V, I 8K UH observations are described by Wilson et al. (1996). In agreement with their very red colours (4.6 < R − K < 5.7) we obtained relatively high photometric redshifts, in the range 1.21 < z phot. < 2.71, for all sources (Fig. 5). For objects 14Z and 486A we estimate z phot. = 1.94 +0.18 −0.10 and z phot. = 1.21 +0.10 −0.14 , respectively. Both observed SEDs are consistent with an old stellar population (age = 2.5-5 Gyr), while the dust content is poorly constrained because of the absence of U and B band photometry. The formal best estimate for the redshift of object 607 (36Z) is z phot. = 1.36 +0.07 −0.12 , but the resulting fit is very poor; the upturn in the V photometry at λ < 6000 Å indicates the presence of a young stellar population (0.3 Gyr) with no or low dust extinction, or the presence of an underlying AGN component. The photometric redshift of 607 (36Z) is therefore very uncertain. Data in bluer bands (U and B) and combined AGN and galaxy templates would probably be needed for a more reliable redshift estimate for this object. We have therefore not included the value for 607 in Table 4. Object 84Z shows a quite clear break between the J and H photometry, consistent with a best fit model at z phot. = 2.71 +0.29 −0.41 , while the decreasing flux towards shorter wavelengths indicates the presence of a moderate dust content (E(B − V ) = 0.3) in a young stellar population (age = 0.1 Gyr). As seen in Fig. 5, the resulting fit is very good, with a χ 2 value of the order of unity. However, due to the relatively large magnitude errors, especially in the I and z bands, lower redshifts (down to z ∼ 1.5) would also be statistically acceptable.

Discussion and conclusion

We have presented the nearly complete optical identification of 94 X-ray sources with 0.5-2.0 keV X-ray fluxes larger than 1.2 · 10 −15 erg cm −2 s −1 from the Ultra Deep ROSAT Survey in the Lockman Hole region. Highly accurate HRI positions, deep Keck R and Palomar V images, and high signal-to-noise ratio Keck spectra allow a reliable identification of 85 (90%) of the X-ray sources. Table 8 shows the spectroscopic identification summary of the UDS, compared with that of the RDS. As seen in the table, the population of optical counterparts in the UDS has not changed from that of the RDS, with the large majority of the X-ray sources (75%) being identified with AGN. The ratio between AGN I (57 objects) and AGN II (13 objects) is greater than 4, although it could decrease to about 3 if some, or most, of the 8 remaining spectroscopically unidentified X-ray sources are type II AGN (see Sect. 6). The second most abundant class of objects is constituted by groups and/or clusters of galaxies (10), followed by galactic stars (5). We have spectroscopically identified only one source in the entire sample with a "normal" emission line galaxy (53A). The ASCA Deep Survey of the Lockman Hole by Ishisaki et al. (2001) suggests an obscured AGN in this case as well.
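As a quick check of the numbers quoted above, a few lines of arithmetic reproduce the type I to type II ratio and how it would change if all the remaining unidentified sources were type II AGN (illustrative only):

```python
# Worked check of the AGN type I / type II ratios quoted in the text.
agn1, agn2, unidentified = 57, 13, 8
print(round(agn1 / agn2, 1))                   # ~4.4, i.e. "greater than 4"
print(round(agn1 / (agn2 + unidentified), 1))  # ~2.7, i.e. "could decrease to about 3"
```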
These results clearly show that the fraction of AGN as optical/infrared counterparts of faint X-ray sources in the 0.5-2.0 keV energy band remains high down to a limiting flux of 1.2·10 −15 erg cm −2 s −1 , a factor of ∼4.6 fainter than the RDS survey. They confirm that the soft X-ray background is dominated by the emission of type I AGN, and that the increased sensitivity of the UDS has not revealed an increase in the fraction of type II AGN. In the following we briefly summarize, in turn, some of the properties of the objects in the three main classes of optical counterparts.

a) Type I AGN

• The emission line properties of the 57 type I AGN in the UDS survey are consistent with those of other brighter X-ray selected samples and of optical/UV selected samples.
• The X-ray luminosity of these objects covers the range 43 < log L x < 45, confirming that most of the contribution to the X-ray background from AGN I is due to moderately powerful objects.
• The X-ray and optical luminosities of the type I AGN are reasonably well correlated, with an average value corresponding to f x /f v ∼ 1 (see Fig. 6).
• The UDS contains the most distant X-ray selected quasar (z = 4.45) found to date (Schneider et al. 1998).
• The surface density of type I AGN in the HRI-defined sample, which is the deepest part of the UDS survey (40 objects in 0.126 sq. deg., corresponding to 317 ± 50 objects per sq. deg.), is higher than any reported surface density based on spectroscopic samples for this class of objects. This confirms the very high efficiency of X-ray selection in detecting this kind of object (see the discussion in Zamorani et al. 1999 for a comparison of the relative efficiencies of X-ray and optical selections of type I AGN).

[Fig. 6: The logarithm of the 0.5-2.0 keV X-ray luminosity plotted versus the absolute magnitude M v of the optical counterparts, marked with different symbols; filled circles: type I AGNs, open circles: type II AGNs, open squares: groups/clusters of galaxies, hexagon: galaxy, crosses: very red sources with known photometric redshift (type II AGNs). The solid line corresponds to the X-ray to optical flux ratio f x /f v = 1 typical for AGNs. All except four AGNs have X-ray luminosities larger than 10 43 erg s −1 (above the dotted line).]

b) Type II AGN and unidentified X-ray sources

Following Schmidt et al. (1998; Paper II) and Lehmann et al. (2000; Paper III), we have adopted a variety of indicators (e.g., optical diagnostic diagrams, presence of [Ne V] and/or strong [Ne III] forbidden lines in the spectrum, large X-ray luminosity) to classify an object as AGN II. In addition, we have also placed in this class two "intermediate" objects (28B and 59A), which, although having broad Balmer lines, show a clear indication of significant absorption, as suggested by their large Balmer decrements.
• While the colours of type I AGN are relatively blue, type II AGN and groups/clusters of galaxies show on average much redder colours. Type II AGN occupy the same region of the redshift-(R − K ′ ) diagram (see Fig. 7) as expected for elliptical and spiral galaxies. The colours of type II AGN appear to be dominated by the light from their host galaxies. The optical spectra of type II AGN confirm that a substantial fraction of their emission originates in their host galaxies (see Sect. 4.3). Two type II AGN (12A and 117Q), with spectroscopic redshifts 0.990 and 0.780, have very red colours (R − K ′ ∼ 5.0). Both sources are detected with the PSPC and show hard spectra.
It is therefore likely that these objects are significantly absorbed in X-rays.
• Most of the type II AGN (9 out of 13) have X-ray luminosities in the relatively narrow range 43 < log L x < 43.7 and are approximately consistent, although toward low luminosities, with the L x − M v relation defined by the type I AGN (see Fig. 6). The X-ray luminosity distributions of the two classes of AGN are significantly different: we have 37 type I and no type II AGN with log L x > 43.7, but 19 type I and 13 type II AGN with log L x < 43.7.
• Four objects (59A, 104A, 827B, 901A) have X-ray luminosities significantly smaller than all the other AGN (log L x ∼ 41.5), with correspondingly low f x /f v ratios. All of them are at low redshift (z < 0.25).
• The different distributions in X-ray luminosity of type II and type I AGN translate into significantly different distributions in redshift. For example, up to z = 0.25 we have 5 type II and no type I AGN. The ratio between the type I and type II AGN, which is ∼ 4.3 for the entire sample, is ∼ 0.3 up to z = 0.5, ∼ 0.4 up to z = 0.75 and ∼ 1 up to z = 1.
• These trends of the ratio between the type I and type II AGN (increasing with redshift and/or luminosity) would remain qualitatively similar even if most of the 8 spectroscopically unidentified sources turn out to be, as we have argued in Sect. 6, high-redshift type II AGN. For three of these sources, assuming that their optical/near-infrared SED is mainly due to stellar processes, we have determined photometric redshifts ranging from 1.22 to 2.71. These redshifts lead to X-ray luminosities in the 0.5-2.0 keV band of up to 10 44 erg s −1 , which is in the regime of typical QSO X-ray luminosities. Even in this case, however, the number of high luminosity type II AGN in our sample appears to be smaller than that expected in the simplest version of the unified models which has been used so far for the AGN synthesis models of the X-ray background (see, e.g., Comastri et al. 1995 and Gilli et al. 1999).

[Fig. 7: R − K ′ colour versus redshift for those X-ray sources in the Lockman Hole with available K ′ band photometry. Same symbols as in Fig. 6. For the objects 14Z, 84Z, and 486A we have used photometric redshifts (see Sect. 6). The dotted lines are from Steidel and Dickinson (1994), corresponding to unevolved spectral models for E (upper) and Sb (lower) galaxies from Bruzual and Charlot (1993).]

This is consistent with the fact that until now only a few hard X-ray selected type II AGN at high luminosity are known (e.g., Nakanishi et al. 2000). In addition, the recent 1-7 keV ASCA Deep Survey in the Lockman Hole suggests a deficit of highly luminous absorbed sources at z ∼ 1-2. Unfortunately, all these samples, including ours, are based on a very limited number of sources, so that a strong conclusion is not warranted yet. For example, Gilli et al. (2001) have recently shown that the redshift distribution of type I and type II AGN for a sub-sample of the UDS sources is statistically consistent with models in which the fraction of obscured AGN is constant with luminosity. Only significantly larger spectroscopic samples of hard X-ray selected AGN from Chandra and XMM-Newton observations can fully clarify this issue. For example, the PV observation of the Lockman Hole with XMM-Newton has recently revealed a large fraction of very red sources in the 2-10 keV energy band, which are probably heavily obscured AGNs (see also Barger et al.
2001 for a similar finding in the Chandra observation of the Hawaii Deep Survey Field SSA13), but no spectroscopic identifications for these objects currently exist.

c) Groups and clusters of galaxies

• Nine out of the ten X-ray sources identified with groups and/or clusters of galaxies are extended either in the HRI or in the PSPC images, or in both. The point-like source 815 is classified as a cluster on the basis of the optical data: three galaxies within a few arcseconds of the X-ray position have the same redshift (z = 0.700).
• Nine out of ten objects have X-ray luminosities in the range 41.5 < log L x < 43.5 and cover the redshift range 0.20-1.26. The median redshift of this sample of groups is ∼ 0.5.
• The faintest X-ray group in our sample (identified with the X-ray source 840) has a very low X-ray luminosity (log L x = 40.7). In this case, given the small redshift (z = 0.074), we cannot exclude the possibility that the X-ray emission is due to a single galaxy or to the sum of the emission of a few galaxies in the group.
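Two derived quantities recur throughout the identifications above: the X-ray-to-optical flux ratio f x /f v in the convention of Stocke et al. (1991) and the 0.5-2.0 keV luminosity. The sketch below illustrates how such numbers are typically computed; the constant in the flux-ratio relation is the commonly quoted EMSS-style value, and the flat cosmology is an assumption for illustration, not taken from the survey papers.

```python
import numpy as np

# Illustrative computation of log(f_x/f_v) and of a 0.5-2.0 keV luminosity.
# The flux-ratio convention and the cosmology below are assumptions of this sketch.

def log_fx_fv(fx_cgs, m_v):
    """Commonly quoted EMSS-style form: log(fx/fv) = log fx + m_V/2.5 + 5.37."""
    return np.log10(fx_cgs) + m_v / 2.5 + 5.37

def lum_dist_cm(z, h0=70.0, om=0.3):
    """Luminosity distance in cm for a flat universe (crude numerical integration)."""
    c_kms = 2.998e5
    zz = np.linspace(0.0, z, 4096)
    ez = np.sqrt(om * (1.0 + zz) ** 3 + (1.0 - om))
    dc_mpc = c_kms / h0 * np.sum(1.0 / ez) * (zz[1] - zz[0])
    return (1.0 + z) * dc_mpc * 3.086e24   # Mpc -> cm

def log_lx(fx_cgs, z):
    """log10 of L_x = 4 pi d_L^2 f_x, neglecting any K-correction."""
    return np.log10(4.0 * np.pi * lum_dist_cm(z) ** 2 * fx_cgs)

# e.g. a source with f(0.5-2.0 keV) = 2e-15 erg cm^-2 s^-1, V ~ 23 mag, z = 1.0
print(log_fx_fv(2e-15, 23.0), log_lx(2e-15, 1.0))
```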
Revisiting the Fradkin-Vilkovisky Theorem

The status of the usual statement of the Fradkin-Vilkovisky theorem, claiming complete independence of the Batalin-Fradkin-Vilkovisky path integral on the gauge fixing "fermion" even within a nonperturbative context, is critically reassessed. Basic, but subtle reasons why this statement cannot apply as such in a nonperturbative quantisation of gauge invariant theories are clearly identified. A criterion for admissibility within a general class of gauge fixing conditions is provided for a large ensemble of simple gauge invariant systems. This criterion confirms the conclusions of previous counter-examples to the usual statement of the Fradkin-Vilkovisky theorem.

Introduction

Among available approaches towards the quantisation of locally gauge invariant systems, the general BRST quantisation methods are certainly the most popular and widely used. Within the BRST-BFV Hamiltonian setting [1], one result stands out as being most relevant, namely the so-called Fradkin-Vilkovisky (FV) theorem according to which, in its statement as usually given [1,2], the BRST invariant BFV path integral (BFV-PI) representation of transition amplitudes is totally independent of the choice of gauge fixing conditions, the latter thus being made to one's best convenience. However, in this form such a claim has been disputed on different grounds [3,4,5,6,7], while general classes of explicit counter-examples have been presented [3,4,5,8] within simple gauge invariant systems. Indeed, all these examples agree with the following facts, which are to be considered as defining the actual content of the FV theorem [3,5]. Given the gauge invariance properties built into the formalism, the BFV-PI is, by construction, manifestly BRST and gauge invariant. Consequently, whatever the choice of gauge fixing conditions implemented, the BFV-PI always reduces to some integral over the space of gauge orbits of the original gauge invariant system. In particular, any two sets of gauge fixing conditions which are gauge transforms of one another lead to the same final result for the BFV-PI. Nevertheless, which "covering" (an integration domain with some measure) of the space of gauge orbits is thereby selected depends directly on the gauge equivalence class of gauge fixing conditions to which the specific choice of gauge fixing functions belongs. In other words, the BFV-PI depends on the choice of gauge fixing conditions only through the gauge equivalence class to which these conditions belong. Nonetheless, the BFV-PI cannot be totally independent of the choice of gauge fixing conditions. Gauge invariance of the BFV-PI is a necessary condition, but it is not a sufficient one for a choice of gauge fixing conditions to be admissible. Indeed, an admissible gauge fixing is one whose gauge equivalence class defines a single covering of the space of gauge orbits, namely such that each of these orbits is included with equal nonvanishing weight in the final integration. Nonadmissibility, namely a Gribov problem [9], arises whenever either some orbits are counted with a smaller or larger weight than others (Gribov problem of Type I), or when some orbits are not included at all (Gribov problem of Type II), or both [5]. Since the identification of a general criterion to characterise the admissibility of arbitrary gauge fixing conditions appears to be difficult, to say the least [6,7], this issue is best addressed on a case by case basis.
Notwithstanding the explicit examples confirming the more precise statement of the FV theorem as just described, the arguments purporting to establish complete independence of the BFV-PI on the choice of gauge fixing seem to be so general and transparent, being based on the nilpotency of the BRST charge and the BRST invariance of the external states for which the BFV-PI is computed, that the usual FV theorem statement is most often simply taken for granted and considered perfectly indisputable. Confronted with this contradictory situation, it is justified to reconsider the status of the FV theorem and identify the subtle reasons why the formal arguments do not apply as usually described. This is the purpose of the present note, at least within a general class of simple constrained systems to be described in Sect.2.1. One should point out in this context that there is no reason to question the validity of the usual statement of the FV theorem within the restricted context of ordinary perturbation theory for Yang-Mills theories. Indeed, there exists an explicit and independent proof of this fact [10]. Furthermore, perturbation theory amounts to considering a set of gauge orbits in the immediate vicinity of the gauge orbit belonging to the trivial gauge configuration. However, Gribov problems and nonperturbative gauge fixing issues involve the global topological properties of the space of gauge orbits [11], and it is within this context that the relevance of the FV theorem is addressed in the present note. There is no doubt that in the case of Yang-Mills theories, for example, such issues must play a vital role when it comes to the nonperturbative topological features of strongly interacting nonlinear dynamics.

The outline of this note is as follows. After having described in Sect.2 the general class of gauge invariant systems to be considered, including their quantisation within Dirac's approach, which is free of any gauge fixing procedure, Sect.3 addresses their BRST quantisation. Based on the usual plane wave representation of the Lagrange multiplier sector of the extended phase space within that context, the actual content of the FV theorem is critically reassessed within a general class of gauge fixing conditions, while subtle aspects explaining why its usual statement fails to apply are pointed out. Then in Sect.4, a regularisation procedure for the Lagrange multiplier sector is considered, which avoids the use of the non-normalisable plane wave states by compactifying that degree of freedom onto a circle. A general admissibility criterion for the classes of gauge fixing conditions considered is then identified, while further subtle reasons explaining why the usual statement of the FV theorem fails also in that context are again pointed out. No inconsistencies between the two considered approaches arise, confirming the actual and precise content of the Fradkin-Vilkovisky theorem as given above. Concluding remarks are presented in Sect.5.

Classical formulation

Let us consider a system whose configuration space is spanned by a set of bosonic coordinates q^n, with canonically conjugate momenta denoted p_n, thus with the canonical brackets {q^n, p_m} = δ^n_m. These phase space degrees of freedom are subjected to a single first-class constraint φ(q^n, p_n) = 0, which defines a local gauge invariance for such a system. Finally, dynamics is generated from a first-class Hamiltonian H(q^n, p_n), which we shall assume to have a vanishing bracket with the constraint φ, {H, φ} = 0.
Given that large classes of examples fall within such a description, the latter condition is only a mild restriction, which is made to ease some of the explicit evaluations to be discussed hereafter. A well-known [5] system meeting all the above requirements is that of the relativistic scalar particle, in which case the first-class constraint φ defines both the local generator of world-line diffeomorphisms and the mass-shell condition for the particle energy-momentum. Other examples in which the first-class constraint is the generator of a local internal U(1) gauge invariance may easily be imagined, such as those discussed in Refs. [8,12]. In the latter reference, for instance, one has a collection of degrees of freedom q^a_i(t) (a = 1, 2; i = 1, 2, · · · , d) with a Lagrange function such that the system may be interpreted as that of d spherical harmonic oscillators in a plane, subjected to the constraint that their total angular momentum vanishes at all times. The U(1) gauge invariance of the system is that of arbitrary time-dependent rotations in the plane acting identically on all oscillators, with λ(t) being both the associated Lagrange multiplier and the U(1) gauge degree of freedom (the time component of the gauge "field"). Returning to the general setting, all the above characteristics may be condensed into a single piece of information, namely the first-order Hamiltonian action principle over phase space, expressed as S[q^n, p_n; λ] = ∫ dt [ (dq^n/dt) p_n − H(q^n, p_n) − λ φ(q^n, p_n) ], where λ(t) is an arbitrary Lagrange multiplier associated to the first-class constraint φ(q^n, p_n) = 0. The Hamiltonian equations of motion are generated from the total Hamiltonian H_T = H + λφ, in which the Lagrange multiplier parametrises the freedom associated to small gauge transformations throughout the time evolution of the system. These small gauge transformations are generated by the first-class constraint φ(q^n, p_n). Indeed, in their infinitesimal form, small gauge transformations are generated by the first-class constraint as δ_ǫ q^n = ǫ {q^n, φ}, δ_ǫ p_n = ǫ {p_n, φ}, δ_ǫ λ = dǫ/dt, ǫ(t) being an arbitrary function of time (the above action then changes only by a total time derivative). Related to this simple character of the gauge transformations, it is readily established [3,5] that, given a choice of boundary conditions (b.c.) for which the coordinates q^n(t) are specified at the boundary of some time interval [t_i, t_f] (t_i < t_f), which then also requires that the gauge transformation function obeys the b.c. ǫ(t_{i,f}) = 0, the space of gauge orbits is in one-to-one correspondence with Teichmüller space, i.e. the space of gauge orbits for the Lagrange multiplier λ(t). In the present instance, this Teichmüller space reduces to the real line spanned by the gauge invariant modular or Teichmüller parameter γ = ∫_{t_i}^{t_f} dt λ(t). Consequently, any admissible gauge fixing of the system must induce a covering of this modular space in which each of the possible real values of γ is accounted for with equal weight. Indeed, any real value of γ characterises in a unique manner a possible gauge orbit of the system, while on the other hand any configuration of the system belongs to a given gauge orbit.
Thus, in order to account for all possible physically distinct gauge invariant configurations of the system, all possible values for the single coordinate parameter γ on modular space must be accounted for in any given admissible gauge fixing procedure (absence of a Gribov problem of Type I), while at the same time none of these orbits may be included with a weight that differs from that of any of the other gauge orbits (absence of a Gribov problem of Type II). An admissible gauge fixing procedure must induce a covering of modular space which includes all real values for γ with a γ-independent integration measure over modular space.

Quantum formulation

As the above notation already makes implicit, in order to avoid any ambiguity in the forthcoming discussion, the configuration space manifold is assumed to be of countable discrete dimension, if not simply finite. Furthermore, at the quantum level we shall also assume that the associated Hilbert space of quantum states itself is spanned by a discrete basis of states. Depending on the system, this may require compactifying configuration space, for instance into a torus topology, or introducing some further interaction potential, such as a harmonic well, it being understood that such regularisation procedures may be removed at the very end of the analysis. In this manner, typical problems associated to plane wave representations of the Heisenberg algebra, [q̂^n, p̂_m] = iħ δ^n_m with (q̂^n)† = q̂^n and (p̂_n)† = p̂_n, are avoided from the outset. As a matter of fact, a torus regularisation procedure will be applied to the Lagrange multiplier sector when considering BRST quantisation at some later stage of our discussion. Furthermore, for ease of expression hereafter, we shall assume to be working in a basis {|k⟩} of Hilbert space which diagonalises the first-class constraint operator φ̂, φ̂|k⟩ = φ_k |k⟩, with in particular the integers k_0 denoting the subset of these states associated to a vanishing eigenvalue of the constraint, φ_{k_0} = 0, with an unspecified degeneracy. The latter states |k_0⟩, for all the possible values of k_0, thus define a basis for the subspace of gauge invariant or physical states, which are to be annihilated by the constraint. The examples mentioned in (1) provide explicit illustrations of such a general setting. The spectra of both the Hamiltonian and the constraint are discrete, with specific degeneracies for each class of eigenstates, including the physical sector of gauge invariant states. In the case of the relativistic scalar particle, the same situation arises provided one introduces a regulating harmonic potential term quadratic in the spacetime coordinates in order to render the spectrum discrete. Even for a system as simple as a topological particle on a circle, for which the Lagrange function is given by L = N dq/dt, where N is some normalisation factor that needs to take on a quantised value at the quantum level, the momentum constraint operator φ̂ = p̂ − N then also possesses a discrete spectrum, and the system thus falls within the general setting addressed in our discussion (in this case the first-class Hamiltonian vanishes identically, while the gauge invariance associated to the first-class constraint is that of arbitrary coordinate redefinitions of the degree of freedom q(t)).
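As a worked illustration of this last example, the discreteness of the constraint spectrum may be made explicit as follows; the conventions used here (a circle of unit radius, the explicit factor of ħ) are ours and serve only as a sketch.

```latex
% Topological particle on a circle of unit radius (sketch; our conventions).
% L = N \dot q gives p = \partial L/\partial\dot q = N, hence the constraint
% \phi = p - N \approx 0, with vanishing first-class Hamiltonian H = 0.
\[
  \hat p\,|n\rangle = n\hbar\,|n\rangle \quad (n \in \mathbb{Z})
  \qquad\Longrightarrow\qquad
  \hat\phi\,|n\rangle = (n\hbar - N)\,|n\rangle ,
\]
% a discrete spectrum: a physical state |n_0> annihilated by \hat\phi exists only
% if N is quantised, N = n_0\hbar, as stated in the text.
```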
Given an arbitrary choice for the Lagrange multiplier λ(t), and since it is also assumed that quantisation preserves the gauge invariance property of the first-class Hamiltonian Ĥ (namely that even at the quantum level we still have the vanishing commutator [Ĥ, φ̂] = 0, which also implies that the time-ordered exponential of the total Hamiltonian, Ĥ_T(t) = Ĥ + λ(t) φ̂, coincides with its ordinary exponential), time evolution of the quantum system is generated by the operator Û(t_f, t_i) = exp(−(i/ħ) ∫_{t_i}^{t_f} dt [Ĥ + λ(t) φ̂]), which propagates both gauge variant and gauge invariant states. Propagation of physical states only is achieved by introducing the physical projection operator [13] E, obtained essentially by integrating over the gauge group of all finite small gauge transformations e^{−(i/ħ) γ φ̂}, which in the present case amounts to an integration over the Teichmüller parameter γ. Consequently, the physical evolution operator (10) is obtained by combining Û(t_f, t_i) with E; all of its matrix elements in the basis |k⟩ vanish, except on the physical subspace spanned by the states |k_0⟩ (see (11)). The latter are thus the matrix elements that the BFV-PI must reproduce from BRST quantisation given any admissible gauge fixing choice. Note that one may also write the physical evolution operator as an integral of the full evolution operator over the Teichmüller parameter γ, which clearly reproduces the above matrix elements and makes it explicit that one has indeed performed an admissible integration over the modular space of the system, parametrised by −∞ < γ < +∞, with a uniform integration measure [14], precisely a covering of modular space which is characteristic of an admissible gauge fixing choice.

BFV extended phase space

Within the BFV approach [1,2,5], phase space is first extended by introducing a momentum π(t) canonically conjugate to the Lagrange multiplier λ(t), {λ(t), π(t)} = 1. Consequently, one then has the set of first-class constraints G_a = (G_1, G_2) = (π, φ) = 0, a = 1, 2, such that {H, G_a} = 0. To compensate for these additional dynamical degrees of freedom, a further system of pairs of Grassmann odd canonically conjugate ghost degrees of freedom, η^a(t) and P_a(t), with (η^a)† = η^a, (P_a)† = −P_a and {η^a, P_b} = −δ^a_b, is introduced. By convention, η^a (resp. P_a) are of ghost number +1 (resp. −1). The ghost number is given by Q_g = P_a η^a. Within this setting, small local gauge transformations are traded for global BRST transformations, generated by the BRST charge Q_B, which in the present situation is simply given by Q_B = η^a G_a = η^1 π + η^2 φ, a Grassmann odd quantity, real under complex conjugation and of ghost number (+1), characterised by its nilpotency property, {Q_B, Q_B} = 0. BRST invariant dynamics on this extended phase space is generated by the general BRST invariant Hamiltonian H_eff = H + {Ψ, Q_B}, Ψ being an a priori arbitrary Grassmann odd function of the extended phase space, pure imaginary under complex conjugation and of ghost number (−1), known as the "gauge fixing fermion" as this is indeed the role it takes within this formalism. In order to obtain a BRST invariant dynamics, the equations of motion generated from H_eff must be supplemented with BRST invariant boundary conditions. Considering BRST transformations, it appears that a choice of b.c. which is universally BRST invariant is possible, given in (16), while the b.c. in the original "matter" sector (q^n, p_n) are those already mentioned in the discussion of Sect. 2.1. Since {Q_B, H_eff} = 0 and {Q_g, H_eff} = 0, on account of the BRST invariance and vanishing ghost number of H_eff, these b.c. imply that any solution is indeed BRST invariant and of vanishing ghost number, Q_B(t) = 0 and Q_g(t) = 0. These are precisely the b.c. that are imposed in the construction of the BFV-PI for the BRST invariant quantised system.
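Assuming the abelian form Q_B = η^a G_a quoted above, the BRST invariance of H_eff used in this boundary condition argument can be checked in two lines; signs and normalisations in the following sketch are schematic.

```latex
% BRST invariance of H_eff = H + {Psi, Q_B} (schematic; abelian constraints assumed).
\[
  \{Q_B, Q_B\} \propto \eta^a \eta^b \{G_a, G_b\} = 0 ,
  \qquad
  \{Q_B, H\} = \eta^a \{G_a, H\} = 0 ,
\]
\[
  \{Q_B, H_{\mathrm{eff}}\}
  = \{Q_B, H\} + \{Q_B, \{\Psi, Q_B\}\}
  = \tfrac{1}{2}\,\{\{Q_B, Q_B\}, \Psi\}
  = 0 ,
\]
% the last step using the graded Jacobi identity together with the nilpotency of Q_B.
```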
Obviously, a condition which the gauge fixing function Ψ must meet is that, given the above b.c., the set of solutions to the equations of motion generated by the corresponding Hamiltonian H_eff coincides exactly with the set of solutions obtained in the initial formulation of Sect.2.1. This requirement already restricts, at the classical level, the classes of gauge fixing functions Ψ that may be considered. Even the classical BRST invariant dynamics is not entirely independent of the choice of Ψ [5], a point we shall not pursue further here (it having already been discussed to some extent in Ref. [5] through detailed examples), but which indicates that it cannot be so either at the quantum level. A general class of functions to be used explicitly hereafter within the quantised system is of the form given in (17), F(λ) being an arbitrary real function and β an arbitrary real parameter. It may readily be established that, associated to this choice, the classical Hamiltonian equation of motion for λ(t) amounts to a gauge fixing condition on λ(t). In terms of some integration constant λ_0, the solution λ(t; λ_0) defines a value γ(λ_0) for the Teichmüller parameter. Given a choice for F(λ), as the value λ_0 varies over its domain of definition, γ(λ_0) varies over a certain domain in modular space with a specific oriented covering or measure over that domain. It is only when the entire set of real values for the Teichmüller parameter γ is obtained with a γ-independent integration measure that the function F(λ), namely Ψ, defines an admissible gauge fixing choice. For instance, the case F(λ) = 0 is readily seen to meet this admissibility requirement, and to define a choice of gauge fixing which is known to be admissible for the considered class of systems [2,3,5,15]. Indeed, the equation of motion for λ then simply reads dλ/dt = 0, showing that all values of the Teichmüller parameter γ are obtained with a single multiplicity when integrating over the free integration constant λ_0 = λ(t_0) at some time value t = t_0, the b.c. in this sector being π(t_{i,f}) = 0. One of the purposes of the present note is to identify, at the quantum level, a general criterion for the admissibility of the class of gauge fixing functions in (17).

BRST quantisation

Quantisation of the BFV formulation amounts to constructing a linear representation space for the relevant (anti)commutation relations, with η̂^a = ĉ^a and P̂_a = −iħ b̂_a, equipped with a hermitean inner product ⟨·|·⟩ such that all these operators are self-adjoint. Note that (ĉ^a)^2 = 0 and (b̂_a)^2 = 0. Quantisation of the "matter" sector (q^n, p_n) has already been dealt with in Sect.2.2, for which we shall use the same notations and choice of basis. An abstract representation space for the ghost sector (c^a, b_a) is constructed as follows [5]: that sector of Hilbert space is spanned by a basis of 2^2 = 4 vectors denoted |±±⟩ (the first entry referring to the sector a = 1 and the second to the sector a = 2; this convention also applies to the bra-states ⟨±±|), on which the ghost operators act as given in (20). Their only nonvanishing inner products are those pairing each of these states with its opposite, any of these numbers being pure imaginary, such as for instance ⟨−−|++⟩ = ±i.
Finally, the normal ordered quantum ghost number operator Q̂_g is introduced; consequently, one has definite ghost number values for these states, the states |−−⟩ and |++⟩ carrying ghost number −1 and +1, respectively, while |+−⟩ and |−+⟩ carry vanishing ghost number. Even though at some later stage in our discussion we shall perform a circle compactification of the Lagrange multiplier degree of freedom, let us at this point consider the usual plane wave representation of the Heisenberg algebra in the (λ̂, π̂) Lagrange multiplier sector. Eigenstates |λ⟩ and |π⟩ of these operators are introduced with definite normalisation choices, leading to the wave function representations of these operators acting on any state |ψ⟩, as well as to the matrix elements ⟨λ|π⟩ for the change of basis. The quantum BRST charge Q̂_B is the operator counterpart of its classical expression. Furthermore, time evolution of the quantised system is generated by the BRST invariant Hamiltonian operator Ĥ_eff, leading to the BRST invariant evolution operator Û_eff(t_f, t_i). For the class of gauge fixing functions (17), an explicit evaluation finds the expression for Ĥ_eff given in (31), this operator being expressed in such a way as to make manifest its hermiticity property, Ĥ_eff^† = Ĥ_eff. Classically, within the extended formulation, physical states need to meet the constraints π(t) = 0 and φ(t) = 0, which implies that for the BRST quantised system the BRST invariance conditions characterising physical states must lead to the eigenvalues φ_k = 0 and π = 0, namely k = k₀ and π = 0. This is achieved by considering the cohomology of the BRST charge, i.e., by considering the states which are BRST invariant but are defined only modulo a BRST transformation. It may be shown that the general solution to the BRST invariance condition Q̂_B|ψ⟩ = 0 is of the form |ψ⟩ = |ψ_phys⟩ + Q̂_B|ϕ⟩, while the state |ϕ⟩ may be constructed from the remaining components of the BRST invariant state |ψ⟩ expanded in the basis |k; π; ±±⟩, |ψ⟩ = Σ_{k;±±} ∫_{−∞}^{+∞} dπ ψ_{k;±±}(π) |k; π; ±±⟩. Consequently, both the BRST cohomology classes at the smallest and largest ghost numbers, Q̂_g = −1 and Q̂_g = +1, are in one-to-one correspondence with the physical states |k₀⟩ of Dirac's quantisation (or |k₀; π = 0⟩ when the Lagrange multiplier sector is included), while the BRST cohomology class at zero ghost number, Q̂_g = 0, includes two copies of the Dirac physical states, associated to each of the ghost states |+−⟩ and |−+⟩. Physical states are usually defined to correspond to the BRST cohomology class at zero ghost number [2]. The matrix elements of the BRST invariant evolution operator Û_eff(t_f, t_i) between states of ghost number (−1) all vanish identically, on account of the vanishing ghost number of Ĥ_eff and the vanishing inner product ⟨−−|−−⟩ = 0, irrespective of the choice of gauge fixing function Ψ, and irrespective of whether the external states of ghost number (−1) are BRST invariant or not. However, these are not the matrix elements of Û_eff(t_f, t_i) that ought to correspond to those in (10) and (11), which describe in Dirac's quantisation the propagation of physical states only. Indeed, the latter may be obtained only for external states which are BRST invariant and of vanishing ghost number, in direct correspondence with the choice of such b.c. in (16). Equivalently, given the action of the ghost and BRST operators, such states are spanned by the set |k; π = 0; −+⟩, so that we now have to address the explicit evaluation of the matrix elements in (35). By construction, these matrix elements are clearly BRST and thus gauge invariant, and include those of the BRST cohomology class at zero ghost number associated to one of the two sets of states corresponding to Dirac's physical states.
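The matrix elements referred to as (35) are not displayed above; on the basis of the states and operators just introduced, they are presumably of the following form, written here as an assumed sketch (normalisation factors and the ordering conventions for the ghost labels are not guaranteed):

\[
\langle k_f;\,\pi_f{=}0;\,-+|\;\hat U_{\rm eff}(t_f,t_i)\;|k_i;\,\pi_i{=}0;\,-+\rangle\,,
\qquad
\hat U_{\rm eff}(t_f,t_i)\;=\;e^{-\frac{i}{\hbar}\,(t_f-t_i)\,\hat H_{\rm eff}}\,.
\]

It is these quantities, evaluated for the class of gauge fixing functions (17), which the remainder of the discussion compares with the Dirac matrix elements in (10) and (11).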
Nevertheless, these matrix elements are not totally independent of the choice of gauge fixing function Ψ, as shall now be established.

The BFV-BRST invariant propagator

Given the choice of gauge fixing function in (17) and the expression for the associated Hamiltonian Ĥ_eff in (31), it is clear that (35) factorises into two contributions, whether or not the conditions π_f = 0 = π_i required for BRST invariance of the external states are enforced, one of the two factors being N(π_f, π_i; φ_{k_i}). Of course, one is particularly interested in the value of N(π_f = 0, π_i = 0; φ_{k_i}) as a function of the first-class constraint spectral value φ_{k_i} or φ_{k_f}. As a warm-up, let us first restrict to the choice F(λ) = 0, known to be admissible. On the basis of the explicit expression for N(π_f, π_i; φ_{k_i}), it is clear that in this case one has a further factorisation, whose value readily reduces to the expression (39). Restricting then to the BRST invariant external states, one finally obtains the result (40) (the condition ∆t > 0 being implicit). Hence indeed, all these matrix elements vanish identically, unless both external states are physical, namely k_i = k_{0,i} and k_f = k_{0,f}, or φ_{k_i} = 0 = φ_{k_f}. However, when the external states are physical, these matrix elements are singular, on account of the δ-function δ(φ_{k_i}). Clearly, this is a direct consequence of the plane wave representation of the Heisenberg algebra in the Lagrange multiplier sector (λ̂, π̂) of the BFV extended phase space. Nevertheless, up to the singular normalisation factor (−sgn(β) ⟨−+|+−⟩ δ(φ_{k_i})), the BFV-BRST invariant matrix elements correctly reproduce the result in (10) for the propagator of physical states within Dirac's quantisation approach. On the other hand, note also that this distribution-valued normalisation factor is not entirely independent of the choice of function Ψ even when F(λ) = 0, since it depends on the sign of the arbitrary parameter β. In spite of that dependency, an admissible gauge fixing is achieved, since all of modular space is indeed recovered with a γ-independent integration measure. Turning now to an arbitrary choice of function F(λ), the explicit and exact evaluation of N(π_f, π_i; φ_{k_i}) may proceed through its discretised path integral representation. Applying the approach detailed in Ref. [5], one then establishes a general and exact result, which indeed reproduces the result (40) established for F(λ) = 0. A more general class of admissible gauge choices, characterised by two constant parameters a and b, is obtained in this way; on the other hand, still other choices of F(λ) fail to meet the admissibility requirement.

Deconstructing the Fradkin-Vilkovisky theorem

An argument often invoked [2] in support of complete independence of the BFV-PI from the choice of gauge fixing fermion is based on the observation that, for BRST invariant external states |ψ_1⟩ and |ψ_2⟩ such that Q̂_B|ψ_i⟩ = 0 (i = 1, 2), the matrix elements of the operator {Ψ̂, Q̂_B} vanish identically, the last equality in (45) following by considering the separate action of the BRST operator Q̂_B on the external states adjacent to it. Indeed, given nilpotency of the BRST charge, Q̂_B² = 0, this argument should also extend to similar matrix elements of the evolution operator Û_eff(t_f, t_i), which includes the contribution displayed in (46). In the case of the factor N(π_f, π_i; φ_{k_i}), this argument would appear to imply that, for the states of interest, this factor should reduce to a quantity involving only the products ⟨π_f|π_i⟩ ⟨−+|−+⟩, given the facts that ⟨π_f|π_i⟩ = δ(π_f − π_i) and ⟨−+|−+⟩ = 0.
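The vanishing argument referred to as (45) presumably takes the following schematic form, reconstructed here from the description in the text, with the self-adjointness of Q̂_B and the BRST invariance of the external states as the only inputs:

\[
\langle\psi_1|\,\{\hat\Psi,\hat Q_B\}\,|\psi_2\rangle
\;=\;\langle\psi_1|\,\hat\Psi\,\big(\hat Q_B|\psi_2\rangle\big)
\;+\;\big(\hat Q_B|\psi_1\rangle\big)^{\dagger}\,\hat\Psi\,|\psi_2\rangle
\;=\;0\,,
\qquad\hat Q_B|\psi_{1,2}\rangle=0\,.
\]

Given Q̂_B² = 0, the same reasoning is then applied, order by order in ∆t, to the Ψ̂-dependent contributions to Û_eff(t_f, t_i); it is precisely the validity of this step which is questioned in what follows.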
Even though this expression is ill-defined, it appears to be totally independent of the choice of gauge fixing fermion Ψ, in sharp contrast with its previous evaluations. The singular character of this result follows once again from the plane wave representation of the Lagrange multiplier sector (λ̂, π̂). Consequently, matrix elements are generally distribution-valued, and cannot simply be evaluated at specific values of their arguments. Rather, they should be convolved with test functions, or else evaluated first for arbitrary values of their arguments [5]. Hence the above argument certainly cannot be claimed to stand on a sound basis, and needs to be reconsidered carefully for the explicit evaluation of N(π_f, π_i; φ_{k_i}), given a specific value φ_{k_i} for the constraint eigenvalue but as yet unspecified values for π_f and π_i. In order to remain faithful to the spirit of the above argument, the calculation needs to be performed in the form given in (46), namely not by first computing the result of the anticommutator {Ψ̂, Q̂_B} and only then computing its matrix elements (in effect, this is the procedure used to reach the results of Sect. 3.3), but rather by having the operators act from left to right onto the external state |ψ_2⟩ for the first term inside the square brackets, at each order in ∆t/ħ in (46), and from right to left onto the state ⟨ψ_1| for the second term. This calculation is straightforward for the specific admissible choice F(λ) = 0. It would appear that indeed the resulting expression vanishes whenever one considers BRST invariant external states for which π_f = 0 = π_i. However, this is not the case, since the factor which multiplies (π_i − π_f) is itself singular for the values π_f = 0 = π_i, being distribution-valued. Indeed, the sum obtained in this way may also be expressed in a form that coincides with (39). Nevertheless, the details of this series of relations make manifest the fact that, had one set from the outset the values π_f = 0 = π_i, an identically vanishing result would have been obtained, rather than the correct but distribution-valued one, N(π_f = 0, π_i = 0; φ_{k_i}) = −β∆t ⟨−+|+−⟩ δ(β∆t φ_{k_i}), which does vanish unless precisely φ_{k_i} = 0. On the other hand, if from the outset one considers the value φ_{k_i} = 0, the above analysis once again yields a result in agreement with the general ones in (39) and (49). However, performing such a calculation with π_f = 0 = π_i from the outset leads back to an identically vanishing result, missing once again the correct distribution-valued result, N(π_f = 0, π_i = 0; φ_{k_i}) = −β∆t ⟨−+|+−⟩ δ(β∆t φ_{k_i}). In conclusion, these considerations establish that the argument based on (45) or (46), purportedly a confirmation that the BFV-PI is necessarily totally independent of the gauge fixing fermion Ψ, is not warranted. Being distribution-valued quantities, the relevant matrix elements have to be convolved with test functions or, equivalently, first be evaluated for arbitrary external states and only at the end restricted to the BRST invariant ones. In particular, setting from the outset the values π_f = 0 = π_i is ill-fated, and indeed even leads to ill-defined quantities such as 0 · δ(0). Nevertheless, when properly computed, the end result is perfectly consistent with that established in the previous section in a totally independent manner.
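The pitfall at work here is the familiar one for distributions: a prefactor that vanishes at a point cannot be traded against a distribution singular at that same point before the distributional identities have been used. A minimal illustration of the same mechanism, and not the actual expression of the text (whose detailed form is not reproduced above), is the elementary identity

\[
x\,\delta'(x)\;=\;-\,\delta(x)\,,
\qquad\text{i.e.}\qquad
\int_{-\infty}^{+\infty}dx\;f(x)\,x\,\delta'(x)\;=\;-\,f(0)
\quad\text{for any test function }f\,,
\]

whereas naively setting the prefactor x to its value on the support of δ′(x) would give an identically vanishing answer. In the same way, the factor multiplying (π_i − π_f) above must be handled as a distribution in π_i − π_f before the values π_f = 0 = π_i are imposed.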
And in the latter approach, a general expression for N(π_f, π_i; φ_{k_i}) is even amenable to an exact evaluation for whatever choice of function F(λ), through a path integral representation of the relevant matrix elements. This exact result displays explicitly the full extent to which, in a manner totally consistent with the built-in gauge invariance properties of the BFV-PI, the gauge fixed BFV-BRST path integral is indeed dependent on the choice of gauge fixing fermion Ψ [3,4,5], namely only through the gauge equivalence class to which that gauge fixing choice belongs, such a gauge equivalence class being characterised by a specific covering of modular space. Being gauge invariant, the BFV-PI necessarily reduces to an integral over modular space, irrespective of the gauge fixing choice. Nevertheless, which domain and which integration measure over modular space are thereby induced are functions of the choice of gauge fixing conditions. The BFV-PI is not totally independent of the choice of gauge fixing fermion Ψ.

The Admissibility Criterion

As is manifest from the previous expressions, the plane wave representation of the Heisenberg algebra in the Lagrange multiplier sector (λ̂, π̂) leads to distribution-valued results for specific BFV-BRST matrix elements. Consequently, it is sometimes claimed [16] that this very fact calls into question the relevance of the counter-examples to the usual statement of the FV theorem available in the literature and described in the previous sections, and that a proper handling of the ensuing singularities would show these counter-examples to be ill-fated, so that the BFV-PI ought indeed to be totally independent of the choice of gauge fixing fermion Ψ. In order to avoid having to deal with non-normalisable plane wave states, let us now regularise the Lagrange multiplier sector by compactifying the degree of freedom λ onto a circle of circumference 2L, such that −L ≤ λ < L, it being understood that any quantity of interest has to be evaluated in the decompactification limit L → ∞. Furthermore, the representation of the Heisenberg algebra [λ̂, π̂] = iħ which is to be used on this space, with its nontrivial mapping class group π₁(S¹) = ℤ, is that of vanishing U(1) holonomy [17]. Consequently, this sector of Hilbert space is now spanned by a discrete set of π̂-eigenstates |m⟩, for all integer values m. The configuration space wave functions are ⟨λ|m⟩, |λ⟩ being the configuration space basis. Given an arbitrary state |ψ⟩ and its configuration space wave function ψ(λ) = ⟨λ|ψ⟩, which must be single-valued on the circle, states in this sector are characterised by the normalisability condition ∫_{−L}^{L} dλ |ψ(λ)|² < ∞. In the relations defining these bases, δ_{2L}(λ − λ′) stands for the δ-function on the circle of circumference 2L. Given such a discretisation of the Lagrange multiplier sector (λ̂, π̂), let us now address again the different points raised previously concerning the FV theorem. BRST cohomology classes remain characterised in the same way as previously. The general solution to the BRST invariance condition Q̂_B|ψ⟩ = 0 is of the form |ψ⟩ = |ψ_phys⟩ + Q̂_B|ϕ⟩, the state |ϕ⟩ being constructed from the remaining components of the BRST invariant state |ψ⟩ expanded in the basis |k; m; ±±⟩, |ψ⟩ = Σ_{k;m;±±} ψ_{k;m;±±} |k; m; ±±⟩.
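The explicit normalisations of this compactified sector are not reproduced above; under the usual conventions for a circle of circumference 2L with vanishing U(1) holonomy, assuming orthonormal π̂-eigenstates ⟨m|m′⟩ = δ_{mm′} and ⟨λ|λ′⟩ = δ_{2L}(λ − λ′), they would read as follows, given here only as a sketch:

\[
\hat\pi\,|m\rangle=\frac{\pi\hbar m}{L}\,|m\rangle\,,\qquad
\langle\lambda|m\rangle=\frac{1}{\sqrt{2L}}\;e^{\,i\pi m\lambda/L}\,,\qquad
\delta_{2L}(\lambda-\lambda')=\frac{1}{2L}\sum_{m\in\mathbb{Z}}e^{\,i\pi m(\lambda-\lambda')/L}\,.
\]

Single-valued, normalisable wave functions ψ(λ) on the circle are then spanned by the discrete set ⟨λ|m⟩, and the continuous plane wave spectrum of the decompactified case is recovered only in the limit L → ∞.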
Consequently, both the BRST cohomology classes at the smallest and largest ghost numbers, Q̂_g = −1 and Q̂_g = +1, are in one-to-one correspondence with the physical states |k₀⟩ of Dirac's quantisation (or |k₀; m = 0⟩ when the Lagrange multiplier sector is included), while the BRST cohomology class at zero ghost number, Q̂_g = 0, includes two copies of the Dirac physical states, associated to each of the ghost states |+−⟩ and |−+⟩. Physical states are usually defined to correspond to the BRST cohomology class at zero ghost number [2]. The matrix elements of the BRST invariant evolution operator Û_eff(t_f, t_i) between states of ghost number (−1) all vanish identically, on account of the vanishing ghost number of Ĥ_eff and the vanishing inner product ⟨−−|−−⟩ = 0, irrespective of the choice of gauge fixing function Ψ, and irrespective of whether the external states of ghost number (−1) are BRST invariant or not. However, these are not the matrix elements of Û_eff(t_f, t_i) that ought to correspond to those in (10) and (11), which describe in Dirac's quantisation the propagation of physical states only. Indeed, the latter may be obtained only for external states which are BRST invariant and of vanishing ghost number, in direct correspondence with the choice of such b.c. in (16). Equivalently, given the action of the ghost and BRST operators, such states are spanned by the set |k; m = 0; −+⟩, so that we now have to address the explicit evaluation of the matrix elements (58), the discretised analogues of the matrix elements in (35). As before, these matrix elements are, by construction, BRST and thus gauge invariant, and include those of the BRST cohomology class at zero ghost number associated to one of the two sets of states corresponding to Dirac's physical states. Nevertheless, they are not totally independent of the choice of gauge fixing function Ψ, as shall now be established once again.

Evaluation of the BRST invariant matrix elements

In order to evaluate the matrix elements (58), rather than using a path integral approach, the operator representation of the quantised system shall be considered. Given the choice of gauge fixing function in (17) and the expression for the associated Hamiltonian Ĥ_eff in (31), it is clear that (58), as well as its extension for arbitrary values of m_f and m_i, factorises, with one of the factors given by N_L(m_f, m_i; φ_{k_i}). The evaluation of the ghost contribution to this factor, through a direct expansion of the exponential operator and a resolution of the ensuing recurrence relations, implies a further factorisation. Consider then the quantities in terms of which the functions G_n(λ) are defined, through their relation to the corresponding operator acting on the state |m = 0⟩. These functions obey recurrence relations which may be solved as follows: introducing the variable u such that dλ(u)/du = F(λ(u)), given some initial value λ₀ = λ(u₀), the functions G_n(λ) are obtained in closed form; using a suitable integral representation, it then follows that one may write the general expression (67). It is of interest to first consider the choice F(λ) = 0, which is known to define an admissible gauge fixing. One then has G_n(λ) = (βφ_{k_i}λ)^n, leading to the values given in (68). Consequently, in the limit L → ∞, the matrix elements (58) are given by (69). Hence indeed, up to a β-dependent normalisation, these matrix elements reproduce those in (10) representing, within Dirac's quantisation, the propagation of physical states only. Given the representation in (11), one thus concludes that the choice F(λ) = 0 defines an admissible gauge fixing.
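To make the L → ∞ behaviour just quoted concrete, here is a minimal sketch of the Lagrange multiplier contribution for F(λ) = 0, assuming, consistently with G_n(λ) = (βφ_{k_i}λ)^n, that this contribution resums to the diagonal matrix element of exp(−iβ∆t φ_{k_i} λ̂/ħ) in the state |m = 0⟩; the ghost sector factor, which carries the β-dependent normalisation mentioned in the text, is omitted here:

\[
\langle m{=}0|\;e^{-\frac{i}{\hbar}\,\beta\,\Delta t\,\phi_{k_i}\,\hat\lambda}\;|m{=}0\rangle
\;=\;\frac{1}{2L}\int_{-L}^{L}d\lambda\;e^{-\frac{i}{\hbar}\,\beta\,\Delta t\,\phi_{k_i}\,\lambda}
\;=\;\frac{\sin\!\big(\beta\,\Delta t\,\phi_{k_i}L/\hbar\big)}{\beta\,\Delta t\,\phi_{k_i}L/\hbar}
\;\;\xrightarrow{\;L\to\infty\;}\;\;\delta_{\phi_{k_i},\,0}\,,
\]

so that in the decompactification limit only the physical values φ_{k_i} = 0 = φ_{k_f} survive, in line with the statement that (69) reproduces the Dirac matrix elements of (10) up to a β-dependent normalisation.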
Let us now turn to the general case of an arbitrary function F(λ). Given the result (67), it is clear that whenever φ_{k_i} = 0 and φ_{k_f} = 0, the matrix element (58) reduces again to the same value as in (68) and (69). However, it is the decoupling of the unphysical states which may not be realised [6], implying specific restrictions on the choice for F(λ). In order to take the limit L → ∞ of these matrix elements, it is best to introduce a rescaled variable λ = Lλ̃ with −1 ≤ λ̃ < 1. Given the general expression (67), it should be clear that, in order to reproduce the same results as in the admissible case F(λ) = 0, the limit displayed in (70) must define a finite function F̃(λ̃) for all values of λ̃. Whenever this criterion is met, the choice of gauge fixing function in (17) defines an admissible gauge fixing of the system, for which the BRST invariant matrix elements (58) are given as in (69), and do indeed reproduce, up to some normalisation factor which is also a function of the parameter β, the correct time evolution of Dirac's physical states only. Note, however, that the resulting matrix elements in (69) are nonetheless functions of the parameter β appearing in such choices of admissible functions Ψ. Furthermore, when the criterion (70) is not met, the associated choice of gauge fixing is not admissible, since the BRST invariant matrix elements (58) then do not coincide with (69), and thus cannot be expressed through a single integral covering of Teichmüller space as in (11). In other words, the BFV-PI, which provides the phase space path integral representation for the BRST invariant matrix elements (58), cannot be entirely independent of the choice of gauge fixing "fermion" function Ψ, in contradiction with the FV theorem as usually stated. The conclusion reached in (70) may also be confronted with the argument based on (45) and (46), by considering once again the relevant matrix elements, now written in the form (72), it being understood that the action of the operators on these external states is evaluated along the same lines as in Sect. 3.4. Hence, this evaluation shall also be done for the specific choice F(λ) = 0, known to be admissible and to lead to the results in (68) and (69). The explicit expansion of the above matrix elements then reduces to the series of expressions (73), in perfect analogy with the calculation in (49). Note that in this form, setting from the outset the values m_f = 0 = m_i leads to a vanishing expression, as it did in the analysis of Sect. 3.4. Furthermore, if from the outset we take the physical values φ_{k_i} = 0 = φ_{k_f}, only the term with k = 1 in the sum survives, leading to values none of which reproduce the correct ones in (68). However, in the plane wave representation of Sect. 3.4 these quantities were distribution-valued, and a final integration by parts had to be applied before the correct result could be recovered. Likewise, in the present discretised representation, the final evaluation of the above expression leads to a result such that, setting now m_f = 0 = m_i, the value for N_L(m_f = 0, m_i = 0; φ_{k_i}) still vanishes identically, irrespective of whether the constraint eigenvalue φ_{k_i} is physical or not. Nevertheless, by having compactified the degree of freedom λ(t) onto a circle, thus leading only to a discrete spectrum of quantum states in the Lagrange multiplier sector (λ̂, π̂), we have avoided any use of distribution-valued matrix elements. Why, then, does the argument based on (45) and (46) still not lead to the correct result?
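The precise form of the criterion (70) is not reproduced above. Purely as an illustrative guess, and not as a quotation from the text, one reading consistent with the rescaling λ = Lλ̃ just introduced would be

\[
\tilde F(\tilde\lambda)\;:=\;\lim_{L\to\infty}\;\frac{F(L\tilde\lambda)}{L}
\qquad\text{finite for all }\;-1\le\tilde\lambda<1\,.
\]

Under this hypothetical reading, a linear choice such as F(λ) = a + bλ (plausibly the two-parameter admissible class mentioned earlier, whose explicit form is likewise not reproduced above) would give F̃(λ̃) = bλ̃ and pass the criterion, whereas a quadratic F(λ) = cλ², taken here only as an illustration, would give a divergent limit and fail it.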
The fact of the matter is that the adjoint action, from the right onto the external states ⟨m_f; −+|, of the operator strings (Q̂_BΨ̂Q̂_BΨ̂···) in (46) is not necessarily warranted when the operators λ̂ and π̂ appear in combination in the compactified regularisation. For example, consider the matrix elements ⟨m_f|π̂λ̂|m_i⟩: if, in a second expression for these quantities, the adjoint action of the operator π̂ onto the state ⟨m_f| is used, one is forced to a conclusion in obvious contradiction with the Heisenberg algebra [λ̂, π̂] = iħ. In the presence of the operator λ̂, the adjoint action of π̂ on bra-states should therefore be avoided. Rather, one should evaluate the action of all operators from the left onto ket-states, and only at the very end project the result onto the relevant bra-states. Proceeding in this manner for the same matrix elements leads to a result in obvious agreement with the Heisenberg algebra [λ̂, π̂] = iħ. The same conclusions may be reached by considering the explicit wave function representations of the Heisenberg algebra given in (52) and (53) for the circle topology. In fact, the operator λ̂, being represented through multiplication by λ of the single-valued wave functions ⟨λ|ψ⟩ on the circle for which the operator π̂ = −iħ∂/∂λ is self-adjoint, leads to wave functions λ⟨λ|ψ⟩ that are no longer single-valued on the circle. In particular, the integration by parts corresponding to the adjoint action of the derivative operator π̂ = −iħ∂/∂λ induces a nonvanishing surface term because of the lack of single-valuedness of the wave function λ⟨λ|ψ⟩, in direct correspondence with the second relation in (80). In other words, even though both operators are well defined on the space of normalisable wave functions on the circle, the operator λ̂ maps outside the domain of states on which the operator π̂ is self-adjoint. This is thus the core reason why the evaluation of the matrix element (58) according to the argument in (46), in which the strings of operators (···Ψ̂Q̂_BΨ̂Q̂_B) and (Q̂_BΨ̂Q̂_BΨ̂···) act separately from the left onto the ket-states |m_i; −+⟩ and from the right onto the bra-states ⟨m_f; −+|, respectively, is unwarranted. Indeed, even when F(λ) = 0, it is precisely the combination π̂λ̂ which appears in the product Q̂_BΨ̂, for which, as detailed above, the adjoint action of π̂ from the right onto the bra-states is not justified unless the proper surface term contributions are accounted for as well (whereas for the product Ψ̂Q̂_B the relevant combination is λ̂π̂, which unambiguously acts from the left onto the ket-states). Nevertheless, such ambiguities do not arise for the actual anticommutator {Ψ̂, Q̂_B} when it is explicitly evaluated, without keeping the two classes of terms separate as is done in the argument based on (45) and (46). For example, when F(λ) = 0, the potentially troublesome term that is then left over is simply the one responsible for the transformation of the ghost ket-state |−+⟩ into the state |+−⟩, which possesses a nonvanishing overlap with the ghost bra-state ⟨−+|. Applying this prescription to the evaluation of the matrix elements in (72), one is in fact brought back to the approach used in Sect. 4.1, thereby reproducing the general results established in that context. For example, when F(λ) = 0, a direct calculation along the lines of (73) readily confirms this, with a result to be compared to (75) in light of the remarks in (79) and (80). In particular, setting then m_f = 0 = m_i, exactly the same results as in (68) and (69) in the L → ∞ limit are thus recovered.
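A minimal worked illustration of this surface term mechanism, using the circle wave functions assumed earlier (⟨λ|m⟩ = e^{iπmλ/L}/√(2L)) and therefore a sketch rather than the text's own relations (79) and (80), is the following diagonal matrix element:

\[
\langle m|\,\hat\pi\hat\lambda\,|m\rangle
=\int_{-L}^{L}d\lambda\;\overline{\langle\lambda|m\rangle}\,\big(-i\hbar\,\partial_\lambda\big)\big[\lambda\,\langle\lambda|m\rangle\big]
=-\,i\hbar\;+\;\frac{\pi\hbar m}{L}\,\langle m|\hat\lambda|m\rangle
=-\,i\hbar\,,
\]

since ⟨m|λ̂|m⟩ = (1/2L)∫_{−L}^{L} λ dλ = 0, whereas the naive adjoint action of π̂ on the bra-state would instead give ⟨m|π̂λ̂|m⟩ = (πħm/L)⟨m|λ̂|m⟩ = 0. Correspondingly,

\[
\langle m|\,[\hat\lambda,\hat\pi]\,|m\rangle
=\langle m|\hat\lambda\hat\pi|m\rangle-\langle m|\hat\pi\hat\lambda|m\rangle
=\frac{\pi\hbar m}{L}\,\langle m|\hat\lambda|m\rangle-(-\,i\hbar)=i\hbar\,,
\]

while the naive evaluation would give zero, in contradiction with [λ̂, π̂] = iħ; the missing −iħ is exactly the surface term generated when π̂ = −iħ∂/∂λ is integrated by parts against the non-single-valued function λ⟨λ|m⟩.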
In conclusion, even though the compactification regularisation of the Lagrange multiplier sector was introduced to circumvent the subtle issues explaining why the argument based on (45) and (46) and on plane wave representations of the Heisenberg algebra, an argument claiming to confirm that the BFV-PI ought to be totally independent of the choice of gauge fixing fermion Ψ, is unwarranted, new subtleties arise for a finite value of L, implying once again that this argument does not stand up to closer scrutiny. When properly analysed, the argument rather confirms once more the results obtained by direct evaluation of the relevant matrix elements. In particular, these matrix elements, which correspond to the BFV-PI, even though gauge invariant, are not independent of the choice of gauge fixing procedure. The general criterion for the admissibility of the class of gauge fixing fermions defined in (17) is provided in (70).

Conclusions

Rather than gauge fixing the system through its Lagrange multiplier sector, as is achieved through the choice made in (17), it is also possible to contemplate gauge fixing in phase space through some condition of the form χ(q_n, p_n) = 0, which, within the BFV-BRST formalism, is related to the choice of gauge fixing "fermion" given in (84), ρ being an arbitrary real parameter. In the same manner as described in this note for the class of gauge choices (17), it would be of interest to identify a criterion that the function χ(q_n, p_n) should meet in order for the associated gauge fixing to be admissible. However, this issue turns out to be quite involved, and we have not been able to develop a general solution. In fact, in contradistinction to the class of gauge fixings analysed in this note, the answer to this problem in the case of the choices in (84) would also depend on more detailed properties of the first-class Hamiltonian H, on the structure of the original configuration space of the q_n, and on how the local gauge transformations generated by the first-class constraint φ act on that space. In Ref. [6], two specific models are considered for which the criterion of admissibility in terms of the function χ(q_n, p_n) is indeed different for each model. Another simple model which was considered is defined by a seemingly trivial action in which the single degree of freedom q(t) takes its values on a circle of radius R, while N is some normalisation factor. The associated first-class constraint φ = p − N generates arbitrary redefinitions of the coordinate q(t), while in this case the first-class Hamiltonian H vanishes, H = 0. An admissible phase space gauge fixing condition is χ(q, p) = q − q_i, q_i being some initial value for q(t). At the quantum level, and when taking due account of a possible nontrivial U(1) holonomy [17] for the representation of the Heisenberg algebra [q̂, p̂] = iħ, it turns out that the factor N is quantised and that the physical spectrum is reduced to a single p̂-eigenstate. When computing the BRST invariant matrix elements (58) of interest for the choice of gauge fixing (84) with χ = q − q_i, the admissibility of this gauge fixing is confirmed, once again up to a normalisation factor stemming from the ghost and Lagrange multiplier sectors which is explicitly dependent on the parameters ρ and β appearing in Ψ.
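The action of this seemingly trivial model is not displayed above. A natural candidate consistent with the properties listed (first-class constraint φ = p − N, vanishing first-class Hamiltonian, q(t) living on a circle of radius R) is the following sketch, to be read as an assumption rather than a quotation:

\[
S[q]\;=\;N\int_{t_i}^{t_f}dt\;\dot q(t)\,,\qquad q\;\sim\;q+2\pi R
\qquad\Longrightarrow\qquad
p=\frac{\partial L}{\partial\dot q}=N\,,\quad \phi=p-N\approx 0\,,\quad H=p\,\dot q-L=(p-N)\,\dot q\;\approx\;0\,.
\]

For vanishing U(1) holonomy, the p̂-eigenvalues on such a circle are p_n = nħ/R with n ∈ ℤ, so a nonempty physical spectrum φ̂ = 0 requires N to be quantised in units of ħ/R and selects the single eigenstate p = N, consistent with the statement in the text; a nontrivial holonomy would simply shift this quantisation condition.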
In particular, and as is also the case with the result established in (69), the BRST invariant matrix elements (58) vanish identically in the limit β → 0, including those for k_i = k_{0,i} and k_f = k_{0,f}, which should correspond to the nonvanishing physical ones in (10). Hence, contrary to the usual statement of the Fradkin-Vilkovisky theorem, the BRST/gauge invariant BFV path integral is not totally independent of the choice of gauge fixing "fermion" Ψ. This note has revisited this issue once again, with two main conclusions. First, for a general class of gauge fixing "fermions", it has identified a general criterion for admissibility within a simple general class of constrained systems with a single first-class constraint which commutes with the first-class Hamiltonian. This criterion is in perfect agreement with the conclusions of explicit counter-examples to the usual statement of the FV theorem, and it may be seen as a continuation of the work in Ref. [6]. Second, the basic reasons why the general argument claiming to establish complete independence of the BFV-PI from the gauge fixing "fermion" is unwarranted in the case of the associated BRST invariant matrix elements have been addressed in simple terms. It has been shown that the lack of total independence of the BFV-PI from Ψ arises because, whereas the action of the anticommutator {Ψ̂, Q̂_B} on BRST invariant states is unambiguous, that of the operators Ψ̂Q̂_B and Q̂_BΨ̂ taken separately is not. These conclusions were reached by two separate routes, namely by working either with the plane wave representation on the real line for the Lagrange multiplier sector Heisenberg algebra, or else by compactifying that sector onto a circle in order to avoid having to deal with non-normalisable states and a continuous spectrum of eigenstates. In the first approach, it was shown that, due to the distribution-valued character of the relevant matrix elements, the usual arguments claiming to establish complete independence from Ψ have to be considered with greater care, thereby confirming the lack of total independence, even though manifest gauge invariance is preserved throughout. In the compactified approach, it was shown that the usual argument is beset by another ambiguity, namely the fact that the Lagrange multiplier operator λ̂ maps outside of the domain of normalisable states on which the conjugate operator π̂ is self-adjoint, inducing further crucial surface terms which are ignored by the usual argument. Incidentally, were it not for such subtle points, the usual statement of the FV theorem would be correct, so that the BFV-PI would always be vanishing, irrespective of the choice of gauge fixing, clearly an undesirable situation, since the correct quantum evolution operator could then not be reproduced. This is explicitly illustrated by the fact that, using the compactification regularisation and in the limit β → 0, the BFV-PI vanishes for the choices (17) and (84), independently of the function F(λ) or the parameter ρ. Indeed, for these two choices it is precisely the parameter β which controls any contribution from the gauge fixing "fermion" to the BFV-PI. The actual and precise content of the FV theorem has already been described in the Introduction. As mentioned there, its relevance is really within a nonperturbative context, while for ordinary perturbation theory there is no reason to doubt that the BFV-PI should be independent of the gauge fixing fermion [10].
However, the subtle and difficult issues raised by the correct statement of the Fradkin-Vilkovisky theorem are certainly bound to play an important role in the understanding of nonperturbative and topological features of strongly interacting nonlinear dynamics, such as that of Yang-Mills theories. Faced with this situation, it thus appears that the admissibility of any given gauge fixing procedure must be addressed on a case by case basis, once a specific dynamical system is considered. In particular, this requires knowledge of the modular space of gauge orbits of the system, in general a difficult problem in itself. However, it should be recalled that any quantisation procedure for a constrained system which does not involve gauge fixing, such as that based on the physical projector [13], set precisely within Dirac's quantisation approach only, avoids having to address these difficult problems of identifying modular space and assessing admissibility. Indeed, through the physical projector approach, an admissible covering of modular space is always achieved implicitly [14], as illustrated for example in (11).