Social Integration and Community Health Participation of Elderly Men in Peri-Urban Ecuador
Background: Social integration is an essential element in the maintenance of health and well-being in elderly populations. In the Cumbaya Valley of Quito, Ecuador, community health clinics sponsor social clubs for specific populations to address this important aspect of health. Men, who tend to be less socially integrated than women, are largely absent from these programs. Objective: This paper investigates the quality and extent of men's social integration in the Cumbaya Valley of Quito to understand why men are less likely to attend the community health center clubs and to develop ideas for increasing male participation, which may differ from current methods. Methods: A composite survey was used to interview 100 men over the age of 40 to collect data on their social health and information regarding their interaction with community health center clubs and other local social groups. Findings: Social integration scores varied, with some men scoring high and others low. Men generally had greater access to affectionate and tangible support but lower access to emotional support and positive social interaction. Men spent far more social time with their families and much less with friends and neighbors. Regression analysis revealed that social scores are associated with age and education. Qualitative results suggest that gendered expectations of men in the community have negatively impacted their willingness to engage in community health groups. Participants also provided suggestions, including specific sports, gardening, and meal distribution, to promote male participation. Conclusion: There is a strong need to increase services, strategies, and programs that address the lack of social integration experienced by men. This paper presents the particular role community clinics can play in increasing the social well-being of their male patients.
Participation in a diversity of social activities acts as a protective measure against both communicable and non-communicable diseases. In an assessment of nearly seven thousand citizens from Finland, "one of the most powerful predictors of loneliness was living alone; the lonely folks…were 31 percent more likely to have died…than people who felt intimately connected" [3]. A supportive social network facilitates better health behaviors, including smoking cessation and increased exercise [4]. Moreover, "socially isolated seniors are more at risk of negative health behaviors," with social isolation linked to a greater risk of longer and more frequent hospitalizations as well as lower cognitive functioning [4,5]. A systematic review of 34 articles relates both music and physical activity programs with increased cognitive performance in older adults [6]. Exercise, aerobic in particular, is associated with an increase in hippocampal volume, improving cognition and protecting against memory loss in late adulthood [7]. The positive implications of social integration and physical activities in the community are broad in scope. While patriarchy privileges men in most social and economic contexts, there are persistent disparities in health outcomes for men along gender lines, sometimes referred to as the cost of masculinity [8]. Patriarchy and machismo
culture are, in the end, detrimental to both men and women. Men have a lower life expectancy than women, with the gap between male and female life expectancy widening [9]. Men are less likely to utilize health services than women, and the gender norm of traditional masculinity stigmatizes self-care, unemployment, and other health-relevant aspects of life in male-specific ways [8,9,10]. Despite these documented disparities, the topic of men's health has not been a significant focus of research and policy efforts, especially in Latin America. A recent report from the Pan-American Health Organization (PAHO) emphasizes the importance of studying masculinities in relation to health outcomes, given the impact of gender norms in machismo culture in Latin America [10]. The report indicates that hegemonic forms of masculinity should be considered a risk factor for health based on men's limited socialization, identity-formation, and cultural participation, especially in a retirement stage of life where a man can no longer be defined by his profession [10]. Women tend to have more robust social networks than men. It seems that "women's tendency to put a premium on their social connections is one of the main reasons they live longer" [1]. While the economic and patrimonial aspects of gender inequity heavily impact women, the socio-emotional implications of traditional gender roles are detrimental to men. Men are frequently treated as mere obstacles in gender research and policy, a field that has become synonymous with "women". True promotion of health equity requires treating men both as diverse and as allies in building healthier communities [10]. As a middle-income country, Ecuador is undergoing both a demographic and an epidemiological transition [11]. The life expectancy in Ecuador is 77.7 years for both sexes combined, 80.5 for women and 75.1 for men [12]. The health system requires restructuring to address an increase in the elderly population (above 65 years of age) and the rising incidence of non-communicable disease. While care for the elderly through nuclear and extended families is culturally understood as ubiquitous across all Ecuadorian populations, research from the Ecuadorian Sierran highlands indicates that social networks may not be as integrated as conventionally perceived [13]. Only after social movements throughout the twentieth century, culminating in the new Constitution of 2008, did the state formally recognize the full multicultural and multinational identity of its citizenry [14,15]. The historical marginalization of indigenous populations continues to have an impact today, especially in regions that are more rural and/or historically indigenous, where social isolation occurs at higher rates [11,13]. The majority of the Ecuadorian population self-defines as mestizo [11], a mix between indígena (indigenous) and European (White) descent, which results from a complicated history wherein mestizos were afforded full rights of citizenship (including access to land, health, and education) whereas indigenous populations were not [14]. Other ethnic demographics in the Sierra region include afro-Ecuadorian or negro (Black), montubio (coastal campesinos, or farmworkers), and mulatto (historically understood as a mix between Black and Caucasian/Mestizo Ecuadorians) [11,14]. The Ecuadorian Ministry of Health relies on a framework for integrated health, the Modelo de Atención Integral de Salud (MAIS), grounded in upholding the rights and entitlements of all citizens.
MAIS addresses social determinants of health while promoting "family, community, and intercultural health" [16]. The Cumbaya Valley lies in the Sierran highlands east of the capital city of Quito and forms one of two agrarian valleys that border the urban locale. Though within hours of the capital with proper transport, it has historically been home to a marginalized indigenous population and still contains many remote areas. Publicly funded community health centers, one serving each town in the Valley, execute the MAIS framework at the community level in partnership with the Universidad de San Francisco de Quito (USFQ) Medical School. Through the Programa de Atención Integral de la Salud (Integrative Health Care Program), USFQ began working with local community health centers in 2015 to cultivate equitable partnerships that benefit the communities in the Cumbaya Valley [17]. The Clubes de Adultos Mayores (Elderly Clubs), free of charge and growing in popularity, have been established at each clinical site. The weekly Elderly Clubs allow participants to engage socially with members of their local community and medically with health professionals and medical students. The programming acts as a sustainable form of community engagement in each town while also introducing medical students to community-based learning that encourages social integration as an aspect of well-being. As Elderly Clubs take root in a clinical environment, community providers see the weekly social gathering as a form of physical health management (i.e., blood pressure monitoring, nutrition promotion, physical exercise promotion, etc.). Often overlooked is the importance of mere human congregation: Surprisingly, face-to-face social capital in a neighborhood can predict who lives and who dies even more powerfully than whether the area is rich or poor. In 2003, when several Harvard epidemiologists put nearly 350 neighborhoods under the microscope, they discovered that social capital, as measured by reciprocity, trust, and civic participation, was linked to a community's death rates. The higher a community's level of social capital, the lower its mortality rates [18]. Even the health providers who promote the Elderly Clubs may not fully understand the implications of such groups. The clinic itself can become a place of equitable social integration that strengthens community health and well-being. Participation rates and activities vary across locations. One constant in the various clubs, however, is a large gender disparity in participation: the gender distribution of the Elderly Clubs is roughly 80% women and only 20% men. This study investigates the quality and extent of men's social integration in the Cumbaya Valley to better understand why men are less likely to attend the community health center clubs and to develop ideas for increasing male participation.
Study Population
This observational study was completed in conjunction with the Latitude-0 Ecuador Research Initiative of USFQ and in collaboration with the USFQ Medical School. Data collection took place from April through July 2019. Study participants were 100 men 40 years of age or older from the following five demographically similar towns in the Valley of Tumbaco: El Quinche, Lumbisi, Pifo, Puembo, and Tumbaco. Participants were recruited from the health centers of each of these communities through a community-based partnership previously established between the USFQ Medical School and the Ministry of Health.
Initially, only four sites were consulted for the study population. When access to the Tumbaco clinical site became difficult due to administrative issues within the Ministry of Health, researchers transitioned to the neighboring community of Lumbisi to collect the remaining surveys. Based on clinical records of the older people who attend the El Quinche and Pifo clinics, only 0.4% and 1% identify as indigenous, respectively. Recruitment occurred through convenience sampling at the health centers, during at-home medical visits, and through local community groups. All participants spoke Spanish. Surveys were administered through interviews between participants and research assistants affiliated with USFQ. All study materials and procedures were approved by the Comité de Ética de Investigación en Seres Humanos at USFQ (IRB), and all materials containing sensitive personal identity information were secured at USFQ.
Study Instrument and Procedure
General demographics were collected for all study participants following the standardized structure of previously utilized surveys developed at USFQ for the Cumbaya Valley's elderly population (Table 1). The composite survey (Sup1SIEM.pdf) combined two previously validated instruments: the Social Participation and the Health and Well-Being of Canadian Senior Citizens (SPHWB) survey [19] and the Medical Outcomes Study (MOS) Social Support Survey [20]. The MOS data allow insight into the subjective experiences of loneliness, isolation, community engagement, and affective love experienced by each study participant. The SPHWB provides a baseline understanding of the types and frequency of social interaction in which participants engage. The qualitative component of the study included conversing with men who did not participate in social activity clubs. They shared both the causes of their lack of participation in these groups and their ideas on how to increase participation. Regardless of age or participation, all participants were welcome to provide suggestions for new programming at the Elderly Clubs. The accuracy of the translation into local, culturally appropriate language was verified by native Spanish-speaking staff at USFQ. Verification involved a two-step process in which the survey was first reviewed and edited by USFQ Medical staff and then further tested by administering three mock surveys/interviews with male students of the USFQ medical school. Data collection, assisted by the KoboToolbox platform, resulted in 100 valid surveys.
Statistical Analysis
Once 100 valid surveys were obtained, data collection was considered complete. Data were analyzed using Stata/SE 15.1 software, and statistical significance was set at an alpha value of 0.05. Means and standard deviations were determined for continuous variables, and percentages for nominal variables. Geometric means of each social participation score were compared to the means of each subcategory using paired t-tests. A linear regression model was used to assess the association between social participation scores and selected demographic characteristics.
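To make the shape of these two analyses concrete, the sketch below reproduces them in Python rather than Stata, using hypothetical column names (emotional, affectionate, social_score, age, education) for a per-participant data file; it is an illustration of the described approach, not the authors' code.

```python
# Minimal sketch of the analyses described above (the study used Stata/SE 15.1).
# Column names and the file "surveys.csv" are hypothetical placeholders.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("surveys.csv")  # one row per participant

# Paired t-test comparing two MOS subscores within the same men,
# e.g. emotional vs. affectionate support (alpha = 0.05).
t, p = stats.ttest_rel(df["emotional"], df["affectionate"])
print(f"paired t-test: t = {t:.3f}, p = {p:.4f}")

# Linear regression of the social participation score on demographics; the
# quadratic age term allows the U-shaped age trend reported in the Results.
model = smf.ols("social_score ~ age + I(age**2) + education", data=df).fit()
print(model.summary())
```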
Quantitative Results
In responses to the Medical Outcomes Survey, which indicates the subjective experience of social support, overall scores largely fell in the top three quintiles; 15% of participants produced low scores, answering "never" or "rarely" to most questions posed about social support. The SPHWB, which examines the types of people with whom participants interact and the types of activities they undertake, yielded more varied results: the majority of participants fall in the middle three quintiles (Figure 1). The emotional and social subscores are not statistically significantly different from each other (p = 0.7156), nor are the subscores for affectionate and tangible help (p = 0.7917). However, the emotional subscore is statistically significantly less than both the affectionate subscore (p = 0.0022) and the tangible subscore (p = 0.0054). Similarly, the positive social participation subscore is statistically significantly less than the affectionate subscore (p = 0.0088) and the tangible subscore (p = 0.0189). Overall, participants demonstrate higher access to tangible and affectionate support than to emotional support and positive social participation (Figure 2). In responses to the SPHWB questions (Table 2), participants were more likely to frequently participate in social activities with their family than with neighbors or friends (Figure 3). Age is negatively related to the MOS score until age 68.96, at which point each additional year is related to an increase in the score (Table 3). With increased education, there is an increased social score. There were no associations between other demographic components (race, housing, employment) and overall social integration scores. The majority of study participants over the age of 60 did not participate in an Elderly Club through the Health Center or the municipality (Figure 4).
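The reported turning point at age 68.96 is what a quadratic age term produces: for a fitted model score = b0 + b1*age + b2*age^2, the vertex lies at -b1/(2*b2). A minimal sketch with illustrative coefficients (not the values in Table 3):

```python
# Turning point of a quadratic age effect: score = b0 + b1*age + b2*age**2.
# The coefficients below are illustrative placeholders, not reported values;
# they are chosen so the vertex lands near the paper's 68.96.
b1 = -2.7586   # hypothetical linear age coefficient
b2 = 0.02      # hypothetical quadratic age coefficient
turning_point = -b1 / (2 * b2)
print(turning_point)  # 68.965 -> the score declines before ~69 and rises after
```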
Qualitative Results
The authors identified two categories in the qualitative results: barriers to accessing community health programming, and community-suggested solutions for increasing male participation in Elderly Club activities.
Barriers to Health Center Programs
Many study participants cited specific reasons for not participating in the Elderly Clubs sponsored by the clinics. A total of 37 respondents provided answers to these preset responses, and some respondents gave multiple answers (Figure 5). In terms of time conflicts, work was cited five times. Transportation issues, six of which were specifically cited as health-related, were mentioned as another main barrier to attendance. A few interviewees agreed to become members of the club after the interview concluded, demonstrating that established interest exists but that the communication and coordination of the clubs' times and activities might be lacking. One participant described how his recent medical history influenced his participation. Scheduling compounds accessibility issues for some, especially those who have caregivers and cannot leave the house by themselves. One participant's daughter, who acts as his caregiver, explained: "All of the programs are always on a fixed schedule; it is difficult to bring [my elderly relatives]." Attendance for those with limited physical mobility is highly unlikely given the current scheduling and social (including at-home) support systems. The roles of needing care, as well as being a caregiver, can limit a person's ability to engage in social life beyond the family. Some older men asserted that they did not attend the club because they had to fulfill caretaking duties for fellow family members: "I like physical activities. I cannot leave my wife, for whom it is difficult to leave the house and who is at risk of falling. But yes, I like to go out and do activities when I can."
While the barriers listed above are related to non-gender-specific dilemmas, many gender-specific implications did arise. One participant explained why he does not attend the Elderly Club: "Because my wife does not want to go." He explained that while he did enjoy the activities of the club, and did want to attend, he would only attend alongside his wife. While interested, he does not feel empowered to participate without the social support of his wife. In this situation, a peculiar and important phenomenon appears: the participant demonstrated a social dependence on his wife. Further responses highlighted gendered attitudes towards social life in general and the Elderly Clubs in particular. Men cited their gender, gendered social expectations, and social stigma as barriers to club attendance. Even one man who attends experiences shame from his community for partaking in the activities. Men perceive the Elderly Clubs as an unwelcoming environment for men: "They don't appreciate us, men, very much… not all men are bad, and we care about the rest of society and our families." Addressing the barriers to access requires solutions that are supported by community members and that address both the gender-specific and gender-nonspecific barriers to Elderly Club attendance.
Community-Suggested Solutions
Male interviewees provided several unique suggestions for increasing male participation. Some men also expressed interest in becoming involved directly with the programming initiatives and working in an organizing role. A total of 34 respondents provided answers to these preset responses, and some respondents gave multiple answers (Figure 6). Common responses included various types of exercise, conversation/socializing, or community work. Recommendations ranged in feasibility, depending on the age and integral health of participants. In the interviews, many of the men suggested that the club should be more inclusive towards older men specifically: "I would like to see more 'strength-based' activities." "When it is your own space, you can do more things. People dance in their homes, play national songs… [then] maybe I'll come to the club." "Men are more closed off [than women], there are people that live far away within Pifo, it's such a big place…[I'd suggest] volleyball, sports, beers, and food." "We need to change the name; there is a stigma related to age." Specific pleas for "more masculine" activities, as well as indications that the club does not currently feel like one's own space, demonstrate an implicit, sometimes even explicit, understanding that not all are welcome. The name of the club, while using the more formalized and respectful Spanish term for elderly persons, stigmatizes some who think the term adulto mayor categorizes them in a negative light. Many study participants supported the idea of starting a community garden at the health clinic: "There are different needs: emotional, physical. [The club must be] more than something superficial, especially in physical and mental terms. [Men] have worked for the most part out in the fields on the land, and men continue an agricultural way of life [as they age]." In addition to the cultivation of fruits and vegetables, the provision of snacks or meals (noted in five interviews) was listed as a main attraction for men to participate. A further suggestion was made by one man in Pifo who belonged to the local Jehovah's Witness congregation.
He suggested that door-to-door invitations, similar to those used by his church, might increase the number of participants in the community clubs. He further invited the interviewer to attend the church to promote the club and complete more interviews with men.
Discussion
This is the first study to evaluate social participation in adult men in Latin America, and it brings to light the need for more attention in this area. Strengths included quick and efficient engagement with the community, combining data collection with the promotion of currently available resources. Many men of the Cumbaya Valley did not have high social integration scores and could benefit from increased social activity. Men were more likely to have access to tangible and affectionate support than to emotional aid or positive social interaction (Figure 2). This difference may arise because men interact with people outside of the family at a significantly lower frequency (Figure 3). Men in the Cumbaya Valley would benefit from stronger, more integrated social networks. More research is needed to understand the impact of age on social integration, given its unique trend. As people age, they may realize an increased desire to experience connection and stronger social networks. The tangible realities of retirement and increased physical dependency may also allow for greater social connection. An alternative explanation might posit that people who have more robust social networks and social support tend to live longer; those living longer would be expected to have a higher social score as a result of the protective nature of their lifestyle. Education was positively associated with social integration scores. We hypothesize that, just as education is associated with better overall health, higher education levels may promote better social well-being, either by helping to establish stronger and larger social networks or by increasing self-efficacy. Age was negatively associated with social scores until the age of 69, at which point it became positively associated; the initial decline may reflect the shrinking social networks that can come with aging, while the later upturn may reflect the factors described above. While education and age were associated with social scores, neither employment nor housing showed any significant association with social engagement. The health center groups are largely attended by women, and most men who participate are the husbands of women in the clubs. Men generally feel uncomfortable, especially on their own. People are more willing to join if they have friends or acquaintances who already participate and if they assume an already normalized role within the community (i.e., husband of a woman who has female friends in the group). Many men may experience social dependency on their female partners. Many of the study participants held strong attitudes towards men's role in social life, even though the questions were never specifically designed to elicit older men's views on gender and sexuality. The study participants expressed normative sentiments regarding masculinity. Many believed that men should not have social commitments beyond those of work and family. The types of social activities promoted at the health clinic appear to be culturally understood as feminine. As expressed in the PAHO Masculinities Report, machismo culture is circumscribing social participation for men [10].
Requests for more traditionally masculine activities should be taken seriously, and a wider conversation should begin regarding limiting gender roles that diminish health equity. Gardening was strongly supported by male community members and has shown efficacy when piloted in Pifo. Especially given the large population of elderly persons who worked in agriculture before retirement, this activity welcomes both men and women equally. The leaders of the garden initiative in Pifo, who planted most of the plants and tend the garden daily, include both men and women working together. The garden, which lies in the backyard of the clinic, increases the number of informal interactions throughout the week between clinical staff and community members and addresses one current barrier to accessing Elderly Clubs: a strict schedule. Communities of faith could offer another avenue for integrating elderly community members, as they already provide a platform for community-based engagement. Such programming already exists in places such as El Quinche, where the local medical staff visit the Reina de El Quinche parish each week to complete medical check-ups and initiate social engagement for the elderly. The study design faced several limitations. Convenience sampling, while generally a reliable method for pilot testing, limited the representativeness of the results. Further, while the interview/questionnaire allowed for qualitative analysis, a structure was not in place to gather the more detailed stories of participants or to formally collect suggestions for the clubs at each clinic. The study design focused on frequency scales, but the actual number of social connections was not tracked. For example, a person with only one social connection may respond that he "always has someone to give him a hug," but that access could qualitatively differ from that of a person who is constantly surrounded by several doting relatives, friends, and acquaintances. Local health providers were not formally consulted to understand their perspective on male participation in Elderly Clubs.
Conclusions and Suggestions
Research conducted with communities in the Cumbaya Valley demonstrates the need for more social spaces that integrate male community members. Clinical staff and university collaborators may conduct further research, through means such as focus groups, to understand the social barriers faced by men and to implement more inclusive programming. Understanding an individual's social web, rather than a mere subjective perspective on socialization, would provide a firmer understanding of the communities' needs. This becomes especially important considering that overlapping social relationships are protective of health [1]. Given the trend of age with social integration levels, it may be best to target younger populations before they reach an age of higher social abandonment. Since older populations often require increased caretaking and transportation support, programming could be designed to decentralize the gathering of the groups or to facilitate community-based resources dedicated to transportation and/or accompaniment. As the groups are largely attended by women, they may act as safe social support for women. However, older men currently feel excluded by the structure of the programming. New approaches must maintain the current communal nature of the Elderly Clubs while addressing disparities in participation. Further research should continue to address gender-based drivers of access inequity at the local community health level.
Addressing male-specific needs may include an expansion of the existing Elderly Clubs or the formation of new groups. A final quotation captures the sentiment described by many of the older men interviewed: "When I arrive [at the Elderly Club], it is important that people hug me, that they listen to me. All of us need to be heard. [You should] take care of your dad, listen to the things from his whole life." The local community clinic can be a space in which the Village Effect is reproduced. For some in the Cumbaya Valley, los Clubes de Adultos Mayores already build the capacity for such social integration, a capacity that must be revitalized in a safe and equitable manner as and when COVID-19 protocols allow. More research and engagement, especially focused on male individuals who face gender-specific stigmatization, will help ensure that a positive and supportive community is facilitated for all.
Magnetic Resonance Imaging as a Tool for the Study of Mouse Models of Autism
Jacob Ellegood1*, R. Mark Henkelman1,2 and Jason P. Lerch1,2
1Mouse Imaging Centre, Hospital for Sick Children, Toronto, Ontario, Canada
2Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
Autism is a heterogeneous disorder, in both its behaviour and its genetics. This heterogeneity has led to inconsistencies in the neuroanatomical findings in human autistic patients. The benefit of a model system, such as the mouse, is that the heterogeneity of the genetics can be decreased and the environment standardized in order to determine a specific anatomical phenotype that is representative of a specific genotype. Magnetic Resonance Imaging (MRI) has been used quite extensively to examine morphological changes in the mouse brain; however, examining volume and tissue microstructure changes in mouse models of autism with MRI is just in its infancy. This review will discuss the current research on anatomical phenotyping in mouse models of autism.
Introduction
In Leo Kanner's 1943 paper, he evaluated 11 children with differing signs and symptoms, describing what has come to be referred to as autism. The children in that study were quite heterogeneous in both their symptoms and the severity of those symptoms. Autism, as currently defined, is still quite heterogeneous. The three hallmark characteristics of autism (social deficits, communication deficits, and repetitive restrictive behaviour) have large ranges in severity. For example, the communication deficits range from a delay in the development of spoken language to a total lack of any communication (American Psychiatric Association, 2000). Autism is a genetic disorder, with a 90% concordance rate in identical twins and a 15-20% risk of autism in siblings. Similar heterogeneity is seen in the genetics, with well over 200 genes associated with autism [2]. However, no single gene accounts for more than 1-2% of autistic cases [3]. Using Magnetic Resonance Imaging (MRI), one can detect subtle volume and tissue microstructure changes in the brain, in both humans and the mouse [4]. Meta-analyses of human brain imaging papers have revealed some overlap across studies, yet autism imaging research is plagued by inconsistencies [5][6][7][8]. The authors of these analyses highlight age and IQ as an explanation for these inconsistencies, which is certainly a factor, but it is also the genetic, environmental, and behavioural heterogeneity that is driving this variability in imaging. In an animal model, such as the mouse, almost all of that heterogeneity can be eliminated, as the genetics and the environment can be tightly controlled. This review will focus on MRI in mouse models of autism; specifically, it will examine how MRI is used to assess differences in volume and tissue microstructure in the mouse brain. The current literature will be discussed, followed by a brief synopsis of where to go from here.
The Mouse as a Model System
When the sequencing of the human genome was completed [9,10], researchers started to map the genomes of other mammals. The first mammal examined was the mouse [11]. Knowing the genome of the mouse allows one to gain an understanding of how the genotype relates to the phenotype: the anatomical or behavioural characteristics of the mouse.
The genes and pathways in the mouse are very similar to those in the human; in fact, there is a 99.5% probability that a gene from the mouse is also recognized in the human [11]. Economic reasons also make the mouse an excellent model for research. For one, the mouse is quite small in size, limiting housing costs. Secondly, a number of different, readily available inbred mouse strains exist, which are, within each strain, genetically identical. Genes can be added, deleted or replaced with relative ease in the mouse, allowing the investigation of the effect of any specific gene. A growing inventory of behavioural tests that show characteristics similar to autism has been reported. Combining all of these factors makes the mouse an easy-to-use and economical model system with which the consequences of human disease and behaviour can be examined.
Magnetic Resonance Imaging in the Mouse
Where a brain phenotype is unknown, 3D imaging techniques at the mesoscopic scale (a range in between microscopic and macroscopic) can detect very subtle differences, which can lead the researcher to a region of interest for further examination at the microscopic scale [12]. Examples of mesoscopic 3D imaging techniques used in the mouse [12] include Computed Tomography (CT), which is used frequently for investigating high-density structures like bone [4] or vascular trees that have been filled with X-ray opaque contrast agents [13,14]. Recently, there has been growing interest in embryo imaging with microCT, which relies on the use of contrast agents such as iodine to enhance soft tissue contrast [15][16][17]. Ultrasound Biomicroscopy (UBM), commonly used for cardiac imaging [18,19], is also useful for studying embryonic development [20]. Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) require the use of exogenous contrast agents, which can be tagged to any molecule, nanoparticle, or cell. Both, however, are difficult to scale down from human to mouse [21], and they are often combined with other 3D imaging techniques (MRI or CT) to pair the molecular specificity of PET/SPECT at low resolution with better spatial resolution [22]. Optical Projection Tomography (OPT) imaging, which is essentially fluorescence CT, has recently been used to image fixed samples of the mouse brain or embryo at quite high resolution [23]. Lastly, there is the focus of this review, Magnetic Resonance Imaging (MRI), which has been used extensively in the brains of both mouse models [24] and human patients. MRI uses the nuclear magnetic resonance properties of the water molecule to produce an image of the brain, or of other organs of interest. MRI has the best soft-tissue contrast of all the 3D imaging techniques. This contrast comes about because water in different regions of the brain interacts differently with the surrounding environment. With MRI, these differences in the water can be harnessed and the MRI sequences manipulated to obtain differing tissue contrast depending on our interests. Figure 1 shows the different types of contrast available with MRI. MRI is not readily scaled from human to mouse because of the decreased signal as voxels are scaled down: smaller voxels contain less water.
The voxel dimensions need to decrease 10- to 15-fold in each dimension (human voxel dimensions: 1 mm isotropic; mouse voxel dimensions: 0.125 mm isotropic in vivo, 0.056 mm isotropic for fixed brain) to achieve images in the mouse comparable to those in the human. In order to achieve this increased resolution, several modifications to the MR scanner hardware and imaging protocols are required, including specialized radio-frequency coils, an increase in magnetic field strength, and an increase in the scan duration. The long scan duration causes two additional problems: 1) for in-vivo scanning, there is a time limit due to anesthesia limits for mice, typically ~3 hrs; and 2) for fixed imaging, where there is no physiological limitation, the problem becomes scanner time, especially on a shared system. This can be overcome by scanning more than one mouse at a time in parallel, a technique coined "Multiple Mouse MRI" by the Henkelman group at the Mouse Imaging Centre in Toronto [25][26][27]. Currently, the major application of mouse MRI is neuroscience, with much of the work focused on genetic models. Examples include Huntington's disease [28][29][30][31], Alzheimer's disease [32][33][34], and other mental health disorders like schizophrenia [35,36] and, recently, autism [37][38][39]. Genetic knockouts are also examined to identify the role of specific genes in development, behaviour and aging. Behaviour links tightly with anatomy: 90% of gene mutations in mice that show a motor/neurological deficit feature an MRI-detectable anatomical phenotype [40], and surprisingly, even learning and memory can be detected through neuroanatomical changes. Five days of training in the Morris water maze were sufficient to induce changes in mesoscopic neuroanatomy [41], indicating that anatomical phenotyping could be used to assess learning or other behaviours in the mouse.
Anatomical Imaging with MRI
Anatomical phenotyping with MRI can be used to examine differences between groups of mice, usually a mutant mouse group versus a control mouse group, with the goal of determining where in the brain they differ. This can be done by measuring the volumes of brain structures, which gives a quantitative measure that can then be compared between groups. In some cases, it may be easy to see a difference in volume; for example, the Engrailed2 Knockout (KO) mouse has a smaller cerebellum, which is clearly visible [42]. However, in other cases, the differences may be quite subtle. Deformation Based Morphometry (DBM) is a commonly used automated technique that can detect anatomical differences between populations. DBM requires no prior hypotheses and produces an unbiased measurement of the volume differences between groups across the entire brain. DBM is a quantitative image analysis technique that evaluates information contained within the vector field generated by the nonlinear warping of individual MRI scans to some sort of reference brain, or to each other [32,43]. DBM has been used previously to examine cross-sectional morphological differences and longitudinal anatomical changes in humans [44], as well as in mouse models [45][46][47]. Figure 2 is a diagram of the process used for DBM: the scans are automatically aligned towards a common average, segmented with an anatomical atlas, and local volume differences are measured; the final outcomes are anatomical structure volumes per mouse, as well as maps of significant local volume differences. A method such as this is highly specific and reproducible [48], and with only 10 mice in each group (genetic mutant vs. control), a 5% difference in volume can be detected.
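As a sketch of the core DBM measurement, the snippet below computes the voxel-wise Jacobian determinant of the deformation (identity plus displacement), which quantifies local volume change relative to the reference brain. This is an illustration under the assumption that a displacement field from the nonlinear registration is available; it is not the authors' pipeline.

```python
# Local volume change from a nonlinear registration's displacement field.
# A determinant > 1 means the voxel is locally larger than the reference;
# < 1 means locally smaller. Input shape is assumed (X, Y, Z, 3), voxel units.
import numpy as np

def local_volume_change(disp):
    jac = np.empty(disp.shape[:3] + (3, 3))
    for i in range(3):                       # displacement component
        grads = np.gradient(disp[..., i], axis=(0, 1, 2))
        for j in range(3):                   # spatial derivative direction
            jac[..., i, j] = grads[j]
    jac += np.eye(3)                         # deformation = identity + displacement
    return np.linalg.det(jac)                # one volume-change value per voxel

# A group comparison would then run a voxel-wise statistic (e.g., a t-test with
# multiple-comparison correction) on these maps across mutants and controls.
```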
Diffusion Tensor Imaging (DTI)
DTI is an alternative method used to generate a different type of contrast on an MRI image, and it can provide quantitative information that relates to the tissue microstructure. DTI was originally proposed in 1994 by Basser et al. [49]; in that work, they estimated what is called the effective diffusion tensor by measuring the diffusion of water in multiple directions. This diffusion tensor is representative of how water diffuses within a given voxel and highlights differences between isotropic (unordered or spherically symmetric) tissues, such as gray matter, and anisotropic (highly ordered) tissues, such as white matter. The major quantitative measures taken from DTI are Fractional Anisotropy (FA), which measures the degree of anisotropy (order) in the tissue, and Mean Diffusivity (MD), which is the average diffusion over all directions. A difference in FA is representative of differences in myelination, a change in tissue permeability, and/or a difference in axonal organization, structure, or size. While differences in FA are not specific to any one of these factors (notwithstanding that many researchers attribute a difference in FA to myelination differences), FA still reveals a change in the underlying tissue microstructure and highlights an area of interest in the mutant brain.
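For reference, both scalars follow directly from the eigenvalues of the diffusion tensor; the sketch below uses the standard definitions with illustrative eigenvalues, not data from any of the cited studies.

```python
# MD and FA from diffusion tensor eigenvalues (standard definitions).
import numpy as np

def md_and_fa(evals):
    l = np.asarray(evals, dtype=float)
    md = l.mean()                                        # Mean Diffusivity
    fa = np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))
    return md, fa

# Illustrative eigenvalues in units of 10^-3 mm^2/s:
print(md_and_fa([1.7, 0.3, 0.2]))   # anisotropic, white-matter-like: FA ~0.84
print(md_and_fa([0.8, 0.8, 0.8]))   # isotropic, gray-matter-like: FA = 0
```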
DTI has become quite useful for examining mouse brain development. Mori et al. [50] at Johns Hopkins University have pioneered the use of diffusion imaging in the mouse brain. Their DTI studies have revealed a characteristic evolution of diffusion anisotropy in the cortex and white matter tracts throughout the brain during development. This ability to detect changes in the organization of the brain during development can answer questions about both normal and abnormal development. Mori et al. [50] have also looked at different genetic mouse models to examine how genetics influence the tissue microstructure. One example they looked at was the Frizzled3 (Fz3) KO mouse [51], in which multiple structures and white matter fiber tracts were found to be absent or greatly reduced. Specifically, they found an abnormal U-shaped bundle immediately caudal to the optic tract, which connected the hemispheres of the thalamus; they determined that this tract most likely failed to join the internal capsule at an earlier stage of development. They also examined callosal dysgenesis in two different mouse models, which have a new tract called a Probst bundle [52]. The Probst bundle is an anterior-posterior travelling white matter bundle caused by a rerouting of the corpus callosum. Thus, DTI provides a wealth of new information about the organization of white matter tracts in the brain, which can help us better understand structural connectivity in the brain. As demonstrated by this previous work in the mouse, MRI can provide the researcher with an enormous amount of data that identifies previously unknown regions of interest, and it may help answer questions about the effects of both genetics and behaviour on the brain.
Imaging in Autism
In human autism, results are often compounded by confounds such as age, IQ, environment, genetics, etc. These factors increase the heterogeneity of an already heterogeneous disorder. For example, while some studies report an increase in the size of the hippocampus [53], other studies report a decrease or no change at all [54,55]. Similar differences can be seen with the amygdala [53,55]. In the mouse, the genetics of the subjects can be matched so that a single genetic mutation can be examined; furthermore, the environment can be controlled, eliminating most of these confounding factors.
Humans
Several meta-analyses have examined volumetric findings in human autism with MRI without identifying much consensus. However, there are trends worth mentioning. In 2005, Redcay and Courchesne [56] published a meta-analysis determining when exactly the brain is enlarged in autism, as there had been conflicting reports. They concluded that there is an early period of pathological brain overgrowth, followed by normalization, and that this happens during the first 5 years of life. Stanfield et al. [6] performed a meta-analysis on structural MRI studies in order to determine the neuroanatomy of autism. They found that the total brain was increased in size, as were the cerebral hemispheres, cerebellum, and caudate nucleus, whereas the corpus callosum was reduced. They also noted that the inconsistencies in the literature might relate to differences in age and IQ, as well as to different regions showing abnormal growth trajectories. A review that summarizes the findings from structural MRI studies of human autism has recently been published by Stigler et al. [57]. While these studies highlight some consistent anatomical findings in autism, there is no possibility of accurately diagnosing a child with autism using structural MRI findings alone. The most consistent, well-replicated finding is the reported decrease in size or thinning of the corpus callosum, and yet there are still reports that have not found differences. Two possible causes lead to this inconsistency: 1) the noise of a given study is too high to find the subtle changes that are happening in the brain, and 2) there are multiple causes of autism (i.e., different genes) that result in different anatomical correlates yet produce similar behavioural symptoms. Thus, a model system is needed in which the heterogeneity of the genetics can be decreased and the environment standardized, which makes the mouse ideal.
Mouse
As mentioned previously, human autism is defined by three behavioural characteristics: social deficits, communication deficits, and repetitive restrictive behaviours. While it may follow that autism in the mouse should be equivalently behaviourally diagnosed, how does one determine a communication or social deficit in the mouse? Jacqueline Crawley's lab has pioneered behavioural testing in the mouse to help define autistic behaviour [58][59][60], and in fact a few behaviourally autistic mouse strains have been discovered. An example of a mouse that encompasses all three of the core behavioural features of autism is the BTBR mouse [61,62]. For the most part, however, autism in the mouse is defined only through genetics. Autism-related syndromes account for a small portion of autistic patients; the rest of the autism population is made up of abnormal Copy Number Variations (CNVs), single gene mutations, or currently unknown causes [3]. These unknown cases are thought to be caused by multiple genetic mutations.
Currently, the SFARI gene database lists 200+ genes that have been associated with autism [2], with no single gene accounting for more than 1-2% of autistic cases [3]. Of those 200+ genes, 70+ are listed as having animal models, with that number increasing every year. Typically, a new mouse model of autism is created as follows: a genetic study of a human autistic population is performed, and a genetic mutation is discovered; then a mouse model representative of that genetic mutation is created and analyzed to see how it relates to the human case. For example, Jamain et al. [63] found an inherited mutation in the NeuroLigin3 (NL3) gene in a family with two brothers, one with typical autism and the other with Asperger's syndrome. This mutation replaced a highly conserved arginine residue with cysteine at amino acid position 451 (R451C), which caused a decrease in the amount of NL3. Tabuchi et al. [64] later introduced that same mutation into a mouse, creating the NL3 R451C Knockin (NL3 KI) mouse model. Of those 70+ genetic mouse models, fewer than 10 have published volumetric analyses using MRI, most of them recent (Table 1). Therefore, using MRI to detect differences in mouse models of autism is just in its infancy; however, there is a growing literature on the subject. Originally, the papers focused on single-gene syndromes related to autism, such as Fragile X Syndrome (FXS) and Rett Syndrome (RTT). Approximately 15-33% of patients with FXS are also classified as having autism, and currently under DSM-IV (although this is changing in DSM-V), Rett Syndrome is classified as an Autism Spectrum Disorder (ASD). One of the first papers to examine a mouse model related to autism with MRI looked at FXS [65]. FXS is caused when the Fragile X Mental Retardation 1 (FMR1) gene is mutated by a small part of the gene sequence being repeated (the more repeats, the more severe the phenotype). In 1994, the Dutch-Belgian Fragile X Consortium created the first FMR1 mouse in order to study the physiological role of the FMR1 gene [66]. In 1999, the same group studied the neuroanatomy of the FMR1 mouse using high-resolution MRI [65], and although they did not find any significant difference between groups in that original paper, they hypothesized that "the method described may find wide application in the study of mutant mouse models with neurological involvement". Examination of the FMR1 mouse was revisited in a 2010 study [67]. In contrast to the 1999 FXS study, which examined the volume of 3 regions and the surface area of 7 different regions on a mid-sagittal slice and found no differences, the 2010 study examined 62 different regions in the brain; after accounting for multiple comparisons, 3 regions were highlighted, and only one achieved significance: the arbor vitae of the cerebellum. The other two regions showed trends towards a decrease in the size of the striatum and an increase in the parietotemporal lobe. The arbor vitae of the cerebellum is composed of both the white matter of the cerebellum and the deep cerebellar nuclei. When the authors investigated this further, they reported that two of the deep cerebellar nuclei, namely the fastigial nucleus and the nucleus interpositus, were significantly decreased in size. The authors then proceeded to examine these regions further using histology, and they concluded that the changes in the nuclei occurred due to a loss of neurons and a subsequent increase in astrocytes as a result of reactive gliosis.
The anatomical phenotyping performed in such studies is not the end of the investigation; what it does is highlight an area of interest within the brain where a detectable difference is found, which then leads to further investigation. Several mouse models of Rett Syndrome (RTT) are currently available. They range from full null mutants, which have shortened lifespans of ~8 weeks, to truncation mutations with milder consequences but similar behavioural characteristics. In 2006, a null-mutant mouse model of Rett Syndrome was examined with MRI [68]. In this study, volumes were calculated by manual segmentation of each structure on all slices. The authors found an overall reduction in brain size and reductions in the thickness of the motor cortex and corpus callosum. Trends were found in cerebellar volume, as well as noticeable changes in the number of lobules in the cerebellum. This global reduction in overall brain size is a constant feature found in RTT patients. Furthermore, the thinning of the corpus callosum and motor cortex is also commonly found in RTT. The authors note, however, that not all the morphological abnormalities found in RTT were seen in the mouse model, as the caudate nucleus and thalamus were not decreased in size in this mouse. In 2008, Ward et al. [69] performed a longitudinal study on the brains of RTT null mice from 21 to 42 days of age. They used MRI to calculate 4 different measures of brain development: total brain volume, cerebellar volume, ventricle volume, and motor cortex thickness. Similar to the 2006 study, total brain volume was decreased in the RTT null mice at all time-points in the study, and the cerebellar volume was also decreased initially but normalized by 42 days of age; however, the motor cortex thinning reported in the 2006 study was not replicated. The same group later assessed the response to environmental enrichment in these same 4 regions [70]. They determined that environmental enrichment not only improved the performance of the RTT mouse in locomotor and fear conditioning tasks, but also that ventricular volume negatively correlated with the improved locomotor activity. In 2011, Ellegood et al. [38] examined the brain of the Mecp2 308 truncation RTT mouse (Society for Neuroscience Annual Meeting), reporting the volumes of 62 different structures. Similar to the previous RTT studies, total brain volume was decreased, and the volume changes reported are consistent with what has been found previously in human RTT. These four studies highlight the power of using MRI to detect volume differences in the brain. Not only are the changes often replicated in subsequent studies, but they also mirror findings in human RTT patients, showing that anatomical phenotyping in mouse models can replicate volumetric abnormalities found in human patients. Copy number variations are quite common in the human population, and specific CNVs have been found to be associated with autism susceptibility. The long arm of chromosome 16 is an example: a deletion of the 16p11.2 region is associated with autism, while a duplication of this region is associated with both autism and schizophrenia. Recently, Horev et al. [71] created mouse models of the 16p11.2 deletion and duplication. These mice were then anatomically phenotyped to look for differences in the brain between groups. In this study, the authors report a strong dosage effect on the volumetric findings in the brain.
Specifically, 16p11.2 deletions increased brain size in comparison to controls, whereas 16p11.2 duplications led to decreases. In fact, these mice had dosage-dependent effects in gene expression, brain architecture, and behaviour. Furthermore, the authors found that the deletion was more severe than the duplication. Strong increases in brain size between the 16p11.2 deletion mice and the WT were found in a number of midline structures, with the hypothalamus findings being the most intriguing. The hypothalamus finding in this study had not been previously reported in mouse or human; however, it did account for the behaviour seen in the mouse. Thus, anatomical phenotyping identified a previously unknown region of interest that was in fact responsible for the behavioural phenomenon. Recently, two additional single-gene mutations associated with autism have been examined in the mouse, and many common volumetric findings were found in the two models. The two seemingly unrelated models are the Neuroligin3 R451C Knockin (NL3 KI) and the Integrinβ3 Knockout (ITGβ3 KO) mouse. The Neuroligin genes are synaptic adhesion genes located on the postsynaptic membrane, and the ITGβ3 gene's role is to control platelet function, cell adhesion and cell signaling, as well as being related to the serotonin system. Both of these genes have been associated with autism in separate human studies [63,72]. These mouse models were both studied using the same MRI sequence and analysis [38,39]. The NL3 KI mouse model had marked volume differences in many different structures, including the total brain volume, which was decreased by 8%. Specific gray matter regions such as the hippocampus, striatum, and thalamus were significantly decreased; in fact, the total gray matter in the brain was decreased by 8%. Similarly, white matter regions had quite strong decreases in size: the corpus callosum, cerebral peduncle, fornix, and internal capsule were all strongly decreased in volume, and the total white matter volume was decreased by 10%. The ITGβ3 KO mouse model also had strong volume differences. Similar to the NL3 KI, the ITGβ3 KO mouse's total brain volume was decreased by 11%, and while not all the volume differences matched those of the NL3 KI mouse, there were some striking similarities. 1) In both the NL3 KI and the ITGβ3 KO mice, the white matter was strongly affected. White matter differences have become common findings in human autism, with the theory that children undergo a period of abnormal white matter development; furthermore, white matter deficits in autism have been thought of as atypical or incomplete connectivity. 2) Volume differences in the corpus callosum have a similar pattern in both models; Figure 3 shows an image of the significant decreases in the corpus callosum in both models. As mentioned, a decreased volume or thinning of the corpus callosum has been one of the most consistent findings in human autism. 3) Both the ITGβ3 KO and NL3 KI mice also had significantly smaller hippocampi, and in both cases the dentate gyrus and stratum granulosum were much smaller. These similarities between two seemingly unrelated mouse models of autism highlight a large benefit of the unbiased volumetric measurements performed with Deformation Based Morphometry using MRI: anatomical phenotyping can show similarities and differences across the spectrum of autistic models, perhaps grouping some of the genetic causes.
All of the white matter volume differences reported in these mouse models make DTI increasingly necessary for examining the tissue microstructure of the white matter. Only a few studies have looked at mouse models of autism with DTI. One study examined the FMR1 KO and found no differences in any of the diffusion measures [67]. Another study, on the NL3 KI model, found only small differences in FA in the globus pallidus of the mouse brain, in spite of the large number of volume differences found in the white matter structures of that model [38]. Given these large volume differences in the white matter of the NL3 KI, the authors were surprised to find a lack of FA differences. They speculated that this could be caused by a loss in the number of axons (a decreased bandwidth), with the density, size and organization of the axons remaining consistent between models. Recently, Kumar et al. [73] used DTI to examine the BALB/cJ mouse, a model of reduced sociability relevant to autism. In that study, they examined the social behaviour of the BALB/cJ mouse at 3 different time-points and scanned the mice longitudinally with DTI at each of these times. The authors examined 8 manually selected regions of interest (5 gray matter and 3 white matter), in which they reported trends (as noted in that paper, these findings did not hold up when corrected for multiple comparisons) towards higher Mean Diffusivity (MD) in the corpus callosum and reduced Fractional Anisotropy (FA) in the external capsule. They attribute the change in FA in the external capsule to reduced myelination, although it could also be attributed to a change in the structure or density of the axons in that region.
Conclusions and Future Directions
Anatomical phenotyping at the mesoscopic scale in autism is obviously still in its infancy, and no strong conclusions about autism as a whole can be made from the imaging that has been performed so far. In spite of the finding that the ITGβ3 KO and NL3 KI have similar anatomical characteristics, there is no great overlap across the small number of mouse models of autism that have been examined to date, and perhaps none should be expected. The anatomical findings for each individual gene or CNV are certainly relatable to the same genetic case in the human population, as illustrated by the RTT findings. With 70+ mouse models of autism currently existing and fewer than 10 examined, a larger overlap or grouping of models cannot be established until more are investigated. The goal should be to examine as many models of autism as possible, in as similar a way as possible. The findings from all of those mice should then be pooled together to cluster the different models based on their neuroanatomical findings. These clusters could then give rise to different autism subsets, allowing for different treatments. This, in turn, could lead to better individual treatments of human autism.
Rutin-Mediated Priming of Plant Resistance to Three Bacterial Pathogens Initiating the Early SA Signal Pathway
Flavonoids are ubiquitous in the plant kingdom and have many diverse functions, including UV protection, auxin transport inhibition, allelopathy, flower coloring and insect resistance. Here we show that rutin, a member of the flavonoid family, can function as an activator that improves plant disease resistance. Pretreatment with 2 mM rutin enhanced resistance to Xanthomonas oryzae pv. oryzae, Ralstonia solanacearum, and Pseudomonas syringae pv. tomato strain DC3000 in rice, tobacco and Arabidopsis thaliana, respectively, even though these pathogenic bacteria propagated normally on culture medium supplemented with 2 mM rutin. The enhanced resistance was associated with primed expression of several pathogenesis-related genes. We also demonstrated that rutin-mediated priming resistance was attenuated in npr1, eds1, eds5, pad4-1, and ndr1 mutants and in NahG transgenic Arabidopsis plants, but not in snc1-11, ein2-5 or jar1 mutants. We conclude that the rutin-priming defense signal is modulated by the salicylic acid (SA)-dependent pathway from an early stage upstream of NDR1 and EDS1.
Introduction
Flavonoids belong to an important class of secondary metabolites in plants and can be divided into several subgroups by the diversity of their chemical radical groups [1]. They exhibit broad biological functions including defense (antibacterial activity), UV protection, auxin transport inhibition, allelopathy, energy transfer, control of respiration and photosynthesis, and flower coloring in plants [2]. Rutin is a member of the large flavonoid family and is broadly distributed in fruits, vegetables and other plant food sources [3,4]. Even in tobacco leaves, the rutin content reaches approximately 80 μg g⁻¹ fresh weight [5]. Rutin also has anti-inflammatory and strong antioxidant properties, and it has been reported to bind metal ions.
Bacteria were cultured to logarithmic phase; cells were then collected by centrifugation and resuspended in distilled water to form a gradient of concentrations of 10⁶, 10⁷ and 10⁸ CFU ml⁻¹ (approximately equal to 0.2 OD). The bacterial suspensions were grown on PSA solid medium containing 0 to 4 mM rutin, purchased from Sangon Biotech (Shanghai, CN), at 28°C for 24 h. The pathogenic bacteria Ralstonia solanacearum SD (isolated in Shandong Province, China, in 2011) and Pseudomonas syringae pv. tomato DC3000 (Pst DC3000) were cultured as described above, with the medium replaced by Nutrient Agar (NA) medium and King's B (KB) medium, respectively. All the pathogens were studied only in the lab and greenhouse; no specific permissions were required for these locations/activities.
Plant material and pathogen inoculation
Rice Mudanjiang 8 (Oryza sativa cv. japonica) plants were grown in the greenhouse at 28°C and 70% relative humidity with a 12 h photoperiod. Five plants at the booting stage were each sprayed with a solution containing different concentrations of rutin diluted in distilled water supplemented with 0.02% Tween 20. The control plants were sprayed with 0.02% Tween 20 only. The plants were inoculated with PXO99 (Philippine race 6) by the leaf clipping method three days after pre-spraying with rutin, as described previously [24]. Disease was scored by measuring the lesion length at 7 and 14 days after inoculation. The results show average values of three independent experiments.
N. benthamiana plants were grown in the greenhouse under a 16 h light/8 h dark cycle at 25°C with 70% relative humidity. Eight-week-old plants were sprayed with different concentrations of rutin diluted in distilled water containing 0.02% Tween 20. The control plants were sprayed with 0.02% Tween 20 only. The plants were inoculated with 10⁸ CFU ml⁻¹ of R. solanacearum SD through hypodermic injection with a syringe after three days of treatment [25]. Growth of R. solanacearum SD was measured at different stages after inoculation to draw the growth curve. Bacteria in leaves were counted by determining the CFU per 1 g of leaves (fresh weight), either pretreated or untreated with rutin, on NA medium [20]. At least three plants for each time point were inoculated through leaf injection with the bacterial suspension. The same experiment was repeated in triplicate in the greenhouse.
RNA extraction and qRT-PCR
Total RNA was isolated from 100 mg plant tissue with TRI reagent according to the manufacturer's instructions (T9424, Sigma-Aldrich, USA). 0.5 μg RNA was used for first-strand cDNA synthesis using the PrimeScript™ RT reagent Kit with gDNA Eraser (TaKaRa, Dalian, CN). Quantitative PCR was performed with SYBR Premix Ex Taq™ (Tli RNaseH Plus) (TaKaRa, Dalian, CN) on the IQ5 Real-Time PCR System (Bio-Rad, USA). The following PCR program from the reference was used [24]: 95°C for 5 min, followed by 40 cycles of 95°C for 15 s, 55°C for 15 s, and 72°C for 30 s. A heat dissociation curve (55-95°C) following the final cycle of the PCR was checked to test the specificity of the amplification. OsActin of rice, NbEF1α of tobacco and AtActin2 of Arabidopsis were used as internal controls to standardize the results. Gene sequences were obtained from the NCBI database and primers were designed with Primer Premier 5. For each gene, qRT-PCR assays were repeated at least twice with triplicate runs. Relative expression levels were calculated using the 2^−ΔΔCt method. The sequence of each primer for all detected genes is listed in Table 1.
Data treatment
All experiments were performed in three replicates with similar results. Each replicate contained at least three plants. Quantitative data were analyzed by Student's t test (two-tailed).
Results
Rutin has limited antibacterial action against all tested bacterial pathogens at 2 mM or lower concentrations
To evaluate the effects of rutin against bacterial pathogens, four strains representing three bacterial species were grown on culture medium supplemented with different concentrations of rutin. Among the four strains, PXO99 and RH3 are typical strains of Xanthomonas oryzae pv. oryzae (Xoo) and Xanthomonas oryzae pv. oryzicola (Xoc); they showed no clear growth inhibition on PSA containing 0.5 mM, 1.0 mM or 2 mM rutin, nor did R. solanacearum SD and Pst DC3000 (Fig 1). Growth inhibition of all four pathogens was observed only with 4 mM rutin supplemented in PSA, and only when the inoculum titer was as low as 10⁶ CFU ml⁻¹. Compared with rutin, quercetin demonstrated a better growth inhibition capacity against all four tested pathogens (S1 Fig). These results indicate the limited antibacterial ability of rutin against the tested bacteria.
Rutin promoted resistance against Ralstonia solanacearum in Nicotiana benthamiana
Previous studies described that AtMYB12-overexpressing tobacco, which is enriched in rutin, was resistant against R. solanacearum SD.
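As described under RNA extraction and qRT-PCR above, the fold changes reported in the following results were computed with the 2^−ΔΔCt method. A minimal sketch of that calculation follows; the Ct values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the Livak 2^-ddCt relative-expression calculation.
# All Ct values below are hypothetical placeholders, not data from this study.

def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Return fold change of the target gene (treated vs control),
    normalized to a reference gene such as OsActin."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize treated sample
    d_ct_control = ct_target_control - ct_ref_control   # normalize control sample
    dd_ct = d_ct_treated - d_ct_control                 # difference of differences
    return 2 ** (-dd_ct)

# Example: the target amplifies ~1.3 cycles earlier (relative to the reference)
# in the rutin-pretreated sample than in the control -> ~2.46-fold induction.
print(relative_expression(22.1, 18.4, 23.0, 18.0))
```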
To test whether rutin could directly activate plant resistance, we investigated the effect of rutin on the defense response against R. solanacearum SD in N. benthamiana. Most leaves in the control group showed water-soaked symptoms and wilted three days post inoculation, as shown in Fig 2a. However, the wilting symptoms were attenuated in N. benthamiana leaves pretreated with rutin from 1 mM to 4 mM, and the attenuation of disease symptoms was stronger at higher rutin concentrations (Fig 2a). In addition, rutin hardly inhibited bacterial growth at a concentration of 2 mM in culture medium (Fig 1); therefore, this concentration was chosen for subsequent experiments. The bacterial growth curve indicated that pre-sprayed rutin could remarkably protect N. benthamiana from R. solanacearum SD infection at a concentration of 2 mM (Fig 2b): compared with plants pretreated with 2 mM rutin, control plants carried more than 4.82-fold more bacteria at 48 hpi (Fig 2b). Additionally, despite its weaker antibacterial ability relative to quercetin, pretreatment with 2 mM rutin conferred better resistance to R. solanacearum SD (S2a Fig). We also analyzed the transcription levels of PR genes: NbPR1a; NbNOA1 (nitric oxide-associated 1), which is related to NO production and to defense responses [27]; and NbrbohB (respiratory burst oxidase homolog B), which is involved in reactive oxygen species generation [28]. Without inoculation of R. solanacearum SD, the transcription level of NbPR1a was slightly up-regulated one day after spraying with rutin and turned to down-regulation at 3 dpi compared with spraying with water in N. benthamiana (Fig 2c). We therefore selected three days as the interval between spraying rutin and R. solanacearum SD inoculation, to balance out the weak defense activation caused by spraying rutin alone. We observed more rapid and stronger increases in the expression levels of PR genes, including NbPR1a, NbNOA1 and NbrbohB, in rutin-pretreated plants than in control plants when R. solanacearum SD was inoculated (Fig 2d). The transcript levels reached their maximum values at 6 hpi for NbNOA1 (7.26-fold higher than the control) and NbrbohB (3.33-fold higher than the control), and at 24 hpi for NbPR1a (2.44-fold higher than the control) in rutin-pretreated leaves. These results suggest that rutin primed the activation of several PR genes in challenged N. benthamiana.
Pre-spraying rutin suppressed the proliferation of Xanthomonas oryzae pv. oryzae in rice
To test whether rutin could enhance resistance against bacterial pathogens in other hosts, we evaluated the efficacy of rutin against PXO99, which causes bacterial blight disease in rice. Plants were inoculated with PXO99 three days after being sprayed with rutin at 1 mM, 2 mM or 4 mM. The lesion length of rice leaves was measured 14 days post inoculation. It averaged 14.32 ± 3.75 cm for control plants pre-sprayed with 0.02% Tween 20 only, and was reduced to 9.76 ± 2.65 cm, 7.79 ± 2.19 cm and 6.94 ± 0.57 cm for pretreatment with 1 mM, 2 mM and 4 mM rutin, respectively (Fig 3a). Statistical analysis also indicated that the lesions caused by PXO99 were suppressed in rutin-pretreated Mudanjiang 8 (Fig 3a).
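The Data treatment section above specifies two-tailed Student's t tests for such comparisons. A minimal sketch of this test follows; the lesion-length values are hypothetical, not the study's raw data.

```python
# Sketch of the two-tailed Student's t test used for lesion-length comparisons.
# The numbers are hypothetical lesion lengths (cm), not the study's raw data.
from scipy import stats

control = [14.1, 15.8, 12.9, 16.3, 13.5]     # pre-sprayed with 0.02% Tween 20 only
rutin_2mM = [7.9, 8.4, 6.5, 9.1, 7.2]        # pre-sprayed with 2 mM rutin

t_stat, p_value = stats.ttest_ind(control, rutin_2mM)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```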
Since 2 mM rutin inhibited PXO99 little or not at all in vitro (Fig 1) yet dramatically reduced the lesion length in rice after pre-spraying, we chose 2 mM rutin for subsequent experiments. Notably, compared with the control, the lesion length was dramatically reduced in rutin pre-sprayed rice leaves from 7 days post inoculation onward (Fig 3b). Interestingly, the reduction in lesion length was similar between rutin- and quercetin-pretreated leaves at 14 dpi (S2b Fig). To investigate whether pre-spraying 2 mM rutin affected the proliferation of PXO99 in rice leaves, we conducted a growth curve experiment in rice. Compared with spraying 0.02% Tween 20 only, the number of colonies from rutin-sprayed leaves showed no clear difference at 2 days post inoculation, but was reduced 5.01-fold and 26.92-fold at 4 dpi and 6 dpi, respectively (Fig 3c). These results suggest that pretreatment with rutin can enhance rice resistance against PXO99. Because enhanced plant disease resistance is usually related to the expression of PR genes, we investigated the expression pattern of several PR genes in rice to elucidate the rutin-mediated resistance. The results demonstrated that all six PR genes examined (PR-1a, PR-1b, PR-10, phenylalanine ammonia lyase (PAL), peroxidase (POX) and LOX) were up-regulated after inoculation with PXO99, both in plants pre-sprayed with 0.02% Tween 20 and in those pre-sprayed with 2 mM rutin, whereas the expression of catalase (CAT) was not significantly changed by rutin treatment compared with the control (Fig 3d). The expression levels of PR-1a, PR-1b, PR-10 and POX all reached their maximum at 12 h post inoculation, approximately 4.98-, 5.05-, 3.39- and 4.23-fold higher than the control, respectively. The maximum transcription level of the PAL gene was reached at 24 h post inoculation. The transcription of LOX was also induced more strongly in treated plants than in control plants after inoculation, reaching high values at the 12 h and 48 h time points, approximately 5.71- and 8.23-fold higher than the control plants, respectively.
Rutin enhanced resistance against Pst DC3000 in Arabidopsis thaliana
In addition to the rice-Xoo and N. benthamiana-R. solanacearum interactions described above, we also tested the function of rutin in Arabidopsis thaliana. The results demonstrated that rutin also protected the susceptible Arabidopsis ecotype Columbia-0 (Col-0) against the virulent Pseudomonas syringae pv. tomato strain DC3000 (Pst DC3000). After inoculation with Pst DC3000, typical wilting and chlorotic symptoms were observed at 3 dpi on leaves not pre-sprayed with rutin, whereas attenuated disease symptoms were observed on Arabidopsis leaves pre-sprayed with 1, 2 and 4 mM rutin (Fig 4a). The proliferation data indicated that Pst DC3000 growth was inhibited in leaves pretreated with 2 mM rutin (Fig 4b); control leaves carried more than 111.78-fold more Pst DC3000. To understand the mechanisms involved in rutin-mediated resistance in Arabidopsis, we analyzed the expression patterns of four PR genes, AtPR1, AtPR2, AtPR5, and AtPAL, which are involved in defense responses to pathogen attack (Fig 4c). Similar to our observations in the N. benthamiana-R.
solanacearum and rice-Xoo interactions, the four PR genes showed more rapid and stronger activation in rutin-pretreated plants than in control plants after inoculation with Pst DC3000 (Fig 4c). These results suggest that rutin primes resistance in a broad range of hosts, including Arabidopsis.
Rutin-mediated priming is dependent on the SA signal pathway in Arabidopsis
Plant hormones are known signals of plant defense. To explore the resistance signal transduction pathway mediated by rutin, we investigated a set of Arabidopsis mutants involved in the SA-, JA- and ethylene (ET)-dependent pathways. The NahG transgenic plant abolishes the accumulation of SA, and the Arabidopsis mutant npr1 is a typical mutant of the SA-dependent pathway; jar1-1 and ein2-5 are typical mutants of the JA- and ET-dependent pathways. If rutin-mediated priming defense depends on one of these pathways, the inhibition of Pst DC3000 growth by pretreatment with 2 mM rutin should be attenuated in the corresponding mutant. The results demonstrated that rutin pretreatment was still able to inhibit the growth of Pst DC3000 in jar1-1 and ein2-5, but not in npr1-1 and NahG plants (Fig 5). This indicates that rutin-mediated plant resistance is dependent on the SA signal pathway in Arabidopsis and independent of the JA and ET pathways.
Rutin-mediated signaling initiates upstream of NDR1, PAD4 and EDS1
To obtain more details about the signals of rutin-mediated resistance, we investigated the growth of Pst DC3000 in several other mutants involved in the SA signaling pathway, including snc1-11, pad4-1, ndr1, eds5 and eds1 [29]. SNC1 encodes an interleukin-1 receptor-like nucleotide-binding site leucine-rich repeat type of resistance (R)-like gene residing in the RPP5 gene cluster, which possibly mediates race-specific disease resistance [30-32]. EDS1 and PAD4 are two lipase-like proteins [33], NDR1 is a putative membrane-binding protein [34], and EDS5 is a MATE-like SA transporter that pumps SA from the chloroplast to the cytoplasm [35]. These proteins are upstream components responsible for the transduction of SA signals and for downstream pathways triggered by R proteins. Except in snc1-11, the inhibition of Pst DC3000 growth by pretreatment with 2 mM rutin was attenuated in pad4-1, ndr1, eds5 and eds1 (Fig 6). These results further suggest that rutin-mediated resistance is dependent on the SA signal pathway and is initiated upstream of NDR1, PAD4 and EDS1.
Discussion
Rutin, classified as a polyphenolic substance, has also been shown to exhibit bactericidal and fungicidal activity in vitro. The antibacterial activity of rutin has been reported against specific bacterial species, such as Xanthomonas campestris, Agrobacterium tumefaciens, Xylella fastidiosa, etc. [19,20]. The possible mechanism of action is presumably as follows: first, such polyphenolic substances most likely disrupt the cell wall and cell membrane integrity of microbial cells, which leads to the release of intracellular components, disturbs electron transfer at the membrane, and represses nucleotide synthesis and ATP activity, thereby inhibiting the growth of microorganisms [36]; second, rutin excessively scavenges the reactive oxygen species of microbes, leading to a reduction in the normal physiological functions of reactive oxygen [37].
However, rutin inhibits plant-pathogenic bacteria only at relatively high minimum inhibitory concentrations (MIC), which means weaker bactericidal activity than other phenolic compounds [19,20]. In this study, we measured the inhibition efficiency of rutin against four plant bacterial pathogens: R. solanacearum SD, Xanthomonas oryzae pv. oryzae (PXO99), Xanthomonas oryzae pv. oryzicola (RH3) and Pst DC3000. The results demonstrated that rutin was functional only at very high concentrations, above 4 mM (Fig 1). Our other work showed that AtMYB12-overexpressing tobacco accumulated rutin to an average concentration of approximately 1.43 mM fresh weight and also exhibited enhanced resistance against R. solanacearum (Li et al., unpublished data). Together, the conclusions of this work are consistent with previous studies: rutin demonstrated weak antibacterial activity against three additional species of plant gram-negative bacterial pathogens. In in vitro assays, 2 mM rutin hardly inhibited the growth of R. solanacearum SD, PXO99 or Pst DC3000 in medium. Because we could hardly quantify the concentration of rutin in the intercellular space, we could not completely exclude direct inhibition by the antibacterial agent itself. However, spraying 2 mM rutin dramatically reduced the growth of those bacteria in each host plant, implying that other resistance mechanisms had been triggered (Figs 2 and 3). Notably, the foliar application of 2 mM rutin hardly affected the expression of the SA-responsive PR1a gene in N. benthamiana (Fig 2c), indicating that rutin does not directly activate basal plant defense. Interestingly, when challenged with a pathogen, plants pre-sprayed with rutin showed faster and stronger expression of PR1a, as well as of other PR genes, than controls (Figs 2d, 3d and 4c). This delayed onset of resistance indicates that rutin promotes disease resistance by a priming mechanism. In addition, exogenous application of rutin simultaneously enhanced the expression of genes involved in the SA, reactive oxygen species and nitric oxide signaling pathways (Figs 2 and 3), indicating that the downstream signaling activated by rutin is complex. Many chemicals and plant metabolic components have also been reported to induce or prime plant defense responses that are dependent on the SA signal transduction pathway. However, most of these studies focused primarily on characterizing the effects of these components using NahG and npr1 mutants [12,13,15], except for azelaic acid, which has been shown to induce plant defense responses dependent on NDR1 and PAD4, two important components in the upstream signaling of SA [14]. Rutin-stimulated plant resistance was compromised in many SA-pathway-defective mutants, confirming that SA signaling is required for rutin-primed disease resistance (Figs 5 and 6).
(Fig 6. Analysis of rutin-primed resistance in Arabidopsis mutants. Panels a-f represent wild type, eds5, snc1-11, pad4-1, eds1, and ndr1, respectively. The growth rate of Pseudomonas syringae pv. tomato strain DC3000 was measured in samples from control and treated plants at 0 and 3 days post inoculation. Data were collected from 10 plants in representative experiments repeated three times. Values are means ± SE. Asterisks denote significant differences (t test, P < 0.01). doi:10.1371/journal.pone.0146910.g006)
Our data also identify NDR1, PAD4 and EDS1 as required for rutin-primed plant defense (Fig 6). This result implies that rutin-primed plant resistance may differ slightly from that induced by other plant activators. NDR1 and EDS1 mediate signaling downstream of the major subsets of R proteins, including the CC-NBS-LRR type and the TIR-NBS-LRR type, and they represent an important node acting upstream of SA in effector-triggered immunity [38,39]. Interestingly, we determined that the snc1-11 mutation did not affect rutin-primed resistance, raising the possibility that rutin may act specifically through R proteins other than SNC1 or through their downstream components. Based on these results, a possible working model for rutin-primed defense is described (Fig 7), which tentatively suggests that the resistance signal is initiated upstream of NDR1, PAD4 and EDS1 and is followed by activation of SA signal transduction. Even though we did not decipher the initial signals or the targeted receptor of rutin in plants, this study still offers new insight into this newly characterized plant activator. Flavonoids play a critical role in preventing human diseases and have evolved as a protective mechanism in different plants. In this study, we found that rutin, as a flavonoid component, can participate in plant immunity across a broad range of hosts. Together with quercetin [22], this suggests a conserved mechanism by which flavonoid components prime plant immunity. Because rutin is functional only at relatively high concentrations, and given the economic cost, it is impractical to use directly as a purified bactericide. However, a growing number of reports show that high rutin content can be synthesized and accumulated in plants under the regulation of several transcription factors, including AtMYB11, AtMYB12 and AtMYB111 [23,40-42]. This provides an opportunity to promote the use of rutin by reducing its economic cost in the future. Additionally, AtMYB12-expressing tobacco has been reported to be resistant to insects, such as aphid, whitefly, Spodoptera litura and Helicoverpa armigera, through high-level accumulation of rutin [23,40]. Our previous study showed that flavonol-enriched AtMYB12-expressing tobacco has enhanced resistance against pathogens such as R. solanacearum, Colletotrichum nicotianae Averna and Alternaria alternata. The priming resistance identified here for rutin should help explain the resistance generated by AtMYB12-expressing tobacco, and it opens the opportunity to produce a daily nutrient and a biosafe bactericide simultaneously by transgenic methods.
Neighborhood Walkability in Relation to Knee and Low Back Pain in Older People: A Multilevel Cross-Sectional Study from the JAGES
Few studies have focused on the relationship between the built environment and musculoskeletal pain. This study aimed to investigate the association between neighborhood walkability and knee and low back pain in older people. Data were derived from the Japan Gerontological Evaluation Study (JAGES) 2013, a population-based study of independently living people ≥65 years old. A cross-sectional multilevel analysis was performed of 22,892 participants in 792 neighborhoods. Neighborhood walkability was assessed by residents' perceptions and population density. Dependent variables were knee and low back pain restricting daily activities within the past year. The prevalence of knee pain was 26.2% and of low back pain 29.3%. After adjusting for sociodemographic covariates, the prevalence ratio (PR) of knee and low back pain was significantly lower in neighborhoods with better access to parks and sidewalks, good access to fresh food stores, and higher population densities. After additionally adjusting for population density, easier walking in neighborhoods without slopes or stairs was significantly inversely correlated with knee pain (PR 0.91, 95% confidence interval 0.85–0.99). Neighborhoods with walkability enhanced by good access to parks and sidewalks and fresh food stores, easy walking without slopes or stairs, and high population densities had lower prevalences of knee and low back pain among older people. Further studies should examine environmental determinants of pain.
Introduction
Musculoskeletal diseases, including osteoarthritis (OA), are major public health problems. Between one in three and one in five people live with painful musculoskeletal conditions, making these diseases the second highest contributor to global disability. Low back pain alone is the leading cause of disability worldwide [1]. A strong relationship exists between musculoskeletal pain and a reduced capacity to engage in physical activity. This often results in functional decline, frailty, reduced quality of life, and loss of independence [2]. The prevalence and impact of musculoskeletal diseases are particularly high in older people. While OA may be treated surgically when severe, it is now considered amenable to prevention and treatment in the early stages [3]. For example, weight loss for obesity, prevention of injury, and exercise have all been shown to be effective in reducing knee and low back pain [4,5]. Although strong evidence supports the benefits of regular exercise, physical inactivity remains highly prevalent worldwide [6]. In fact, the number of daily steps people take in Japan is decreasing year by year, despite the fact that walking, the most frequent type of exercise, is recommended by national health policy [7,8]. For many, however, it is difficult to get regular exercise, and there are limitations to the effects of policy pronouncements at the individual level, where a number of other factors are in play. One of these factors, the built environment, has been found to exert a noticeable influence on health [9-11]. The World Health Organization recommends improving the built environment as a way to promote healthy aging [12]. The built environment is related to physical activity [13,14], most notably in terms of neighborhood walkability [15,16]. Neighborhood walkability is a measure of how friendly the residential built environment is to walk in.
It is generally expressed as a composite index of population density, land-use diversity, and pedestrian-friendly design [17]. Neighborhood walkability has been shown to be related to time spent walking [18], physical activity [15], obesity [19], and depression [20]. These are all factors that are also well known to be associated, in one way or another, with musculoskeletal pain. However, few studies have investigated the association between the built environment and musculoskeletal pain. If neighborhood walkability is associated in some way with musculoskeletal pain, it would become clear that not only individual factors but also environmental factors can be addressed in policies designed to prevent musculoskeletal pain. Therefore, we aimed to examine whether neighborhood walkability is related to knee and low back pain, focusing on older people in Japan.
Study Design and Participants
The present study is based on the Japan Gerontological Evaluation Study (JAGES), an ongoing population-based cohort study in Japan [21]. In 2013, self-reported questionnaires were mailed to 193,694 community-dwelling, independently living individuals aged 65 years or older, of whom 137,736 responded to the survey (response rate, 71.1%). Participants with missing values for ID, age, or sex (n = 7996), those who needed assistance in activities of daily living (n = 4247), and people living in communities with fewer than 30 respondents (n = 2108) were excluded from the analysis. A total of 123,385 participants' responses from 792 communities were used to evaluate neighborhood walkability. About one-fifth of the total participants (n = 24,806) were randomly selected, including some from each of the 792 communities, to complete a survey module enquiring about pain. The module was a planned part of the JAGES. Because long-term exposure to neighborhood walkability was considered to be beneficial, we excluded residents who had lived in their neighborhood for 3 years or less (n = 732). Responses were also excluded if data on knee and low back pain were missing (n = 1182). This left responses from 22,892 participants to be included in the subsequent analysis (Figure 1). Our research protocol and informed consent method were approved by the Ethics Committee of Nihon Fukushi University (number 13-14).
Outcome Variables
Data on the presence of knee and low back pain within the last year were collected in the survey by asking the following two questions: "In the past year, have you had knee pain that restricts your daily activities? In the past year, have you had low back pain that restricts your daily activities?" A response of "yes" was defined as the presence of pain.
Neighborhood Walkability
Many studies have established the predictive value of residents' perceptions as a measure of neighborhood walkability [22,23]. Previously studied relationships include those between access to parks and body mass index (BMI) [24], the food environment and mortality rate [25], and walking up slopes and diabetes control [26]. While some studies demonstrate that objective measures affect various health outcomes [19], two studies reported that subjective walkability, rather than objective geographic information system-based data, was associated with health outcomes [25,27]. Subjective walkability has the advantage of capturing the actual situation; for example, evaluations of parks and sidewalks can reflect factors such as their size, number, and design.
Moreover, there are few studies on walkability in Japan, and the validity of objective indicators has not been sufficiently verified. Therefore, we used subjectively assessed walkability as an explanatory variable. We evaluated neighborhood walkability by asking about access to parks and sidewalks, access to fresh food stores, and easy walking without slopes or stairs. Three questions were posed about the neighborhood within 1 km of the participant's house: "How do you feel about access to parks and sidewalks when walking? How many stores or facilities selling fresh fruit and vegetables are located near you? How do you feel about easy walking without slopes or stairs?" Responses were given on a four-point Likert scale, with 1 = none, 2 = a few, 3 = some, and 4 = many. The average of the points in each neighborhood was used to compare each walkability variable, resulting in a continuous score with a minimum of 1 and a maximum of 4. To assess neighborhood walkability, we used the data derived from all 123,385 participants rather than only the smaller subset (n = 22,892) of individuals who responded to the questions about knee and low back pain. We also used population density as a variable because it is one of the main factors associated with neighborhood walkability, encompassing factors such as land-use mix, access to public transport, and number of walkable destinations [17,28]. The population density of each of the 792 communities included for analysis was calculated using the 2010 census and Land Utilization Tertiary Mesh Data (as of 2010) of the National Land Numerical Information from the Ministry of Land, Infrastructure, Transport, and Tourism in Japan, based on the 1:25,000 Topographic Map of Japan [29].
These calculations excluded undeveloped areas (e.g., rivers, lakes, forest, and wasteland). Quartiles of population density (persons/km²) were used for analysis.
Statistical Analysis
We first calculated the association between each neighborhood walkability factor and knee or low back pain using Pearson's correlation coefficient. Multilevel Poisson regression models were then analyzed to investigate the association between neighborhood walkability and pain. An initial model was specified to assess the crude association between neighborhood walkability and knee or low back pain. This was then adjusted in Model 1 using sex, age, equivalent annual income, educational background, and past occupation as individual confounders to evaluate the influence of sociodemographic factors. Model 2 was additionally adjusted for walking time, physical activity, driving status, BMI, and depressive symptoms as potential confounders. Because population density strongly affects various aspects of walkability and correlates readily with the other walkability measures [28,40], we additionally adjusted for population density in Model 3 to clarify that any association was not simply an effect of population density. Using Appendix A, we identified whether covariates affected the outcomes. Stata 14.0 (StataCorp LP, College Station, TX, USA) was used, and prevalence ratios (PR) and 95% confidence intervals (CI) were calculated from the regression models. The significance level was set at 0.05. Participants with missing covariate data were still included in the analysis.
Results
The prevalence of knee pain and low back pain was 26.2% (n = 6257) and 29.3% (n = 6989), respectively (Table 1). The largest proportion by age was 70 to 74 years old (30.3%), followed by those 65 to 69 years old (28.0%). Approximately two-thirds of the participants had normal BMIs and no depression. More than a third (38.7%) walked >60 min; another third (35.2%) walked 30 to 59 min; and 23.9% walked <30 min. About half drove a car. The means for the three subjective neighborhood walkability factors ranged from 2.56 to 2.97 (Table 2). The mean population density was 6543 persons/km² (range, 22-31,565 persons/km²). Reports by neighborhood of knee pain ranged from 15.6% to 51.4%, and of low back pain, from 13.6% to 51.4%. The Pearson correlations between neighborhood walkability factors were all significant. The correlations were relatively high between access to parks and sidewalks and access to fresh food stores; access to parks and sidewalks and population density; and access to fresh food stores and population density (0.44 to 0.59). There were significant negative correlations between knee pain and access to parks and sidewalks (−0.21); knee pain and population density (−0.33); and low back pain and population density (−0.17). (For neighborhood factors (i to iv), n = 792; for pain (v to vi), n = 148, calculated only for areas with more than 30 responses about pain. For factors i-iii, the average points on a scale from 1 to 4 (1 = none, 2 = a few, 3 = some, 4 = many) were calculated for each community and then combined for analysis of each factor. * p < 0.05. SD = standard deviation.) In the crude regression model, knee pain was significantly less prevalent with access to parks and sidewalks, access to fresh food stores, and a high population density (Table 3). After adjustment for sociodemographic confounders (Model 1) and behavior and activity covariates (Model 2), all three walkability factors remained statistically significant.
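The PRs in these models are exponentiated Poisson regression coefficients, as described under Statistical Analysis. As a rough single-level illustration in Python (the study itself fit multilevel models in Stata; the data frame and variable names below are synthetic placeholders, not JAGES data):

```python
# Rough single-level analogue of the Poisson regression used for prevalence
# ratios. Synthetic stand-in data; the real study used multilevel models.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "knee_pain": rng.integers(0, 2, n),        # 1 = pain in the past year
    "parks_sidewalks": rng.uniform(1, 4, n),   # neighborhood mean, 1-4 scale
    "age": rng.integers(65, 95, n),
    "sex": rng.integers(0, 2, n),
})

model = smf.glm(
    "knee_pain ~ parks_sidewalks + age + C(sex)",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")  # robust SEs, since a binary outcome is modeled as Poisson

pr = np.exp(model.params["parks_sidewalks"])   # exp(coefficient) = prevalence ratio
ci = np.exp(model.conf_int().loc["parks_sidewalks"])
print(f"PR = {pr:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}")
```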
After adjusting for population density in Model 3, the only statistically significant factor associated with less knee pain was ease of walking without slopes or stairs (PR = 0.91, 95% CI = 0.85-0.99). For low back pain, the initial results were similar to those for knee pain (Table 4). However, in Models 1 and 2, only access to fresh food stores and population density remained significantly associated with less low back pain. After adjusting for population density, ease of walking without slopes or stairs fell just short of being statistically significant.
Discussion
In a large, diverse, population-based sample, we found that subjectively perceived neighborhood walkability was associated with a lower prevalence of knee and low back pain. This relationship remained after adjusting for sociodemographic variables (Model 1). Although we adjusted for walking time, physical activity, driving status, BMI, and depressive symptoms as potential mediators, the association remained similar (Model 2). Even after adjusting for population density to eliminate that as a factor, one factor contributing to better walkability, ease of walking without slopes or stairs, was significantly negatively associated with knee pain (Model 3). To our knowledge, this is the first study indicating that features of the built environment may be correlated with the prevalence of musculoskeletal pain in a large-scale survey of older adults. Earlier studies of neighborhood walkability indicated a negative association with obesity [19], which is a risk factor for knee and low back pain [3,41]. A population-based study of 9046 adults in Japan reported that living in a rural area was associated with a high prevalence of knee pain and low back pain [42]. However, that study did not adjust for occupation. The jobs of primary industry workers tend to place a heavy burden on the knee and low back, and many of these individuals live in rural areas. In our study, after adjusting for past occupation, we found that higher population density, access to parks and sidewalks and fresh food stores, and easy walking without slopes or stairs were related to lower prevalences of knee pain and low back pain. The sociodemographic factors we assessed are considered key not only for physical activity [43] and obesity [44] but also for knee and low back pain, as we found relatively large changes in the PRs from the crude model to Model 1 after adjusting for sociodemographic factors. In fact, an association between low back pain and socioeconomic status, such as educational background, past occupation, and income, has been reported [31]. A longer time spent walking, greater physical activity, a lower BMI, and the absence of depression are factors known to be negatively related to knee and low back pain. Therefore, we initially hypothesized that these factors would be potential mediators, and as shown in Appendix A, these factors were indeed related to knee pain and low back pain. However, after adjusting for these covariates in Model 2, little change was seen in our results. Therefore, walking time, physical activity, BMI, and depression were thought to depend largely on sociodemographic status, and other factors should still be considered. Social environment variables such as social capital and safety may also be involved, as the social environment has been shown to be associated with cognitive function and social participation [45,46].
As mechanisms that might mediate the relationship between neighborhood walkability and pain, social interaction and the greenness provided by parks and sidewalks have been considered. Social interaction increases for people who frequently use parks [47] and can have a positive psychosocial influence. Good access to parks and sidewalks is likely to increase exposure to greenness, which has also been shown to be associated with less obesity [48]. A fresh food store may be a place people go every day, which would encourage daily walking [25] as well as meeting friends. Such access to fresh food would also support a healthy diet that can be beneficial in preventing obesity. The relationship between walking up slopes or stairs and health is controversial [35,49]. However, to the extent that such features might hinder walking and physical activity among older adults, a flatter environment might be better in terms of walkability. Higher population density can lead to more walkable destinations, a better land-use mix, and better access to public transport and healthcare services [28]. We found that, compared with knee pain, low back pain was not significantly associated with access to parks and sidewalks or easy walking without slopes or stairs in Models 1-3. A previous review indicated that low back pain was strongly influenced by awkward posture among agricultural workers [50]. It may be, therefore, that knee pain is more closely linked with walking than is low back pain. Strengths of this study include the focus on the association between the built environment and musculoskeletal pain in a large-scale population-based study. Past research has mainly focused on individual factors vis-à-vis musculoskeletal pain. However, it is difficult for people with and without pain to get regular exercise and maintain a desirable weight. A population-based approach should also be used for investigating musculoskeletal pain, particularly when considering public policies to prevent disability or to improve the health system [21,51]. Our results will be useful in further research on environmental determinants of pain and in specific population approaches such as primordial prevention [52], which aims for a society where people live in a health-friendly place and remain healthy without additional effort because risk factors have been minimized. Several limitations of this study should be mentioned. First, with the exception of population density, our explanatory variables were subjectively assessed. A comprehensive scale that takes into account various factors, such as Walk Score or the MAPS Global tool, may also be useful [53,54]. In this study, we focused on subjective indicators because they made it easy to grasp the actual situation of each element; however, evaluating both subjective and objective indicators in the future will allow a more detailed verification of the relationship between the built environment and pain. Second, we selected items that seemed particularly influential among the various factors contributing to walkability and that have been reported to be useful in previous studies [24-26]. Other variables such as street connectivity and safety may warrant inclusion in similar studies [23,55]; this study did not include them because we thought these factors were less likely to be related to pain. Further research must explore which built environment elements and scales are associated with musculoskeletal pain.
Third, our outcomes included both acute and chronic pain. However, knee pain in older people is mostly due to OA [56], so even though the relationship weakens when other causes of knee pain are included, the connection with neighborhood walkability can still be considered strong. Fourth, as this is a cross-sectional study, it cannot prove a causal relationship. Exercise has been shown to have a preventive and therapeutic effect on low back pain [4,57], so better neighborhood walkability could theoretically be beneficial by improving access to exercise. People without knee pain or low back pain might choose to live in areas with good walkability, but we could not evaluate that in our study because we excluded those who had lived in their neighborhood for 3 years or less. Longitudinal studies will be needed to better examine the nature of the relationship between neighborhood walkability and the incidence of musculoskeletal pain. Finally, although the results are highly generalizable within Japan, it is difficult to generalize them to other countries with greatly differing environments and cultures, such as those in Europe and America. In the future, aiming at the realization of a society where pain is naturally prevented, research should be conducted on whether improvement of the built environment helps reduce the prevalence of musculoskeletal pain in various regions.
Conclusions
Good neighborhood walkability, with access to parks and sidewalks and fresh food stores, easy walking without slopes or stairs, and high population density, was associated with a lower prevalence of knee and low back pain among older people, as demonstrated in this large-scale, population-based, multilevel analysis. Further studies should examine not only individual factors but also environmental determinants of pain. This study was supported in part by the World Health Organization Centre for Health Development (WHO Kobe Centre) (WHO APW 2017/713981). The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the respective funding organizations.
Acknowledgments: We are particularly grateful to the staff members in each study area, and in the central office, for conducting the survey.
Conflicts of Interest: The authors declare no conflict of interest.
Hepatoprotective activity of depsidone-enriched Cladonia rangiferina extract against alcohol-induced hepatotoxicity targeting cytochrome P450 2E1-induced oxidative damage
Alcoholic liver disease (ALD) is a broad-spectrum disorder covering fatty liver, cirrhosis, and alcoholic hepatitis; in extreme untreated cases, hepatocellular carcinoma (HCC) may also develop. Cladonia rangiferina (CR) is a lichen with a broad spectrum of pharmacological activity. It has been used since ancient times as a traditional natural remedy in India, China, Sri Lanka, and elsewhere. Folkloric records report its use for antimicrobial, antitumor, antioxidant, and anti-inflammatory purposes. Hence, the present study was designed to ascertain the effect of the ethanolic extract of Cladonia rangiferina (CRE) on alcohol-induced hepatotoxicity. The animals were evaluated for in vivo hepatic biochemical antioxidant parameters. The liver tissues were further evaluated histopathologically and by western blotting to localize the expression of apoptotic genes that play a pivotal role in hepatotoxicity. The results of this study reveal that CRE is helpful in the treatment of alcohol-induced hepatotoxicity and oxidative stress. Across the different markers, CRE demonstrated the best hepatoprotective activity, pointing to the importance of the components of the extract. The ameliorative action of CRE in alcoholic liver damage may be due to antioxidant, anti-inflammatory, and anti-apoptotic activities.
Introduction
Alcohol is one of the most widely used psychoactive substances after caffeine. Long-term consumption of alcohol is a key cause of major health issues. According to a 2009 WHO report, alcohol consumption has more detrimental effects than tobacco use, high cholesterol levels, or hypertension. The liver is the first organ involved in the metabolism of consumed alcohol (Shanmugam et al., 2010). Alcohol damages the liver by creating oxidative stress, which leads to metabolic disturbances. Mechanisms by which alcohol causes oxidative stress include the formation of acetaldehyde, damage to the cell membrane and mitochondria, hypoxia, a disturbed immune system and cytokine production, CYP2E1 induction, and mobilization of iron (Baskaran et al., 2010). The stages of alcoholic liver disease (ALD) are mainly divided into fatty liver/steatosis, alcoholic hepatitis, and liver cirrhosis. Previously reported evidence indicates that intermediates formed from the reduction of oxygen may be responsible for the occurrence of ALD. A steep rise in free radical levels in human hepatocytes is seen after alcohol consumption, because ethanol or its metabolites either act as pro-oxidants or lessen the level of antioxidants in the body. This underlies the progression of a wide range of chronic liver diseases. Reactive oxygen species (ROS) are very harmful and may cause considerable damage to lipids, proteins, and DNA (Saalu et al., 2012). Cladonia rangiferina is a lichen with a broad spectrum of pharmacological activity, used since ancient times in India, China, Sri Lanka, and elsewhere as a traditional natural source of remedies. Previous phytochemical investigation of these lichens has shown that they contain a variety of secondary metabolites such as depsides, depsidones, etc.
Folkloric records about this lichen report its use for antimicrobial, antitumor, antioxidant, and anti-inflammatory purposes (Boustie and Grube, 2005). This evidence suggests that CR is known for various medicinal values; however, to the best of our knowledge, this lichen had not yet been thoroughly explored for hepatoprotective activity.
Extraction
The lichen Cladonia rangiferina (Fam. Cladoniaceae) was acquired from the Department of Lichenology, CSIR-NBRI (National Botanical Research Institute), Lucknow, India (accession code of Cladonia rangiferina: 4/63/006521). 500 g of powdered CR material was extracted three times with ethanol (50% v/v) by cold percolation at room temperature. A rotary evaporator (Buchi, USA) was employed to concentrate the extract at reduced temperature (5°C). The concentrate was then freeze-dried (FreeZone 4.5, Labconco, USA) under high vacuum at 133 × 10⁻³ mBar and a reduced temperature of −35 ± 2°C. For pharmacological analysis, the dried CRE was suspended in double-distilled water containing carboxymethylcellulose (CMC, 1% w/v) as a suspending agent.
Preparation and administration of ethanol
The dose of ethanol (30% v/v solution) used here to cause liver damage was 7 g/kg body weight. A volume of 6.2 ml of the prepared solution was administered to the rats for four weeks (Rahman et al., 2006).
Animals
Thirty male Wistar rats weighing about 140-170 g were purchased from the animal house of the Central Drug Research Institute, Lucknow, India and kept in the departmental animal house. All the methods followed here were conducted according to the guidelines provided by CPCSEA for animal experimentation (Reg. No. 1732/GO/Re/S/13/CPCSEA).
Preparation of animal model for ethanol-induced hepatic injury
The thirty rats were randomly divided into six groups (five animals in each group).
Estimation of in vivo oxidative stress markers
For the estimation of oxidative stress markers, rat livers were homogenized in ice-cold Tris-EDTA buffer (pH 7.4), and the tissue homogenate obtained was used in further analyses.
Estimation of malondialdehyde (MDA)
The extent of lipid peroxidation and oxidative stress in tissue is directly proportional to the level of MDA. For this assay, the homogenate was first mixed thoroughly with trichloroacetic acid (30%) and thiobarbituric acid (2%). The mixture was then boiled in a water bath at 90°C for 15 min and centrifuged at 1500g for 10 min, and the absorbance of the supernatant (pink color, 532 nm) was recorded using an ELISA plate reader (BioTek). The concentration of malondialdehyde was expressed as nmol/mg protein (Rahman et al., 2006).
γ-Glutamyl transferase (GGT)
In serum, GGT reacts with L-gamma-glutamyl-3-carboxy-4-nitroanilide and glycylglycine, resulting in the formation of L-gamma-glutamyl-glycylglycine and 5-amino-2-nitrobenzoate. The working reagent consisted of Tris buffer (182 mM, pH 8.25) and L-gamma-glutamyl-3-carboxy-4-nitroanilide (2.97 mM) with glycylglycine (85 mM). The working reagent (1 ml) was thoroughly mixed with 0.1 ml of serum; after 1 min, the change in absorbance was recorded per minute for 3 min at 405 nm, with distilled water as blank (Szasz, 1969).
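Kinetic assays like the GGT measurement above convert the recorded absorbance change per minute into enzyme activity via the Beer-Lambert law. A minimal sketch follows; the extinction coefficient, volumes, and path length are illustrative assumptions, not values stated in the source text.

```python
# Generic kinetic-assay calculation of enzyme activity (U/L) from dA/min.
# Extinction coefficient, volumes, and path length below are assumptions
# for illustration, not parameters reported in this study.

def enzyme_activity_u_per_l(delta_a_per_min: float,
                            total_vol_ml: float = 1.1,   # 1.0 ml reagent + 0.1 ml serum
                            sample_vol_ml: float = 0.1,
                            epsilon_mM: float = 9.5,     # L·mmol^-1·cm^-1 (assumed)
                            path_cm: float = 1.0) -> float:
    """U/L = (dA/min * Vt * 1000) / (epsilon_mM * d * Vs); 1 U = 1 umol/min."""
    return (delta_a_per_min * total_vol_ml * 1000) / (epsilon_mM * path_cm * sample_vol_ml)

print(enzyme_activity_u_per_l(0.025))  # e.g. dA/min of 0.025 -> ~28.9 U/L
```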
Estimation of reduced glutathione (GSH) content
For assessment of GSH, the homogenate was thoroughly mixed with 0.1 M sodium phosphate buffer (pH 8.0) and 6 mM 5,5′-dithiobis-(2-nitrobenzoic acid) (DTNB). The mixture was then incubated for 10 min at room temperature, leading to the formation of a deep yellow product. Absorbance of this product was recorded at 412 nm using an ELISA plate reader (BioTek). The concentration of GSH (as μM GSH/μg protein) was calculated from a standard curve prepared with GSH (Ellman, 1959).
Estimation of inflammatory mediators
2.6.4.1. Estimation of tumor necrosis factor-α (TNF-α) in serum and liver tissues. A rat TNF-α ELISA kit was used for assessing TNF-α levels. The microtitre plate was washed four times using diluted wash buffer. The TNF-α standard solution and the samples to be analyzed were added in the quantities given in the manufacturer's protocol. The plates were incubated with shaking for 2 h, followed by removal of the solution and washing four times as instructed. The detection antibody was then applied for one hour. The plate was incubated again, this time for 30 min after adding avidin-HRP with steady shaking, and was again washed in the same manner. The substrate solution was then added, and a stop solution was added after 15 min. The absorbance was recorded at 450 nm within 30 min (Petrovas et al., 1999).
2.6.4.2. Estimation of interleukin levels (IL-1β, IL-6, and IL-10) in liver tissues. For estimation of interleukin levels, rat-specific ELISA kits (Sigma-Aldrich) were used. All procedures followed the protocol given by the manufacturer.
Caspase-3 and caspase-8 activities
For measuring caspase activities, the instructions provided with the kits were strictly followed. Briefly, a mixture of detection buffer (80 μL), sample (10 μL), and Ac-IETD-pNA (10 μL) was incubated at 37°C for 60 min, after which the OD405 was recorded. Activities were calculated with the help of a standard curve (Casciola-Rosen et al., 1996).
TUNEL assay
TUNEL staining was performed to detect apoptosis using an in situ apoptosis detection kit (Sigma-Aldrich, Bangalore, India). Liver tissues processed for this assay were embedded in paraffin. Images were taken by fluorescence microscopy (Olympus, Lucknow, India) (Kyrylkova et al., 2012).
DNA ladder
For this analysis, DNA samples were extracted using the kit with its spin column. The samples were then separated by electrophoresis in 1% agarose gel, and ethidium bromide was used for staining. The agarose gel was carefully visualized and photographed under UV light with the BioSpectrum Gel Imaging System (Saadat et al., 2015).
Western blot analysis
For this analysis, livers were washed twice with cold PBS and protein was extracted. The tissues were lysed using an appropriate amount of cold lysis buffer containing 1 mM PMSF, and the lysates were centrifuged at 12,000g at 4°C for 15 min. Total protein obtained was quantified with Coomassie brilliant blue G. For the western blot assay, protein (5 mg/mL) was denatured by thoroughly mixing it with an equivalent volume of 2× sample loading buffer, followed by boiling at 100°C for 5 min (Saadat et al., 2015).
Western blot analysis

For this analysis, livers were washed twice with cold PBS and protein was extracted. Tissues were lysed in an appropriate amount of cold lysis buffer containing 1 mM PMSF, and the lysates were centrifuged at 12,000g at 4 °C for 15 min. The total protein obtained was verified with Coomassie Brilliant Blue G. For the western blot assay, protein (5 mg/mL) was denatured by mixing it thoroughly with an equivalent volume of 2× sample loading buffer, followed by boiling of the mixture at 100 °C for 5 min (Saadat et al., 2015). An equivalent amount of protein was loaded onto the SDS gel. Protein was separated by electrophoresis on a 10% polyacrylamide gel and then transferred to a PVDF membrane; transfer times for FasL, Fas, NF-κB p65, and GAPDH were 30, 35, 45, and 50 min, respectively. The PVDF membrane was first incubated in 10 mM TBS with 1% Tween 20 and then treated with 5% dehydrated skimmed milk to block non-specific protein binding. The membrane was incubated with primary antibodies overnight at 4 °C, either rabbit anti-Fas (1:200 dilution), anti-FasL (1:500 dilution), rabbit anti-NF-κB p65 (1:600 dilution), or mouse anti-GAPDH (1:2000 dilution). Blots were then incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG or horseradish peroxidase-conjugated goat anti-mouse IgG for 2 h at a dilution of 1:2000 at room temperature. Detection was carried out by an enhanced chemiluminescence method and photographs were taken with the BioSpectrum Gel Imaging System. The data were normalized to GAPDH (target protein IOD vs GAPDH protein IOD) (Kurien and Scofield, 2006).

RT-PCR (reverse transcription polymerase chain reaction) analysis

Trizol reagent was employed for the preparation of RNA from the cells. A total of 500 ng RNA was required for each RT-PCR analysis. RT-PCR was carried out in accordance with the protocol given with the RT-PCR kit (Thermo Fisher, Mumbai, India). A PCR system (Bio-Rad Laboratories India Private Limited) was employed for the amplification. RNA samples were first reverse transcribed and then immediately amplified by PCR. Amplification consisted of fifty cycles of denaturation (94 °C, 1 min), annealing (60 °C), and extension (72 °C, 1 min). The BioSpectrum Gel Imaging System was employed for analyzing the IOD values of the electrophoresis bands (Kurien and Scofield, 2006).

Histology and immunohistochemistry

For observing liver damage, sections of 5 µm thickness were cut, stained with hematoxylin-eosin (H & E) and examined under light microscopy (40×, Olympus BX50). For immunohistochemistry, additional sections were employed for two-step IHC detection. Endogenous peroxidase activity was blocked for 10 min using 3% H2O2, and non-specific protein binding was blocked for 30 min using normal goat serum. After microwave antigen retrieval, the sections were incubated overnight at 4 °C with rabbit anti-Bcl-2 and anti-p53 antibodies (1:100 dilution). This was followed by incubation at 37 °C in PV6001 for 30 min. Staining was visualized using 3,3-diaminobenzidine tetrahydrochloride (DAB) substrate and counter-stained with hematoxylin. Images were taken with an inverted digital-image light microscope. Tissues that acquired a brown stain were considered damaged, and IOD values were used to evaluate protein expression (de Araujo et al., 2016).
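The western blot and RT-PCR quantifications above normalize target-band IOD to GAPDH and report relative expression against the control group. A minimal sketch of that normalization, with invented IOD values:

```python
# Sketch: normalising band/stain intensity to a loading control, as described
# for the western blot and RT-PCR analyses (target IOD vs GAPDH IOD).
# IOD values are invented; relative expression is reported vs the control group.
iod = {
    "control": {"Fas": 1200.0, "GAPDH": 9800.0},
    "AF":      {"Fas": 2500.0, "GAPDH": 9700.0},
    "CRE100":  {"Fas": 1400.0, "GAPDH": 9900.0},
}

def relative_expression(group, target, reference="control"):
    norm = lambda g: iod[g][target] / iod[g]["GAPDH"]
    return norm(group) / norm(reference)

for g in ("AF", "CRE100"):
    print(g, round(relative_expression(g, "Fas"), 2))  # fold change vs control
```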
Effect of CRE treatment on inflammatory mediators

Levels of TNF-α (P < 0.01) and IL-1β (P < 0.001) were elevated in the AF group, whereas the concentration of IL-10 (P < 0.01) was decreased, in comparison with the control group. CRE treatment reversed these alcohol-induced effects. A remarkable decline in the levels of IL-1β and TNF-α was observed in the CRE (100 mg/kg) group, the opposite of the effect observed in the AF group. Moreover, in all three CRE groups (50 mg/kg, 100 mg/kg, and 200 mg/kg), levels of IL-10 were elevated compared with the AF group (P < 0.001). The CRE (100 mg/kg) group was the most effective among the treatment groups and its effect was almost comparable to the Liv. 52 group (Fig. 2).

Caspase-3 and caspase-8 activities

The caspase activities were significantly augmented in the AF group, and CRE treatment at the three doses considerably reduced them (Fig. 3). Caspase-3 and caspase-8 activities at the 100 mg/kg dose of CRE were decreased by 58.08% and 48.77% in comparison with the AF group, values almost similar to those of the Liv. 52 group (61.24% and 50.18%).

TUNEL assay and DNA ladder

For assessing apoptosis in liver tissues, an in situ cell apoptosis detection kit was employed. TUNEL-positive apoptotic nuclei increased drastically in the AF group, and very few TUNEL-positive cells were seen in the hepatic tissue acquired from CRE (100 mg/kg) treated rats and Liv. 52 treated rats. The effect of CRE on DNA fragmentation was also examined (Fig. 4). The typical DNA ladder (Fig. 5) was seen in the alcohol-fed group; however, DNA laddering was appreciably reduced in the CRE (100 mg/kg) group, indicating that CRE may reduce the hepatocyte apoptosis caused by alcohol administration in rats.

Fig. 1. Effect of CRE and standard drug (Liv. 52) on GSH, GR, GST, GGT, and MDA. Statistical analysis was carried out by one-way ANOVA followed by Dunnett's multiple comparison test. a denotes significance compared with the control group (P < 0.05); x denotes significance compared with AF (P < 0.05); y denotes significance compared with AF (P < 0.01); z denotes significance compared with AF (P < 0.001) (n = 5).

Western blot analysis

This analysis was carried out to record the hepatoprotective activity of CRE (100 mg/kg) with respect to its effect on the expression of Fas/FasL and NF-κB p65. Fas is a member of the death receptor family. Stimulation of Fas leads to the induction of apoptotic signals, such as caspase-8 activation, as well as "non-apoptotic" cellular responses, notably NF-κB activation. Convincing experimental data have identified NF-κB as a critical promoter of cancer development, creating a solid rationale for the development of antitumor therapies that suppress NF-κB activity. On the other hand, compelling data have also shown that NF-κB activity enhances tumor cell sensitivity to apoptosis and senescence. Furthermore, although the stimulation of Fas activates NF-κB, the function of NF-κB in the Fas-mediated apoptosis pathway remains largely undefined. Engagement of Fas with FasL triggered NF-κB activation (Liu et al., 2012). As shown in Fig. 6(1) and (2), protein expression of Fas and FasL was augmented by almost two times and five times, respectively, in the livers of AF group rats compared with the control group, whereas expression was appreciably reduced in the CRE and Liv. 52 rats. NF-κB p65 expression was amplified almost six-fold in ethanol-treated rats, an effect partially prevented in CRE and Liv. 52 treated rats (Fig. 6(3)). Two major pathways are involved in cell apoptosis: the death receptor pathway and the mitochondrial pathway.
Reported findings indicate that the interaction of a death receptor with its ligand, e.g., the Fas/FasL interaction, is important for initiating apoptosis via the extrinsic pathway. Prior studies have shown that suppressing the Fas and FasL proteins reduces hepatic cell death due to liver injury. The Fas/FasL interaction activates caspase cascades, a vital factor in the apoptosis that accompanies hepatic damage. In this study, CRE considerably reduced the alcohol-dependent up-regulation of Fas and FasL. This demonstrates that, by suppressing the expression of the Fas and FasL proteins and inhibiting the caspase-3 and caspase-8 enzymes, CRE had a significant shielding effect against alcohol-induced hepatic injury (Qu et al., 2012).

Fig. 2. Effect of CRE and standard drug (Liv. 52) on inflammatory mediators (IL-1β, IL-6, IL-10, and TNF-α levels). Statistical analysis was carried out by one-way ANOVA followed by Dunnett's multiple comparison test. a denotes significance compared with the control group (P < 0.05); x denotes significance compared with AF (P < 0.05); y denotes significance compared with AF (P < 0.01); z denotes significance compared with AF (P < 0.001) (n = 5).

Fig. 3. Effect of CRE and standard drug (Liv. 52) on caspase activity. Statistical analysis was carried out by one-way ANOVA followed by Dunnett's multiple comparison test. a denotes significance compared with the control group (P < 0.05); x denotes significance compared with AF (P < 0.05); y denotes significance compared with AF (P < 0.01); z denotes significance compared with AF (P < 0.001) (n = 5).

RT-PCR analysis

Expression of Bcl-2, Bak, and Bax mRNA is shown in Fig. 6(4)-(6). Bax (Bcl-2 associated X, apoptosis regulator) is a protein-coding gene. The protein encoded by the Bax gene belongs to the Bcl-2 protein family. Bcl-2 family members form hetero- or homodimers and act as anti- or pro-apoptotic regulators involved in a wide variety of cellular activities. The Bax protein forms a heterodimer with Bcl-2 and functions as an apoptotic activator. It is reported to interact with, and increase the opening of, the mitochondrial voltage-dependent anion channel (VDAC), which leads to loss of membrane potential and the release of cytochrome C. The expression of this gene is regulated by the tumor suppressor p53 and has been shown to be involved in p53-mediated apoptosis. During apoptosis, Bax and Bak puncture the mitochondrial outer membrane. To exclude variations due to the amount and nature of the RNA, the recorded results were adjusted according to the expression of GAPDH. Liver injury in the AF group was indicated by considerably amplified levels of Bak and Bax and significantly declined levels of Bcl-2. These levels were appreciably reversed in the CRE (100 mg/kg) and Liv. 52 groups. The CRE (100 mg/kg) group showed the best results, almost comparable to the Liv. 52 group. In comparison with Liv. 52, CRE (100 mg/kg) demonstrated a more significant effect in the up-regulation of Bcl-2 protein and down-regulation of p53 protein. In the mitochondrial pathway, cell apoptosis chiefly involves the Bcl-2 family: Bcl-2 and Bax (both members of the Bcl family) control the secretion of proapoptotic factors from mitochondria.
In the present study, Bax and Bak mRNA (proapoptotic) were down-regulated, while Bcl-2 mRNA and protein, which are anti-apoptotic, were up-regulated in the CRE (100 mg/kg) group, the opposite of the AF group. p53, which regulates the Bcl-2 family proteins, also had its expression appreciably reduced in the CRE (100 mg/kg) group, with the opposite effect recorded in the AF group. These findings indicate that CRE could exert its hepatoprotection by interacting with these proteins.

Fig. 6. (1-3) Protein expression of Fas, FasL, and NF-κB p65 in the livers of the experimental rats by western blot analysis; (4-6) expression of Bcl-2, Bak, and Bax mRNA in the livers of the experimental rats by RT-PCR analysis. Statistical analysis was carried out by one-way ANOVA followed by Dunnett's multiple comparison test. a denotes significance compared with the control group (P < 0.05); x denotes significance compared with AF (P < 0.05); z denotes significance compared with AF (P < 0.001) (n = 5).

Immunohistochemical analysis

Immunohistochemistry (IHC) is the most common application of immunostaining. It involves selectively identifying antigens (proteins) in the cells of a tissue section by exploiting the principle of antibodies binding specifically to antigens in biological tissues. The AF group was stained brown due to the presence of apoptotic proteins. The immunohistochemical study was performed to observe the expression of the proteins Bcl-2 and p53. As depicted in Fig. 7, Bcl-2 expression in the AF group was five times lower than in the control group. CRE (100 mg/kg) reversed this effect, and Bcl-2 expression at 100 mg/kg of CRE was higher than in the control and Liv. 52 groups. The p53 expression in the AF group was eight times that of the control group, and the levels were appreciably reduced in the CRE (100 mg/kg) and Liv. 52 groups (Fig. 8).

Conclusions

Inflammatory cytokines, such as TNF-α, induce liver injury in the rat model of alcoholic liver disease (ALD). Hepatoprotective cytokines, such as IL-6, and anti-inflammatory cytokines, such as IL-10, are also associated with ALD. IL-6 ameliorates ALD via activation of the signal transducer and activator of transcription 3 (STAT3) and the subsequent induction of a variety of hepatoprotective genes in hepatocytes. IL-10 inhibits alcoholic liver inflammation via activation of STAT3 in Kupffer cells and the subsequent inhibition of liver inflammation. Interactions between pro- and anti-inflammatory cytokines and other cytokines and chemokines are likely to play important roles in the development of ALD (Kawaratani et al., 2013). However, continued alcohol consumption overrides this protective mechanism of the body, and liver damage progresses from fibrosis to cirrhosis.

Fig. 7. Results of the immunohistochemical analysis of Bcl-2 in the livers of experimental rats. Statistical analysis was carried out by one-way ANOVA followed by Dunnett's multiple comparison test.

Fig. 8. Results of the immunohistochemical analysis of p53 in the livers of experimental rats. Statistical analysis was carried out by one-way ANOVA followed by Dunnett's multiple comparison test. a denotes significance compared with the control group (P < 0.05); x denotes significance compared with AF (P < 0.05); z denotes significance compared with AF (P < 0.001) (n = 5).

In the current study, the serum TNF-α, IL-6, and IL-1β levels increased while IL-10 decreased in the
alcohol-treated group (Aldred et al., 1999; Hill et al., 1992). TNF-α appears to be responsible for regulating products that stimulate inflammation and fibrosis in alcohol-induced hepatotoxicity (Aldred et al., 1999). CRE (100 mg/kg) treatment inhibited the increase of TNF-α and IL-6, suggesting that CRE (100 mg/kg) attenuated the alcohol-induced inflammatory cascade in the liver. Considerable evidence suggests that TNF-α and IL-6 contribute to the pathogenesis of inflammatory liver diseases by activating the NF-κB signaling pathway (Nanji et al., 1999). CRE (100 mg/kg) treatment corrected the disturbed levels of inflammatory mediators and brought them back to near normal.

The TUNEL assay detects DNA breaks associated with necrotic cell death (Ansari et al., 1993; Nishiyama et al., 1996). It also detects active DNA repair (Kanoh et al., 1999). TUNEL staining is therefore a general method for detecting DNA breaks, one of several in situ DNA end-labeling techniques. DNA laddering is a distinctive feature of DNA degraded by caspase-activated DNase (CAD), a key event during apoptosis. CAD cleaves genomic DNA at internucleosomal linker regions, resulting in DNA fragments that are multiples of 180-185 base pairs in length. Separation of the fragments by agarose gel electrophoresis and subsequent visualization, for example by ethidium bromide staining, produces a characteristic "ladder" pattern. The results here showed that CRE (100 mg/kg) treatment lowered the number of TUNEL-positive cells and effectively reduced the formation of the DNA ladder caused by alcohol. CRE (100 mg/kg) could suppress the activities of the two caspases and hence restrain hepatocyte apoptosis. Western blot and RT-PCR analyses allowed the detailed mechanisms involved in hepatoprotection by the lichen extract to be studied.

The results of this study reveal that Cladonia rangiferina (CR) may prove helpful in the treatment of alcohol-induced hepatotoxicity and oxidative stress. Across the different markers, CRE (100 mg/kg) demonstrated the best hepatoprotective activity among the CRE-treated groups (CRE 50 mg/kg and CRE 200 mg/kg). These observations point to the importance of the components of the extract, i.e., the depsides and depsidones found in the analysis of the lichen extract. The ameliorative action of CR in alcoholic liver damage may be due to its antioxidant, anti-inflammatory, and anti-apoptotic activities. Exhaustive clinical studies must be carried out to confirm the safety and benefits of CR before it can be used in human beings.
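The figure legends throughout report one-way ANOVA followed by Dunnett's multiple comparison test against the AF group. A minimal sketch of that workflow with invented group data (scipy.stats.dunnett requires SciPy ≥ 1.11):

```python
# Sketch: the statistics reported in the figure legends (one-way ANOVA followed
# by Dunnett's multiple comparison against the AF group). Data are invented.
from scipy.stats import f_oneway, dunnett

af     = [8.1, 7.9, 8.4, 8.0, 8.3]   # alcohol-fed group as the comparison control
cre50  = [6.9, 7.2, 7.0, 6.8, 7.1]
cre100 = [5.1, 5.4, 5.0, 5.3, 5.2]
liv52  = [5.0, 5.2, 5.1, 5.4, 4.9]

print(f_oneway(af, cre50, cre100, liv52))         # overall group effect
res = dunnett(cre50, cre100, liv52, control=af)   # each treatment vs AF
print(res.pvalue)                                 # adjusted p-values per group
```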
Valuing Physical and Social Output: A Rapid Assessment of a London Community Garden

The value of urban farms and gardens in terms of their potential for supplying a healthy diet to local residents is well known. However, the prime objective of these spaces often differs from food production, with growing instead being the means by which other outputs are achieved. Valuing spaces that provide such diverse benefits is therefore a complex exercise, as any measure needs to incorporate their physical as well as their social outputs. Only through such an integrated approach is the true value of these gardens revealed and the scale of their potential for contributing to health agendas made apparent. Social return on investment studies can be heavily resource dependent, and the rapid cost benefit approach advanced here suggests that, with limited expertise and minimal invasion of volunteer and beneficiary time and space, a public value return on investment ratio can be estimated relatively rapidly using an 'off the shelf' tool. For the food growing area of a London community garden, a return on investment of £3 for every £1 invested is calculated. This demonstrates the contribution that community gardens can make to social wellbeing within cities and justifies a call for further recognition of these spaces in urban planning policy.

Introduction

The diverse benefits of community gardens have been widely reported in the literature [1][2][3][4][5] and are evident on numerous fronts, including economic gain, environmental benefits, contributions to society, and improvements to a population's health and wellbeing. While many previous studies have described and quantified the benefits from one dimension, research that has simultaneously quantified more than one element is lacking. This dearth of quantitative evidence regarding the contribution of community gardens risks failing to give the sector the objective, and potentially financial, recognition that it deserves. As a contribution to the debate on the monetised benefits of community gardens, this paper uses a rapid cost benefit methodology to put a value on the social benefits emanating from one garden in central London and combines this with the economic value of the fruit and vegetables produced. It calculates the public value return on investment (ROI) achieved by the garden and subjects this to various proposed changes in garden organisation to assess the impacts on this indicator. The approach has been developed and applied partly in response to calls from the community garden sector in the UK for a rapid assessment tool that has peer-review status and that can demonstrate the financial value of the garden's intangible outputs alongside its more conventional produce. These intangible outputs, the multiple social benefits of community gardens, are much quoted in the literature. These spaces bring people together to work with one another, helping them to overcome loneliness and exclusion and to develop a skill, as well as generating feelings of happiness and a sense of self-worth [6][7][8]. Gardens provide training opportunities for people of all ages and from all backgrounds, as well as helping to educate younger members of society in the origins of the food on their plates.
Community gardens break up so-called food deserts, allowing local populations to access previously unavailable healthy food and school children to try foods that they would perhaps not have chosen without the education or improved accessibility [6]. Fruit and vegetables direct from the garden often look more appealing and are certainly fresher, making increased uptake likely [5,9]. Gardeners and those living in communities surrounding the garden benefit from the produce on offer, particularly as prices may be lower as a result of the short supply chains. A better diet can contribute to better physical health and help to reduce the burden of obesity and related illness, which is prevalent in the UK [10]. The physical act of gardening also helps to maintain fitness, can improve mental health and can lead to healthier lifestyles [11]. For the UK community gardener, passing time and working in the garden brings personal advantage and simultaneously creates benefit for society and the public good, thereby enhancing individual gain. In the UK, there is growing support for therapeutic and prescriptive gardening to assist individuals in overcoming or living with mental health conditions [2,[12][13][14]. Although it is acknowledged that improvements in mental health are difficult to attribute specifically to gardening activities, a range of studies report better social interactions between garden users, improved physical activity and better general quality of life [15]. Mondelēz's Health for Life in the Community programme reported in 2017 [16] that 87% of survey respondents had met new people since participating in its outdoor programme of Green Gym sessions and healthy eating sessions, demonstrating the value of outdoor activity for reducing isolation and feelings of loneliness. Howarth et al. [15], having reviewed the body of evidence on the impact of gardens on physical and mental health and wellbeing, summarise the areas where gardens contribute as: reduced social isolation; improved physical activity; improved nutritional intake; reduced anxiety and stress; reduced depression; and individual and community wellbeing [15]. However, studies that value both the physical and the social outputs of community gardens specifically are few. This may be due to the complexity of measuring social outcomes [17], as well as the time required for this and for recording the physical output of community spaces. Measurement tools have been developed (for example, Farming Concrete [18], Harvest-ometer [19], and MYHarvest [20] are all online tools for recording farm or garden output). However, these do not allow for a valuation of garden produce over time for individual gardens while simultaneously putting a monetary value on the wellbeing improvements recorded by those volunteering in the gardens. Buckley and Peterson [21] provide a basic guide to cost benefit analysis (CBA) for 'urban agriculture', but examples of the application of this methodology are absent. A social return on investment (SROI) study was completed for the Gorgie City Farm Community Gardening Project in 2011 [22]. An SROI "measures change in ways that are relevant to the people or organisations that experience or contribute to it. It tells the story of how change is being created by measuring social, environmental and economic outcomes and uses monetary values to represent them" [23].
It largely derives from cost benefit analysis (CBA) but is much more heavily dependent on stakeholder involvement in determining measurement indicators [17]. In this way, SROI is more likely to include benefits that are obvious to service users but perhaps less obvious in terms of financial indicators. Because of the resulting divergences in sets of indicators between organisations, the SROI Network recommends that results are not compared between organisations, though they may be used over time within an organisation to study the effects of internal changes. Where a study is evaluative as opposed to a forecast, there is a need to report on the outcomes achieved, and if data is not regularly collected as a project progresses, it can take many months to gather this data through contact with various stakeholders [23]. The Gorgie City Farm SROI found a ratio of £1:£3.56, i.e., for every pound invested in the project in 2009, a total of £3.56 of social value was generated. Despite such a return, Gorgie City Farm closed at the end of 2019. It has since reopened, but its initial difficulties demonstrate how a lack of recognition of its substantial contribution to society makes it difficult to continue operating with limited funds. In this SROI analysis, no allowance appears to have been made for any physical output from the project, with all benefits accrued being social returns to the various beneficiary groups. Other food and community based SROI studies have been completed, but not for community gardens per se. A 2013 study by the University of Gloucestershire's Countryside and Community Research Institute (CCRI) of three food growing case study projects from the Big Lottery funded Local Food programme found a return on investment (ROI) of £6 to £8 for every £1 invested in the programme in terms of economic and social returns [24]. A study that looked at the SROI of The Wildlife Trusts' volunteering programmes found an ROI of £6.88 for every £1 invested for people with low levels of wellbeing at the start of the programme, and an ROI of £8.50 for every £1 invested for people with average to high wellbeing at the start [25]. A 2016 SROI evaluation of The Conservation Volunteers' Green Gym programme, which provides opportunities for volunteers to work together in outdoor activities, found an ROI of £4 for every £1 invested [26]. SROI derives from CBA [27], a methodology that allows for the inclusion of the social and environmental effects of a project or other intervention but is less heavily dependent on stakeholder engagement than SROI, possibly allowing a more rapid analysis to be undertaken. The approach applied in this paper to value the costs and benefits of a London community garden follows a cost benefit methodology rather than an SROI. The community gardening sector makes a valuable contribution in terms of social benefits, but harnessing any income as a result of this is difficult without demonstrating the monetary value of such gardens to society. Community gardens earn an income from sales of produce and outreach activities and use this, as well as grant income, to fund their social provision. These activities generate little or no income. Putting a monetary value on the services provided should assist the sector in gaining recognition for the savings it makes for society.
It also adds to debates on the future of the UK social services sector and the potential role of organisations offering therapeutic gardening sessions as a means of reducing the burden on the NHS. Overweight and obesity alone are forecast to cost the NHS £9.7 billion by 2050, and the wider annual cost to society will reach £49.9 billion by that time [28]. Barry and Blythe [2] identify these predicted costs as one of the drivers behind increased dialogue between health and 'green' organisations, increased focus by the NHS on improving its own greenspaces, and increased use of social prescribing. The current research has three main objectives. The first is to demonstrate the application of an off-the-shelf tool to calculate the cost to benefit ratio of a London community garden. The tool selected has been used to evaluate large public sector projects, but as yet there is no evidence of its application in smaller scale community-led initiatives. Using a simplified CBA tool allows non-experts who would not be confident with conventional CBA methodology to complete a rapid summary of garden 'performance' and could help with garden planning when different planting schedules or social activities are being considered. Secondly, by explaining the approach to cost benefit analysis used for this case study garden, it is anticipated that the tool will become more accessible to those working in community gardens and potentially other community based initiatives whose experience of using this method of analysis is limited. Being relatively simple to conduct, it allows for implementation by non-trained staff in a short time period, acknowledging the many demands on frequently overstretched garden employees and volunteers. These first two objectives are addressed in Section 3 of the paper, where the implementation of the tool is explained and its relevance to the community garden discussed. The third objective of the paper is to use the tool to demonstrate the social value of one community garden in London and to discuss the implications of such value in terms of policy. Calculating the public value return on investment provides the garden with a means to introduce hard data into funding applications. Much of the SROI work done to date has been reported in project documents, which, whilst valuable to the organisation and its funders, lack the peer review that academic journal publication offers. Introducing a more formal valuation of the community garden sector to the academic arena should help to validate claims made in future funding applications put forward by this, and hopefully other, community gardens in the UK. It should also enable them to gain greater recognition in urban planning and food policy debates. This last objective is addressed in the Discussion in Section 4.

Methods

Using a published tool, the method employed here calculates the public value return on investment of a London community garden. Public value represents the overall benefit to the public of a project or initiative and includes improved health and wellbeing as well as economic growth [29]. The analysis takes data from in-depth interviews with staff at a London case study garden and combines this with information gathered by the FEW-meter project (see Appendix A for a description of the project). In the UK, nine case study community gardens in London recorded harvest data during the 2019 growing season.
The case study garden considered in this paper was one of the nine FEW-meter participants, and its harvest data was used to inform the analysis presented. In addition, volunteers at the nine gardens were asked to complete a short questionnaire that included a section on the impact of participating in the garden on their health and wellbeing. The results of this survey were used to derive the 'affected population' estimates for the CBA, as described later in the paper. The cost benefit tool selected for the analysis was the Greater Manchester Combined Authority (GMCA) Cost Benefit Analysis (CBA) Excel tool [30], which allows for the calculation of the return on investment in the community garden. This tool was initially developed by the research team at GMCA in 2011 and was adopted in 2014 as supplementary guidance to support the CBA recommendations of the HM Treasury Green Book (Central Government Guidance on Appraisal and Evaluation) [29,31]. The tool has been continually updated and used widely to evaluate the financial, economic and social benefits accruing to individuals and businesses as a result of different projects or interventions. Examples of its application by local and national government, CSOs and emergency services are available [32][33][34][35]. A team from GMCA provided training in the basics of using the CBA tool to volunteers and employees of gardens that are members of Social Farms & Gardens in March 2019, along with a short manual on the key elements of how to apply the methodology [36]. The decision to apply the tool in a FEW-meter project case study garden came as a result of feedback from this training, where attendees noted that it was 'Very useful training of a critical area for improving services and increasing access to funding' and that the tool 'can demonstrate benefits to service users in the whole organisation, good for grant applications' and its application would be 'a way to quantify our value to the borough' [37]. However, participants also mentioned the need for a simpler tool/model, and in later discussions with FEW-meter garden staff who attended the event there was confirmation of its potential usefulness but concern about the complexity of its application. Further detail on the mechanics of this tool is available on the GMCA website: it uses pre-loaded reference guides to value the benefits from different social outcomes of a project. In discussions with the garden concerned, the tool has been adapted to fit the specific circumstances in terms of the size and make-up of beneficiary groups, retention rates and expected outcomes, with assumptions and measurements outlined below. Whilst two outcomes and four benefits were selected for this case study, the tool offers a menu of many other potential outcomes from social interventions. For example, where a garden runs classes for school children or training sessions for the unemployed, outcomes such as reduced truancy or increased employment may be selected to measure impact, with values for these benefits already included in the model. As much of the valuation data is already made available through the GMCA tool, it offers a relatively 'rapid' means by which to complete a CBA. This term borrows from the development literature, where a 'rapid rural appraisal' is carried out by a team of staff from different disciplines in a short period, making use of secondary data and more informal data collection procedures [38][39][40].
This recognises the many demands on garden staff, who are frequently employed part-time on fixed term contracts dependent on grant funding, with a busy schedule of volunteers and visitors to organise in addition to garden administration. The simplified CBA methodology advanced here does not include the depth and rigour of analysis that is usual in an economist-led evaluation; its purpose is instead to provide a methodology that is accessible to a wider audience that would not be experienced in the use of conventional CBA. One of the authors is a community gardening practitioner and is aware of the need amongst urban gardeners and growers for a method of quantifying their garden's contribution to society that meets these requirements. A first iteration of the model was run using data obtained from the case study garden and the findings were discussed with the garden staff. After this, the model was adjusted to allow for modifications in some of the underlying assumptions regarding the numbers benefitting from participation in the garden. The results of this second iteration are described here.

Computing the CBA

In order to demonstrate how the CBA tool can be used within the community garden setting to generate a rapid assessment of physical and social output and to assist with internal planning, it was applied here to the food growing area of a London community garden. The garden (illustrated in Figure 1) was established by the local community in the early 1980s and aims to improve the physical and emotional wellbeing of those living and working in the locality and surrounding areas.
It offers a range of facilities, including a football pitch, a children's play area, a café, community rooms, a safe area for mother and baby groups, small allotments for local growers and a community food growing area. There are numerous activities on offer: sports training for young people, community classes that cross the age divide, horticultural therapy for adults with learning disabilities and mental health issues, and sustainable food growing to supply the onsite vegan café. The total site occupies approximately 1400 m², with the food growing area accounting for about one quarter of this space (350 m²).

Project income for the year to March 2019 totalled approximately £371,000, of which 60% was earned from room and pitch hire and sales from the café; 27% came from grants and 12% from the local authority (and 1% from interest earned). The financing from the local authority draws on taxpayer resources, an additional cost to society reflected in a higher deadweight loss from the operation of the food growing area. However, this wider effect is not considered in the simplified CBA model as presented here. The CBA is formed of two parts: the presentation of the costs incurred by the project over the course of the year, and the calculation of the value of the observed benefits from the project in the same year.

Assessing the Costs

Cost data was collected during an interview with the project director in which all invoices for the project for 2019 were reviewed. From these, the costs relating to the food growing area were selected, and for some that applied to the project as a whole, a proportion was attributed to the food growing part. There are six different types of cost, shown in Table 1.

1. Costs applying to the whole site: these include water, rent, rubbish collection, business rates and insurance and apply to the whole garden area. In agreement with the project director, for the CBA these are estimated at one quarter of the total project cost, based on the physical area occupied by the food growing area.
2. Garden maintenance: these are costs involved in maintaining the garden area, such as repairs to infrastructure and skip hire.
3. Consumables: these are annual costs for items used for food and plant production, such as seeds, compost and petrol.
4. Replacement costs: these are directly attributable to the food growing area and include items such as secateurs, watering cans, gloves and sharpening tools. A similar outlay is made on such small tools each year.
5. Salaries and associated employment costs: salaries allow for 2.2 FTE staff plus training and DBS costs.
6. Volunteer labour input: there are four main groups of volunteer gardeners working in the food growing area. Whilst these individuals receive no payment, in the CBA their time is valued, on the assumption that without this labour input, paid labour would be employed. The issue of valuation of volunteer time is discussed further in Section 3 of the paper. The groups of volunteers are as follows:
• Adults with learning disabilities from a local college: 399 h of unpaid labour per year.

This gives a rounded total of 3572 h of unpaid labour per year. On the advice of the project manager, this has been valued at the 2018/19 London Living Wage rate of £10.55 per hour, the rate on which the garden bases its staff salaries. This is similar to the figure of £10 per hour suggested by the Heritage Fund for the lowest of three grades of volunteer labour: Professional (£50 per hour), Skilled (£20 per hour) and Volunteer (£10 per hour) [42].

Assessing the Benefits

Assessing the benefits within a CBA, whether simplified or conventional, is more complex, given that health and social benefits are difficult to quantify and thus to value. Social benefits are achieved through improvements in wellbeing, but such a qualitative indicator is difficult to value. Vardakoulias [27] suggests that many studies draw values for benefits from other studies, so that the 'wellbeing values' used in many analyses are very rarely based on empirical research. The complexity of the valuation is compounded by the absence of a counterfactual: it is difficult to establish how much of any improvement in a person's wellbeing is due to visiting the garden and how much would have occurred without these visits. The GMCA CBA tool applied here uses previously identified data sources to attribute monetary values to the qualitative benefits flowing from a project or intervention, in this case the food growing area of the case study garden. The tool presents an extensive list of possible outcomes stemming from projects concerned with social interventions. Part of the analysis performed here involved selecting the outcomes most likely to be achieved through the work of a community garden. This was based on a review of the evidence on the health and social impacts of involvement in community and allotment gardens. The studies, largely taken from the Howarth et al. review [15], are listed in Table 2. These supported the selection of two potential outcomes of gardening from the GMCA list: 'Improved wellbeing of individuals' and 'Reduced hospital admissions'. The specific benefits to individuals of these outcomes are also listed in Table 2. Use of the 'Improved wellbeing of individuals' outcome was discussed with the project manager and the head gardener at the case study garden and both supported this. They were less convinced about the second potential outcome, 'Reduced hospital admissions' as a result of improved health, citing the difficulties of directly linking garden attendance with reduced need for GP or hospital care. The effect of including this outcome was therefore tested in the sensitivity analysis, where 'Reduced hospital admissions' was included to observe the effect on the public value cost benefit ratio. This latter outcome is shown as a fiscal benefit in the simplified CBA model, representing the monetary saving to the NHS as a result of prospective patients not needing to seek treatment.
Having identified the benefit indicators for the food growing area, it was necessary to put values to these, starting with the number of gardeners likely to benefit from attendance (the 'target population'). Discussions and observation at the garden allowed for the identification of four main beneficiary groups, the same groups that were included in the costs in terms of their volunteer labour: adults with learning disabilities from a local college; adults with mental health and associated issues; elderly gardeners; and fit adults. From the target population, an estimate was needed of the 'affected population', that is, the proportion likely to benefit from the outcome of 'Improved wellbeing' (consisting of increased self-confidence, reduced isolation and improved emotional wellbeing). The proportions for each benefit were taken from the results of a volunteer survey in London for the FEW-meter study. This asked garden volunteers how involvement in the garden had helped them on a number of social indicators, including their self-confidence, their interactions with others and their overall mood. For the outcome 'Improved wellbeing of individuals' in the CBA, the percentage of FEW-meter volunteers responding positively to each of these three indicators has been used to calculate the likely proportion of the population in the case study garden to see an improvement in wellbeing. The proportions used were calculated as shown in Table 3. So, for example, if 100 people attended the garden on a regular basis, it could be assumed that 68 would be likely to feel an improvement in self-confidence, 86 would see a reduction in feelings of isolation, and 93 would see an improvement in their emotional wellbeing. The size of the likely affected population once these percentages have been applied is shown in Table 4, with the reasoning behind the additional rows in Table 4 discussed below.

Level of Engagement and Retention Rate

The level of engagement with the 'affected population' attending the garden refers to the percentage of this population with which it is possible to engage through the activity in question, i.e., gardening. In the scenario presented, it is assumed that, given the groups of gardeners are small and work with a trained leader, engagement will be high; the rate of engagement is therefore given as 100%. The retention rate refers to the percentage of the group that continues with the activity over the course of the year. A rate of 60% has been allowed for in the analysis, reflecting the recorded attendance rates of current participants in the groups that the garden organises. This is later tested in the sensitivity analysis.
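As a rough illustration, the affected-population arithmetic described above can be scripted. The proportions are those from the FEW-meter survey quoted in the text (68%, 86%, 93%), the worked example of 100 regular attendees is also from the text, and engagement (100%) and retention (60%) are as in the model:

```python
# Sketch of the affected-population arithmetic: target population x benefit
# proportion x engagement x retention, per wellbeing benefit.
benefit_rates = {"self-confidence": 0.68,
                 "reduced isolation": 0.86,
                 "emotional wellbeing": 0.93}

def affected(target_pop, rate, engagement=1.0, retention=0.6):
    return target_pop * rate * engagement * retention

for benefit, rate in benefit_rates.items():
    print(benefit, round(affected(100, rate), 1))
```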
Impact and Deadweight

The analysis is based on the assumption that those attending the garden see an overall improvement in their wellbeing, measured through increased self-confidence and self-esteem, reduced isolation and improved emotional wellbeing. The extent to which an improvement in these benefits is experienced is reflected in the impact percentage. In the initial scenario, this is valued at 30%, but given that it is difficult to assess quantitatively, it is tested later in the sensitivity analysis. Deadweight is set to zero on the assumption that, in the absence of the supported group, the participants' wellbeing would be unchanged. The tool deducts deadweight from impact to give a net percentage improvement in wellbeing that is due to the project.

Public Value per Person

The row in Table 4 labelled 'Public value per person' refers to the monetary value (the shadow price) given to the improvement in wellbeing (increased self-confidence and self-esteem, reduced isolation and improved emotional wellbeing) brought by attendance at the garden. This value is pre-loaded into the GMCA model with an explanatory reference given as: 'Bespoke analysis carried out by New Economy Manchester. Based on apportioning the willingness to pay value for the QALY impact of depression (£35,400 per annum) across all the domains of wellbeing as set out in the National Accounts of Wellbeing' [30]. At a macro level, the calculation of value was based on the NICE benchmark for the full social value of a QALY (Quality Adjusted Life Year) of £60,000 [66]. The indicators used to assess social benefits were drawn from the National Accounts of Wellbeing [67] and values were apportioned according to analysis undertaken by Cox et al. [68]. Final adjustments to the values were made by the tool developers following meetings with the Department of Health, to adjust the proportion of a QALY related to depression [69].

Inflation Adjustment

The GMCA tool uses 2009/10 prices to value the per person public value benefit of engaging in an activity, as established when the tool was originally developed. To account for inflation from 2009/10 to 2019/20, the GDP deflator allows for price inflation over the ten-year intervening period and calculates monetary benefits at 2019/20 prices. In the model, the 2009/10 GDP deflator is calculated as 82.939 when 2019/20 is the base year (100.00). Applying this index to the 2009/10 per unit public value figures of £3500 and £8500 gives 2019/20 values of £4220 and £10,248 respectively. The tool makes these adjustments automatically.

Public Value Benefit Calculation

Using the assumptions regarding affected population, engagement, retention, impact and deadweight, as previously described, and applying the GDP deflator and public benefit values as used by the GMCA CBA tool, the social value of the benefits the garden offers is shown in Table 4 to total £250,041.
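The deflator adjustment is simple to reproduce. A minimal sketch using the figures quoted above:

```python
# Sketch: the GDP-deflator adjustment applied by the GMCA tool. The 2009/10
# deflator is 82.939 with 2019/20 as the base year (100.00); £3,500 and £8,500
# are the 2009/10 per-person public values quoted in the text.
def to_2019_20_prices(value_2009_10, deflator=82.939, base=100.0):
    return value_2009_10 * base / deflator

for v in (3500, 8500):
    print(f"£{v} (2009/10) -> £{to_2019_20_prices(v):,.0f} (2019/20)")
# -> £4,220 and £10,248, matching the figures quoted above
```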
Value of 'Sales' from the Food Growing Area

In the GMCA CBA tool, the value of produce from the garden is listed as an 'offset cost', so that the total sales value of plants, fruit and vegetables produced in the garden is deducted from the total cost of the food growing area before comparison with the social benefits. For the case study garden, all produce was weighed and recorded during the 2019 growing season. This produce is not sold but supplies an on-site café; it was valued at £1500 for approximately 205 kg, using organic price data from the Soil Association [70]. In addition, the garden manager estimated the value of plant sales to visitors at approximately £1000 for the year. For purposes of convention, here the total sales value of £2500 is added to the value of benefits before the comparison with costs is made to create the cost benefit ratio. This addition is shown in Table 5.

Table 5. Calculation of total benefits from the food growing area of the garden.
Total public value benefit: £250,041
Sales of fruit and vegetables: £1500
Plant sales: £1000
Total benefits: £252,241

Comparing this total benefit value of £252,241 with total costs of £85,148 (Table 1) gives a cost benefit ratio (public value ROI) of £1:£2.96; that is, for every £1 invested in the garden, £2.96 of public value is created.

Additional Beneficiaries

Additional beneficiary groups could be added to the analysis: those visiting less formally, as well as those working in the neighbourhood or passing by on a regular basis. These have not been included, as the analysis is specifically about the food growing area of the project and those attending for non-food-growing purposes would be unlikely to visit this part of the project. Passers-by are less likely to see this part of the project area, which is tucked away in a far corner of the site. However, if the analysis were extended to include all aspects of the project, a more in-depth approach could include a survey of local residents and businesses.

Sensitivity Analysis

The calculation of a public value ROI of £2.96 for every £1 invested in the food growing area of the project is based on a number of assumptions regarding engagement, retention rates, impact and deadweight, amongst others. These are largely taken from interviews with project staff, from analysis of the FEW-meter volunteer survey and from discussions with the designer of the GMCA CBA tool. In order to check the effects of changes to these components, as well as to determine the critical rates for the garden in terms of generating a positive return, sensitivity analysis was carried out on a number of the assumptions.

Retention and Impact

These were estimated at 60% and 30% respectively in the model, as shown in Table 4. The 60% was based on garden attendance records. Impact, however, was more difficult to assess. Bagnall [25] attributes 80% of the benefits of attendance at a Wildlife Trust volunteer programme to the programme itself. Pank [22] attributes 58% of the mental health improvements of garden volunteers to garden participation. These suggest the estimate of 30% used in this study to be quite conservative. Table 6 shows the effect on the ROI of adjusting the retention and impact rates. If retention is held at 60% and impact is increased to 50%, the ROI improves to £4.92. If the garden has an interest in showing a return of almost 1:5, it might consider undertaking a survey of its volunteers to ascertain whether a 50% improvement in wellbeing as a result of garden attendance is a fair estimate. Maintaining retention at 80% over the course of a year may be more of a challenge. Clearly, reductions in retention and impact do not serve the garden well: it could perhaps afford a reduction in retention to 40% if impact remains at 30%, but if repeat attendance starts to drop off and volunteers note a reduction in the contribution that the garden makes to their overall wellbeing, the garden becomes less socially viable, with an ROI of only 1:1.33.
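A sketch of the Table 6 sensitivity follows, assuming the public value benefit scales proportionally with both retention and impact from the base case; this proportionality is consistent with the tool's arithmetic but is an assumption on our part, not stated explicitly in the text:

```python
# Sketch: retention/impact sensitivity, scaling the base-case public value
# benefit (60% retention, 30% impact, £250,041) and adding sales of £2,500
# before dividing by total costs of £85,148.
BASE_BENEFIT, SALES, COSTS = 250_041, 2_500, 85_148

def roi(retention, impact, base_retention=0.6, base_impact=0.3):
    benefit = BASE_BENEFIT * (retention / base_retention) * (impact / base_impact)
    return (benefit + SALES) / COSTS

for retention, impact in [(0.6, 0.3), (0.6, 0.5), (0.4, 0.3), (0.4, 0.2)]:
    print(f"retention {retention:.0%}, impact {impact:.0%}: ROI {roi(retention, impact):.2f}")
# -> 2.97, 4.92, 1.99, 1.33; the paper reports 2.96 for the base case because its
#    Table 5 total (£252,241) is slightly below the sum of the components used here.
```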
Value of Volunteer Labour

The effects on the ROI of adjusting the valuation of volunteer labour, while holding retention at 60% and impact at 30%, are shown in Table 7. In consultation with the project manager, it was agreed that in the absence of volunteer labour it would be necessary to employ paid labour at the London Living Wage of £10.55 per hour. However, the productivity of this labour would be higher. The sensitivity analysis therefore looks at the impact of a 50% reduction in the wage rate for the volunteer gardeners, assuming that an active paid labourer could achieve the same output in half the time of the elderly or socially disadvantaged volunteers, who frequently see time spent at the garden as a social activity. Table 7 shows an ROI of 3.81 at this reduced wage rate. There is a debate in the literature concerning the valuation of volunteer hours [17,71]. Part of this derives from the fact that it is often not known what volunteers would be doing if they were not working in the garden: they may, for example, be employed in skilled, high wage jobs, or they may be enjoying more leisure time. These two alternatives have different opportunity costs. Vining and Weimer [71] suggest that if the volunteer derives an amount of satisfaction (utility) from the activity (gardening in this case) at least equivalent to the opportunity cost of the time spent volunteering, then perhaps their labour should be valued at zero for the purposes of the CBA. This was the approach used by Pank [22] in the base model; volunteers' time was, however, costed in the sensitivity analysis in that study, to account for the fact that without the input from those volunteers the gardens would not be so well maintained and the satisfaction of other stakeholders would fall. In the case study presented here, the utility or social value gained from volunteering is already accounted for in the benefits calculation, but the sensitivity analysis includes a calculation without volunteer 'wages' to see the effect on the cost benefit ratio. As expected, when labour is valued at zero, suggesting attendees would not otherwise be using their time in a more productive manner, the ROI reaches £5.32. It is for the garden to decide how it wishes to use its CBA and therefore the most appropriate manner in which to treat volunteer labour for the purposes of the analysis.
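A sketch of the Table 7 arithmetic, holding total benefits at the Table 5 figure and revaluing only the 3572 volunteer hours within total costs:

```python
# Sketch: volunteer-labour sensitivity. Total costs of £85,148 include 3,572
# volunteer hours valued at the London Living Wage (£10.55/h); other costs and
# total benefits are held at the base-case figures.
TOTAL_BENEFITS = 252_241          # as reported in Table 5
BASE_COSTS, HOURS, LLW = 85_148, 3_572, 10.55

def roi(wage_rate):
    costs = BASE_COSTS - HOURS * LLW + HOURS * wage_rate
    return TOTAL_BENEFITS / costs

for rate in (LLW, LLW / 2, 0.0):
    print(f"wage £{rate:.2f}/h: ROI {roi(rate):.2f}")
# -> 2.96, 3.80 and 5.31; the paper reports 3.81 and 5.32, the small differences
#    presumably reflecting rounding in the underlying cost figures.
```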
If volunteer labour is costed at zero and retention stays at 60%, there is a healthy ROI of 4.75, but this is below the 4.92 achieved when produce output remained at £2500, the target population at 89, and impact was raised to 50%. The garden could use such information to determine whether its future priorities lie with food output or with maximising the wellbeing that volunteers gain from working in the garden.

Adding Volunteer Groups and Outcomes
Once set up, the CBA tool can be used to forecast the effect on the ROI of changing the number and type of volunteer sessions offered, as well as the potential outcomes from volunteering in the garden. Three scenarios are discussed here: including 'reduced hospital admissions' as a possible outcome, adding an additional group for elderly gardeners, and including a group for adults recovering from alcohol addiction.

Including 'Reduced Hospital Admissions' as a Potential Outcome
As stated earlier in the paper, staff at the project questioned the evidence that garden attendance reduces the need for GP or hospital care, so 'reduced hospital admissions' was not included as a potential outcome in the original CBA. The addition of such an outcome is tested here, however, because there is evidence in the literature (see Table 2) to support it as a potential benefit of gardening. The results are shown in Table 9: in the GMCA CBA tool, the fiscal saving from a hospital visit foregone is valued at £1864 at 2017/18 prices (adjusted using the deflator shown in Table 9), based on weighted NHS data [72]. Keeping the engagement, retention, impact and deadweight rates the same, with volunteer labour valued at £10.55 per hour, results in an ROI of 3 when reduced hospital admissions are included. This very slight rise in the ROI is due partly to the low numbers included in the likely affected population, and partly to the fact that the fiscal benefit per person is small in comparison with the public value benefits of improved wellbeing. The garden may conclude that it is better to concentrate on establishing robust impact data for its social outcomes than to spend time gathering evidence of its ability to generate fiscal savings that affect its ROI so little.

Adding Additional Groups
At the case study garden, the current ongoing group of elderly gardeners is oversubscribed, and one option under consideration is to run an additional group for 20 adults on a second day of the week. This would require an increase in staff costs of half a day per week, and although it would increase total public value, the ROI would fall slightly, to 2.94; when volunteer labour is given a zero value, it rises to 5.85. Such a calculation could assist the garden in deciding whether to expand its numbers, as well as supporting a claim for further funding for the additional group. A final consideration is the addition of a group of 10 adults recovering from alcohol addiction, an idea the project manager had considered in the last year. These additional 10 adults add to the previously calculated public value from improved wellbeing, as well as creating fiscal savings of £1800 per person (adjusted for inflation) and an additional £1398 per person in public value benefits. These additional benefits arise from assumed reductions in health and criminal justice costs; the valuations are based on NICE guidance documents [73,74], as referenced in the GMCA CBA tool.
With volunteer labour valued at £10.55 per hour, the ROI after the addition of this group is similar to the base case at 2.99, rising to 5.67 when volunteer labour is valued at zero. In terms of garden planning, even in the base case with a retention rate of 60% and the London Living Wage assumed, there is a slight improvement in social outcome values, so the group may be worth considering as an addition to the garden's current offer.

Data Needs for Community Gardens to Calculate a CBA
The analysis above has highlighted how the GMCA CBA tool may be applied to a community garden and how the garden may use the results to assist with internal planning decisions, to monitor performance, and to support grant applications. Table 6 showed the importance of maintaining retention rates, and of either looking to increase them or increasing the effectiveness of the support groups in helping attendees achieve improved wellbeing. Section 3.2.2 discussed the valuation of volunteer labour and showed the extent to which it affects the ROI; knowing the target audience for the results of the analysis will assist in choosing the valuation. Section 3.3.3 looked at how changes to the focus of the lead gardener affect the ROI, and finally the paper considered the impact of additional groups and of placing more emphasis on gathering data to support claims of fiscal savings. For other community gardens to undertake this analysis, they need to have the relevant data to hand. Once gathered, the calculations can be completed rapidly to produce a first iteration of results for discussion with garden staff and volunteers. Data requirements include: input costs; harvest weights; weekly produce prices; the number of organised groups running each week, together with numbers attending, turnover and characteristics of clients; and an estimate of the impact of group attendance and of deadweight (a minimal record-keeping schema along these lines is sketched at the end of this section). In the case study presented here, some data were taken from the FEW-meter project and some from interviews with garden staff. If the garden keeps accurate records of its purchases (quantities and prices), of the produce harvested over the course of the growing season, and of the groups attending on a regular basis, then the additional data demands of a similar analysis are minimal. A short before-and-after survey of attendees could confirm the impact of attending the garden on their perceived health and wellbeing, providing the impact percentage needed for the CBA. Valuing the produce from a community garden is a challenge where it is not sold but is directed to an onsite café, given to volunteers or used for communal lunches. Historically consistent and accurate price data are difficult to locate; if the garden recorded prices from external sources such as the Soil Association as the growing season progressed, valuing the produce at the end of the season would be an easier task. Having established some basic record keeping, it would be relatively simple for garden managers to use a pre-set model such as the GMCA CBA tool to view the overall social worth of their gardens and to see the effect of any suggested organisational changes.

Including Environmental Benefits
As it stands, the GMCA CBA does not allow for the inclusion of environmental benefits from urban gardens. This needs to be addressed if the model is developed further, as the environmental benefits of such green spaces are well known.
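As a complement to the data requirements listed above, a garden's record keeping could be organised around a handful of fields. The schema below is illustrative only; the field names are not taken from the GMCA tool.

```python
# Illustrative record-keeping schema for a garden preparing its own CBA.

from dataclasses import dataclass

@dataclass
class GardenRecords:
    input_costs: dict[str, float]          # e.g. {"seeds": 120.0, "staff": 30_000.0}
    harvest_kg: dict[str, float]           # crop -> weight harvested over the season
    weekly_prices: dict[str, list[float]]  # crop -> price per kg recorded each week
    groups_per_week: int                   # organised volunteer sessions
    attendees_per_group: int
    retention: float                       # share of attendees still coming at year end
    impact: float                          # share of wellbeing gain attributed to the garden
    deadweight: float                      # gain that would have occurred anyway

    def produce_value(self) -> float:
        """Value each crop at its average recorded external price."""
        return sum(
            kg * (sum(prices) / len(prices))
            for crop, kg in self.harvest_kg.items()
            if (prices := self.weekly_prices.get(crop))
        )
```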
The community garden produces food near to the population for which it is grown, reducing food miles and the resulting carbon emissions [75]. Pollution is reduced [9]. Many potential enhancements to biodiversity from community gardens have been identified [76,77,78], along with opportunities for improved drainage and for mitigating the urban heat island effect. Gardeners, by creating and maintaining the garden, improve the environment and benefit from those improvements. To an extent, these benefits to the individual are captured in the social benefit listed as improved 'emotional wellbeing'. However, community gardens are often the only green area within their part of the city, and an indicator of environmental gain within the CBA should reflect this.

Discussion
The evidence in support of the benefits of urban farming and growing is compelling, and the consensus is that involvement in this activity has a positive effect on physical and mental health, as well as producing nutritious local food. The analysis above has put a value on the combined physical and social output of the case study community garden, and its value is clear at both the micro and macro levels. At the level of the individual garden, the tool can be used to assist with internal planning, to justify funding applications, and to allow comparisons for the same garden over time and between similar gardens, generating ideas for maximising garden benefits. At the macro level, the analysis can be used to demonstrate the contribution that such gardens make to society, with associated implications for urban planning and health policy. Demonstrating the value of community gardens using the public value ROI was the third objective of this paper. Community gardens are one of several models for urban food production, alongside allotments and city farms. All these models stem from a tradition of growing food in cities [79] motivated by concerns such as subsistence or simply access to healthy food. Among them, community gardens are the model that most emphasises the social dimension of food production, using food as a catalyst for social amelioration. This role in supporting community social activities is well rehearsed in the literature [1,80-84]. Fêtes aimed at community-building and workshops for schools are only two examples of the social events that community gardens organise to generate the many benefits mentioned in the Introduction. The CBA presented here shows that, when a monetary value is assigned to these benefits, the social return is high and far exceeds the value of the food produced. This has several implications. Firstly, it is important to be clear that without the food growing, these social returns would not exist: attributing a monetary value to material and immaterial benefits does not justify a direct comparison between elements that are not truly comparable, because both are essential. Yet the considerable imbalance in value between the two may lead some to question the viability of urban food production and to redirect activities exclusively towards social returns. It is therefore necessary to frame the findings of the CBA correctly: its purpose is not to suggest that activities with a higher return should be prioritised, but rather to provide evidence, to policy-makers and to community garden managers, that the services provided by community gardens are, when translated into monetary terms, in fact overlooked.
Secondly, it is worth considering the opportunity that community gardens offer as places that can provide social support services at a time when the UK government, like many other European governments, is curtailing state intervention. In the UK, Social Farms & Gardens (the national organisation for community gardens and urban farms) represents over 1600 members of varied sizes (May 2020). Assuming the garden studied here is representative of a middle-sized community garden with modest food output and social benefits, a rough calculation (1600 × £200,000) gives a very approximate benefit of £320 million, some 3% of the planned £12.2 billion NHS spend on mental health in England in 2018/19 [85]. Such evidence may motivate policy makers to invest in these organisations, thereby formalising the contribution to social wellbeing that they can offer. Indeed, community gardens and other forms of urban farming have the potential to play a role in the three main policy areas currently receiving focus from the UK government: health; climate change and environment; and community cohesion and development. From a practitioner's point of view, effective urban policies are needed to turn this potential into actual impact. In another policy area, it is also important to recognise the value of community gardens in delivering policy objectives such as those captured in the Mayor of London's Food Strategy [86]. The relationship between city food strategies and policies and the growth of community food movements has been recognised previously [87]; it is perhaps now time for a re-appraisal of that relationship through the development of inter-sectorial strategies. Community gardens have the potential to deliver multiple benefits across areas as diverse as food security, health and economic growth, but such benefits can be elicited only if policy joins up strategies from diverse sectors. At present, there is little recognition at national level that the tool commissioned by GMCA, based on quantifying willingness to pay in relation to depression, is a valid indicator, especially with regard to gardening activities. There are sporadic collaborations between individual GP surgeries and community gardens specialising in supporting patients through mechanisms of social prescription (e.g., Sydenham Garden [88]), but these initiatives are framed and motivated not by economic evaluations but by medical evidence of the benefits of contact with nature and physical exercise. Hence the importance of ROI analysis being recognised as critical evidence in policy. The need for stronger recognition of community gardens and farms within planning policy could also benefit from the kind of quantification this study provides, if it proves replicable on a larger scale and the results reflect those in this example. Some local authorities have already produced planning advice highlighting this, for example Brighton & Hove [89]; others make reference to urban farming and growing in their city plans, and the growth of the Sustainable Food Places movement [90] will further improve the nature of this debate.

Conclusions
The analysis has shown the applicability of the GMCA CBA tool to one London community garden and has given a detailed explanation of how the CBA was undertaken.
Ideally, other community gardens in London and elsewhere will recognise the advantage of performing a rapid CBA, and the example provided here should be sufficient to enable others to carry out such an analysis. The analysis has also shown the contribution that the case study community garden in London makes to society and how, from a small base in terms of area covered, numbers employed and fresh produce output, it achieves a 1:3 ratio of money invested to social value produced. Undertaking a similar rapid CBA for other community gardens in London would allow these spaces to see how to maximise the public value they create. This could lead to recommendations for community gardens on maximising societal productivity, whether by balancing fresh produce output with societal objectives or by deciding which societal needs to focus efforts on, for example more groups for elderly gardeners or more time for school visits. Using the CBA tool to demonstrate their full value to society may lead to greater recognition for these urban spaces, better funding as health resources are diverted from treatment to prevention, and a more sustainable future for urban green space.

Acknowledgments: Special thanks to the staff and volunteers at Calthorpe Community Garden, London, for their assistance with harvest data collection and comments on the application of the CBA. The advice of David Morris, Little Lion Research, on the use of the GMCA CBA tool is also gratefully acknowledged.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A
The FEW-meter (Food-Energy-Water) project is a five-country study funded through the call 'Sustainable Urbanisation Global Initiative (SUGI)/Food-Water-Energy Nexus', jointly established by the Belmont Forum and the Joint Programming Initiative Urban Europe. The project investigates the use of energy, water and other resources on case study farms and gardens in five countries (UK, US, Poland, France and Germany). Data are gathered over two growing seasons (2019 and 2020) to model the resource flows of urban agriculture, allowing the identification of methods to improve efficiency on-farm and at city scale. An online platform for urban food producers will be created to share the knowledge and experience gained within the project. The project seeks to combine physical output indicators with the social benefits offered by gardens, to show how communities can benefit from engagement in growing while producing fruit and vegetables with sustainable methods and resources.
Lipid and hyperglycemia factors in first‐ever penetrating artery infarction, a comparison between different subtypes

Abstract
Background: The pathogenesis and progression of branch atheromatous disease (BAD), which differs from lipohyalinotic degeneration (LD), remain controversial. Few studies have investigated lipid indices and glycometabolic factors in BAD among patients with first-ever penetrating artery infarction (PAI).
Methods: We retrospectively examined acute stroke patients with PAI admitted within 3 days after stroke. All patients underwent diffusion-weighted magnetic resonance imaging (DWI) and magnetic resonance angiography (MRA) and/or computed tomography angiography (CTA). Progression was defined as an increase of 2 points or more in the National Institutes of Health Stroke Scale (NIHSS) score. Patient characteristics and clinical data were statistically analyzed.
Results: BAD and LD were diagnosed in 142 (57%) and 107 (43%) patients, respectively. Patients with BAD had higher low-density lipoprotein cholesterol (LDL-C) than those with LD (p = .013). Elevated LDL-C was related to early neurological deterioration in patients with BAD (p = .045). The percentage of lenticulostriate arterial (LSA) infarction was greater than that of pontine penetrating arterial (PPA) infarction in acute PAI (75.1% vs. 24.9%; p < .001). PPA infarction was more prevalent in the BAD group than in the LD group (34.5% vs. 12.1%, p < .001). Patients with PPA infarction were older at onset than those with LSA infarction in the BAD group (p = .014) and had higher HbA1c concentrations in the LD group (p = .036).
Conclusion: LDL-C may be associated with both the pathogenesis and the progression of intracranial BAD. LSA infarction was the most frequent subtype of PAI. Age at onset and HbA1c appear to be closely associated with PPA infarction in first-ever PAI.

| INTRODUCTION
A small, deep brain infarct caused by occlusion of a single penetrating artery was termed a lacunar infarct in 1965 (Fisher, 1965), and this classical concept persisted for decades. In 1989, on the basis of distinct pathological changes, Caplan refined the theory and proposed a new type of ischemic cerebrovascular disease leading to isolated deep brain infarction: branch atheromatous disease (BAD), referring to occlusion or stenosis of the proximal end of a penetrating artery caused by atherosclerosis (Caplan, 1989). The concept of BAD, as distinct from lacunar infarction caused by lipohyalinotic degeneration (LD) of the distal end of a penetrating artery, has recently been accepted in scientific research and clinical practice. In the Chinese ischemic stroke subclassification launched in 2011, both BAD and LD are classified as penetrating artery disease (PAD) (Gao, Wang, Xu, Li, & Wang, 2011). A better understanding of the risk factors and pathogenetic mechanisms of penetrating artery infarction (PAI) will help in developing more effective therapeutic strategies and secondary prevention. Several recent studies have reported that BAD tends to present with more severe neurological deficits and is more likely to progress than LD (Baumgartner, Sidler, Mosso, & Georgiadis, 2003; Yamamoto et al., 2010, 2011). Risk factors including male sex, diabetes mellitus, and intracranial atherosclerosis have been related to pontine penetrating arterial (PPA) infarction in BAD.
Our previous research reported that the inflammatory factors homocysteine (Hcy) and C-reactive protein (CRP) were associated with the progression and prognosis of BAD. Accumulating data suggest that lipids are central to the development of atherosclerotic plaques, and the oxidation of low-density lipoprotein (LDL) is thought to play a critical role in the initiation of atherosclerosis (Matsuura, Lopez, Shoenfeld, & Ames, 2012; Rost et al., 2001; Tabuchi et al., 2007; Vila, Castillo, Davalos, & Chamorro, 2000; Yilmaz, Arumugam, Stokes, & Granger, 2006); low-density lipoprotein cholesterol (LDL-C) levels have also been associated with increased intracranial atherosclerotic stenosis (ICAS) (Park, Hong, Lee, Lee, & Kim, 2011). However, little attention has been paid to the impact of lipid indices and glycometabolic status in small artery disease, especially in acute penetrating artery infarction.

| Patients
A consecutive series of 1458 inpatients with acute ischemic stroke (≥18 years old, <3 days from onset) whose focal neurologic deficits lasted over 24 hr was included in this study. All patients were admitted to the Department of Neurology, Third Affiliated Hospital of Sun Yat-sen University, from January 2008 to February 2012. Patients with PAD were screened based on the lesions observed using head diffusion-weighted magnetic resonance imaging (DWI; slice thickness 5 mm), magnetic resonance angiography (MRA) and/or computed tomography angiography (CTA). Included cases had to meet the criterion of an isolated infarct in a clinically relevant territory of one penetrating artery, regardless of infarct size (Gao et al., 2011). BAD in the lenticulostriate artery (LSA) territory was defined as a supratentorial lesion >15 mm in diameter or visible on three or more axial slices; BAD in the paramedian pontine artery (PPA) territory was defined as a unilateral lesion extending to the ventral surface of the pons. LD was defined as an infarction <15 mm in diameter in the LSA territory, or as an isolated infarction confined to the pontine parenchyma in the PPA territory (Yamamoto et al., 2011) (Figure 1).

| Exclusion criteria
The exclusion criteria were: (1) prior history of stroke or TIA, other causes of cerebral infarction, or a potential source of cardiac embolism; (2) any degree of stenosis in the parent artery that could be responsible for the infarct; (3) the presence of vulnerable plaques, or a stenosis ≥50% or occlusion, in the corresponding intracranial or extracranial large arteries, such as the middle cerebral artery and internal carotid artery; (4) cortical infarcts, border zone infarcts, or acute multiple infarcts shown on DWI; (5) infarcts not located in the LSA or PPA distributions; (6) a history of thrombolytic therapy or other endovascular interventions; (7) a history of arterial dissection, Moyamoya disease, vasculitis, autoimmune rheumatic disease, malignancy, trauma, coagulopathy, or hematological disorders; (8) a history of long-term (≥1 month) statin therapy before admission; (9) basilar artery diseases such as dissection, aneurysm, hypoplasia, dolichoectasia, embolism, or vasospasm; and (10) incomplete records or loss to follow-up (Figure 2).

| Clinical data
All selected patients were treated with anti-platelet aggregation therapy and other routine treatments; none received thrombolytic therapy or other endovascular interventions.
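The imaging definitions above amount to a simple decision rule. The sketch below encodes them for clarity; the function and argument names are illustrative, and this is not code from the study.

```python
# Decision rule implied by the imaging criteria for BAD vs. LD
# (after Yamamoto et al., 2011); names are illustrative.

def classify_pad(territory: str, diameter_mm: float = 0.0,
                 axial_slices: int = 0, reaches_ventral_pons: bool = False) -> str:
    """Classify a single penetrating artery infarct as 'BAD' or 'LD'."""
    if territory == "LSA":   # lenticulostriate artery (supratentorial)
        return "BAD" if diameter_mm > 15 or axial_slices >= 3 else "LD"
    if territory == "PPA":   # paramedian pontine artery
        return "BAD" if reaches_ventral_pons else "LD"
    raise ValueError("territory must be 'LSA' or 'PPA'")

print(classify_pad("LSA", diameter_mm=18))              # -> BAD
print(classify_pad("PPA", reaches_ventral_pons=False))  # -> LD
```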
A computed tomography scan was performed within 24 hr of admission, and magnetic resonance imaging within 72 hr. MRI investigations included diffusion-weighted imaging (TR 6000 ms/TE 61.5 ms), T2-FLAIR (TR 8802 ms/TE 129 ms), T2-weighted imaging (TR 4800 ms/TE 100 ms) and magnetic resonance angiography (TR 27 ms/TE 6.9 ms), obtained using a GE 1.5 T MR scanner (General Electric, Milwaukee, WI, USA). Fasting (no caloric intake for at least 8 hr) venous blood samples were obtained within 24 hr after admission. Each patient underwent carotid ultrasonography, MRA or CTA, 24-hr Holter monitoring, and other routine examinations during hospitalization. All brain images were independently analyzed by two neuroradiologists blinded to the clinical information. The National Institutes of Health Stroke Scale (NIHSS) was administered daily to trace the disease course over the first 5 days. Progression was defined as an increase of 2 or more points in the NIHSS motor score during observation (Kwan & Hand, 2006). Basic clinical data, including age, gender, and NIHSS scores on admission and within 5 days, were collected from the patients' records (Lind, Vessby, & Sundstrom, 2006).

| Ethics statement
This study was approved by the local Ethics Committee of the Third Affiliated Hospital of Sun Yat-sen University. Informed consent was obtained from all patients or their family members.

| Statistical analysis
Data from the selected patients were compared between groups. We used the t-test for normally distributed variables and the Mann-Whitney U test for non-normally distributed variables; Pearson's χ² test was used for categorical variables. Risk factors in ischemic stroke patients with BAD and LD were analyzed by univariate logistic regression. A p-value <.05 was considered statistically significant. All statistical analyses were performed using SPSS version 16.0 (SPSS Inc., Chicago, IL, USA).

| Demographic and clinical characteristics of patients with PAD
As shown in Table 1, of 1458 consecutive patients with acute ischemic stroke, 249 were included in our study: 210 (84.3%) underwent MRI and 39 underwent CT with CTA. Of these, 142 (57.0%) were diagnosed with BAD and 107 (43.0%) with LD. Compared with the LD group, patients with BAD had significantly higher levels of LDL-C (p = .013) and a higher LDL-C/HDL-C ratio (p = .036), whereas age, gender, HbA1c, and the other serum lipid indices did not differ between the two groups (Table 1).

| Association between LDL-C and progression of BAD and LD
Patients with BAD showed a significantly higher rate of progression (39.4%) than those with LD (9.3%) (p < .001). The concentration of LDL-C was significantly higher in patients with progressive BAD (4.23 ± 1.39 mmol/L) than in those with non-progressive BAD (3.66 ± 1.05 mmol/L) (p = .045), whereas gender, age, HbA1c, and the other lipid indices did not differ significantly between progressive and non-progressive patients in the BAD group (Table 2). Table 3 shows the logistic regression results: in the univariate analysis, an LDL-C level above 4.14 mmol/L (OR = 1.96, p = .01) was associated with progression in the BAD group.
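The group comparisons and univariate logistic models reported here follow the statistical analysis plan above (run in SPSS in the original study). A Python equivalent on synthetic data might look as follows; the dataframe and variable names are illustrative only.

```python
# Sketch of the reported statistical workflow using Python equivalents
# of the SPSS procedures; the synthetic data are illustrative only.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["BAD", "LD"], size=249),
    "ldl_c": rng.normal(3.8, 1.1, size=249),   # mmol/L
    "progression": rng.integers(0, 2, size=249),
})
bad, ld = df[df.group == "BAD"], df[df.group == "LD"]

# Normally distributed continuous variable: two-sample t-test
print(stats.ttest_ind(bad.ldl_c, ld.ldl_c))

# Non-normal alternative: Mann-Whitney U test
print(stats.mannwhitneyu(bad.ldl_c, ld.ldl_c))

# Categorical variable: Pearson chi-square on a 2x2 table
table = pd.crosstab(df.group, df.progression)
print(stats.chi2_contingency(table)[:2])  # chi2 statistic, p-value

# Univariate logistic regression: progression ~ (LDL-C > 4.14 mmol/L)
X = sm.add_constant((df.ldl_c > 4.14).astype(int))
fit = sm.Logit(df.progression, X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios
```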
An NIHSS score at admission above 3 points (OR = 1.865, p = .036) was associated with progression in the LD group (Table 3).

| Association between onset age and HbA1c and subgroups of BAD
PPA infarction was more prevalent in the BAD group than in the LD group (34.5% vs. 12.1%, p < .001). In the BAD group, patients with PPA infarction were older at onset than those with LSA infarction (p = .014). Moreover, among patients with LD, a significantly higher HbA1c concentration was observed in the PPA group (p = .036) (Table 4).

| DISCUSSION
Previous research has shown that ischemic stroke is closely associated with metabolic risk factors, including hyperglycemia, hyperlipidemia, and hyperhomocysteinemia (Clarke et al., 1991). The association of homocysteine levels with the pathogenesis and progression of PAD has been demonstrated (Men et al., 2013). However, data on the impact of lipids and glucose on PAD are limited. In the present study, we focused on lipid indices and glycometabolic status as potential risk factors and investigated the pathogenesis of first-ever penetrating artery infarction. Inflammation has a crucial role in the development of atherosclerosis and in the pathogenesis of stroke (Rost et al., 2001; Vila et al., 2000; Yilmaz et al., 2006). LDL-C is an important pro-inflammatory mediator in the oxidative process: after oxidization, LDL becomes more pro-inflammatory. In the present study, we found that the level of LDL-C and the LDL-C/HDL-C ratio were higher in the BAD group than in the LD group; the increased LDL-C/HDL-C ratio could be attributed mainly to the elevated level of LDL-C. The elevated LDL-C in patients with BAD may indicate a stronger oxidative reaction arising from a different pathogenesis, resulting in larger infarct volume. Our findings suggest that a high level of LDL-C is a significant pathogenetic factor that can differentiate the clinical subtypes of PAD. The exact cause of the increased proportion of progressive stroke in the BAD group remains unclear. We speculate that in BAD, high levels of LDL-C trigger the release of large amounts of pro-inflammatory cytokines, aggravate atherosclerotic stenosis, and contribute to the rupture of atherosclerotic plaques or the expansion of thrombus at the proximal end of the penetrating artery. Furthermore, although there is currently no consensus on whether BAD should be classed as small vessel or large vessel intracranial disease, our results may support the view that the pathogenesis and progression of BAD are more closely related to atherosclerosis and inflammation than those of LD. LD, in fact, can arise from multiple mechanisms, such as lipohyalinosis, hypoperfusion, microatheroma, arteriosclerosis, and cardioembolic occlusion, and this multiplicity of mechanisms may account for the weak association between LDL and the risk of LD. Further research is necessary to elucidate the relationship between BAD and LD. Our finding of a relationship between LDL-C levels and progressive stroke is supported by substantial clinical evidence: it has been reported that the concentration of plasma oxidized LDL-C increases in the acute phase of all types of stroke (Nakase, Yamazaki, Ogura, Suzuki, & Nagata, 2008), indicating that oxidized LDL-C participates in stroke development and progression.
An animal experiment confirmed that an increased concentration of LDL-C in the arterial wall may be an early indication of lesion formation and a necessary step in the pathogenesis of the fatty streak lesion, leading to atherothrombosis (Schwenke & Carew, 1989). We speculate that in the process of ischemic stroke, released pro-inflammatory factors, together with LDL-C above 4.14 mmol/L, cause further injury to the affected artery, accelerating the expansion of thrombus, increasing infarct volume, and promoting neurological deterioration. Accordingly, the LDL-C level of patients with acute ischemic stroke should be strictly controlled with medication. However, the association of lipids with ischemic stroke and its subtypes remains controversial. A previous study reported that triglycerides and non-HDL-C were associated with large artery atherosclerotic (LAA) stroke (Bang, Saver, Liebeskind, Pineda, & Ovbiagele, 2008), and an elevated ApoB/ApoAI ratio was a predictor of ICAS in acute ischemic stroke (Park et al., 2011). In our study, however, no significant difference was found in the ApoB/ApoAI ratio between BAD and LD, and elevated LDL-C may be more efficient for predicting BAD than any other lipid parameter; further studies are required to investigate this relationship. In addition, several studies have reported that hyperglycemia is associated with progression in acute ischemic stroke (Nakase, Yoshioka, Sasaki, & Suzuki, 2013; Tanaka et al., 2013). Our results revealed that the level of HbA1c was not statistically associated with progression, although there was a trend toward progression in BAD; the relatively small sample size may have limited the statistical power of the study. Furthermore, our findings demonstrated that LSA infarction was the most common subtype of PAI and that PPA infarction was more prevalent in BAD. These findings agree with previous studies (Bassetti, Bogousslavsky, Barth, & Regli, 1996; Kataoka, Hori, Shirakawa, & Hirose, 1997) showing that BAD was etiologically the most common explanation for isolated pontine infarctions; the possible mechanism may be related to the specific vasculature and hemodynamics of the pons. It may be reasonable to presume that LSA infarction is caused both by atheromatous changes at the origins or proximal portions of the lenticulostriate arteries and by lipohyalinotic degenerative changes of the distal small perforating arteries, whereas the mechanism of PPA infarction seems to be predominantly atheromatous change at the origins or proximal portions of the pontine penetrating arteries. Li et al. (2013) reported that the level of HbA1c may be associated with stroke severity and progression in brainstem infarctions. Our findings suggest that age at onset and HbA1c are common risk factors for PPA infarction; atherosclerotic changes in small arteries may be altered differently by aging and glycometabolic factors in the territories of the middle cerebral artery and the basilar artery. Previous studies, such as Kim et al. (2012), have reported that diabetic atherosclerosis is among the most common vascular risk factors for posterior circulation ischemia, consistent with our results, although the mechanisms involved remain uncertain. The relationship between age at onset and PPA or LSA infarction has been studied but has yielded conflicting results; Subramanian et al.
(2009) reported that age seems to be closely associated with anterior circulation stroke. There are several limitations to our study. First, the intracranial vessels were not assessed with adequately uniform vascular imaging; the study was therefore unable to compare the frequency of underlying parent artery disease in patients with versus without progression, or in patients with suspected BAD versus LD infarctions. Second, oxidized LDL-C is a better biomarker of a patient's oxidative activity than LDL-C; however, limited by technical conditions, we were not able to measure plasma oxidized LDL-C. Third, because a limited number of patients met our inclusion criteria, subgroups were not set up for validation of the results. Furthermore, bias is inevitable in retrospective studies. In conclusion, our findings support LDL-C as a predictive marker of the pathogenesis and progression of intracranial BAD. LSA infarction was the most frequently observed subtype of PAI, and PPA infarction was more often associated with BAD. Age at onset and HbA1c were the major risk factors favoring PPA infarction in first-ever PAI.
On the generalized Buckley-Leverett equation

In this paper we study the generalized Buckley-Leverett equation with nonlocal regularizing terms. One of these regularizing terms is diffusive, while the other one is conservative. We prove that if the regularizing terms have combined order higher than one, there exists a global strong solution for arbitrarily large initial data. In the case where the regularizing terms have combined order one, we prove the global existence of solutions under a size restriction on the initial data. Moreover, in the case where the conservative regularizing term vanishes, regardless of the order of the diffusion and under certain hypotheses on the initial data, we also prove the global existence of strong solutions and obtain some new entropy balances. Finally, we provide numerics suggesting that, if the order of the diffusion is $0<\alpha<1$, a finite time blow up of the solution is possible.

I. INTRODUCTION
In this paper we study the Buckley-Leverett equation with generalized regularizing terms provided by fractional powers of the laplacian,
$$\partial_t u + \partial_x f(u) = -\nu \Lambda^\alpha u - \mu \Lambda^\beta \partial_t u, \qquad f(u) = \frac{u^2}{u^2 + M(1-u)^2}, \tag{1}$$
with initial data 0 ≤ u(x, 0) = u_0(x) ≤ 1, and where M > 0 is a fixed constant. Here Ω is either Ω = R or Ω = T. Let us immediately emphasize that u_0(x) ≤ 1 is not a smallness condition, since, in applications, u denotes a certain proportion (compare the literature outline below). Equation (1) is a nonlocal regularization of the classical Buckley-Leverett equation
$$\partial_t u + \partial_x f(u) = 0. \tag{2}$$
The nonlinearity in Equation (1) is regularized in two different ways: first by the diffusive term −νΛ^α u, and second by the conservative term −µΛ^β ∂_t u. Equation (2) was derived by Buckley and Leverett in Ref. 4 and has been well studied since then (see LeVeque 27 and Mikelić and Paoli 30). It is used to describe two-phase flow in a porous medium, for example the flow of oil and water in soil or rock; in this situation u represents the saturation of water and M > 0 is the water-over-oil viscosity ratio. Equation (2) is a prototype of a scalar conservation law with non-convex flux.

A. Aim and outline
The purpose of this paper is to study (1). We are mainly interested in the global existence of solutions, in their qualitative behaviour, and in finite time singularities. We detail our results in Subsection I B. Subsection I C contains notation, including the definition of a weak solution, and certain preliminaries. Section II provides new entropy inequalities for the fractional laplacian that are interesting in themselves, and these inequalities are therefore stated in arbitrary dimension d. Sections II-VIII contain the proofs of our results. Finally, in Section IX, we provide numerical results suggesting the existence of finite time singularities for the case 0 < α < 1 with µ = 0. These numerics also suggest that in the critical case α = 1 the solution exists globally, in agreement with the results for the Burgers equation with fractional dissipation by Kiselev et al. 25 and Dong et al. 14 Let us remark that, when the term µΛ^β ∂_t u is added to the equation, there is no evidence of blow-up even for α = β = 0.25. Consequently, our numerics appear to discard a finite time blow-up scenario when µ > 0. To the best of our knowledge, all our results are new.
B. Results
First, let us provide a result concerning the global existence of weak solutions for (1) corresponding to rough initial data, i.e., merely 0 ≤ u_0 ≤ 1 a.e., together with new entropy balances (needed in the existence part of the result, but interesting in themselves).

Proposition 1. Let 0 ≤ u_0 ≤ 1, u_0 ∈ L¹(Ω) ∩ L∞(Ω) be the initial data for (1) with ν > 0, 0 < α < 2, µ = 0 and M > 0. Then there exists a global weak solution. Furthermore, if u is an L²(0,T; H¹(Ω)) solution to (1), then the entropy inequalities (4) and (5) hold.

Let us remark that the terms ∫_0^t ∫_Ω Λ^α u(s) log(u(s)) dx ds provide an L²_t bound on a fractional derivative of the solution. The proof of Proposition 1 is given in Section III. Our next results concern the qualitative behaviour of smooth solutions. In the case Ω = T, we denote by ⟨u_0⟩ the average of u_0.

Proposition 2. Let u be the classical solution to (1) with initial data 0 ≤ u_0 ≤ 1, where ν > 0, 0 < α ≤ 2 and M > 0. Then u(t) decays toward its average; the precise decay estimates are established in Section IV.

Our main results address the problem of global existence of smooth solutions. More precisely, we have results for three cases, depending on the values of the parameters α and β:
1. the subcritical case: the highest space derivative is in the dissipative term, i.e., 1 < max{α, β} ≤ 2. Here we show global existence of smooth solutions with no restrictions on the initial data; compare Theorem 1.
2. the critical case: the transport term exactly balances the regularizing terms, i.e., max{α, β} = 1. Here, for µ > 0 and β = 1, we prove global existence of smooth solutions without any size restriction on the initial data. In the other cases we need certain smallness conditions. Namely, for µ = 0, ν > 0 and α = 1, we obtain global existence of smooth solutions for initial data satisfying a smallness restriction on the lower order norm L∞; this restriction is explicit in terms of ν and M. Finally, in the case µ > 0, α = 1 and 0 < β < 1, we obtain global existence for initial data satisfying a smallness condition in H^{(1+β)/2}. The smallness restriction here is slightly less explicit, but easily computable. See Theorem 2.
3. the supercritical case: the highest space derivative is in the transport term, i.e., 0 ≤ α < 1 and µ = 0. Even here, for Ω = T, we are able to prove global existence of smooth solutions for smooth, periodic initial data satisfying an explicit smallness restriction on the Lipschitz norm; see Theorem 3.

The remaining open problems lie in the critical and supercritical regimes. In particular, our results do not apply to the case max{α, β} < 1 with µ > 0, and there are no large-data global results for the critical case µ = 0, ν > 0, α = 1. In the context of the latter, let us observe that, on one hand, there are certain new methods available for nonlinear problems with nonlocal critical dissipation, like the method of moduli of continuity by Kiselev et al., 25 the fine-tuned De Giorgi method by Caffarelli and Vasseur 6, or the method of nonlinear maximum principles by Constantin and Vicol 10 (see also Constantin et al. 9). On the other hand, our nonlinearity is more complex than the typical ones. Now, let us state the main theorems. First, we study the subcritical case max{α, β} > 1; Theorem 1 asserts global existence and, moreover, for t ≤ T, the solution satisfies suitable energy estimates. For the critical case, let us define the following constants.

Definition 1. Let γ* be a constant, depending on M and ν, chosen small enough for the estimates of Theorem 2 below, and let γ be any fixed number such that 0 < γ < γ*.
Next, let C_S be the Sobolev constant corresponding to the embedding H^{(1+β)/2} ↩→ L∞. We have

Theorem 2. Let 0 ≤ u_0 ≤ 1, u_0 ∈ H^s(Ω), s ≥ 1, be the initial data for (1) with M > 0. Then (1) has a global solution u(t) ∈ C([0,T], H^s(Ω)) ∩ L²(0,T; H^{s+0.5}(Ω)) for all T < ∞ that satisfies the energy balance, under any of the following conditions:
(i) ν ≥ 0, 0 ≤ α ≤ 2 and µ > 0, β = 1 (conservative regularization, with no smallness conditions on the data);
(ii) ν > 0, α = 1, µ = 0 and the initial data satisfies the smallness condition (6); in this case the solution satisfies the maximum principle ∥u(t)∥_{L∞} ≤ ∥u_0∥_{L∞};
(iii) ν > 0, α = 1, µ > 0, 0 < β < 1 and the initial data satisfies a smallness condition in H^{(1+β)/2}; then the solution satisfies the maximum principle ∥u(t)∥_{L∞} ≤ ∥u_0∥_{L∞}.

Remark 1. The fact that the derivative of the flux f is bounded independently of the value of u allows for a global result relying on a condition involving M, ν, and µ. However, we are interested in results that deal with every possible value of the physical parameters present in the problem. In our opinion, there are two reasons, at least in the case µ = 0, why the smallness condition (6) may be seen as a rather mild restriction. The first is that the size restriction affects a lower norm, merely L∞, keeping the higher seminorms as large as desired. The second is that, given M and ν, the constant γ* can be easily computed; for instance, if we further assume γ* ≤ 1, the expression for γ* is explicit in terms of M and ν.

The last case, namely 0 < α < 1, is harder because the leading term in the equation is the transport term. However, under certain conditions, we can prove the global existence of solutions. Before we can state the relevant result, we need some notation.

Definition 2. Let γ and M be given positive constants, and define the quantity Σ(γ) accordingly; next, let γ* be a small enough constant.

Then we have the following result (Theorem 3): for Ω = T and smooth periodic initial data satisfying the smallness restriction (10) on the Lipschitz norm, the solution is global and satisfies the energy balance. In this theorem we impose domain restrictions and stronger smallness assumptions. The domain restrictions are due to the better behavior of the fractional laplacian in a bounded domain; the size restrictions on the data again concern a lower order (Lipschitz) norm, with a rather explicit constant. Finally, we obtain the standard finite time blow up for certain initial data in some Hölder seminorm.

Proposition 3. Fix a constant M > 0 and consider µ = 0, min{ν, α} = 0. Then there exist 0 ≤ u_0 ≤ 1, u_0 ∈ H²(Ω), and T* < ∞ such that the corresponding solution u(t) of Equation (2) has a finite time singularity in C^δ for 0 < δ ≪ 1, i.e.,
$$\limsup_{t \to T^*} \|u(t)\|_{C^\delta} = \infty.$$

The proof of this result is obtained by a virial-type argument. We remark that it can also be obtained by means of pointwise arguments (see Castro and Córdoba 7 for an application of such pointwise arguments to blow up). Virial-type arguments have been used for several transport equations, even in the case of nonlocal velocities (see Córdoba et al., 12 Dong et al., 14 Li and Rodrigo, 28 and Li et al. 29). In our case the transport term is highly nonlinear, and this method fails in the case of viscosity ν > 0, 0 < α ≪ 1.

Singular integral operators
We denote the usual Fourier transform of u by û. Given a function u : Ω → R, we write Λ^α u = (−∆)^{α/2} u for the fractional laplacian, defined on the Fourier side by $\widehat{\Lambda^\alpha u}(\xi) = |\xi|^\alpha \hat{u}(\xi)$. This operator admits a kernel representation, both when the function is periodic and when the function is flat at infinity, with a normalizing constant expressed in terms of the classical Γ function.
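For the reader's convenience, the standard definitions behind these statements can be written out; the following is a reconstruction of the usual formulas for the fractional laplacian rather than the paper's own display.

```latex
% Kernel representations of the fractional laplacian (standard formulas);
% the normalizing constant is the usual one, and for d = 1 it gives the
% constant written C_{alpha,1} in the proofs below.
\[
  \Lambda^{\alpha}u(x) = c_{d,\alpha}\,\mathrm{P.V.}\int_{\mathbb{R}^{d}}
    \frac{u(x)-u(y)}{|x-y|^{d+\alpha}}\,dy
  \quad \text{(flat at infinity)},
\]
\[
  \Lambda^{\alpha}u(x) = c_{d,\alpha}\,\mathrm{P.V.}\int_{\mathbb{T}^{d}}
    \sum_{\gamma\in\mathbb{Z}^{d}}
    \frac{u(x)-u(y)}{|x-y+2\pi\gamma|^{d+\alpha}}\,dy
  \quad \text{(periodic)},
\]
\[
  c_{d,\alpha} = \frac{2^{\alpha}\,\Gamma\!\left(\tfrac{d+\alpha}{2}\right)}
                      {\pi^{d/2}\,\bigl|\Gamma\!\left(-\tfrac{\alpha}{2}\right)\bigr|}.
\]
```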
Functional spaces
We write H^s(Ω^d) for the usual L²-based Sobolev spaces, equipped with their standard norms.

Entropy functionals
For a given function u ≥ 0, we use three entropy functionals; the first two share an associated Fisher information, and the third comes with its own Fisher information.

Notation
Recall that we denote the mean of a function by ⟨·⟩. We introduce the flux f from (1) and the associated transport coefficient a (see (21)). Finally, let us introduce notation for the mollifiers: for ϵ > 0, we write J_ϵ for the heat kernel at time t = ϵ.

Weak solutions to (1) and their local existence
We start this section with Definition 3, the definition of a weak solution; here µ, ν ≥ 0 and 0 < T < ∞ is a fixed parameter. Local existence is obtained via approximate problems with initial data u_{ϵ,δ}(0) = u_0: standard energy estimates give uniform bounds, and we can then pass to the limits ϵ, δ → 0. The proof of the continuation criterion can be obtained by energy methods.

II. THE ENTROPY INEQUALITIES
In this section we prove three entropy inequalities that, in our opinion, may be of independent interest.

Proposition 4. Let u be a given function and let 0 < α < 2, 0 < ϵ < α/2 be two fixed constants. Then the entropy inequalities (23) and (24) hold, provided that the right-hand sides are meaningful.

Proof. Let us fix i = 1. First, we symmetrize the integral. Since (a − b) log(a/b) ≥ 0, every term in the series is positive; i.e., for every γ ∈ Z^d, the corresponding integral is nonnegative. Let us first consider the case 0 ≤ u ∈ L¹. The resulting integral is similar to the Riesz potential, and due to the positivity of u we obtain the desired bound. The case 0 ≤ u ∈ L∞ was first proved by Bae and Granero-Belinchón; 2 for the sake of completeness, we include a sketch of the proof using (24). The proof for the case i = 2 is similar.

III. PROOF OF PROPOSITION 1: WEAK SOLUTIONS
We prove the result for Ω = T; the same proof can be adapted to Ω = R. We consider the regularized problems with regularized initial data u_ϵ(x, 0) = J_ϵ * u_0(x) + ϵ, with ϵ ≤ 1/2. These approximate problems have global classical solutions by Theorem 3.1 in Ref. 24. Consequently, we focus on obtaining the appropriate ϵ-uniform bounds. By assumption, 0 ≤ u(x, 0) ≤ 1. We apply the same technique as Córdoba and Córdoba 11 (see also Refs. 1, 2, 5, 7, 8, 13, and 16-19 for more details and applications to other partial differential equations), i.e., we track the evolution of the maximum M_ϵ(t) and the minimum m_ϵ(t) of u_ϵ. Due to the smoothness of u_ϵ, both M_ϵ(t) and m_ϵ(t) are Lipschitz, and consequently almost everywhere differentiable. Hence, using the kernel expression for Λ^α, we obtain the ϵ-uniform bounds 0 ≤ ϵ ≤ u_ϵ(x,t) ≤ 1.5. Integrating (25) in space and estimating each term, we obtain an ϵ-uniform estimate. We then apply (23) from Proposition 4 together with u_ϵ(x,t) ≤ 1.5; consequently, we have ϵ-uniform bounds and the first entropy inequality (4). For the second entropy inequality (5), we handle the two resulting integrals separately; collecting all these computations, we conclude.

IV. PROOF OF PROPOSITION 2: DECAY ESTIMATES
Let us first prove the periodic case. The L¹ norm is preserved, and consequently the mean propagates. We again apply the technique of tracking M(t) and m(t). The smoothness needed to differentiate M(t) is, for this proposition, an assumption. Since x, y ∈ T, we have |x − y|^{1+α} ≤ (2π)^{1+α}.
Hence, using (11) and (13), we obtain a differential inequality; integrating this ODI gives the decay estimate. Let us turn our attention to the flat at infinity case. Again the L¹ norm propagates. We take a positive number r > 0 (to be specified below) and split the estimate accordingly; recalling (13), the bound holds both when u(x_t) > 0 and when u(x_t) = 0. With the same argument as in the periodic case, and using the explicit value of C_{α,1}, we arrive at the stated decay.

V. PROOF OF THEOREM 1: GLOBAL SOLUTIONS FOR max{α, β} > 1
Equipped with Lemma 1 and its proof, we can focus on the energy estimates that ensure global existence (rigorously, this should be done at the level of the regularized problem); notice that we also have a global bound. We split the proof into three parts: the first is devoted to the purely parabolic case µ = 0, while steps 2 and 3 treat the cases µ > 0, β < 1 < α and µ > 0, α < 1 < β, respectively. In each step we test the equation and integrate; testing against −∂²_x u, we perform energy estimates using interpolation inequalities. For α > 1 the dissipative term absorbs the nonlinear contributions, and we conclude as in Step 1.

VI. PROOF OF THEOREM 2: GLOBAL SOLUTION FOR max{α, β} = 1
Step 1: Case ν > 0, α = 1, µ = 0. We treat the case s = 1, the other cases being analogous. Testing (1) against Λu and using self-adjointness gives Equations (28)-(31). Under the smallness hypothesis we conclude for a small enough 0 < δ; notice that this δ depends only on M, u_0, and ν. Next, testing (1) against −∂²_x u and integrating by parts, we get (33)-(35); integrating by parts in (34), the first inequality also uses (36) and the second an interpolation, and, by Gronwall's inequality together with (37), we obtain the desired bound. This ends the proof of case (ii).
Step 2: Case ν > 0, α = 1, µ > 0, β < 1. In this case we cannot use pointwise methods, so we do not immediately get ∥u(t)∥_{L∞} ≤ ∥u_0∥_{L∞}. Estimate (27) implies a global bound in H^{β/2}, but this bound is too weak to give a pointwise estimate for u. However, testing (1) against Λu and using the definitions of γ and γ*, we proceed as in Step 1 and obtain a global bound. We then test against −∂²_x u and conclude as in Step 1. Case (iii) is proved.
Step 3: Case ν ≥ 0, µ > 0, β = 1. In this case, (27) implies a global bound in H^{0.5}. Testing (1) against Λu and using (8), we can apply Gronwall's inequality to get a global bound. We then test against −∂²_x u and conclude as in Step 1. Case (i) is proved.

VII. PROOF OF THEOREM 3: GLOBAL SOLUTION IF 0 < α < 1 AND µ = 0
We consider the case s = 2, the other cases being similar. Let x_t denote the point where ∂_x u attains its maximum. With an argument similar to the proof of Proposition 2 (see also Ref. 11), using the kernel expressions (11) and (13) together with the smallness choice (10) and ∥u(t)∥_{L∞} ≤ ∥u_0∥_{L∞}, we control ∂_x u(x_t, t). Similarly, letting x_t denote the point where ∂_x u attains its minimum, the same kernel expressions and the same argument control ∂_x u(x_t, t) when it is negative. We then test Equation (1) against ∂⁴_x u and integrate by parts, obtaining a bound of the form
$$\|u(t)\|_{H^2}^2 \leq \|u_0\|_{H^2}^2 \, e^{c(u_0, M)t}.$$

VIII. PROOF OF PROPOSITION 3: FINITE TIME SINGULARITIES
First, we study the case ν = 0.
Let us take u_0 such that u_0 ≥ 0 and u_0(0) = 0. We argue by contradiction: assume that there is a global C² solution u(t) corresponding to u_0. Recalling the expression for a given in (21), we define the characteristic curve y(t) as the solution to y′(t) = a(u(y(t), t)), y(0) = 0, and set v(x,t) = u(x + y(t), t). We obtain the ODI
$$\frac{d}{dt} J(t) \geq \frac{\delta}{1+M} J(t)^2,$$
and hence the blow up of J(t) in finite time T* = T*(δ, u_0, M). This proves the case ν = 0; the proof of the case ν > 0 and α = 0 is analogous and can easily be adapted from here.

IX. NUMERICAL SIMULATIONS
In this section we present numerical simulations suggesting a finite time blow up in the case ν > 0, 0 < α < 1. To approximate the solution, we discretize in space using the fast Fourier transform with N = 2^14 spatial nodes. The main advantage of this scheme is that the differential operators are multipliers on the Fourier side. Once the spatial part has been discretized, we use a Runge-Kutta scheme to advance in time. In our simulations, we consider fixed initial data and the values M = ν = 0.5 and µ = 0, and approximate the solution of (1) for four different values of the parameter 0 < α ≤ 1. Interestingly, we observe (see Figure 4) that, even for small values of α and β, in the case µ > 0 there is no evidence of finite time singularities.
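The scheme described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the resolution is reduced from 2^14 for speed, the Gaussian-bump initial data is an assumption (the paper's exact initial condition is not reproduced here), and no de-aliasing or adaptive time stepping is attempted.

```python
# Pseudospectral sketch for (1) with mu = 0: Fourier collocation in space
# (differential operators act as multipliers), classical RK4 in time.

import numpy as np

N, L = 2**10, 2 * np.pi
M, nu, alpha = 0.5, 0.5, 0.75
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi       # angular wavenumbers

def rhs(u):
    f = u**2 / (u**2 + M * (1 - u)**2)            # Buckley-Leverett flux
    fx = np.fft.ifft(1j * k * np.fft.fft(f)).real
    diff = np.fft.ifft(np.abs(k)**alpha * np.fft.fft(u)).real
    return -fx - nu * diff                        # du/dt

u = 0.5 * np.exp(-10 * (x - np.pi)**2)            # illustrative data, 0 <= u0 <= 1
dt, steps = 1e-4, 5000
for _ in range(steps):                            # RK4 time stepping
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Growth of the Lipschitz seminorm is the blow-up indicator monitored here.
print("max |u_x| at t = 0.5:",
      np.abs(np.fft.ifft(1j * k * np.fft.fft(u))).max())
```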
Persisting neuroendocrine abnormalities and their association with physical impairment 5 years after critical illness

Background: Critical illness is hallmarked by neuroendocrine alterations throughout ICU stay. We investigated whether the neuroendocrine axes recover after ICU discharge and whether any residual abnormalities are associated with physical functional impairments assessed 5 years after critical illness.
Methods: In this preplanned secondary analysis of the EPaNIC randomized controlled trial, we compared serum concentrations of hormones and binding proteins of the thyroid axis, the somatotropic axis and the adrenal axis in 436 adult patients who participated in the prospective 5-year clinical follow-up and provided a blood sample with those in 50 demographically matched controls. We investigated independent associations between any long-term hormonal abnormalities and physical functional impairments (handgrip strength, 6-min walk distance, and physical health-related quality-of-life) using multivariable linear regression analyses.
Results: At 5-year follow-up, patients and controls had comparable serum concentrations of thyroid-stimulating hormone, thyroxine (T4), triiodothyronine (T3) and thyroxine-binding globulin, whereas patients had higher reverse T3 (rT3, p = 0.0002) and a lower T3/rT3 ratio (p = 0.0012) than controls. Patients had comparable concentrations of growth hormone, insulin-like growth factor-I (IGF-I) and IGF-binding protein 1 (IGFBP1), but higher IGFBP3 (p = 0.030), than controls. Total and free cortisol, cortisol-binding globulin and albumin concentrations were comparable between patients and controls. A lower T3/rT3 ratio was independently associated with lower handgrip strength and shorter 6-min walk distance (p ≤ 0.036), and a higher IGFBP3 was independently associated with higher handgrip strength (p = 0.031).
Conclusions: Five years after ICU admission, most hormones and binding proteins of the thyroid, somatotropic and adrenal axes had recovered. The residual long-term abnormality within the thyroid axis was identified as a risk factor for long-term physical impairment, whereas that within the somatotropic axis may be a compensatory, protective response. Whether targeting the residual abnormality in the thyroid axis could improve the long-term physical outcome of these patients remains to be investigated.
Trial registration: ClinicalTrials.gov: NCT00512122, registered on July 31, 2007 (https://www.clinicaltrials.gov/ct2/show/NCT00512122).

Supplementary Information: The online version contains supplementary material available at 10.1186/s13054-021-03858-1.

Background
Critical illness is hallmarked by pronounced neuroendocrine alterations, of which those within the thyroid axis, the somatotropic axis and the adrenal axis have been most extensively studied [1]. The typical responses follow a biphasic pattern, distinguishing between the acute and the prolonged phase of critical illness [1,2]. In the acute phase (the first hours to days), the anterior pituitary actively secretes hormones, with a transient rise in thyroid-stimulating hormone (TSH) and an increase in growth hormone [1-5]. An acute rise in adrenocorticotropic hormone (ACTH) has been described in patients with sepsis or multiple trauma [6], but no such rise was observed in heterogeneous general ICU patients [7,8].
The active pituitary hormone secretion occurs in the face of altered peripheral hormone metabolism, altered target organ sensitivity, and altered hormone binding proteins [4][5][6][7][8][9][10][11][12][13]. These alterations reduce the availability of most anabolic effector hormones, including triiodothyronine (T3) and insulin-like growth factor-I (IGF-I), while the availability of the catabolic stress hormone cortisol increases. When illness is prolonged beyond the first few days, the neuroendocrine axes are uniformly suppressed, with low target organ hormone levels or, in the case of cortisol, insufficiently elevated or normal levels [1,2,5,7,10,14]. This suppression during the prolonged phase of illness is of central/hypothalamic origin [1,2,15]. Whereas the neuroendocrine responses in the ICU have been well documented, data after ICU discharge are scarce and mostly limited to patients who suffered from brain damage due either to traumatic brain injury (TBI) or to brain surgery. Although TBI-associated neuroendocrine disturbances often resolve, persistent hypopituitarism remains present in many patients up to years after the insult, associated with poor recovery and worse long-term outcome (e.g. cognitive impairment, decreased exercise capacity, poor quality-of-life) [16][17][18]. Likewise, survivors of brain tumors show a high risk of hypopituitarism and need for hormone replacement therapy many years later [19,20]. Heterogeneous prolonged critically ill adult patients showed a uniform rise in ACTH and cortisol to supra-normal levels from ICU discharge to one week later [14]. In children, salivary cortisol levels were normal months to years after ICU admission [21,22]. In the absence of other data, it remained unclear whether the neuroendocrine abnormalities that are present in general adult ICU patients recover in the long term. In this study of patients who were followed up 5 years after ICU admission for heterogeneous diagnoses, we compared hormonal parameters within the thyroid axis, the somatotropic axis and the adrenal axis with those of demographically matched controls, and investigated whether any long-term neuroendocrine abnormalities in former ICU patients associate with long-term physical functional impairments.

Study design and participants

This is a preplanned secondary analysis of patients included in the EPaNIC study and its long-term follow-up. The EPaNIC study randomly allocated 4640 adult critically ill patients who were nutritionally at risk (score of 3 or more on the Nutritional Risk Screening scale) to initiation of supplemental parenteral nutrition (PN) completing insufficient enteral nutrition (EN) within 48 h (early-PN), or to withholding of supplemental PN in the first week of intensive care (late-PN) [23]. All patients received EN as soon as possible, insulin infusions to maintain normoglycemia, and parenteral trace elements, minerals and vitamins. Patients were not eligible for participation in the EPaNIC study if younger than 18 years. A subgroup of the patients had been assessed for long-term morbidity, 5 years after ICU admission, during hospital or home visits [24]. For that follow-up study, all long-stay patients in ICU for at least 8 days were eligible, whereas for feasibility reasons only a random subset of short-stay patients in ICU for fewer than 8 days were eligible [24]. The subgroup of short-stay patients was a random, computer-generated "3 out of 10" sample, weighted within admission diagnostic categories to a distribution similar to that among long-stay patients.
Patients suffering from conditions that could confound the morbidity endpoints had been excluded (n = 128). Such conditions comprised pre-existing neuromuscular disorders or inability to walk without assistance before ICU admission, or other physical disabilities present before follow-up potentially confounding morbidity endpoints (cardiac assist device, pulmonary resection, psychiatric disease, dementia, vegetative state, residence in hospital/rehabilitation center/nursing home). The total 5-year follow-up cohort consisted of 674 patients, among which 398 short-stay and 276 long-stay patients (Fig. 1). As controls, 50 individuals had been recruited via primary care givers and outpatient clinics, with the only exclusion criteria being a previous ICU admission or conditions that could confound the morbidity endpoints (i.e. neuromuscular disorders, inability to walk, or other physical disabilities).

[Fig. 1 legend: Flowchart of study participants and study design. The major focus of this study is on the patients who participated in the 5-year morbidity follow-up of the EPaNIC study and for whom a serum sample had been collected at this time point. A small subgroup of these patients had also participated at one or more earlier time points, 1, 2, 3 or 4 years post-ICU. (a) For feasibility reasons, only a random subset of short-stay patients in ICU for fewer than 8 days were eligible for the 5-year EPaNIC follow-up study [24]. The subgroup of short-stay patients was a random, computer-generated "3 out of 10" sample, weighted within admission diagnostic categories to a distribution similar to that among long-stay patients in ICU for at least 8 days (who were all eligible). Of the short-stay patients, 1721 were not in that random selection. The total 5-year follow-up cohort consisted of 398 short-stay and 276 long-stay patients. (b) Of the eligible patients, 275 were subsequently excluded for meeting one or more exclusion criteria. These were patients with pre-ICU neuromuscular disorders, patients unable to walk without assistance prior to ICU admission, or patients with other disabilities present before follow-up potentially confounding morbidity endpoints (i.e. cardiac assist device, pulmonary resection, psychiatric disease, dementia, vegetative state, residence in hospital/rehabilitation center/nursing home); patients who could not be contacted; patients who died after five years post-ICU before the planned testing; patients for whom the time window had passed (the predefined time window for the 5-year follow-up had been set at 5 ± 0.5 years after ICU admission); or patients for whom there was a language barrier [24]. ICU: intensive care unit.]

Of the patients who participated in the 5-year follow-up study, 436 provided a blood sample during the follow-up visit, among which 265 with a short stay and 171 with a long stay in ICU (Fig. 1). All 50 controls had provided a blood sample. Some patients had also participated and donated a blood sample in earlier follow-up studies (1-year follow-up: n = 13, 2-year follow-up: n = 25, 3-year follow-up: n = 50, 4-year follow-up: n = 54). Serum was extracted from the blood samples and stored at −80 °C. Institutional Review Board approval of the study and of the consent forms was obtained (ML4190). All patients or their next-of-kin provided written informed consent for participation in the EPaNIC study, and all former ICU patients and all controls provided written informed consent for participation in the follow-up study.
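As an aside on the "3 out of 10" sampling procedure described above, a stratified subsample of this kind can be drawn as in the following minimal sketch. This is illustrative only, not the study's actual selection code, and the field names are hypothetical:

    import random
    from collections import Counter, defaultdict

    def stratified_subsample(short_stay, long_stay, fraction=0.3, seed=1):
        """Draw ~'3 out of 10' of the short-stay patients so that the sampled
        diagnostic-category distribution resembles that of the long-stay group
        (illustrative sketch only; field names are hypothetical)."""
        rng = random.Random(seed)
        n_total = round(fraction * len(short_stay))
        target = Counter(p["category"] for p in long_stay)
        pools = defaultdict(list)
        for p in short_stay:
            pools[p["category"]].append(p)
        sample = []
        for cat, count in target.items():
            quota = round(n_total * count / len(long_stay))
            pool = pools.get(cat, [])
            sample.extend(rng.sample(pool, min(quota, len(pool))))
        return sample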
The EPaNIC study protocol and primary results on short-and longterm outcomes of the participating patients have been published [23][24][25][26][27]. Free cortisol was converted from µmol/l to µg/dl by multiplying by 1000 and dividing by 27.59. Apart from the serum samples available for the patients at 5-year follow-up and at the intermediary follow-up moments, we also investigated all patient serum samples available upon ICU admission, at day 4 and 7 if still in ICU, and at the last ICU day to document time series up until the 5-year follow-up. Measures of physical function At 5-year follow-up, several measures of physical functional capacity had been evaluated [24,25]. Handgrip strength was measured with a hydraulic handgrip dynamometer (Jamar Preston, Jackson, MI), with values expressed as percent of predicted values for sex and age. The 6-min walk distance was used as a measure of exercise capacity, expressed as percent of predicted values taking into account sex, age, height and weight, and with imputation of a zero-value for patients unable to perform the test due to physical limitations. Physical functioning was assessed with the physical component score (PCS) of the Medical Outcomes Report-Short Form 36 (SF36) health-related quality-of-life questionnaire (score range 0-100, higher values indicating better performance). Statistical analyses Characteristics and physical outcomes of patients and controls are reported as numbers (frequencies) or medians (interquartile ranges), and were compared with Chi-square/Fisher-exact or Mann-Whitney U tests. Outcomes were also studied with multivariable linear regression analysis, adjusted for age, sex, and BMI, reported as β-estimates (95% CIs). The evolution within former ICU patients of the serum concentrations of hormones and binding proteins from the last ICU day to the 5-year follow-up moment was assessed with repeated-measures analysis of variance (ANOVA). The comparison of these hormonal parameters between former ICU patients at 5-year follow-up and matched controls was performed with Student t test. For these analyses, not-normally distributed data were transformed to a near-normal distribution (square root or square root-square root transformation as specified in the figure legends). Control subjects or patients who during ICU stay or at follow-up received hormones interfering with the measurements of the hormonal parameters of the respective axes were excluded from these analyses. Values at 1-, 2-, 3-and 4-years post-ICU, for those patients who had donated a sample at the respective time points, were also visualized for illustrative purposes only in view of the small sample sizes. Likewise, values upon ICU admission and on ICU day 4 and 7 were plotted to verify whether the patients showed the typical critical illness-induced neuroendocrine changes during ICU stay, thus assessing representativeness of the cohort. In exploratory analyses, we investigated whether we could identify illness-associated or post-ICU factors that may independently associate with the concentrations of the hormonal parameters at 5-year follow-up. 
Therefore, we performed multivariable linear regression analyses adjusting for demographics (sex, and age and BMI at 5-year follow-up), randomization to late-PN or early-PN, risk of malnutrition, type and severity of illness, a sepsis diagnosis upon ICU admission, duration of critical illness (dichotomized as an ICU stay shorter than 8 days or at least 8 days [24]), history of diabetes or malignancy, medications taken chronically at follow-up, and need for hospital readmission between ICU discharge and follow-up. We also performed a stratified analysis comparing patients with a duration of critical illness shorter than 8 days or at least 8 days, in univariable analyses and in multivariable analyses adjusted for demographics. To assess whether any long-term neuroendocrine abnormalities are independently associated with physical impairments in ICU survivors at 5-year follow-up, we performed multivariable linear regression analyses among the former ICU patients, entering the hormones or binding proteins that were different for patients and controls at 5-year follow-up as variables in the models, adjusting for age, sex, and BMI. Patients who at the time of follow-up received hormone treatments interfering with the measurements for the respective axes were excluded from these analyses. Statistical analyses were performed with JMP® Pro 15.1.0 (SAS Institute, Cary, NC). Two-sided p values < 0.05 were considered to indicate statistical significance. No corrections for multiple comparisons were done.

Results

Age, sex and BMI distributions of the 436 patients who participated in the 5-year follow-up study were similar to those of the 50 controls (Table 1). Patient characteristics upon ICU admission and ICU outcomes are shown in Table 2. By design, this study focused on a subgroup of survivors, who were overall younger, had fewer comorbidities, were less severely ill upon ICU admission, and showed fewer complications during ICU stay as compared with non-surviving patients (Additional file 1: Table S1). Inherent to the study design, the studied patient cohort was relatively enriched in long-stay patients when compared with the original EPaNIC cohort [23], thus presenting with more severe illness upon ICU admission and suffering from more complications during ICU stay as compared with the cohort of other surviving patients (Additional file 2: Table S2). Among the participants in the 5-year physical outcome study [24], blood samples were only drawn from patients who were able to come to the hospital, who were younger and had fewer comorbidities than those who were examined at home (Additional file 3: Table S3).

Five-year impact of critical illness requiring ICU admission on the neuroendocrine axes

Thyroid axis

Thirty-three patients and 2 controls treated with thyroid hormone in ICU or at follow-up were excluded for this analysis. Of the excluded patients, 3 received thyroid hormone treatment only in ICU, 15 were on thyroid treatment in ICU and at follow-up, and the other 15 only at follow-up. During ICU stay, the studied patients revealed the typical low-normal serum TSH and low T4, T3 and TBG concentrations, whereas rT3 was high, resulting in a low T3/rT3 ratio (Fig. 2). When compared with the last ICU day, TSH concentrations of former ICU patients assessed at 5-year follow-up had remained stable, whereas T4 and T3 had increased, rT3 had decreased, and T3/rT3 had increased. At 5-year follow-up, serum TSH, T4, T3 and TBG concentrations of patients were comparable to those in controls.
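The covariate-adjusted models described above translate directly into any regression framework. The sketch below is illustrative only (the study itself used JMP Pro 15.1.0, not Python, and the file and column names here are hypothetical):

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical layout: one row per former ICU patient, with outcome
    # (handgrip strength), exposure (T3/rT3 ratio) and adjustment covariates.
    df = pd.read_csv("followup_5y.csv")                  # hypothetical file name
    X = sm.add_constant(df[["t3_rt3", "age", "sex", "bmi"]])
    fit = sm.OLS(df["handgrip_pct_pred"], X, missing="drop").fit()
    # Beta-estimates with 95% confidence intervals, as reported in the paper
    print(fit.params)
    print(fit.conf_int(alpha=0.05))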
In contrast, rT 3 was higher (p = 0.0002) in former ICU patients than in controls, with rT3 concentrations even being higher than the upper normal reference range for 46.3% of the patients. Consequently, T 3 /rT 3 (p = 0.0012) was lower in former ICU patients than in controls. Somatotropic axis Three patients on chronic growth hormone-releasing hormone (GHRH) or somatostatin analogue treatment at follow-up were excluded for this analysis. During ICU stay, the studied patients revealed the typical high concentrations of growth hormone and IGFBP1 and low IGF-I and IGFBP3 concentrations (Fig. 3). From the last ICU day toward the 5-year follow-up, growth hormone and IGFBP1 had decreased, and IGF-I and IGFBP3 had increased (p < 0.0001). Mostly, concentrations observed at 5-year follow-up were already reached at earlier follow-up time points 1 to 4 year after ICU admission. At 5-year follow-up, former ICU patients had similar serum concentrations of growth hormone, IGF-I and IGFBP1, but higher IGFBP3 concentrations (p = 0.030) than the control group, with 3.2% of patients even exceeding the upper normal reference range of IGFBP3. Adrenal axis Patients on corticosteroid treatment in ICU or at follow-up (n = 151) were excluded for this analysis. Of the excluded patients, 104 received corticosteroid treatment only in ICU, 44 were on corticosteroid treatment in ICU and at follow-up and the other 3 only at follow-up. The studied patients revealed the typical high total and free cortisol and low CBG and albumin concentrations beyond the ICU admission day until ICU discharge (Fig. 4). From the last ICU day toward 5-year follow-up of former ICU patients, total and free cortisol concentrations had decreased, whereas CBG and albumin concentrations had increased (p < 0.0001), with similar changes already present at earlier follow-up time points 1 to 4 year after ICU admission. At 5-year follow-up, serum total and free cortisol, CBG and albumin concentrations of former ICU patients were comparable to those of controls. Factors associated with hormonal parameters at 5-year follow-up Some occasional associations were found with type of critical illness requiring ICU admission or medications taken at follow-up (Additional file 4: Table S4). Severity of illness was only associated with growth hormone and IGFBP1 concentrations at follow-up. Sepsis upon admission, randomization to timing of initiating supplemental parenteral nutrition in ICU, or prolonged need of intensive care were not independently associated with any of the hormonal parameters. Also a stratified analysis for duration of critical illness did not reveal any significant difference in the hormonal parameters of patients who depended on intensive care for less than 8 days or at least 8 days (Additional file 5: Fig. S1). Association of residual neuroendocrine abnormalities at 5-year follow-up with physical function As documented for the original patient cohort [24], also the presently studied subgroup of former ICU patients with a serum sample available at 5-year follow-up showed long-term impairment of physical function as compared with matched controls, as evidenced by less handgrip strength, shorter 6-min walk distance, and worse selfreported physical quality-of-life as revealed by the SF36 PCS, both in univariable and multivariable analyses adjusted for age, sex and BMI (Table 1). 
Since we showed that T3/rT3 and IGFBP3 in patients at 5-year follow-up were different from those in controls, we next investigated whether T3/rT3 and IGFBP3 were associated with the 5-year physical impairments of the former ICU patients, after exclusion of patients on chronic treatment with thyroid hormone, GHRH or somatostatin analogues (Table 3). Among former ICU patients, a lower T3/rT3 ratio at 5-year follow-up was independently associated with less handgrip strength for the dominant (p = 0.036) and non-dominant hand (p = 0.030) and with a shorter 6-min walk distance (p = 0.014). More specifically, for every standard deviation decrease in T3/rT3, grip strength decreased by 2.4 kg for the dominant hand and by 2.8 kg for the non-dominant hand, and the 6-min walk distance decreased by 3 m. A lower T3/rT3 ratio was not independently associated with worse physical quality-of-life (p = 0.13). A higher IGFBP3 concentration at 5-year follow-up was independently associated with more handgrip strength for the dominant hand (p = 0.031), with an increase in grip strength of 2.2 kg for every standard deviation increase in IGFBP3. The IGFBP3 concentration was not independently associated with the other functional outcomes.

Discussion

Patients admitted to the ICU develop typical neuroendocrine changes within the thyroid axis, the growth hormone axis and the adrenal axis in response to critical illness, with pronounced disturbances remaining present until the day of ICU discharge. In this study, we demonstrated that 5 years after critical illness, most of these neuroendocrine abnormalities had normalized, with the exception of rT3 concentrations, which remained supranormal, resulting in persistently low T3/rT3 ratios, and of IGFBP3 concentrations, which rose from subnormal levels in the ICU to supranormal levels at 5-year follow-up. The lower T3/rT3 ratios observed in former ICU patients were independently associated with several measures of worse long-term physical capacity, whereas the higher IGFBP3 concentrations were independently associated with better physical capacity (handgrip strength). Studies investigating neuroendocrine function after recovery from critical illness outside the setting of TBI or brain surgery are scarce if not absent. It has been shown that former ICU patients who were transferred to long-term care facilities still reveal the typical non-thyroidal illness syndrome, which is not unexpected given that such patients have not fully recovered [29]. However, information on the thyroid axis in fully recovered former ICU patients was hitherto lacking. We here observed normal serum TSH, T4, T3, and TBG concentrations 5 years after ICU admission, whereas rT3 remained elevated, resulting in an abnormally low T3/rT3 ratio.

[Fig. 3 legend: Somatotropic axis 5 years after ICU admission: comparison with controls and within-patient evolution from ICU discharge. Data are shown as mean and standard error of the mean. The gray rectangles at the right side of the panels reflect the mean plus or minus the standard error of the mean of the controls matched to the patients at 5-year follow-up. Growth hormone and IGFBP1 concentrations were square root-square root transformed to obtain a near-normal distribution, allowing repeated-measures ANOVA and t test. Y-axes were transformed back to original values. Patients on chronic GHRH or somatostatin analogue treatment at follow-up were excluded. Adm: ICU admission, d4: day 4 in ICU, d7: day 7 in ICU, LD: last day in ICU, 1y-5y: one to five years after ICU admission, ICU: intensive care unit.]

The underlying mechanism of this long-term abnormality within the thyroid axis remains unclear, but could involve persisting changes in the expression or activity of the deiodinases that control the conversion of T4 to T3 and rT3. We also demonstrated that in the former ICU patients assessed 5 years later, serum growth hormone, IGF-I and IGFBP1 had normalized, whereas IGFBP3 was increased, reaching supranormal levels. Increases in IGF-I and IGFBP3 (and T4) have also been documented over a 2-year time period in children after severe burn injury, after they had acutely dropped in the critical phase of the injury, but a comparison with healthy children was not performed [30]. IGFBP3 is a binding protein that positively regulates tissue availability of IGF-I [31]. Its expression is regulated by growth hormone, insulin, androgens, and vitamin D, among others, with involvement of DNA methylation and transcriptional, posttranscriptional and translational control [32]. Whether alterations in these regulators play a role in the supranormal IGFBP3 levels of former ICU patients remains unknown. Although we have previously shown that 1 week after ICU discharge of prolonged critically ill adult patients ACTH and cortisol levels had risen to supranormal levels [14], the current data suggest that the adrenal axis recovers thereafter. This finding is consistent with the normal salivary cortisol levels observed in former critically ill children, assessed months to years after pediatric-ICU admission [21,22]. Interestingly, however, in acute-respiratory-distress-syndrome survivors, an inverse correlation has been reported between long-term basal serum cortisol level and increasing traumatic ICU memories [33]. Unlike in human patients, long-term perturbations of the adrenal axis have been observed in rodent models of sepsis. In mice, increased stress-induced corticosterone and increased adrenal weights have been observed 2 to 7 weeks after induction of sepsis via cecal ligation and puncture [34]. In rats injected with lipopolysaccharide, desensitization of the adrenal axis has been described weeks later [35]. Observing the residual long-term neuroendocrine changes evidently raises questions about their physiological relevance. Reverse T3 has long been considered an inactive metabolite of thyroid hormone, given its weak affinity for nuclear thyroid hormone receptors [36]. Recently, however, in vitro studies have suggested an active role for rT3 through interaction with extranuclear receptors, though the physiological relevance of this remains to be established [36]. In the ICU, higher rT3 and lower T3/rT3 ratios have been associated with worse outcome of critically ill patients [37,38]. In a rat stroke model, however, a protective effect of rT3 has been suggested [39]. Altered IGFBP3 levels may affect IGF-I transport, bioavailability and activity [31]. However, pleiotropic, IGF-independent actions of IGFBP3 have also been described, regulating gene transcription with effects on cell growth, survival and apoptosis [31].
To explore any potential physiological relevance of the documented long-term neuroendocrine abnormalities in former ICU patients, we studied associations with long-term physical function. Interestingly, we observed an independent association of a persistently low T3/rT3 ratio with decreased physical performance, including lower handgrip strength and a shorter 6-min walk distance. For handgrip strength, the effect size appeared clinically relevant, considering that the minimal clinically important difference of 5 kg [40,41] was reached with a two-standard-deviation change in the T3/rT3 ratio. The effect size for the 6-min walk distance was far below the minimal clinically important difference of 14-30 m [42]. Nevertheless, the combination of all data suggests that the residual abnormality within the thyroid axis may confer an increased risk of long-term physical impairment, and would thus be a harmful long-term consequence of critical illness, at least for a subgroup of patients. Such an interpretation is plausible, as thyroid hormone affects diverse aspects of skeletal muscle physiology, being key in the regulation of the muscle's contractile function, energy metabolism, myogenesis and muscle regeneration [43]. Thus, thyroid hormone action plays an important role in the maintenance of muscle strength and physical functioning. Outside the context of critical illness, patients with newly diagnosed thyroid disease complain about weakness, fatigability, muscle pain, stiffness and cramps, which usually resolve after treating the thyroid disease [44]. In middle-aged and older euthyroid subjects, a higher free T3 level has been independently associated with higher handgrip strength and physical function, and with an attenuated decline in handgrip strength over time, whereas no association was found for free T4 or TSH [45,46]. Another study evaluating only TSH and free T4 found an independent association of low-normal TSH with lower handgrip strength in elderly euthyroid men, but not in post-menopausal women [47].

[Fig. 4 legend (figure on next page): Adrenal axis 5 years after ICU admission: comparison with controls and within-patient evolution from ICU discharge. Data are shown as mean and standard error of the mean. The gray rectangles at the right side of the panels reflect the mean plus or minus the standard error of the mean of the controls matched to the patients at 5-year follow-up. Total and free cortisol concentrations were square root-square root transformed to obtain a near-normal distribution, allowing repeated-measures ANOVA and t test. Y-axes were transformed back to original values. Patients on corticosteroid treatment in ICU or on chronic corticosteroid treatment at follow-up were excluded. Adm: ICU admission, d4: day 4 in ICU, d7: day 7 in ICU, LD: last day in ICU, 1y-5y: one to five years after ICU admission, ICU: intensive care unit.]

In young, euthyroid men, rT3 has been inversely associated with lean body mass, as have thyroid hormones [48]. In independently living elderly men, high supranormal rT3 levels, but also higher free T4 levels within the normal range, were independently associated with worse physical performance and lower muscle strength (handgrip, leg extensor), whereas an isolated low T3 level, remarkably, was associated with better physical performance [49].
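Returning to the effect sizes quoted at the start of this section, the clinical-relevance argument is simple arithmetic; the sketch below uses only the numbers given in the text:

    # Effect per one-standard-deviation decrease in T3/rT3 (from the text)
    grip_dominant_kg, grip_nondominant_kg, walk_m = 2.4, 2.8, 3.0
    mcid_grip_kg, mcid_walk_m = 5.0, (14.0, 30.0)   # cited minimal clinically important differences

    print("2-SD grip effect (dominant):", 2 * grip_dominant_kg, "kg vs MCID", mcid_grip_kg)
    print("2-SD grip effect (non-dominant):", 2 * grip_nondominant_kg, "kg vs MCID", mcid_grip_kg)
    print("2-SD walk effect:", 2 * walk_m, "m vs MCID range", mcid_walk_m)
    # The grip effect reaches the ~5 kg MCID within about two standard deviations,
    # while the walk effect stays far below the 14-30 m range.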
For IGFBP3, association with long-term physical outcome of former ICU patients was less clear than for T 3 /rT 3 , as such association was only found for handgrip strength. Considering a higher IGFBP3 was associated with better handgrip strength, the supranormal levels of IGFBP3 years after critical illness could be interpreted as a beneficial, compensatory response and thus would not explain long-term physical impairment after critical illness. Positive associations of IGFBP3 with physical outcome in aging have previously been documented for activities of daily living (ADL) in women but not men, for handgrip strength in a cohort of 89-year-old women, and for get-up-and-go times in a mixed gender historical cohort [50,51]. Most studies in middle-aged to elderly people, however, failed to independently associate IGFBP3 with functional performance measures such as walking speed, grip-strength, or ADL [50,[52][53][54][55]. The identification of long-term abnormalities in the thyroid axis as a potential contributor to the long-term physical legacy after critical illness is important, as pathophysiological insight in the long-term physical impairments in ICU survivors is scarce [56][57][58]. Many prolonged critically ill patients who developed critical illness polyneuropathy showed signs of chronic partial denervation up to 5 years after ICU discharge, whereas persisting evidence of myopathy in patients who developed critical illness myopathy appeared unusual [59,60]. One small study in prolonged critically ill patients with persistent weakness as assessed 6 months after ICU discharge suggested normalization of proteolysis, autophagy, inflammation and mitochondrial content in muscle, but persistence of impaired regenerative capacity [61,62]. Studies in mice suggested involvement of sustained mitochondrial dysfunction in chronic sepsisinduced muscle weakness [63], and also showed that engraftment of mesenchymal stem cells improved muscle regeneration and strength after sepsis [64]. This study has limitations to consider. First, we analyzed single samples, whereas several hormones show pulsatile patterns [1,2]. Second, no direct measurements of free, bioavailable target hormones were performed. Indeed, as the use of heparinized lines in-ICU interferes with free thyroid hormone measurements [65] we also did not measure free T 4 and T 3 in the follow-up samples, and the complex, time-consuming methodology to measure bioavailable IGF-I [66] and free cortisol [13] does not allow analysis of such a large number of samples. Third, we have no information on ACTH concentrations as blood samples were not immediately stored on ice after collection, which precludes reliable ACTH measurements. Fourth, tissue hormone concentrations or metabolizing enzymes could not be evaluated. Fifth, our search for independent determinants of hormonal parameter concentrations at follow-up was only of exploratory nature and should not be overinterpreted. Of importance here, no information was available about the participants' chronic nutritional status, whereas this could affect hormone concentrations as well [5,38]. Finally, the studied patient cohort may be prone to selection bias. Non-survivors obviously could not be studied, whereas they are generally more severely ill, have a more complicated ICU trajectory, and overall show worse neuroendocrine disturbances in ICU than survivors [37,[67][68][69][70]. 
However, the three studied neuroendocrine axes also showed severe disturbances in the present cohort of survivors while in the ICU, representative of the impact of critical illness. The studied cohort was relatively enriched in sicker, long-stay patients as compared with the total EPaNIC cohort. Nevertheless, the exclusion of patients with disabilities potentially confounding morbidity endpoints in the follow-up study, as well as the availability of blood samples only from former ICU patients who were able to come to the hospital for participation in the study, may have introduced bias toward those with better physical function.

[Table 3 legend: Association of long-term neuroendocrine abnormalities with long-term physical function 5 years after critical illness. Patients on chronic thyroid hormone, GHRH or somatostatin analogue treatment were excluded for these analyses. Models were adjusted for sex, and for age and BMI at 5-year follow-up.]

Conclusions

Most critical illness-induced changes within the thyroid axis, the somatotropic axis and the adrenal axis had normalized 5 years after ICU admission, except for rT3, which remained supranormal, resulting in persistently low T3/rT3 ratios, and for IGFBP3 concentrations, which had increased to supranormal levels. In particular, the residual long-term abnormality within the thyroid axis could be a harmful long-term neuroendocrine consequence of critical illness, contributing to the long-term physical impairment of former ICU patients. Whether targeting this residual abnormality may improve long-term physical outcome remains to be investigated.
Swarm behavior of self-propelled rods and swimming flagella Systems of self-propelled particles are known for their tendency to aggregate and to display swarm behavior. We investigate two model systems, self-propelled rods interacting via volume exclusion, and sinusoidally-beating flagella embedded in a fluid with hydrodynamic interactions. In the flagella system, beating frequencies are Gaussian distributed with a non-zero average. These systems are studied by Brownian-dynamics simulations and by mesoscale hydrodynamics simulations, respectively. The clustering behavior is analyzed as the particle density and the environmental or internal noise are varied. By distinguishing three types of cluster-size probability density functions, we obtain a phase diagram of different swarm behaviors. The properties of clusters, such as their configuration, lifetime and average size are analyzed. We find that the swarm behavior of the two systems, characterized by several effective power laws, is very similar. However, a more careful analysis reveals several differences. Clusters of self-propelled rods form due to partially blocked forward motion, and are therefore typically wedge-shaped. At higher rod density and low noise, a giant mobile cluster appears, in which most rods are mostly oriented towards the center. In contrast, flagella become hydrodynamically synchronized and attract each other; their clusters are therefore more elongated. Furthermore, the lifetime of flagella clusters decays more quickly with cluster size than of rod clusters. I. INTRODUCTION Systems of self-propelled particles (SPP), which exhibit an interaction mechanism that favors velocity alignment of neighboring particles, often display collective behaviors like swarming and clustering. There are many examples for this swarming behavior, ranging from systems of microscopic particles (sperm, bacteria, nano-rods) to systems of macroscopic objects (birds, fish). Since the pioneering simulation work of Vicsek et al. [1], SPP systems have attracted a lot of interest at the theoretical [2][3][4][5][6][7][8] and computational [9][10][11][12][13][14][15] level. Typically, in simulation models of swarm behavior, point-like agents move with an imposed non-zero velocity and tend to align their direction of motion with others in a prescribed neighborhood [1,10,11,14]. Although the alignment mechanism may differ from one model to the other, the basic properties of swarm behavior are quite universal [16]. Upon variation of parameters such as particle density, particle velocity, or environmental noise, the system can undergo a transition from a disordered state, where the average total velocity or orientation vanishes, to a nematically ordered state. Near the transition point, the cluster-size probability density function is characterized by a power-law decay [11,16]. For intermediate densities, phase separation into regions of different density and band formation has been found [15]. Self-propelled motion is common in biological systems at micro-or mesoscopic length scales, such as suspensions of bacteria, like E. coli [17] and Bacillus subtilis [18][19][20], or tissue cells (keratocytes) [9], whose sizes are all on the micrometers scale. A special class of biological systems are rod-like self-propelled particles (rSPP), for example myxobacteria (approximately 10µm long) [21,22]. 
When starved, myxobacteria are elongated to an average aspect ratio of approximately 1:7, glide on a substrate along their long axis and undergo a process of alignment, rippling, streaming and aggregation that culminates in a three-dimensional fruiting body. A model, which takes into account the exchange of a morphogen during cell-cell contact and a preferred cell motion in the direction of largest morphogen concentration, has been designed to describe the streaming and two-stage aggregation of myxobacteria [23]. Sperm (with a length of about 50µm) [24,25] and nematodes [26] (about 1mm long) employ a sinusoidal undulation of their slender bodies to push the fluid backwards and to propel themselves forward. Large train-like clusters of wood mouse sperm [27,28] are believed to result in greater thrust forces to move more efficiently through a highly viscous environment. The wood mouse sperm has a hook-like structure at its head, by which it can be hitched to the mid-part or the tail of a neighboring cell for robust cooperation. However, nematodes which do not have hook structures, also display a pronounced tendency to adhere to each other in a film of water, to form assemblies consisting of many organisms, and to exhibit a striking co-ordinated movement [26]. Also, sea urchin sperm organize into a hexagonal pattern of rotating vortices at surfaces [29]. A nice physical realization of self-propelled rods (SPR) are bimetallic nano-rods consisting of long Pt and Au segments [30]. The rods, about 300nm in diameter and 2µm long, move autonomously in an aqueous hydrogen peroxide solutions by catalyzing the formation of oxygen at the Pt end. They move predominantly in the direction of the Pt end, with a velocity depending on the concentration of hydrogen peroxide. When a gradient of the hydrogen peroxide concentration is imposed, the rods exhibit directed motion towards regions of higher concentrations through active diffusion [31]. A related system is a fluidized monolayer of macroscopic rods in the nematic liquid crystalline phase [32]. The rods confined between two hard walls are energized by an external vertical vibration, and gain kinetic energy through frequent collisions with the floor and the ceiling of the container. Long-lived giant number fluctuations are found, which shows that simple contact can give rise to flocking, coherent swirling motion and large-scale inhomogeneities [33]. However, in this experiment, the rods do not have a preferred direction of motion. All of these examples of self-propelled particles employ different propulsion mechanisms and have different interactions. However, their swarm behavior, such as flocking, streaming and clustering, is surprisingly similar. The common characteristic of these systems is their rod-like structures and their quasi-two-dimensional active motion. Myxobacteria glide on surfaces [21], while sperm and nematodes gather at substrates [26,29,34]. In suspensions of rod-like particles in thermal equilibrium, volume exclusion favors the alignment of rods. At high densities, it stabilizes a nematic state characterized by long-range orientational order [35]. While constant-velocity polar point particles interacting locally by nematic alignment in the presence of noise have been studied intensively in recent years [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15], much less is known theoretically about the behavior of elongated particles with volume exclusion, or about the collective behavior of swimmers with hydrodynamic interactions. 
Previous simulation studies of self-propelled rods (SPR) in two dimensions show that self-propelled motion enhances the tendency for nematic ordering [36], as well as aggregation and clustering [37]. Also, rods have an increased probability of being located near surfaces (depending on their velocity, length and thermal noise) [38] and form hedgehog-like clusters at surfaces [39]. In Ref. [37], two regimes of clustering have been distinguished by their unimodal or bimodal weighted cluster-size distribution functions; however, the system contained a relatively small number of particles compared to those employed in simulation studies of swarming of SPPs. Continuum equations for the description of SPR systems have been derived recently within a mean-field approximation [6,7]. This theory predicts that hard-core interactions are insufficient to generate a macroscopically polarized state, because they cannot distinguish the two ends of a rod, and makes interesting predictions for the fluctuations in the nematic and isotropic state (such as a crossover from diffusive to propagating density fluctuations). However, the mean-field approximation of volume exclusion has the limitation of omitting correlation effects, and thus works best for slowly varying density distributions. In addition, hydrodynamic interactions between rSPP have so far been largely neglected. These interactions depend on the type of self-propulsion, where "pullers" repel and "pushers" attract each other [40,41]. Nematic suspensions of swimming rod-like pushers are found to be unstable at long wavelengths as a result of hydrodynamic fluctuations [42]. For sperm and flagella, it has been shown theoretically that the hydrodynamic coupling synchronizes the phases of their sinusoidal beating tails [24,43,44]. Also, the hydrodynamic interaction between these microswimmers implies attraction and cluster formation [43]; similarly, it makes an essential contribution to the capturing of sperm near walls [45]. However, the relative importance of directed self-propulsion, particle shape, volume exclusion, and hydrodynamic interactions to the emergence of swarm behavior remains unclear. In this paper, we employ a model of hard rods with strict volume exclusion and simulate large systems containing at least 1000 particles. We focus on rSPP systems at a density below the isotropic-nematic transition of Brownian rods. We employ a model consisting of rigid SPR performing an overdamped translational motion in two dimensions, and analyze the resulting cluster-size probability density distribution, cluster configurations and lifetimes. Three types of cluster-size probability density distribution functions allow us to distinguish three different states, and to construct a phase diagram as a function of particle density and environmental noise. As a special case of rSPP with an explicit propulsion mechanism, we investigate a suspension of flagella, which move by sinusoidal beating of their bodies in a two-dimensional fluid. The motion of the surrounding fluid is described by a particle-based mesoscopic simulation method called multi-particle collision dynamics (MPC) [46,47]. This method has been shown to capture the full hydrodynamics and flow behavior of complex fluids over a wide range of Reynolds numbers very well [48]. By comparing the results for SPR and flagella, we elucidate the contribution of hydrodynamic interactions to the swarm behavior. This paper is organized as follows. Section II gives a brief description of our models and simulation methods.
We analyze the collective behavior of SPR systems in Sec. III. In Sec. IV, we study the swarm behavior of flagella, and compare the results obtained with both models. The influence of hydrodynamic interactions and the flagellar beat on the swarm behavior is discussed. We summarize our main conclusions in Sec. V.

A. Self-Propelled Rods

We consider a system of N_rod rods of length L_rod in a two-dimensional simulation box of size L_x × L_y. Each rod is characterized by an orientation angle θ_rod,i with respect to the x-axis, a center-of-mass position r_rod,i, a center-of-mass velocity v_rod,i, and an angular velocity ω_rod,i around its center of mass (see Fig. 1a). The rods move ballistically according to their velocities, r_rod,i(t + ∆t_rod) = r_rod,i(t) + v_rod,i(t) ∆t_rod, where ∆t_rod is the simulation time step. The particle velocity can be decomposed into parallel and perpendicular components relative to the rod axis, v_rod,i = v_rod,i,∥ + v_rod,i,⊥. We consider the rods to be embedded in an overdamped fluid medium, where hydrodynamics can be approximated by an anisotropic friction on the rod-like particles. The motion is then determined by

γ_∥ v_rod,i,∥ = (F_rod,0 + ξ_∥ + Σ_j F_ij · e_∥) e_∥,
γ_⊥ v_rod,i,⊥ = (ξ_⊥ + Σ_j F_ij · e_⊥) e_⊥,
γ_r ω_rod,i = ξ_r + Σ_j M_ij,

where e_∥ and e_⊥ are the local parallel and perpendicular unit vectors of the rod orientation. F_rod,0 is a constant propelling force applied along e_∥. The friction coefficients are given by γ_⊥ = 2γ_∥, γ_∥ = γ L_rod, and γ_r = γ_∥ L_rod²/6, with γ the friction coefficient per unit rod length. The random forces ξ_∥, ξ_⊥ and ξ_r are white noises, determined by their variances σ²_rod L_rod, σ²_rod L_rod and σ²_rod L³_rod/12, respectively. Finally, F_ij is the force generated by volume exclusion between rods i and j, and M_ij is the torque generated by F_ij on rod i in the reference frame of the center of mass of rod i. For the calculation of the interactions, each rod is discretized into n_rod = L_rod/l_b beads of diameter l_b, as illustrated in Fig. 1a. The volume exclusion between rods is then modelled by a shifted and truncated Lennard-Jones potential between beads belonging to different rods,

V(r) = 4ε[(l_b/r)^12 − (l_b/r)^6] + ε for r < 2^(1/6) l_b, and V(r) = 0 otherwise,

where r is the distance between two beads, l_b is the bead diameter, and ε is the strength of the potential. We use ε as the energy scale in our SPR simulations. A single rod without noise then moves with a constant velocity v_0 = F_rod,0/γ_∥. In the non-zero-noise regime, the diffusion constant along the parallel direction is D_∥ = σ²_rod L_rod ∆t_rod/(2γ_∥²). The dimensionless Péclet number, which measures the ratio of self-propelled to diffusive motion, is thus Pe = L_rod v_0/D_∥. We use 1/Pe ∝ σ²_rod to characterize the strength of the environmental noise. In SPR systems [6,37], alignment is naturally introduced by the volume exclusion between the anisotropic particles; this also implies that the interaction neighborhood needs no further assumptions, but is directly related to the rod length. Hard-core interactions do not distinguish the two ends of a symmetrically elongated object; thus, both parallel and antiparallel velocity configurations are induced. In simulations of point-like SPPs, noise is implemented by adding a random component to the velocity orientation of each particle. In our model of SPR, random forces are applied to each rod, which results in fluctuations in both the magnitude and the orientation of the velocity vectors. For a single rod, the orientation fluctuations lead to rotational diffusion, which implies a persistence length of its trajectory. Note that the noise forces are not caused by thermal fluctuations, which would require a factor of two between the variances of the random forces in the parallel and perpendicular directions.
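A single overdamped Brownian-dynamics update consistent with this model can be sketched as follows. This is illustrative, not the authors' code: the per-step noise scaling with 1/dt is one common discretization convention and is an assumption here, and the excluded-volume forces and torques are left as inputs.

    import numpy as np

    rng = np.random.default_rng(0)

    def bd_step(pos, theta, dt, F0, gamma, L_rod, sigma2, F_int, M_int):
        """One overdamped update for N rods; pos: (N, 2), theta: (N,).
        F_int (N, 2) and M_int (N,) are the excluded-volume forces and torques,
        computed elsewhere from the bead-bead Lennard-Jones interaction."""
        gamma_par = gamma * L_rod            # friction coefficients as in the text
        gamma_perp = 2.0 * gamma_par
        gamma_rot = gamma_par * L_rod**2 / 6.0
        e_par = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        e_perp = np.stack([-np.sin(theta), np.cos(theta)], axis=1)
        n = len(theta)
        # White-noise forces with the variances quoted in the text (1/dt scaling
        # is an assumed discrete-time convention).
        xi_par = rng.normal(0.0, np.sqrt(sigma2 * L_rod / dt), n)
        xi_perp = rng.normal(0.0, np.sqrt(sigma2 * L_rod / dt), n)
        xi_rot = rng.normal(0.0, np.sqrt(sigma2 * L_rod**3 / 12.0 / dt), n)
        f_par = F0 + xi_par + np.einsum('ij,ij->i', F_int, e_par)
        f_perp = xi_perp + np.einsum('ij,ij->i', F_int, e_perp)
        v = (f_par / gamma_par)[:, None] * e_par + (f_perp / gamma_perp)[:, None] * e_perp
        omega = (M_int + xi_rot) / gamma_rot
        return pos + v * dt, theta + omega * dt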
In most biological and synthetic rSPP systems, thermal fluctuations are indeed negligible due to the large size of the particles. In these systems, the environmental noise arises, for example, from density fluctuations of signalling molecules for chemotactic swimmers, or from fluctuations of the motor activity. We use rods of length L_rod = 11 l_b and undisturbed velocity v_0 = 1.21 ε/(γ L_rod). Effects of a polydispersity of rod lengths or a distribution of propulsion forces are not considered. The motion of the rods is calculated with a discrete time step ∆t_rod = 0.001. Most of our rod simulations start from random initial states, where the rods are placed into the simulation box with random orientations and random positions without overlap. If not explicitly mentioned, the size of the simulation box is L_x = L_y = 400 l_b, which is much larger than the rod length. Periodic boundary conditions are employed. Our model differs from the model of Ref. [37] by the type of repulsive interaction between the rods. In Ref. [37], rods interact by a "soft" volume exclusion, where the repulsion force is proportional to the square of the overlapping area, while in our model the interaction is a short-range Lennard-Jones potential between discretized beads. In the limit of a large overlap energy, the two models become equivalent.

B. Flagella

We consider a system of N_fl flagella of length L_fl in a box of size L_x × L_y. Each flagellum consists of a semi-flexible string of monomers of mass m_fl, connected by springs (see Fig. 1b). The shape of the flagellum is determined by the elastic energy given in Eq. (9). Here, the first term is the harmonic potential generated by springs with spring constant k and rest length l_0; R_i is the bond vector pointing from monomer i to monomer (i + 1). The second term of Eq. (9) is the bending energy of the flagellum, with bending rigidity κ; R(l_0 c) is an operator which rotates a two-dimensional vector clockwise by an angle l_0 c. The local spontaneous curvature c varies with time t and position x along the flagellum to generate a propagating bending wave, Eq. (10). The detailed analysis of the beating patterns of nematodes [26] and bull sperm [25,49] has shown that a single sine mode represents the beating pattern to a good approximation. We use the wave number q = 2π/L_fl, such that the phase difference between the first and the last monomer is 2π and one complete wavelength is present on the flagellum. The beating frequency f is constant for each flagellum; it is chosen from a Gaussian distribution centered at f_0 and with variance σ²_fl f_0². ϕ is the initial phase of the first monomer, which is chosen from a uniform distribution in [0, 2π]. As t increases, a wave propagates along the flagellum from the first to the last monomer, pushing the fluid backwards and propelling the flagellum forward. Although the spontaneous local curvature c is prescribed by Eq. (10), the flagellum is elastic, and its configuration is affected by the viscosity of the medium and the flow field generated by other flagella. The third term in Eq. (9) describes the interaction between flagella due to volume exclusion; here, we employ again the shifted and truncated Lennard-Jones potential (Eq. (6)) between monomers of different flagella. Our model of a flagellum differs from the model of a sperm employed in Ref. [43] by the absence of a passive midpiece and a circular head. Also, in the sperm simulations [43], two sine waves were present on the tail, while a single sine wave is present on the flagellum.
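The propagating curvature wave of Eq. (10) is straightforward to tabulate. The sketch below assumes the single-sine form c(x, t) = c0·sin(2π f t − q x + ϕ), consistent with the description above; the amplitude c0 is not quoted in this excerpt and is a placeholder:

    import numpy as np

    def spontaneous_curvature(n_monomers, l0, L_fl, f, t, phi, c0=0.1):
        """Local spontaneous curvature c(x, t) along the flagellum, assuming the
        single-sine form c0*sin(2*pi*f*t - q*x + phi) with q = 2*pi/L_fl, so one
        full wavelength fits on the flagellum. The amplitude c0 is a placeholder."""
        q = 2.0 * np.pi / L_fl
        x = np.arange(n_monomers) * l0      # arclength position of each monomer
        return c0 * np.sin(2.0 * np.pi * f * t - q * x + phi)

    # Per-flagellum frequency drawn from a Gaussian around f0, phase uniform:
    rng = np.random.default_rng(0)
    f0, sigma_fl = 1.0 / 120.0, 0.001
    f = rng.normal(f0, sigma_fl * f0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    c = spontaneous_curvature(n_monomers=101, l0=0.5, L_fl=50 * 0.5, f=f, t=0.0, phi=phi)
    # Each bond i is then rotated by l0*c_i relative to its predecessor.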
We use flagella of length L_fl = 50 l_0; the elastic moduli k and κ in Eq. (9) are kept fixed. Each simulation run of the flagella systems covers a total time interval of about 3300 beats. The first 800 beats are not taken into account in the calculation of averages, in order to allow the system to reach the stationary state. This time is longer than the largest relaxation time of about 650 beats observed in the system with a width σ_fl = 0.1% of the frequency distribution.

C. Multi-Particle Collision Dynamics (MPC)

MPC is a particle-based mesoscopic simulation technique used to describe the hydrodynamics and flow behavior of complex fluids. The fluid is modeled by N_sol point particles of mass m_sol, which are characterized by their continuous space positions r_sol,i and velocities v_sol,i. During every time step ∆t_MPC, two distinct simulation steps are performed: streaming and collision. In the streaming step, the fluid particles do not interact with each other and move ballistically according to their velocities, r_sol,i(t + ∆t_MPC) = r_sol,i(t) + v_sol,i(t) ∆t_MPC. In the collision step, the particles are sorted into the cells of a square lattice of side length a according to their positions, and interact with all other particles in the same collision box through a multi-body collision. The collision step is defined by a rotation of all particle velocities in a box in the frame co-moving with its center of mass. Thus, the velocity of the i-th particle in the j-th box after the collision is v'_sol,i = v_cm,j + R_j(α)(v_sol,i − v_cm,j), where v_cm,j is the center-of-mass velocity of the j-th box, and R_j(α) is a rotation matrix which rotates a vector by an angle ±α, with the sign chosen at random. This implies that during the collisions particles exchange momentum, but the total momentum and kinetic energy are conserved within each collision box. In order to ensure Galilean invariance, a random shift of the collision grid has to be performed [50]. The total kinematic viscosity ν is the sum of two contributions, the kinetic viscosity ν_kin and the collision viscosity ν_coll, for which approximate analytical expressions in two dimensions are available [51,52]; here ρ is the average number of particles per box and h = ∆t_MPC (k_B T/(m_sol a²))^(1/2) is the rescaled mean free path. We use k_B T = 1, m = 1, a = 1, ∆t_MPC = 0.025, α = π/2, and ρ = 10. This implies, in particular, that the simulation time unit (m a²/k_B T)^(1/2) equals unity. With these parameters, the total kinematic viscosity of the fluid is ν = ν_coll + ν_kin ≈ 3.02. During the MPC streaming step, the equations of motion of the flagella monomers are integrated using a velocity-Verlet algorithm, with a molecular-dynamics time step ∆t_fl = ∆t_MPC/50 = 5 × 10^-4. The bond length between the monomers is related to the collision cell size by l_0 = a/2. The flagella interact with the fluid only during the MPC collision step. This is done by sorting the flagella monomers together with the fluid particles into the collision cells and rotating their velocities relative to the center-of-mass velocity of each cell. Since energy is continuously injected into the system by the actively beating flagella, we employ a thermostat to keep the fluid temperature constant, rescaling all fluid-particle velocities in a collision box relative to its center-of-mass velocity after each collision step. With the parameters given above, a single flagellum with f_0 = 1/120 swims forward with velocity v_single = 0.020 ± 0.001 in an MPC fluid. Thus, we estimate a Reynolds number Re = 2A_fl v_single/ν ≈ 0.04 for our flagellum model, where A_fl = 0.12 L_fl is the beating amplitude.
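The streaming-and-collision cycle described above is compact enough to sketch directly. The following illustrative 2D MPC step is not the authors' code; it uses the parameters quoted in the text where given, assumes equal particle masses, and omits the thermostat and the flagella coupling:

    import numpy as np

    rng = np.random.default_rng(0)
    a, dt, alpha = 1.0, 0.025, np.pi / 2    # cell size, MPC step, rotation angle

    def mpc_step(r, v, Lx, Ly):
        """One MPC cycle: ballistic streaming, then stochastic-rotation collision
        about the center-of-mass velocity of each cell (equal masses assumed)."""
        r = (r + v * dt) % np.array([Lx, Ly])        # streaming, periodic box
        shift = rng.uniform(0.0, a, 2)               # random grid shift [50]
        cells = np.floor((r + shift) / a).astype(int)
        n_y = int(np.ceil(Ly / a)) + 2               # enough rows after shifting
        cell_id = cells[:, 0] * n_y + cells[:, 1]
        for cid in np.unique(cell_id):
            idx = np.where(cell_id == cid)[0]
            v_cm = v[idx].mean(axis=0)
            s = rng.choice([-1.0, 1.0])              # rotate by +alpha or -alpha
            c, sn = np.cos(alpha), s * np.sin(alpha)
            dv = v[idx] - v_cm
            v[idx] = v_cm + np.column_stack([c * dv[:, 0] - sn * dv[:, 1],
                                             sn * dv[:, 0] + c * dv[:, 1]])
        return r, v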
The velocity of our flagella can be compared with the velocity of an infinitely long string beating in a two-dimensional fluid at Re = 0, which was calculated analytically by Taylor [24] in terms of the wavelength λ_wave and the propagation velocity v_wave = λ_wave f of the sinusoidal wave on the flagellum. Applying the parameters of our simulations, we obtain v_single = 0.0183, in excellent agreement with the simulation result. This demonstrates that the simulation model describes the limit of low-Reynolds-number hydrodynamics very well.

III. SWARMING BEHAVIOR OF SELF-PROPELLED RODS

After starting from a random initial state, the rods aggregate and form clusters. Large clusters can form by collisions of smaller ones, while at the same time they can break up due to collisions with other clusters or due to the noisy environment. After a transient phase, the system reaches a stationary state, in which the formation rate of any cluster size equals its break-up rate. The degree of aggregation in the system depends on its parameters, such as the Péclet number and the number density ρ_rod = N_rod/(L_x L_y). We define a cluster as follows. We consider two rods to be in the same cluster if the angle between their orientation vectors is less than π/6 and the nearest distance is less than 2l_b, which is about two times the width of a rod. A cluster is defined as a set of rods that are neighbors either directly or through other rods at a given moment in time. Its size is simply the number of rods it contains. A freely gliding rod without any neighbor is considered as a cluster of size n = 1. We study systems at intermediate densities, where ρ_rod is neither very low, such that there are hardly any collisions, nor high enough for a nematic phase to appear for rods in thermal equilibrium, i.e. densities lower than the critical density ρ_c = 3π/(2L²_rod) of the isotropic-to-nematic phase transition [35]. The statistical quantities, which will be analyzed in Secs. III and IV, are listed in Table I.

A. Cluster-Size Probability Density Functions and Stationary States

For a system with particles distributed at random in space, the probability of finding n particles in some area obeys a binomial distribution; in our SPR systems, the probability of finding large particle numbers n is increased by aggregation and clustering. The stationary cluster-size probability density function (PDF) Π(n) results from the balance between the cluster formation and break-up rates. While the former depends on the collision rate of clusters, the latter depends also on the environmental noise and the cluster size. We distinguish three different stationary states in our SPR systems by comparing the shapes of their corresponding PDFs. Snapshots are shown in Fig. 2; a movie can be found in Ref. [53]. A disordered state, where rods are distributed in the whole space and oriented in different directions, is characterized by a PDF denoted as Π1 in Fig. 3. In a snapshot (Fig. 2a), a weak aggregation tendency can be recognized in this case, where several small clusters of well polarized members glide in arbitrary directions. Π1 decreases as a power law for small cluster sizes, then decays exponentially for large n. The same kind of PDF has also been found in simulations of swarms of point-like SPPs interacting via a phenomenological alignment mechanism [11,16]. The range of the power-law-decay regime of Π1 depends on the rod density and the environmental noise. Increasing density or decreasing noise shifts the exponential cut-off to larger n.
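The cluster criterion defined above (orientation difference below π/6 and nearest distance below 2l_b, with clusters as connected components) maps directly onto a union-find computation. A minimal sketch, not the authors' code, with the rod-rod nearest-distance routine left to the caller:

    import numpy as np

    def find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    def clusters(theta, min_dist, l_b, max_angle=np.pi / 6):
        """theta: (N,) rod orientations; min_dist(i, j): nearest distance between
        rods i and j (segment-segment distance, supplied by the caller).
        Two rods are neighbors if their orientations differ by less than pi/6
        and their nearest distance is less than 2*l_b."""
        n = len(theta)
        parent = list(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                dtheta = abs((theta[i] - theta[j] + np.pi) % (2 * np.pi) - np.pi)
                if dtheta < max_angle and min_dist(i, j) < 2.0 * l_b:
                    parent[find(parent, i)] = find(parent, j)
        labels = np.array([find(parent, i) for i in range(n)])
        return labels   # rods sharing a label belong to the same cluster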
We study systems at intermediate densities, where ρ_rod is neither so low that there are hardly any collisions, nor high enough for a nematic phase to appear for rods in thermal equilibrium, i.e., densities lower than the critical density ρ_c = 3π/(2 L_rod²) of the isotropic-to-nematic phase transition [35]. The statistical quantities, which will be analyzed in Secs. III and IV, are listed in Table I.

A. Cluster-Size Probability Density Functions and Stationary States

For a system with particles distributed at random in space, the probability of finding n particles in some area obeys a binomial distribution; in our SPR systems, the probability to find large particle numbers n is increased by aggregation and clustering. The stationary cluster-size probability density function (PDF) Π(n) results from the balance between the cluster formation and break-up rates. While the former depends on the collision rate of clusters, the latter depends also on the environmental noise and the cluster size. We distinguish three different stationary states in our SPR systems by comparing the shapes of their corresponding PDFs. Snapshots are shown in Fig. 2; a movie can be found in Ref. [53].

A disordered state, where rods are distributed in the whole space and oriented in different directions, is characterized by a PDF denoted as Π_1 in Fig. 3. In a snapshot (Fig. 2a), a weak aggregation tendency can be recognized in this case, where several small clusters of well-polarized members glide in arbitrary directions. Π_1 decreases as a power law for small cluster sizes, then decays exponentially for large n. The same kind of PDF has also been found in simulations of swarms of point-like SPPs interacting via a phenomenological alignment mechanism [11,16]. The range of the power-law-decay regime of Π_1 depends on the rod density and the environmental noise. Increasing density or decreasing noise shifts the exponential cut-off to larger n.

The system with the second type of PDF, denoted Π_2 in Fig. 3, is more ordered, with an obvious tendency to form large clusters. A snapshot (Fig. 2b) shows several large and motile clusters moving in different directions. Π_2 also displays a power-law decay at small cluster sizes, but shows an increased probability (compared to the power-law decay) of finding large clusters. Increasing the number density or decreasing the noise shifts the prominent shoulder to larger cluster sizes. For very large aggregates, greater than the shoulder location, Π_2 decreases rapidly. The system with the third type of PDF, denoted Π_3 in Fig. 3, is characterized by a giant cluster, in which most rods are oriented radially towards the center (Fig. 2c). The giant cluster forms when several smaller motile clusters collide head-on within a short time interval, such that a nucleus with a blocked structure emerges. This nucleus continues to grow until most of the rods in the system are gathered in it. Π_3 has two parts, a peak at large n representing the giant clusters, and another peak at very small n corresponding to some freely swimming rods not collected by the giant cluster. The average rod density outside the giant clusters is very low.

Both Π_1 and Π_2 display a power-law decay at small cluster sizes,

Π(n) ∼ n^β.   (17)

The exponent β is a function of the rod density ρ_rod and the noise 1/Pe; it increases with increasing ρ_rod and decreases with increasing 1/Pe (Fig. 4). However, the dependence of β on ρ_rod or 1/Pe in the Π_1 regime is much stronger than in the Π_2 regime; in the latter case, β approaches −2 (a simple procedure to extract β from sampled cluster sizes is sketched below). By systematically varying the rod density ρ_rod and the environmental noise level, we can construct a phase diagram with regions characterized by the different types of PDFs, see Fig. 5. Clearly, Π_1 is found in the low-density and high-noise regime, Π_3 in the high-density and low-noise regime, and Π_2 is associated with the transition region between Π_1 and Π_3. Note that all systems in Fig. 5 were started from disordered initial states. Systems characterized by the probability density function Π_2 bear some similarity with liquid systems supercooled below the freezing point. Note that the system with 1/Pe = 0.00095 and ρ_rod L_rod² = 0.7744 in Fig. 5 displays both Π_2 and Π_3 distributions, corresponding to simulations with different random initial states. Systems with the probability density function Π_3 show the characteristics of glassy behavior, where the dense packing of rods arises from random collisions and remains frozen at later times. Our results are consistent with those of Ref. [37]. By comparing short runs for systems with and without fluctuations, the transition from Π_1 to Π_2 was found in Ref. [37] to shift to larger values of the aspect ratio L_rod/l_b and total area fraction of rods η = ρ_rod L_rod l_b. Fig. 5 demonstrates that in our system the transition shifts with increasing 1/Pe to larger ρ_rod L_rod², which is proportional to η L_rod/l_b.
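The exponent β of Eq. (17) can be extracted, for instance, by a least-squares fit in log-log coordinates over the power-law window of the cluster-size histogram. The Python sketch below illustrates the procedure on synthetic power-law data; the fit window (n_min, n_max) is a placeholder and must be chosen to exclude the shoulder or the exponential cut-off.

```python
import numpy as np

def fit_beta(sizes, n_min=2, n_max=30):
    """Fit Pi(n) ~ n^beta to a sample of cluster sizes; the slope of the
    log-log histogram over [n_min, n_max] is the estimate of beta."""
    counts = np.bincount(sizes)
    n = np.arange(len(counts))
    mask = (n >= n_min) & (n <= n_max) & (counts > 0)
    beta, _ = np.polyfit(np.log(n[mask]), np.log(counts[mask]), 1)
    return beta

# Synthetic test sample with a Pareto tail, Pi(n) ~ n^(-2.5):
rng = np.random.default_rng(0)
sizes = np.floor(rng.random(100_000) ** (-1.0 / 1.5)).astype(int)
print(fit_beta(sizes))   # should come out close to -2.5
```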
B. Orientational Correlation Functions

Although we distinguish three swarming states in our SPR systems, there are only two types of cluster structures. The motile clusters in the Π_1 and Π_2 states consist of polarized rods, as shown in Fig. 2a,b. In contrast, the giant clusters found in the Π_3 state consist of a large number of rods blocking each other in their forward motion, as shown in Fig. 2c. These two types of clusters can be distinguished by analyzing the orientational correlation function

G(r, φ) = ⟨ û_i · û_j ⟩_{r_ij = (r, φ)}.   (18)

Here, û_i is the unit vector denoting the orientation of rod i, r_ij = (r, φ) is the vector pointing from the center of mass of rod i to that of rod j, φ is the angle between û_i and r_ij, and the average is taken over all rod pairs with this separation. G(r) → 1 for r → 0, because two neighboring rods at close distance are always aligned. At large distances, G(r) → 0. When the system is in a state characterized by Π_1 or Π_2, G(r) is symmetric with respect to the direction φ = 0° with a maximum at r = 0 (Fig. 6a). The slight elongation of G(r) in the directions φ = 0° and φ = 180° indicates that the clusters tend to extend slightly in the direction of the average rod orientation due to packing. The width of G(r) is narrower in the front and wider in the back, because of the partially blocked structure of the clusters (see Fig. 2d) and because large clusters are more likely to collide with other clusters head-on. If a head-to-head collision does not result in the formation of a larger cluster or a blocked structure, the front tips are sharpened due to the "attrition" of the two clusters.

If the system is in the state with a giant cluster, G(r) shows a very different behavior, see Fig. 6b. G(r) still has a positive maximum near r = 0, which represents a high local orientational order. However, a region with negative correlations, G < 0, develops, with a minimum at some (r*, φ*). Because all rods point preferentially towards the center of the cluster, the propelling forces of the rods nearly compensate each other. Therefore, the locomotion speed of a giant cluster is much smaller than the gliding speed of a single rod. Moreover, the propelling forces generate a net torque, due to the deviation of the rod orientations from pointing exactly towards the center of mass, which implies a rotational motion of the giant cluster. φ* is related to this rotation. For 0° < φ* < 90°, the cluster rotates counterclockwise; for −90° < φ* < 0°, it rotates clockwise; for φ* = 0°, there is no net torque and the giant cluster does not rotate.

C. Average Cluster Size

The average cluster size ⟨n⟩ of the system is

⟨n⟩ = Σ_n n Π(n),   (19)

where Π(n) is the normalized cluster-size distribution function. ⟨n⟩ increases with increasing ρ_rod, as shown in Fig. 7a; in the low-density limit, ⟨n⟩ approaches unity. ⟨n⟩ decreases with increasing noise level, 1/Pe, as shown in Fig. 7b. In the Π_2 regime, the system exists in two metastable states, depending on the initial conditions. With random initial conditions, a "supercooled" state emerges, which transforms into the Π_3 state once a giant-cluster nucleus has formed. This can be seen in Fig. 7b for 1/Pe = 0.00095, where two data points show simulation results for different random initial states. With a giant cluster as initial state, the system stays in the Π_3 state unless the noise is large enough to destroy the giant cluster; this occurs in Fig. 7b for 1/Pe = 0.04. (In Fig. 7, open symbols represent systems with Π_3, Π_2 and Π_1 started from random initial states; solid symbols represent systems with Π_2 and Π_1 started from a state with a giant cluster.) Interestingly, ⟨n⟩ shows a power-law dependence

⟨n⟩ ∼ Pe^ζ   (20)

in the Π_1 and Π_2 region when the system starts from a disordered state, with exponent ζ ≈ 0.37.

D. Cluster Lifetime

We define the lifetime of a cluster as the length of the time during which its members do not change (a sketch of the corresponding bookkeeping is given below).
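One possible bookkeeping for this definition is sketched below: the member set of each cluster is compared between successive snapshots, and a cluster's lifetime ends as soon as its member set changes. The data layout (a list of snapshots, each a list of frozen sets of rod indices) is assumed purely for illustration.

```python
from collections import defaultdict

def accumulate_lifetimes(snapshots, delta_tau=100):
    """Collect cluster lifetimes per cluster size. `snapshots` is a list
    (one entry per sampling time, spaced delta_tau apart) of lists of
    frozensets of rod indices, one frozenset per cluster."""
    lifetimes = defaultdict(list)   # cluster size -> list of lifetimes
    alive = {}                      # member set -> snapshot index of birth
    for t, clusters in enumerate(snapshots):
        current = set(clusters)
        # A cluster whose member set is gone has changed, i.e. it "died".
        for members, born in list(alive.items()):
            if members not in current:
                lifetimes[len(members)].append((t - born) * delta_tau)
                del alive[members]
        for members in current:     # register newly formed clusters
            alive.setdefault(members, t)
    return lifetimes                # clusters still alive at the end are censored
```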
The lifetime of a cluster is analyzed with a time interval Δτ = 100; thus, cluster lifetimes of less than Δτ cannot be resolved. The average cluster lifetime T_life is a function of the cluster size n. As shown in Fig. 8, the lifetime of clusters of size n = 1 is always much longer than that of other cluster sizes, because single-rod "clusters" cannot disintegrate. For n ≥ 2, T_life(n) decreases smoothly with increasing cluster size. The data for mid-size clusters (2 < n < 30) show an effective power-law dependence,

T_life(n) ∼ n^{−δ},   (21)

with an exponent δ ≈ 0.2. Because the environmental noise determines the break-up rate of clusters, T_life increases with decreasing 1/Pe. We only show the lifetimes of motile clusters in systems characterized by Π_1 and Π_2; the giant clusters found in the state characterized by Π_3 can persist for a very long time, until a sufficiently large fluctuation occurs.

To understand the dependence of the cluster lifetime on n, we can assume that only single rods are lost at the cluster surface [37]. In this case, the probability to lose a rod per unit time is proportional to the perimeter length, which scales as n^{1/2} (for compact clusters of approximately circular shape). Therefore, this simple argument implies the scaling law (21) with exponent δ = 0.5. The growth of clusters is more complex, since it can occur by collisions with all types of other clusters; however, the collision cross-section should again be proportional to n^{1/2}. The predicted exponent δ = 0.5, corresponding to a faster decrease of the lifetime with cluster size, is considerably larger than the value δ ≈ 0.2 observed in our simulations. This indicates that there must be another mechanism of cluster decay. Indeed, the typical cluster configurations of Fig. 2d indicate that only at a few places along the perimeter do rods have the possibility to leave the cluster.

E. Finite-Size Effects

In our simulations, the finite simulation-box size implies a finite number of particles. A cluster can never grow larger than the total number of rods in the system. Consequently, all quantities related to the cluster size, such as the cluster-size distribution Π and the stationary average cluster size ⟨n⟩, are affected by the finite system size. For the probability density function Π(n), the absence of clusters larger than N_rod not only introduces a cut-off at large cluster sizes, but also affects the exponent β of the power-law part, as shown in Fig. 9. For systems with Π_1, the data for small box sizes (L_x = L_y = 4.5 L_rod and 13 L_rod) still obey a power-law decay at small n, without an obvious change of the exponent, as shown in the inset of Fig. 9, but they deviate from the power law when n approaches N_rod. When the simulation box is large (L_x = L_y = 36 L_rod and 51 L_rod), the PDFs almost coincide, and their exponential cut-offs are observed at the same value of n; also, β approaches an asymptotic value as L_x increases. Therefore, we conclude that our results for the larger systems represent the thermodynamic limit. Similarly, the power-law part of Π_2 extends with increasing box size, and the location of the prominent shoulder shifts to larger cluster sizes. The finite-size effects are significantly stronger for systems in the Π_3 region of the phase diagram. When the system is too small, the total rod number is not sufficient to trigger the formation of a blocked structure. The system then stays in a Π_2 state. This supports the claim that the state with Π_2 is a "supercooled" state. We believe that the absence of the Π_3 state in Ref.
[37] is due to finite-size effects; a system of only 100 rods is too small to form a blocked structure. The dependence of the average cluster size ⟨n⟩ on the linear system size L_x is shown in Fig. 10. For systems with Π_1 and Π_2, ⟨n⟩ increases with L_x and eventually reaches a plateau value. For the system with Π_3, ⟨n⟩ keeps growing strongly as L_x increases. Thus, ⟨n⟩ can be considered an intensive quantity in the first two states, and an extensive quantity in the third state.

Suppose the probability density function Π(n) obeys a power law for all cluster sizes,

Π(n) = (β + 1) n^β / (N^{β+1} − 1),   (22)

where β < −1 and N = ρ_rod L_x L_y is the total number of rods in the system. It is easy to verify that for N ≫ 1, where sums over n can be well approximated by integrals, ∫_1^N Π(n) dn = 1, so that Π(n) is properly normalized. The sharp drop due to the limited box size is neglected. In this case, the average cluster size of the system is obtained to be

⟨n⟩ = ∫_1^N n Π(n) dn = (β + 1)(N^{β+2} − 1) / [(β + 2)(N^{β+1} − 1)].   (23)

For −2 < β < −1, the average cluster size strongly depends on the total number N of rods, whereas for β < −2, ⟨n⟩ is independent of N. For large negative β, ⟨n⟩ approaches unity, which means that all rods are gliding freely. In our simulations, the effective exponents in the Π_1 and Π_2 regimes are −6 ≲ β ≲ −2.5 and −2.5 ≲ β ≲ −2.0, respectively, see Fig. 4. Thus, Eq. (23) implies that finite-size effects are weak in the Π_1 regime and pronounced in the Π_2 regime, in agreement with the simulation results of Fig. 10. (Eq. (23) does not apply to the Π_3 state, since the assumption of a power-law dependence (22) is not valid there.)

IV. SWARMING BEHAVIOR OF FLAGELLA

A. Hydrodynamic Synchronization, Attraction, and Aggregation

The synchronization and attraction of two flagella is shown in Fig. 12. Synchronization is achieved within about four beats, while the formation of a tight pair from an initial distance of about one-third of the flagellar length takes about 20 beats. The flow field of a flagellum is shown in Fig. 13. The flow field at a certain time in the beating cycle (Fig. 13a) shows the formation of two vortices, which propagate from the front to the rear end as the flagellum moves forward. The hydrodynamic interaction of swimmers depends on the type of self-propulsion. The average flow field of a flagellum, integrated over the whole beating cycle, demonstrates that the flagellum, which might be expected to be a "neutral" swimmer (i.e., neither a pusher nor a puller), is indeed a very weak pusher - the dominant propulsion is located closer to the rear end - because the line connecting the centers of the two vortices intersects the average flagellum shape behind its mid-point (Fig. 13b). This generates an in-flow from both sides of the flagellum near the front end, which is responsible for hydrodynamic attraction [40,41]. In multi-flagellum systems, large clusters can form by collisions of smaller clusters, supported by the hydrodynamic attraction between neighboring flagella; large clusters can disintegrate into smaller components due to the diversity of flagellar frequencies or the hydrodynamic flow fields of other clusters. With hydrodynamic interactions, large clusters of flagella are usually strongly extended in their direction of motion, as shown in Fig. 11 and the movie of Ref. [53]. The flagella inside a cluster are well synchronized. This structure is reminiscent of the "sperm-train" structure observed in rodent-sperm experiments [27,28]. The elongated clusters can extend to distances as large as the side length of the simulation box, which induces strong finite-size effects. Similar to the definition of a rod cluster in Sec.
III, a flagellum cluster is defined as a set of flagella that are connected, either directly or through other flagella, at a given moment in time. Its size is the number n of flagella it contains. A freely swimming single flagellum is considered as a cluster of size n = 1.

B. Cluster-Size Distributions

Both probability density functions Π_1 and Π_2 are observed in our multi-flagellum systems, as shown in Fig. 14. The width σ_fl of the distribution of beat frequencies is used as a measure of the noise level. At low ρ_fl or high σ_fl, we find Π_1; at high ρ_fl or low σ_fl, we observe Π_2. In contrast to Π_2 for SPR systems in Sec. III A, Π_2 for flagella systems displays a deviation from the power-law behavior for very small cluster sizes, n = 1 and n = 2. We believe that this is due to the hydrodynamic synchronization and attraction of neighboring flagella. For flagella, we have never observed a giant cluster with a blocked structure, in contrast to the SPR system of Fig. 2c. Although the distribution of beating frequencies is an internal property of the swimmers, the influence of σ_fl on the exponent β of Eq. (17) is similar to the influence of the environmental noise in our previous SPR simulations, as shown in the inset of Fig. 14. β is nearly constant for σ_fl < 3%, then decreases smoothly with increasing σ_fl.

The average cluster size ⟨n⟩ in the stationary state is a function of σ_fl, as shown in Fig. 15. Increasing σ_fl results in an increase of the overall break-up rate; hence ⟨n⟩ decreases. In the large-σ_fl limit, ⟨n⟩ approaches unity, corresponding to a disordered state with randomly distributed flagella. The power-law decay of the average cluster size with exponent ζ ≈ 0.26 (a fitting sketch is given at the end of this subsection) emphasizes the universality of the swarming behavior of rSPP systems in two dimensions. The power-law scaling of ⟨n⟩ as a function of σ_fl implies a divergence for σ_fl → 0. We believe that the small deviation from the power-law behavior for σ_fl = 0.1% in Fig. 15, as well as the deviation of β from its plateau value for σ_fl = 0.1% in Fig. 14, are due to finite-size effects.
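The exponent ζ can be extracted from ⟨n⟩(σ_fl) by the same log-log regression used for β. In the Python sketch below, the (σ_fl, ⟨n⟩) pairs are hypothetical placeholders, not simulation data; only the fitting procedure is illustrated.

```python
import numpy as np

# Hypothetical measurements of the stationary average cluster size <n>
# for several widths sigma_fl of the beat-frequency distribution.
sigma_fl = np.array([0.005, 0.01, 0.02, 0.04, 0.08])
n_avg = np.array([12.0, 10.2, 8.5, 7.1, 5.9])

# <n> ~ sigma_fl^(-zeta): the slope in log-log coordinates gives -zeta.
slope, _ = np.polyfit(np.log(sigma_fl), np.log(n_avg), 1)
print(f"zeta = {-slope:.2f}")   # the text reports zeta of about 0.26 for flagella
```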
C. Cluster Lifetimes

The average cluster lifetime T_life(n) decreases as an effective power-law function of the cluster size n, see Eq. (21), with an exponent δ ≈ 0.5, as shown in Fig. 16. This value of δ is very close to the prediction based on a mechanism of particle accumulation and shedding proportional to the cluster perimeter, as presented in Sec. III D. This good agreement provides further evidence for the different mechanisms of cluster stabilization for rods and flagella, which are a (partially) blocked motion and hydrodynamic attraction, respectively. Note that the system size of the flagella simulations is not as large as for the SPR systems; thus, the effective power law can only be observed over a smaller range of cluster sizes. In SPR simulations, single rods (n = 1) always have a much longer lifetime than expected from the effective power law, see Fig. 8. In contrast, for flagella with full hydrodynamic interactions, T_life(1) is much closer to the power-law extrapolation, and can even be lower than the power-law prediction (e.g., for σ_fl = 0.5% in Fig. 16).

D. Comparison of Sperm and Flagella

As explained in Sec. II B, our model of a flagellum differs from the model of a sperm employed in Ref. [43] by the absence of a passive midpiece and a circular head. Also, in the sperm simulations [43], two sine waves were present on the tail, while a single sine wave is present on the flagellum. How similar or different is the collective behavior of sperm and flagella? There are three different aspects to this question. Synchronization depends mainly on the interaction of the time-dependent oscillatory flow fields of two neighboring flagella [24,44], and is therefore very similar, as can be seen from the results presented in Sec. IV A and those of Ref. [43]. On the other hand, the hydrodynamic attraction of sperm and flagella is quite different. A sperm cell, consisting of a flagellum and a large head, is clearly a pusher, as demonstrated by the average flow field of a sperm in Fig. 17. The flagellum pushes the fluid backward in both cases, but the bulky head of the sperm drags the fluid forward much more strongly, which generates the characteristic sidewise inflow of fluid towards the midpiece region [40,41,45]. In contrast, flagella are very weak pushers, as demonstrated in Fig. 13b above. Therefore, sperm show a stronger hydrodynamic attraction than flagella. Finally, the swarming behavior in both flagella and sperm systems is characterized by the cluster-size distributions and the dependence of the average cluster size on the width σ_fl of the distribution of beat frequencies. While the cluster-size distribution of flagella follows a power-law decay over a wide range, it was not possible to clearly identify a power-law behavior for sperm in Ref. [43], due to the relatively small systems of 25 and 50 sperm. The average cluster size is found to depend on σ_fl as ⟨n⟩ ∼ σ_fl^{−ζ}, with ζ = 0.20 for sperm [43] and ζ = 0.26 for flagella. Larger systems have to be investigated to determine whether the exponents ζ for sperm and flagella are indeed different. In any case, the stronger hydrodynamic attraction of sperm, which favors larger cluster sizes, is partially offset by the bulky head of the sperm, which implies that the sperm clusters in Ref. [43] are much more loosely packed than the flagella clusters studied here.

V. SUMMARY AND CONCLUSIONS

We have simulated systems of rigid rods propelled by a constant force along their long axis, and systems of flagella propelled by a sinusoidal beating motion, in two dimensions. In both systems, we observe cluster formation and break-up, controlled by the particle density and the internal or external noise. In our simulations, the particle density is always much lower than the critical density of a nematic phase in thermal equilibrium. Without any attractive potential, self-propelled rods (SPR) exhibit an aggregation behavior triggered only by volume exclusion. Three characteristic types of cluster-size probability density functions Π(n) appear in different regions of a dynamic phase diagram of stationary states. At high noise and low density, the system is characterized by Π_1, which shows a power-law distribution over a range of cluster sizes, with an exponential cut-off at large cluster sizes. At low noise and high density, the system is in a state characterized by Π_3, which has a peak at sizes near the total number of particles in the system, representing a giant cluster. Systems in an intermediate region of noise and density are characterized by Π_2, a transition state between Π_1 and Π_3, which has a bimodal shape, with a power-law decay at small cluster sizes and a shoulder at larger sizes.
Clusters in Π_1 and Π_2 systems retain a high motility, whereas the giant cluster found in the third state is almost immobile due to its blocked configuration. The average cluster size in the stationary state, directly related to the cluster-size distribution Π, displays a power-law dependence on the noise amplitude before the system reaches the Π_3 state. Sinusoidally beating flagella were simulated in a low-Reynolds-number fluid with full hydrodynamics, as an example of self-propelled rod-like particles with an explicit propulsion mechanism. Flagella synchronize their beats and attract each other through hydrodynamic interactions. Despite the different propulsion mechanisms, the basic swarm behavior of aggregation and clustering observed for swimming flagella is remarkably similar to the behavior seen in SPR systems. We observe both Π_1 and Π_2 cluster-size probability density functions by varying the width σ_fl of the flagellar beat-frequency distribution, which acts as a source of internal noise in the system. The average cluster size also displays a power-law dependence on σ_fl, as for SPR systems.

Despite these similarities in the clustering behavior, the two systems show some important differences. They can be traced back to the hydrodynamic attraction between beating flagella, which is absent in our simulations of self-propelled rods. First, the flagella clusters consist of tightly stacked flagella with synchronized shapes, and extend in their direction of motion. These elongated clusters are reminiscent of the huge, mobile "sperm trains" observed in rodent-sperm experiments [27]. Clusters in the SPR systems are more compact, and have a wedge-like structure, which arises from the partially blocked rod motion responsible for the cluster aggregation, as well as from collisions with other clusters. Second, the Π_3 state of a completely blocked structure, which is observed for SPR at high density and low noise, does not seem to exist in flagellar systems. Third, the cluster lifetimes decay with different effective power laws, with δ ≈ 0.2 for SPR and δ ≈ 0.5 for flagella. Finally, hydrodynamic interactions between different flagella clusters act as an additional source of noise and contribute to an increased break-up rate.

The existence of the giant, immobile cluster should depend sensitively on the aspect ratio and on the type and range of the interactions between self-propelled rods, where longer rods and shorter-ranged interactions favor giant-cluster formation. This conclusion follows from the result of Ref. [37] for rods of aspect ratio L_rod/l_b ≤ 12, that the Π_1-Π_2 boundary shifts to higher densities with decreasing rod length, and from our result of Fig. 7 that the Π_2 state corresponds to a "supercooled" liquid state, which transforms into the Π_3 state once a giant-cluster nucleus has formed. Blocked clusters were not seen in Ref. [37] for rod lengths L_rod/l_b ≤ 12, due to the relatively small system size of N_rod = 100. However, blocked states were observed in Ref. [36] for a much larger rod length, L_rod/l_b = 40, already for a system of only about 50 rods at density ρ_rod L_rod² = 2. Our simulations have been restricted to the isotropic phase of rods in thermal equilibrium. It will be interesting to see in the future whether immobile, blocked states can also exist (or even dominate) in the nematic regime, or whether they are suppressed by the preferred rod orientation.
In the light of our results, we conclude that different systems of rod-like self-propelled particles display a universal swarming behavior, but also specific properties related to their propulsion mechanisms and the presence or absence of hydrodynamic interactions.
Divine impoliteness: How Arabs negotiate Islamic moral order on Twitter

In this paper, I examine impoliteness-oriented discourse on Arabic Twitter as a resource for the negotiation of Islamic moral order. I do so by highlighting the responses Arabs post in reaction to a tweet which attacks Islamic cultural face. As the triggering act poses an indirect request to change an authoritative Islamic practice deemed immoral by the instigator of the tweet, sundry responses were generated to repair the damaged collective face by keeping intact, or arguing against, the questionable moral order. The main strategy I identify as a response to the professed face-attack is divine (im)politeness: intertextually referencing religious texts in favor of (or against) the existing (im)moral order. The rites of moral aggression also draw upon questions, provocation, personal attacks and the projection of Islamic behavior onto unaddressed third parties (e.g., Christians and Hindus). The findings capture one moment of a historic shift in Islamic moral order and the role that impoliteness plays in digital Arabic contexts.

Introduction

Within Arabic Islamic contexts, technology is found at the center of social and religious activism. Examples include Omanis' use of cassette tapes to disseminate religious sermons to the masses in the 1980s (Eickelman 1989), young women's use of mobile technology in the Arabian Gulf to challenge Arab gender norms in the 1990s (Al Zidjaly and Gordon 2012), Arabs' Habermasian digital religious and political debates in the 2000s (Eickelman and Anderson 2003), and their appropriation of Yahoo chatrooms and the WhatsApp chatting messenger to revisit sociocultural concerns and reconstruct Arab identity from the bottom up (Al Zidjaly 2010, 2014, 2017a). Across these examples and platforms, Arabs have continually, creatively, and surreptitiously used emerging technologies to circumvent their society's limits on free expression and enact political and social activism (KhosraviNik and Sarkhoh 2017; Nordenson 2018; Sinatora 2019a, 2019b; Sumiala and Korpiola 2017; Zayani 2018). Therefore, as demonstrated by my decade-long ethnographic examination of Arab identity on social media, from the inception of new media technology (and, in particular, Yahoo chatrooms), Arabs have appropriated social media platforms as a tool to incite social and political change by turning traditionally nonnegotiable discourses into ones which are open for discussion (see Al Zidjaly 2010, 2012, 2019a, 2019b, 2020). Further examining the extent of such activities would help fill a critical gap in digital discourse research, given the centrality of Arab identity to international concerns (Nordenson 2018) and the complexity of Arab identity based on religion (versus language or geography; Lewis 2001). In this paper, I explore the linguistic strategies used by a group of Arabs on Twitter in responding to an aggravating tweet that questions a ubiquitous cultural practice. To do so, I build on research that has identified the role that intertextual references play in facework and identity negotiation in digital Arabic contexts (Al Zidjaly 2010, 2012, 2017b, 2019a; Badarneh 2019; Badarneh and Migdadi 2018; Labben 2018) 1. Specifically, I draw upon a relational approach toward impoliteness (Locher and Watts 2005) and the concept of moral order (Kádár 2017a), defined as situated cultural norms (Graham 2018) or a set of ideas and beliefs arranged into an ordered whole or ritualistic practice.
I additionally examine the following research questions: How does a group of Arabs respond to public cultural face-attacks, especially at a time when Muslim identity is undergoing local and global debate? How do they digitally negotiate Islamic moral order? What role do impoliteness and facework play in shifting Islamic intersubjectivity on Twitter?

Impoliteness, relational work and the moral order

Impoliteness as a linguistic lens to examine aggravation-oriented pragmatic variation has undergone two major shifts 2. The first shift was a natural consequence of the relational, discursive turn the general field of Politeness Studies underwent, as led by Watts (2003), Locher (2006) and Locher and Watts (2005). Accordingly, binary face-enhancing and face-threatening data were replaced with discourses that range from disagreeable to acrimonious behavior, which, according to Locher and Bolander (2017), established the importance of face (Goffman 1967) to impoliteness research, although facework was conceptualized broadly. The interrelation between face and impoliteness research proved especially beneficial in interpersonal pragmatics, an approach developed by Locher and Graham (2010) that foregrounds the creation of relationships through interaction. Graham and Hardaker (2017: 786) posited that interpersonal pragmatics is particularly important for digital interaction because of its focus on "the ways that interactants interpret and use their understandings of (im)politeness in given digital contexts to regulate their identities and interactional choices within emergent discourse". In this view, impoliteness is not only an interactive phenomenon, but also an interpersonally and culturally embedded social practice. Locher (2015) further noted that the first shift opened the academic field of impoliteness to multidisciplinary approaches and methodologies, especially identity construction research (for more on impoliteness and identity construction, see Garcés-Conejos Blitvich 2018). The second shift was adopted by a slightly smaller group and concerned the moral order. As moral order consists of a "cluster of social and personal values that underlie people's production and interpretation of (im)polite action", Kádár (2017a: xii) argued that investigating impoliteness requires peering into the perception of morality and interpersonal relationships within the broader context and rituals in which they are based. A ritual, in particular, can trigger polite or impolite evaluations, as rituals maintain the order of things and tend to imply a moral stance (Kádár 2017a). Consequently, some scholars have retheorized impoliteness as a matter of morality (Haugh 2015, 2018; Kádár 2017a; Xie 2011, 2018; Xie et al. 2005). The relational, identity, and moral aspects of impoliteness highlight the complexity involved in impoliteness research, as argued by Xie (2018) in his introduction to a special issue dedicated to examining the multifaceted interrelation between moral order, digital discourse and different types of impoliteness-oriented interactions. Three key studies from the special issue merit mention, as they depict the many ways identity is explored through impoliteness. First, Reiter and Orthaber's (2018) study of Slovenian commuters' impolite expression of moral indignation on Facebook against bus drivers highlights a historic moment of socioeconomic change that legitimates moral relativity in times of unrest.
Second, Georgakopoulou and Vasilaki's (2018) demonstration of impoliteness as a resource for restoring the moral order of a specific group constructs impoliteness as a tool to accentuate group identity 3. Third, Sinkeviciute's (2018) take on the relationship between impoliteness and othering demonstrates how moral transgressions concerning national identity justify aggressive verbal behavior against the offending party (e.g., accusations of drug use and insults to mental ability). Collectively, the studies in Xie's (2018) special issue on impoliteness and moral order signal the multifunctionality of impoliteness-oriented utterances and the role that impoliteness as an analytical tool can play in examining and challenging individual and group identities. Questions regarding the exact nature of the link between impoliteness, morality and group identity remain, however.

Impoliteness and intertextuality in Arabic discourse

Research on impoliteness in Arabic contexts remains sparse and mostly disconnected from the retheorizations of impoliteness summarized in the previous section. For instance, a 2017 study by Hammod and Abdul-Rassul on impoliteness strategies in English and Arabic Facebook comments simply applies Culpeper's (1996) taxonomy of impoliteness to the selected examples (see also Mohammed and Abbas 2015) 4. The remaining studies highlight the types of speech acts (e.g., agreement, compliment, apology, disagreement) used in various Arab countries (e.g., Iraq, Jordan, Saudi Arabia) to provide an understanding of Arabic communicative styles (e.g., Abdul Sattar et al. 2009; Al-Adaileh 2011; Alaoui 2011; Al-Shlool 2016; Al-Zumor 2011; Ebadi and Salman 2015; Emery 2000; Farhat 2013; Feghali 1997; Mazid 2006; Najeeb et al. 2012; Nelson et al. 2002; Samarah 2015). An exception is Tetreault (2015), who examined the impoliteness-oriented linguistic practices of a group of second-generation Algerian immigrant teenagers in France: following Locher and Watts (2005), the author adopted an expansive take on facework, theorizing it as more than mere mitigation of face-attacks, to highlight the teens' use of Hashek, a North African politeness formula. The analysis demonstrated how the function of the Arabic formula changed from showing deference in North African discourse contexts to facilitating facework in multiparty contexts involving peers in France. Tetreault (p. 297) discusses impoliteness strategies "as part of larger reflexive processes in which meta-pragmatic strategies and language use come to encompass identity perception beyond the interaction". The author therefore argues for a complex view of impoliteness research, especially as it relates to Arabs, given the centrality of face (i.e., the concept of le respect) in Arabic discourse. Impoliteness, intertextuality and Arab identity in digital contexts have been the focus of a growing body of studies, including my own, that demonstrate the centrality of authoritative religious texts in negotiating Islamic identities on various digital platforms. In an earlier study (Al Zidjaly), I analyzed posts that Arabs of differing backgrounds had left on the Al Jazeera news agency website in response to an article critiquing Arab cultural face. I examined this facework through the analytical lens of reasonable hostility (Tracy 2008), wherein expressing outrage is expected and encouraged in the democratic discourse endemic to social media (although Arabs were and remain new to this democratic/civic form of communication).
Moreover, I referenced the Goffmanian (1967) concept of face (i.e., individuals' public image) to characterize the strategies Arabs used in the restorative stage (Ting-Toomey 2004) to maintain cultural face (i.e., the collective face of a nation [Ting-Toomey 1994]). The findings revealed the prevalence of self-face-attacks in response to the triggering article, which I associated with Arabic cultural practices of lamenting or self-flagellation (Hegland 1998; Wilce 2005) and getting the lower hand (Beeman 1986; Al Zidjaly 2006). I argued that by attacking their own face, Arabs put themselves at the mercy of more powerful agents, which paradoxically helps them gain the upper hand and exercise agency (others feel bad for them, which often leads to their being helped). Al Zidjaly (2010) demonstrated Arab Muslims' transformation of authoritative Islamic texts into internally persuasive ones that bring new understandings to old texts. In another study, Muslim Arab psychologists' referencing of Islamic authoritative texts in their consultations on an Islamic website was examined as a means of perpetuating Islamic texts in medical contexts (Al Zidjaly 2017b). In Al Zidjaly (2019a) and (2020), intertextual religious references served the function of linguistically repairing Islamic ideologies to better align Islam with twenty-first-century tenets. Building on Al Zidjaly (2010, 2017b) and Badarneh and Migdadi (2018), Badarneh (2019) further examined how intertextual references from Arabic religious and non-religious texts are used by a group of Arab elite intellectuals to perform acts of impoliteness in reader comments posted to a London-based pan-Arab news agency platform. The author (p. 11) terms these types of posts intertextual impoliteness, defined as "the use of a synchronic or diachronic intertextual reference whose content has conventionally become, in the Arabic sociocultural context, and from the point of view of society at large and its 'ceremonial idiom' (Goffman 1967, 89), a 'symbolic linguistic means for conveying impoliteness' (Culpeper 2010, 3232)". Badarneh's findings suggest that referencing religious and literary texts in this way positions the post author as intellectual and oppositional, and the other as deserving of the putdown. These studies establish the role of authoritative texts in motivating acts of impoliteness and indicate the implicit nature of impoliteness as used in the Arabic context 5. This paper further explores this Arabic cultural practice of implicit impoliteness through intertextual references.

Data and analytical framework

The data set is extracted from a larger longitudinal and ethnographic project on Arab identity and social media that I commenced in 2015 (the impoliteness project is part of the section on the Arabic Reform Community [Al Zidjaly 2019a]) 6. Accordingly, I collected various types of data (e.g., over 50,000 tweets and memes with comments by Arabs from various religious and political backgrounds) to contextualize the use of impoliteness. I also collected larger discourses relevant to examining the role that impoliteness plays in the context of digital religious activism (e.g., the history of Islam, Islamic religious texts, cultural discourses, beliefs and daily practices).
In this paper, I zoom in on a thread of Twitter responses posted in reaction to a triggering comment that indirectly questions the morality of an established Islamic prayer, thus requesting its change (i.e., the questioned prayer or supplication limits requests of healing to Muslims). The triggering tweet generated 1500 comments; however, this paper includes only the first 300 comments, as these were posted within days of the original tweet and the remaining 1200 comments repeated strategies and sentiments demonstrated in the first 300, indicating the debate was resolved around the 300th comment. The set of comments selected for analysis represents what Herring (2004) refers to as sampling by theme (i.e., including the publicly accessible messages in a particular thread). The thread under analysis was selected because it represents the most liked, circulated, and commented-upon comments. I further include only first-order comments (those addressed to the author) and exclude replies to comments (Culpeper 2006). Although I did not contact the tweet author or any commentators for feedback, I was already aware of the debate and sentiments involved, as I follow the tweet author and I too am a Muslim reformer. Analytically, I draw upon Herring's (2004) computer-mediated discourse analysis and Bateman's (2014) multimodal approach to analyze the data. My examination further integrates recent theorizations of facework and impoliteness (discussed in the previous section) with the Bakhtinian concept of intertextuality (Bakhtin 1981; Kristeva 1980), which asserts that texts and actions are in constant dialogue with past and future discourses. Specifically, I theorize impoliteness as an analytical lens to capture the impetus of moral shift in the Arabic context, and intertextuality as crucial to a conceptualization of impoliteness as a culturally embedded practice (not just a local achievement). My analysis distinguishes two types of intertextual references: authoritative discourse, defined by Bakhtin (1981) as being from the past and closed to negotiation (e.g., the Quran, considered the word and instructions of God, and the hadiths, the reported sayings of the prophet of Islam documented in the authoritative books of Sahih Al-Bukhari and Sahih Al-Muslim); and internally persuasive discourse, which is open for discussion (e.g., praying five or three times a day). I necessarily ground the aggravating tweet and its responses in these larger Islamic discourses, as the actions documented in this study are representative of various practices Arabs regularly use on social media platforms. Highlighting the role of intertextuality in negotiating Arabic moral order enabled me to construct Twitter as a cultural tool (Scollon 2001) of change, in contrast to its traditional conceptualization as an affiliation medium (Zappavigna 2011). I therefore contribute to the sparse research on impoliteness on Twitter and identify a new function of impoliteness (i.e., creating a shift in consciousness), which extends well beyond its role as accentuation of group identity or resistance to moral order (Graham 2018).

Analysis

In this paper, I integrate moral and relational approaches to impoliteness.
Impoliteness is defined herein as a mediated practice that is discursively and culturally embedded for the purpose of underscoring the "negotiability of the emic understandings of evaluative concepts such as polite, impolite, rude, etc., and, in connection with this, to highlight the embeddedness of the observed social practices within their local situated framework of the moral order (see, e.g., Kádár and Haugh 2013, p. 95)" (Locher 2015: 6). Moral order is defined in this paper as a ritualistic, authoritative practice engrained in the historical body (Nishida 1958) that requires cultural intersubjectivity for its change. The analysis section is divided into three parts: 1) the triggering tweet, 2) cultural contextualization to ground the tweet, and 3) the responses to the tweet (the rites of aggression); the response or comments section is in turn divided into three subsections: keeping the questioned supplication intact, reactions to the first part, and changing the questioned practice.

The triggering tweet

The offending event (Jay 1992, 2000) or heckling (Kádár 2017a), as the majority constructed it, occurred on December 14, 2018. By February 10, 2019, when the data were collected, 1582 retweets, 5797 likes, and 1500 comments had been generated. Notwithstanding public condemnation, therefore, many did like the tweet. The author, animator and principal (Goffman 1967) of the tweet is a writer from the United Arab Emirates 7, as established by his profile with 89,800 followers. Judging by his tweets and his published novel, he is one of the Arab reformers engaged in propelling forward the social and religious revolution in Arabia (for details on the Arab Reform Community, see Al Zidjaly 2019a). Unlike Ex-Muslims on Twitter, the author's tweeting style could be considered moderately provocative, because he refrains from questioning the Quran and the hadiths, the two most authoritative texts in Islam. Keywords for analysis are underlined in the translation. This tweet, which consists of two parts, is triggering in three primary ways. First, by declaring an opinion that the widely practiced prayer is provocative (e.g., The supplication that provokes me the most) 8, the author uses directly expressed agitation to resist, and indirectly construct as immoral, an established Islamic moral order (i.e., to pray for the salvation of Muslims only). Second, the author issues a face-attack wherein he calls out Muslims on their exclusive acts (thereby indirectly constructing Muslims as racists by questioning the lack of humanity involved in limiting God's love to one kind [Muslims] while excluding others [Christians and Hindus, the two major non-Muslim identities found in the Islamic Arabian Gulf]) 9. In total, by inviting debate, the tweet poses an indirect request (directive) that Arabs turn an authoritative practice into an internally persuasive one. Third, the triggering tweet features a code switch between two dialects of Arabic: the first part (e.g., The supplication that provokes me the most) and the prayer itself (i.e., O Allah heal the Muslim sick) are in classical Arabic, constructing the expressed agitation as formal and the prayer as an authoritative text. The insult or face-attack (e.g., So, the mercy of Allah must be delimited and restricted to Muslims only ... You don't wish good health to them?!) is written in colloquial Emirati dialect, constructing it as informal.
This intentional code switch between formal and colloquial Arabic, termed heteroglossia with awareness (Bakhtin 1981; Tovares 2019), makes the attack personal. The intended audience of the face-attack is the Arabian Gulf people, given the location and nationality of the author of the tweet and his use of the Arabian Gulf dialect; this makes the online context less collapsed (Georgakopoulou 2017), rendering it a little easier to identify the persons involved in the negotiation of an Islamic moral order. The tweet, however, does reach Arab Christians from Egypt, who join to correct misconceptions about their Christian prayer practices 10. Bousfield (2007: 2188) explained, "impoliteness is only that which is defined as such by individuals negotiating with the hypothesized norms of the Community of Practice". Therefore, what one culture might consider polite might be "sanctioned aggressive facework" (Watts 2003: 260) or heckling (Kádár 2017a) in another. Thus, the cultural context merits consideration when reviewing the triggering tweet, to understand how Arabs appraise and respond to such acts.

Cultural contextualization

First, one reason that many constructed the triggering act as offensive is that it was meant to disrupt Islamic intersubjectivity, defined as a display of mutual understanding of both the conversational activities at hand and the larger cultural norms that govern such conversations and actions (Schegloff 2007) - what Kádár (2017b) names moral order or situated norms (Graham 2018). This construction is in line with the kinds of tweets that the author often posts. The action (and its indirect request to change the prayer) is also part of a larger reflexive project that Arabs have been engaged in since the introduction of Yahoo chatrooms (Al Zidjaly 2019a, 2020). The main purpose of this movement is to disrupt assumed norms or intersubjectivity to create a new Muslim identity which is more accepting of difference, by repairing (or changing) Islamic authoritative texts (the Quran and hadiths), constructed by the reform community as problematic texts (Al Zidjaly 2020). This paper provides a representative example of another strategy to create reform: requesting the change of Islamic practices that are neither Quran nor hadith. Because these practices are not authoritative (i.e., not from the Quran or hadith), change is often carried out by known members of the society (as questioning them does not result in incarceration). Both groups are part of a movement on Twitter and YouTube to create a more humanitarian Islam that cares about people of all backgrounds first and Muslim identity second. This is the motto and goal of the Arab Reform Project. This contextual information is necessary to keep in mind because, in many instances, what provoked commentators about the triggering tweet's request to pray for all instead of for Muslims only is that it is part of an ongoing attack on Muslim identity and request to reform Islamic texts and practices, from outsiders (e.g., non-Muslim governments) and insiders (e.g., Muslims) alike. This request to shift practices pertains to identities and loyalties, and an indirect construction of Muslims as racists towards non-Muslims (a charge made directly in other contexts) explains their construction of this tweet as a face-threatening, aggressive act aimed to disrupt.
Therefore, in many of the responses, commentators address both the implied request to, locally, change the verbal text of the prayer from "heal Muslims" to "heal all", and, globally, the larger acts of critiquing and reforming Islam. While many of these acts are carried out by ex-Muslims or outsiders, the triggering tweet was especially face-threatening because it was an insider attack. Second, Islamic identity is based on religion (Lewis 2001), and Islam is engrained in daily activities. In other words, religion is not just a part of Islamic identity, it is the identity. Any attack on a practice is therefore an attack on Muslims themselves. Because religion is the source of their identity, verses from the Quran and hadith, as well as ritualistic prayers, are memorized by all since childhood - it is a rite of passage into adulthood. Therefore, many intertextual references in this study are only implied ("this is what we have learnt", Example 2), because they are part of the Islamic historical body (Nishida 1958). Furthermore, because Muslim identity is based on religion, any act by Muslims has to be justified through religious texts. For instance, Badarneh (2019) illustrated how Muslim Arab intellectuals reference authoritative texts from the Quran, the hadiths and even poetry to justify impolite verbal attacks on others. Because any form of action has to be sanctioned by religious texts, intertextual references are therefore key to the reform project. Third, I opted for the terms divine politeness and divine impoliteness, versus devout (im)politeness, because by referencing divinely inspired authoritative texts that include impolite, hostile or exclusionary verbal attacks against non-Muslims, it follows that a devout Muslim must be impolite as outlined in the texts. In this scenario, it is not the Muslims who are being impolite, but rather God or the Almighty Himself; Muslims are simply following the divine texts. However, as becomes clear in this analysis, the questioned prayer actually was not a hadith, and the request to change it was taken up. Notably, when the triggering tweet's author took a follow-up poll five months after the original tweet, 89% (of 3600 votes) agreed to changing the prayer. Relative to the Arabic cultural context, I therefore introduce the term divine impoliteness: similar to indirect ritual offence (Kádár 2017b) and authorized transgression (Vásquez 2016), it denotes a linguistic strategy, position, speech act, or utterance that can easily be constructed as face-threatening, but which is legitimized by hostile religious texts or cultural practices 11. Kádár (2017a) proposes that the rites of moral aggression comprise a natural process to defend what has been attacked. However, unlike typical defensive counterattacks against national or cultural face, the rites enacted in this example unexpectedly take a positive turn, with a proportionate number of Arabs siding with the instigator in favor of changing the questioned Islamic moral order. This is unexpected given the religious nature of Muslim identity and community (Al Zidjaly 2017b), which explains why the majority argued in favor of keeping the moral order intact.
The rites of aggression

I have organized the analysis of these rites of moral aggression into three sections: 1) comments in favor of maintaining Islamic norms or moral order (justified through divine impoliteness); 2) positive and negative reactions to the expressed divine impoliteness, which reference religious texts that are aggressive towards non-Muslims; and 3) comments supportive of shifts in the norms and moral order (justified by divine politeness, or religious texts that are not aggressive towards non-Muslims). Within the three categories, I identify ten strategies commentators used to enact the rites of moral aggression, which alternately employed and rejected divine impoliteness. Each strategy consists of various sub-strategies that occurred concomitantly (e.g., projecting Islamic practices onto other groups, author attacks). Questioning was a key strategy used across the categories to query the norms and their authoritativeness in service of advancing the negotiation, rather than halting it through claims of authoritativeness. Accordingly, the discussion proceeds from non-negotiability to openness.

Divine impoliteness: Those opposed, say no

The triggering author's use of the word provoke (meaning, in Arabic, to anger or bother), his construction of Muslims as racists and the consequent threat to Arabs' positive face (Brown and Levinson 1987), and his request to change an authoritative (Bakhtin 1981) Islamic practice stirred many negative emotions in commentators, evidenced by their impoliteness-oriented responding tweets of moral indignation and judgment (Culpeper 2011). Most of these commentators perceived the tweet as a public face-attack on Islamic Arabic identity and felt socially pressured to justify the questioned cultural practice. Many of the immediate reactions to the original tweet reject its inference of immorality by constructing the prayer as a marker of group identity rather than an insult to people from other cultures. Commentators also express the view that everyone else (e.g., Christians and Hindus) does this same act, thus projecting Islamic beliefs and practices onto others. In Example 2, chosen as representative of a larger set, divine impoliteness is referenced indirectly through stating (This is what we have learnt) and noting that there are (divine rewards) in doing what we have learnt, indexing the authoritative nature and the normalization of Islamic exclusive practices learnt long ago in childhood.

Example 2 demonstrates the interplay between visual and verbal components (Bateman 2014) used to discredit the triggering tweet's indirect moral judgment of Muslims and its request to change what commentators consider a perfectly well-established and normal cultural act. The perplexed emoji visually signals the absurdity of the request, while the comment verbally justifies the existing practice in three main ways. First, exclusiveness is constructed as a virtuous collective feature of Muslim identity through pronouns such as we, yourself, your Muslim brethren, and your people, as in "Your restricting the supplication to yourself and your Muslim brethren", stating "it shows that you wish the best for yourself and your people", and noting that, in the Quran and hadiths, God says he rewards those who highlight their Muslim identity (e.g., "there is reward in it"). Second, stating that "We have learnt", without sourcing the original texts, additionally highlights the authoritative nature of such acts.
In other words, the us-and-them stance created by the prayer and defended in Example 2 constructs exclusiveness as a positive expression of the group identity key to Arabic cultures based on tribalism (Hofstede 1990), rather than as a politeness concern. The commentator's divisive stance is further enhanced by contrasting Muslim behavior with the prayer behavior of others through a set of questions aimed at inviting others to align with the position of the commentator. This whataboutism strategy attempts to justify Muslim behavior through questioning: Do Christians pray for the good health of (their projected nemesis) Jews? Do Hindus pray for mercy for (their projected nemesis) Buddhists? These unsubstantiated examples project both cultural practices and imagined nemeses onto the other, all in defense of the existing norms or religious practices based on cultural texts (i.e., everyone else does it, so why should we stop or be criticized for it). This normalization-of-behavior strategy to save threatened collective face receives many likes, retweets and responses, as indicated in Example 4 (below). According to Tovares (2006), while many rhetorical questions do not receive answers, the ones posed in Example 2 do receive answers from Egyptian Christians - but only after commentators move from projecting their practices onto others to attacking Christians and Hindus for doing worse than simply not praying for others, as Example 3 indicates.

(3) So, it looks as if you are just now arriving from the pre-Islamic era, and you have missed hearing the saying of the Almighty: "It is not for the Prophet and those who believe to pray for the forgiveness of the idolaters - though they be close kin - after it becomes clear to them that they are destined for hell" [Quran 9: 113]. Over and above that, they don't pray for you; rather, they hope for your extermination.

Example 3 concomitantly draws upon three types of impolite responses to reject the request put forth by the original tweet. The first part mocks the author by constructing him as a pseudo-intellectual who apparently does not know that things have changed since Islam was introduced (e.g., "it looks as if you are just now arriving from the pre-Islamic era"). Part two indicates that divine texts exist that prohibit Muslims from praying for non-Muslims - even if kin - because God the Almighty has decided they are destined for hell for not believing in him. This direct example of divine impoliteness justifies an aggressive cultural behavior of exclusion (choosing Muslim identity over humanity) based on a text from the Quran that bans Muslims from asking forgiveness for non-Muslims. The logic is: because we have such divine texts that forbid us from praying for non-Muslims, we have no choice but to abide by the Almighty, as he wants us to be impolite to others. Part three goes one step further by directly accusing Christians and Hindus of wishing ill will on Muslims (e.g., "Rather, they hope for your extermination"). In this accusation, the pronoun your is used in place of the group marker our, constructing the author of the triggering tweet as disaligned both from the Muslims and from the non-Muslims he defends. The response in Example 4, written in Egyptian dialect, draws upon two linguistic strategies to highlight respect: 1) Egyptian polite discourse markers (e.g., addressing the attacker with honored Mr.
and the Egyptian formal address term Hadretuk [akin to Vous in French]) and 2) formal letter writing rather than spoken style (with direct address and name). To counter the projections and accusations, the author of the response poses questions aimed at discrediting the source of the accusation, linguistically mirroring the style of Example 2 (e.g., what evidence do attackers have to claim knowledge of the content of Christians' prayers in churches or that Christians wish ill for Muslims?). To demonstrate the contrary, the commentator answers his own questions by reminding tweeters of the Bible's instructions to love their enemies and those who attack innocent people. By using conventionally polite discourse markers (Schiffrin 1989) and indicating the Bible's stance on love for all humans, the Egyptian commentator demonstrates the divine source of Christian politeness. Other Christian Egyptian commentators engage in the discussion, signaling the wide reach of the tweet. Projecting one's behavior onto others (that others do it too) fails as a productive strategy when Egyptian Christians defend themselves (see Example 4) by challenging the questions and correcting the fallacious accusations made against them. Once projection fails, the majority of Muslim Arab commentators give up on rhetorical questions as a productive strategy and select a new strategy of divine impoliteness: directly drawing upon the two most authoritative texts (Bakhtin 1981) in Islam (the Quran and the hadiths) to sanction the impolite ritualistic prayer. Also, in contrast to Example 3, which referenced the divine text in conjunction with other linguistic strategies (e.g., ridiculing the author, verbally accusing others), Example 5 indicates an unapologetic stance against change, as the divine texts that index an aggressive behavior towards the other are presented as stand-alone statements. A. [A quotation from the Quran 9:80] (Whether you ask for their forgiveness or not. If you ask for their forgiveness seventy times, Allah will not forgive them. That is for their rejecting Allah and His messenger. Allah does not guide an immoral people). God is the truth speaker. B. A redacted hadith: The Prophet asked leave of Allah to request forgiveness for his mother, but it was not granted him. After that, he asked leave to visit her grave, and it was granted. Example 5A draws upon a verse from the Quran that disqualifies prayers of forgiveness (of any length) for non-Muslims as punishment for not believing in him. Consequently and indirectly, the commentator argues that praying for non-Muslims is futile and should not be done. The authoritativeness of the text (its unquestioned status) is signaled by the use of vocalization, a Quranic linguistic strategy, and by the end statement (God is the truth speaker), verbalized after reciting Quranic texts. Example 5B, posted by a different person, strengthens the argument for preserving the questioned practice by intertextually referencing the second most authoritative text in Islam to justify divine exclusiveness, as the story (extracted from a known hadith) indicates the prophet of Islam was allowed to visit the grave of his non-Muslim mother but was forbidden by God to pray for her salvation,¹² thus demonstrating the exclusion of non-Muslims from God's mercy. Together, the intertextual references to authoritative Islamic texts legitimize verbal impoliteness against non-Muslims and present examples of divine impoliteness (impoliteness sanctioned by religious texts).¹³ 
Divine impoliteness creates a moral dilemma among the responders and bystanders. To exonerate themselves from the implicit charge of impoliteness (and the racism against the other it entails), the tweeters discuss the moral responsibilities of Muslims towards non-Muslims in accordance with the general Islamic moral order. A negotiation of the Islamic moral order (i.e., Islamic norms and cultural practices) ensues. In Example 6A, a commentator cautions against confusing Islamic prayers (i.e., rituals to accentuate group identity) with polite behavior, indicating they do not see how not praying for others is an impolite verbal act. The commentator instructs: If Muslims have to pray for non-Muslims, prayers for the salvation of their souls should trump prayers for the salvation of their bodies. This moral stance is condoned in Example 6B by a different poster, who declares, based on known but uncited Islamic teachings, that Muslims' only (verbal) moral obligation toward others is to pray for their guidance towards Islam, as anything else is forbidden. This declaration is prefaced with the plural pronoun we, highlighting group identity and the us-versus-them moral stance created from the onset of the rites to correct the perceived moral aggression of the triggering tweet. Reaction to divine impoliteness The divine impoliteness of prohibiting prayer for all people angers many, leading to ridicule and accusations of racism in both directions. Those in favor of changing the prayer verbally align with the triggering tweet's author and attack those who want to preserve the impolite ritualistic prayer. The following are some of the most-liked tweets in support of changing Islamic prayers to include good will for all humans. The tweets also oppose all who justify racism (in prayers) against the other through referencing divine texts. Example 7B demonstrates support for the tweet that opened commentators' eyes to new possibilities. This admission is significant, as is the original triggering text, because such discussions of Islamic moral order are frowned upon (in some cases, forbidden); it is only the fact that the prayer was proven not to be a hadith (together with the anonymity afforded by social media) that created a platform for such exchanges. The commentators' assent is accentuated with sarcastic remarks against those favoring maintenance of the moral order, claiming that such a stance exposes them as racist (7C) and reflects distorted thinking regarding salvation (7D). Subsequent tweets repair the ritualistic behavior, replacing it with inclusive prayers for all to heal (Example 7E).¹⁴ To retaliate against the support shown for the triggering act and the rejection of divine impoliteness, those in favor of preserving the Islamic prayer enact five types of actions: attacking the tweet author, ridiculing him, questioning his identity as Muslim, provoking him and the supporters by repeating the questioned prayer, and defining Muslim identity. Example 8A, for instance, attacks the author's character by portraying him as a pseudo-intellectual. The commentators ask God to heal him, a sarcastic statement uttered in the Arabic context to those who are considered mentally disturbed. In Example 8B, the commentator questions whether the author really is Muslim (being ex-Muslim is a crime that may lead to incarceration in Islamic societies). 
By asking the author if he would be bothered in the afterlife when the prophet of Islam calls out for his nation on Judgment Day, Example 8B further claims that group identity (including the belief in exclusiveness) is central to Islamic identity. Accordingly, this example questions the author's loyalty to Islam and Islamic identity. Example 8D also defends exclusiveness, directly stating that exclusiveness is a natural (rather than sinful) trait and an outcome of love for oneself first. Example 8C seeks to accentuate the prayer's provocation and Islamic exclusiveness by adding the modifier only (God heal all Muslims who are ill. Only). The author is then bullied through a direct, ironic inquiry about his attitude before the commentator signs off with go to hell, sir, indicating rudeness and lack of care. Example 8E questions the author's authority and the legitimacy of his triggering question, indicating that in Islam any form of social change must be sanctioned by religion, whether by God, as in past examples, or by religious men, as in this example. Putting forth the need to sanction behavior through religious texts or religious men alerts those supportive of change to mirror the linguistic strategies their counterparts deploy to shift consciousness. When emotions run high and the legitimacy of requesting the moral shift is questioned, those supportive of change realize the necessity of referencing divine sources if they are to succeed. They additionally realize the resourcefulness of questioning (as per their counterparts) as a linguistic strategy to bring about change. In Example 9, a commentator turns the tables on the divine impoliteness group by asking whether the questioned prayer is a hadith (an authoritative text): if so, it shall remain unchanged, but if it is simply a historic prayer, then change is possible. This question shifts the balance and exposes the resistance to change endemic to Islamic societies, where many Islamic texts and practices are treated as authoritative (sanctioned by God and his prophet) despite a lack of supporting evidence. The prayer is found not to be a hadith. Recognizing that the questioned prayer was not an authoritative text (i.e., neither a Quranic verse nor a hadith) helped advance the discourse of shifting the Islamic moral order, motivating others to pose more questions and proceed with humanitarian-inspired change that questions the Islamic moral order oriented around exclusiveness. Divine politeness: Those in favor, say aye The examples in this section present many questions and answers that move the discussion of the Islamic moral order from nonnegotiable (authoritative) to internally persuasive, capable of being discussed and reconciled with humanitarian tenets and love for all. Together, these examples illustrate the workings of (im)politeness and Islamic intersubjectivity concerning the Islamic moral order: namely, how the instigator used impoliteness as a linguistic strategy to disrupt Islamic intersubjectivity, how some commentators used impoliteness to maintain intersubjectivity by referencing religious texts that sanction exclusion, and how other commentators used politeness to disrupt and shift intersubjectivity by referencing texts that encourage inclusion. Discussion and Conclusion In this paper, I used impoliteness as an analytical lens to capture the shift in the Islamic moral order as manifested on Arabic Twitter. 
The analysis specifically identified ten strategies commentators used to enact the rites of moral aggression and alternately employ and reject divine impoliteness in response to a triggering cultural attack: discrediting the moral judgment, projecting onto other cultural groups, responding to the projection, referencing authoritative texts, considering Islamic moral responsibilities toward others, attacking the existing Islamic moral order, launching ridicule and counterattacks against the triggering author, turning to religious clarification, proposing legitimate negotiation of the Islamic moral order, and initiating the start of an actual shift in the moral order. This examination of impoliteness was useful for understanding what Arabs do in digital contexts and why such actions matter, thus aiding in capturing one historic digital moment (of many) made possible by the agency of Arabs on Twitter. Although present sociolinguistic research suggests that attacked individuals appropriate impoliteness to enhance group identity (Georgakopoulou and Vasilaki 2018) or to resist a particular moral order (Graham 2018), this paper demonstrated that, through the rites of moral aggression, impoliteness-oriented discourse served to create and maintain alliances (Graham 2007, 2008), help negotiate personal relations (Locher 2018), and ignite a reshaping of cultural identities. Specifically, commentators shifted from using divine impoliteness to justify a questioned moral order to appropriating divine politeness to justify the change in the Islamic moral order and reconcile it with humanitarian principles. Accordingly, this study demonstrates that impoliteness is not only a relational concern at the linguistic level, but a cultural concern at the social level - key to disrupting an old intersubjectivity and erupting a new intersubjectivity. In their efforts to create this new intersubjectivity, Arabs are not just repairing problematic religious texts (as I demonstrate in Al Zidjaly [2020]); they also are highlighting the non-aggressive, non-impolite texts as a source for creating a new moral order. These findings foreground the call made by Kádár (2017a) to examine the workings of impoliteness and moral order in under-studied non-Western cultures. Doing so is needed to properly theorize impoliteness-oriented discourse because, as a cultural tool, its functions are likely to vary across cultures. Impoliteness therefore merits continued examination in digital contexts, as social media platforms provide heretofore unprecedented access to different types of data, cultures and actions (KhosraviNik 2016; Al Zidjaly 2019b). According to Blommaert (2018), social media moreover provide the opportunity to test and fully theorize terms and concepts; in this case, they allowed me to linguistically identify a new function of impoliteness that goes beyond relational work to cultural work with larger, yet-to-be-realized effects. Linguistically analyzing Arabs' Twitter-based negotiation process following a cultural attack also revealed the role that religion can play as a resource for impoliteness, rituals and the moral order (while highlighting the role that intertextuality, questions and pronouns can play in the negotiation process). 
The centrality of religion to Arab identity suggests that the key to advancing Arab reform might lie in intertextually referencing the inclusive religious teachings and texts needed to sanction the reconciliation of Islam with the tenets of the 21st century - shifts that are key in an increasingly digitized and globalized world. Although this might be irksome for ex-Muslim reformers, this route may offer the most expedient path to change, given the religiously engrained nature of Arabic societies (Lewis 2001). Further, as this analysis indicated, authoritative Islamic texts allow for various interpretations, and even anecdotes of actions and Islamic practices assumed to be authoritative may actually be malleable cultural practices (see Example 9), underscoring the importance of ongoing examination of such texts. Impoliteness as a cultural practice connected to the moral orders of societies therefore was shown to be a driving force of the Arabic reform project, as it was the negative reactions produced by divine impoliteness that prompted an attitude shift. Impoliteness also was central to unraveling and understanding social change. This bears further examination in different cultural contexts and social media platforms to adequately theorize the links between impoliteness, moral order and social change. In sum, this paper contributes to advancing the Arabic reform movement I documented in Al Zidjaly (2019a). The analysis not only contributes to impoliteness and social media research, but also to research on Arab identity and to sociolinguistic theory and method. Impoliteness-oriented discourse, as a key to cultural revolution, is an important tool in the process of cultural reflexivity occurring in digital discourses among Arabs. Giddens (1990) noted that such reflexivity is a main ingredient in the creation of democratic societies. Being able to witness the negotiation has made it easier to fathom what goes into the making of Arab identity, and analyzing the workings of such cultural reflection has provided a rare glimpse into the shifts needed for Arabs to integrate into an increasingly globalized, connected world. This is a notable counterpoint to the cynicism typically surrounding social media actions and actual change (see Morozov 2011 for a discussion). The ramifications and extent of such changes in Islamic society are yet to be measured; in the meantime, divine politeness appears to have ignited change among the participating Arab commentators. My ongoing ethnographic documentation of Arabs' digital actions demonstrates that, since the represented tweet and ensuing discussions, inclusive Islamic prayers frequently appear on Twitter and WhatsApp. They signal an actual shift in an Islamic identity historically centered on exclusiveness. Twitter therefore has played a key role in providing Arabs with a platform to engage in cultural reflexivity, and impoliteness has provided Arabs with the linguistic tool to elevate their societies.
Association between Genetic Variants in DNA and Histone Methylation and Telomere Length Telomere length, a biomarker of aging and age-related diseases, exhibits wide variation between individuals. Common genetic variation may explain some of the individual differences in telomere length. To date, however, only a few genetic variants have been identified in previous genome-wide association studies. As emerging data suggest epigenetic regulation of telomere length, we investigated 72 single nucleotide polymorphisms (SNPs) in 46 genes involved in DNA and histone methylation, as well as in telomerase and telomere-binding proteins and the DNA damage response. Genotyping and quantification of telomere length were performed in blood samples from 989 non-Hispanic white participants of the Sister Study, a prospective cohort of women aged 35-74 years. The association of each SNP with logarithmically transformed relative telomere length was estimated using multivariate linear regression. Six SNPs were associated with relative telomere length in blood cells with p-values <0.05 (uncorrected for multiple comparisons). The minor alleles of BHMT rs3733890 G>A (p = 0.041), MTRR rs2966952 C>T (p = 0.002) and EHMT2 rs558702 G>A (p = 0.008) were associated with shorter telomeres, while minor alleles of ATM rs1801516 G>A (p = 0.031), MTR rs1805087 A>G (p = 0.038) and PRMT8 rs12299470 G>A (p = 0.019) were associated with longer telomeres. Five of these SNPs are located in genes coding for proteins involved in DNA and histone methylation. Our results are consistent with recent findings that chromatin structure is epigenetically regulated and may influence the genomic integrity of the telomeric region and telomere length maintenance. Larger studies with greater coverage of the genes implicated in DNA methylation and histone modifications are warranted to replicate these findings. Introduction As in most eukaryotic organisms, human chromosomes are capped with tandem copies of a 6-base-pair telomeric repeat (-TTAGGG-) that helps prevent incomplete DNA replication and genomic degradation [1]. Telomeres shorten with each cell division, and this progressive shortening has been postulated to be a causal factor, or at least an indicator, of organismal aging [2]. Telomere length in blood cells is inversely related to chronological age and has been associated with age-related disorders such as hypertension [3] and cardiovascular disease [4,5]. Significant associations with telomere length have also been found for SNPs in MEN1, MRE11A, RECQL5, and TNKS in a study evaluating 43 telomere-associated genes, including genes encoding telomerase, shelterin proteins and proteins involved in DNA repair [19]. Although not included as a candidate pathway in that study [19], emerging data suggest that epigenetic modification might be another regulatory mechanism of telomere length. In mouse models, knockout of histone methyltransferases [20] or of DNA methyltransferases [21] has been shown to result in abnormal telomere elongation. Telomeric DNA repeats lack CpG sites and are not directly methylated, but subtelomeric DNA is heavily methylated, and its methylation correlates with telomere length and telomeric recombination in human cancer cell lines [22]. The purpose of the present study is to investigate genetic variants in DNA and histone methylation as well as other telomere biology-associated proteins in relation to telomere length in blood cells. 
Results Relative telomere length, estimated as the ratio of telomeric DNA to single-copy gene DNA (T/S ratio), ranged from 0.43 to 2.71 with an average of 1.25 among the 989 women in the present study. Table 1 shows the associations between relative telomere length and 38 SNPs in genes involved in telomere biology and DNA damage response. Only one of these SNPs (ATM rs1801516 G>A) was found to be associated with relative telomere length after adjustment for age and breast cancer diagnosis (p = 0.031 for the recessive model). In contrast, we found suggestive evidence of an association with relative telomere length for five of the 33 SNPs in genes involved in DNA and histone methylation (Table 2). For the six SNPs that were significantly associated with relative telomere length at α = 0.05, we further estimated multivariable-adjusted relative telomere length by genotype under different genetic models and report the model with the smallest p-value (Table 3). The minor alleles of BHMT rs3733890 G>A, MTRR rs2966952 C>T and EHMT2 rs558702 G>A were associated with shorter telomeres, while minor alleles of ATM rs1801516 G>A, MTR rs1805087 A>G and PRMT8 rs12299470 G>A were associated with longer telomeres. Age and lifestyle factors like obesity and smoking are known to be important determinants of telomere length. In our data, however, there were no significant associations of obesity or smoking with relative telomere length [23]. Age was significantly associated with telomere length, but there was no evidence of effect modification of the association between telomere length and individual SNPs by age group (<55 years vs. ≥55 years). Discussion A few SNPs have been related to telomere length, but other common genetic variations related to telomere length remain to be discovered. In the present study, we carried out an analysis of common genetic variations in candidate genes in relation to telomere length in blood cells, and observed suggestive evidence of associations with telomere length for several polymorphisms in genes involved in DNA and histone methylation. We found that women inheriting the variant allele of BHMT rs3733890, MTRR rs2966952 and EHMT2 rs558702 had shorter telomeres, whereas women inheriting the variant alleles of MTR rs1805087 and PRMT8 rs12299470 had longer telomeres. Epigenetic modifications are associated with telomere length [24]. Telomeres are flanked by large blocks of heterochromatin, which stabilize repetitive DNA sequences by inhibiting recombination between homologous repeats [25]. DNA methylation and histone H3 methylation at lysine 9 are associated with repressed chromatin [26], and deregulation of epigenetic modifications has long been known to affect the integrity of the telomeric region [25]. Conversely, in the absence of telomerase the progressive shortening of telomeres results in a variety of epigenetic changes including increased histone acetylation, decreased histone methylation, and altered subtelomeric DNA methylation [24]. Together, these findings suggest that epigenetic changes and the regulation and maintenance of telomere length are intertwined processes. We observed that rs12299470, located in intron 1 of PRMT8, is associated with longer telomeres. PRMT8 belongs to a family of protein arginine methyltransferases (PRMTs) [27] and recognizes a glycine- and arginine-rich (GAR) motif as a preferred methylation site [28]. 
It was recently found that the shelterin component TRF2 contains the GAR motif, and deletion of PRMT1 promotes the formation of dysfunctional telomeres by inhibiting the binding of TRF2 to telomeric DNA [29]. The role of PRMT8 in telomere stability and function remains to be fully elucidated. However, it is interesting to note that PRMT8 was identified because of its high degree of sequence homology with PRMT1 [27]. Euchromatic histone-lysine N-methyltransferase 2 (EHMT2) is a key histone methyltransferase [30] and is known to be particularly important for histone methylation of euchromatin [30]. EHMT2 rs558702 is located in the 5′ flanking region of the EHMT2 gene (4,862 bp upstream of the transcriptional start position) and is predicted by TFSEARCH (http://www.cbrc.jp/research/db/TFSEARCH.html) to lie in a putative binding site of the v-Myb or c-Myb transcription factors. Therefore, it is possible that the variant allele limits the binding of Myb transcription factors to this consensus site and reduces EHMT2 gene expression. Enzymes of folate single-carbon metabolism play an essential role in the synthesis of DNA precursors and the remethylation of homocysteine for S-adenosylmethionine (SAM)-dependent DNA methylation [31]. Among those enzymes are betaine:homocysteine methyltransferase (BHMT), 5-methyltetrahydrofolate:homocysteine methyltransferase (MTR) and 5-methyltetrahydrofolate-homocysteine methyltransferase reductase (MTRR). In the present study, BHMT rs3733890 G>A was associated with shorter telomere length, and MTR rs1805087 A>G was related to longer telomeres. BHMT rs3733890 is a missense mutation resulting in the conversion of an arginine residue to a glutamine residue at codon 239 in exon 6, although the variant allele does not appear to change enzyme activity or homocysteine levels [32,33]. However, carriers of the variant allele have been reported to have favorable health profiles, such as a low prevalence of coronary artery disease [32] and reduced risk of several congenital anomalies such as orofacial cleft [34] and neural tube defects [35]. MTR rs1805087 is a missense change resulting in an amino acid substitution from aspartic acid to glycine at codon 919 in exon 26. The variant allele of this SNP has been associated with a moderate increase in homocysteine levels [36]. We also observed shorter telomeres associated with MTRR rs2966952 C>T. This SNP was selected in the present analysis because it is located in the 5′ flanking region of the MTRR gene (1,187 bp upstream of the transcriptional start position) and is predicted by TFSEARCH (http://www.cbrc.jp/research/db/TFSEARCH.html) to destroy the binding site of the transcription factor C/EBPβ [37]. However, the SNP also leads to a lysine-to-arginine amino acid change at codon 56 in exon 2 of the FASTKD3 gene, which encodes fast kinase domain-containing protein 3, a mitochondrial protein essential for cellular respiration [38]. Although the functional implication of this gene in telomere length is not known, it is possible that the observed association between rs2966952 and telomere length is mediated through effects on FASTKD3 rather than MTRR. Two SNPs in the MTHFR gene (rs1801131 and rs1801133) were not associated with relative telomere length. 
This finding is in agreement with a previous report showing a weak association between the MTHFR 677C>T polymorphism (rs1801133) and longer telomeres only among those having lower-than-median plasma folate concentration, but no overall association between the variant allele and telomere length in men [39]. Possible effect modification by plasma folate status could not be evaluated in the present study. The ATM (ataxia telangiectasia mutated) gene encodes a protein kinase, ATM, that regulates a large number of proteins including the checkpoint kinases CHK1 and CHK2 [40,41]. Induction of the checkpoint kinases is crucial for cell cycle arrest in response to DNA damage, and defective checkpoint responses can cause genomic instability and neoplastic transformation [40]. The present study found longer telomere length associated with ATM rs1801516 G>A. The polymorphism is a missense mutation resulting in an amino acid change from aspartic acid to asparagine (dbSNP) that was predicted by PolyPhen [42] to be possibly damaging. However, a previous study by Mirabello et al. examined this SNP and 8 additional SNPs in the ATM gene and did not find a significant association with telomere length [19]. Several limitations of the present study should be discussed. First, as a large number of statistical tests were performed, our findings are particularly subject to type I (false-positive) error. However, we chose to report the p-values without correction for multiple comparisons because the SNPs in our study were not selected randomly but from candidate genes based on functional prediction using SIFT [43] and PolyPhen [42]. Still, the results need to be interpreted with caution given that none of the associations would have reached the same level of significance after adjustment for multiple comparisons. Second, some important candidate genes were not evaluated in this study. For example, the H3-K9 methyltransferases SUV39H1 and SUV39H2 were shown to be associated with the heterochromatin protein HP1 [44], and their absence resulted in modification of telomeric chromatin structure and subsequent alteration in telomere length [20]. However, none of the SNPs in these two genes met our criteria for selection. Lastly, we should point out that this study took place within a cohort enriched for a family history of breast cancer. While these women might have different allele frequencies or telomere lengths than a sample of the general population, we have no a priori reason to believe that specific characteristics of this population would impact the observed relationship between SNPs and telomere length. However, such an effect remains a possibility until the findings are verified in other, more general populations. In conclusion, the present study found associations with telomere length for candidate SNPs (BHMT rs3733890, MTRR rs2966952, EHMT2 rs558702, MTR rs1805087 and PRMT8 rs12299470) that are implicated in DNA and histone methylation. These results support existing findings of epigenetic regulation of telomere length. These novel associations with telomere length require further replication in larger studies with more substantial genomic coverage as well as functional characterization of the variant alleles. Ethics Statement All individuals were informed about the purposes, requirements, and their rights as study participants. 
Written informed consent was obtained from all participants. Study Population and Telomere Length Measurement Data are from the Sister Study, a nationwide cohort study of environmental and genetic risk factors for breast cancer among women aged 35 to 74 years who have a sister with breast cancer [45]. A case-cohort analysis within the Sister Study was performed to examine the relationship between telomere length in blood cells and breast cancer risk in 342 incident breast cancer cases and 736 subcohort members who were randomly selected from the 29,026 participants enrolled by June 1, 2007. Methods for relative telomere length measurement and characteristics of the study population have been previously described [23]. Briefly, genomic DNA was extracted from prospectively collected frozen blood samples using an Autopure LS (Qiagen) in the NIEHS Molecular Genetics Core Facility, and 10 ng of the extracted DNA was robotically aliquoted and plated in duplicate onto each of 4 replicate 384-well plates. Telomere length was determined as the ratio of telomere repeat copy number to single-copy gene copy number (T/S ratio) relative to that of an arbitrary reference sample, using the monochrome multiplex quantitative PCR protocol. This method has been shown to give a high correlation (R² = 0.84) with telomere length determined by traditional Southern blot analysis [46]. Plates were run on a Bio-Rad CFX384 (Hercules, CA) with the cycling parameters previously described [23]. A 5-point standard curve ranging from 1.9 to 75 ng in a 2.5-fold dilution series was run in each assay plate to estimate the T (telomere) and S (albumin single copy) values for each sample using Bio-Rad CFX Manager software. Standard curve efficiencies for both primer sets were above 90%, and regression coefficients were at least 0.99 in all PCR runs. Plates were verified for overall quality control parameters. The average coefficient of variation (%CV) was 11%, and the intraclass correlation coefficient (ICC) of a single T/S ratio was 0.85. Individual estimates were obtained from the average of up to eight replicate T/S ratio values. Genotyping As part of our study of telomeres and breast cancer risk, we selected a broad group of candidate genes related to telomere biology, such as genes encoding telomerase and telomere-binding proteins, DNA repair and cell cycle checkpoint proteins, and epigenetic regulators of chromatin structure. Candidate SNPs were selected using the SNPinfo GenePipe tool, a web-based SNP selection tool that can integrate GWAS results with SNP functional predictions and linkage disequilibrium (LD) information [47]. Briefly, a list of candidate genes (N = 140), including some previously linked to regulation and maintenance of telomere length, was first filtered against the Cancer Genetic Markers of Susceptibility (CGEMS) breast cancer genome-wide association study (GWAS) [48] results to exclude genes that showed no evidence of association with breast cancer (i.e., had no SNPs with p<0.05). However, some candidate genes that were very poorly represented in the CGEMS GWAS panel were retained even if they had no SNPs with p<0.05. We define poor representation as having <20% of known common SNPs (as reported for that gene in dbSNP and with MAF ≥0.05) in high LD (r² ≥0.8) with at least one SNP in the GWAS panel. We then used SNPinfo to select SNPs from the remaining set of candidate genes. 
In addition to SNPs showing associations with breast cancer risk, SNPinfo favors the selection of SNPs with predicted functional effects that are in high LD with SNPs in the GWAS panel. A total of 72 SNPs in 46 genes were included in the final analysis. Genotyping was conducted by the NIEHS Molecular Genetics Core Facility, using a custom-designed Illumina GoldenGate genotyping panel. A total of 20 HapMap trios (20 × 3 = 60 samples) were genotyped to evaluate parent-parent-child (P-P-C) error. A total of 20 Sister Study sample duplicates were included to monitor replication error. Illumina BeadStudio genotyping software (version 1.6.3) was used to call genotypes. Individual genotypes with an Illumina GenCall (GC) score below 0.25 were assigned as missing. The overall call rate was 0.998. Both the averaged P-P-C genotype error and the averaged replication error were 0. The concordance between our genotype data and the HapMap data for the 20 HapMap trios averaged 0.998 per SNP. Statistical Analysis Of the 1,078 women who had relative telomere length measurements in baseline samples, the current analysis was restricted to 989 non-Hispanic white women, comprising 325 incident breast cancer cases and 664 subcohort members. Relative telomere length was not associated with breast cancer in our data [23], and minor allele frequencies of the SNPs were highly correlated between cases and subcohort members (Pearson r = 0.9981). Relative telomere length measurements were skewed to the right and therefore were logarithmically transformed. The association of each SNP with relative telomere length was estimated using linear regression models that included age as a continuous variable and breast cancer status. Model fit was evaluated by comparing additive, dominant and recessive linear regression models. Reported p-values are nominal two-tailed p-values and have not been corrected for multiple comparisons. All analyses were performed using Stata 10.0 (StataCorp, College Station, TX).
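To make the modeling step concrete, the sketch below mirrors the per-SNP analysis just described, written in Python (pandas/statsmodels) rather than the authors' Stata, so it is illustrative only: log-transform the T/S ratio, regress it on each SNP under additive, dominant, and recessive codings with age and case status as covariates, and keep the best-fitting coding. The data-frame column names (ts_ratio, age, case, and the SNP columns coded as 0/1/2 minor-allele counts) and both function names are hypothetical.

```python
import numpy as np
import pandas as pd  # df below is assumed to be a pandas DataFrame
import statsmodels.formula.api as smf

def code_genotype(g, model):
    """Recode a 0/1/2 minor-allele count under a given genetic model."""
    if model == "additive":
        return g                     # 0, 1, or 2 copies of the minor allele
    if model == "dominant":
        return (g >= 1).astype(int)  # any minor allele vs. none
    if model == "recessive":
        return (g == 2).astype(int)  # two minor alleles vs. fewer
    raise ValueError(f"unknown model: {model}")

def fit_snp(df, snp, models=("additive", "dominant", "recessive")):
    """Fit one linear model per coding and return the best-fitting one.

    Mirrors the described analysis: log-transformed relative telomere
    length (T/S ratio) regressed on the SNP, adjusting for age
    (continuous) and breast cancer status (0/1).
    """
    results = []
    for model in models:
        d = df[[snp, "ts_ratio", "age", "case"]].dropna().copy()
        d["g"] = code_genotype(d[snp], model)
        d["log_tl"] = np.log(d["ts_ratio"])  # right-skewed, hence the log
        fit = smf.ols("log_tl ~ g + age + case", data=d).fit()
        results.append((model, fit.params["g"], fit.pvalues["g"]))
    return min(results, key=lambda r: r[2])  # smallest nominal p-value

# Multiple comparisons: a Bonferroni threshold for 72 tests would be
# 0.05 / 72 ~= 0.0007; none of the reported p-values (smallest: 0.002)
# reaches it, consistent with the caution expressed in the Discussion.
```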
Transcription termination factor Pcf11 limits the processivity of Pol II on an HIV provirus to repress gene expression. Many elongation factors in eukaryotes promote gene expression by increasing the processivity of RNA polymerase II (Pol II). However, the stability of RNA Pol II elongation complexes suggests that such complexes are not inherently prone to prematurely terminating transcription, particularly at physiological nucleotide concentrations. We show that the termination factor Pcf11 causes premature termination on an HIV provirus. The transcription that occurs when Pcf11 is depleted from cells or an extract is no longer sensitive to 5,6-dichloro-1-β-D-ribofuranosylbenzimidazole (DRB), a compound that causes premature termination. Hence, Pcf11 can act as a negative elongation factor to repress RNA Pol II gene expression in eukaryotic cells. Transcription elongation can be defined as the process by which RNA polymerase traverses a gene while synthesizing RNA. For protein-encoding genes in eukaryotic cells, a plethora of proteins have been identified that either stimulate or inhibit elongation by RNA polymerase II (Pol II) (Sims et al. 2004; Saunders et al. 2006). Many proteins influence elongation by directly associating with Pol II, while others act indirectly by modifying the chromatin structure. Under conditions that inhibit elongation, Pol II is prone to premature termination, which is often described as a reduction in processivity. For example, treatment of cells with the transcription inhibitor 5,6-dichloro-1-β-D-ribofuranosylbenzimidazole (DRB) causes production of short transcripts both in vitro and in vivo, suggesting that DRB reduces the processivity of Pol II by promoting premature termination (Fraser et al. 1979; Tamm et al. 1980; Marciniak and Sharp 1991). Mutations in several elongation factors in yeast were found to have little impact on the rate of elongation; instead, the processivity of Pol II was diminished (Mason and Struhl 2005). These and other studies of Pol II elongation imply the existence of forces that cause Pol II to prematurely dissociate and terminate transcription. Yet there is little evidence that an elongation complex containing only Pol II and nucleic acids is intrinsically unstable, as such complexes remain intact following treatment with high salt, detergents, and even proteases. Moreover, at physiological nucleotide concentrations, Pol II typically transcribes to the end of a DNA template (Izban and Luse 1991). Only two proteins are known to have the capacity to dissociate Pol II elongation complexes from the DNA template. TTF2 dissociates the elongation complex in an ATP-dependent manner (Jiang et al. 2004). The protein is concentrated in the cytoplasm throughout most of the cell cycle and associates with chromosomes during M phase to dissociate elongation complexes during chromosome condensation. There is no evidence that TTF2 affects transcription outside of M phase of the cell cycle in vivo. The second protein is Pcf11. Pcf11 is one of a collection of proteins involved in 3′ end processing of mRNA and transcription termination of protein-encoding genes (Buratowski 2005; Rosonina et al. 2006). It is the only protein in this collection that has been demonstrated to dissociate transcriptionally engaged Pol II from DNA; thus, Pcf11 could play a pivotal role in termination (Zhang et al. 2005; Zhang and Gilmour 2006). 
Chromatin immunoprecipitation (ChIP) analyses show that Pcf11 associates with the promoter region of some genes, although it is most concentrated at the 3′ end (Licatalosi et al. 2002; Kim et al. 2004; Zhang and Gilmour 2006). Here we investigate the possibility that Pcf11 can act to inhibit transcription in vivo by causing premature termination. We chose the HIV provirus as a model because studies have demonstrated that inefficient transcription of the provirus is associated with premature termination (Kao et al. 1987; Laspia et al. 1989; Cullen 1990, 1991; Feinberg et al. 1991; Marciniak and Sharp 1991). We provide evidence that Pcf11 represses transcription of the provirus by causing premature termination. Results and Discussion Depletion of Pcf11 with small interfering RNA (siRNA) results in increased virus production U1 cells contain two transcriptionally repressed copies of the HIV provirus (Verdin et al. 1993). Upon treatment with the phorbol ester PMA, provirus transcription and replication were increased sevenfold as measured by levels of HIV p24 in the culture media (Fig. 1A). This agrees with previous studies and suggests that transcription of the provirus in our untreated cells was subject to repression (Folks et al. 1987, 1988). When we depleted Pcf11 from U1 cells using siRNA, we observed a threefold increase in the level of HIV p24 (Fig. 1A). Induction of HIV provirus transcription with PMA involves NF-κB, and ChIP showed that the level of NF-κB associating with the provirus increased fourfold (Fig. 1B, cf. lanes 5 and 6). In contrast, depletion of Pcf11 did not alter the level of the p65 subunit of NF-κB associated with the provirus (Fig. 1B, cf. lanes 7 and 8). Therefore, the induction of HIV transcription and replication did not appear to be a result of ectopic activation of U1 cells by siRNA but rather a direct consequence of depleting Pcf11. RT-PCR and Western blot analysis indicated that the Pcf11 siRNA was indeed depleting Pcf11 message and protein (Fig. 1C,D). ChIP analysis detects Pcf11 in the promoter region of the HIV provirus If Pcf11 were mediating premature termination, we predicted that it might be located at the HIV promoter within the 5′ long terminal repeat (LTR). ChIP analyses revealed significantly more Pcf11 in the HIV promoter region than in a region ∼2.4 kb downstream (Fig. 2). ChIP analysis reveals that depletion of Pcf11 increases the processivity of Pol II To investigate the mechanism by which depletion of Pcf11 increased HIV expression, we used ChIP to monitor the interactions of Pol II and TBP. TBP was monitored to determine if treatments increased the transcriptional potential of the LTR by promoting binding of TBP (Raha et al. 2005). TBP and Pol II were clearly present at the promoter in untreated U1 cells (Fig. 2A [lanes 10,14], B). This agrees with previous studies showing that HIV transcription is controlled at least in part by regulating Pol II after it associates with the HIV promoter (Cullen 1991). Treatment of U1 cells with PMA increased binding of TBP and Pol II by fivefold, indicating that binding of TBP is in part limiting HIV transcription in U1 cells prior to PMA treatment. In contrast, depleting Pcf11 increased Pol II binding by 2.5-fold in the promoter region but had no effect on TBP. Therefore, the increase in transcription seen upon depletion of Pcf11 is not likely due to an increase in the number of initiation complexes. 
Depletion of Pcf11 could be increasing the processivity of Pol II, thus leading to an increase in virus production. Measurement of the association of Pol II with a region 2.4 kb downstream from the transcription start site supports this conclusion. Minimal Pol II could be detected at the downstream region in untreated cells; there was ∼10-fold more Pol II at the promoter than at the region 2.4 kb downstream (Fig. 2A [lanes 10,12], B). This indicates that those Pol II molecules associated with the promoter region rarely transcribe to the distal region in uninduced cells. [Figure 2. Depletion of Pcf11 increases the density of Pol II downstream from the promoter region. Interactions of Pol II, TBP, and Pcf11 with the provirus were determined by ChIP. (A) Representative data for one ChIP experiment. Immunoprecipitated DNA, 10% input DNA, and 2% input DNA were subjected to PCR amplification for the regions corresponding to the promoter region in the 5′ LTR (+1 to +248) or a downstream region (+2415 to +2690). (B) Quantification of ChIP data from three independent experiments. The control samples recovered with nonspecific antibody are shown in Figure 1B.] In contrast, 2.5-fold and fivefold more Pol II was associated with these downstream sequences upon Pcf11 depletion with siRNA or upon PMA induction, respectively (Fig. 2A [lanes 9,11], B). Furthermore, similar amounts of Pol II were detected at the 5′ LTR and the 2.4-kb region, indicating that the polymerases were more processive than in untreated or control siRNA-treated cells. It is interesting to note that we observed no change in the level of Pcf11 associated with the promoter region following PMA induction (Fig. 2A, lanes 17,18). However, if we consider the increase in Pol II observed in the promoter region, then the ratio of Pcf11 to Pol II decreases fivefold. Hence, a significant fraction of the Pol II molecules initiating transcription in the presence of PMA appear not to interact with Pcf11. This is consistent with the hypothesis that transcriptional activation inhibits the association of Pcf11 with the elongation complex. To further investigate the association of Pcf11 with the provirus, we monitored the interaction of Pcf11 with the 3′ LTR using ChIP. This analysis is complicated by the fact that the 5′ and 3′ LTRs of the provirus have the same sequence. Consequently, we PCR-amplified a region that corresponded to both the 5′ and 3′ LTRs and compared the results with PCR amplification of the 5′ LTR alone; the latter was achieved by having one PCR primer hybridize to a site just downstream from the 5′ LTR. Amplification of both LTRs from Pcf11 ChIP DNA resulted in ∼2.5-fold more signal for cells induced with PMA than for uninduced or control siRNA-treated cells (Fig. 3B [cf. lanes 9,10 and 12], C). As expected, the signal for the cells treated with Pcf11 siRNA was significantly less than for the control samples (Fig. 3B, lane 11). Amplification of only the 5′ LTR from the Pcf11 ChIP DNA resulted in similar signals for the PMA-induced and uninduced U1 cells (Fig. 3B [cf. lanes 9,10 and 12], C). Since amplification of both LTRs yielded a 2.5-fold stronger signal for PMA-induced cells than for uninduced cells while amplification of only the 5′ LTR yielded similar signals, we conclude that PMA induction results in increased recruitment of Pcf11 to the 3′ LTR. Presumably the Pcf11 that associates with the 3′ LTR participates in 3′ end formation and transcription termination. 
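The quantitative reasoning in this section comes down to two simple calculations: a downstream-to-promoter ratio that serves as a readout of Pol II processivity, and the subtraction that isolates the 3′ LTR signal from the dual-LTR amplification. The short sketch below illustrates both; the helper functions and the plugged-in numbers are hypothetical stand-ins for background-corrected band intensities, not the authors' quantification code.

```python
# Hypothetical helpers (illustrative, not the authors') for the two
# calculations underlying this section's ChIP interpretation.

def processivity_ratio(promoter_signal: float, downstream_signal: float) -> float:
    """Fraction of promoter-bound Pol II signal recovered 2.4 kb downstream.

    Near 1: polymerases that start also reach the distal region (processive).
    Near 0: polymerases rarely reach it (prone to premature termination).
    """
    return downstream_signal / promoter_signal

def three_prime_ltr_signal(both_ltrs: float, five_prime_only: float) -> float:
    """Isolate the 3' LTR signal when the two LTRs share the same sequence.

    The 5'-LTR-specific PCR anchors one primer just downstream of the 5' LTR,
    so the 3' LTR contribution is the dual-LTR signal minus the 5'-only signal.
    """
    return both_ltrs - five_prime_only

# Untreated cells: ~10-fold more Pol II at the promoter than downstream.
print(processivity_ratio(promoter_signal=10.0, downstream_signal=1.0))   # 0.1
# PMA induction: similar Pol II at both regions, i.e., a ratio near 1.
print(processivity_ratio(promoter_signal=10.0, downstream_signal=10.0))  # 1.0
```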
However, since depletion of Pcf11 increases virus production, Pcf11 is either not essential or the level of depletion is insufficient to interfere with processes involving the 3′ LTR. For example, transcription termination mediated by the Xrn2 RNA exonuclease could provide an alternative pathway for termination (West et al. 2004). Transcription in cells caused by depletion of Pcf11 is insensitive to the elongation inhibitor DRB If Pcf11 induces premature termination, then transcription caused by depletion of Pcf11 might no longer be sensitive to DRB, a transcriptional inhibitor that causes premature termination (Fraser et al. 1979; Tamm et al. 1980; Marciniak and Sharp 1991). DRB sensitivity has been ascribed to two factors, NELF and DSIF, but neither of these factors has been shown to cause Pol II to dissociate from DNA; they merely slow the rate of elongation (Wada et al. 1998; Yamaguchi et al. 1999). We tested if Pcf11 might be involved in conferring DRB sensitivity by comparing the DRB sensitivity of transcription in cells induced by PMA and by siRNA-mediated depletion of Pcf11. Transcription of the first 40 nucleotides (nt) of the provirus and of a region 5.3 kb downstream from the transcription start was monitored by RT-PCR. The level of transcript derived from the first 40 nt of the provirus was unchanged by any of our treatments (Fig. 4, all lanes, +1 to +40). This was expected because transcription of this region occurs even under uninduced conditions, and stable short transcripts accumulate in the cytoplasm of these cells (Toohey and Jones 1989; Kessler and Mathews 1992). PMA treatment caused a significant increase in transcription of the distal region, consistent with an increase in the processivity of Pol II (Fig. 4, cf. lanes 1 and 3, +5396 to +5531). In accord with its action as an elongation inhibitor, DRB repressed PMA-dependent transcription of the distal region (Fig. 4, cf. lanes 3 and 4). In contrast, transcription induced by depletion of Pcf11 was insensitive to DRB (Fig. 4, cf. lanes 5 and 6). We note that similar results were obtained with DRB concentrations of 5 and 20 µM (data not shown). These low concentrations of DRB were chosen to reduce inhibition of transcription of cellular genes (Mancebo et al. 1997). [Figure 3 legend, continued: Immunoprecipitated DNA and the indicated percentages of input DNA were subjected to PCR amplification of the 5′ LTR alone or of both the 5′ and 3′ LTRs. The amount of input for amplification of both the 5′ and 3′ LTRs was half that of the input for the 5′ LTR. (C) Quantification of ChIP data from three independent experiments. The signals for the 3′ LTR were calculated by subtracting the intensity of the signals for the 5′ LTR alone from the signals for the sum of the 5′ and 3′ LTRs.] Pcf11 contributes to the DRB sensitivity of transcription in vitro To determine if Pcf11 contributes to DRB sensitivity in vitro, we performed Pcf11 immunodepletion and Pcf11 add-back experiments with HeLa nuclear extracts. Western blot analysis shows that the Drosophila Pcf11 antibody selectively depletes Pcf11 from the extract (Fig. 4B). Spt5 and NELF-B are subunits of DSIF and NELF, respectively, and neither is depleted by the Pcf11 antibody. Transcription reactions were performed on an HIV template spanning the region from −415 to +214 with mock-depleted and Pcf11-depleted nuclear extracts. 
DRB at 50 µM dramatically inhibited transcription in the mock-depleted extract but had little effect on the Pcf11-depleted extract (Fig. 4C, lanes 1-4). To attribute the loss of DRB sensitivity to depletion of Pcf11, we tested if addition of purified Pcf11 to the extract would restore DRB sensitivity. Human Pcf11 is not well characterized (de Vries et al. 2000). Hence, we added back a 283-amino-acid fragment of Drosophila Pcf11 that had been previously shown to dismantle an elongation complex (Zhang and Gilmour 2006). Addition of dPcf11 to the Pcf11-depleted extract had no effect on transcription in the absence of DRB (Fig. 4C, lanes 5-7). In contrast, transcription by the Pcf11-depleted extract became sensitive to DRB upon addition of the Drosophila Pcf11 (Fig. 4, lanes 8-10). We tested if the ability of Pcf11 to restore DRB sensitivity correlated with the capacity of Pcf11 to dismantle elongation complexes by determining if a mutant would restore DRB sensitivity. Mutation of amino acids 75, 76, and 77 inactivates the dismantling activity of Pcf11 (Zhang and Gilmour 2006). Addition of this mutant to the Pcf11-depleted extract failed to restore DRB sensitivity (Fig. 4, lanes 11-13), suggesting that the ability of Pcf11 to dismantle the elongation complex was involved in conferring DRB sensitivity. Conclusion The processivity of Pol II is recognized as an important parameter in regulating transcription (Mason and Struhl 2005). Pol II with poor processivity is by definition a form of the enzyme that is prone to premature termination. However, the molecular basis for premature termination is unknown. Results presented here identify the termination factor Pcf11 as a factor that causes premature termination and negatively regulates gene expression. Since HIV expression increases when Pcf11 is depleted, premature termination plays a significant role in maintaining the transcriptional latency of the provirus. The discovery that Pcf11 is involved in causing DRB sensitivity provides a possible explanation of how DRB causes premature termination. Biochemical analyses have resulted in the following model for DRB sensitivity (Peterlin and Price 2006). Transcription elongation by Pol II is inhibited by the combination of two proteins, NELF and DSIF, which associate with the elongation complex. The kinase P-TEFb overcomes this inhibition by phosphorylating one or more members of the trio of Pol II, NELF, and DSIF (Renner et al. 2001). P-TEFb is inhibited by DRB, thus allowing DRB to interfere with P-TEFb's function in overcoming the inhibition by NELF and DSIF. This model, however, lacks an explanation for premature termination because NELF and DSIF have only been shown to slow elongation. Based on our data, we propose that Pcf11 causes premature termination by acting on Pol II elongation complexes that have paused as a consequence of their association with NELF and DSIF. Previously, we obtained evidence that Pcf11 acts only on paused elongation complexes (Zhang and Gilmour 2006). Recently, we determined that NELF causes Pol II to pause in the promoter-proximal region of the HIV provirus (Zhang et al. 2007). We posit that the pause induced by DSIF and NELF allows Pcf11 to dismantle the elongation complex. Since DRB inhibits transcription of a wide spectrum of genes in human cells (Lam et al. 2001), it is possible that Pcf11 could be a significant component in the regulation of gene expression. 
Indeed, we predict that any interaction that causes Pol II to pause could render the elongation complex susceptible to Pcf11. This might explain why depleting Pcf11 caused an increase in HIV transcription in cells but not in the cell extract (Fig. 4, cf. A and C). The DNA template in cells is packaged into chromatin, which could slow elongation and render Pol II susceptible to Pcf11. Hence, in cells, chromatin structure could act in addition to repression by NELF and DSIF. [Figure 4. (A) Twelve hours post-transfection (or not), cells were incubated with or without 5 µM DRB. After 12 h, cells were stimulated with 2 ng/mL PMA or not for 12 h. Total RNA was isolated, and transcripts spanning +1 to +40 and +5396 to +5531 of HIV RNA and actin were detected by RT-PCR. (B) Western blot analysis of Pcf11-immunodepleted and mock-depleted HeLa nuclear extracts. Spt5 and NELF-B are subunits of DSIF and NELF, respectively. (C) In vitro transcription of the 5′ LTR in Pcf11-immunodepleted (lanes 3-13) or mock-depleted (lanes 1,2) HeLa nuclear extracts. DRB at 50 µM was present in those samples marked with plus signs. dPcf11 and dmPcf11, which were isolated from Escherichia coli, encompass amino acids 1-283. The amounts of exogenously added Pcf11 correspond to 0.1 µg, 0.5 µg, and 1 µg.] In contrast, the DNA templates in our in vitro transcription reactions were unlikely to be packaged into chromatin. Consequently, NELF and DSIF might be primarily responsible for rendering the Pol II susceptible to Pcf11 in these cell-free reactions. A possible role for the plethora of positive elongation factors is to counteract the premature termination activity of Pcf11. Significant new insight into transcription regulatory mechanisms might be uncovered by investigating the parameters that control the activity of Pcf11. A hint of such regulation is provided by our observation that the ratio of Pcf11 to Pol II is substantially greater at the 5′ LTR of the inactive provirus than at the 5′ LTR of the PMA-induced provirus. Thus, part of the activation mechanism could involve steps that prevent Pcf11 from associating with the elongation complex. Antibodies Anti-p65 and anti-Spt5 (sc-28678) antibodies were obtained from Santa Cruz Biotechnology, Inc. Mouse anti-β-actin antibody was obtained from Sigma-Aldrich. Anti-NELF-B antibody was obtained from Rong Li (University of Texas Health Science Center at San Antonio, San Antonio, TX) (Aiyar et al. 2004). Rabbit anti-Pcf11 antiserum was previously described (Zhang and Gilmour 2006). RT-PCR RT-PCR was performed as previously described (Zhang et al. 2007). The following primers were used to amplify different regions of the HIV-1 gene: initiated short transcripts, +1 to +40 (GGGTCTCTCTGGTTAGA and AGAGCTCCCAGGCTCA); elongated transcripts, +5396 to +5531 (GACTAGAGCCCTGGAAGCA and GCTTCTTCCTGCCATAGGAG). β-Actin was detected with the primers GTCGACAACGGCTCCGGC and GGTGTGGTGCCAGATTTTCT. Ten microliters of each PCR reaction were run on a 9% nondenaturing acrylamide gel. Gels were fixed in 10% acetic acid for 20 min, dried, and then analyzed with a PhosphorImager (Molecular Dynamics). The intensity of each band was quantified by volume analysis using ImageQuant software. Western blot analysis for Pcf11 in U1 cells Nuclei were prepared by lysing 2 × 10⁶ cells with 1% Nonidet P-40 in buffer A (10 mM HEPES at pH 7.9, 10 mM KCl, 0.1 mM EDTA, 0.1 mM EGTA, 1 mM DTT, 0.5 mM PMSF) and collecting crude nuclei by centrifugation in a microfuge at 10,000 rpm. 
Nuclei were resuspended in 50 µL of buffer C (20 mM HEPES at pH 7.9, 0.4 M NaCl, 1 mM EDTA, 1 mM EGTA, 1 mM DTT, 1 mM PMSF). An equal volume of the nuclear suspension was added to 2× SDS loading buffer (100 mM Tris-HCl at pH 6.8, 4% SDS, 0.2% bromophenol blue, 20% glycerol, 200 mM DTT) and run on an 8% SDS-polyacrylamide gel. After transfer of proteins to the nitrocellulose membrane, anti-Pcf11 polyclonal antiserum was used to probe for Pcf11, or mouse anti-β-actin antibody was used to detect β-actin. HRP-conjugated goat anti-rabbit IgG and goat anti-mouse IgG (Sigma-Aldrich) were used as secondary antibodies. Proteins were detected using the ECL Plus Western blotting detection system (Amersham Biosciences). Immunodepletion of Pcf11 from HeLa nuclear extracts and in vitro transcription Pcf11 antibody-sepharose was made by incubating 100 µL of Pcf11 antiserum with 50 µL of protein A-sepharose, followed by covalent coupling with dimethyl pimelimidate (Harlow and Lane 1999). Control antibody-sepharose was made from preimmune antiserum. To reduce nonspecific binding during depletions, the antibody-sepharose was first incubated for 4 h with 100 µL of nuclear extract generated from U1 cells (Cook et al. 2003). The antibody-sepharose was then washed extensively with 0.1 M KCl/HEMG (100 mM KCl, 25 mM HEPES at pH 7.6, 12.5 mM MgCl₂, 0.1 mM EDTA, 10% glycerol, 1 mM DTT), followed by two washes with 0.1 M glycine (pH 2.5) and extensive washing with 0.1 M KCl/HEMG. The antibody-sepharose was stored at 4°C. HeLa cell nuclear extracts were obtained from Promega (HeLaScribe Nuclear Extract in vitro Transcription System). For mock and Pcf11 depletion, 40 µL of nuclear extract were incubated for 1.5 h with 20 µL of antibody-sepharose at 4°C. Supernatants were collected with a Handee Spin Cup (Pierce) and then incubated with a fresh 20-µL portion of antibody-sepharose. Depleted extracts were stored at −80°C. The DNA template used for in vitro transcription consisted of the HIV 5′ LTR region spanning from −450 to +214 that had been PCR-amplified with the primers TGGAAGGGCTAATTCACTCCC and TTCGCTTTCAGGTCCCTGTTC. Transcription reactions were performed as instructed by the HeLaScribe Nuclear Extract in vitro Transcription System (Promega) with minor modifications. First assembled was a 22-µL solution containing 200 ng of HIV DNA template, 7 µL of HeLa Nuclear Extract 1× Transcription Buffer (Promega), 4 µL of HeLa nuclear extract (Pcf11- or mock-depleted), 1.5 µL of 50 mM MgCl₂, 10 U of RNasin, and 0.5 mM DTT. Where indicated, 1 µL of dPcf11 1-283 or dmPcf11 1-283 (freshly diluted in TE) at 0.1 µg/µL, 0.5 µg/µL, or 1.0 µg/µL was then added. The mixture was incubated at 37°C for 10 min to allow formation of preinitiation complexes. Following preinitiation complex formation, 0.5 µL of 2.5 mM DRB (in ethanol) or ethanol alone was added. Transcription was started by adding 3 µL of the following nucleotide mix: 1.0 µL of 10 mM ATP, 10 mM UTP, 10 mM GTP, and 0.4 mM CTP, and 2.0 µL of [³²P]CTP (6000 Ci/mmol, 10 mCi/mL). Transcription was allowed to occur at 37°C for 30 min. Transcription was stopped by adding 175 µL of HeLa Extract Stop Solution (Promega), and RNA was isolated as directed by the manufacturer. RNA was analyzed on a 6% polyacrylamide gel containing 8 M urea.
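As a quick sanity check on the reaction setup, the dilution arithmetic can be verified directly; the helper below is a hypothetical illustration, not part of the published protocol.

```python
# Hypothetical helper (not from the paper) to check the dilution arithmetic
# of the in vitro transcription reaction described above.

def final_concentration_mM(stock_mM: float, added_uL: float, total_uL: float) -> float:
    """Concentration of a component after dilution into the full reaction."""
    return stock_mM * added_uL / total_uL

# 0.5 uL of 2.5 mM DRB into ~25.5 uL total (22 uL mix + 0.5 uL DRB + 3 uL
# nucleotides; 26.5 uL when 1 uL of Pcf11 is also added):
drb_uM = 1000 * final_concentration_mM(stock_mM=2.5, added_uL=0.5, total_uL=25.5)
print(f"{drb_uM:.0f} uM DRB")  # ~49 uM, consistent with the stated 50 uM
```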
Roma Children Going to Primary School: The Contribution of Interagency Working to Support Inclusive Education
Inclusive education can be promoted through partnerships between agencies supporting children and families, such as those in the scope of education, healthcare, social care, and welfare. Such partnerships, often referred to as interagency working, have been found in previous studies to improve children's educational outcomes and the home-learning environment. However, evidence on impact and best practices is still limited. The aim of this paper was to study the facilitating factors and impacts of interagency working for inclusive education. Perspectives of service providers were analysed in regard to a Portuguese project aiming to promote the inclusive education and academic progress of primary school children from a Roma community living in a low-income neighbourhood. Findings indicated that facilitating factors of interagency working for promoting inclusive education included political support, participation of service users in the planning and delivery of interventions, and informal and collaborative working relationships. Positive outcomes were found regarding improved children's school attendance and academic progress, and increased involvement of parents in children's education. In Europe, Roma are among the most deprived people, facing limited access to high-quality education, labour market barriers, segregation in housing and other areas, and poor health outcomes (European Commission, 2018). The educational attainment of Roma children is lower compared to non-Roma children, and the former tend to be over-represented in special education and segregated schools (European Commission, 2011; European Union Agency for Fundamental Rights [FRA], 2018). According to a survey conducted in eleven European Union member states 1 (FRA, 2014), on average, 14% of the Roma children of compulsory school age were not attending formal education, compared to 3% of the non-Roma children living close by. The complexity and interdependence of the problems that affect Roma demand an integrated approach, cross-sectoral cooperation, and social investment in local capacities and strategies (European Commission, 2011). In line with Bronfenbrenner's bio-ecological model (Bronfenbrenner, 1979; Bronfenbrenner & Ceci, 1994; Bronfenbrenner & Morris, 2006), the provision of services needs to consider all the relevant environments in which children are embedded to effectively support them (Davidson, Bunting, & Webb, 2012). Social policy provisions can support efficient use of human capital, while fostering the social inclusion of groups that have traditionally been excluded (Morel, Palier, & Palme, 2016), such as Roma communities. 1 In Bulgaria, the Czech Republic, France, Greece, Hungary, Italy, Poland, Portugal, Romania, Slovakia, and Spain.

Interagency working and the Portuguese context
An integrated and holistic approach to support children and their families can be accomplished most effectively through partnerships between agencies, such as those in the scope of education, health, social work, and welfare, among others (Christensen, 2015; Davidson et al., 2012; Statham, 2011). Partnerships and coordination of services have become increasingly recognized as important for the development of effective policies to promote inclusive education (Einbinder et al., 2000; Vargas-Barón, 2016).
More than one agency working together in a planned and formal way is often referred to as interagency working (Lloyd, Stead, & Kendrick, 2001). Interagency working can include information sharing, and joint assessment of needs, planning and delivery between agencies (Statham, 2011). It may contemplate joint activities involving financial and physical resources, programme development and evaluation, collaborative policies, formal and informal agreements, and voluntary contractual relationships (Dedrick & Greenbaum, 2011). The organization of services can occur at different levels (e.g. national, regional and local), include various policy domains (European Committee of the Regions, 2009; Frost, 2005; Stubbs, 2005), and involve diverse types and degrees of integration between agencies (for a review see Barnes et al., 2017). Interagency working can lead to improved provision of services according to the needs of the users, by reducing duplication and gaps in provision and by consulting users (Atkinson, Jones, & Lamont, 2007; Statham, 2011). This helps avoid repeated requests to families, as frequently found, and a lack of services when families do not reach a specific agency (Barnes et al., 2018; Griffin & Carpenter, 2007). Positive outcomes of interagency working were found regarding children's educational attainment and attendance (Oliver, Mooney, & Statham, 2010; Statham, 2011), and the home-learning environment of families with young children (Melhuish et al., 2008). According to a literature review by Atkinson and colleagues (2007), effective working relationships depend on clear roles and responsibilities, commitment at all levels of the hierarchy, trust and mutual respect (e.g. skills sharing, equal resource distribution), and understanding between agencies (e.g. joint training and recognition of individual expertise). The authors also identified the relevance of effective communication and information sharing, a joint purpose, and strategic planning and organisation, which entails ensuring resources, continuity of staffing, and an adequate time allocation. A review by Barnes and colleagues (2018) also identified the use of a bottom-up approach as a facilitating factor of interagency working. This approach values the input from the local community and the influence of "street-level bureaucrats" (Lipsky, 1980/2010), involves an increased participation of non-state actors, and addresses new forms of public-private partnerships (Stubbs, 2005). Other facilitating factors found in this review were: political support; commitment and shared values about collaborative work between agencies; security of funding; strong leadership and a clear governance structure; agreement and commitment at all levels on roles and responsibilities; mutual trust and values (e.g. developed through regular meetings); joint training; attention to data sharing; co-location, which may facilitate communication and a shared vision, although it was not found to be essential; positive personal relationships between professionals; and professionals' cultural sensitivity (Barnes et al., 2018). This brief explanation about interagency working highlights the complexity of the politico-institutional and organizational underpinnings that frame the design of social policies. Facilitating factors and impacts of interagency working can be specific to the context where it occurs. For that reason, the features of the Portuguese context are relevant to frame the present study.
The Portuguese context
A national study on Roma communities (Mendes, Magano, & Candeias, 2014) identified some heterogeneity in the Portuguese Roma population in terms of lifestyle, social and spatial insertion, and socioeconomic resources. However, the percentage of Roma household members at risk of poverty was 96% in 2011 (FRA, 2018). In 2013, a National Roma Communities Integration Strategy (2013−2020) was set up as the first national plan specifically addressed to Roma communities (High Commission for Immigration and Intercultural Dialogue, 2013). The strategy addressed the need to ensure that all individuals complete compulsory education and have access to further education or professional training. Data from 2016 (FRA, 2018) showed that the educational attainment of Roma children remained lower compared to non-Roma children in Portugal. The share of children aged 4−5 years participating in early childhood education, and the share of children of compulsory-schooling age participating in education, were respectively 42% and 90% in the Roma population, compared to 94% and 99% in the general population. The share of the population aged 18−24 years not involved in further education or training beyond lower secondary education was 90% in the Roma population, compared to 14% in the general population. The share of Roma children attending segregated education increased from 3% to 11% between 2011 and 2016 (Directorate-General for Statistics of Education and Science, 2017; FRA, 2018). The report from the European Commission (2019) on the implementation of national Roma integration strategies concluded that the coordination of Roma inclusion policy is not yet consolidated in Portugal. The report attributes this to a lack of information among the professionals in the areas of intervention about the strategic guidelines and commitments undertaken by the ministries, and to a slowdown of inclusion processes in some municipalities related to the socioeconomic context and the recent economic crisis. In the national study on Roma communities (Mendes et al., 2014), professionals working in the field of social inclusion reported difficulty in meeting all the requests for support, many of which were beyond the institutional resources available or involved other sectors, and acknowledged the possibility of replication of services provided across institutions. The professionals also noted the lack of methodologies for approaching Roma communities and promoting their greater participation in community activities, as well as better educational outcomes for children. In this context, partnerships between agencies are relevant to optimize the existing resources. In Portugal, interagency working on inclusive education and social support has been created as part of a central government strategy to support the delivery of programmes at the local level. The Choices Programme (Programa Escolhas) is a nationwide programme aiming to promote the social inclusion of children and young people (aged between 6 and 30 years) from more vulnerable contexts, particularly descendants of migrants and members of Roma communities. The programme is run by a public institute (High Commission for Migration) and includes a large number of projects. Interagency working at the local level is a cornerstone of the projects, which gives this social policy an innovative value.
The local partnerships are established in consortia seeking complementarity, coordination of resources, and co-responsibility for initiatives, in order to promote the sustainability of actions. Partners from different levels of action and authority may participate in the consortia, namely government institutions, social partners, entrepreneurs, NGOs, the education and scientific sector, representatives of civil society, and others. This organization reinforces partnership practices, both vertically between local, regional, and national levels, and horizontally between local and civil society organizations. The projects funded by the Choices Programme are subordinated to a set of principles, namely strategic planning, partnerships, participation, intercultural dialogue, mediation, social innovation, and entrepreneurship. The contractual relation between partners defines roles and responsibilities in terms of financial, human and material resources. Each project identifies areas requiring intervention and local needs, which reinforces the sharing of a strategic vision and compatible targets among partners. All are equal members in a predetermined organizational structure. In the current national context, more information is needed about the facilitating factors and impacts of this new policy implementation involving the delegation of competencies between central and local government to support inclusive education. The importance of attending to this need is reinforced by the absence of an independent agency responsible for monitoring interagency working aimed at supporting young children and their families.

The present study
The goal of the present study was to identify facilitating factors, practices, and impacts of interagency working for the inclusive education of Roma children. For this purpose, a case study of a Portuguese project was conducted. The project aimed to promote the inclusive education and academic progress of primary school children from a Roma community living in a low-income neighbourhood. The project was funded by the Choices Programme, and it was considered a promising case of interagency working according to the criteria specified below. Perspectives of professionals working in public, private and non-profit organizations were collected, since these sectors are playing an increasingly important role in the implementation of social policies in Portugal. Additionally, multi-actor analysis reflects the philosophy of the Choices Programme. Identifying facilitating factors, practices, and impacts of interagency working can contribute to ensuring adequate social responses and services to support the needs of children, young people and their families, as well as of the wider community. Researching the impacts of interagency working on children and families is a priority, since the information available is scarce (Barnes et al., 2018).

Method
A project aimed at a large Roma community and funded by the Choices Programme was selected for the present case study.

Participants
In 2017, the project supported 83 children, and 75 parents and other family members. Participants in the focus group and semi-structured interviews of the present study included seven service providers, namely one regional coordinator, one professional of the technical team of the Choices Programme, one executive manager, one teacher, one community facilitator and two stakeholders from the private sector.
The views of one community facilitator and one activities monitor were also registered through the analysis of the project's dissemination materials. The selected project was identified as an example of successful interagency working in Portugal (Barnes et al., 2018), based on four criteria. First, the Choices Programme has been nationally and internationally recognized as an efficient public policy (e.g. receiving the Juvenile Justice without Borders International Award) and has existed for more than 15 years. Second, collaboration was established with a university for evaluation purposes (beyond the internal evaluation within the Choices Programme). Third, the selected project had been operational for at least two years (it started in 2010), constituting an example of continuity. Fourth, the project used social media regularly to share information with a large audience and to connect people within and outside the community neighbourhood, providing platforms for civic engagement and action.

Procedures
The seven participating service providers were invited to a focus group and semi-structured interviews. The questions focused on: the perception of the project in terms of interagency working; details of personal experiences of working with other agencies; overall conclusions and recommendations; the mission of the project; the philosophy of interagency working; the articulation between agencies and levels (public, private and third sector); advantages and impacts; and facilitating factors and challenges regarding the interagency work conducted. Assessment reports (one internal and two external) and dissemination materials (four videos and two magazine articles) were also analysed with regard to the relevance attributed to the leadership, mission, goals and resources of the project. Legislation on the Choices Programme was analysed with a focus on the mission, goals, principles, and responsibilities of each partner involved in the consortium.

Results
Findings based on the focus group, semi-structured interviews, documents and dissemination materials indicated that all participants, in general, viewed interagency working as positive, contributing to the promotion of social inclusion by developing interventions that took into account the needs of the target users.

Working with social exclusion phenomena requires integrated and shared actions between the different actors of society. (…) Work in partnership has been a relevant factor in promoting social inclusion because local actors can identify the needs that local people feel. (Regional coordinator of the Choices Programme)

We think it is crucial to network and to intervene with other local entities. (Project manager)

Some services worked together towards consistent goals but maintained their institutional independence, while other services worked together in a planned and systematic way, with agreed shared goals, formal decision rules and a continuum of joint action. The project was an example of multi-partner governance because it was developed together with local government, neighbourhood leaders and voluntary organizations, in order to maximize community empowerment and feelings of neighbourhood identity and belonging.

Facilitating factors
Analyses of the focus group, semi-structured interviews, documents and dissemination materials indicated the relevance of the use of a bottom-up approach as a facilitating factor of interagency working.
The project recognized the value of considering all perspectives, including those of the users of the services, namely children, young people and their families, in order to provide more appropriate services, efficiency in delivery, and effective outcomes. Users were asked by the project staff about their views, and participated in the planning and delivery of services to better meet their needs, interests, and expectations.

Our activities pay close attention to the main interests of our children and youth. We have many activities that were proposed by the residents. I think we are very sensitive to the local context and Roma culture. (Project manager)

To consolidate a bottom-up approach, the project set up a group of volunteers and established an informal working relationship with local actors of the private sector seeking to participate in community life. The bottom-up approach involving numerous actors shaped the final output according to the demands, resources and competencies available in the local community. Other facilitating factors of interagency working found were shared values and trust between the partners involved in the project, and a positive organisational climate, with low conflict and high cooperation. All participants suggested that the organizational climate was the primary predictor of positive service outcomes. Participants stated that agencies and organizations were gaining knowledge about each other, and that there was an increasing need for alliances to address the complex issues of our society. However, in general, informality and adaptability tended to characterize the functioning, overall management, and evaluation model of local partners.

Everything is quite easy when we know each other and when we can pick up the phone. (Project manager)

It is quite easy to interact with the school because I am going there almost every day. (ICT teacher)

The collaboration with informal partners played an important role because they could participate and invest in activities which brought value to the project goals. For instance, a local corporation financially supported the mission of rewarding 20 children by assigning a gift card based on school attendance and results, adequate behaviour, and participation in study activities and in non-formal education activities. When informal partners shared the same values and principles as the project, this cooperation was perceived as good for both sides.

Our company greatly enjoys supporting children. On the other hand, it is gratifying to work with a noble institution with a very important mission. (Corporation member)

In the scope of social responsibility activities, I consider local corporations' contributions a major help to stimulate and create activities for the children we support. (Executive assistant)

The project's formal partners developed a relationship based on the principles of good partnership. It was effective because the partners shared a strategic vision, pursued compatible targets, and were all equal members in a predetermined organizational structure. In terms of barriers regarding interagency working, findings from the focus group and semi-structured interviews indicated financial uncertainty and the potential reorganisation or ending of the national funding of the Choices Programme.
Additionally, identified barriers included local needs at odds with national priorities, diverse agency policies, procedures and systems, professional stereotyping, lack of explicit commitment to interagency working by stakeholders, and reluctance of some important local actors to engage.

Practices and impacts
The activities delivered in this project focused on academic success and school support throughout the year (for instance, helping children with their homework and providing lessons on new technologies). Analyses of the focus group, semi-structured interviews and assessment documents indicated improved children's school attendance and academic progress. The project, installed in the neighbourhood, was able to develop a close relationship with parents, children and families. The systematic school support, with regular activities focused on empowering personal development, increased parents' interest in their children's academic life. An example is the Wake Up Programme developed by a group of Roma volunteers, who called on children in the morning to go to school, going door-to-door if parents agreed to participate, with the goal of promoting school attendance. Children and families also had access to classes to learn technological skills, music, dance and sports.

We work in schools with children, and on the relationships between the school and the families from the social neighbourhood, in order to identify teachers' and students' problems and difficulties and make the link with the families. We also work during their free time, with a range of activities such as music, dance and sports classes. (Project manager)

When residents see local authorities, teachers, "the city" with concerns about safety, issues around children, learning, and health, just as an example, they reinforce their motivations to empower themselves. (Executive assistant)

We want to open both parties' minds and to help the school and the teachers in their relationships with the families, and to encourage our families to study, giving my own example to the families that it is possible for us to study, work and evolve as persons, without ceasing to be who we are. (Community facilitator 1)

The project also promoted parents' empowerment through the provision of support to develop job skills.

CID@net is open to all the community and we welcome our children's parents for them to develop their abilities in order to get a job. They can also search for job opportunities and improve their technological skills. (Activities monitor)

Another important feature was the proximity between parents and professionals to solve problems or clarify doubts. As the project manager said, "residents request help to read documents, and present questions related to health, about children's school, social support, among other issues". The professionals involved in the project often acted as brokers between the users of the project and other social services. The permanent participation of Roma users from several generations suggested their preference for the project. One important outcome was the increasing number of children attending project activities over time. "I am very proud of having been a volunteer, then a community facilitator and now a monitor" (Community facilitator 1). In accordance with the Choices Programme, a positive approach was used to promote social inclusion, focusing on the problems, but also on the opportunities and resources of the communities.
At the organizational level, the selected case evolved in order to strengthen the relationship with the residents of the neighbourhood and with the local community. For instance, private corporations came to prevail in its financial support. The project aimed to promote the conditions necessary for children and young people to maintain and develop their cultural heritage, using their own resources, together with the promotion of school attendance and academic progress. In the scope of the project, young people participated in the production of videos aiming to reduce stereotypes associated with Roma communities, which were then disseminated on the project's social media. Young people also conducted sessions presenting Roma history and cultural heritage, and children and young people were involved in artistic events, presenting Romani dances and songs. These activities also had the goal of enhancing the relationships between Roma and non-Roma persons. The project organized youth assemblies to foster the active participation of young people in decision-making processes concerning the planning and implementation of activities. A constant focus of the project was on children's and young people's empowerment.

I really think that raising their self-esteem is very important, because they sometimes feel ashamed and are not aware of their multiple abilities that go way beyond working in fairs. Going to school is very important. I also run an animation activity during break time whose goal is to improve the relationship between Roma and non-Roma persons. (Community facilitator 2)

Discussion
The present case study focused on a Portuguese project funded by a national public policy aiming to promote the school integration of Roma children in a low-income neighbourhood. The project built on interagency working as a valuable instrument to overcome weaknesses of the national policy to support Roma children and families. Local government, neighbourhood leaders, volunteers, and private and public organizations were brought together. The results of the study supported the view that interagency working with multiple actors can constitute a strategic plan to support Roma children and families by promoting inclusive education. As reported in the national study on Roma communities (Mendes et al., 2014), multidisciplinary teams were perceived as essential to work on the promotion of social inclusion. Findings pointed out the advantages of a bottom-up approach in guaranteeing adequate social responses and services to the needs of children, young people and their families. Despite the advantages associated with this bottom-up approach, some concerns arise when the government transfers to the private sector roles or services that were usually State-directed (Donahue, 2006; Verma, 2016). That is why there are benefits in discussing the Portuguese example, considering that it tried to bridge the gap between top-down and bottom-up approaches by incorporating the insights of both perspectives. In this context, political support has been a critical facilitator in providing services with adequate conditions and funding. Local government agencies have historically functioned as institutions using vertical lines of communication, top-down decision making, differentiation of tasks, hierarchical supervision, and formal rules and regulation. As such, it is common for professionals and administrators to be predisposed to a "chain of command" rather than a shared way of thinking and doing.
Professionals and organizations often are highly motivated to form partnerships, but flounder because of the structure, confusion about roles, or expectations for outcomes. The shifting configuration of actors at the local level gaining significant power can contribute to ensuring the exercise of democracy and citizenship. In this scenario, the participation of Roma in the design of initiatives that aim to support them can contribute to their empowerment, to the use of their own resources, and to successful policy implementation (European Commission, 2018; FRA, 2018). Area-based partnerships provide a mechanism for local organizations to work together and adapt their policies to better reflect the needs of people at the local level (OECD, 2015). The organisational climate and the establishment of informal working relationships were also identified in the present study as important facilitating factors when developing interagency working. In the national study on Roma communities (Mendes et al., 2014), factors such as adequate professional training, support or patronage to guarantee a certain continuity over time, and monitoring of the projects were also identified. However, according to the European Commission (2019) report on Roma integration strategies, in Portugal the number of measures with funding allocated was below the average of the Member States. According to the same report, one of the key recommendations for the country is that ministries should focus on the training and qualification of their professionals and other key players in fighting discrimination, in partnership with civil society organizations (Roma associations and other associations working directly in this field). Another recommendation is the need for municipalities to assume a key role by engaging in local needs assessment, planning and implementation. Concerning the practices implemented in the scope of the project under study, these had a strong focus on promoting positive attitudes in regard to school, supporting children to develop study and learning practices, providing extra-curricular activities, and reinforcing school-family partnerships. Community facilitators also had an important role in establishing links between Roma and non-Roma communities and, more particularly, between children, families, and schools. According to the national study on Roma communities, organizations and services should also work with the non-Roma community to integrate Roma and non-Roma persons in different activities and projects (Mendes et al., 2014). Findings suggested that the project had positive impacts on the Roma families and community. The community was very collaborative with the project in motivating their children towards academic achievement, the learning of technological skills, and the occupation of leisure time with music, dance and sports. Children's school attendance and academic performance were also reported as higher. The follow-up strategies to support Roma children and their families in the community reduced school dropout to zero and improved school performance. Parental understanding about the benefits of school attendance, preparing their children for the unpredictability of the future, and above all securing job opportunities and future skills requirements, contributed to this achievement. According to the Ad Hoc Committee of Experts on Roma and Traveller Issues (Council of Europe, 2018), 65% of children entering primary school today will ultimately end up working in completely new job types that do not yet exist.
In Portugal, the Roma community is also facing economic constraints, as the consumerist profile of modern societies has undermined its traditional economic activities. Parents are confronting the idea that ignoring compulsory schooling compromises their children's future. Various practices implemented in the project followed the recommendations to improve Roma people's lives provided by the national study on Roma communities (Mendes et al., 2014). The project aimed to develop a more systematic and structured networking with Roma children and families, and a broader and more effective dissemination of their references and cultural repertoire to institutions and the population in general. The project sought to combat stereotypes found in Portugal, such as the perceived disinterest of Roma communities in education. According to the same national study, organizations and services should decentralize interventions to involve other populations, as well as the wider geographical environment, and not focus overly on Roma and residential spaces in ways that promote Roma isolation (Mendes et al., 2014). Local governance is crucial for the effective implementation of strategic action plans to improve the enrolment of Roma children in compulsory schooling. At the local level, co-operation is needed between key stakeholders: schools and Roma school mediators, the local authorities responsible for education, multi-disciplinary teams, and families. The partnerships between services working for and with children and families, due to their multi-actor and multi-form structures, were observed in many cases to lead to indeterminate outcomes (Verma, 2016). However, the evidence collected in this case study can help reinforce the information available to policymakers and professionals about the best strategies to ensure inclusive education.
Estimation of states and parameters of multi-axle distributed electric vehicle based on dual unscented Kalman filter
Distributed electric drive technology has become an important trend because of its ability to enhance the dynamic performance of multi-axle heavy vehicles. This article presents a joint estimation of vehicle state and parameters based on the dual unscented Kalman filter. First, a 12-degrees-of-freedom dynamic model of an 8 × 8 distributed electric vehicle is established. Considering the dynamic variation of some key parameters of heavy vehicles, a real-time parameter estimator is introduced, based on which simultaneous estimation of vehicle state and parameters is implemented under the dual unscented Kalman filter framework. Simulation results show that the dual unscented Kalman filter estimator achieves high estimation accuracy for the multi-axle distributed electric vehicle's state and key parameters. Therefore, it is reliable for vehicle dynamics control without the influence of unknown or varying parameters.

Introduction
The application of distributed drive technology in heavy vehicles could bring greater benefits than in passenger cars. Since the clutch, transmission, transfer case, differential, drive shaft, and other mechanical components are removed, it simplifies the vehicle structure and makes its layout more convenient. In addition, each wheel is controlled by its wheel hub motor independently, increasing the vehicle's maneuverability. However, the degrees of freedom (DOF) of vehicle control increase significantly with the number of axles. Vehicle dynamics control is enhanced if real-time and accurate acquisition of the vehicle state and key parameters is achieved. 1,2 Therefore, for the multi-axle distributed electric vehicle, it is important to estimate the states which are expensive or difficult to measure, and the parameters which are unknown or variable. The vehicle state can be estimated by kinematic-based methods and dynamic-based methods. The former involve direct integration of measurements, which is subject to cumulative error, 3,4 whereas the latter are more often used for vehicle state estimation because of their higher accuracy even when based on low-cost sensors. Typical algorithms include the Kalman filter, particle filter, sliding-mode observer, Luenberger observer, and so on. 5-9 To date, the nonlinear Kalman filter provides a promising approach for nonlinear vehicle systems, including the extended Kalman filter (EKF) algorithm and the unscented Kalman filter (UKF) algorithm. 10-12 Sebsadji et al. 13 and Dakhlallah et al. 14 used the EKF method to estimate the longitudinal velocity, lateral velocity, and yaw rate. On the basis of this method, estimation of the road gradient, road friction coefficient, and vehicle sideslip angle is achieved. Best et al. 15 proposed an adaptive EKF to estimate the tire force, which improves accuracy under changing tire-cornering stiffness. In Antonov et al., 16 a UKF algorithm was used to estimate tire slip and vehicle slip angle based on the magic formula and a bicycle model. In Doumiati et al., 17 UKF and EKF were proposed and compared to address system nonlinearities and unmodeled dynamics. Experimental results demonstrate that the two approaches can provide accurate estimates for calculating the lateral tire force and vehicle sideslip angle. However, the UKF achieves second-order accuracy, better than the EKF, and reduces the amount of computation by avoiding Jacobian matrices.
The UKF method relies on the vehicle dynamics model. Compared to those of a traditional vehicle, the dynamic characteristics of the distributed electric vehicle are distinctly different. One characteristic of the latter is that the driving torque of each wheel can be obtained in real time from the wheel hub motor. It also brings new requirements for the estimation of the vehicle longitudinal speed and the road surface, since no wheel is a non-driving wheel. In Geng et al., 18 a fuzzy estimator was adopted for the sideslip angle of a four-wheel distributed electric vehicle. In Gu et al., 19 the longitudinal velocity, lateral velocity, and yaw rate of a four-wheel drive-by-wire vehicle are estimated based on the UKF algorithm and a 7-DOF model. Nam et al. 20 used the EKF algorithm to estimate the sideslip angle and tire cornering stiffness simultaneously for the distributed electric vehicle. Some vehicle parameters, such as the position of the mass center, the vehicle mass, and the yaw moment of inertia, may change under different driving conditions, especially for multi-axle heavy vehicles. This change of key parameters affects the accuracy of a state estimator established with fixed parameters. Therefore, the estimation of vehicle parameters needs to be introduced for the unknown and varying parameters. Thanks to the advantage of mutual correction, joint estimators of state and parameters have already been used on some traditional vehicles. Wenzel et al. 21 used two EKFs in parallel to reduce the effect of varying vehicle parameters. In Li et al., 22 the estimation of vehicle state and road friction coefficient based on a dual-capacity Kalman filter was proposed. The bench test shows that it has better estimation accuracy than the dual EKF. Considering both precision and real-time performance, a dual unscented Kalman filter (DUKF) estimator for the multi-axle distributed electric vehicle is designed in this article. It is an important prerequisite for future vehicle dynamics control. The remainder of this article is organized as follows. Section "Modeling of multi-axle distributed electric vehicle" introduces a vehicle dynamics model suitable for estimation. Section "DUKF algorithm" gives an overview of the DUKF algorithm and then details the design of the joint estimator of state and parameters for the multi-axle distributed electric vehicle. Section "Simulation and analysis" gives the simulation results under the double lane change (DLC) test and the sinusoidal steering wheel input test. Conclusions and future work are presented in section "Conclusion."

Modeling overview
Real-time and accurate acquisition of the vehicle's lateral and rollover states is important in modeling. The main characteristic of the distributed electric vehicle is the use of electric wheels, which brings challenges for estimation. For example, the longitudinal velocity cannot be obtained directly, since every wheel is a driving wheel. On the other hand, the driving torque of each wheel is easy to measure through the motor torque. In this article, we look to estimate eight states, which represent the vehicle's motion stability, and three key parameters for an 8 × 8 distributed electric vehicle. The vehicle model is established as shown in Figure 1.
O-xyz is the vehicle-based coordinate system; u is the longitudinal velocity along the x-axis, v is the lateral velocity along the y-axis, w is the vertical velocity of the sprung-mass centroid along the z-axis, r is the yaw rate around the z-axis, p is the roll rate around the x-axis, and q is the pitch rate around the y-axis. The longitudinal, lateral, and vertical forces of each wheel are represented by F_xi, F_yi, and F_zi (i = 1, 2, …, 8), respectively. L1 is the distance from the first axle to the second axle, L2 is the distance from the center of mass to the second axle, L3 is the distance between the centroid and the third axle, L4 is the distance from the third axle to the fourth axle, and B is the wheel track. Some quantities are not shown in the figure, including ψ, the yaw angle; φ, the roll angle; θ, the pitch angle; and β, the sideslip angle. The height of the mass center and the distance from the mass center to the roll axis are represented by h and h′, respectively.

Vehicle dynamics
To represent the vehicle states of planar/rollover stability effectively, a 12-DOF model is adopted, including longitudinal motion, lateral motion, yaw motion, roll motion, and the rotary motion of the eight wheels. In the governing equations, φ is the roll angle, φ̇ is the roll velocity, K_φ is the roll stiffness coefficient, and C_φ is the roll damping coefficient. In addition, a_x and a_y are, respectively, the longitudinal and lateral accelerations, and M_z is the yaw moment, which can be obtained from the tire dynamics model, where δ_1l, δ_1r, δ_2l, and δ_2r are the steering angles of the wheels on the first two axles. Besides, the sideslip angle β can be calculated from the longitudinal and lateral speeds as β = arctan(v/u).

Tire dynamics
The tire dynamic model contains a longitudinal force module and a lateral force module. Since the motor torque of each electric wheel can be obtained directly for the distributed electric vehicle, the calculation of the tire's longitudinal force is easier than for a conventional vehicle: according to the wheel rotation equation (2), the equation for the tire longitudinal force can be obtained. By contrast, the calculation of the lateral tire force F_yw is more complicated. An HSRI (Highway Safety Research Institute) tire model 23 is adopted, where C_s and C_α are the longitudinal and lateral stiffnesses of the tire, respectively; S_x and S_y are the longitudinal and lateral slip ratios, respectively; μ is the coefficient of road adhesion; and F_zw is the vertical load of the tire. The longitudinal slip ratio S_x can be calculated according to equation (10), where ω_i is the angular rate of each wheel and the horizontal velocity of the wheel center v_whi is given by equation (11). The lateral slip ratio of the tire is a function of the tire slip angle, S_yi = tan α_i (12), with the tire slip angle calculated according to equation (13).

DUKF algorithm
Algorithm overview
Two UKF filters operate individually and simultaneously in the DUKF. In this article, the estimators of the vehicle state and the vehicle parameters exchange and correct each other's information. Both estimators consist of a prediction module, a Sigma-point construction module, and a correction module. The principle of the DUKF is shown in Figure 2.
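Before detailing the filter, the tire-force computation described above can be sketched as follows. This is one common formulation of the HSRI model with combined slip; the paper's exact equations (9)-(13) are not reproduced in the text, so the saturation function below is an assumption, and all numeric values are illustrative.

import math

def hsri_tire_forces(Cs, Ca, Sx, alpha, mu, Fz):
    # Cs, Ca: longitudinal/lateral tire stiffness [N]
    # Sx: longitudinal slip ratio; alpha: tire slip angle [rad]
    # mu: road adhesion coefficient; Fz: vertical tire load [N]
    Sy = math.tan(alpha)                        # lateral slip ratio, cf. eq. (12)
    # Combined-slip saturation variable
    s_bar = math.hypot(Cs * Sx, Ca * Sy) / (mu * Fz * (1.0 - Sx))
    # Below s_bar = 0.5 the tire stays in the linear (adhesion) region
    f = 1.0 if s_bar < 0.5 else (s_bar - 0.25) / s_bar**2
    Fx = Cs * Sx / (1.0 - Sx) * f               # longitudinal force
    Fy = Ca * Sy / (1.0 - Sx) * f               # lateral force
    return Fx, Fy

# Example: moderate combined driving-and-cornering condition
print(hsri_tire_forces(Cs=8e4, Ca=6e4, Sx=0.05, alpha=math.radians(3), mu=0.9, Fz=15e3))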
We assume that the vehicle system can be represented by the following nonlinear discrete system:

x_{k+1} = f(x_k, θ_k) + w_k,  y_k = g(x_k, θ_k) + v_k,  (14)

where x and θ are the vehicle state and vehicle parameter vectors, respectively. The specific steps of the algorithm are as follows.

Initialization. The initial values of the state vector, the parameter vector, the covariance of the state error, and the covariance of the parameter error are set as x̂_0, P_{x,0}, θ̂_0, and P_{θ,0}, respectively.

Parameter prediction. The parameter vector and the covariance of the parameter error at time k+1 are predicted as

θ̂⁻_{k+1} = θ̂_k,  (15)
P⁻_{θ,k+1} = P_{θ,k} + Q_θ,  (16)

where Q_θ is the covariance matrix of the process noise.

Construction of the Sigma points of the state vector. The symmetric point sampling strategy is used to construct a Sigma point set χ_i of the state vector at time k, together with its first-order and second-order weights W^m_{x,i} and W^c_{x,i}:

χ_0 = x̂_k,  χ_i = x̂_k ± (√((n_x + λ_x) P_{x,k}))_i,  i = 1, …, n_x,
W^m_{x,0} = λ_x/(n_x + λ_x),  W^c_{x,0} = W^m_{x,0} + (1 − α² + β),  W^m_{x,i} = W^c_{x,i} = 1/(2(n_x + λ_x)),

where x̂_k and P_{x,k} are the estimated state vector and the covariance of the state error at time k, respectively; n_x is the dimension of the state vector; and λ_x = α²(n_x + κ) − n_x is the scaling coefficient of the sample points of the state vector, in which α is a very small positive number, which we set to 10⁻³, β is the distribution coefficient (typically 2), and κ is the second-order coefficient, whose value is 0 when n_x is greater than 3.

State prediction. The state prediction contains the prediction of the state vector and of the observation vector. First, the mapped point set is obtained by transforming the Sigma points of the state vector through the state equation in (14), χ⁻_{i,k+1} = f(χ_{i,k}, θ̂⁻_{k+1}). Second, the a priori estimate at time k+1 is obtained by weighting, x̂⁻_{k+1} = Σ_i W^m_{x,i} χ⁻_{i,k+1}. Then this set of points is transformed according to the observation equation in (14), γ_{i,k+1} = g(χ⁻_{i,k+1}, θ̂⁻_{k+1}). Finally, the estimated value of the observation vector at time k+1 and its covariance are obtained by weighting, ŷ⁻_{k+1} = Σ_i W^m_{x,i} γ_{i,k+1} and P_{y,k+1} = Σ_i W^c_{x,i}(γ_{i,k+1} − ŷ⁻_{k+1})(γ_{i,k+1} − ŷ⁻_{k+1})ᵀ + R_x, where R_x is the covariance matrix of the observation noise.

State correction. The state correction includes the correction of the state vector and of the covariance of the state error. First, the cross-covariance between the state vector and the observation vector is calculated, P_{xy,k+1} = Σ_i W^c_{x,i}(χ⁻_{i,k+1} − x̂⁻_{k+1})(γ_{i,k+1} − ŷ⁻_{k+1})ᵀ. Then the Kalman gain matrix of the state vector is calculated, K_{x,k+1} = P_{xy,k+1} P_{y,k+1}⁻¹. Finally, the posterior estimate of the state vector and the covariance of the state error are calculated and updated, respectively, as x̂_{k+1} = x̂⁻_{k+1} + K_{x,k+1}(y_{k+1} − ŷ⁻_{k+1}) and P_{x,k+1} = P⁻_{x,k+1} − K_{x,k+1} P_{y,k+1} K_{x,k+1}ᵀ. The Sigma points of the parameter vector are constructed analogously (equation (21)), where n_θ is the dimension of the parameter vector and λ_θ is the scaling coefficient of its sampling points.

Parameter correction. We use the estimated states and measured observations to correct the parameter vector. The parameter correction includes the correction of the parameter vector and of the covariance of the parameter error. First, the observation equation in (14) performs a nonlinear transformation on the set of Sigma sampling points of the parameter vector. The estimated value of the observation vector and its covariance are then obtained by weighting, where R_θ is the covariance matrix of the observation noise.
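The symmetric sampling and prediction steps above translate directly into code. The following is a minimal sketch of the standard unscented transform (the scaling conventions match the definitions above; the paper's exact implementation is not given):

import numpy as np

def sigma_points(x_hat, P, alpha=1e-3, kappa=0.0, beta=2.0):
    # Symmetric sigma-point construction: 2n+1 points plus mean/cov weights
    n = x_hat.size
    lam = alpha**2 * (n + kappa) - n
    # Matrix square root of (n + lam) * P via Cholesky decomposition
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x_hat, x_hat + S.T, x_hat - S.T])   # shape (2n+1, n)
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, Wm, Wc

def ukf_predict(pts, Wm, Wc, f, Q):
    # Propagate sigma points through the transition f, then re-weight
    prop = np.array([f(p) for p in pts])
    x_pred = Wm @ prop
    P_pred = Q.copy()
    for i, p in enumerate(prop):
        d = (p - x_pred).reshape(-1, 1)
        P_pred += Wc[i] * (d @ d.T)
    return x_pred, P_pred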
Then, the cross-covariance between the parameter vector and the observation vector and the Kalman gain matrix of the parameter vector are calculated. Finally, the posterior estimate of the parameter vector and the covariance of the parameter error are calculated and updated, respectively.

DUKF framework
In equation (14), the state vector includes the longitudinal velocity, lateral velocity, yaw rate, roll angle, roll velocity, yaw moment, longitudinal acceleration, and lateral acceleration:

x = [u, v, r, φ, φ̇, M_z, a_x, a_y]ᵀ.

The state equation is discretized by the Euler method, from which the state transition relation is obtained. The parameter vector contains the vehicle mass, the yaw moment of inertia, and the position of the mass center; for the four-axle vehicle, the latter is represented by the distance from the mass center to the second axle, L2. According to the structural features of the four-axle distributed electric vehicle, the steering angles of the first two axles and the motor torque and angular rate of the eight electric wheels are selected as the input vector of the DUKF estimator. Besides, we take the lateral acceleration and the yaw rate as the observation vector of the DUKF. After discretization of the vehicle dynamics model and the selection of the state vector and the parameter vector, the joint estimator based on the DUKF can be implemented recursively, as shown in Figure 2. In addition, the number of Sigma sampling points depends on the dimensions of the state vector and the parameter vector. In light of the symmetric point sampling, the numbers of Sigma points of the state vector and the parameter vector are 17 and 7, respectively. In the vehicle joint estimator shown in Figure 3, the vehicle dynamics model provides control variables, including the steering angles of the first two axles and the angular rate and motor torque of each electric wheel, to the tire dynamics model; the tire model then outputs the longitudinal and lateral tire forces to the DUKF estimator. The vehicle dynamics model also provides the steering angles of the first two axles and observations such as the yaw rate and lateral acceleration to the DUKF estimator. In addition, the estimated longitudinal velocity, lateral velocity, and yaw rate from the DUKF estimator are fed back to the tire model.

Simulation and analysis
The DLC condition and a sinusoidal steering wheel angle input condition are used to verify the DUKF estimator through Simulink/TruckSim co-simulation. However, the four-axle vehicle model in TruckSim needs to be modified into the distributed electric vehicle; the modeling method is available in Xiong et al. 24 The vehicle speed is assumed to be 60 km/h, and the road adhesion coefficient is 0.9. The vehicle parameters are set according to Table 1, except for the three parameters to be estimated.

DLC condition
As the input of the DUKF estimator, we set the vehicle velocity to 60 km/h and the steering wheel angle according to Figure 4. First, we compare the estimated values of the vehicle state with the output of the vehicle model. The results of the state estimation are shown in Figure 5. It can be seen that the estimated vehicle states are consistent with the overall trends of the TruckSim outputs. According to the estimation errors shown in Table 2, the maximum error is very small and the maximum relative error is less than 5%. This estimation error is within a reasonable range, so the DUKF state estimator has high accuracy.
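To make the dual recursion concrete, the sketch below (building on the sigma_points and ukf_predict helpers above) interleaves the two filters per time step exactly in the order described: parameter prediction, state update with the predicted parameters, then parameter correction against the same measurement. The models f and g are hypothetical stand-ins for the discretized 12-DOF vehicle dynamics and the observation equation, which are not reproduced here.

import numpy as np

def ukf_update(m, P, h, y, R):
    # Generic UKF measurement update: redraw sigma points around (m, P),
    # propagate through the observation model h, correct against y
    pts, Wm, Wc = sigma_points(m, P)
    Y = np.array([h(p) for p in pts])
    y_pred = Wm @ Y
    Pyy = R.copy()
    Pxy = np.zeros((m.size, y.size))
    for i in range(len(pts)):
        dy = (Y[i] - y_pred).reshape(-1, 1)
        dx = (pts[i] - m).reshape(-1, 1)
        Pyy += Wc[i] * (dy @ dy.T)
        Pxy += Wc[i] * (dx @ dy.T)
    K = Pxy @ np.linalg.inv(Pyy)
    return m + K @ (y - y_pred), P - K @ Pyy @ K.T

def dukf_step(x, Px, th, Pth, u_in, y, f, g, Qx, Qth, R):
    # 1) Parameter prediction: random walk, eqs. (15)-(16)
    th_pred, Pth_pred = th, Pth + Qth
    # 2) State prediction and correction with parameters frozen at th_pred
    pts, Wm, Wc = sigma_points(x, Px)
    x, Px = ukf_predict(pts, Wm, Wc, lambda s: f(s, th_pred, u_in), Qx)
    x, Px = ukf_update(x, Px, lambda s: g(s, th_pred, u_in), y, R)
    # 3) Parameter correction through the observation model at the new state
    th, Pth = ukf_update(th_pred, Pth_pred, lambda t: g(x, t, u_in), y, R)
    return x, Px, th, Pth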
For the parameter estimator, we set the initial values of the vehicle mass, yaw moment of inertia, and distance from the second axle to the mass center to 10,000 kg, 50,000 kg·m², and 0.3 m, respectively. The estimated values of the vehicle parameters are compared with their nominal values in Table 1, and the estimation results are shown in Figure 6. (Table 1, excerpt: L1, the distance from the first axle to the second axle, 1.3 m; L2, the distance from the center of mass to the second axle, 0.5 m; L3, the distance between the centroid and the third axle, 0.85 m.) It can be seen from the estimation results that the three parameters converge faster once the vehicle starts to turn at 3 s, and they approach the nominal values within a short time. Although the distance from the second axle to the mass center fluctuates within a small range during the DLC, the overall trend is convergence, and it finally stabilizes at 10 s. Since the observations, including the vehicle's yaw rate and lateral acceleration, have a strong corrective effect on the estimated parameters, the parameters can quickly converge to their nominal values after one DLC maneuver.

Sinusoidal steering condition
We set the vehicle velocity to 60 km/h and the sinusoidal steering wheel angle input as shown in Figure 7. The vehicle parameters are set as in Table 1. Using the DUKF estimator, the vehicle states and parameters are estimated under the sinusoidal steering condition. The results of the state estimation are shown in Figure 8. It can be seen that the estimated states, including the longitudinal velocity, sideslip angle, yaw rate, roll angle, and roll velocity, show good consistency with the corresponding outputs of the vehicle model; the estimation errors are shown in Table 3. The accuracy indexes indicate that both the maximum error and the maximum relative error are less than 5%. Although the error is larger than under the DLC condition, it is still within a reasonable and small range. Therefore, the introduction of parameter correction makes the state estimation more accurate by better compensating for the mismatch of model parameters. However, the estimation error of the DUKF cannot converge to 0, since the error caused by unknown random signal noise or unmodeled system dynamics cannot be eliminated completely. The three vehicle parameters estimated by the DUKF are compared with their nominal values in Figure 9. After the steering wheel is turned at 2 s, the vehicle mass converges to its nominal value quickly. At the same time, the distance from the second axle to the mass center approaches its nominal value and fluctuates within a small range as the steering wheel angle varies. In addition, the yaw moment of inertia approaches its nominal value gradually, finally becoming steady at 10 s. During the sinusoidal steering situation, the estimation of the three parameters is achieved because the observations, such as the yaw rate and lateral acceleration, exert a strong correction on the estimated parameters. From the estimation results under the DLC and sinusoidal steering conditions, we can see that the DUKF estimator performs well in estimating both the vehicle states and the parameters. Through real-time estimation of the vehicle parameters, the problem of unknown parameters or their dynamic change can be solved. Furthermore, the accuracy of the state estimation can be improved by compensating for parameter errors.
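The accuracy indexes quoted above (maximum error and maximum relative error below 5%) can be computed as follows; this is a sketch with made-up traces, and normalizing by the peak reference value is an assumption about how the relative error in Tables 2 and 3 is defined.

import numpy as np

def max_errors(estimate, reference, eps=1e-9):
    # Maximum absolute and relative estimation error over a simulation run
    estimate, reference = np.asarray(estimate), np.asarray(reference)
    abs_err = np.abs(estimate - reference)
    rel_err = abs_err / (np.max(np.abs(reference)) + eps)
    return abs_err.max(), rel_err.max()

# Example with hypothetical yaw-rate traces (rad/s)
r_true = 0.2 * np.sin(np.linspace(0, 2 * np.pi, 200))
r_est = r_true + np.random.default_rng(0).normal(0, 0.002, r_true.shape)
print(max_errors(r_est, r_true))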
Conclusion
Considering that some parameters of heavy vehicles, such as the vehicle mass, yaw moment of inertia, and position of the mass center, are hard to measure or change during driving, a parameter estimator is introduced on the basis of the state estimator. In this article, we proposed a DUKF estimator for the multi-axle distributed electric vehicle. The simulation results show that the proposed estimator achieves good estimation accuracy for the vehicle state without being influenced by parameter errors. At the same time, the key parameters converge to their nominal values quickly through the parameter correction of the joint estimation. In the future, an adaptive DUKF algorithm will be applied, and more real-world tests of heavy vehicles under different road adhesion conditions are needed.

Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
SIMULATION OF EGNOS SATELLITE NAVIGATION SIGNAL USAGE FOR AIRCRAFT LPV PRECISION INSTRUMENT APPROACH
Satellite navigation has become a very important topic in the air transport industry, along with its application in instrument approach procedures. Recently, extracted statistical characteristics of the European Geostationary Navigation Overlay Service (EGNOS) satellite signal have been made available from real measurements in the Czech Republic. A numerical modeling approach is taken for a feasibility study of automatic aircraft control during the Localizer Performance with Vertical Guidance (LPV) precision approach based on such navigation data. The model incorporates Kalman filtering of the stochastic navigation signal, feedback control of the L-410 aircraft dynamics, and the calculation of approach progress along the predefined procedure. Evaluation of the performance of the system prototype is performed using scenarios developed with a strong focus on altitude control. A specific scenario addresses a curved approach, which is a major advantage of approaches based on the Satellite-Based Augmentation System (SBAS) compared to those flown with the Instrument Landing System (ILS). Outputs of the simulation executions are statistically analyzed and assessed against predefined navigation performance goals equivalent to ILS categories, with a positive outcome.

Introduction
Many European airports now use satellites as navigation aids for the approach of aircraft (Vencius, 2013). The signal itself, as provided by the satellites, does not have sufficient navigation performance parameters to enable safe aircraft operation during approaches (ICAO, 2012). Approach procedure designers are therefore turning to satellite augmentation systems, ground- or satellite-based, particularly to exploit the advantages of EGNOS in Europe. This is also true in the Czech Republic, where Localizer Performance with Vertical Guidance (LPV) procedures are in operation or planned on the runways of most airports with international transport. Current LPV procedures reach equivalence with ILS CAT I approaches once decision heights are at the 60 m (200 ft) level; however, statistical analysis of real measurements of the EGNOS navigation signal in this area shows huge potential for even wider use of the signal, so it is being studied across the European Union (Vassilev & Vassileva, 2012; Grunwald et al., 2019). The European Global Navigation Satellite Systems Agency reports 460 operational approach procedures using the navigation signal provided by EGNOS (EGNSSA, 2020). The opportunities to use the signal for aviation purposes are currently being investigated for geographical locations at the edge of EGNOS coverage (Beldjilali et al., 2020). Data from the experiments are available from measurements made using onboard sensors during aircraft landings (Krasuski & Wierzbicki, 2020). These characteristics of the navigation signal can be further used in simulations involving aircraft mechanics and onboard systems. Such simulations may help to explore the usability of the EGNOS system as a provider of inputs for the aircraft navigation and control systems used during the execution of approach procedures. This feasibility study, based on the model-based development (MBD) approach commonly used within the aviation industry (Scilab, 2020), attempts to develop a model of the aircraft control system during the approach.
The system is fed by the EGNOS navigation signal and operates within a framework of precision approach below the decision heights of existing LPV approaches. The aim is to exercise the possibilities of the navigation signal and prove its usability through utilization of the developed functional system prototype. Such simulation efforts can effectively exercise some aspects of planned aircraft operations which otherwise have to be validated by costly trial flights (Fellner, 2011). Architecture of the system proposed for evaluation Evaluation of controlled aircraft performance is simplified by avoiding complex human behaviour, so the developed model is a model of an automated system, as presented in Figure 1. The control system involves two parts: (1) navigation filter modules, and (2) controller modules. Other modules are involved to ensure that the task of evaluating the design of the aircraft approach automated system based on the EGNOS navigation signal is completely covered. The flight mechanics is represented by the aircraft dynamics and by the aircraft kinematics. The inputs of the dynamics module are connected to the outputs of the controller, which creates a closed-loop simulation. The dynamics is reflected by the kinematics adjusted to the approach flight phase. Sensors (especially the EGNOS navigation one) provide the simulated data necessary for the navigation filter and for the controller. Such a modelling approach creates a simulation which is an alternative to other approaches (Antemijczuk et al., 2012). A property of the proposed solution is that it covers all necessary functional modules at appropriate complexity levels. Such a closed-loop simulation is consistent and complete. These properties also ensure that the functionality of the simulation can be easily verified. Modelled avionics The filter module provides the position of the aircraft based on input signals measured by satellite navigation sensors. The Kalman filter (Sorenson, 1985) is used within the model to estimate the aircraft position while incorporating measurements with the gain of:

$K_n = S_{n|n-1} H^T \left( H S_{n|n-1} H^T + R \right)^{-1}$ (1)

where: $S_{n|n-1}$ is the covariance matrix of the estimated filter state errors derived from the previous filtration step; $H$ is the matrix representing the projection from the filter state space to the measurement space, and $R$ is the measurement error covariance matrix. The filter input is the difference of the actual and planned position of the aircraft, and the estimated states are again the difference and its rate of change, so that:

$x = \left[ \Delta x,\ \Delta\dot{x} \right]^T$ (2)

An important feature of the filter is that it can continue in its function even when measurements are not provided, e.g. because of a signal dropout. This operation mode is covered by the model and is incorporated in the set of verification scenarios discussed below. Estimated filter states are updated as follows:

$x_{n|n-1} = F\, x_{n-1|n-1}$ (3)

where: $x_{n-1|n-1}$ is the vector of the estimated filter states in the previous filtration step; $x_{n|n-1}$ is the vector of the estimated filter states in the current step, and $F$ is the matrix representing the evolution of the inner filter model. The best estimation functionality of the filter is achieved when, in this case, the inner system is a linear approximation of the convoluted behavior of the aircraft mechanics and its automated controller. The model developed within this work is sufficiently robust to enable such analysis, design and configuration. However, such a detailed focus on the inner model is beyond the scope of this work.
Instead, any unknowns in the behavior of the whole system in loop-back, as seen from the filter perspective, are represented by the noise covariance matrix of the inner filter system. Then, the solution proposed for the evaluated system is expressed as:

$E(x) = \left[ \Delta x,\ \Delta\dot{x} \right]^T, \quad F = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}, \quad S_{n|n-1} = F\, S_{n-1|n-1} F^T + Q$

where: $E(x)$ is the estimation of $x$ based on its measurements; $\Delta x$ is the difference of the actual and planned position of the aircraft; $\Delta t$ is the time period between filtering steps; $S_{n-1|n-1}$ is the covariance matrix of the estimated filter state errors derived from the previous filtration step, and $Q$ is the noise covariance matrix representing errors caused by describing the controlled aircraft dynamics using the linear approximation given by the matrix $F$ only.

Figure 1. Modules and corresponding signal flow of the modelled system and processes

A smoother (Meloun & Militky, 2002) module is introduced into the model to support full flight director functionality. Discrete data provided by the EGNOS system and filtered by the digital Kalman filter need to be applied in the consequent continuous control logic. The smoother is designed as a sextic spline, which ensures that both the position and its derivative are continuous in time. Such a requirement is necessary not only to ensure sufficient controller operation but, more importantly, with respect to the aircraft control surfaces being loaded by the filtered signal. More details are provided in Appendix A. The results of the functional operation of the filter and smoother are demonstrated in Figure 2. Decreased variance within the data after filtering is observable, along with a delay in response to input change as an essential part of the smoothing process. The autopilot controller has configurable parameters; its architecture and features are further discussed along with the solutions of the navigation cases. Its configuration corresponds to searching for the optimal gain and time constant of the controller components to achieve a good balance between reaction speed and overshooting in the resulting controlled aircraft position. Simulated processes The essential part of the sensor simulation is to provide the EGNOS navigation signal as an input to the Kalman navigation filter. The principle is to apply known statistical characteristics of the data to the current simulated position of the aircraft. The characteristics are shaped as stochastic noise with Gaussian distribution, which is pre-generated based on the following parameters (Ptacek, 2014): - Mean of 0.30 m in altitude; - Standard deviation of 0.48 m in altitude; - Mean of 0.65 m in the north-south direction used for lateral position; - Convolution of the standard deviation of 0.30 m in the north-south direction and the standard deviation of 0.26 m in the east-west direction; - Discretization period of 1 s. These characteristics provide specific values for the Kalman filter settings of the matrix $R$ and the sample period $\Delta t$. The matrix degrades to a scalar once the position is measured in either the lateral or vertical experiment configuration. Also:

$\Delta t = 1\ \mathrm{s}$. (8)

In the presented model, the actual position of the aircraft is impacted by multiple simulated factors. These are 1) aircraft dynamics, 2) the approach procedure, and 3) environmental conditions.
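The original implementation was written in Scilab; the following minimal Python sketch reproduces the filtering scheme described above, assuming a constant-velocity inner model for $F$ and a placeholder process noise $Q$ (the paper determined the noise matrix elements experimentally). The vertical-channel statistics listed above shape the simulated measurements, and the dropout mode (prediction without a measurement update) is included.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 1.0                                  # discretization period of the EGNOS samples, 1 s
F  = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity inner model (difference and its rate)
H  = np.array([[1.0, 0.0]])               # only the position difference is measured
R  = np.array([[0.48**2]])                # vertical channel: 0.48 m std dev from the cited statistics
Q  = np.diag([0.05, 0.05])                # assumed process noise covering the linearization error

x = np.zeros(2)                           # estimated [difference, difference rate]
S = np.eye(2)                             # covariance of the estimated state errors

def kalman_step(x, S, z=None):
    """One filtration step; z=None models a navigation-signal dropout
    (the prediction continues, the measurement update is skipped)."""
    # prediction through the inner linear model
    x = F @ x
    S = F @ S @ F.T + Q
    if z is not None:
        # gain K = S H^T (H S H^T + R)^-1, then the measurement update
        K = S @ H.T @ np.linalg.inv(H @ S @ H.T + R)
        x = x + (K @ (np.atleast_1d(z) - H @ x)).ravel()
        S = (np.eye(2) - K @ H) @ S
    return x, S

# simulated EGNOS altitude-difference samples: mean 0.30 m, std 0.48 m
for n in range(10):
    z = 0.30 + 0.48 * rng.standard_normal() if n not in (4, 5) else None  # 2 s dropout
    x, S = kalman_step(x, S, z)
    print(f"step {n}: estimated difference {x[0]:+.3f} m")
```

During the two dropped samples the filter keeps propagating its prediction through $F$, which is exactly the dropout behaviour exercised by the verification scenarios below.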
The regular way of simulating aircraft dynamics (Cook, 2012) is in six degrees of freedom, described by force and moment equations within aircraft body coordinates. The indices of the right-hand-side operands of equations (9) to (14) designate their origin: a - aerodynamic, g - gravitational, e - control surfaces, p - propulsion, d - atmospheric. However, the 6-degree-of-freedom simulation is not used, to limit the scope of the work and keep the focus on navigation data processing. The aim to examine the navigation data using the method of functional prototype experiments leads to the selection of the aircraft. The L-410 seems to be the best option when considering factors such as aircraft speed category (ICAO, 2009), availability of dynamic models, and its common use in the Czech Republic. The dynamics are used (CTU, 2015) and integrated as a linear state-space system, i.e.:

$\dot{x} = A x + B u, \quad y = C x$ (16)

where: $x$ is the state vector; $u$ is the control vector; $y$ is the output vector of the system; and the matrices $A$ and $B$ are taken from the L-410 aircraft model. Lateral movement dynamics use a dedicated set of states and controls, and movement in the longitudinal direction is represented by its own states and controls. The kinematics of the aircraft is implemented within the coordinate frame determined by the approach procedure. The process defining the operation of the aircraft studied herein follows the operation of the ILS system. The coordinate system is defined by the localizer and glide slope planes, as illustrated in Figure 3. The progress of the aircraft approach is represented by the decrease of the aircraft's distance from the runway along the approach axis. The axis is the intersection of the two planes. The approach process defines the expected height for the given runway distance per the predefined glide path angle. States of the aircraft dynamics are used directly as kinematic simulation inputs, using the matrices from equation (16). The solution adopted in this work should also fulfil regular ILS operation, which can be expressed by the definition of the approach windows per the ILS categories in Table 1. Parametrization of the system The views of the top level of the system implementation for the lateral and longitudinal directions are presented in Appendix B. Execution of the simulation is not possible without an appropriate parametrization of the system. The system contains a set of configurable parameters which are set by a Scilab script during the initialization of the model for the simulation execution. The parameters are: - Statistical characteristics of the navigation signal; - Elements of the noise matrix of the inner Kalman filter system; - Coefficients of the proportional, derivative and integration components of the controller; - Size and source file name of the stochastically generated errors of the navigation signal measurements. These parameters (except the ones related to the navigation signal) were experimentally determined during simulation executions with the help of system identification methods (Balate, 2003). Comparison to the Instrument Landing System Basing the definition of the approach process on the ILS system functionality (ICAO, 2016) establishes a platform for recognizing the advantages of the LPV approaches. These approaches, compared to ILS ones, rest on performance-based navigation (PBN). The decision height for 3D satellite-based augmentation system (SBAS) approaches (ICAO, 2018) can be determined based on the specific navigation signal performance in the airport area, along with the PBN philosophy of giving the responsibility for navigation aid selection to the pilot.
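As an illustration of how such a linear state-space model can be stepped inside the closed-loop simulation, the fragment below uses forward-Euler integration; the $A$ and $B$ matrices are placeholders, not the actual L-410 values, which are taken from the CTU model and are not reproduced in the text.

```python
import numpy as np

# Placeholder matrices standing in for the L-410 lateral model (CTU, 2015);
# the real A, B values are not reproduced here.
A = np.array([[-0.5, 0.2],
              [ 0.1, -0.8]])
B = np.array([[0.3],
              [0.6]])

dt = 0.01  # integration step (assumed)

def step(x, u):
    """Forward-Euler integration of the linear state space x' = A x + B u."""
    return x + dt * (A @ x + B @ u)

x = np.zeros(2)        # state vector
u = np.array([0.05])   # control vector, e.g. a constant control surface deflection
for _ in range(100):
    x = step(x, u)
print(x)               # state after 1 s of simulated flight
```

In the closed loop of Figure 1, the control vector u would be supplied at each step by the controller module rather than held constant as in this sketch.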
Such new options help to decrease the number of missed approaches. While ILS approaches support the definition of procedures along the approach axis only, LPV procedures can be defined using a curve. This brings the advantage of using them for runways where ILS cannot be used due to minimum obstacle clearance (MOC) constraints (ICAO, 2020). The ability of LPV to define safe procedures per PBN rules provides an opportunity to use satellite navigation for vertical guidance during approach flight phases. It has a positive impact on the flow and capacity of air transport for the airports and may improve environmental conditions, e.g. noise reduction in the concerned area (EGNOS, 2002).

Figure 3. Established coordinate system based on the localizer and glide slope planes defined by the ILS system. The system is enhanced to allow the definition of a curved trajectory as functions of distance from the localizer plane (x) and height (v) on distance from the runway (z). The actual aircraft position is referenced to the planned position by the differences Δx and Δv in the lateral and vertical directions, respectively.

Evaluation of the proposed model takes into consideration both new and existing scenarios. The advantage of the LPV approach along a predefined curve is specifically verified, as discussed below. An extensive focus is given to model operation in the scenarios existing with ILS, so that sufficient LPV performance is demonstrated in altitude control. Furthermore, objectives for the definition of the whole set of verification scenarios are also provided. Approach using the predefined curve The curve for a planned approach is defined, within the glide slope plane, as a function of the distance from the approach axis on the runway distance measured along the axis. In such a way, the planes defined by the ILS system are used for the description of the hyperbolic curve. An additional control signal, besides the signal from the navigation filter, is derived from the known analytic form of the curve, expressed in the following parameters: x is the lateral position of the aircraft as illustrated in Figure 3; n is the offset of the hyperbolic centre with respect to the localizer plane; a is the axis of the hyperbolic curve; z_FAF is the distance of the final approach fix from the runway; z is the distance of the aircraft from the runway as illustrated in Figure 3; m is the offset of the hyperbolic curve centre along the approach axis; ψ0 is the direction of the hyperbolic curve asymptote, which represents the aircraft yaw. The additional signal is constructed from the tangent of the curve in the planned position of the aircraft (z0, x0), which represents the change of the yaw angle along the curve. The property of the selected hyperbolic relation is that the signal is very close to a constant function, which has a stabilizing impact on the aircraft dynamics. Initialization and transition effects can be seen in Figure 4. Flight control in altitude The role of the control system (Balate, 2003) is to cover flight control in the horizontal direction and in altitude within various conditions, encompassing a range of approach angles and the impact of front wind of a wide set of magnitudes. The system must support the decrease of the aircraft speed along the path as well. The speed change impacts the abilities of the controller function, which must be adjusted to support the whole speed range and the speed change.
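The exact hyperbolic relation did not survive the extraction of the source text; the sketch below therefore uses one plausible hyperbolic form built from the parameters named above (n, m, a, ψ0), with hypothetical numeric values. It illustrates the property noted in the text: the feed-forward yaw signal derived from the curve tangent stays close to a constant away from the curve centre.

```python
import numpy as np

# Hypothetical parameters of an illustrative hyperbola (not the paper's values):
n, m = 150.0, 2000.0       # offsets of the hyperbola centre (lateral / along axis), m
a    = 400.0               # axis of the hyperbolic curve, m
psi0 = np.deg2rad(12.0)    # asymptote direction, i.e. the far-field aircraft yaw

def lateral_offset(z):
    """Planned lateral position x for runway distance z (one plausible hyperbolic form)."""
    return n + np.sqrt(a**2 + (np.tan(psi0) * (z - m))**2)

def feedforward_yaw(z):
    """Tangent direction of the curve at z, used as the additional control signal."""
    dxdz = np.tan(psi0)**2 * (z - m) / np.sqrt(a**2 + (np.tan(psi0) * (z - m))**2)
    return np.arctan(dxdz)

# Sample the curve from the FAF distance towards the runway threshold
for z in np.linspace(9630.4, 0.0, 5):
    print(f"z = {z:7.1f} m  x = {lateral_offset(z):7.1f} m  "
          f"psi = {np.degrees(feedforward_yaw(z)):+6.2f} deg")
```

Far from the curve centre the printed yaw approaches ±ψ0, i.e. a nearly constant feed-forward signal, which is the stabilizing behaviour the text attributes to the selected hyperbolic relation.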
A general controller design composed of proportional and derivative components (a PD controller) is expanded to also include an integration component (a PID controller). An example of the model operation is presented in Figure 5. Set of verification scenarios The motivation for studying the environmental conditions of the approach process controlled by the proposed model is to verify the model design against a set of possible use case scenarios. Deployment of a new avionic system within the industry is a complex and sophisticated process; e.g. RTCA recommendations for the system, hardware and software (RTCA, 2012) are used for the certification of avionic systems. Although the application of these methods exceeds the scope of this work, the verification of the proposed model needs to be performed at a sufficient level. The set of verification scenarios is prepared to verify the behaviour of the model within various simulated conditions. The conditions vary due to the definition of the approach procedure, irregularities of the navigation signal, weather conditions, or other causes not incorporated within the scope of this work. The functionality of the developed model is verified against established goals. One group of goals, a window for each precise approach category, is specified in Table 1. The other group is defined using the navigation performance parameters below. Simulations are performed in scenarios designed based on the verification objectives. The outcomes of the simulations are statistically analysed and compared with the value ranges expected by the goals. The verification objectives, determined based on the flight manual of the L-410 aircraft (LET, 1996), are as follows: 1. Approach on a curve, with the planned curved trajectory defined within the localizer plane; 2. Glide slope angles in an interval from 2.75 to 3.77 degrees, with a specific interest in the value of 3.00; 3. Dropout of the navigation signal for 2 seconds; 4. Wind drift of 10 m/s; 5. Headwind of 5 m/s. The selected scope of the approach starts at the final approach fix (FAF) point and ends by transferring to the flare. In the evaluated scenarios, the FAF distance from the runway is 9630.4 m (5.2 nautical miles) and the threshold height for the start of the flare is 3 m. The initial speed is 250 km/h and the target speed at the flare height is 155 km/h. The set of verification scenarios based on the verification objectives is listed in Table 2. The designed model proposed for verification includes a set of input signals dedicated to simulation execution in the environment configured for the given scenario. The following are the signals for lateral aircraft control: a) Reference trajectory; b) Drift; c) Validity flag of the navigation signal; d) Navigation signal trimming. Longitudinal scenarios are configured using the following model inputs: a) Glide slope angle; b) Headwind; c) Validity flag of the navigation signal; d) Navigation signal trimming. Evaluation methodology The verification approach is mostly determined by the selected method of modelling the studied systems and processes. The development of the functional prototype of the control system requires the generation of stochastic data and simulation executions. Guidance for the statistical analysis of the data resulting from the executions is provided by the PBN principles stated in ICAO Document 9613 (ICAO, 2008), which govern APV approaches.
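A minimal sketch of the PID composition described at the start of this section; the gains are placeholders rather than the experimentally identified values from the paper, and the 1 s step matches the discretization period of the navigation samples.

```python
class PID:
    """Discrete PID controller of the kind described for the autopilot;
    the gains below are illustrative placeholders."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                   # integration component
        derivative = (error - self.prev_error) / self.dt   # derivative component
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.05, kd=0.4, dt=1.0)   # dt matches the 1 s navigation samples
for error in [5.0, 3.2, 1.9, 0.8, 0.1]:      # e.g. altitude differences in metres
    print(f"control output {pid.update(error):+.3f}")
```

The integration component added to the original PD design is what removes the steady-state offset, e.g. the persistent altitude error induced by a constant headwind in the longitudinal scenarios.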
The requirements on the position of the aircraft are expressed in the following statistical manner: 1. The difference of the actual position of the aircraft from its planned position is evaluated; 2. The standard deviation of the difference should not exceed the distance given by the selected navigation category; 3. The maximum of the difference should not exceed double the given distance. Goals The navigation goals developed for the verification of the presented solution are established as a combination of two concepts: navigation using the ILS system and performance-based navigation. The first concept makes it possible to prove that the existing navigation requirements, which currently ensure the safety of the approach procedure, are fulfilled. The corresponding goals are specified in Table 1. The specification of the navigation performance to be achieved by the APV approach is provided within the scope of PBN by the specified required navigation performance (RNP). The RNP for the approaches provides requirements on both lateral and vertical navigation (ICAO, 2009), as does the recommended RNP 0.003/15 (FAA, 2006) for the approach procedure studied within the scope of this work. Detailed statistical characteristics are derived as follows: 1. Standard deviation of the difference in the lateral position within 5.556 m; 2. Standard deviation of the difference in altitude within 3.048 m; 3. Maximum difference in the lateral position of 11.112 m; 4. Maximum difference in altitude of 6.096 m. Results The evaluation of the outcomes of the model simulations executed for the scenarios per the verification objectives is presented in Figure 6 and Figure 7. The first four scenarios are executed with the lateral model while the other five are executed with the longitudinal one. The displayed verification results are combined for the horizontal and vertical directions, i.e. for the distance from the planned trajectory in the direction perpendicular to the localizer plane and for the height above the runway, respectively. Figure 6 shows the difference of the actual aircraft position from the centre of the approach window defined in Table 1, compared to the definitions of the window sizes for the equivalent ILS categories. The developed model controls the simulated aircraft down to the reference height of the approach window so that the position of the aircraft is within the window for all scenarios. Similarly, the goals for the distance and for the height are combined within the chart in Figure 7. Both the maxima of the values and the standard deviation of the values during the simulated approach procedure are shown along with the respective RNP required values. Again, the model fulfills the RNP requirements for the executed scenarios. Conclusions The aim of this work was to follow up on the statistical analysis of real EGNOS measurements and to exercise these data against a simulated process of the precise instrument LPV approach. Focus was given to the fact that, compared to ILS, the navigation data are received as discrete samples. The other area of focus, besides studying the behaviour of the model control in altitude, was to challenge the proposed automated control system within the simulated environment of an approach on a curve, which is a major advantage of LPV approaches over ILS ones. Model-based development was selected as the platform for the study. In this way, it was possible to design and develop a functional prototype of the automated control system.
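The acceptance test described in points 1 to 3 above reduces to two comparisons per channel (standard deviation within the limit, maximum within double the limit). A minimal sketch, with stand-in normally distributed data in place of actual simulation outputs and the limits derived above:

```python
import numpy as np

def check_rnp(differences, sigma_limit):
    """PBN-style check: the standard deviation of the position difference must
    stay within the limit and its maximum within double the limit."""
    differences = np.asarray(differences)
    return (differences.std() <= sigma_limit and
            np.abs(differences).max() <= 2.0 * sigma_limit)

# Stand-in simulation outputs (the real values come from the model executions).
lateral_diff  = np.random.default_rng(1).normal(0.0, 1.5, size=600)
vertical_diff = np.random.default_rng(2).normal(0.0, 0.9, size=600)

# Limits derived in the text: 5.556 m lateral, 3.048 m vertical.
print("lateral  RNP met:", check_rnp(lateral_diff, 5.556))
print("vertical RNP met:", check_rnp(vertical_diff, 3.048))
```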
Scilab 5.4.1 was used to cover the modelling, simulation, data processing and data analysis of the effort, using both the graphical and scripting features of the environment. The presented work demonstrates how complex systems and processes can be modelled and simulated in an integrated and consistent manner. Further expansion of the model was considered during the implementation process, so the final architecture of the model is readable and the modules are kept replaceable. Navigation performance goals for the simulation environment were defined based on performance-based navigation guidelines. Analysis of the simulation executions of the model proves that it is possible to control the aircraft within the RNP requirements and within requirements equivalent to the ILS CAT I, II and III approaches below the decision height of 60 metres. The main assumptions behind the achieved results are that other sensors would also be utilized during a real approach and that the verification is built solely on the laboratory experiment. The ILS CAT III operation remains in question since the flare was not analysed. The priority of the work was to develop a functional prototype of the automated control system operating with EGNOS navigation signal inputs, which implies the most significant limitations of the work. The analysis was performed exclusively in the simulated environment and solely with the use of stochastically generated data. The control model was based solely on EGNOS data; no other sensors were considered. Each module of the system was elaborated only to the simplest level that supported the experiment, and not in more detail. The presented work covers several topics of the modelling and simulation areas, so it can easily be considered a starting point for consequent studies. The scope may be expanded by replacing the aircraft dynamics module to examine the behaviour of other aircraft in a wider set of scenarios. Additional focus may also be given to improvements of the navigation filter, the inclusion of other sensors in the control mechanism, a broader study of the initialization conditions of the simulation, and a specific focus on the drift impact on the controlled approach process.

Figure 6. Comparison of the actual aircraft position within the windows against their defined sizes

Figure 7. Comparison of statistical characteristics extracted from simulation outcomes against the values required by the goals

Disclosure statement I am not aware of any competing financial or personal interests from other parties. Regarding professional interests, I have been involved in projects in the past where data processing and simulations were used. However, none of that information, except for the professional experience gained, was used within the scope of this work.

Appendix A The smoother does not use the second derivative of the position, as its estimation is not calculated in the selected Kalman filter design. The resulting implementation was written as script lines in Scilab.

Appendix B Implementations of the evaluated model for control of the aircraft movement during the approach in the lateral and longitudinal directions are shown in Figure 8 and Figure 9, respectively.

Nomenclature: E(x) - estimation of x based on its measurements; x - lateral position of the aircraft with respect to the localizer plane; x0 - actual lateral position of the aircraft with respect to the localizer plane; XY - the second element of the force vector of origin X which has an effect on the aircraft body; y - output vector of a state-space linear system.

Figure 9.
Overview of automated control system model and simulation of the approach process in the longitudinal direction
Effects of the Covid-19 Pandemic on the Academic Perception of Class 8 and Form 4 Students Towards Their National Exams: A Case Study of Narok County, Kenya National exams in Kenya have been seen as the bridge to better livelihoods. Passing the exams translates to more chances of selection to quality schools and professional courses. On the contrary, failing these exams is perceived to render the candidate 'a community failure', with few chances of making it in life. The exams therefore carry a lot of weight in the minds of candidate students. The covid-19 pandemic resulted in the indefinite closure of learning institutions. This closure affected many of the dynamics responsible for candidates' performance in their national exams. A study was therefore conducted to assess how the pandemic had affected candidates' (class 8 and form 4) perception towards national exams in Narok county, Kenya. The research used a mixed design involving case-study and cross-sectional designs of study. Questionnaire guides were used. Descriptive statistics were used to analyze the findings. The findings indicated that the candidates' perceptions of passing essential subjects were substantially altered. The pandemic had also made learners switch their dream professions. The study found that there was little online learning activity, with numerous excuses for this. Most of the learners indicated that their perception of school resumption largely depended on how the government would contain the pandemic. Overall, the pandemic had significantly damaged how the learners perceived national exams. The authors recommend that all education stakeholders move with speed in ensuring the candidates are engaged in learning activities, either online or through community-based learning platforms. Introduction National exams are administered to students at a certain level (usually in their exit classes). In Kenya, the two major national exams are conducted at the end of the primary school level (class 8) and the secondary school level (form 4) (standardmedia.co.ke, 2019). The exam conducted after class 8 is called the Kenya Certificate of Primary Education (KCPE) while that conducted after form 4 is called the Kenya Certificate of Secondary Education (KCSE) (standardmedia.co.ke, 2019). The examining body is the Kenya National Examination Council (KNEC) under the Ministry of Education (MoE) (standardmedia.co.ke, 2019). The main goal of the exams is to evaluate students' understanding of the existing syllabus using a national standard (Dolan and Collins, 2015). The exams enable curriculum developers to assess students' learning processes and outcomes systematically and sustainably (Darling-Hammond et al., 2020). In the current curriculum in Kenya, national exams have been used to determine students' placement in next-level classes and their career professions. A lot of emphasis is thus placed on students' performance in national exams. The current curriculum integrating KCPE and KCSE is known as the 8-4-4 system, involving 8 grades at the primary school level (Amutabi, 2003), 4 at the secondary school level and 4 at the bachelor level (standardmedia.co.ke, 2019; Wanjohi, 2011). This curriculum began in 1985 after replacing the earlier 7-4-2-3 system (Wanjohi, 2011). The graduates of the current curriculum have been criticized for being incompetent and lacking hands-on technical skills. Due to pressure from curriculum reviewers and the general public, the MoE initiated a more technical curriculum.
The curriculum taskforce experts working on the task settled on the competence-based curriculum (CBC), which was launched in 2019 (theelephat.info, 2019). The curriculum adopted the format 2-6-3-3. In this new system, less emphasis was placed on national exams. This was to ensure 100% transition from primary school to secondary school. The curriculum also proposed more emphasis on technical experience and diversification of career paths at secondary school, so as to cushion those who do not pass the secondary school national exams. A lot of emphasis has also been placed on technical and vocational training (TVET) institutes, which admit students who do not make it to universities (theelephat.info, 2019). These measures have relieved the psycho-economic burden that was earlier imposed on students who failed national exams. Nevertheless, the CBC system is being implemented in phases, and KCPE and KCSE exams are still ongoing (as of 2020) (theelephat.info, 2019). Passing KCPE exams implies transition to good-quality, (partially) government-sponsored secondary schools (Wanjohi, 2011). On the other hand, passing KCSE exams implies more chances of selection to the learners' dream universities and career professions. These paths later translate to higher probabilities of landing the learners' dream jobs and, by extension, good livelihoods. On the contrary, failing these exams is seen to prophesy the exact opposite, and the candidates' future is considered doomed. Students who do not get good grades are regarded as failures by their schools, families, community and society. This mentality negatively affects their psycho-economic life. To avoid this, guardians, teachers and the learners' schools spend a lot of money and time to ensure the students pass national exams, by whichever means. Both ethical and dubious means are used to help the learners pass the exams. There have been cases whereby the four parties (student, guardian, teachers and the school) have colluded in examination cheating (theconversation.com, 2018). Students spend sleepless nights revising for the exams. Parents spend a lot of money to acquire modern stationery, private tutors and schools for their children. Teachers craft the education pedagogy to enhance more revision, including instilling a cramming mentality in students. Schools, on their part, enforce numerous tuition sessions and punish students who do not perform well in pre-national exams. All these behaviors affect the perception of learners towards national exams. Narok being a marginalized county, the transition rate from one education level to the next has always been quite low (Westervelt, 2018). Female students in the county also face a lot of challenges in their pursuit of higher learning, especially due to the Maa culture, which discourages girl empowerment through learning. On the other hand, the boy child in the Maa region has other options apart from learning (most notably pastoralism). Actually, to many of the students, learning has always been an alternative. Any interruption such as school closure is ever a welcome option, as their illiterate parents force the students to take care of their livestock. Therefore, the pandemic might have pleased some of the parents, as their livestock can now get more attention. It is therefore interesting to get the perception of these students during these unclear times. In the wake of the covid-19 pandemic in Kenya, all learning institutions were indefinitely suspended (nation.co.ke, 2020).
The suspension aimed at protecting learners, teachers and other education stakeholders from acquiring the deadly infections. The traditional KCPE and KCSE dates (October to November of each academic year) appeared to start wavering. With the persistence of the pandemic, the dates and actualization of the exams continued to be questioned. By June 2020, the majority of the students were uncertain whether the exams would take place in the 2020 academic year. A worse concern was the level of students' preparedness for the exams, even at an uncertain future date. This study aimed at assessing the impacts of the covid-19 pandemic on the perception of students in their primary and secondary school exit classes towards national exams. The pandemic is hypothesized to affect examination dates, attendance and performance; by extension, the transition rate and future professions of the learners will be affected.

2.1 Design of Study

The researchers adopted case-study and cross-sectional designs in the research. A case-study design was chosen because the phenomenon was universal, especially in most developing countries. In these countries, online learning was not popularly adopted and learning had effectively stopped for the majority of the learners. It was therefore more feasible to concentrate on a small area and reflect the findings onto other regions with similar set-ups. This enhances the quality of the findings while minimizing research resources. Narok county (figure 1) was chosen for its metropolitan setup. Like other counties in the country, there is a rich matrix of schools in Narok county. The county has mixed and single-gender schools; private, government and community owned schools; and day and boarding schools at both primary and secondary levels. A cross-sectional design of study was chosen to enable the researchers to assess the effects of the pandemic on students' perception towards national exams during these unprecedented times. The study was conducted in June 2020, when there were no clear predictions of how and when the looming pandemic would come to an end. Descriptive analysis was used in the research. Data were collected using questionnaire guides.

Figure 1: Map of the study area, Narok county (Source: docplayer.net, 2020)

2.2 Sampling Techniques

A randomized block sampling design was used in this study. The study targeted form 4 and class 8 students of both genders (1:1). The study area was divided into blocks depending on geographical location. There was a total of 6 blocks spread throughout the county. A team leader (research assistant) was assigned to supervise the distribution of the data collection items in each of the blocks. For each of the blocks, the data collection tools were sub-divided into four (for each gender and class) and respondents were randomly sampled.

2.3 Sample size

A sample size of 120 was used in this study. For each of the 6 blocks, 20 samples were taken. The samples were equally sub-divided based on gender (10 male and 10 female) and grade (10 form 4 and 10 class 8). The research thus used 60 male (30 form 4, 30 class 8) and 60 female (30 form 4, 30 class 8) students, as illustrated in figure 2; a minimal sketch of this allocation is given after the next subsection.

2.4 Data collection items used

Data were collected using questionnaire guides (Appendix 1). The questionnaires were unstructured and had both open-ended and closed research questions. The questionnaires were divided into 3 main sections, i.e. a respondents' demographic section, a school information section and a section on the academic perception of the students towards national exams.
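As referenced above, the block/gender/grade allocation can be sketched as follows; the candidate pools and names are hypothetical, and only the 5-per-cell draw mirrors the study's design (6 blocks x 2 genders x 2 grades x 5 = 120 respondents).

```python
import itertools, random

random.seed(42)

blocks   = [f"Block {i}" for i in range(1, 7)]   # 6 geographical blocks
genders  = ["male", "female"]
grades   = ["class 8", "form 4"]
PER_CELL = 5                                      # 6 x 2 x 2 x 5 = 120 respondents

def allocate(candidates_by_cell):
    """Randomly draw 5 respondents from every block/gender/grade cell,
    reproducing the 120-respondent design described above."""
    sample = []
    for cell in itertools.product(blocks, genders, grades):
        sample.extend(random.sample(candidates_by_cell[cell], PER_CELL))
    return sample

# Hypothetical candidate pools of 30 students per cell (names are placeholders).
pools = {cell: [f"{cell[0]}-{cell[1]}-{cell[2]}-{i}" for i in range(30)]
         for cell in itertools.product(blocks, genders, grades)}
print(len(allocate(pools)))   # -> 120
```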
The questionnaires were administered by the block team leaders, who would collect them from the respondents after 1 day. During the collection process, the researchers accompanied the team leaders to assist any respondent with clarifications or interpretations where needed. Any interesting clarifications by the respondents were noted down at the back of the questionnaire guides.

2.5 Validity and reliability of the research guides used

Questionnaires were quite reliable since they were easy to administer and encouraged confidentiality, thus reducing bias. They were subjected to the test-retest reliability method to check the consistency of their results after a duration of 10 days. A group of 12 respondents (volunteers) was used. There was a similarity index of 71% in the results at the two instances of testing. A face-validity check of the research guides was conducted by 5 volunteers. A pilot study was then conducted to determine the validity of the questionnaires. Each of the 60 initial distinct research questions in the questionnaire guide was assigned to 2 different respondents; in total, there were 120 respondents (volunteers). The data were then debugged and the minimum and maximum values recorded. After a critical analysis check, 12 questions were found to be confusing or leading. These questions were removed. The ultimate validity score was thus 80% (48 of the 60 questions retained), and the outcomes were found to strongly favor the use of these research guides.

2.6 Data analysis

The data collected were analyzed and presented using descriptive statistics in Ms Excel (2016).

Results and Discussions

3.1 Learners' demographic information

The age distribution of the learners averaged 14-15 years for class 8 students and 18-19 years for form 4 students. About 42.8% and 35.7% of the students in class 8 were aged 14 and 15 years respectively. According to Ngondi (2018), most of the candidates doing KCPE are in this age bracket. Since the 2020 academic year calendar for class 8 students was altered, it was rather obvious that the students would be a year older when doing their KCPE exam. There is a slight correlation between learners' age and passing national exams (Ngondi, 2018). The older the students, the better developed their comprehension, placing them in a position to score better than their younger counterparts; this is because comprehension of ideas increases with the age of learners (Bhagat et al., 2015). From the findings, all the students will be a year older by 2021, when the exams will be conducted. It is therefore expected that their performance will be better, assuming age plays a key role in their performance. The same is expected of the form 4 learners. Only 7.1% of the KCPE candidates were 13 years and below, while 14.3% of the class 8 learners were 16 years and above. According to Chen et al. (2018), family size has a direct correlation with learners' performance in school. The smaller the (nuclear) family, the more attention the learners get from their guardians. The learners also face less competition for resources such as stationery, fees and guardian follow-up time. Only 2.9% of all the students sampled were in a family of fewer than 5 members. On the contrary, 41.2% of the families had 5 to 8 members while 55.9% had more than 8 members. The bulk of the learners' families were thus skewed towards a larger family size. These two demographic factors (age and family size) have a direct impact on learners' grades and, by extension, their perception towards national exams.
While it is certain the learners' ages will have increased after the pandemic, it is not clear whether the learners' family sizes will have changed.

3.2 School information

Most (85.3%) of the learners came from county-level schools. These were the former district schools which were upgraded to county schools after devolution. County schools have more resources, such as infrastructure, teachers and stationery, than constituency development fund (CDF) schools, but fewer than regional and national schools. The high number of students in these schools implies moderate access to learning resources by the learners. 5.9% of the learners attended CDF schools, while 11.7% of the learners were in regional schools. By virtue of the standard of their schools, the 11.7% of learners in regional schools were better positioned to score higher grades in the national exams. Smedley et al. (2001) showed that students in schools with more learning resources have more confidence in national exams compared to those in schools with fewer learning resources. This in turn yields a positive perception of the exams. 83% of the respondents were in government-owned schools, while 14% were in privately owned schools. Only 3% of the learners were in community-owned schools. All of the 14% of learners in privately owned schools were class 8 students. Alimi et al. (2012) showed that there was a significant difference in the performance of students doing the West African Secondary School Certificate (WASSCE) depending on the type of school ownership. Students in privately owned schools perform better than those in government-owned schools due to the resources, especially learner-teacher time, in the private schools. About 71.4% of the learners were in boarding schools while the rest were in day schools. Learners in boarding schools spend more time in school with their colleagues and teachers. These learners are accustomed to their school learning environment and can thus be affected by a different learning environment such as that at home. It is thus likely that little learning was taking place among the students from boarding schools compared to those from day schools, who were already accustomed to learning at home. The relatively higher number of learners from boarding schools (now at home) is of worrying concern to guardians and other education stakeholders. These students are also more susceptible to immoral activities compared to those from day schools (Evans-Campbell et al., 2012). This is because they were used to strict guidance at school by their teachers and school administrators. At home, there are no school guidelines to follow, so students can end up engaging in immoral activities, especially if their guardians and siblings are busy. The difference in learning environment is also a key factor that can alter the candidates' perception towards their national exams. About 68.5% of the learners were in single-gender school set-ups. Of this category, 51.5% of the learners were in form 4. This implies that more than half of the students in form 4 schooled in single-gender schools. Single-gender schools are preferred by guardians for their ability to minimize gender distraction amongst the learners. Students in mixed-gender schools face more gender distraction, especially because of their age (adolescence). These distractions lead to coupling, which later translates to low academic grades, unwanted pregnancies and sexually transmitted infections amongst the learners (Henry et al., 2012).
To minimize gender distractions, most guardians take their learners to single-gender schools. Most of the students in mixed-gender schools were in class 8 and in day schools. These learners were still under their guardians' watch and thus faced less gender distraction. Due to the school closure resulting from the covid-19 pandemic, all the learners were now at home, where there were more gender distractions. For some of the students, their guardians and elder siblings were not always with them at home (as they were busy with their economic lives). It is therefore evident that the learners' perception towards national exams was altered by the pandemic. Most (72%) of the schools had between 9 and 32 teachers per school. According to planipolis.iiep.unesco.org (2018), most of the schools in Narok county have 8 classes at both primary and secondary levels (single stream in primary schools and double stream in secondary schools). This translates to an average of 1 to 4 teachers per class. Most of the classes had 41 to 48 (median = 44.5) students in a single class. Using the calculations above, the teacher:student ratio ranges from 1:11 to 1:45. Table 1 illustrates the number of teachers per school and learners per class in Narok county. The globally recommended teacher:student ratio is 1:40 (Perlman et al., 2017; Appiagyei et al., 2014). The findings thus indicate that there was no significant deviation in teacher:student ratio between the recommended standards and that of the learners in the study area. The lower the ratio, the more attention the learners get from their teachers; the opposite is also true. The lower ratio of 1:11 was mainly for learners in private schools while the higher one (1:45) represented most of the government-owned schools. 62.8% of the learners were allocated only 1 text book per subject by their schools. This is despite the MoE issuing free text books to all learners in Kenya. The deficit in the number of text books for the learners can be attributed to many factors, chief amongst them the embezzlement of government funds by those in charge (Kirya, 2019). 2.9% of the learners did not have any text book at all. 34.3% of the learners had 2 text books per subject; the extra text book was privately owned by the learners. From the findings on teachers, classes and textbooks, it was observed that the learners had moderate resources required for their preparation for national exams. However, the learners admitted that they no longer had access to these resources while at home. Without access to the resources, the students cannot properly prepare for their exams. This can potentially affect their perception of performance in national exams.

3.3 Effects of the covid-19 pandemic on the academic perception of the learners

3.3.1 Effects on the career professions of the learners

The goal of every learner is to land their career profession after completion of their studies. Students' performance in national exams is directly related to their career professions. The Kenya Universities and Colleges Central Placement Service (KUCCPS) places learners into different professional courses based on their performance in the KCSE (advance-africa.com, 2020). On the other hand, good KCSE performance is dependent on the type of school, as most top-performing secondary schools pick students who score well in their KCPE exams. To achieve their dream professions, learners are therefore required to pass the two exams. The major professions chosen by the learners are indicated in figure 3.
Figure 3: The major professions preferred by the learners after the pandemic

The majority of the learners in Narok wanted to become a teacher or a doctor (38.2% each). The learners indicated that the two professions involved a lot of passion for humanity. Teachers were their role models back at school, and they had a strong desire to fit into their shoes after their studies. One of the respondents revealed that teaching was good because teachers earned their salaries even while passive. The learner was quoted as; …our teachers have been earning even during this pandemic. Despite staying at home and engaging in little teaching activities, our teachers (especially those employed by the Teachers Service Commission, TSC) are still earning normal wages. This is not the case in many other professions. For this reason, I want to become a teacher in future. From the explanation of the learner, it is evident that the covid-19 pandemic had an influence on the learners' choice of profession. Another student whose choice of profession was affected by the pandemic was quoted as; …Initially, I wanted to become a pilot. I had a strong desire for high altitude and travelling to far-off lands. I wanted to go to America, Turkey, Australia, Japan and many other countries. However, after the pandemic, most air travel was banned globally. Several pilots and cabin crew lost their jobs or took compulsory pay-cuts. It was then that I started questioning the profession. It is no longer as job-secure as I initially thought. The covid-19 pandemic had increased the desire of some of the learners for the medicine and nursing professions. Apart from paying well, the learners cited that the profession had played a big role in mitigating the effects of the pandemic. One of the learners was quoted as; …I did not have any desire for working in hospitals and handling sick or dead people at all. I just loved to be in offices doing other jobs. When the covid-19 pandemic struck, most of the other professionals were forced to stay at home. The only professionals operating were the essential service providers. The most notable ones were health workers. They did a lot of work to look after those infected by the virus. They appeared to be the most useful people. They earned a lot of respect and I strongly felt the urge to join them. Since then, my dream is to work as a doctor or a nurse. About 11.8% of the learners indicated that they wanted to be engineers. This category of learners had a strong passion for science, geography and technical subjects; engineering was thus the most feasible job for them. The learners indicated that working as engineers was comfortable and well paying. 8.8% of the learners yearned to be lawyers while 2.9% wanted to work in the hospitality sector. The reduced urge to join these careers can be attributed to the pandemic, which critically affected the two sectors. For example, courts were indefinitely closed in Kenya during the pandemic (pigarifimbi.africauncensored.online, 2020). On the other hand, the majority of workers in the tourism and hospitality sector were greatly devastated by the pandemic due to the closure of the sector; most of the workers lost their jobs.

3.3.2 Registration for national exams

About 91.4% of the learners had already registered for their national exams, which were scheduled to be done at the end of the academic year. The high registration was attributed to effective MoE and school preparedness and strict adherence to deadlines.
On the other hand, students fear missing out on the exams and the repercussions that would follow. This is an indicator of the stress put on national exams in Kenya. The learners who had not registered for the national exams (8.6%) were in the process of doing so before schools were closed. These learners cited different reasons, such as being absent from school during registration and lack of the necessary documents required for registration. The registration fee for the exams had been waived by the government through the MoE (capitalfm.co.ke, 2020). All the primary school students had registered for the five compulsory subjects (Mathematics, English, Kiswahili, Science, and Social Studies & Religion). There was a slight variation in the number of subjects registered by the secondary school learners: 63.3% of the learners had registered 7 subjects, 29.2% had registered 8 subjects while 7.5% had registered 9 subjects. Registration of more subjects increases the chance of selection to the students' professional courses, as subjects that the students have failed are cushioned by the extra ones that the learners passed.

3.3.3 Learners' perception towards passing key subjects

The key subjects for both primary and secondary school students in Kenya are Mathematics, English and Kiswahili (Piper et al., 2016). The three subjects are compulsory for all learners at the two academic levels. The subjects also contribute immensely during the calculation of the cluster points required for choosing a professional course and the specific university to offer the course (Kenyayote.com, 2020). The perception of learners concerning the three subjects cannot be directly used to represent the other subjects but is a good indicator of the learners' position towards national exams. The learners' perception towards passing the 3 subjects is illustrated in table 2. The perception of the learners towards passing mathematics was skewed towards the challenging end. This indicates that the learners did not have enough confidence in the subject. Half (50%) of the learners actually agreed that the subject was very challenging and that they did not anticipate good grades in their national exams. According to Darling-Hammond (2020), passing the subject requires close monitoring of the learners with regular exercises. Close monitoring of learners by their teachers is not possible during the pandemic, as the learners are physically detached from their teachers. This in turn affects the students' perception of the subject. The learners had a moderate perception towards passing English and Kiswahili (71% for each of the two subjects). Passing languages is also an involving task requiring teachers' guidance. In Kenya, the two subjects are tested for proper grammar, reading skills, comprehension and composition writing (Roy-Campbell, 2014). These activities also rely on teachers to monitor. Most of the learners could not recall the last unit they were covering in the two subjects. This highlights how the learners' attention towards the subjects had been affected by the pandemic. 45.5% of the learners could not recall the last topic they were learning in English. Among those who could recall, 12.3% of the learners had not completed the previous class's English syllabus, and 28.7% of the learners were in the first half of their final class syllabus. The majority of the candidates were therefore far from ready to do their national exams.
A similar scenario was observed for the Kiswahili subject, whereby 39.9% of the learners could not recall the last topic covered. Another 10.1% were still tackling the previous class's syllabus while 34.5% were in the first half of their final syllabus. The candidates therefore needed more time to cover the untaught concepts. Poor syllabus coverage negatively affects learners' perception towards national exams. This is because the learners are certain they cannot correctly attempt questions on concepts they lack. The extent of syllabus coverage in the 3 essential subjects done in national exams is illustrated in table 3. Out of the three essential subjects, mathematics was the least reviewed subject by the learners (considering the high number of learners who could not recall the last topic taught). Only 40.2% of the learners recalled the last topic taught. Out of this fraction, 9.8% of the candidates were still tackling the previous class's syllabus while 20.4% were in the first half of the final year syllabus. These findings indicate that the learners were far from prepared for the national exams. The covid-19 pandemic and the resulting learning break worsened the students' perception towards passing the three subjects.

3.3.4 Status of the learners' preparedness for the national exams while at home

About 73.3% of the learners were learning while at home. The learners cited uncertainty about the national exams and poor preparedness before schools were closed as key factors that made them continue learning while at home. The candidates feared failure in the national exams if they did not prepare in advance. The candidates used several methods in their studies, as illustrated in figure 4. About 72.1% of the learners had their text and note books at home, which they used for revision for the national exams. This was the most common method used by the majority of the learners because it did not involve using other people or resources. One of the learners was quoted saying; …Apart from my teachers, I do not trust other teachers or means of revision. While I was in class 8, I had many tuitions which confused me. I did not pass well and ended up in this school. Since then, I do not rely on other modes of learning except my teachers and notes. Therefore, revising text books and teachers' note books is the only method that I wholly trust. Apart from reliability, teachers' notes are tailored to what the learners had learnt. The learners could thus easily comprehend the notes. On the other hand, text book notes had a bigger scope than what the learners had learnt or were to learn (Bonney, 2015). Most of the text books also contain topical revision questions at the end of topics or sub-topics. 13.6% of the learners depended on their older literate guardians or siblings to assist them in revising. This category of learners indicated that their siblings and guardians understood their strengths and weaknesses and therefore taught them at a friendly pace. However, the learners admitted that there was no serious follow-up on the learning tasks given by their siblings and guardians. Some of the learners also admitted that they did not take their siblings seriously during the sessions. 6.7% of the learners had private tutors. The tutors would spend 2 to 4 hours daily with the candidates, usually in the afternoons and evenings.
The tutors were either registered teachers or university students with a close relationship with the guardians. According to the learners, the tutors were more serious with the revisions and exercises given. However, most specialized in some subjects at the expense of others. Most of the tutors taught mathematics and science subjects. Only 1% of the candidates indicated they had meaningful revision with their friends. The learners indicated that their friends were never serious while studying together. One of the learners was quoted saying; …when we plan to meet and discuss with my friends, it always ends up in unnecessary talk and play instead of discussion. My friend and myself love football, so when we meet we forget about books and discuss matters football… 2.3% of the learners relied on social media (WhatsApp and Facebook) for revision. They indicated that they had formed revision groups where education content and questions were sent to the learners. The answers to the given questions would also be posted later. The students admitted that this mode of learning was expensive because of network bundles and subscription fees to the group administrators. 4.3% of the learners relied on radio for learning. The learners who depended on radio indicated that this mode of learning was effective, but they had to compete with their siblings or guardians for the radio gadgets.

Figure 4: Some of the learning activities the candidates engaged in while at home

About 26.7% of the candidates indicated that they were not learning while at home. The learners cited various reasons, such as lack of time, learning resources and proper guidance, amongst others, as the key reasons for not studying while at home, as shown in figure 5. 51.0% of these learners indicated that they did not have any learning materials at home, having left their books at school, as explained by one of the form 4 students in Loita (Narok); …when we were asked to go home in March, I thought that we would resume after 2-3 weeks. I thought that corona was here for less than a month. I did not therefore see the urgency to carry along my books. All my books are in school and my siblings and neighbors are in different classes. I do not like bothering my friends by borrowing books from them, so I decided to relax until schools are re-opened. My guardians do not have money to buy other books. About 25.2% of the learners cited the illiteracy of their siblings and guardians as the main reason why they were not studying at home. The learners indicated that the concepts covered were quite challenging and the intervention of a literate person was required. Since their teachers were away and their siblings and guardians were illiterate, the learners did not find a reason to study while at home. This may well be an excuse used by lazy students, since the students could have exploited other avenues to study at home. 11.5% of the learners cited a lot of distraction while at home as the primary reason for not studying. The learners' studies were frequently disrupted by many activities, such as television and radio, mobile phone calls and chats, and siblings' and friends' demands for attention, amongst others. 8.8% of the learners who could not manage to study while at home cited too many engagements from their guardians. The learners claimed that their parents involved them in tedious family duties which consumed a lot of time and effort. When they were done with the chores, they were very tired and could not concentrate on their books.
Some of the learners indicated that they were engaged as family co-breadwinners: they went to work with their guardians so that the family could earn more revenue to sustain its basic needs. 3.5% of the learners indicated that they had literally given up on their studies until the pandemic was over; they did not feel any urge to strain over books for an uncertain academic future. The distribution of the reasons given by the learners not studying at home is indicated in Figure 5.

Figure 5: Some of the reasons why learners could not study while at home

The relatively high number of learners not involved in learning at home, for one reason or another, is of great concern, bearing in mind the uncertainty in school resumption dates. This category of learners can easily end up quitting their studies if the duration at home is prolonged. Bearing in mind the chauvinistic Maa culture, a good proportion of girls in this category might never resume their studies.

3.3.5 The effectiveness of e-learning for the candidates at home

Only 37.1% of the candidates attested to knowing about e-learning; all the rest indicated that they were not aware of it. The low penetration of the concept of e-learning in the marginalized Narok county paints an image of very little learning, not only in the county but also in other parts of Kenya and other developing nations. E-learning was intended to substitute for physical learning, since the latter was not realistic during the COVID-19 pandemic era (Almaiah et al., 2020). Its absence among candidates in Narok indicates very little awareness-raising by the MoE and other government or school administrators. The media through which the learners knew about e-learning and participated in it are illustrated in Figure 6. Radio was the leading platform. The two common radio stations in Narok, according to the learners, were Mayiang' FM and Sidai FM, both of which broadcast their content in the local Maa dialect. 33.3% of the learners (a third) knew about e-learning through this platform, out of whom 73.6% continued learning through the two radio stations. Another 29.8% of the learners were made aware of e-learning through television; of this proportion, 13.2% continued studying using television. The relatively higher share of e-learning through radio and television compared with other media implies that most learners spend a significant period of time listening to or watching these two media, and that the learners have a lot of trust in the content they deliver. Therefore, the MoE should utilize the two platforms to increase awareness of e-learning while allocating more time to e-learning programs. Friends, guardians and siblings constituted 36.9% of the total media used in enlightening the candidates about e-learning. This value might be a trickle-down effect from radio and television, reiterating that more effort should be put into those two media. The learners indicated that they preferred e-learning radio programs because the educators could switch from English and Kiswahili to the local dialect (Maasai) to enhance the students' understanding. The learners could also ask questions by calling at waived charges and express their questions and opinions in any language. These activities indicate the candidates' strong desire to prepare adequately for their national exams. 24.4% of the candidates were assisted by their guardians and siblings in the learning activities. Unfortunately, out of the 37.1% of learners aware of e-learning, 58.5% were not able to learn.
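The last two figures are worth combining: if 37.1% of all candidates knew about e-learning and 58.5% of those could not take part, then only about one in seven candidates was actually reached. The following back-of-envelope sketch, assuming the two reported proportions can simply be multiplied, makes this explicit.

```python
# Back-of-envelope combination of the reported e-learning figures:
# 37.1% of candidates were aware of e-learning, and 58.5% of the aware
# group was unable to take part.
aware = 0.371
unable_among_aware = 0.585

engaged = aware * (1 - unable_among_aware)
print(f"Share of all candidates actually engaged in e-learning: {engaged:.1%}")
# prints roughly 15.4%
```

On these figures, roughly 15.4% of all candidates were engaged in e-learning, consistent with the picture of very limited uptake painted above.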
The candidates gave several reasons for not participating in the process, as indicated in Figure 7.

Figure 7: Some of the reasons why learners could not engage in e-learning

About 28.6% of the learners cited lack of power as the key reason for not participating in e-learning. Being a marginalized county in a developing country, Narok has quite low electricity connectivity (energypedia.info, 2020). A good proportion of the residents do not have permanent houses that would warrant electrification, because they are a pastoralist community (Caulfield et al., 2016) who keep moving with their herds of cattle in search of pasture. 21.5% of the learners indicated that they were too busy with other activities to find time for online learning. This category of learners preferred other forms of learning that took little time. They thought that e-learning meant tuning in to radio or TV, which they considered leisure activities, and indicated that the time used for such activities could instead be spent on domestic chores or other learning activities. 21.3% of the learners lacked electronic gadgets such as TVs, radio sets and smartphones, while a further 20.7% lacked internet bundles to assist them in e-learning activities. Both situations were worsened by the COVID-19 pandemic, which had affected the economy: any finances obtained were prioritized to cover basic needs. A few learners cited poor network coverage and mistrust of online educators as reasons for not participating in the learning activities. A more worrying concern is the proportion of students who indicated that online learning was ineffective: 81.3%. The learners gave several reasons for considering it ineffective, including that the platform did not harmonize learners of different individual abilities, that it assumed the students were at equal levels in their syllabus coverage, that some of the educators were too fast, that some educators were boring, and that the educators did not follow up to check how the learners had performed on the assignments given. This points to major gaps in online learning in Narok county.

3.3.6 Learners' perceptions and recommendations going forward

The majority of the learners indicated that they would comfortably resume school in January 2021 (60.5%) or May 2021 (5.2%) if the COVID-19 pandemic were contained. The learners did not have any optimism about the condition being contained, or a cure being found, in 2020. As one learner put it:

…It is also unlikely the government will effectively contain the disease for us to resume school in 2020. I also don't think that our school is effectively prepared to meet all the safety regulations as guided by the World Health Organization (WHO) on institutions re-opening.

Only 15.3% of the learners were optimistic about school resumption in 2020 (September). The majority of the learners in this category indicated that they were tired of staying at home. The learners were anxious for the president to instruct the MoE (on 6th June, 2020) to enforce safety regulations so that they could resume learning. 19% of the learners had completely given up on resumption until the pandemic was completely contained; they did not see the urgency of resuming school only to contract the disease. It was rather obvious that the candidates' perception of passing their national exams was completely altered by the pandemic. The pandemic, not their national exams, was the key priority for the candidates.
Regarding the candidates' readiness to sit the national exams at that moment, only 3.1% were ready. 96.9% of the learners were not ready to do the exams for various reasons, such as poor syllabus coverage and lack of revision. The learners felt that they would fail the exams if they did them at that point, which would affect their selection to good secondary schools and their dream professions. 78% of the learners already felt that it would be difficult to attain their initial academic standards and had given up on their dream professions. This attitude can negatively impact the learners' perception of learning and, eventually, of national exams; a poor learning attitude is a key cause of failure by learners (Najimi et al., 2013). 65% of the learners were ready to repeat the current grade. They indicated that almost half of the year had been lost (by June) and it was not worthwhile continuing; there was no need for the national exams to be done in the first or second terms. The only viable option was to call off the academic year and restart when the pandemic had been mitigated. On the other hand, 35% of the learners were adamant that the academic year should not be nullified but rather supplemented when schools resume. They indicated that the 2 months learnt were critical and should not be discarded; doing away with the 2 months would prolong their stay in school, which was against their desires. It was rather evident that, to these students, passing their national exams was not the key priority. About 83.3% of the learners had fee arrears in the range of Ksh. 1,000 to 10,000. The learners were, however, optimistic that the balance would be cleared by the time of the national exams. Failure to clear fees could disqualify the learners from doing the exams or receiving their certificates, although this rule depends on the school management. About 79.5% of the learners did not trust public service vehicles' preparations for preventing them from acquiring the coronavirus. The learners perceived the transport sector as not taking the condition very seriously, which reduced their desire to travel to school when the schools re-open. 83.3% of the learners were worried that they might be denied the chance to play with their colleagues when they resume learning, and a similar proportion was worried that it would be difficult to share stationery with their colleagues at school. On a more positive note, 81.9% of the learners indicated that they were willing to have their masks on at all times while at school. Nevertheless, all (100%) of the students attested that this was an uncomfortable ordeal which they would undertake to minimize their chances of acquiring the coronavirus. The overall perception is that learners were ready to resume learning when it is safe to do so, so that they can adequately prepare for their national exams.

Conclusions

The COVID-19 pandemic totally altered the dream professions of most of the learners. The pandemic was also responsible for affecting their perception of passing the essential subjects tested in national exams (Mathematics, English and Kiswahili); the three subjects had very poor syllabus coverage by the time schools were indefinitely closed. The majority of the learners attested to studying notebooks and textbooks while at home. There was very little online learning ongoing, with the candidates offering many excuses for not engaging in it.
The learners' perceptions concerning the dates, performance and attendance of national exams were wholly dependent on how the health pandemic would be contained.
Hypothalamic ERK Mediates the Anorectic and Thermogenic Sympathetic Effects of Leptin

OBJECTIVE—Leptin is an adipocyte hormone that plays a major role in energy balance. Leptin receptors in the hypothalamus are known to signal via distinct mechanisms, including signal transducer and activator of transcription-3 (STAT3) and phosphoinositol-3 kinase (PI 3-kinase). Here, we tested the hypothesis that extracellular signal-regulated kinase (ERK) mediates leptin action in the hypothalamus.

RESEARCH DESIGN AND METHODS—Biochemical, pharmacological, and physiological approaches were combined to characterize leptin activation of ERK in the hypothalamus in rats.

RESULTS—Leptin activates ERK1/2 in a receptor-mediated manner that involves JAK2. Leptin-induced ERK1/2 activation was restricted to the hypothalamic arcuate nucleus. Pharmacological blockade of hypothalamic ERK1/2 reverses the anorectic and weight-reducing effects of leptin. The pharmacological antagonists of ERK1/2 did not attenuate leptin-induced activation of STAT3 or PI 3-kinase. Blockade of ERK1/2 abolishes leptin-induced increases in sympathetic nerve traffic to thermogenic brown adipose tissue (BAT) but does not alter the stimulatory effects of leptin on sympathetic nerve activity to kidney, hindlimb, or adrenal gland. In contrast, blockade of PI 3-kinase prevents leptin-induced sympathetic activation to kidney but not to BAT, hindlimb, or adrenal gland.

CONCLUSIONS—Our findings indicate that hypothalamic ERK plays a key role in the control of food intake, body weight, and thermogenic sympathetic outflow by leptin but does not participate in the cardiovascular and renal sympathetic actions of leptin.

Leptin is a largely adipocyte-derived hormone that can act in the central nervous system to decrease appetite and increase energy expenditure, thereby leading to decreased body weight (1). Central actions of leptin play an important role in the regulation of several other physiological functions, including reproductive function (2), bone formation (3), and regional sympathetic nerve activity (SNA) subserving thermogenic metabolism and cardiovascular function (4). Leptin exerts its effects via interaction with specific receptors located in distinct classes of neurons. While several isoforms of the leptin receptor have been identified, the Ob-Rb form, which includes the long intracellular domain with signaling capacity, appears to mediate most of the biological effects of leptin (5,6). The signal transducer and activator of transcription-3 (STAT3) pathway was the first signaling mechanism associated with the leptin receptor (7). Neural-specific inactivation of STAT3 leads to hyperphagia and obesity in mice (8). In addition, disrupting the ability of the leptin receptor to activate the STAT3 pathway in mice leads to severe obesity and several other neuroendocrine abnormalities (9–11). More recently, other intracellular signaling mechanisms, including phosphoinositol-3 kinase (PI 3-kinase) (12), AMP-activated protein kinase (13), and mammalian target of rapamycin (14), have been shown to play an important role in the action of leptin on food intake. Extracellular signal-regulated kinase (ERK), a member of the mitogen-activated protein kinase (MAPK) family, is an additional downstream pathway of the leptin receptor (15). Leptin was shown to activate ERK1/2 in a time- and dose-dependent manner in cultured cells (16–18).
Activation of ERK1/2 by leptin seems to be mediated through Src homology-containing tyrosine phosphatase 2 (Shp2) emanating from tyrosine 985 (Tyr985) of the leptin receptor (19,20). Stimulation of ERK by leptin can also be achieved by direct interaction with Jak2 (15,19,20). In turn, in cell lines, ERK appears to mediate the activation of c-fos (20) and of ribosomal S6 kinase and S6 (21) by the leptin receptor. This ERK pathway has been reported to mediate leptin effects in several tissues, including cardiomyocytes (22,23), the immune system (24,25), and kidney (26). However, the physiological significance of this pathway for the hypothalamic-mediated effects of leptin remains poorly characterized. This study depicts the effect of leptin on hypothalamic ERK and investigates the potential role of this ERK pathway in mediating the effect of leptin on food intake, body weight, and regional sympathetic outflow.

RESEARCH DESIGN AND METHODS

Male Sprague-Dawley rats and lean and obese Zucker (fa) rats were obtained from Harlan Sprague-Dawley. Rats were housed at 23°C with a 12-h light/dark cycle (lights on at 6:00 A.M.) and allowed free access to standard rat chow and water. Rats receiving injections in the third cerebral ventricle were equipped with intracerebroventricular cannulas at least 1 week before the experimentation, as described previously (27). Ethical approval of all of the studies was granted by the University of Iowa Animal Research Committee.

Biochemical studies. Rats were fasted overnight before intracerebroventricular administration of murine leptin (R&D Systems). Rats were killed at the indicated time points by CO2 asphyxiation. The mediobasal hypothalamus was quickly removed from each rat, and the total proteins were extracted and stored at −80°C. Protein samples of homogenized tissues or immunoprecipitates [to assess the effect of leptin on PI 3-kinase, immunoprecipitates were obtained by incubating protein samples with anti-IRS-1 antibody (E-12; Santa Cruz Biotechnology) in the presence of protein A-sepharose] were resolved by 10% SDS-PAGE, and PVDF membranes were incubated with specific antibodies, including STAT3 (C-20; Santa Cruz Biotechnology) and phospho-STAT3. To assess the effect of leptin on p38 MAPK in the skeletal muscle, rats were injected intraperitoneally with leptin (1 μg/g body wt) and killed 10 min after the treatment. Skeletal muscle from the hindlimb was removed, and the extracted proteins were assayed for phospho-p38 MAPK as described above.

Immunohistochemistry. Rats were fasted overnight and treated with vehicle or leptin either intracerebroventricularly (10 μg) or intraperitoneally (1 μg/g body wt). Five to 20 min after the treatment, rats were killed by CO2 asphyxiation and then perfused transcardially with PBS followed by 4% paraformaldehyde in PBS. The brains were removed and postfixed in 4% paraformaldehyde at 4°C overnight. Fixed brains were washed three times with PBS and incubated in 30% sucrose in PBS. Coronal sections (30 μm) were cut with a freezing Microm cryostat. Free-floating sections were washed with PBS and permeabilized with 0.1% Triton X-100 in PBS. Sections were then incubated overnight at 4°C with a mouse phospho-ERK antibody (1:100; sc-7383; Santa Cruz Biotechnology) in 0.2% goat serum, followed by 1-h incubation at room temperature with a secondary antibody, rhodamine (TRITC)-conjugated donkey anti-mouse IgG (1:100; Jackson ImmunoResearch Laboratories).
For double labeling, brain sections were processed for the localization of phospho-ERK as above. Rabbit antibodies recognizing proopiomelanocortin (POMC; 1:50; Phoenix Pharmaceuticals) or neuropeptide Y (NPY; 1:50; Chemicon International) were used to identify the neurons in which leptin activates ERK. Biotin-SP-conjugated donkey anti-rabbit IgG (1:100; Jackson ImmunoResearch Laboratories) was used as the secondary antibody. Primary antibodies were tested separately before performing the double immunostaining. Additional control experiments consisted of the omission of primary or secondary antibodies in each case. Further processing for immunodetection was performed using kits (Vector Laboratories) following the manufacturer's instructions. Slices were mounted on slides, coverslipped, and visualized using a Nikon Eclipse E600 fluorescence microscope equipped with a SPOT RT digital camera.

Food intake and body weight studies. Food was removed from the individually caged rats the day before the study. Rats were given a single intracerebroventricular injection of PD98059 (5 μg), U0126 (7 μg), vehicle (DMSO; 2 μl), or artificial cerebrospinal fluid (2 μl), followed 15 min later by an intraperitoneal administration of leptin (1 μg/g body wt) or vehicle (saline), or an intracerebroventricular injection of 5 μg melanotan II (MTII) or corticotrophin-releasing factor. The doses of the various drugs were based on our previous studies (27,28). Food was returned 1 h after the last injection, corresponding to the onset of the dark cycle. Food intake and body weight were then recorded after 4 and 24 h.

Study of the sympathetic nervous system. Anesthetized rats were instrumented for direct multifiber recording of regional SNA as described previously (27,28). Brown adipose tissue (BAT) SNA was recorded simultaneously with SNA to kidney, hindlimb, or adrenal gland. After baseline recordings of SNA were obtained, each animal received two intracerebroventricular injections: first PD98059 (5 μg), U0126 (7 μg), LY294002 (5 μg), or vehicle (DMSO; 2 μl), followed 15 min later by leptin (10 μg) or saline. After intracerebroventricular administration of the experimental agents, SNA measurements were made every 15 min for 6 h. The data for SNA are expressed as percentage change from baseline.

Statistical analysis. All results are expressed as means ± SE and analyzed using Student's t test and one- or two-way ANOVA. When ANOVA reached significance, a post hoc comparison was made using the Bonferroni or Newman-Keuls test. A value of P < 0.05 was considered statistically significant.

RESULTS

To investigate the hypothesis that the ERK pathway is important for leptin action in the central nervous system, we first assessed whether leptin activates this enzyme in the hypothalamus in vivo. The effect of intracerebroventricular administration of leptin on the activity of STAT3 and ERK1/2 in the hypothalamus was examined in Sprague-Dawley rats. As previously reported (7,12,13), leptin caused a robust activation of STAT3 in mediobasal hypothalamic extracts (Fig. 1A). Leptin also caused a rapid activation of hypothalamic ERK1/2, with a maximum effect at 5-15 min (Fig. 1B), consistent with previous findings (29,30). This response was dose dependent (5 and 10 μg leptin increased ERK1/2 activity by 3.4- and 4.3-fold, respectively). In peripheral tissues, such as the skeletal muscle, leptin has been shown to activate different isoforms of MAPK (31).
Therefore, we examined whether leptin alters the activity of another hypothalamic isoform of MAPK. Leptin did not affect the activity of p38 MAPK in the hypothalamus (Fig. 1C). Intraperitoneal (IP) administration of leptin (1 μg/g body wt) did, however, cause a 3.1-fold increase (P < 0.01) in p38 MAPK in skeletal muscle (data not shown). Leptin activation of ERK1/2 appears to be mediated by the leptin receptor, because leptin activates hypothalamic ERK1/2 in the Zucker lean rat but not in the leptin receptor-deficient obese Zucker rat (Fig. 2A). Blockade of JAK2 with AG490 inhibited, in a dose-dependent manner, the activation of ERK1/2 by leptin (Fig. 2B), demonstrating that the leptin receptor modulates the activity of ERK1/2 via JAK2. These data demonstrate that in the hypothalamus, the effect of leptin on ERK is receptor mediated, involves JAK2, and is specific to the ERK1/2 isoforms. Within the hypothalamus, the leptin receptor has been detected in several nuclei, including the arcuate, ventromedial, paraventricular, and dorsomedial nuclei (32,33). We used an immunohistochemical approach to identify the hypothalamic nuclei in which ERK1/2 is activated by leptin. Both systemic and central administration of leptin caused a marked increase in immunoreactive ERK1/2 in the arcuate nucleus (Fig. 3A). In contrast, no increase in ERK activity was observed in hypothalamic nuclei other than the arcuate nucleus (in Fig. 3B, the paraventricular nucleus is shown as an example) or in extrahypothalamic nuclei, including the nucleus tractus solitarii in the brainstem (data not shown). These data suggest that leptin activation of hypothalamic ERK1/2 is selectively localized in the arcuate nucleus. Two classes of neurons account for leptin sensitivity within the arcuate nucleus (32-34): first, a catabolic pathway represented mainly by POMC neurons that is activated by leptin; and second, an anabolic pathway represented principally by the NPY neurons that is inhibited by leptin. We therefore used double staining to determine whether activation of ERK1/2 by leptin occurs in one specific neuronal population. Interestingly, all the neurons in which ERK1/2 immunoreactivity was increased by leptin were POMC positive (supplemental Fig. 1A, available in an online appendix at http://dx.doi.org/10.2337/db08-0822). No activation of ERK1/2 was observed in NPY neurons (supplemental Fig. 1B). These data suggest that ERK mediates leptin action through an effect on POMC neurons. To test the hypothesis that ERK is crucial for the physiological action of leptin, we assessed the effect of ERK inhibition (PD98059 and U0126) (35-37) on the feeding and body weight responses to leptin. We first confirmed that leptin decreased food intake at 4 h (Fig. 4A) and 24 h (Fig. 4B). This effect was accompanied by decreased body weight at 24 h after treatment with leptin (Fig. 4C). Pretreatment with the ERK inhibitors (PD98059 or U0126) reversed the decrease in food intake induced by leptin at 4 h (Fig. 4A) and 24 h (Fig. 4B). The ability of leptin to decrease body weight was also blocked by PD98059 and U0126 (Fig. 4C). To exclude the possibility that blockade of the anorectic and weight-reducing actions of leptin by PD98059 and U0126 might be due to inhibition of other mediators of leptin action, such as STAT3 and PI 3-kinase (which are known to play an important role in leptin effects on food intake) (9,12), we tested the effect of leptin on the activity of ERK1/2, STAT3, and PI 3-kinase in the presence of PD98059 and U0126.
As expected, leptin activation of hypothalamic ERK1/2 was prevented in the presence of PD98059 or U0126 (Fig. 5). In contrast, stimulation of hypothalamic PI 3-kinase and STAT3 by leptin was not affected by the presence of these inhibitors (Fig. 5). These data demonstrate that blockade of the effect of leptin on food intake and body weight by PD98059 and U0126 is due to inhibition of ERK1/2 and not to blockade of leptin-induced STAT3 or PI 3-kinase activation. Because ERK is a key enzyme in many intracellular signaling processes, we addressed the specificity of the blockade of leptin-induced anorexia and weight loss by testing the feeding- and weight-reducing actions of other stimuli, i.e., MTII, an agonist of the melanocortin receptors, and corticotrophin-releasing hormone. Intracerebroventricular MTII caused a significant decrease in food intake and body weight at 4 and 24 h (Fig. 4D-F; data not shown). Intracerebroventricular pretreatment with PD98059 or U0126 did not alter the effect of intracerebroventricular MTII on food intake and body weight (Fig. 4D-F). In addition, the anorectic and weight-reducing actions of intracerebroventricular administration of corticotrophin-releasing hormone were not affected by the ERK inhibitors (data not shown). Together, these findings show that the pharmacological inhibitors of ERK produce a selective blockade of the effects of leptin and do not attenuate the responses to other agonists, such as MTII and corticotrophin-releasing hormone. Leptin stimulation of the sympathetic nervous system regulates diverse physiological processes, including energy expenditure and cardiovascular function. The possible role of hypothalamic ERK in the control of sympathetic nerve traffic by leptin was therefore investigated. Intracerebroventricular leptin caused regional sympathetic activation, including increases in sympathetic nerve outflow to BAT (Fig. 6A), kidney (Fig. 6C), hindlimb (Fig. 6E), and adrenal gland (Fig. 6G). Selective inhibition of ERK prevented leptin-induced sympathetic activation to thermogenic BAT (Fig. 6B). This effect of ERK inhibition on the BAT sympathetic response to leptin was dose dependent (BAT sympathetic activation to leptin was 183 ± 13, 114 ± 19, and 14 ± 11% in the presence of vehicle, 3 μg U0126, and 7 μg U0126, respectively; P < 0.001). In contrast, PD98059 or U0126 did not alter the renal (Fig. 6D), lumbar (Fig. 6F), or adrenal (Fig. 6G) sympathetic nerve responses to leptin. To examine whether the control of sympathetic outflow to thermogenic BAT by leptin exclusively involves ERK, we assessed the effect of inhibiting another pathway that has been shown to play a major role in the feeding and sympathetic responses to leptin, PI 3-kinase (12,38). Inhibition of PI 3-kinase with LY294002 significantly attenuated renal sympathetic activation to leptin (Fig. 6D), consistent with our previous report (38). In contrast, PI 3-kinase inhibition with LY294002 failed to alter the BAT (Fig. 6A), lumbar (Fig. 6F), or adrenal (Fig. 6G) sympathetic nerve responses to leptin. Taken together, these results demonstrate that leptin regulates different regional sympathetic nerve activities through distinct and contrasting intracellular signaling pathways, with ERK contributing to the thermogenic BAT sympathetic response but not to the renal, lumbar, or adrenal sympathetic nerve responses to leptin.
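The statistical plan stated in the methods (means ± SE, ANOVA followed by a Bonferroni-corrected post hoc comparison) can be made concrete with a short sketch. The BAT SNA percent-change values below are invented placeholders chosen only to resemble the reported group means; they are not the study's data, and the group labels are illustrative.

```python
# Sketch of the stated analysis plan (one-way ANOVA + Bonferroni post hoc)
# on invented BAT SNA percent-change-from-baseline values.
from itertools import combinations
import numpy as np
from scipy import stats

groups = {
    "vehicle":   np.array([170.0, 195.0, 180.0, 188.0]),  # placeholder data
    "U0126_3ug": np.array([120.0, 100.0, 118.0, 116.0]),
    "U0126_7ug": np.array([10.0, 25.0, 5.0, 16.0]),
}

# Means +/- SE, as in the text
for name, vals in groups.items():
    print(f"{name}: {vals.mean():.0f} +/- {stats.sem(vals):.0f} %")

# One-way ANOVA across the three groups
f_stat, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p:.2g}")

# Bonferroni-corrected pairwise t tests if the ANOVA is significant
if p < 0.05:
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        _, p_raw = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: p = {min(p_raw * len(pairs), 1.0):.3g} (Bonferroni)")
```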
DISCUSSION

We have characterized a new hypothalamic signaling mechanism of the leptin receptor. Our results show that leptin activates ERK1/2 in the arcuate nucleus and that this pathway contributes to leptin control of food intake, body weight, and thermogenic sympathetic outflow. Hypothalamic ERK appears to mediate selective leptin actions, because its inhibition prevented some but not all actions of leptin. This suggests that modulation of the hypothalamic ERK pathway could affect the metabolic actions of leptin (appetite and thermogenic sympathetic metabolism) without altering its sympathetic cardiovascular and renal actions. Previous studies have implicated hypothalamic ERK in the regulation of energy homeostasis. Fasting was shown to activate ERK in the hypothalamic arcuate and paraventricular nuclei in mice (39,40). Fasting-induced activation of hypothalamic ERK was reversed by re-feeding, suggesting that activation of ERK in the hypothalamus is relevant for energy balance (40). Our current finding that hypothalamic ERK contributes to the actions of leptin extends the role of this ERK pathway in the control of energy homeostasis. Our results are in line with the observation that the development of obesity in mice with neuronal-specific ablation of Shp2 is related to the inability of leptin to stimulate ERK (41). To study the role of neuronal Shp2, Zhang et al. (41) used the cre-loxP system to create a conditional Shp2 mutant allele in mice. This allowed selective deletion of Shp2 in postmitotic forebrain neurons. Surprisingly, the predominant phenotype exhibited by this mouse model was the development of early-onset obesity. In subsequent studies, Zhang et al. found that leptin-induced phosphorylation of ERK1/2 in the arcuate nucleus of the hypothalamus was dramatically reduced in mice with neuronal-specific ablation of Shp2 compared with controls (41). In contrast, the ability of leptin to induce phosphorylation of arcuate STAT3 was preserved in this mouse model. Although the signaling cascade leading to the activation of ERK by the leptin receptor remains unclear (15), the inability of leptin to stimulate ERK in the absence of Shp2 suggests that this protein mediates leptin activation of ERK. However, recent findings challenge the importance of Shp2 in mediating leptin effects (42). This is based on the observation that, in mice, mutation of the Tyr985 of the leptin receptor that blocks Shp2 recruitment did not recapitulate the obesity phenotype observed in mice in which Shp2 was deleted in forebrain neurons (42), suggesting that, in vivo, a Jak2-dependent mechanism may be the predominant pathway for the stimulation of ERK by the leptin receptor. The downstream hypothalamic pathways controlled by the leptin receptor-ERK axis remain to be elucidated. Importantly, leptin activation of ERK seems to occur in the POMC neurons of the arcuate nucleus of the hypothalamus, which narrows the search for the mechanisms that are controlled by the leptin receptor-ERK axis. However, additional studies are needed to analyze in more detail the role of ERK in POMC vs. NPY neurons in mediating leptin action. A key downstream target of the POMC neurons is the population of neurons expressing melanocortin 4 receptors (MC4Rs), which are activated by α-melanocyte-stimulating hormone (a product of POMC) (33,34). Pharmacological blockade of MC4Rs reverses the effect of leptin on body weight and food intake (43), and deletion of the MC4Rs leads to severe obesity in mice (44).
We have previously shown that blockade of the brain MC4R inhibits the renal, but not the BAT, sympathetic nerve response to leptin (27). We also showed that leptin-induced renal sympathetic activation is absent in homozygous MC4R knockout mice, demonstrating the importance of this receptor in the control of renal sympathetic outflow by leptin (45). In addition, blockade of PI 3-kinase also inhibits renal, but not BAT, SNA. In contrast to the role of MC4R and PI 3-kinase, blockade of ERK in this study inhibited SNA to BAT but not to kidney. This supports the concept that differential intracellular mechanisms are involved in leptin-induced sympathetic activation to the kidney and to thermogenic BAT: sympathetic activity to BAT is ERK dependent, whereas sympathetic activity to the kidney is PI 3-kinase and MC4R dependent. These findings are in line with the notion that leptin controls various physiological processes through a variety of signaling mechanisms. For instance, the STAT3 pathway appears to be involved in mediating the effects of leptin on food intake and energy homeostasis but not on reproductive function, growth, or glucose homeostasis (9). However, the relative role of each of the downstream pathways associated with the leptin receptor in the control of food intake by leptin remains perplexing, because disrupting any of these pathways seems to have a profound effect on leptin-induced food intake. Disruption of the leptin receptor-STAT3 pathway causes hyperphagia in mice (9,10), which demonstrates that STAT3 is important for the control of food intake by leptin. In addition, the anorectic response to leptin can be reversed by blockade of PI 3-kinase (12,38) and of ERK (present study). Some limitations of the present study need to be addressed. First, our conclusions regarding the role of ERK in mediating leptin effects were based on acute studies. Whether chronic inhibition of these signaling pathways would produce the same effects as in the acute studies remains to be determined, but deletion of Shp2 is accompanied by increased food intake and obesity, presumably through disruption of ERK signaling. Second, the inhibitors that we used in our studies to block ERK might have other nonspecific actions. However, cell-based assays and in vitro studies have shown that both PD98059 and U0126 appear to specifically suppress ERK signaling (35-37). These inhibitors have been widely used to suppress activation of ERK and to examine the physiological roles of these enzymes. In addition, we have shown that pretreatment with ERK inhibitors did not alter leptin-induced activation of STAT3 or PI 3-kinase and did not affect the appetite- and weight-reducing actions of MTII and corticotrophin-releasing hormone. Third, our studies lack neuroanatomical specificity regarding the brain nuclei in which the ERK signaling pathway mediates the effects of leptin on food intake, body weight, and BAT sympathetic outflow, because the inhibitors were administered intracerebroventricularly. Nonetheless, using an immunohistochemical approach, we have shown that leptin activation of ERK occurs in the arcuate nucleus. In conclusion, our experiments provide evidence that hypothalamic ERK is a significant downstream target of the leptin receptor in the regulation of food intake, body weight, and thermogenic sympathetic outflow to BAT. However, ERK does not appear to be involved in leptin activation of the sympathetic nervous system to other tissues, such as kidney, hindlimb, and adrenal gland.
These findings provide new insights into the intracellular mechanisms engaged by the leptin receptor to control various physiological functions.
Point Form Electrodynamics and the Gupta-Bleuler Formalism

The Gupta-Bleuler formalism for photons is derived from induced representation theory. The representation of the little group for massless particles, the two dimensional Euclidean group, is chosen to be the four dimensional nonunitary representation obtained by restricting elements of the Lorentz group to the Euclidean group. Though the little group representation is nonunitary, it is shown that the representation of the Poincaré group is unitary. As a consequence of the four dimensional representation, the polarization vector, which connects the four-vector potential with creation and annihilation operators, is given in terms of boosts, coset representatives of the Lorentz group with respect to the Euclidean group. Several polarization vectors (boost choices) are worked out, including a front form polarization vector. The different boost choices are shown to be related by the analogue of Melosh rotations, namely Euclidean group transformations.

Introduction

The goal of this series of papers is to construct a relativistic many-body theory of hadrons using the point form of relativistic quantum mechanics, in which all interactions are vertex interactions, arising from products of fields evaluated at the space-time point zero. In the first of this series of papers [1] such vertex interactions were constructed for the hadronic part of the mass operator. The second paper showed how to construct one-body current operators for arbitrary spin particles [2]. In order that the electromagnetic interaction also be a vertex interaction, the hadronic currents should be coupled to the four-vector potential operator in such a way that the two contracted operators give a scalar density under Lorentz transformations. Moreover, the four-vector potential operator should transform as a four-vector under Lorentz transformations, yet be constructed out of photon creation and annihilation operators that transform as one-photon states under Lorentz transformations. To construct such a vertex, it will be necessary to generalize the Gupta-Bleuler formulation [3] for photons. For unlike the usual Gupta-Bleuler formulation, the photon creation and annihilation operators should themselves transform under the appropriate irreducible representation of the Poincaré group, namely the massless representations for which the little group is E(2), the two dimensional Euclidean group. The reason is that in order to have an electromagnetic interaction that is a vertex interaction, the polarization vector should be a boost, connecting the creation and annihilation operators with the four-vector potential operator, as is the case for massive particles with spin (see the previous paper, reference [2], Eq.99). The problem here is well known; since the two dimensional Euclidean group is noncompact, it has only one dimensional or infinite dimensional unitary representations. The one dimensional representations are usually thought of as providing the relevant representations for a massless spin one particle like the photon, and the two polarization states of the photon arise as a consequence of parity. Then it is simple to construct photon creation and annihilation operators with two helicities that transform in the same way as one-photon states.
However, a problem arises when one wishes to construct a four-vector potential field from these photon creation and annihilation operators, for the "four-vector" potential will not transform as a four-vector under Lorentz transformations [4]. The solution to this problem is also well known, and goes under the heading of the Gupta-Bleuler formalism [3]: one introduces photon creation and annihilation operators with four components, and eliminates the two spurious components using gauge invariance. However, if the four components transform as a four-vector under Lorentz transformations, there is no natural way to link the polarization vector to boosts, as is done in constructing fields for massive particles. This paper will show how photon creation and annihilation operators transforming under E(2) representations naturally link to the four-vector potential field in such a way that the polarization vector is given as a boost, coming from the nonunitary four dimensional representation of the Lorentz group. One of the advantages of such a procedure is that any boost (coset representative of the Lorentz group with respect to E(2)) can be used as a polarization vector. As will be shown, the usual choice of polarization vector corresponds to a helicity boost. But other choices, such as a front form boost discussed in the appendix, can also be used. In section 2, motivated by induced representation theory, the relevant one-photon states and wave functions are obtained, and the analogue of Wigner rotations for massless particles is derived. Though the four dimensional representation of the Euclidean group is nonunitary, it will be shown that the full Poincaré group representation is unitary. A many-photon theory is generated by photon creation and annihilation operators which have the same Poincaré transformation properties as the single particle photon states. In section 3 the four-vector potential operator is defined in terms of the photon creation and annihilation operators, the link being the polarization vector, that is, a boost Lorentz transformation. What is important here is that the four-vector potential transform as a four-vector under Lorentz transformations, so that, when contracted with the current operators of the previous paper, the electromagnetic vertex is a Lorentz scalar. The section closes with a discussion of gauge transformations for photon creation and annihilation operators and free field four-vector operators.

Photons and the Gupta-Bleuler Formalism

As is well known, the little group for all massless particles is the Euclidean group in two dimensions. A simple proof using SL(2,C) is given in the appendix. If Λ is an element of SO(1,3), the proper Lorentz group, then the two dimensional Euclidean group, E(2), can be defined as the subgroup of the proper Lorentz group leaving a standard four vector invariant, $\Lambda\, k_{st} = k_{st}$ (Eq.1), where $k_{st} := (1, 0, 0, 1)$ is the standard four vector. To get a Poincaré group representation for massless particles, it is necessary to choose a representation for the little group. Wigner [5] (and others, for example Weinberg, reference [4], page 71) chose the degenerate one dimensional unitary representation of E(2), in which the E(2) translations are trivial and the rotation angle φ is represented by $e^{i\lambda\varphi}$, with λ equal to plus or minus one. Parity connects the plus and minus one helicities, so that photon states can be written as $|k, \lambda = \pm 1\rangle$, corresponding to plus or minus helicity states.
The problems with this construction have to do with gauge invariance and the link to the four-vector potential operator, which has four components that do not transform among themselves under Lorentz transformations (see reference [4], page 250). To get around these problems, in this paper another representation for the little group E(2) will be chosen, namely the representation given by the group itself as defined in Eq.1. In terms of the SL(2,C) definition of the two dimensional Euclidean group given in the appendix, Eq.1 can be thought of as a four dimensional nonunitary representation of E(2), for which the representation of an $e_2$ element is written as $\Lambda(e_2)$, to indicate the Lorentz transformation representing the Euclidean group element $e_2$. The elements of the Euclidean group in this representation can be written explicitly as a rotation in the 1-2 plane combined with two "translation" parameters (Eqs.2,3; a standard form is sketched below), where a gives the two translations of the Euclidean group. Any Lorentz transformation can be written as a boost times a Euclidean group element, $\Lambda = B(k)\,\Lambda(e_2)$, where the boost B(k) is a Lorentz transformation, that is, a coset representative of SO(1,3) with respect to E(2). Boosts have the property of sending $k_{st}$ to the four vector k, $k = B(k)\,k_{st}$, from which it follows that $k\cdot k := k^T g k = k_\alpha k^\alpha = 0$. Here g is the Lorentz metric matrix, $g := \mathrm{diag}(1, -1, -1, -1)$. The usual boost choice for massless particles is the helicity boost, $B_H(k) = R(\hat{k})\,\Lambda_z(|\vec{k}|)$ (Eqs.4,5), which, as will be shown in section 3, gives the usual polarization vector; here $R(\hat{k})$ is the rotation matrix taking $\hat{z}$ to the unit vector $\hat{k}$, $\Lambda_z(|\vec{k}|)$ is a Lorentz transformation along the z axis with $|\vec{k}| = e^{\chi}$, and $\hat{k}_1 = (\cos\varphi\cos\theta, \sin\varphi\cos\theta, -\sin\theta)$, $\hat{k}_2 = (-\sin\varphi, \cos\varphi, 0)$. Another boost choice, a front form boost, is given in the appendix, Eq.42. Since any Lorentz transformation can be written as $\Lambda = B(k)\Lambda(e_2)$, it follows that the product of two Lorentz transformations, namely $\Lambda B(k)$, can again be decomposed into such a product, namely $\Lambda B(k) = B(k')\,\Lambda(e_W)$ (Eqs.6,7), where $k' = \Lambda k$ is found by applying Eq.6 to $k_{st}$. $\Lambda(e_W)$ is the massless analogue of a Wigner rotation (defined for representations of the Poincaré group when particles have nonzero mass). For a given boost, such as the helicity boost defined in Eq.4, the massless Wigner transformation is defined in Eq.7. With these tools it is possible to define photon states with four degrees of polarization and investigate their transformation properties under Lorentz and space-time transformations (Eqs.8-10), where $\Lambda(e_W)$ is the Wigner transformation defined in Eq.7; in particular, if the Lorentz transformation in Eq.10 is a rotation and the boost a helicity boost, Eq.4, then $\Lambda(e_W)$ becomes a diagonal matrix of phases, exactly as is the case for massive particles with helicity boosts [6]. The transformation properties of photon wave functions are inherited from those of the states. If a state $|\phi\rangle$ is written in terms of wave functions and basis states, and the action of the Lorentz transformation in Eq.10 on states is transferred to the wave function, one obtains the wave function transformation law, Eq.13. To be more mathematically precise, the Hilbert space discussed in the following paragraphs, and the transformation properties of the Hilbert space elements under Lorentz transformations given in Eq.13, could all be derived directly from induced representation theory [7]. The only somewhat unusual feature would be that the representation of the Euclidean group is four dimensional, rather than the more usual one dimensional representation.
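Because the explicit matrices of Eqs.2-3 are referred to repeatedly, it may help to record one standard parameterization of these E(2) elements as 4×4 Lorentz matrices. The form below follows common conventions (essentially Weinberg's treatment of the massless little group); the paper's own Eqs.2-3 could differ by sign or ordering conventions, so this is a sketch rather than the paper's exact expression:

$$
\Lambda\big(e_2(a_1,a_2,\varphi)\big) \;=\; S(a_1,a_2)\,R_z(\varphi),
\qquad
S(a_1,a_2)\;=\;
\begin{pmatrix}
1+\zeta & a_1 & a_2 & -\zeta\\
a_1 & 1 & 0 & -a_1\\
a_2 & 0 & 1 & -a_2\\
\zeta & a_1 & a_2 & 1-\zeta
\end{pmatrix},
\qquad
\zeta \;=\; \tfrac{1}{2}\,(a_1^2+a_2^2),
$$

with $R_z(\varphi)$ the ordinary rotation in the 1-2 plane. A direct check shows $S\,k_{st} = k_{st}$ and $S^T g S = g$, and the Wigner element of Eq.7 can then be computed as $\Lambda(e_W) = B(\Lambda k)^{-1}\,\Lambda\,B(k)$. The two invariance properties are easy to verify numerically; in the snippet below the helper name e2_matrix is ours, not the paper's.

```python
import numpy as np

def e2_matrix(a1, a2, phi):
    """4x4 Lorentz matrix for an E(2) element fixing k_st = (1,0,0,1).
    Conventions are an assumption; the paper's Eqs.2-3 may differ by signs."""
    z = 0.5 * (a1**2 + a2**2)
    S = np.array([[1 + z, a1, a2, -z],
                  [a1,    1,  0, -a1],
                  [a2,    0,  1, -a2],
                  [z,    a1, a2, 1 - z]])
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[1, 0,  0, 0],      # rotation in the 1-2 plane
                  [0, c, -s, 0],
                  [0, s,  c, 0],
                  [0, 0,  0, 1]])
    return S @ R

g = np.diag([1.0, -1.0, -1.0, -1.0])
k_st = np.array([1.0, 0.0, 0.0, 1.0])
W = e2_matrix(0.3, -0.7, 1.1)

assert np.allclose(W @ k_st, k_st)   # leaves the standard four vector invariant
assert np.allclose(W.T @ g @ W, g)   # is a genuine Lorentz transformation
```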
Such an induced-representation treatment would also be necessary to prove the irreducibility of the massless representation, a subject that will not be pursued further in this paper. To show that the representation defined by Eq.13 is unitary, it is necessary to define an inner product on photon wave functions. Since the representation of the little group, Eqs.2,3, is not unitary, the usual Hilbert space of square integrable functions will not lead to a unitary representation of the Poincaré group. But the representation of the little group is in terms of Lorentz matrices, which satisfy $\Lambda g \Lambda^T = g$; this property can be used to define an inner product which produces a unitary representation of the Poincaré group. Define a photon inner product using the Lorentz metric, $(\phi,\psi) = -\int \frac{d^3k}{k^0}\,\phi^*(k,\alpha)\,g^{\alpha\beta}\,\psi(k,\beta)$ (Eq.14, up to normalization); then the representation defined by Eq.13 is unitary, that is, $\|U_\Lambda \phi\|^2 = \|U_a \phi\|^2 = \|\phi\|^2$. But the inner product as defined in Eq.14 is not positive definite (for example, choose $\phi(k,0) = \phi(k)$, with the other components zero). As is well known [3], to get around this problem, the zero component of the wave function is chosen to equal the third component of the wave function, $\phi(k,0) = \phi(k,3)$ (Eqs.15,16). Using Eq.13 it is easy to see that the condition, Eq.16, is Lorentz invariant. This allows one to define the photon Hilbert space $H_\gamma$ as the subspace of wave functions satisfying this condition (Eq.17), with the inner product given in Eq.14, resulting in a positive definite inner product, in which the zero and three components of the wave function do not contribute to the inner product because of Eq.15. Moreover, quantities such as the expectation value of the energy, $(\phi, P^0 \phi)$, are also seen to be positive definite, with the value zero occurring only when the one and two component parts of the wave function are zero. With this background it is now possible to introduce many-photon states and wave functions, living in the Fock space generated by sums of symmetrized tensor products of $H_\gamma$, through photon creation and annihilation operators, whose transformation properties are inherited from the one particle photon properties (Eqs.18-21). The important difference between the usual Gupta-Bleuler analysis and this paper is seen in Eq.21: the creation and annihilation operators do not transform as four-vectors under Lorentz transformations, but as irreducible representations of the Poincaré group for massless particles, in which the little group representation is a four dimensional nonunitary representation of the two dimensional Euclidean group. The wave function condition, Eq.15, now becomes the annihilation operator condition $\big(c(k,0) - c(k,3)\big)\,|\phi\rangle = 0$ (Eq.24), for all k and for all $|\phi\rangle$ in the Fock space. By applying a Lorentz transformation to Eq.24 and using Eq.21, it is straightforward to show that Eq.24 is a Lorentz invariant condition. Moreover, it guarantees that no timelike or longitudinal components will contribute to the inner product. That is, if $|\phi_n\rangle$ is an n-photon state, Eq.24 guarantees that the $\alpha_i = 0$ components will cancel the $\alpha_i = 3$ components in the n-photon wave function. Hence, in the inner product all the $\alpha_i = 0$ components cancel against the $\alpha_i = 3$ components, so that the norm is always nonnegative.

The Free Four-Vector Potential Operator

In this section, as in all the papers in this series, fields are defined as translates, generated by the four-momentum operator, of operators at the space-time point zero. In this paper the four-momentum operator is taken to be the photon four-momentum operator, defined in Eq.23, while in the next paper [8] it will include matter and electromagnetic four-momentum operators.
However, the four-vector potential operator at the space-time point zero is always defined in terms of the photon creation and annihilation operators, with the boost matrix supplying the polarization vector (Eq.26). As a consequence of this definition and the transformation properties of the photon creation and annihilation operators, Eq.21, the four-vector potential operator transforms as a four-vector under Lorentz transformations. Further, the polarization vector, usually written as $\epsilon_{\mu\alpha}(k)$ (see, for example, Schweber, reference [9], page 249), is seen to be the boost matrix discussed after Eq.3. Usually a helicity boost is chosen (see Eq.4), but it is clear that any other boost will serve equally well, for example the boost defined in the appendix, Eq.42. The free four-vector potential operator at the space-time point x is defined to be the translate $A^\mu(x) := e^{iP\cdot x}\, A^\mu(0)\, e^{-iP\cdot x}$ (Eq.28). From this definition it follows that this operator is local, that is, the commutator $[A^\mu(x), A^\nu(y)]$ is zero for $(x - y)^2$ spacelike. Eq.28 can also be used to relate the generalized Gupta-Bleuler formalism developed in this paper to the usual Gupta-Bleuler formalism. If the boost in Eq.28 is chosen to be a helicity boost, Eq.5, then the annihilation operator transforming as a four-vector under Lorentz transformations is related to the annihilation operator transforming as a one particle state under Lorentz transformations (Eq.21) by $c(k,\mu) = B_{\mu\alpha}(k)\, g^{\alpha\alpha}\, c(k,\alpha)$ (summed over α, with $g^{\alpha\alpha}$ the diagonal metric elements). Finally, the positive frequency part of the four-vector field satisfies a (nonlocal) Lorentz gauge condition on the physical subspace, $\partial_\mu A^{\mu(+)}(x)\,|\phi\rangle = 0$ (Eq.29), where use has been made of the fact that an inverse boost acting on the four vector k results in the four vector $k_{st}$.

This section concludes with a discussion of gauge invariance for the creation and annihilation operators and the four-vector potentials. If $c(k,\alpha)$ is the annihilation operator defined in Eq.19 ff., then the gauge transformed annihilation operator is defined to be $c'(k,\alpha) = c(k,\alpha) + f(k)\,k_{st}^{\alpha}\, I$ (Eq.30), where f(k) is a complex function of the four vector k and I is the identity operator. $c'(k,\alpha)$ must satisfy the same conditions as $c(k,\alpha)$, namely Lorentz covariance (Eq.21), boson commutation relations (Eq.20), and the subsidiary condition (Eq.24). The subsidiary condition for $c'(k,\alpha)$ follows immediately from the definition, since $k_{st} \cdot k_{st} = 0$. Also, the commutation relations follow, since the term added to $c(k,\alpha)$ is a multiple of the identity operator. The Lorentz covariance condition yields the desired result provided $f(k) = f(\Lambda k)$, that is, provided f(k) is a Lorentz scalar. Gauge transformations have the effect of adding or subtracting equal amounts of timelike and longitudinal components and thus do not change the norm of many-photon wave functions. Finally, the positive frequency part of the gauge transformed four-vector field is also conserved.

Conclusion

In order to be able to write electromagnetic interactions as vertex interactions, it is necessary to generalize the Gupta-Bleuler formalism so that photon creation and annihilation operators transform as single particle states under Lorentz transformations. As first pointed out by Wigner, the little group for massless particles is the two dimensional Euclidean group E(2); but the choice of representation for E(2) is not given a priori. Wigner [5] (and later others, including Weinberg, reference [4]) chose the degenerate one dimensional representation of E(2), in which the action of the translations in E(2) is trivial.
While it is possible to obtain the usual photon states and wave functions with such a representation of E(2), troubles arise not only with gauge invariance, but also with providing the natural link between fields and photon creation and annihilation operators, of the sort available for massive particles with spin (see, for example, reference [4], page 233). In this paper the massless little group representation is chosen to be the four dimensional nonunitary representation of E(2), obtained by restricting elements of the Lorentz group to E(2); the form of these E(2) elements, as Lorentz transformations, is given in Eqs.2,3. Such a representation must be nonunitary, since it is a finite dimensional representation of a noncompact group. Nevertheless, the representation of the full Poincaré group is unitary. This result makes use of the fact that E(2) matrices are Lorentz matrices; by suitably modifying the inner product, the resulting Poincaré representation is unitary, and the inner product agrees with that given by the Gupta-Bleuler formalism. The inner product that makes the representation of the Poincaré group unitary is not positive definite. So, as with the Gupta-Bleuler formalism, the photon Hilbert space is defined as the subspace of wave functions for which the timelike and longitudinal components are equal. Such a subspace is a Poincaré invariant subspace. Many-photon states and wave functions can then be defined in terms of photon creation and annihilation operators. These creation and annihilation operators do not, however, transform as four-vectors, as is usually the case (see reference [9], page 243); rather, under Lorentz transformations they transform with Euclidean analogues of Wigner rotations (see Eq.21), which is the natural generalization of the transformation properties for massive particles with spin. Because of these transformation properties, the proof that the operator condition (the cancellation of longitudinal and timelike components) is Lorentz invariant is particularly simple. The main result of this paper concerns the link between the four-vector potential operator and photon creation and annihilation operators. For massive particles with spin this link is always given by Lorentz group representations of boosts, coset representatives of SO(1,3) with respect to the rotation group. For example, the usual spinor functions for spin 1/2 fermions are boosts, usually canonical spin boosts. But there are many boost possibilities, such as helicity or front form boosts [11]. Similarly, for massless particles boosts are coset representatives of SO(1,3) with respect to E(2), and provide the link between the four-vector potential and photon creation and annihilation operators. That is, polarization vectors are boost representatives, coset choices of SO(1,3) with respect to E(2). The usual polarization choice is the helicity boost, given in Eq.5. But just as there are many different boost choices for massive particles, all connected by Melosh rotations [11,12], so too there are many boost choices for massless particles, all connected by the analogue of Melosh rotations, namely E(2) transformations. An example of a non-helicity polarization vector, a front form boost for massless particles, is given in the appendix, Eq.42. Gauge transformed photon creation and annihilation operators with the correct Lorentz (Eq.21) and subsidiary (Eq.24) conditions affect only the timelike and longitudinal polarizations, leaving the transverse parts unchanged.
Using the connection between four-vector potentials and creation and annihilation operators, the usual gauge transformations for the positive frequency part of the four-vector potentials are obtained, as well as the fact that under such a gauge transformation the Lorentz gauge condition remains invariant. Since the four-vector potential operator transforms as a four-vector under Lorentz transformations, it can be coupled to the current operators defined in the previous paper to form the electromagnetic vertex for particles of any spin, which is the starting point for constructing the electromagnetic mass operator. To conclude, it should be pointed out that the procedures applied here to photons can equally well be applied to massless spin two particles, namely gravitons (or, for that matter, to particles of arbitrary spin). The construction of the relevant nonunitary representations of the Euclidean group, as well as the construction of different polarization tensors, will be discussed in another paper.

A Appendix: SL(2,C) and Massless Particles

In section 2 all operations were carried out using the Lorentz group SO(1,3). In this appendix the fact that SL(2,C) is the covering group of the Lorentz group is used to derive some results in a more transparent way. Under a Lorentz transformation Λ, a four vector k goes to $k' = \Lambda k$. Such a transformation is carried out in SL(2,C) by replacing the four-vector k by the hermitian matrix H(k), with $H(k') = A\,H(k)\,A^\dagger$, where A is an element of SL(2,C) and $k_\pm = k^0 \pm k_z$, $k_\perp = k_x + i k_y$ (see the sketch below). In this language the E(2) elements take the explicit SL(2,C) form of Eq.38, where now $a = a_x + i a_y$ gives the two translations. Written in this way, it is straightforward to show that the elements of Eq.38 combine as E(2) elements. Any element of SL(2,C) can be decomposed into boosts (coset representatives) with respect to E(2). A natural choice is a front form boost (Eq.42), for then the parameters of A are readily expressed in terms of k and the Euclidean parameters φ and a.
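For concreteness, here is a sketch of the SL(2,C) objects the appendix refers to, written in one common convention; the paper's Eq.38 may differ by phase conventions, so this should be read as an illustration rather than a quotation of the paper's equations. The hermitian matrix and its transformation law are

$$
H(k) \;=\; k^0\,\mathbf{1} + \vec{k}\cdot\vec{\sigma} \;=\;
\begin{pmatrix} k^+ & \bar{k}_\perp \\ k_\perp & k^- \end{pmatrix},
\qquad
H(\Lambda k) \;=\; A\,H(k)\,A^\dagger,\quad A \in SL(2,\mathbb{C}),
$$

so that $H(k_{st}) = \mathrm{diag}(2,0)$. The SL(2,C) matrices fixing $H(k_{st})$ are the upper triangular matrices

$$
A(\varphi,a) \;=\;
\begin{pmatrix} e^{i\varphi/2} & a \\ 0 & e^{-i\varphi/2} \end{pmatrix},
\qquad a = a_x + i\,a_y \in \mathbb{C},
$$

and matrix multiplication gives
$A(\varphi_1,a_1)\,A(\varphi_2,a_2) = A\big(\varphi_1+\varphi_2,\; e^{i\varphi_1/2}a_2 + e^{-i\varphi_2/2}a_1\big)$,
which exhibits the rotation-translation (semidirect product) structure of E(2): the angles add, while the translation parameters rotate and add.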
Frailty and functional dependence in older population: lessons from the FREEDOM Limousin - Nouvelle Aquitaine Cohort Study

Background
Monitoring frailty indicators in elderly people is recommended to identify those who could benefit from disability prevention programs. To contribute to the understanding of the development of frailty in the elderly, we have created the FREEDOM-LNA cohort, an observational study of ageing in the general population. Here, we describe the characteristics of a cohort of elderly subjects who are followed for the determination of frailty and loss-of-independence trajectories.

Results
The cohort was composed of 1085 subjects of advanced age (mean: 83.7 ± 6.0 years), the majority of them women (68.3%). Cardiovascular risk factors were present in 88.4% of subjects. Abnormal musculoskeletal signs were reported in 44.0% and neurologic signs in 31.9%. There were 44.8% of subjects at risk of malnutrition (MNA < 24) and 73.3% (668/911) at risk of mobility-related disability (SPPB ≤ 9); 39% (384/973) of subjects had impaired cognitive function (MMSE < 24, adjusted on education) and 49.0% (397/810) had signs of depression (GDS > 9); 31.8% (240/753) were frail and 58.3% were pre-frail. Most subjects had at least one disability in ADL (66.9%) and IADL (85.1%). The SMAF indicated a loss of independence in 59.6%. Overall, 59.9% of subjects could not stay at home without at least some help. Consequently, a medical consultation was proposed for 68.2% of subjects and social support for 42.1%.

Conclusions
A large part of this cohort was frail or pre-frail and presented signs of loss of independence, which may be explained by multiple factors including impaired health status, poor physical performance, impaired cognition, isolation, depression, or poor nutrition. This cohort will help to determine factors that adversely influence the trajectory of physical frailty over time.

Introduction
The rise in life expectancy is one of the most remarkable advances of the last century around the world. The increased longevity is, however, challenged by the ageing of the population, especially in developed countries [1,2]. In Europe, 24% of the population is aged over 60 years and, with the post-war baby-boom generation, that proportion is projected to reach 34% in 2050 [2]. Longer life promotes a progressively higher prevalence of chronic age-related comorbidities and disabling illness, including cardiovascular, metabolic, musculoskeletal, sensorial and cognitive disorders, and increasing risk of psychological distress, social disconnection, loss of independence, and dependency at the end of life [3,4]. In line with the geriatric community [5-7], the World Health Organization (WHO) recently called for a global strategy to keep the elderly healthy, including providing long-term integrated care to maintain a level of functional ability in an age-friendly environment [4]. The objective is to keep people healthy based on the notion of functional ability, and not just to treat acute or chronic diseases [4,5,8]. Pathological aging, as opposed to healthy aging, occurs when the organism, at various organ levels, is unable to compensate for age- and disease-related changes [9].
On the other hand, physical and functional decline may occur in the absence of identifiable disease, which has led to the concept of frailty. Frailty is defined as an age-related state of decline and vulnerability characterized by decreased physiological reserves and function across multiple organ systems. Frail people are less resilient to sudden changes in health status, even minor stressors such as mild acute illness or physical or psychological trauma, and are thus at increased risk of adverse age-related outcomes such as falls, hospitalizations, disability, morbidity, and mortality [10,11]. There is a considerable overlap between comorbidity, frailty and disability [12,13]. Contrary to disability, there is current consensus that frailty is potentially reversible with appropriate interventions including physical activity, nutrition, and cognitive training in older adults [14]. Thus, monitoring frailty indicators in community-dwelling elderly people is recommended to identify old people who could benefit from disability prevention programs [15-18]. Research is also needed to determine how physical, psychological, and social conditions are associated with frailty and functional status, and to determine factors that adversely influence the trajectory of physical frailty over time [19].

To contribute to the understanding of the development of frailty in the elderly, we have created the FREEDOM-LNA cohort (French acronym for Frailty, Clinical Research and Evaluation at Home in Limousin - Nouvelle Aquitaine), constituting an observatory of ageing in the general population. We performed prospective and retrospective analyses of frailty, functional loss, and cognition in community-dwelling elderly with the objective of determining factors associated with frailty trajectories. A secondary objective was to analyse the different trajectories of loss of independence. In this preliminary report, we describe the profile of this cohort population, including health and socio-environmental factors, the loss of functional independence, and the appropriate geriatric interventions proposed to allow subjects to stay longer at home.

Study design
FREEDOM-LNA was a historical longitudinal cohort conducted by the UPSAV (University Hospital, Clinical Geriatric Department, Limoges, France). The UPSAV is a clinical unit composed of a dedicated multidisciplinary team of geriatric physicians, nurses, occupational therapists, psychomotor therapists, and social workers. The team provides global preventive geriatric assessments of the general population at home, with the aim of detecting the risk of loss of independence and the warning signs of frailty. Subjects are solicited through various information channels, including healthcare professionals (e.g. family physicians, specialists, or hospitals), social professionals, close relatives (family members or friends), or by the subjects themselves. The FREEDOM-LNA cohort comprised subjects aged ≥ 65 years with at least two comorbidities, or aged ≥ 75 years, followed by the UPSAV between 01 January 2010 and 31 August 2017. All subjects were involved in a health care program that offered a comprehensive geriatric assessment every 6 months during the first year and once a year thereafter. At the end of each assessment, the medical staff offered appropriate geriatric interventions, including therapeutic and hygiene advice or referral to an occupational therapist, psychomotor therapist, or social worker. The study protocol was reviewed and approved by the local Institutional Review Board (CEREES, Limoges; Approval number: TPS 429,669).
The protocol was also approved by the French Data Protection Authority (CNIL), ensuring protection of individualized data according to French law. Informed consent for data processing was obtained from all subjects (or legal representatives). All procedures were carried out in accordance with the 1964 Helsinki Declaration and its later amendments.

Demographic, socio-environmental and clinical data
Demographic and socio-environmental characteristics were collected at inclusion and at each follow-up visit. Self-reported supports, including household incomes and financial supports, human supports, socio-medical supports and technical aids, were also recorded using a specific questionnaire. A physical examination was performed, and other clinical data, including medications, were obtained from a self-reported questionnaire and from biological reports when available.

Nutritional status
The nutritional status was assessed using the Mini Nutritional Assessment (MNA). The full MNA includes 18 items grouped in 4 categories: anthropometric assessment; general assessment; short dietary assessment; and subjective assessment (self-perception of health and nutrition). Malnutrition was defined by a score < 17 and a risk of malnutrition by a score between 17 and 23.5 [20].

Physical activity and mobility
Mobility was assessed using the Short Physical Performance Battery (SPPB), which consists of a 4-meter walk at usual pace, a timed repeated chair stand, and three increasingly difficult standing balance tests [21]. The total score ranges from 0 (worst) to 12 (best). An SPPB score ≤ 9 was suggestive of a risk of mobility-related disability.

Frailty
Frailty was assessed using the five phenotypic criteria described by Fried et al. [10]: weakness as measured by grip strength (dominant hand, lowest 20%), slowness (walking speed in the slowest 20%), low level of physical activity in the last 2 weeks (lowest 20% of energy expenditure, based on a physical activity questionnaire), low energy or self-reported exhaustion, and unintentional weight loss (4 to 5 kg over the previous year). Subjects were considered frail when at least 3 criteria were present, pre-frail when one or two criteria were present, and robust when no criterion was present.

Health status
The health status was assessed using the EuroQol-5 Dimension (EQ-5D). Each item of the five dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression) was scored using a 3-point scale (no problem = 1, with problems = 2, with extreme problems = 3). The subjects were also asked to value their own health status on an analogue scale (EQ-VAS) ranging from 0 (the worst possible health status) to 100 (the best possible health status) [22].

Cognitive and psychosocial status
Neurocognitive domains such as verbal memory, immediate memory, and executive functioning were assessed using various neuropsychological tests, including the Mini Mental State Examination (MMSE) questionnaire (30 items, scored between 0 and 30) [23], the 5-word test (5WT) [24], the clock drawing test (CDT) [25], the Controlled Word Association Test [26], and the Category Naming Test [27]. Subjects were considered to have a cognitive deficit if the MMSE was ≤ 20 in subjects with low education, ≤ 23 in subjects with medium education, and ≤ 26 in subjects with high education. A poor memory performance was indicated by a 5WT score ≤ 9.
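For illustration, the categorical cut-offs above can be combined into a single screening summary. The following is a minimal sketch in Python, not the study's actual SAS analysis code; the function and field names are hypothetical.

def fried_status(weakness, slowness, low_activity, exhaustion, weight_loss):
    """Fried phenotype: frail if >= 3 of the 5 criteria are present,
    pre-frail if 1 or 2, robust if none."""
    n = sum([weakness, slowness, low_activity, exhaustion, weight_loss])
    if n >= 3:
        return "frail"
    return "pre-frail" if n >= 1 else "robust"

def mna_category(mna_score):
    """MNA: < 17 malnourished, 17-23.5 at risk, > 23.5 normal."""
    if mna_score < 17:
        return "malnourished"
    return "at risk" if mna_score <= 23.5 else "normal"

def mobility_risk(sppb_score):
    """SPPB (0-12): a score <= 9 suggests a risk of mobility-related disability."""
    return sppb_score <= 9

# Example: two Fried criteria present -> "pre-frail"
print(fried_status(True, True, False, False, False))   # pre-frail
print(mna_category(21.0), mobility_risk(8))            # at risk True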
Depression over the past week was monitored using the Geriatric Depression Scale (GDS, 30 items): scores from 0 to 5 indicate normal mood, scores between 5 and 9 indicate a risk of depressive symptoms, and scores > 9 indicate severe depressive symptoms [28].

Functional status
The functional status was assessed using Katz's index for basic activities of daily living (ADL) and Lawton's scale for instrumental activities of daily living (IADL) [29,30]. An ADL ≤ 5 indicates dependency for daily activities, and an IADL ≤ 7 dependency for instrumental daily activities. Independence was also assessed using the SMAF (French acronym for Functional Autonomy Measurement System) questionnaire [31]. The SMAF is a 29-item scale that measures functional ability in 5 areas: daily living activities (7 items), mobility (6 items), communication (3 items), mental functions (5 items) and domestic tasks (8 items). For each item, disability was scored on a 5-point scale: 0 (independent), -0.5 (with difficulty), -1 (needs supervision), -2 (needs help), and -3 (dependent). The total score ranges from 0 to -87, and a score ≤ -16 was suggestive of a loss of independence. The level of dependency was also assessed using the current legal instrument for evaluating dependency in the elderly in France (AGGIR) [32]. This is a 17-item questionnaire covering relatively complex activities related to physical or domestic functions (walking, dressing, toileting, household cleaning, etc.) and cognitive or social functions (cooking, medication use, finances, leisure, etc.). Each activity is scored according to three levels of dependency, leading to three degrees of dependency: strong dependency (GIR 1 or 2), moderate dependency (GIR 3 or 4) and weak dependency (GIR 5 or 6).

Geriatric intervention
Geriatric interventions, such as therapeutic-hygienic and preventive advice, treatment modifications, additional medical and social assessments, re-education/readaptation with an occupational therapist, or psychosocial readaptation with a psychotherapist, were proposed at the end of each visit according to the subject's needs.

Statistical analyses
For subjects included between 2010 and 31 January 2014, data were recorded in the subject's file and then entered into the software dedicated to the study. For subjects included between 01 February 2014 and 31 August 2017, data were entered directly into the software system. Statistical analyses were performed using the SAS software, version 9.4 (SAS Institute, Cary, NC, USA). The statistical analysis focused on the subjects' characteristics at the first visit (inclusion). Quantitative variables were described using means, standard deviations (SD), medians, quartiles, and minimal and maximal values, and qualitative data were described using numbers of cases and percentages. Missing data were not replaced, and percentages were calculated without accounting for missing data, unless otherwise specified.

Results
Overall, 1337 subjects were included; 250 (18.7%) had no data recorded, and 2 subjects refused to have their data analysed. Thus, the analysed population was composed of 1085 subjects. The main subjects' characteristics are provided in Table 1. The cohort was mainly composed of elderly subjects aged 80 years or above (73.6% of subjects) and of women (68.3%), with a low/medium educational level (61.9%), and living alone (53.8%). Most subjects (88.5%) had at least one cardiovascular risk factor and were exposed to polypharmacy, with 83.4% taking 5 or more medications daily.
Clinical examination showed abnormal cardiovascular signs in 82.2% of subjects, musculoskeletal signs in 44.0%, and neurologic signs in 31.9%.

Socio-environmental conditions and supports
A description of the main available supports, including financial, human and technical, is provided in Table 2. Most subjects (77.8%) were owners or had free-of-charge lodging. The household income was quite low (< 1500 € per month) in 46.7% (485/1038). Almost all (97.2%) were covered by a private health insurance, and 73.1% for long-lasting illness. In addition, 31.3% received a personalised allowance of autonomy (monthly amount: 138 ± 243 €). Overall, 89.8% of subjects could rely on human support, including relatives (64.0%), nurses or home care nursing services (63.3%) and domestic help (52.9%). Regarding technical support, 83.8% used at least one technical aid, mainly an alarm system (61.3%), grab bars (45.6%), or walking sticks (43.1%). Overall, 68.0% were living in a house. Most dwellings appeared not fully adapted, including the presence of inside or outside stairs (78.2% and 58.7%, respectively); 50.2% had a shower and 39% a bathtub, but in most cases these were not adapted and/or not accessible (see Supplementary Table S2).

Discussion
The goal of this preliminary report was to determine the profile of the FREEDOM-LNA cohort. The cohort included 1085 community-dwelling elderly subjects of advanced age (83 years on average), the majority of them women. More than half of the subjects were living alone, had several cardiovascular risk factors, and were exposed to polypharmacy. Overall, they presented with a very low physical capacity and mobility-related disability, and 30-50% showed significant cognitive deficit, depression, or a risk of malnutrition. Most subjects were frail or pre-frail and in loss of independence, and thus required help to stay at home. Overall, the health status based on the EQ-5D questionnaire was worse than in another cross-sectional study of the advanced elderly [33], with a substantial proportion of subjects reporting pain/discomfort, mobility difficulties, and anxiety/depression. The overall health status of the FREEDOM-LNA population was also worse than that of a cohort of community-dwelling older subjects selected for their ability to walk 20 feet without personal assistance [34], but somewhat better than in another small clinical trial in our clinical centre with frail elderly people [35], and than in older adults admitted to our emergency geriatric medicine unit [36].

Disability in essential daily activities is considered an adverse outcome of frailty. In this study, 32% of subjects were frail based on Fried's criteria, and another considerable proportion (58%) was pre-frail. In a systematic review of the literature, the reported prevalence of frailty among the community-dwelling elderly worldwide was variable, ranging between 4% and 59%, and the meta-analysis showed a weighted prevalence of 10.7% [37], which is considerably lower than the rate in the FREEDOM-LNA study. In another literature review, Shamliyan et al. estimated the prevalence of the frail phenotype at 26% in people over 85 years [38]. In another cross-sectional study in France, physical frailty was reported in 9.5% of people aged 70-79 years, 18.4% of people aged 80-89 years, and 25.3% of people aged ≥ 90 years [39]. In our study, the most frequent frailty criteria were weakness and low activity. These two frailty criteria have been shown to be the most powerful predictors of ADL disability [40].
In this study, it is noteworthy that the rate of subjects with some disabilities was substantially higher (66.9% had difficulties in at least one ADL, and 85.2% in at least one IADL) than the rate of frail subjects. By comparison, in another French cross-sectional study, 15.0% and 22.4% of elderly of similar age had difficulties in at least one ADL and IADL, respectively [41]. Loss of independence in our cohort was indicated for more than 51.3% of subjects by the GIR scores, and for 59.5% of subjects using the SMAF. They predominantly needed help for most executive functions and for "grooming". Other daily activities were performed with difficulty, and some may be limited by alterations in mobility ("walking outside" or "using stairs") or mental function ("memory", "judgement"). Overall, the disability profile is consistent with an early phase of loss of independence [42]. Possible causes of loss of independence included musculoskeletal and neurological disorders, which were reported by 56% and 32% of subjects, respectively, and also cognitive or mental decline in 30-50% of subjects. Our results also showed that almost 75% had a risk of mobility-related disability as assessed using the SPPB. A low SPPB score has previously been associated with an increased risk of frailty, disability in daily life activities, falling, hospitalisation, and nursing home admission [43,44]. Moreover, social isolation and depression can also lead to frailty and decline in functional status [45]. In this cohort, more than half of the population was living alone and 49% had signs of depression. On the other hand, the home environment can also influence the ability to perform ADL, and we found that it was frequently not adapted, with stairs and inaccessible showers or bathtubs.

Nutrition is believed to influence age-related frailty, cognition and disability, and adverse health outcomes [46,47]. Here, we used the MNA questionnaire, which can be considered a valuable tool to identify frail elderly subjects at risk of malnutrition, especially because it encompasses physical and mental aspects of health, including mobility, psychological stress, or acute disease in the previous 3 months [20]. It can also predict the risk of malnutrition when serum albumin and BMI are still normal, which was the case in the FREEDOM-LNA cohort. Here, we found that 7.5% of subjects were clearly malnourished (MNA < 17), which seems low compared to the rate of frail subjects and compared to another small clinical trial in frail older subjects referred to our clinical centre [35]. Nevertheless, a high proportion (37%) of subjects was considered at risk of malnutrition.

Taken together, the baseline characteristics of the FREEDOM-LNA cohort showed a heterogeneous population of particularly aged elderly, frail or pre-frail, presenting with frequent multimorbidity, and at risk of loss of independence due to low physical capacity and alteration of cognition. At the end of this geriatric assessment, it was considered that most subjects needed human support to be able to stay at home. Technical and financial conditions may also be an issue, thus requiring intervention. The independent factors associated with frailty, functional loss and cognition will be analysed in an upcoming report. As an observational study, our cohort has some limitations, mainly due to selection and information biases. First, the cohort was composed of community-dwelling subjects who were interested in receiving a comprehensive geriatric assessment at home.
Thus, such an assessment may be less often considered by apparently healthy elderly subjects. In addition, it is not known exactly whether the subjects were referred for primary or secondary prevention. According to an estimate covering 2010 to 2017, interventions by our clinical centre were mainly solicited by the subject or a relative (45.5%), followed by hospitals (30.8%), family physicians (15.4%) or others (7.9%) (personal data, not published). Nevertheless, our aim was not to obtain a representative sample of the general population, but rather to constitute an observatory of elderly subjects at risk of loss of independence. Next, we used the frailty criteria defined by Fried et al. [10]. This is the most frequently used screening tool for frailty and was shown to be independently predictive of incident falls, worsening mobility or ADL disability, hospitalization, and death in the elderly. However, it restricts the multiple domains of frailty to a physical phenotype, and thus does not completely consider the impact of cognitive and emotional function on the development and progression of frailty [48]. Nevertheless, various neuropsychological tests were used in our study to measure cognitive and depressive functions and their relationship with disability and frailty. These will be analysed in separate reports. Finally, some percentages may be overestimated due to missing data, including cognitive tests (e.g. GDS, CDT, verbal fluency) and frailty.

In conclusion, the FREEDOM-LNA cohort is composed of advanced elderly with various risk factors for frailty and disability, associated with low health status and impaired physical and cognitive functions. This cohort will help to determine factors that adversely influence the trajectory of physical frailty over time.
Covariant field theory for self-dual strings

We give a gauge- and manifestly SO(2,2)-covariant formulation of the field theory of the self-dual string. The string fields are gauge connections that turn the super-Virasoro generators into covariant derivatives.

Introduction
There are three known kinds of string theories, those with critical (uncompactified) dimension: (1) D=26 ("N=0"), which has various fundamental problems (divergences, tachyons, no fermions, etc.); (2) D=10 ("N=1"), which is now thought to be a misleading formulation of a D=11 theory that includes supermembranes; and (3) D=4 [1] ("N=2" [2]), which describes self-dual massless theories in 2 space and 2 time dimensions [3,4]. (Note that the critical/uncompactified dimension characterizes a string theory since, by embedding one string into another, the number of worldsheet supersymmetries can be altered. It is not clear if the results in this paper will be useful for N=2 strings which come from embeddings of the D=26 and D=10 strings.) The last type of string (the topic of this paper), because of its 2 time dimensions, has lent itself to various interpretations. Clearly unitarity cannot be applied in the usual way, and usually is ignored. 4D Lorentz invariance is not manifest in the usual N=2 formulation, and therefore also has largely been neglected, even though the self-dual field theories with which it is identified have a Lorentz covariant definition. Directly related to the loss of manifest Lorentz invariance is the loss of gauge invariance: the N=2 formulations correspond to certain light-cone gauges, which are not always the best choice for analyzing such theories, particularly for such nonperturbative solutions as instantons. Since spin is ignored, statistics is also ignored; besides, unitarity and Lorentz invariance are the usual justifications for their relation. The N=2 string is also equivalent [5] to the N=4 string [6], but although the latter formulation is manifestly Lorentz covariant, its complicated ghost structure has not been completely worked out, and therefore its existence is seldom recognized. Even dimensional analysis is a problem [7] since, e.g., pure self-dual Yang-Mills contains no dimensionful coupling constant, unlike the field theory action used in [3,4].

The precise definition of this string theory depends strongly on the motivation for its consideration. Up to now, all the work on the noncovariant formulation of the self-dual string has been associated with the fact that it implies classical field equations for self-dual Yang-Mills theory or self-dual gravity [3,4] (but not self-dual gravity coupled to self-dual Yang-Mills [4]) in light-cone gauges. These equations of motion can be used to derive the classical equations of motion of wide classes of integrable models in lower dimensions, as well as to study certain properties of solutions of the classical equations in four dimensions. However, most of the known 4D solutions (multi-instantons) require more general gauges to be written explicitly [8]. (For example, explicit n-Eguchi-Hanson and n-Taub-NUT solutions to self-dual gravity would require solving 2n-th-order polynomial equations in this coordinate system, and thus can be explicit only for n=1,2 [9].) Furthermore, the identification as the self-dual part of some non-self-dual theory at the quantum level requires dimensional analysis to be consistent (e.g., for the renormalization group).
In particular, the fact that these self-dual field theories can be interpreted as (Wick rotations of) truncations of the corresponding non-self-dual theories [10,11] implies that this string theory actually can be used to help understand physical theories in 3+1 dimensions, and to perform perturbative and nonperturbative calculations in them. For this purpose, it is useful to find a method of applying 2D conformal field theory that preserves Lorentz and gauge invariances.

Traditionally, the string field or wave function in any string theory has been assumed to be a scalar (or at least a one-component field), with all excitations described by its dependence on its arguments. However, this is not a physical requirement of the theory, but an assumption of the conformal field theory description. The same assumption is generally not made for the quantum mechanics or quantum field theory of particles, and it is not clear that such a requirement would aid in the evaluation of Feynman diagrams. Since the purpose of two-dimensional conformal field theory in string theory is perturbation theory, one might consider calculational rules for string S-matrices that allow for "indices" in addition to the obvious coordinates. These indices are analogous to the Chan-Paton factors which appear in open string theory and which have no conformal field theory justification. (Although it is true that Chan-Paton factors can be associated with fermionic coordinates living on the worldsheet boundary, their influence is always calculated by simply multiplying the group-theory factors into the amplitude, rather than by calculating worldline propagators for the fermions, etc.) Just as the SO(32) Chan-Paton factors of the light-cone-gauge open superstring are required for SO(9,1) Lorentz invariance, the indices in the self-dual string are needed for SO(2,2) Lorentz invariance. In earlier papers such indices were associated with N=2 theories to restore Lorentz invariance (and also allow supersymmetry) [10,7,11]. One immediate improvement over the no-index formulation, even at the classical level and in the usual light-cone gauges, is that the equations of motion can consistently describe self-dual gravity coupled to self-dual Yang-Mills theory (see section 2).

The purpose of this paper is to associate Lorentz indices with string fields (or wave functions) in such a way as to give a string description of self-dual theories while preserving gauge invariance and manifest Lorentz invariance. The usual string descriptions of these theories are related to light-cone gauge choices (with the associated elimination of auxiliary string fields). The new formulation of the string theory, and its relation to the conventional ones, is closely analogous to the known treatment of the particle field theory describing just the massless fields. We therefore use, as our guide for covariantizing the string theory, the covariant description of the particle theory, which we review in the following section. As for the particle field theory, two different light-cone gauges are possible for the string field theory, corresponding to the polynomial (cubic vertex) and nonpolynomial (Wess-Zumino-like) formulations. The string field theory has already been formulated in the latter gauge [12], so we review it in section 3. It is the formulation to which we apply the covariantization, as described in section 4. In the final section we discuss supersymmetry and ghosts.
Self-dual Yang-Mills theory
For purposes of perturbation theory, we can describe ordinary Yang-Mills theory as a perturbation about self-dual Yang-Mills theory [13] (see [11] for a light-cone approach): we can write the Yang-Mills Lagrangian in first-order form as [14]

\[
L = \mathrm{tr}\left( G^{\alpha\beta} F_{\alpha\beta} - \tfrac{g^2}{2}\, G^{\alpha\beta} G_{\alpha\beta} \right),
\]

where g is the usual Yang-Mills coupling, G^{αβ} = G^{βα} is an anti-self-dual tensor, and F_{αβ} is the anti-self-dual part of the usual Yang-Mills field strength F. Here α = ± is an SL(2,C) Weyl spinor index, and α̇ = ±̇ is its dotted counterpart. Elimination of G by its equation of motion produces the usual Lagrangian, up to a total derivative. (The action is then real in either 3+1 or 2+2.) On the other hand, we can keep G, and treat ℏ (L → L/ℏ) and g² as independent expansion parameters. To lowest order in g² (i.e., g = 0), we have a theory that describes self-dual Yang-Mills theory, in the sense that G is then a Lagrange multiplier that enforces self-duality of F. However, G itself is propagating, as required by Lorentz invariance: propagating helicity +1 in the self-dual part of F requires propagating helicity −1 multiplying it in the action. This perturbation expansion in g² is natural in the sense that the simplest tree and one-loop amplitudes in Yang-Mills theory are those where (almost) all the external helicities are the same, and the amplitudes become progressively more complicated as more helicities change sign. Similar remarks apply to self-dual gravity, where the non-self-dual Lagrangian can be written in differential-form notation [14] in terms of the vierbein form e (the analog of A above) and ω^{αβ} = dx^m ω_m^{αβ}, the anti-self-dual part of the Lorentz connection form (the analog of G above). In fact, almost all the amplitudes at g = 0 or κ = 0 vanish, so this term in the action is very similar to a kinetic term: in the self-dual theories described by the above actions, all the tree amplitudes vanish on shell except for the three-point ones.

From now on we restrict ourselves to the self-dual theories (g = κ = 0, or lowest order in that perturbation expansion). There are two light-cone gauges for analyzing the self-duality condition F_{αβ} = 0 in Yang-Mills theory (see [11] for an analysis at the quantum level): (1) a gauge proposed by Yang [15], which gives field equations resembling a 2D Wess-Zumino model, and (2) a gauge that gives a quadratic field equation, found by Leznov, Mukhtarov, and Parkes [16]. In both cases, we first choose the light-cone gauge and then solve the F_{++} = 0 part of the self-duality condition; the two cases differ in which of the remaining two equations is solved as a constraint, and which is left as a field equation. Note that in the LMP case the light-cone "time" derivatives ∂_{−α̇} appear only in the kinetic term for φ, while the Yang case is more like a Wess-Zumino model, with such derivatives included in the interaction term. If we denote the surviving component of G^{αβ} by φ̃ (G^{+−} in the Yang case, G^{−−} in the LMP case), then the Lagrangian becomes just φ̃ times the φ field equation. Note that G (and thus φ̃) has engineering dimension 2, while φ is dimensionless. Also, the Lorentz transformations of φ and φ̃ differ. (This is especially clear in the LMP case, where they even have different weights under the unbroken GL(1) subgroup of the SL(2) acting on the undotted spinor indices.) Thus, it is not possible to write an action in terms of just φ that reproduces the above field equations without violating Lorentz invariance and introducing a dimensionful coupling.
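To make the role of G explicit, here is the elimination step under the normalization written above (the overall factors are a standard convention we have assumed, not necessarily those of reference [14]; only the structure matters):

\[
\frac{\partial L}{\partial G^{\alpha\beta}} = F_{\alpha\beta} - g^{2} G_{\alpha\beta} = 0
\;\;\Rightarrow\;\;
G_{\alpha\beta} = \frac{1}{g^{2}}\,F_{\alpha\beta},
\qquad
L \;\to\; \frac{1}{2g^{2}}\,\mathrm{tr}\,F^{\alpha\beta}F_{\alpha\beta},
\]

which is the usual Yang-Mills Lagrangian up to a total derivative. At g = 0 the variation of G instead enforces F_{αβ} = 0, the self-duality condition, while the variation of A makes G itself propagate; this is the helicity (+1, −1) pairing described above.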
(Even with φ̃, Lorentz invariance is not manifest: the Lorentz transformations are nonlinear in the fields.) Furthermore, using a single field φ destroys the correspondence with ordinary Yang-Mills theory, as described by a perturbation about the self-dual theory. Perturbatively, the (off-shell) tree graphs agree, since they are the classical field equations. The only difference is the labeling of the external lines: calling one external line φ̃ and the rest φ gives the same Feynman diagram as labeling all lines φ (although the interpretation is different). However, the 1-loop graphs of the single-field theory differ by a factor of 1/2, and it has nonvanishing higher-loop graphs that have no apparent relation to Yang-Mills theory. As for open superstrings with different gauge groups, the only difference in the theories is the index structure (in this case, a 1-valued index vs. a 2-valued one), but this makes all the difference in the quantum corrections.

This analysis has been extended to self-dual gravity, and to self-dual gravity coupled to Yang-Mills theory (as well as their supersymmetric versions) [7]. In the case of gravity, the analog of the Yang gauge is the Plebański gauge [17], which in this case gives quadratic field equations. While the analog of the LMP gauge [7] again gives quadratic field equations, they differ from the Plebański gauge by the absence of "−" derivatives in the interaction term. (In N=2 string theory, the Plebański gauge arises if world-sheet instantons are ignored, while the LMP-like gauge follows if they are included [18].) In both cases, the gravitational 3-point vertex is the square of the corresponding Yang-Mills one (4 derivatives instead of 2). In general, then, the differences between the various actions are rather small, at least at the level of the propagator and 3-point vertices. (Higher-point vertices exist only for Yang-Mills theory, and only in the Yang gauge, whether with or without a Lagrange multiplier.) Explicitly, the kinetic term L_2 in the Lagrangian, the cubic term L_3 representing the Yang-Mills coupling, and the cubic term L′_3 for the gravitational coupling each take a fixed form; the explicit indices on the derivatives differ for Yang/Plebański gauges vs. LMP gauges. Here we will instead focus on the difference with or without Lagrange multipliers. Without Lagrange multipliers: (1) the kinetic term has φ_1 and φ_2 the same in any such term, independent of helicity (whether graviton, gluon, or their superpartners); (2) the Yang-Mills coupling L_3 also has all fields the same, namely 3 gluons (or a supersymmetric generalization); and (3) the gravitational coupling L′_3 has either 3 gravitons, or 2 gluons and 1 graviton (or a supersymmetric generalization). On the other hand, when Lagrange multipliers are introduced, each of the 3 kinds of terms is linear in them (and thus either linear or quadratic in the usual fields). In that case, it's simpler to describe the couplings in terms of helicity: +2 for the (self-dual) graviton, −2 for its Lagrange multiplier, +1 for the photon, etc. Then L_2 has fields with helicities summing to 0, L_3 has any fields with helicities summing to +1, and L′_3 has any summing to +2. (This applies also to the supersymmetric cases.)

There are several levels of Lorentz invariance a description of a theory can have: (1) The highest is when the theory is described in terms of an action that is manifestly Lorentz invariant.
(2) The next level, as results, for example, when a noncovariant gauge is chosen or some auxiliary fields are noncovariantly eliminated, is when there is an action that is still invariant, but for which the Lorentz transformations are nonlinear (and perhaps even nonlocal). This is the case for the light-cone gauge actions described above when the Lagrange multiplier fields are included. (3) An even lower level is the corresponding case when the Lagrange multipliers are absent; the action is then not Lorentz invariant in any sense, but the field equations are Lorentz covariant in the sense of the previous level. (4) The lowest level lacks any kind of Lorentz invariance for even the field equations. This is the case for the coupling of the closed and open N=2 strings, as found in [4], which we now discuss in more detail. There the term "self-dual" is loosely applied, since the Plebański equation gets a source term from the gluons, and thus no longer describes self-dual gravity. The resulting field equation has no Lorentz covariant analog. The vertices are exactly those described in the previous paragraph. On the other hand, the Lorentz covariant action with Lagrange multipliers that we have discussed reproduces these vertices, except for the different index structure. The necessity of the Lagrange multipliers for a covariant interpretation is clear from the covariant actions given above: for self-dual gravity coupled to self-dual Yang-Mills, the actions given above (gravitationally covariantized for the Yang-Mills terms) give the field equations (with all indices implicit). Thus, the self-duality equations for the vierbein and Yang-Mills field are unaffected (except for covariantization of the latter), while the self-dual Yang-Mills energy-momentum tensor G^{αβ}F_{α̇β̇} appears in the field equation for ω. This clearly corresponds to the index structure described for the light-cone Lagrangian terms in the previous paragraph, where the fields e, A, G, ω have helicities +2, +1, −1, −2. Thus, a simple relabeling of fields has strong implications even at the level of classical field equations.

Noncovariant version of self-dual string field theory
In a paper by one of the authors [19], it was shown how to construct an open string field theory action for any critical N=2 superconformal representation. This action differs from the standard open string field theory action [20], ΦQΦ + λΦ³, in that it is built directly out of N=2 matter fields and does not require worldsheet ghosts. This is possible since, after twisting, N=2 ghosts carry no central charge and decouple from scattering amplitudes. This ghost-free description of N=2 strings was developed by one of the authors with C. Vafa [12] and is extremely useful for calculating N=2 scattering amplitudes [12,21]. In the ghost-free description of N=2 strings, it is useful to note that any critical N=2 representation contains generators of a "small" N=4 superconformal algebra; for the self-dual representation of the N=2 string, the left-moving N=4 generators can be written explicitly. In this formulation, only SL(2)′ is completely preserved manifestly. Although in this paper we work in an N=2 formulation, we'll find that both spacetime SL(2)'s can be preserved in the string field theory after adding indices on the string field. However, SL(2)′′ remains broken to the usual local U(1) (or GL(1)) symmetry of the worldsheet, generated by J. (The dotted ± indices refer to the U(1) charge.)
As was described in [12], these generators can be used to compute N-point scattering amplitudes on surfaces of genus (field-theory loops) L and instanton number n_I, where |n_I| ≤ 2L − 2 + N. The most relevant scattering amplitude for open string field theory is the three-point tree amplitude at zero instanton number (3.1), where Q_α Φ signifies the contour integral of the spin-one current G_α around the vertex operator Φ, and the bracket signifies the two-dimensional correlation function on a sphere. Note that this correlation function vanishes unless the two zero-modes of ψ are saturated. As in all open string theories, Φ carries Chan-Paton factors, which will be suppressed throughout this paper. Up to gauge transformations, there is only one momentum-dependent U(1)-neutral vertex operator. After performing the correlation functions over the N=2 matter fields (remembering that ψ has a zero-mode), one finds that (3.1) produces the usual three-point tree amplitude, in which the structure constant for the Chan-Paton factors appears and k_r is the momentum of the r-th state.

To construct a string field theory action, it is natural to generalize the on-shell vertex operator to an off-shell string field Φ which is an arbitrary function of X(σ), ψ(σ). Note that the U(1) charge of the N=2 super-Virasoro algebra corresponds to a GL(1) subgroup of this SL(2), rather than a U(1) subgroup; the reality condition on the string field is then the usual one. In this representation, twisting T → T′ = T + ½∂J corresponds instead to a U(1) subgroup, but then hermitian conjugation must be accompanied by an SL(2) transformation to restore the original twist [19]. For the string field theory action to be correct, the quadratic term in the action should enforce the linearized equation of motion Q^α Q_α Φ = 0, while the cubic term should produce the correct on-shell three-point amplitude. Finally, the action should contain a gauge invariance whose linearized form is δΦ = Q_α Λ^α. The quadratic and cubic terms in the action are easily found to be of the required form, the equation of motion being the self-dual equation of motion in Yang gauge. The action which produces this equation of motion is a straightforward generalization of the Wess-Zumino model [22], in which the two-dimensional derivatives ∂_z and ∂_z̄ are replaced by Q_+ and Q_−. In addition to producing the correct linearized equations of motion and three-point tree amplitude, this string field theory action contains a nonlinear gauge invariance which generalizes the linearized gauge invariance δΦ = Q_α Λ^α.

Lorentz covariance
In this section, we show how to "covariantize" the field theory action for the self-dual representation of the N=2 string. (It is still unclear if the covariantization procedure will be useful for other representations.) The particle analysis of section 2 suggests that, to recover 4D Lorentz invariance, one needs to place a two-valued index on the string field, where Φ̃ plays the role of the Lagrange multiplier and Φ plays the role of the self-dual field. Furthermore, the string field action in Yang gauge needs to be modified to Φ̃ Q_−(e^{−Φ} Q_+ e^{Φ}). Note that except for the index structure and numerical (permutation) factors, this action has the same quadratic and cubic terms as (3.3), and the same linearized gauge transformation (see below). The change in the index structure has the effect of multiplying the usual conformal field theory calculation by a factor δ_{1−n,L} 2^L, where n is the number of tilded vertex operators and L is the number of loops.
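The index-counting factor just quoted can be checked against the two simplest cases (a quick sketch, using nothing beyond the rule itself):

\[
\delta_{1-n,\,L}\,2^{L}:\qquad
L=0 \;\Rightarrow\; n=1 \ (\text{one }\tilde\Phi\text{ external line, the rest }\Phi),
\qquad
L=1 \;\Rightarrow\; n=0,\ \text{an extra factor of } 2,
\]

matching the particle discussion of section 2: at tree level exactly one external line carries the Lagrange multiplier, while at one loop the indexed theory differs from the single-field theory by precisely the factor of 2 noted there.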
We can extend the analogy to the manifestly Lorentz covariant formulation by proposing a new string field theory action, in which K is arbitrary and Ω^{γαβ} is symmetric in its indices. The string field theory action of the previous section can now be rederived from this covariant action by the same methods described in section 2. However, since the BRST operators Q_α have nontrivial kernels (in contrast to the partial derivatives ∂_{αβ̇}), new gauge invariances arise upon solving the constraints, in close analogy to 4D N=1 super Yang-Mills theory [19]. (See [23] for a review.) In this case, the gauge invariances arise because of the unphysical "massive" fields, absent in the discussion of section 2. Explicitly, we find, in an appropriate gauge, that F_{++} = 0 implies A_+ is pure gauge. However, this does not completely fix the gauge: we are left with a residual gauge invariance, which in particular applies to the gauge transformation of G. For the next step, in the Yang gauge, a further gauge invariance is introduced, giving the complete gauge transformations for Φ. The field equation then reduces the Lagrangian to Φ̃ F_{+−}, with Φ̃ = G_{+−}, and applying these results gives the gauge transformation for Φ̃. On the other hand, we can also find a new light-cone string action by going to an LMP gauge; this introduces an Abelian gauge invariance, and with it the complete gauge transformation for Φ. The field equation is now polynomial, the Lagrangian is Φ̃ F_{−−} with Φ̃ = G_{++}, and the gauge transformation of Φ̃ follows in the same way.

Supersymmetry and ghosts
The supersymmetric generalizations of self-dual theories have also been analyzed [10,7]; there i, j, k, l = 1, ..., 4 are the internal SL(4) indices of N=4 supersymmetry (not to be confused with the Yang-Mills group indices, which are still implicit), and φ^{ij} is antisymmetric. Furthermore, self-dual Yang-Mills theory, self-dual gravity, and self-dual gravity coupled to self-dual Yang-Mills theory can all be treated as truncations of gauged self-dual N=8 supergravity (with Yang-Mills gauge group SO(8)). In light-cone gauges, the vertices (and, of course, the propagators) are identical to the nonsupersymmetric cases: spin appears effectively as an internal symmetry index. This helicity-independence of the couplings also has an explanation in terms of N=2 strings: spectral flow is usually interpreted as allowing the identification of states with different boundary conditions (which would normally be associated with different spins, and thus different statistics). However, these states can be distinguished if they are assigned different helicities. (The assignment of helicities is somewhat arbitrary in N=2 string theory; in fact, the usual continuous-helicity representations of the Poincaré group [24] can be associated with these self-dual theories, but only if one abandons the possibility of manifest Lorentz covariance.) So, rather than using spectral flow to say there is only one state, it can be interpreted as a stronger version of supersymmetry that implies helicity-independence of the couplings. As for the nonsupersymmetric case, the above component action can be straightforwardly generalized to string field theory by dropping dotted spinor indices and replacing ∂ → Q, where now □ ≡ ½ ∇^α ∇_α. The component fields describing helicities +1, +1/2, 0, −1/2, −1 are now A_α, χ^i, φ^{ij}, ξ_{iα}, G_{αβ}. Half of the supersymmetry transformations (those not involving F_{α̇β̇} explicitly) can also be generalized.
There is an interesting analogy between supersymmetry and the Zinn-Justin-Batalin-Vilkovisky formalism: the minimal field theory Lagrangian for the nonsupersymmetric self-dual string, including antifields and the ghosts for just the Yang-Mills gauge symmetry, is very similar to the supersymmetric action (without antifields), with the ghosts and antifields playing roles analogous to those of the superpartners.
The dawn of mathematical biology

In this paper I describe the early development of so-called mathematical biophysics, as conceived by Nicolas Rashevsky back in the 1920s, as well as his later idealization of a "relational biology". I also underline that the creation of the journal "The Bulletin of Mathematical Biophysics" was instrumental in legitimating the efforts of Rashevsky and his students, and I finally argue that his pioneering efforts, while still largely unacknowledged, were vital for the development of important scientific contributions, most notably the McCulloch-Pitts model of neural networks.

Introduction
The modern era of theoretical biology can be classified into "foundations," "physics and chemistry," "cybernetics" and "mathematical biophysics" (Morowitz, 1965). According to this author, an important part of the history of the modern era in theoretical biology dates back to the beginning of the twentieth century, with the publication of D'Arcy Wentworth Thompson's opus "On Growth and Form" (Thompson, 1917), closely followed by works like the "Elements of Physical Biology" of Alfred J. Lotka (1925). Some authors tracked Lotka's ideas closely, yielding books such as "Leçons sur la Théorie Mathématique de la Lutte pour la Vie," by Vito Volterra (1931), and Kostitzin's "Biologie Mathématique" (Kostitzin, 1937). Mathematics and ecology do share a long coexistence, and mathematical ecology is currently one of the most developed areas of the theoretical sciences taken as a whole. Genetics is another example of great success in modern applied mathematics, its history beginning (at least) as early as the second decade of the last century, when J. B. S. Haldane published "A Mathematical Theory of Natural and Artificial Selection" (Haldane, 1924), followed by "The Genetical Theory of Natural Selection," in 1930, and "The Theory of Inbreeding," in 1949, both by R. A. Fisher (see Fisher, 1930, 1949). The approach of "physics and chemistry" is represented by workers like Erwin Schrödinger, one of the founding fathers of quantum mechanics, who wrote a small but widely read book entitled "What is Life?" (Schrödinger, 1944), and Hinshelwood (1946), with his "The Chemical Kinetics of the Bacterial Cell." "Cybernetics" is a successful term coined by Norbert Wiener and used in a book with the same name (Wiener, 1948). At the same time, C. E. Shannon (1948) published his seminal paper "A Mathematical Theory of Communication," and it is a known fact that both works profoundly influenced a whole generation of mathematical biologists and other theoreticians. While Wiener stressed the importance of feedback, with the notion of closed-loop control yielding new approaches to theoretical biology, ecology and the neurosciences, information theory was improved and applied in several technological and scientific areas. Finally, the amalgamation of information theory with the notion of feedback strongly influenced the work of important theoretical ecologists like Robert E. Ulanowicz (1980, 1997) and Howard T. Odum (1983). I suggest that the development of the last division highlighted above, namely "mathematical biophysics," is, up to now, largely unknown to mainstream historians and philosophers of science. Interestingly enough, this unfamiliarity extends even to most historians and philosophers of biology.
However, I wish to point out a recent revival of some fundamental ideas associated with this school, a fact that, alone, justifies a closer look into the origins of this investigative framework. Accordingly, in this paper I review and briefly discuss some early stages of this line of thought.

The roots of mathematical biophysics and of the relational approach
Nicolas Rashevsky was born in Chernigov in September 1899 (Cull, 2007). He took a Ph.D. in theoretical physics very early in his life, and soon began publishing in quantum theory and relativity, among other topics. He immigrated to North America in 1927, after being trained in Russia as a mathematical physicist. His original work in biology began when he moved to the Research Laboratories of the Westinghouse Corporation in Pittsburgh, Pennsylvania, where he worked on the thermodynamics of liquid droplets. There he found that these structures became unstable past a given critical size, spontaneously dividing into smaller droplets. Later, while involved with the Mathematical Biology program of the University of Chicago, Rashevsky studied cell division and excitability phenomena. The Chicago group established "The Bulletin of Mathematical Biophysics" (now "The Bulletin of Mathematical Biology"), an important contribution to the field of theoretical biology. In this journal one finds most of Rashevsky's published biological material, and it also served to introduce the work of many of his students. Thus, the journal helped to catapult new careers and (above all) to catalyze the formation and maintenance of the "mathematical biophysics" school, still an influential school in modern theoretical biology (Rosen, 1991). To be fair, the journal was widely open to all interested researchers. More than that, Rashevsky and his colleague Herb Landahl took it upon themselves to correct and even help extend the mathematics, encouraging authors to re-submit their papers. As a side note, it is interesting to mention that, given the difficulty of publishing graphics at the time, Landahl offered invaluable help to the authors, carefully preparing each drawing for printing (Cull, 2007). The importance of organizing new journals, proceedings and books for "legitimating" a new branch of science was emphasized by Smocovitis (1996); I would like to suggest, therefore, that the "mathematical biophysics" case fits this interpretation nicely. Other periodicals of importance that arose during this period include "Acta Biotheoretica," founded in 1935 by the group then at the Professor Jan van der Hoeven Foundation for Theoretical Biology of the University of Leiden, "Bibliographia Biotheoretica" (published by the same group), and the well-known "Journal of Theoretical Biology," founded in 1961 (Morowitz, 1965).

Rashevsky is rightly acknowledged for the proposition of a systematic approach to the use of mathematical methods in biology. He intended to develop a "mathematical biology" that would relate to experiments just as well-established mathematical physics does (Cull, 2007). He chose to name this new field of inquiry "mathematical biophysics," a decision reflected in the title of the aforementioned journal. By the mid-1930s Rashevsky had already explored the links between chemical reactions and physical diffusion (currently known as "reaction-diffusion" phenomena), as well as the associated destabilization of homogeneous states that is at the core of the modern notion of self-organization (Rosen, 1991).
An elaborated theory of cell division based on the principles behind diffusion drag forces was offered close to the end of that decade (Rashevsky, 1939), and led to new equations concerning the rates of constriction and elongation of demembranated Arbacia eggs under division (Landahl, 1942a, 1942b). (I note that Arbacia is a genus of hemispherically-shaped sea urchins commonly used in experiments of the kind.) This theory agreed well with previously available empirical data, but Rashevsky himself hastened to demonstrate that the theory of diffusion drag forces was inadequate to represent most other facts of cell division known at the time (Rosen, 1991). Rashevsky's major work is (rather unsurprisingly) entitled "Mathematical Biophysics" (Rashevsky, 1960), a book that was revised and reedited more than once (most of the above-mentioned studies can be found in this publication). The original edition, dating back to 1938, covered cellular biophysics, excitation phenomena and the central nervous system, with emphasis on physical representation (Morowitz, 1965). Subsequently, Rashevsky delved into still more abstract mathematical approaches, as I describe later.

I wish to point out that the initial efforts of Rashevsky are both important and largely unrecognized by contemporary philosophers and historians of science. Indeed, the academic infrastructure and the research agenda established by this forerunner were crucial in supporting the development of important scientific contributions made by other researchers. Perhaps the most unexpected case is the crucial role played by Rashevsky in the line of investigation leading to the celebrated McCulloch-Pitts model of neural networks, published in 1943. Consider the following statement, which nicely summarizes a commonly held belief of the scientific and philosophical communities: "The neural nets branch of AI began with a very early paper by Warren McCulloch and Walter Pitts..." (Franklin, forthcoming). Actually, the first mathematical descriptions of the behavior of "nerves" and networks of nerves are to be credited to Rashevsky, who during the early 1930s published several papers concerning a mathematical theory of conduction in nerves, based on electrochemical gradients and the diffusion of substances (Abraham, 2002). Rashevsky's fundamental idea was to use two linear differential equations together with a nonlinear threshold operator (Rashevsky, 1933). It was in this paper, hence, that Rashevsky's "two-factor" theory of nerve excitation became public for the first time. This theory was based on the diffusion kinetics of excitors and inhibitors (Abraham, 2002). Only many years later were the unknown "substances" correctly identified as concentrations of sodium and potassium, thanks to the important work of Hodgkin and Huxley (Cull, 2007). I am not willing to get into the technical details here (a thorough investigation will be published elsewhere), but suffice it to say that Rashevsky argued that his simple mathematical model could fit the empirical data available at the time regarding the behavior of single neurons. Still more important, he postulated that these model neurons could be connected in networks in order to yield complex behavior, and even to allow the modeling of the entire human brain (Cull, 2007). Later, close to the end of that decade, Walter Pitts was introduced to Rashevsky by Rudolf Carnap, and accepted into his mathematical biology group (Cowan, 1998).
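To give a flavor of Rashevsky's scheme, here is a minimal numerical sketch of a generic two-factor unit in Python. The linear form of the two equations and the firing condition follow the description above; the particular coefficients, time step, and stimulus are illustrative assumptions, not Rashevsky's original parameter values.

import numpy as np

def two_factor_response(stimulus, dt=0.01, A=1.0, a=0.5, B=0.8, b=0.1):
    """Euler-integrate the two linear equations
        de/dt = A*I(t) - a*e   (excitatory factor)
        dj/dt = B*I(t) - b*j   (inhibitory threshold factor)
    and apply the nonlinear threshold: the unit 'fires' while e > j."""
    e = j = 0.0
    fired = []
    for I in stimulus:
        e += dt * (A * I - a * e)
        j += dt * (B * I - b * j)
        fired.append(e > j)
    return np.array(fired)

# A sustained constant stimulus: excitation e rises faster than the slowly
# accumulating threshold j, so the unit fires at onset and then falls
# silent as j catches up (accommodation).
stimulus = np.concatenate([np.ones(200), np.zeros(300)])
print(two_factor_response(stimulus).sum(), "of 500 steps above threshold")

Even this toy version reproduces the qualitative behavior Rashevsky was after: a threshold response at stimulus onset followed by accommodation, obtained from nothing more than two linear equations and a comparison.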
Together, Pitts (who was a superb mathematician) and the "philosophical psychiatrist" Warren S. McCulloch (Abraham, 2002) published their groundbreaking paper, entitled "A Logical Calculus of the Ideas Immanent in Nervous Activity" (McCulloch and Pitts, 1943). Many years later, McCulloch recalled that he and Pitts were able to publish their ideas in Rashevsky's journal thanks to Rashevsky's defense of mathematical and logical ideas in the field of biology (Abraham, 2002). The fact that Rashevsky was apparently the first investigator to come up with the idea of a "neural net" mathematical model (Rosen, 1991), however, was largely neglected. According to Rosen (1991), by the 1950s Rashevsky had explored many areas of theoretical biology, but he felt that his approach still lacked "genuinely fresh insights." Thus, he suddenly took a wholly new research direction, turning from mathematical methods closely associated with empirical data to an overarching search for general biological principles. Putting aside Rosen's opinion, this radical turn is perhaps most easily explained as a strong reaction to Rashevsky's own critics, who (ironically) claimed that his mathematical biophysics approach was no longer a novelty, given that many researchers had already begun incorporating models derived from physics, as well as quantitative methods, into their work (Cull, 2007). In any case, the fact is that Rashevsky expressed a set of novel ideas in a paper interestingly entitled "Topology and Life: In Search of General Mathematical Principles in Biology and Sociology". In the paper, after reviewing most major developments in the mathematical biology of the time, he goes on to say: "All [these] theories ... deal with separate biological phenomena. There is no record of a successful mathematical theory which would treat the integrated activities of the organism as a whole." (Rashevsky, 1954, p. 319-320). According to him, it was important to know that diffusion drag forces are responsible for cell division and that pressure waves are reflected in blood vessels, as well as to have a mathematical formalism for dealing with complicated neural networks. But he then emphasized that there was nothing so far in these theories indicating that an adequate functioning of the circulatory system is fundamental for the normal operation of intracellular processes; furthermore, there was nothing in the formalisms showing that an elaborate process in the brain that results, e.g., in the location of food is causally connected with metabolic processes going on in the cells of the digestive system. The same was true regarding the causal nexus between a failure in the normal behavior of a network of neurons and the cell divisions resulting from stimulation of the healing process after the accidental cutting of, say, a thumb (Rosen, 1991). And yet, according to him, "this integrated activity of the organism is probably the most essential manifestation of life." (Rashevsky, 1954, p. 320). Unfortunately, Rashevsky argued, one usually approaches the effects of these diffusion drag forces simply as a diffusion problem in a specialized physical system, and one deals with the processes of circulation simply as special hydrodynamic problems. Hence, the "fundamental manifestations of life" are effectively excluded from all those biomathematical theories.
In other words, biomathematics, according to Rashevsky, lacked the capacity to adequately describe the true integration of the parts of any organic system. As a result, it was useless to try to apply the physical principles used in the aforesaid mechanical models of biological phenomena to develop a comprehensive theory of life. Similar lines of criticism could be applied, I submit, to modern theoretical frameworks attempting to provide integrated models of the living organism (or of the brain itself). This line of thought was further developed by Rashevsky's student Robert Rosen, who produced an interesting theoretical framework built around a special notion of complexity. He also promoted the use of new mathematical tools, like category theory, in theoretical biology investigations (Rosen, 1991, 2000). According to Rashevsky, putting aside the possibility of constructing a physicomathematical theory of the organism based on the physicochemical dynamics of cells and of cellular aggregates does not prevent one from trying to find alternative pathways. What are the possibilities, then? The key to understanding Rashevsky's perspective, I suggest, is to start by analyzing some of his fundamental premises. In fact, Rashevsky believed that the biomathematics of his time was in a pre-Newtonian stage of development, despite the elaborate theories then available. In pre-Newtonian physics there existed simple mathematical treatments of isolated phenomena, but it was only with the arrival of Newton's principles, incorporated in his laws of motion, that physics attained a more comprehensive and unified synthesis. Ordinary models of biomathematics, like the models of theoretical physics, are all based on physical principles. But Rashevsky seemed to suggest that they should instead be based on genuinely biological principles, in order to capture the integrated activities of the organism as a whole (Rosen, 1991). This is apparent in his words: "We must look for a principle which connects the different physical phenomena involved and expresses the biological unity of the organism and of the organic world as a whole." (Rashevsky, 1954, p. 321, italics added). He also argued that mathematical models are transient in nature, while a general principle, once discovered, is perennial. For example, it is possible to devise several alternative models, all obeying the laws of Newton (one model being, e.g., the "billiard ball" molecule of the kinetic theory of gases). In the same vein, there are distinct cosmological models, all based upon Einstein's fundamental principles (Peacock, 1999; Dalarsson and Dalarsson, 2005). It is clear, then, that Rashevsky wished to conceive general principles in biology enjoying the same status earned by principles in theoretical physics. After all, he was trained as a theoretical physicist. Looking for general biological principles I have already highlighted the fact that Rashevsky eagerly pursued general biological principles, and he did explicitly propose some. In what follows, I briefly examine (following Rosen, 1991) the nature of some of these principles. The first principle I would like to emphasize is Rashevsky's principle of adequate design of organisms, originally denominated the principle of maximum simplicity, and introduced as early as 1943.
As originally formulated, it states that, given that the same biological functions can be performed by different structures, the particular structure found in nature is the simplest one compatible with the performance of a function or set of functions. The principle of maximum simplicity therefore applies to different models of mechanisms, of which the simplest one is to be preferred. But given that simplicity is a vague notion in this case, a measurement standard being difficult to establish, Rashevsky self-critically turned to a slightly different version, denominated the principle of optimal design. In this case, it is required that a structure necessary for performing a given function be optimal relative to energy and material needs. But it can be argued that there is still some imprecision here, because a structure that is optimal with respect to material needs is not necessarily optimal as far as energy expenditures are concerned. Hence, a more straightforward notion was needed. Accordingly, Rashevsky turned to the final formulation of his principle, this time putting aside the notion of optimality: When a set of functions of an organism or of a single organ is prescribed, then, in order to find the shape and structure of the organ, the mathematical biologist must proceed just as an engineer proceeds in designing a structure or a machine for the performance of a given function. The design must be adequate to the performance of the prescribed function under specified varying environmental conditions. This may be called the principle of adequate design of the organism. (Rashevsky, 1965, p. 41, italics added). I note that the notion of "adequate" is still also a bit vague, but I will not pursue this discussion further in this paper. Let me instead raise a more interesting question, which repeatedly arises in the philosophy of biology, particularly in those areas of inquiry closely associated with the Darwinian theory of biological evolution: Is Rashevsky's principle of adequate design of the organism teleological? To this objection Rashevsky himself had an answer: all variational principles in physics are "teleological" or "goal-directed," beginning with the principle of least action (see Lanczos, 1970, for a detailed mathematical account of this and other physical principles). Other investigators subsequently offered similar justifications, e.g. Robert Rosen, who dedicated a full chapter of his book "Essays on Life Itself" (Rosen, 2000), entitled "Optimality in Biology and Medicine", to this technical discussion. Another objection that Rashevsky was well aware of was that the principle of adequate "design" seemed to imply some sort of creative intelligence. Must one, in this case, presuppose a "universal engineer" of sorts? Not necessarily, because, like most scientific principles, the principle of adequate design of the organism "offers us merely an operational prescription for the determination of organic form by calculation." (Rashevsky, 1965, p. 45). Here, I suggest that one would perhaps do best simply not to employ the term "design," which seems rather misleading in this context. On the other hand, according to Rashevsky, the principle could perhaps follow directly from the operation of natural selection, which would only preserve "adequate" organisms, although it could also turn out to be an independent principle. This too, I suggest, could be nourishment for heated discussions among contemporary philosophers of biology.
Perhaps still more thought-provoking is Rashevsky's "principle of biological epimorphism," which emphasizes qualitative relations as opposed to quantitative aspects, topology instead of metrics. It can be argued that a given biological property in a higher organism involves many more elementary processes than the equivalent biological property of a lower one. Examples of biological properties are perception, locomotion, metabolism, etc. The principle is based upon the fact that different organisms can be epimorphically mapped onto each other, once the biological properties have been clearly distinguished and represented. In such epimorphic mappings, the basic relations characterizing the organism as a whole are preserved. Given Rashevsky's mathematical proclivities, he wanted to put his principle into a precise and rational context. Among the several branches of relational mathematics, topology reigns supreme. Before going on, I think it necessary to digress briefly on this topic. It is a known fact that topological ideas are present in most branches of modern mathematics. In a nutshell, topology is the mathematical study of the properties of objects that are preserved through deformation, stretching, and twisting (tearing is forbidden). Hence, one is entitled to say that a circle is topologically equivalent to an ellipse, given that one can be transformed into the other by stretching. The same is valid for a sphere, which can be transformed into an ellipsoid, and vice versa. Topology deals with the study of objects like curves, surfaces, the space-time of Minkowski (in relativity theory; see Peacock, 1999), physical phase spaces, and so on. Furthermore, the objects of topology can be formally defined as "topological spaces." Two such objects are homeomorphic if they have the same topological properties. From this perspective, Rashevsky postulated that to each organism there corresponds a topological "complex." More complicated complexes correspond to higher organisms, and different complexes are converted into each other by means of a universal rule of geometrical transformation. Furthermore, they can be mapped onto each other in a many-to-one manner, preserving certain basic relations. Rashevsky expressed his "principle of biological epimorphism" by postulating that, if one represents geometrically the relations between the several functions of an organism in a single convenient topological complex, then the topological complexes that represent different organisms are obtainable, via a proper transformation, from just one or a few primordial topological complexes. A previous version of this principle is what Rashevsky used to call the "principle of bio-topological mapping." According to this principle, the topological complexes by means of which diverse organisms are represented are all obtainable from one or a few primordial complexes by the same transformation. This transformation contains one or more parameters, different values of which correspond to different organisms (Rashevsky, 1954). The considerations above may hopefully give us a glimpse of Rashevsky's relational approach to the study of life, epitomized in the expression "relational biology," which he coined in order to help delineate a clear framework for thinking in the life sciences.
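To convey the flavour of this relational picture in concrete terms, here is a deliberately toy sketch of my own (not Rashevsky's formalism): organisms represented as directed graphs of biological properties, with an epimorphism taken as a many-to-one, onto mapping between two such graphs that preserves the basic relations.

    # Toy illustration: nodes are biological properties, edges are
    # "feeds/depends-on" relations; an epimorphism maps the "higher"
    # organism onto the "lower" one, many-to-one, preserving relations.

    def is_epimorphism(f, edges_high, nodes_low, edges_low):
        onto = set(f.values()) == set(nodes_low)                 # many-to-one, onto
        preserved = all((f[u], f[v]) in edges_low or f[u] == f[v]
                        for u, v in edges_high)                  # relations preserved
        return onto and preserved

    # "Higher" organism: two sensory properties drive locomotion, which in
    # turn supports metabolism. "Lower" organism: the same skeleton with a
    # single sensing property.
    edges_high = {("vision", "locomotion"), ("hearing", "locomotion"),
                  ("locomotion", "metabolism")}
    nodes_low = {"sensing", "movement", "metabolism"}
    edges_low = {("sensing", "movement"), ("movement", "metabolism")}

    f = {"vision": "sensing", "hearing": "sensing",
         "locomotion": "movement", "metabolism": "metabolism"}
    print(is_epimorphism(f, edges_high, nodes_low, edges_low))   # True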
Conclusion Nicolas Rashevsky (who passed away in 1972) was a pioneer in theoretical biology, having inaugurated the school of "mathematical biophysics" and subsequently pioneered the field of "relational biology" or (still another term that he coined) "biotopology." I call attention to the fact that the latter must definitely be distinguished from "topobiology," a term coined by Nobel laureate Gerald Edelman in the context of cell and embryonic development research. Edelman's theory postulates that differential adhesive interactions among heterogeneous cell populations drive morphogenesis, and explains, among other things, how a complex multi-cellular organism can arise from a single cell (Edelman, 1988). I have already emphasized that the creation of "The Bulletin of Mathematical Biophysics" was an important tool for establishing and helping broadcast Rashevsky's work (as well as the proposals of his own students). Furthermore, I pointed out that the work of Rashevsky implies that at least some aspects of contemporary theoretical biology and neuroscience have older roots than previously thought. This is exemplified by Rashevsky's active role in the body of research that paved the way to the development and publication of the McCulloch-Pitts model of neural networks. Finally, I suggested that Rashevsky's criticism of purely mechanical and nonintegrative approaches to biology may well be evaluated in the light of current theories, including proposals in theoretical biology and, once again, in neuroscience. The critical analysis of these claims, I submit, is an interesting and yet largely unexplored subject-matter for philosophers of science. Rashevsky's influence still reverberates in important scientific research areas such as neural networks and non-equilibrium pattern formation, among others. However, as I see it, relational biology effectively came of age with the far more encompassing and methodical work of Rashevsky's former student Robert Rosen (who passed away in 1998; see Rosen, 1991, 2000) and of his followers. A very active contemporary player worth mentioning is a bright pupil of Rosen, the mathematical biologist Aloisius H. Louie (see Louie, 2009, 2013).
A New Strategy to Uncover the Anticancer Mechanism of Chinese Compound Formula by Integrating Systems Pharmacology and Bioinformatics Currently, cancer has become one of the major refractory diseases threatening human health. Complementary and alternative medicine (CAM) has gradually become an alternative choice for patients, which can be attributed to the high cost of leading cancer treatments (including surgery, radiotherapy, and chemotherapy) and the severe related adverse effects. As a critical component of CAM, traditional Chinese medicine (TCM) has seen increasing application in preventing and treating cancer over the past few decades. Huanglian Jiedu Decoction (HJD), a classical Chinese compound formula, has been recognized to exert a beneficial effect on cancer treatment, with few adverse effects reported. Nevertheless, the precise molecular mechanism remains unclear. In this study, we integrated systems pharmacology and bioinformatics to explore the major active ingredients against cancer, the targets for cancer treatment, and the related mechanisms of action. These targets were scrutinized using the web-based Gene Set Analysis Toolkit (WebGestalt), and 10 KEGG pathways were identified by enrichment analysis. Refined analysis of the KEGG pathways indicated that the anticancer effect of HJD showed a functional correlation with the p53 signaling pathway; moreover, HJD had potential therapeutic effect on prostate cancer (PCa) and small cell lung cancer (SCLC). Afterwards, genetic alterations and survival of key targets for cancer treatment were examined in both PCa and SCLC. Our results suggest that such an integrated research strategy might serve as a new paradigm to guide future research on Chinese compound formulas. Importantly, such a strategy contributes to studying the anticancer effect and the mechanisms of action of Chinese compound formulas, which also lays a foundation for clinical application. Introduction According to a WHO report, cancer has become a leading threat to human health, associated with high recurrence rates and high mortality. The year 2012 alone witnessed about 14 million new cancer cases and 8.2 million cancer-related deaths, and it is estimated that annual new cases will increase from 14 million to 22 million over the coming 20 years [1]. The existing anticancer treatments mainly include surgery, radiotherapy, and chemotherapy. However, patients may eventually choose to discontinue treatment due to the high cost of radiotherapy and chemotherapy, as well as the serious related adverse effects [2]. With the development of medicine, cancer is now treated through comprehensive and diversified approaches, and complementary and alternative medicine (CAM) has become an alternative option for patients under such circumstances. Traditional Chinese medicine (TCM), a critical component of CAM, has been increasingly applied in preventing and treating cancer over the past few decades [3,4]. As an adjuvant therapy, Chinese medicine shows beneficial effects on cancer treatment, with few adverse effects reported [5]. Huanglian Jiedu Decoction (HJD), first recorded in the Prescriptions for Emergent Reference (Zhouhou Beiji Fang) written by Ge Hong, consists of four herbs: Coptidis Rhizoma (Huanglian), Scutellariae Radix (Huangqin), Phellodendri Chinensis Cortex (Huangbo), and Gardeniae Fructus (Zhizi).
HJD is a representative formula for cancer treatment, which is frequently employed to treat pancreatic cancer, breast cancer, liver cancer, and colorectal cancer (CRC) in clinical practice [6]. For instance, pharmacological experiments suggest that HJD has an anticancer effect on human liver cancer cells both in vitro and in vivo and can markedly extend the survival time of liver-cancer-bearing mice [7,8]. However, the precise mechanism of its anticancer effect remains unclear so far. Chinese compound formulas are characterized by the synergistic effects of multiple components acting on multiple targets. On this account, a method suited to these characteristics is needed to reveal the underlying mechanism of action. Systems pharmacology is a new discipline studying the regularity and mechanism of drug-organism interaction at the system level [9]. It can study the changes in body function caused by drug treatment for diseases across molecules, cells, tissues, and organs. Moreover, it can establish the interrelationships between drug efficacy and the organism at both microscopic levels (molecular and biochemical network levels) and macroscopic levels (tissue, organ, and overall levels). Besides, extremely abundant cancer data have been produced in recent years with the rapid development of bioinformatics technology, including microarray, proteomics, and other high-throughput screening assays. By integrating systems pharmacology and bioinformatics, this study aimed to explore the relationships of HJD with its cancer-related targets and interactive genes and to reveal the underlying molecular mechanisms of action. Such a strategy would be helpful for investigating the anticancer effect and the mechanism of action of Chinese compound formulas, which could also provide a basis for clinical application. A flowchart of the research approach is presented in Figure 1. In addition, a Chinese herbal compound can be considered a weak inhibitor with multiple components and multiple targets, with synergistic effects among the components. We hope to explore how this compound can actually work in the treatment of cancer, but it must be taken into account that the components of the compound are complex and not every component plays a role. Therefore, we screened out the main active components through multiple parameters and predicted the targets of the active ingredients, so as to infer the therapeutic effect. Construction of Cancerous Target Network and Chemical Component Database. All targets for cancer treatment could be accessed in the DrugBank database (http://www.drugbank.ca/), and the cancerous target network was thereby constructed through Cytoscape [12]. In addition, HJD comprises four herbs: Coptidis Rhizoma (Huanglian), Scutellariae Radix (Huangqin), Phellodendri Chinensis Cortex (Huangbo), and Gardeniae Fructus (Zhizi). All chemical components of these Chinese herbs were collected from the TcmSP [13], TcmID [14], TCM Database@Taiwan [15], and NCBI PubChem databases and were standardized against the constituent data in the TcmSP database. Finally, the number of chemical compounds in HJD was obtained, as shown in the Appendix. Screening the Active Ingredients by OB Prediction. Oral bioavailability (OB) in vivo (%F), the unchanged fraction of the orally administered dose achieving systemic circulation, is one of the most commonly used pharmacokinetic parameters in drug screening cascades.
In this study, a robust computational system, OBioavail 1.1 [16], was employed to predict the OB of the compounds, since it is difficult to assess the bioavailability of complex TCM by "wet" experiments. The system combines metabolism (cytochrome P450 3A4) and transporter (P-glycoprotein) information. Using this system, compounds with lower OB could be discarded, so that the set of original compounds could be distinctly reduced to a smaller one suitable for Chinese compound formulas. Compounds with an OB of ≥30% were selected as the active ingredients in this study. This threshold was selected based on (1) the use of a minimum number of components to maximally extract HJD information and (2) the fact that the resulting model could be reasonably explained by the reported pharmacological data. Screening the Active Ingredients by Drug-Likeness Prediction. Before target prediction, compounds considered chemically unsuitable for use as drugs were removed by a drug-similarity index, which reflects a delicate balance among the molecular properties affecting pharmacodynamics and pharmacokinetics, ultimately influencing absorption, distribution, metabolism, and excretion (ADME) in the human body. In this study, the drug-likeness (DL) index of a new compound was calculated according to the Tanimoto similarity [17], DL(A, B) = (A · B)/(|A|² + |B|² − A · B), where A represents the molecular descriptors of the new compound and B stands for the average molecular descriptors of all 6511 molecules in the DrugBank database, based on the Dragon soft descriptors. Accordingly, molecules with a drug-likeness of <0.18 were removed. Finally, compounds with both an OB of ≥30% and a DL of ≥0.18 were considered the active ingredients. Prediction of the Targets of Active Ingredients. SysDT [18], a drug-target prediction model, was adopted to predict the targets of the active ingredients. Briefly, SysDT is based on the 6511 drugs and 3987 targets of the DrugBank database and their mutual correlations. It was established using the random forest algorithm and the support vector machine (SVM) algorithm, respectively. The prediction model constructed by SVM turned out to be superior, with a consistency of 82.83%, sensitivity of 81.33%, and specificity of 93.62%. Using this model, targets with an SVM score of >0.7 were taken as the putative targets of the active ingredients. In addition, target information was integrated from the SEA [19], STITCH [20], TTD [21], and HIT [22] databases to supplement this predictive model. Moreover, information regarding the physiological functions of all targets was obtained from the TTD and UniProt databases. Construction of the Network and Topological Analysis. Associations between active ingredients and putative targets were constructed into the compound-target network of HJD [12], which was then mapped onto the cancerous target network to obtain the compound-cancer target network of HJD, including all HJD-related targets for cancer treatment. Afterwards, the protein-protein interaction (PPI) network of HJD-related targets for cancer treatment was constructed by STRING [23]. Subsequently, topology analysis was performed using the Network Analyzer plug-in to output the main topological parameters of this network [24]. Screening Key Targets and KEGG Pathway Enrichment Analysis.
The centrality algorithm is a key method for measuring the importance of nodes in a network: a larger value indicates a more important node with greater influence on the structure and function of the whole network. In this study, the degree centrality algorithm was adopted as the major algorithm, supplemented by the closeness centrality and betweenness centrality algorithms, to select and evaluate the key anticancer targets of HJD. Additionally, the biological information embedded in the anticancer targets was analyzed using a web-based integrated data mining system, WebGestalt [25]. Biochemical pathways and functions linked to the anticancer targets of HJD were specifically queried and navigated with the KEGG pathway enrichment analysis tool in WebGestalt. Eventually, the top 10 pathways with an adjusted P value of <0.01 were selected. Exploration of the Cancer Genomics Data Linked to HJD by cBio Cancer Genomics Portal. The cBio Cancer Genomics Portal (http://cbioportal.org), an open platform to explore multidimensional cancer genomics data, encapsulates the molecular profiling data obtained from cancer tissues and cell lines into readily understandable genetic, epigenetic, gene expression, and proteomic events [26]. Specifically, the complex cancer genomics profiles can be easily accessed using the query interface of the Portal, which enables researchers to explore and compare genetic alterations across samples. Furthermore, the underlying data can thereby be linked to clinical outcomes, which has facilitated novel discoveries in biological systems. In this study, the cBio Portal was utilized to examine the connectivity of HJD-related targets for cancer treatment across all studies on PCa and SCLC available in the databases. These targets in all sample studies on PCa and SCLC were classified as altered or nonaltered using the Portal search function. The genomics datasets were then presented using OncoPrint as a heatmap, a visually appealing display of alterations in microarrays across cancer samples [27]. Another feature of the Portal is that it can generate multiple visualization platforms by grouping PCa- and SCLC-associated alterations using the input from key HJD-related targets for cancer treatment [27-31]. In the meantime, the survival of these targets in PCa and SCLC was analyzed using the survival option embedded in the Portal, a tool integrating the Kaplan-Meier survival estimate and the survival data in the TCGA database. Screening the Active Ingredients and Visualization of the Compound-Cancer Target Network. Compounds contained in the 4 herbs constituting HJD were collected from several databases: Huanglian (48), Huangqin (143), Huangbo (140), and Zhizi (98). A total of 85 compounds with an OB of ≥30% and a DL of ≥0.18 were identified, among which 59 active ingredients targeting the anticancer targets were screened out (the Appendix). Correlations of the active ingredients with their anticancer targets were visualized through Cytoscape, and the compound-cancer target network was obtained for subsequent analysis (Figure 2). Construction of the PPI Network of HJD-Related Targets for Cancer Treatment as well as Topological Analysis. The HJD-related targets for cancer treatment could be obtained through the compound-cancer target network.
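Returning briefly to the ingredient screening summarised above (85 compounds passing OB ≥ 30% and DL ≥ 0.18, of which 59 target the anticancer targets), a minimal sketch in Python might look as follows; the input file, its column names, and the descriptor vectors are hypothetical, and the OBioavail 1.1 predictor itself is not reproduced (precomputed OB values are assumed):

    import numpy as np
    import pandas as pd

    def tanimoto_dl(a, b):
        """Drug-likeness as the Tanimoto coefficient between descriptor
        vectors: DL(A, B) = A.B / (|A|^2 + |B|^2 - A.B)."""
        dot = np.dot(a, b)
        return dot / (np.dot(a, a) + np.dot(b, b) - dot)

    # Hypothetical table of HJD compounds with precomputed OB (%) and DL.
    compounds = pd.read_csv("hjd_compounds.csv")   # columns: name, herb, OB, DL
    active = compounds[(compounds["OB"] >= 30.0) & (compounds["DL"] >= 0.18)]
    print(f"{len(active)} of {len(compounds)} compounds pass OB >= 30% and DL >= 0.18")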
In addition, the "protein-protein interaction (PPI) option" embedded in STRING was also adopted for further analysis, and a PPI network containing 98 interactive targets was also identified ( Figure 3). Later, the topological features of this network were calculated with the Network Analyzer plug-in (Table 1), which consisted of an entire portion of the interaction between the anticancer targets, with an average number of direct neighbors of 20.959. Besides, the degree of some nodes was much higher than the average number of direct neighbors. In the degree centrality algorithm, a higher degree of a node indicated greater impact on the whole network. In this network, the degree distribution between nodes was uneven. These nodes, which were twice the average number of direct neighbors, were then define as Hub nodes in this study, indicating their importance in the network for subsequent investigation. Searching and Analysis of the Key Targets. Three centrality algorithms were employed for key target screening, including degree centrality, closeness centrality, and betweenness centrality. Of them, the closeness centrality algorithm has emphasized the average shortest path length between nodes and other nodes. In contrast, the betweenness centrality algorithm measures the number of nodes on the shortest path of other nodes, which suggests the frequency that the shortest path between the other nodes passes through one node. In other words, if the shortest path of the other nodes often passes through this node, then this node shows a high importance or ability, which can modulate information transmission of other nodes as a link between the other nodes. These 3 algorithms were used to calculate the whole network, and the top 30 targets were summarized based on the algorithm results, as shown in Table 2. Consistently, nodes that were twice the average number of direct neighbors were defined as Hub nodes, including TP53, AKT1, EGF, PCNA, JUN, VEGFA, ESR1, and IL6. It should be noted that TP53 ranked the top among the three centrality algorithms, indicating that the primary target pathway under control or mediated by HJD was associated with TP53. In addition, AKT1 took up the second place, which was only second to TP53. As a critical component in the PI3K-AKT signaling pathway, AKT1 was closely correlated with the occurrence and development of human cancers. Baicalin and baicalein, the main active ingredients of Huangqin, had been reported to show a definite relationship with the downregulation of the PI3K-AKT pathway in anticancer effect [23,24]. Consequently, the AKT1-related signaling pathway might also have an important link with the anticancer effect of HJD. Analysis of the KEGG Pathway. To explore the biological mechanism underlying the anticancer effect of HJD, the KEGG pathway enrichment analysis embedded in WebGestalt was performed. Typically, the top 10 KEGG pathways linked to all targets in the PPI network were obtained, including cell cycle (24), pathways in cancer (31), the p53 signaling pathway (15), the AGE-RAGE signaling pathway in diabetic complications (17), prostate cancer (16), endocrine resistance (16), hepatitis B (18), the PI3K-Akt signaling pathway (25), small cell lung cancer (14), and the FoxO signaling pathway (15) ( Table 3). 
Broad grouping of the KEGG pathway analysis suggested that the anticancer effect of HJD was closely correlated with the following cancer-related signaling pathways and potential mechanisms: (1) regulation of cancer cell proliferation and survival through p53-mediated cell cycle control, (2) the PI3K-Akt signaling pathway regulating the growth, proliferation, invasion, and metastasis of cancer cells by mediating the FoxO signaling pathway, and (3) potential treatment of breast cancer achieved through regulating endocrine resistance. TP53 ranked first across the 3 centrality algorithms (Table 2); as a result, emphasis was directed to the p53 signaling pathway. The KEGG analysis results indicated that the anticancer effect of HJD probably shows a functional correlation with TP53. In addition, the KEGG enrichment analysis also suggested that 16 and 14 targets were associated with PCa and SCLC, respectively (Table 3). Mining the Genetic Alterations and Survival Analysis. HJD has been shown to display therapeutic effects on different cancers; however, its specific biological mechanisms remain unclear so far. KEGG enrichment analysis revealed that HJD was correlated with the cancer-related pathways (Table 3). To further explore the validity of such correlation, cBio Portal, a web-based integrated data mining system, was adopted to examine the genetic alterations and survival associated with HJD-related targets in PCa and SCLC. The p53 signaling pathway was the main target of HJD; consequently, the overlapping targets of the p53 signaling pathway with PCa and SCLC were studied. The results revealed 8 overlapping targets in the KEGG analysis embedded in WebGestalt, including 7 in PCa (CDK2, CDKN1A, MDM2, CCND1, TP53, CCNE1, and CCNE2) and 5 in SCLC (CDK2, CCND1, TP53, CCNE1, and CCNE2). Therefore, the genomic and clinical characteristics of these targets in PCa and SCLC were examined, respectively (Table 2). 13 studies on PCa were analyzed [10, 32-40], the results of which indicated 1.9% to 63.9% alterations in the gene sets/pathways submitted for analysis (Figure 4(a)). Multiple genetic alterations observed across each set of cancer samples from the Michigan study [10], which showed the most significant genomic changes, were summarized and presented using OncoPrint. The results indicated that 37 cases (63%) had an alteration in at least one of the 7 targets, and the alteration frequency of each of the selected targets is presented in Figure 4(b). CDK2, CDKN1A, and CCNE1 were not associated with genetic alterations. For MDM2, CCND1, and CCNE2, most alterations were classified as amplification. TP53-associated genetic alterations included deep deletions and missense/truncating mutations. The alterations in these targets showed a co-occurrence trend across samples; however, mutual exclusivity analysis revealed no statistical significance (p=0.183) (data not shown). Interestingly, cases with genetic alterations tended to show poorer survival than those without alterations, although this difference was not statistically significant (P=0.443, Figure 4(c)). Among the 3 SCLC studies analyzed [11,41,42], 78.6% to 93.6% alterations were found in the gene sets/pathways submitted for analysis (Figure 5(a)). Multiple genetic alterations observed across each set of cancer samples from the U Cologne study, which showed the most significant genomic changes, were summarized and presented using OncoPrint [11].
The results indicated that 103 cases (94%) had an alteration in at least one of the 5 targets, and the alteration frequency of each of the selected targets is shown in Figure 5(b). Unlike the results of the PCa study, these results indicated that almost all genetic alterations occurred in TP53, whereas no genetic alterations were seen in CDK2 or CCND1. CCNE1-associated genetic alterations were classified as missense mutations, while CCNE2-associated ones were classified as truncating mutations. In comparison, TP53-associated genetic alterations included both missense mutations and truncating mutations. (In Table 3, the following statistics are listed per row: C, the number of reference targets in the category; O, the number of targets in both the gene set and the category; E, the expected number in the category; R, the ratio of enrichment; rawP, the p value from the hypergeometric test; and adjP, the p value adjusted by the multiple test adjustment.) The mutual exclusivity analysis again displayed no statistical significance (p = 0.876) (data not shown). Interestingly, cases with genetic alterations also tended to show poorer survival relative to those without, although the difference was not statistically significant (P=0.166, Figure 5(c)). Discussion HJD serves as the object of study in this work. To elucidate the anticancer molecular mechanism of HJD, we have integrated systems pharmacology and bioinformatics. As a result, a number of public databases serve as the research basis, and a set of tools is available to elucidate the molecular mechanisms and the relationship with the clinical outcomes of cancers. Three steps are carried out in our workflow. (i) The cancerous target network is constructed through the DrugBank database, and all chemical components contained in the 4 medicines are obtained from databases such as TcmSP, TcmID, TCM Database@Taiwan, and NCBI PubChem. Subsequently, the active ingredients are screened based on the criteria of OB ≥30% and DL ≥0.18, and the targets of these active ingredients are then predicted using the SysDT model. Ultimately, 59 anticancer active ingredients and their anticancer targets are identified by mapping with the cancerous target network (the Appendix). (ii) Based on these anticancer targets, a PPI network containing 98 targets is constructed by STRING (Figure 3), and topological analysis is then performed. (iii) Taking TP53 as the main object of study, we compare the p53 signaling pathway between PCa and SCLC, and 8 overlapping targets are obtained. Then, the genetic alterations and survival analysis of the overlapping targets in PCa and SCLC are performed, so as to evaluate the relevance of the p53 signaling pathway to HJD in treating cancer. HJD has been suggested in a report to inhibit angiogenesis through suppressing the expression of VEGFA and MMP-9, thus further restraining cancer growth [43]. Similarly, we also discover through network analysis that VEGFA is a key target in the anticancer activity of HJD (Table 2). In addition, a study shows that HJD can markedly inhibit the proliferation of human SCLC NCI-H446 cells [44]. Coincidentally, our findings also support that HJD has a certain therapeutic effect on SCLC, which is probably achieved through regulating the p53 signaling pathway. However, no other related literature reports that HJD has a therapeutic effect on PCa, which may constitute a future research direction pending further experimental validation.
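For readers wishing to reproduce the flavour of the altered-versus-nonaltered survival comparison outside the Portal, a hedged sketch using the lifelines package is given below; the input file and its columns are hypothetical stand-ins for the TCGA-derived data:

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    # Hypothetical export: months (follow-up time), event (1 = deceased),
    # altered (1 = any alteration in the queried targets).
    df = pd.read_csv("pca_survival.csv")
    alt = df["altered"].astype(bool)

    kmf = KaplanMeierFitter()
    for label, grp in df.groupby(alt):
        kmf.fit(grp["months"], grp["event"], label=f"altered={label}")
        kmf.plot_survival_function()        # Kaplan-Meier curve per group

    res = logrank_test(df.loc[alt, "months"], df.loc[~alt, "months"],
                       df.loc[alt, "event"], df.loc[~alt, "event"])
    print(f"log-rank P = {res.p_value:.3f}")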
Interestingly, we find through KEGG enrichment analysis that the AGE-RAGE signaling pathway in diabetic complications is also represented. The therapeutic effect of HJD on diabetes and its complications has been supported by a large body of literature; nonetheless, no existing study indicates that HJD works through this pathway. Therefore, whether the AGE-RAGE signaling pathway may be a potential mechanism of HJD in treating diabetes and its complications remains to be studied. As a study integrating systems pharmacology and bioinformatics, the current work has a certain biological rationality, since it bridges HJD to its target genes and links them with biological effects. Moreover, this study illustrates the relationship between the molecular mechanism of HJD and the clinical outcome of cancer through a set of network-based tools. This approach differs greatly from the use of experimental techniques to prove a few relationships at a time; instead, it can reduce redundant experiments across different laboratories. The use of such a new research strategy may remarkably contribute to (i) understanding the molecular biological mechanisms of Chinese compound formulas, (ii) revealing the primary effects and targets of HJD on cancers, and (iii) promoting the clinical use of Chinese compound formulas and laying down a clinical foundation. This method can be applied not only to the study of HJD, but also to other Chinese compound formulas and to medicine combination therapy. However, there are some shortcomings deserving our attention. The compounds contained in the herbal medicines are obtained from databases; therefore, the quality of the databases directly affects the final set of compounds obtained. Moreover, the selection of screening parameters and the setting of thresholds can also affect the number of active ingredients obtained. All of these may influence the final analysis. In conclusion, the targets of HJD will no doubt be progressively confirmed thanks to a growing number of studies on HJD carried out using traditional experimental techniques and methods. However, the relationship with the biological effects of HJD remains unclear. We believe that the use of this method can help to offset some uncertainties about HJD related to its targets and their subsequent phenotypic expression. Furthermore, this approach contributes to determining the feasibility of future experiments. In the future, molecular biology experiments on the key targets and pathways of HJD can be carried out on the basis of the current study. Apart from PCa and SCLC, many studies have also reported an antitumor effect of HJD on other tumors, such as lung cancer, liver cancer, breast cancer, and colon cancer. These findings indicate that whether the connectivity between HJD and PCa as well as SCLC can be extended to other cancers remains to be further studied.
Distinct mechanisms govern recalibration to audio-visual discrepancies in remote and recent history To maintain perceptual coherence, the brain corrects for discrepancies between the senses. If, for example, lights are consistently offset from sounds, representations of auditory space are remapped to reduce this error (spatial recalibration). While recalibration effects have been observed following both brief and prolonged periods of adaptation, the relative contribution of discrepancies occurring over these timescales is unknown. Here we show that distinct multisensory recalibration mechanisms operate in remote and recent history. To characterise the dynamics of this spatial recalibration, we adapted human participants to audio-visual discrepancies for different durations, from 32 to 256 seconds, and measured the aftereffects on perceived auditory location. Recalibration effects saturated rapidly but decayed slowly, suggesting a combination of transient and sustained adaptation mechanisms. When long-term adaptation to an audio-visual discrepancy was immediately followed by a brief period of de-adaptation to an opposing discrepancy, recalibration was initially cancelled but subsequently reappeared with further testing. These dynamics were best fit by a multiple-exponential model that monitored audio-visual discrepancies over distinct timescales. Recent and remote recalibration mechanisms enable the brain to balance rapid adaptive changes to transient discrepancies that should be quickly forgotten against slower adaptive changes to persistent discrepancies likely to be more permanent. To obtain a unified and precise percept in dynamic environments, the human brain integrates information across multiple sensory modalities. Perception of events and objects typically remains coherent despite discrepancies in the timing and/or spatial position of sensory signals between modalities 1. To maintain perceptual binding of multisensory inputs, the brain corrects for errors arising between the senses. Consequently, multisensory discrepancies often lead to perceptual recalibration of corresponding modalities to minimize inter-modal errors. For example, changes in the perceived timing of multi-modal stimulus pairs have been reported following adaptation to temporally discrepant visuo-tactile 2-4 and audio-visual 4-6 stimuli. Similarly, repeated presentations of spatially discrepant visual and auditory stimuli lead to a perceptual recalibration of auditory space, such that perceived sound location is shifted to counteract the discrepancy; this is the "ventriloquism aftereffect" (VAE) 7-10. Over what timescale should the brain track multisensory discrepancies? In principle, systematic errors between senses could arise rapidly (e.g. transient changes in the environment) or persist over much longer epochs (e.g. gradual changes taking place over many years during childhood), requiring a perceptual system that can balance how it adapts over very different timescales. Multi-modal recalibration effects have been observed following a range of adapting periods. Early behavioural studies of both temporal 5,6 and spatial 7,8,11-14 audio-visual recalibration focussed on effects following upwards of several minutes of adaptation. However, more recent studies have further demonstrated rapid recalibration effects following just a few seconds or even a single trial of exposure to temporally 4,15 or spatially 16-18 discrepant audio-visual pairs.
To maximise the benefit of multisensory recalibration and avoid spurious recalibration to sensory noise, the brain should match its rate of adaptation to the dynamics of the inter-modal error. However, if multiple sources of error exist, recalibration mechanisms should ideally be sensitive to the timescales over which such discrepancies occur 19. Here we use the ventriloquism aftereffect to distinguish whether the timescales of multisensory recalibration are governed by a single mechanism that grows in strength, or by distinct mechanisms that gradually activate over time. Behavioural studies of unimodal sensory perception have supported the notion of distinct mechanisms by showing that several, potentially opposing, adaptation effects can be simultaneously maintained when they occur across different timescales, ranging from minutes to hours or even days 20-22. However, the extent to which the same principles apply to multisensory perception remains unclear. Adaptation to audio-visual temporal offsets can yield radically different effects depending on the task and duration of adaptation 23. Similarly, a previous study of rapid spatial adaptation by Bruns and Röder demonstrated that the ventriloquism aftereffect shifts from being frequency-independent to frequency-dependent with increasing durations of adaptation, from a single trial up to four trials 17. This suggests an adaptation process that is coupled to the timescales over which audio-visual discrepancies occur in the environment. However, these behavioural effects pertain to rapid adaptation in the very recent past, hence it remains unclear whether distinct or unitary recalibration mechanisms exist over longer timescales. A recent study by Bosen and colleagues showed that the growth and decay of the VAE could be predicted either by a multiple-exponential model or a power model 24, yet it remains unclear how these effects vary across different timescales, and whether opposing effects can be simultaneously maintained. We measured the growth and decay of the ventriloquism aftereffect in the recent and remote past by adapting human participants to audio-visual spatial offsets for a range of durations (from 32 seconds to 256 seconds). To distinguish recalibration mechanism(s) operating at single or distinct timescales we adapted and de-adapted participants to equal and opposite spatial offsets for long and short durations respectively. If recalibration is controlled by a single mechanism that envelops different timescales, initial perceptual aftereffects will be proportional to the combined effects of adaptor and de-adaptor, and decay back to baseline. If recalibration is controlled by distinct mechanisms operating at different timescales, aftereffects will initially be cancelled by the de-adaptation, but will subsequently reappear with further testing 21,25. Results Experiment 1: magnitude of spatial recalibration. In an initial experiment we first sought to replicate the basic ventriloquism aftereffect (VAE) with our experimental paradigm. Visual stimuli (2D Gaussian blobs) were projected onto a large semi-circular screen that wrapped 180° around the participant, whilst auditory stimuli (pink noise bursts) were delivered over headphones with stimulus azimuth simulated via head-related transfer functions (HRTFs). Participants adapted for 1 minute to audio-visual stimulus pairs presented in a randomised order across 15 locations between −35° (left) and 35° (right) azimuth in 5° increments.
Pairs either had a spatial discrepancy of −20° (audio left of visual), 0°, or 20° (audio right of visual) between them, counterbalanced across blocks. Stimulus duration was 500 ms, with a 300 ms interstimulus interval. Each audio-visual pair was presented five times in a row at a given location to facilitate allocating spatial attention, with one full pass over all 15 locations being completed over the 1-minute period. Participants were then tested on their ability to reproduce the azimuth of unimodal auditory stimuli presented within the same range of azimuths. Figure 1a shows participants' perceived stimulus azimuth plotted against the actual stimulus azimuth for each adaptation condition. A clear, positive linear trend is evident in all conditions, indicating that the HRTFs were able to simulate stimulus azimuth effectively. Data for each adaptation condition were entered into a series of mixed-effects linear regression analyses, with perceived and actual stimulus azimuth defined as the outcome and predictor variables respectively (Fig. 1a). Regression intercept coefficients represent participants' spatial bias (Fig. 1b) and slope coefficients represent spatial gain (Fig. 1c). Adapting to spatially discrepant audio-visual pairs led to a shift in spatial bias in the direction of the visual offset. For instance, adapting to a −20° offset with the audio to the left of the visual led to a positive (rightward) shift in spatial bias, whilst adapting to a 20° offset with the audio to the right of the visual led to a negative (leftward) shift in spatial bias. Spatial gain parameters all appeared close to 1 and did not differ substantially across adaptation conditions. Thus, adapting to an audio-visual spatial offset led to a perceptual recalibration of auditory space, shifting the perceived location of auditory stimuli in the direction of the visual offset. This confirmed that the basic VAE could be replicated with our experimental paradigm. Experiment 2: timescales of spatial recalibration. In a second experiment we aimed to test how the VAE grows and decays over time. Participants again adapted to audio-visual pairs with either −20°, 0°, or 20° offset, but we now varied the length of the adaptation period between 32 s, 64 s, 128 s, and 256 s. Stimulus details are the same as for experiment 1, except that during adaptation stimuli were presented across 8 locations between −35° and +35° azimuth in 10° increments. One, two, four, and eight passes over all locations were completed for each adaptation duration respectively. In addition, we included an adapt/de-adapt condition in which participants first adapted to a given spatial offset (−20° or 20°) for 256 s and then immediately de-adapted to the opposing offset for 32 s. This allowed us to test whether it is possible to simultaneously maintain two opposing VAEs if they occur across different timescales (Fig. 2). Under a single-mechanism model, the de-adaptor simply reduces the adaptation built up by the initial adaptor. This may result in a reduced, cancelled, or even inverted aftereffect, but in all cases any effects will simply decay monotonically towards baseline. Under a distinct-mechanisms model, a short-term mechanism most sensitive to the immediate past would be mostly driven by the more recent de-adaptor at the start of the test phase, yielding a negative response that decays quickly towards baseline.
Meanwhile, a long-term mechanism integrating information over wider time periods would be mostly driven by the initial longer duration adaptor, yielding a more sustained positive response. The net output of both mechanisms would initially produce a reduced aftereffect as the mechanisms cancel, followed by a later recovery in the direction predicted by the initial adaptor as the short-term mechanism decays whilst the long-term mechanism continues to sustain. Data from each test period were analysed using a sliding-trial window comprising 7 trials and incremented in 1 trial intervals. For each window, data were entered into separate mixed-effects linear regression analyses for each condition. The resulting spatial bias (intercept) and spatial gain (slope) coefficients are shown in Fig. 3a; coefficients are plotted against the middle trial of each window. We quantified the magnitude of the VAE by taking the trial-wise average of the −20° > 0° and 0° > 20° adaptation offset contrasts of the spatial bias coefficients (Fig. 3b). In the standard adaptation conditions, repeated exposure to spatial offsets caused shifts in spatial bias in the direction of the visual offset, indicating the presence of a VAE. These effects decayed in later trial windows as the effect of the adaptor deteriorated. The overall magnitude of the VAE did not increase substantially with increasing adaptation durations, suggesting a relatively fast acting adapting mechanism that saturated quickly. At the same time, the aftereffect often failed to fully decay to zero within the testing period, instead settling at a non-zero asymptotic level in later trials, suggestive of an additional slower acting mechanism predicting a more sustained response. By contrast, in the adapt/de-adapt condition the early trial windows showed little evidence of a shift in spatial bias, as the opposing VAEs caused by adaptors and de-adaptors cancelled. Crucially, however, a VAE was seen to re-emerge in later trial windows in the direction of the initial adapting offset. This indicates that the effects of the more recent but shorter-term de-adaptor had decayed, whilst the effects of the earlier but longer-term adaptor sustained. This demonstrates that distinct and opposing VAEs occurring across different timescales could be simultaneously maintained. Meanwhile, spatial gain coefficients appeared close to 1 in all conditions and did not differ reliably across either adaptation offsets or durations. To further interrogate the mechanisms underlying the VAE, we fit the data using both exponential (leaky integrator) and power function models 24-26. A leaky integrator predicts an exponential decay of the response characterised by two parameters: a trial-constant (τ) which determines the rate of change, with larger values giving a slower change, and a gain parameter which determines the overall response amplitude. Similarly, a power function predicts a power-law decay, characterised by a rate parameter (α) and a gain parameter. Power functions may approximate the summation of a series of correlated exponential functions 26-28, and thus a power model may produce a similar output to a multiple-exponential model. A series of box-car models were constructed to model the adaptation and test periods for each condition, comprising 40, 80, 160, and 320 trials for the 32 s, 64 s, 128 s, and 256 s adaptation periods respectively, and a further 30 trials for the test period.
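To make this machinery concrete before describing the fits, the following hedged sketch passes one such boxcar drive through two leaky integrators for the adapt/de-adapt condition; the time constants echo the fitted values reported below, while the gains are illustrative only, chosen so that the two mechanisms roughly cancel at test onset:

    import numpy as np

    # 320 adaptation trials at one offset, 40 de-adaptation trials at the
    # opposing offset, 30 unimodal test trials (no drive).
    def leaky_response(drive, tau, gain):
        n = len(drive)
        kernel = np.exp(-np.arange(n) / tau)          # exponential decay kernel
        resp = np.convolve(drive, kernel)[:n]         # leaky integration of the drive
        return gain * (1 - np.exp(-1 / tau)) * resp   # sustained drive saturates at gain

    drive = np.concatenate([np.ones(320), -np.ones(40), np.zeros(30)])

    short = leaky_response(drive, tau=4.5, gain=0.05)    # rapid build-up and decay
    slow = leaky_response(drive, tau=210.0, gain=0.12)   # gradual build-up and decay
    vae = (short + slow)[-30:]                           # predicted VAE over the test phase

    # The net aftereffect is much reduced at test onset (the mechanisms cancel)
    # and then recovers in the direction of the long-duration adaptor as the
    # short-term trace decays.
    print(np.round(vae[[0, 9, 19, 29]], 3))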
Each of these boxcars was then convolved with the leaky integrator(s) or power function. We tested the ability of our models to predict the group average VAE estimates (Fig. 3b). A single-exponential model was constructed by convolving a single leaky integrator with the boxcars. This can be expanded to a multiple-exponential model by convolving separate short- and long-term leaky integrators with the boxcars and summing the outputs. A power-law based model was constructed by convolving a power function with the boxcar. In all cases, the convolved output during the test periods provides a prediction of the VAE which can then be compared against the real group average VAE magnitudes. The model parameters were then optimised via maximum likelihood estimation to minimise this prediction error. An illustration of the modelling procedure for the single- and multiple-exponential models is shown in Fig. 4. Finally, to compare model goodness of fit, we calculated corrected Akaike Information Criterion (AICc) 29,30 and Residual Standard Error (RSE) values for each model. We first tested the ability of these models to predict the responses in the standard adaptation conditions only, excluding the adapt/de-adapt condition (Fig. 5). A single-mechanism exponential model (Fig. 5a) provided an adequate fit to the data and suggested a relatively rapid decay of the VAE (τ = 16.66, gain = 0.15). Next, we fit a multiple-exponential model comprising two leaky integrator mechanisms, tuned to integrate information over longer and shorter time periods respectively (Fig. 5b). This model also provided a good fit to the data, with the short-term mechanism suggesting a rapid rate of decay (τ = 4.53, gain = 0.22), while the long-term mechanism suggested a considerably slower decay (τ = 209.87, gain = 0.06). The power model (Fig. 5c) also fit the data adequately (α = 0.17, gain = 0.31). AICc values revealed similar performance between the single-exponential (AICc = 196.08), multiple-exponential (AICc = 195.09), and power models (AICc = 195.93), and these values did not differ significantly (all pairwise p > 0.999). Residual standard errors were reduced for the multiple-exponential model (RSE = 0.33) compared to the single-exponential (RSE = 0.41) and power models (RSE = 0.40). To better distinguish between the single- and distinct-mechanisms accounts, we next tested the ability of the leaky models to predict the VAE across all the adaptation conditions, including the adapt/de-adapt condition (Figs 2, 6). The single-mechanism exponential model now provided a poor fit to the data (Fig. 6a), with a slow rate of decay (τ = 100.84, gain = 0.09). A single exponential mechanism was thus unable to simultaneously capture the opposing effects of the adaptor and de-adaptor, and hence could not reproduce the delayed recovery of the VAE in the adapt/de-adapt condition. [Figure 2 caption: Model predictions for adapt/de-adapt condition VAEs, based on the outputs of exponential (leaky integrator) mechanisms. (a) Single-mechanism prediction. (b) Distinct-mechanisms prediction.]
By comparison, the multiple-exponential model provided a good fit to the data (Fig. 6b) and again suggested a short-term mechanism with a rapid decay (τ = 3.94, gain = 0.22) and a long-term mechanism with a much slower decay that capped at the optimisation routine's upper bound for the trial-constant (τ ≥ 360, gain = 0.06). The short-term mechanism builds and decays rapidly and hence in the adapt/de-adapt condition is primarily driven by the more recent de-adaptor, whilst the long-term mechanism adapts more slowly and so is more heavily influenced by the longer-duration adaptor. Consequently, multiple exponential mechanisms can simultaneously incorporate conflicting information across different timescales and so are able to capture the delayed recovery of the VAE. The power model (α = 0.11, gain = 0.41) struggled to fully capture the pattern of results across conditions. The model was able to partially reproduce the delayed recovery of the VAE in the adapt/de-adapt condition, but undershot the magnitude of the effect. AICc values revealed poorer fits for the single-exponential model (AICc = 261.36) than for the multiple-exponential (AICc = 243.37) or power models (AICc = 246.84). These values differed significantly between the single- and multiple-exponential models (p < 0.001), and between the single-exponential and power models (p = 0.001). Although visual inspection of the model fits (Fig. 5b,c) suggests better fits for the multiple-exponential over the power model, these AICc values did not differ significantly (p = 0.176), as the multiple-exponential model is penalised for its larger number of parameters. Nevertheless, residual standard errors were lower for the multiple-exponential (RSE = 0.35) than the single-exponential (RSE = 0.56) or power models (RSE = 0.43).
Discussion
We have used the "ventriloquist aftereffect" (VAE) to quantify the dynamics of spatial multisensory recalibration and to distinguish whether unitary or distinct mechanisms operate at different timescales. The VAE rapidly saturated but decayed exponentially, consistent with both transient and sustained adaptation. When long-term adaptation to a spatial offset was immediately followed by a brief period of de-adaptation to an opposing offset, the VAEs initially cancelled each other but subsequently reappeared with further testing. These data were best fit with a multiple-exponential model that integrated information over both the recent and more remote past. Taken together, these findings suggest that multisensory adaptation is underpinned by distinct recalibration mechanisms that operate at different timescales. Although a reliable VAE was observed in all conditions, neither the magnitude of the VAE nor the rate of its decay differed substantially with increasing durations of adaptation. This would suggest the VAE built and saturated quickly, potentially even within the period of our shortest adaptation condition of 32 seconds.
Indeed, the short-term mechanism of our multiple-exponential model predicted a relatively short trial-constant that yielded complete saturation within the shortest adaptation period. This is consistent with a previous study by Frissen and colleagues suggesting that the VAE saturates following between 30 seconds and 1 minute of adaptation 12. Similarly, spatial recalibration effects have been reported following very brief periods of adaptation, potentially down to just a single trial 16-18. Although the VAE built and decayed quickly, it often failed to completely decay to zero within the 30 trials of the testing run. The long-term mechanism of our multiple-exponential model predicted a much more gradual rate of change that yielded a more sustained response across the test period. Indeed, the trial-constant parameter (τ) capped at the upper bound set for the minimisation algorithm, suggesting a rate of decay occurring across a much greater timescale than was present within our testing runs. Testing for more extensive periods may help to better resolve the parameters of this longer-term mechanism. Consistent with these results, Frissen and colleagues also reported a longer-term persistence of the VAE 12. This would support a distinct-mechanisms account, where short-term mechanisms sensitive to the immediate past yield a rapid build-up and decay of the VAE, whilst longer-term mechanisms with a much slower rate of decay yield a more sustained response in later time periods. To more explicitly test the possibility of distinct mechanisms operating at different timescales, we employed an adapt/de-adapt paradigm 21,25 in which participants first adapted to a given offset for an extended period (256 seconds), then immediately de-adapted to the opposing offset for a much shorter duration (32 seconds) before testing. As predicted, this paradigm resulted in vastly reduced VAEs in trials immediately after cessation of the de-adaptation period, when both long- and short-term mechanisms remained active but in opposing directions and hence cancelled. However, as the effects of the short-term mechanism decayed rapidly whilst those of the longer-term mechanism sustained, a VAE was seen to re-emerge in later trials in the direction predicted by the initial adaptor. This demonstrates that VAEs with opposing directions but occurring across different timescales could be simultaneously maintained, and suggests a model of the VAE in which distinct mechanisms adapt over different timescales. Consistent with this, our leaky integrator exponential models showed that whilst a single mechanism could adequately predict the responses to the standard adaptation conditions alone, only multiple distinct mechanisms integrating information over both the recent and remote past were able to predict the responses in the adapt/de-adapt condition. Future research could further interrogate the temporal scales of these mechanisms using alternative adaptation paradigms. For instance, an adapt/de-adapt/re-adapt paradigm could be employed, in which a distinct-mechanisms model would predict a more rapid growth of the VAE during re-adaptation than during the initial adaptation 21,25.
We also tested the ability of a power function model to predict the VAE across conditions, as such models can approximate the summation of multiple correlated exponential models 26-28, and a recent study of the VAE suggests approximately similar performance between a multiple-exponential and a power model 24. This model struggled to fully capture the pattern of responses across our conditions. Although it did partially reproduce the delayed recovery of the VAE in the adapt/de-adapt condition, it nevertheless undershot the magnitude of the effect. The delayed recovery of the VAE in the adapt/de-adapt condition critically depends on integrating opposing sensory information across different timescales. Whilst our results generally favour an account in which separable neural mechanisms are tuned to different timescales (akin to a multiple-exponential model), it remains possible that these effects could be predicted by a single neural mechanism that nevertheless integrates information across distinct timescales simultaneously (akin to a power model). Importantly, however, both of these accounts remain consistent with the VAE being underpinned by distinct recent and remote recalibration mechanisms. A distinct-mechanisms model is not without precedent. Previous studies of the VAE have identified effects of frequency-dependence 17 and spatial reference frames 31 varying with increasing adaptation duration, suggesting a change in adapting mechanisms. More generally, audio-visual recalibration effects have been reported following adaptation across a range of timescales, from just a few seconds to upwards of several minutes, and for several perceptual dimensions including spatial location 7-9,14,16-18 and temporal synchrony 1,4-6,15. Mechanisms operating at distinct timescales have also been proposed to account for various unimodal adaptation effects, from perception of relatively low-level features such as visual orientation 21,22 up to much higher-level processes such as face perception 20. This raises the possibility that adapting mechanisms operating at distinct timescales are a more ubiquitous property of sensory recalibration. Sensory changes may occur across a wide range of timescales, from relatively brief and transient changes, such as those caused by an organism transitioning between different environments, to those that may span much longer periods, such as developmental changes throughout childhood. Mechanisms sensitive to the more recent past have the potential to recalibrate more flexibly to brief environmental changes, but also risk being more subject to transient sources of sensory noise. By maintaining sensitivity to the temporal scale over which sensory changes occur, the perceptual system can remain optimally tuned to changes over a wide range of timescales, whilst at the same time balancing the flexibility of rapidly tuned mechanisms against the long-term reliability of more sustained mechanisms 19. One issue that remains unresolved is the extent to which the growth and decay of recalibration effects may be understood in terms of exact timescales (e.g. as measured in seconds) versus an accumulation of sensory evidence over time (not necessarily linked to exact units of time).
Studies in other domains have supported evidence-based accounts; for instance, storage of adaptation, in which aftereffects sustain for longer time periods in the absence of further sensory evidence, has been reported in both unimodal (e.g. visual contrast adaptation 32) and multimodal domains (e.g. audio-visual temporal recalibration 33). However, the test phases in our experiment contained only unimodal auditory stimuli, and hence did not present further multimodal evidence of audio-visual spatial relationships. Consequently, it seems difficult to explain the decay of the VAE in our experiment purely by accumulation of sensory evidence. Thus, our experiment instead appears more in line with a timescale-based account. Nevertheless, an evidence-based account cannot be entirely dismissed. For instance, we measured the progression of the VAE in units of trials rather than seconds, so our results are not purely explained in terms of exact timescales either. An open question for a distinct-mechanisms account is whether each mechanism relies on similar or distinct neural components. One possible candidate for the neural locus of the VAE is primary auditory cortex. Pairing visual information with an auditory signal modulates responses in primary auditory cortices 34. The VAE itself correlates with responses in primary auditory cortices 35 and is associated with early-latency electrophysiological components 36, suggesting a reliance on relatively early processing stages. The immediate spatial capture of sounds by vision (without adaptation) has again been associated with primary auditory cortices, but also with later electrophysiological components 37, suggesting a greater degree of mediation by higher-level processing stages. This suggests more immediate audio-visual recalibration may rely more on interactions between early sensory cortices and higher-level multisensory regions 38, whilst more sustained adaptation effects may lead to a more permanent recalibration of early sensory cortices driven by top-down feedback. Different adapting mechanisms operating at different timescales may entail a shift in the neural locus and in the interactions between regions. Under this hypothesis, qualitatively different behavioural effects of multisensory recalibration may be expected across varying timescales. Indeed, different timescales of recalibration have been shown to affect both the frequency-dependence 17 and spatial reference frames 31 of the VAE. In conclusion, we used the ventriloquist aftereffect to examine the mechanisms underpinning multisensory recalibration across differing timescales. Our results support an account in which distinct adapting mechanisms integrate information over different temporal scales. This enables perceptual systems to correct for inter-sensory discrepancies by optimally tuning into the timescales over which sensory changes occur in the environment.
Methods
Participants. Twenty-one participants (10 male, 11 female, median age = 22, age range = 20-46) took part in the first experiment. Data from one participant were excluded due to difficulties experienced in localising the sound sources and performing the localisation task. Eighteen participants (8 male, 10 female, median age = 22, age range = 20-46) took part in the second experiment; all participants in the second experiment also participated in the first.
The study was approved by the ethics committee of the School of Psychology, University of Nottingham, and all procedures were conducted in accordance with the relevant guidelines and regulations of the committee and with the Declaration of Helsinki. All participants gave informed written consent to participate in the study.
Stimuli. Visual stimuli were projected onto a large semi-circular screen (radius = 2.5 m, height = 2 m ≈ 43.60° visual angle) that wrapped 180° in azimuth around the participant. Video feeds were projected by 3 interleaving projectors, and Immersaview's Sol7 software (https://www.immersaview.com/) was used to blend the feeds and correct for the warp of the screen. Visual stimuli during the adaptation phases were 2-dimensional luminance Gaussian blobs (FWHM = 5° visual angle), presented across a range of azimuths but always at 0° elevation. These were presented for 500 ms and were sinusoidally contrast modulated (rate = 6 Hz, depth between 50% and 100% of maximum contrast). During test phases, a visual marker subtending 1° of visual angle and the full height of the screen was presented. A pair of vertical lines were presented throughout the entire experiment above and below fixation, at 0° azimuth and a sufficient vertical distance from 0° elevation so as not to occlude the Gaussian blobs. Participants were instructed to keep their head oriented straight ahead and to fixate in between the lines at all times. The colour of the lines also cued the current experiment phase: lines appeared red during adaptation phases and blue during test phases. Audio stimuli were pink-noise bursts, presented binaurally over Sennheiser HD265 headphones. Stimulus azimuth was simulated using head-related transfer functions (HRTFs) from the MIT Kemar database 39, which provides azimuths up to ±90° in 5° intervals. To encourage perceptual binding of visual and auditory stimuli, virtual reverberations were added to the auditory signals using the image-source method 40 to simulate sources at the distance of the projection screen. The participant was modelled as sitting 2.7 m from the left and 1.5 m from the back of a 4.2 × 5.2 m room, corresponding to the dimensions of the testing room. Sources were emulated as originating from an arc (radius = 2.5 m) wrapping around the front of the participant, corresponding to the projection screen. Reverberations comprised up to 5 reflections and assumed walls with a uniform absorbance of 0.2. An impulse response was constructed by collating the predicted incoming pulses at the participant's location following the reverberations. This was then convolved with the Kemar HRTFs to yield a new set of HRTFs that, when convolved with an input signal, would simulate both the source azimuth and the reverberations according to source distance. The sound signals themselves were 500 ms pink-noise bursts (100-4000 Hz bandpass) which were sinusoidally amplitude modulated (rate = 6 Hz, depth = 3 dB). Signals then had onsets and offsets gated by 25 ms raised-cosine ramps, before finally being convolved with the HRTFs. Stimuli were sampled at 44.1 kHz, and the average listening level was measured to be 62 dB(A) SPL for the stimulus at 0° azimuth. During adaptation phases, visual and audio stimuli were presented synchronously. Both stimuli were presented for 500 ms duration and with a 300 ms inter-stimulus interval.
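A minimal Python sketch of this kind of auditory stimulus generation is given below (illustrative only: it assumes a pre-loaded left/right HRTF impulse-response pair `hrtf_l`/`hrtf_r` for the desired azimuth, a frequency-domain approximation to pink noise, and a modulation index of roughly 0.17 as one possible mapping of the 3 dB AM depth):

```python
import numpy as np
from scipy.signal import butter, fftconvolve, sosfilt

FS = 44100  # sampling rate (Hz)

def pink_noise(n):
    """Approximate pink noise by 1/f-shaping white noise in the frequency domain."""
    spec = np.fft.rfft(np.random.randn(n))
    f = np.fft.rfftfreq(n, 1.0 / FS)
    spec[1:] /= np.sqrt(f[1:])  # 1/f power spectrum -> 1/sqrt(f) amplitude
    return np.fft.irfft(spec, n)

def make_burst(hrtf_l, hrtf_r, dur=0.5, mod_rate=6.0, mod_index=0.17, ramp=0.025):
    n = int(dur * FS)
    sos = butter(4, [100, 4000], btype="band", fs=FS, output="sos")
    sig = sosfilt(sos, pink_noise(n))                          # 100-4000 Hz bandpass
    t = np.arange(n) / FS
    sig *= 1.0 + mod_index * np.sin(2 * np.pi * mod_rate * t)  # 6 Hz AM
    nr = int(ramp * FS)                                        # 25 ms raised-cosine gates
    gate = 0.5 * (1 - np.cos(np.pi * np.arange(nr) / nr))
    env = np.ones(n)
    env[:nr], env[-nr:] = gate, gate[::-1]
    sig *= env
    # Spatialise: convolve with the (reverberant) HRTF pair for this azimuth.
    return np.stack([fftconvolve(sig, hrtf_l)[:n], fftconvolve(sig, hrtf_r)[:n]])
```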
To encourage perceptual binding, visual and audio stimuli were sinusoidally modulated in synchrony. To facilitate allocation of spatial attention to the stimulus location, audio-visual pairs were presented 5 times consecutively at each location 8. Audio-visual pairs were presented with either −20° (audio left of visual), 0°, or 20° (audio right of visual) offsets in azimuth; offsets always refer to the location of the audio relative to the visual stimulus. Spatial offsets were applied evenly on either side of the target location; e.g. a stimulus pair presented at 15° azimuth with a 20° spatial offset would comprise a visual stimulus at 5° and an audio stimulus at 25°. Target locations ranged between −35° (left) and +35° (right) azimuth in either 5° (experiment 1) or 10° (experiment 2) increments. During test phases, audio stimuli (specifications same as for the adaptation phase) were presented unimodally. Stimulus location was randomly selected on each trial from a normal distribution (μ = 0°, σ = 20°) between −35° (left) and 35° (right) azimuth in 5° steps. After each stimulus presentation, participants were required to reproduce the perceived auditory location (azimuth). A visual marker was presented on screen which participants could move left and right with a mouse to indicate the stimulus location. Participants entered their response by mouse click, after which the next trial would be presented following an inter-trial interval of 200 ms. All experiments were run using custom software written in Python (PsychoPy 41,42, http://www.psychopy.org/).
Experiment 1: magnitude of spatial recalibration. Procedure. In an initial experiment, we sought to replicate the basic VAE with our experimental paradigm. The experiment employed a blocked design, with each block presenting a particular adaptation spatial offset (−20°, 0°, 20°). Adaptation stimuli were presented between −35° (left) and 35° (right) azimuth in 5° increments (15 locations total). Each adaptation phase lasted 60 s, which comprised one full pass over all locations in a randomised order. Following each adaptation phase, participants completed 10 test trials in which they reproduced the locations of unimodally presented auditory stimuli (see above). Each block comprised 5 adapt/test cycles in this manner, and participants completed 2 blocks per adaptation offset (6 blocks total). Across all blocks, participants therefore provided 100 responses per adaptation offset.
Statistical analyses. Stimulus azimuth was assigned as the predictor variable and perceived azimuth was assigned as the outcome variable. First, outliers were rejected for each adaptation offset and participant independently using a robust Mahalanobis distance metric 43. For a given set of samples, the distance of each sample from the centre of the cluster was measured using the Mahalanobis distance. To avoid the distance metric itself being biased by outliers, a minimum covariance determinant estimate 44 was used to obtain the covariance matrix. Repeated sub-samples, each comprising 75% of all samples, were taken from the dataset and the determinant of the covariance matrix was calculated for each one. Mahalanobis distances were based on the covariance matrix of the sub-sample with the lowest covariance determinant value, as this is the least likely to include outliers. Distances were then converted to probabilities via a chi-square distribution, such that more distant samples were considered less probable.
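This outlier-rejection scheme can be sketched as follows (a minimal illustration using scikit-learn's FastMCD implementation in place of the explicit repeated sub-sampling described above; the function name and inputs are hypothetical):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def keep_mask(stim_az, resp_az, alpha=0.01, support_fraction=0.75):
    """Robust Mahalanobis outlier rejection: True = retain the trial.

    The covariance matrix comes from a minimum covariance determinant (MCD)
    estimate over 75% sub-samples, so the distance metric is not itself
    biased by outliers (scikit-learn's FastMCD handles the sub-sampling)."""
    X = np.column_stack([stim_az, resp_az])
    mcd = MinCovDet(support_fraction=support_fraction).fit(X)
    d2 = mcd.mahalanobis(X)            # squared robust distances
    p = chi2.sf(d2, df=X.shape[1])     # chi-square tail probability (df = dims)
    return p >= alpha                  # samples with p < alpha are rejected
```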
An alpha criterion of p < 0.01 was used to identify and reject outliers; on average this resulted in the rejection of approximately 3.93% of total trials. Data were entered into mixed-effects linear regression analyses for each adaptation offset separately, allowing random intercepts and slopes across participants. Spatial bias was quantified by the intercept coefficients of the models, whilst spatial gain was quantified by the slope coefficients. To test for differences between the adaptation offset conditions, the mixed-effects coefficients for each parameter (spatial bias/intercepts, spatial gain/slopes) across participants were entered into a one-way repeated-measures ANOVA with a main factor of adaptation offset (−20°, 0°, 20°). Greenhouse-Geisser corrections were applied where data violated the assumption of sphericity. Effect sizes are reported using eta-squared. Post-hoc paired-samples t-tests contrasted the pairwise combinations of adaptation offsets (−20° > 0°, 0° > 20°, −20° > 20°), subject to a Bonferroni-Holm correction for multiple comparisons 45. Effect sizes are reported using Hedges' g_av, whereby the mean of the condition pairwise differences is standardised by the average of each condition's standard deviation and then corrected for bias 46,47. All statistical tests were two-tailed and employed an alpha criterion of 0.05 for determining statistical significance.
Experiment 2: timescales of spatial recalibration. Procedure. Stimulus parameters and procedures of the second experiment were the same as the first, with the following exceptions. During the adaptation phase, stimuli were presented between ±35° azimuth in 10° intervals (8 locations total). We manipulated the duration of the adaptation phase between 32, 64, 128, and 256 seconds, corresponding to 1, 2, 4, and 8 passes over all locations respectively. Locations were selected in a pseudo-random order across passes. In addition, we included an adapt/de-adapt condition in which participants adapted to a given spatial offset for 256 seconds, then immediately de-adapted to the opposing offset for 32 seconds. All of the standard adaptation conditions were repeated for spatial offsets of −20°, 0°, and 20°, whilst the adapt/de-adapt condition was repeated for the −20° and 20° offsets. This gave a total of 14 adaptation conditions across all offsets and durations, each of which was repeated twice across a total of 28 blocks. After each adaptation phase, participants completed 30 test trials, performing the reproduction task as described above. Each block comprised 2 adapt/test cycles in this manner.
Statistical analyses. Data were analysed using a sliding window of 7 trials incremented in 1-trial intervals. This window size provided a reasonable compromise between the greater temporal resolution of shorter windows and the higher signal-to-noise ratio of longer windows. This yielded 504 samples per window per condition (28 samples per participant per window per condition). Multivariate outlier rejection via robust Mahalanobis distance 43 was performed (as described above) for each window, condition, and participant separately; on average this resulted in the rejection of approximately 7.32% of total trials. Data for a given window and condition were then entered into a mixed-effects linear regression analysis, allowing random intercepts and slopes across participants.
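For illustration, one window of this analysis might look as follows (a minimal statsmodels sketch; the toy data, column names, and window loop are our own assumptions, not the study's code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

WINDOW = 7  # 7-trial window, incremented in 1-trial steps

rng = np.random.default_rng(1)
# Hypothetical toy data: 18 subjects x 30 test trials for one condition.
rows = []
for s in range(18):
    for t in range(30):
        az = rng.choice(np.arange(-35, 36, 5))
        rows.append({"subject": s, "trial": t, "stim_az": az,
                     "resp_az": az + 4 * np.exp(-t / 10) + rng.normal(0, 3)})
df = pd.DataFrame(rows)

def window_fit(df, start):
    """Mixed-effects fit for one window (random intercepts and slopes across
    participants); returns the spatial bias and spatial gain coefficients."""
    win = df[(df["trial"] >= start) & (df["trial"] < start + WINDOW)]
    fit = smf.mixedlm("resp_az ~ stim_az", data=win,
                      groups=win["subject"], re_formula="~stim_az").fit()
    return fit.params["Intercept"], fit.params["stim_az"]

# Coefficients are assigned to the middle trial of each window:
coefs = {s + WINDOW // 2: window_fit(df, s) for s in range(30 - WINDOW + 1)}
```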
The magnitude of the VAE was calculated by contrasting the spatial bias (intercept) coefficients for the −20° over the 0° adaptation offset conditions, and the 0° over the 20° adaptation offset conditions, and then taking the trial-wise average of the two contrasts. There was no 0° adaptation offset for the adapt/de-adapt condition, so estimates were instead contrasted against the 0° adaptation offset estimates from the 256-second standard adaptation condition. In all cases, coefficients and VAE values were assigned to the middle trial of each window.
Modelling timescales of recalibration. Exponential (leaky integrator) and power functions were used to model the growth and decay of the VAE 25,26. The leaky integrator takes the form L = e^(−t/τ), where τ is a trial-constant (analogous to a time-constant) that determines the rate of change (with larger values giving slower change), and t represents the trial number. The power function takes the form P = (t + α)^(−1), where α is a rate parameter and t represents the trial number. In both cases, the function outputs were normalised to sum to 1, then multiplied by a gain parameter that determined the amplitude of the response. For each adaptation duration, a simple boxcar model was constructed that predicted a VAE response of 20° during adaptation, −20° during de-adaptation (if applicable), and 0° during the test periods. The duration of the adaptation periods was based on the number of trials within those periods (40, 80, 160, and 320 trials for 32, 64, 128, and 256 seconds of adaptation, respectively), and the duration of the test period was always 30 trials. The leaky integrator/power function was convolved with the boxcar for each adaptation duration, and the output during the test periods was used to predict the group average VAE estimates. Model parameters were optimised to minimise the prediction error via maximum likelihood estimation. For the leaky integrator, optimisation of the τ parameter was bounded by an upper limit of 360, which is the total number of trials in the combined adaptation and de-adaptation phases of the adapt/de-adapt condition. The error was minimised across all adaptation duration conditions simultaneously, such that a single set of parameters describes the output of the model across all conditions. Models were fit to the data both with and without inclusion of the adapt/de-adapt condition. An illustration of the modelling procedure is shown in Fig. 4. For the single-mechanism exponential and power models, a single leaky integrator/power function was convolved with the boxcars and the output taken directly from this. In the multiple-exponential model, separate short- and long-term leaky integrators were defined and each convolved with the boxcars, and the outputs were then summed together. The long-term mechanism was always constrained to have a longer trial-constant (τ) and a smaller gain parameter than the short-term mechanism.
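A sketch of this fitting procedure is given below (assuming the leaky_kernel() and predict_vae_multi() definitions from the earlier sketches are in scope; Gaussian maximum likelihood reduces to least squares here, and the synthetic `observed` data and starting values are placeholders):

```python
import numpy as np
from scipy.optimize import minimize

# (adaptation trials, de-adaptation trials) for the five duration conditions:
conditions = [(40, 0), (80, 0), (160, 0), (320, 0), (320, 40)]

# Placeholder data: in practice `observed` holds the group-average VAE time
# courses; here they are synthesised from known parameters for demonstration.
rng = np.random.default_rng(0)
observed = [predict_vae_multi(a, short=(4.5, 0.22), long=(210.0, 0.06),
                              deadapt_trials=d) + rng.normal(0, 0.2, 30)
            for a, d in conditions]

def sse(params):
    """Summed squared prediction error across all conditions."""
    tau_s, gain_s, tau_l, gain_l = params
    return sum(np.sum((obs - predict_vae_multi(a, short=(tau_s, gain_s),
                                               long=(tau_l, gain_l),
                                               deadapt_trials=d)) ** 2)
               for (a, d), obs in zip(conditions, observed))

fit = minimize(sse, x0=[5.0, 0.2, 100.0, 0.05],
               bounds=[(1e-3, 360), (0, None), (1e-3, 360), (0, None)],
               method="L-BFGS-B")  # tau bounded above at 360 trials
tau_s, gain_s, tau_l, gain_l = fit.x
```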
Goodness of fit between models was compared using corrected Akaike Information Criterion (AICc) values 29,30 and Residual Standard Errors (RSEs). Smaller AICc values indicate a better fit to the data. The weight of evidence for one model over another can be defined as p = e^((AICc1 − AICc2)/2), where AICc1 and AICc2 refer to the smaller and larger AICc values respectively. If this probability is less than the alpha criterion (α = 0.05), this can be taken as sufficient evidence to reject the latter model in favour of the former. Fits were compared between all pairwise combinations of models (single-exponential, multiple-exponential, power), subject to a Bonferroni-Holm correction for multiple comparisons 45.
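This comparison is straightforward to sketch; using the AICc values reported above for the all-conditions fits, the evidence ratios below reproduce the reported pairwise p-values (the Holm step uses statsmodels):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def evidence(aicc_a, aicc_b):
    """Weight of evidence p = e^((AICc1 - AICc2)/2), AICc1 being the smaller."""
    a1, a2 = sorted([aicc_a, aicc_b])
    return np.exp((a1 - a2) / 2.0)

# AICc values reported above for the all-conditions fits:
aicc_single, aicc_multi, aicc_power = 261.36, 243.37, 246.84

pvals = [evidence(aicc_single, aicc_multi),   # ~0.0001 (p < 0.001)
         evidence(aicc_single, aicc_power),   # ~0.001
         evidence(aicc_multi, aicc_power)]    # ~0.176
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
```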
Association of rs9679162 Genetic Polymorphism and Aberrant Expression of Polypeptide N-Acetylgalactosaminyltransferase 14 (GALNT14) in Head and Neck Cancer
Simple Summary
Neoadjuvant chemotherapy is performed before surgery. Because the tumor itself and the surrounding vascular bed have not yet been damaged, the chemotherapy can achieve good drug delivery, and the volume of the tumor can be reduced before the operation to facilitate surgery or radiotherapy. However, neoadjuvant chemotherapy also delays the patient's time to receive the main therapy. The physician must make sure that it produces a good response and that the disease does not progress during neoadjuvant chemotherapy. Therefore, predicting the treatment response to neoadjuvant chemotherapy can shorten the treatment time, reduce the harm of chemotherapy side effects, and avoid the occurrence of drug resistance. The results of this study showed that GALNT14-rs9679162 and mRNA expression were associated with post-treatment survival in head and neck cancer and can be used as an indicator to predict the treatment response to neoadjuvant chemotherapy.
Abstract
The polypeptide N-acetylgalactosaminyltransferase 14 (GALNT14) rs9679162 genotype and mRNA expression have been associated with treatment outcomes in various cancers. However, the relationship between GALNT14 and head and neck cancer was unclear. A total of 199 patients with head and neck squamous cell carcinoma (HNSCC) were included in this study, comprising oral SCC (OSCC), oropharyngeal SCC (OPSCC), laryngeal SCC (LSCC), and others. The DNA and RNA of cancer tissues were extracted using the TRI Reagent method. rs9679162 was analyzed using polymerase chain reaction (PCR) and sequencing methods in 199 DNA specimens, and mRNA expression was analyzed using quantitative reverse transcription PCR (RT-qPCR) in 68 paired RNA specimens of non-cancerous matched tissues (NCMT) and tumor tissues. The results showed that the TT, TG, and GG genotypes appeared at frequencies of 30%, 44%, and 26%, respectively. The non-TT genotype and the G allele were associated with alcohol, betel nut, and cigarette use among patients with OSCC, and also affected the treatment and survival of patients with OSCC and LSCC. High GALNT14 mRNA expression levels were associated with increased lymphatic metastasis in patients with HNSCC and with treatment and survival in patients with OPSCC. Overall, the GALNT14-rs9679162 genotype and mRNA expression level can be used as indicators of HNSCC treatment prognosis.
Introduction
The N-acetylgalactosaminyltransferase (GALNT) enzyme family contains 20 members (GALNT1-20), which mediate protein O-glycosylation by transferring the N-acetyl-D-galactosamine (GalNAc) residue of UDP-GalNAc to the hydroxyl group of serines and threonines in target peptides [1]. GALNT6 has been shown to transfer GalNAc to large proteins such as mucins. Abnormal regulation of mucin-type O-glycosylation of proteins affects the malignancy of cancer cells, including tumor neogenesis, cell replication, migration, metastasis, and drug resistance [2]. Recent studies revealed that GALNT14, which is involved in various biological functions, is abnormally expressed in various cancers [3]. Approximately 30% of samples from various human malignancies show GALNT14 overexpression, and GALNT14 affects the O-glycosylation of death receptors in cancer cells and modulates sensitivity to cancer therapy [4]. GALNT14 mRNA and protein are upregulated in the chemoresistant breast cancer cell line MCF7 [5].
GALNT14 expression is upregulated in and correlated with ovarian cancer [6]. Downregulation of GALNT14 significantly inhibits apoptosis and ferroptosis in ovarian cancer cells [1]. The GALNT14 gene is located on chromosome 2, with 16 exons, and its mRNA is translated into a 552-amino-acid protein with a molecular mass of 64,321 Da. The single nucleotide polymorphism (SNP) GALNT14 rs9679162 is located in intron 3, and the genotypes are TT, GT, and GG. Although this SNP does not affect the post-translational amino acid sequence, it is linked to cancer prognosis during chemotherapy. In advanced hepatocellular carcinoma (HCC) patients, the rs9679162 genotypes are associated with the objective response to chemotherapy using 5-fluorouracil, mitoxantrone, and cisplatin (FMP) [7] and with the outcome of chemoembolization plus sorafenib therapy [8]. HCC patients with the TT genotype have a significantly better median overall survival, time-to-progression, response rate, and disease control rate than HCC patients with non-TT genotypes [9,10], whereas in patients with esophageal squamous cell carcinoma the GG genotype is associated with a longer time and a partial response to concurrent chemoradiotherapy (radiotherapy combined with FMP) [11]. In addition to HCC, the GALNT14 SNP has been shown to predict progression-free survival (PFS), overall survival (OS), and response to chemotherapy in several types of gastrointestinal cancers, including cholangiocarcinoma, colorectal cancer, gastric cancer, esophageal cancer, and pancreatic ductal adenocarcinoma [12]. The GALNT14 TT genotype is associated with unfavorable overall survival in patients with stage III colorectal cancer receiving curative surgery and adjuvant oxaliplatin-based chemotherapy [13]. However, the GG genotype is associated with a significantly better overall survival than the non-GG genotypes in patients with resected pancreatic ductal adenocarcinoma [12]. Head and neck cancer develops from tissues in the oral cavity (mouth), pharynx, larynx (throat), paranasal sinuses, nasal cavity, salivary glands, nose, sinuses, and facial skin. The most common types of head and neck cancer occur in the lips, mouth, and larynx. Squamous cell carcinoma of the head and neck accounts for over 90% of head and neck cancers [14]. Head and neck squamous cell carcinoma (HNSCC) is the seventh most common type of cancer diagnosed worldwide, with more than 600,000 new cases diagnosed annually [15], and oral cancer is the most common HNSCC in North Eastern Nigeria, Yemen, and Taiwan [16-18]. Alcohol and/or tobacco are major risk factors for HNSCC. Chewing of betel nut is also a major risk factor for HNSCC in Taiwan and India. Approximately 70% of oropharyngeal cancers (including those of the tonsils, soft palate, and base of the tongue) are linked to human papillomavirus (HPV) [19]. Traditionally, surgery and radiation therapy have been the treatments of choice for most types of head and neck cancer, and concurrent chemoradiotherapy improves survival rates in HNSCC patients. The 5-year relative survival rate in head and neck cancers significantly improved from 54.7% in 1992-1996 to 65.9% in 2002-2006 [20]. Chemotherapy with modified docetaxel, cisplatin, and 5-fluorouracil (5-FU) (mTPF) is effective for the palliative treatment of recurrent and metastatic HNSCC in Asian patients [21]. However, some patients still have a poor prognosis after mTPF treatment, which also causes unnecessary side effects.
If the prognosis of patients after chemotherapy can be accurately predicted, better chemotherapy outcomes can be achieved and unnecessary side effects can be avoided. The GALNT14-rs9679162 genotype is a predictor of PFS, OS, and response to FMP chemotherapy in HCC, and GALNT14 expression also affects chemoresistance in breast and ovarian cancer cells. However, the GALNT14-rs9679162 genotype and its expression in head and neck cancers have not been studied. Therefore, this study analyzed the frequency of the GALNT14-rs9679162 genotype and the expression level of GALNT14 in patients with head and neck cancer. In addition, we investigated whether the GALNT14-rs9679162 genotype is related to GALNT14 mRNA expression.
Subjects
The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Changhua Christian Hospital (200501, 29 March 2022) for studies involving humans, and a total of 233 HNSCC cases were obtained from the Changhua Christian Hospital Tissue Bank. The tissue samples were immediately frozen in liquid nitrogen until further use. A selection process was performed on frozen sections to obtain HNSCC samples with more than 70% tumor cells, which were required for the analysis. The first diagnosis dates ranged from 10 August 2007 to 16 September 2019. Samples with poor-quality extracted DNA or RNA, those that failed PCR or RT-qPCR, and those with unclear sequencing signals were excluded. GALNT14-rs9679162 polymorphism analysis was performed in 199 HNSCC cases, and GALNT14 mRNA expression analysis was performed in 68 paired HNSCC and non-cancerous matched tissues (NCMT). There were 62 cases of HNSCC overlapping in both the polymorphism and the mRNA expression analyses. The HNSCC cell lines A253, FaDu, HSC3, OECM-1, SAS, and SCC9, and the normal gingival epithelial SG cell line, were used for the in vitro study. The OSCC cell lines were kindly gifted by Professor Chi-Yuan Chen (Chang Gung Memorial Hospital) and Professor Hsi-Feng Tu (National Yang Ming Chiao Tung University). Cell culture conditions were as described previously [22].
DNA and RNA Extraction from Tissues
DNA and total RNA were extracted from various tissues using a TRI Reagent RNA isolation kit (Molecular Research Center, Cincinnati, OH, USA). After homogenizing the tissues in the TRI Reagent, 0.1 mL of 1-bromo-3-chloropropane or 0.2 mL of chloroform was added per mL of TRI Reagent used. The sample was covered tightly, shaken vigorously for 15 s, and allowed to stand for 2-15 min at room temperature. The resulting mixture was centrifuged at 12,000× g for 15 min at 2-8 °C. The DNA was in the phenol phase and interphase, and the RNA was in the clear hydrophilic layer. Further DNA and RNA separation and purification followed the manufacturer's instructions. The quality and concentrations of DNA and RNA were measured by NanoVue Plus spectrophotometer (General Electric Company, Boston, MA, USA) and by electrophoresis in 1% agarose gel. The samples were stored at −20 °C before use. The purified DNA and RNA were used for genotyping and gene expression assays, respectively.
Genotyping
For GALNT14-rs9679162 polymorphism analysis, 199 primary HNSCC cases, including 113 oral squamous cell carcinoma (OSCC) cases, 39 oropharyngeal squamous cell carcinoma (OPSCC) cases, 37 laryngeal squamous cell carcinoma (LSCC) cases, and 10 other SCC cases without previous treatment, were included. DNA was isolated from these tissues using the TRI Reagent extraction method.
GALNT14 sequences containing the rs9679162 SNP were obtained by PCR using the following primers: forward: 5′-TCACGAGGCCAACATTCTAG-3′, reverse: 5′-TTAGATTCTGCATGGCTCAC-3′, with reaction conditions of 95 °C for 1 min, 58 °C for 1 min, and 72 °C for 1 min, for 40 cycles. Genotyping was performed by purifying the 172 bp PCR products from the gel using a QIAEX II Gel Extraction Kit and sequencing them using a 377 DNA sequencer (Applied Biosystems, Foster City, CA, USA), according to the manufacturer's instructions.
RT-qPCR
RT-PCR was performed as previously described [23]. All RNA samples were treated with DNase I to remove DNA contamination. For GALNT14 mRNA analysis, 68 HNSCC and non-cancerous matched tissue (NCMT) pairs were used, which included 27 paired oral cancer cases, 20 paired oropharynx cancer cases, 20 paired larynx cancer cases, and one paired laryngopharynx cancer case. A total of 62 HNSCC patients (25 paired oral cancer cases, 20 paired oropharynx cancer cases, 16 paired larynx cancer cases, and 1 paired laryngopharynx cancer case) overlapped in both the SNP genotyping and the mRNA expression analyses. GALNT14 expression was analyzed by qPCR using the following primers: forward: 5′-TAGCATCATCATCACCTTCCAC-3′, reverse: 5′-TTACAGTCATCAGGGTCATTGC-3′, with reaction conditions of 95 °C for 30 s, 58 °C for 15 s, and 72 °C for 15 s. The specific PCR product was 141 bp [24]. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) served as an internal control (forward: 5′-TGGTATCGTGGAAGGACTCATGAC-3′, reverse: 5′-ATGCCAGTGAGCTTCCCGTTCAGC-3′). PowerUp SYBR Green Master Mix (Applied Biosystems, Waltham, MA, USA) was used for qPCR. Amplification was performed for 40 cycles at 95 °C for 15 s and 60-61.7 °C for 1 min. Three independent PCR reactions were performed to validate the reproducibility of the analysis. The Ct value of GAPDH was 20-30 in the NCMT and tumor tissues. Cases with inconsistent results were excluded from the final analysis. GALNT14 expression in NCMT and HNSCC is shown as −ΔCt, i.e. Ct(GAPDH) − Ct(GALNT14). GALNT14 upregulation or downregulation in HNSCC is indicated by −ΔΔCt, i.e. (−ΔCt of HNSCC) − (−ΔCt of NCMT). Upregulation (−ΔΔCt ≥ 0) or downregulation (−ΔΔCt < 0) of GALNT14 was used for receiver operating characteristic (ROC) analysis to determine the cut-off score.
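The −ΔCt/−ΔΔCt computations above amount to the following (a minimal sketch; the Ct values in the example are hypothetical placeholders):

```python
def neg_delta_ct(ct_gapdh, ct_galnt14):
    """-dCt = Ct(GAPDH) - Ct(GALNT14); higher values = higher GALNT14 expression."""
    return ct_gapdh - ct_galnt14

def neg_delta_delta_ct(tumour, ncmt):
    """-ddCt = (-dCt of tumour) - (-dCt of NCMT); >= 0 indicates upregulation."""
    return neg_delta_ct(*tumour) - neg_delta_ct(*ncmt)

# Example with placeholder Ct values: (Ct GAPDH, Ct GALNT14) per tissue.
score = neg_delta_delta_ct(tumour=(24.1, 30.5), ncmt=(23.8, 29.2))
print("upregulated" if score >= 0 else "downregulated")
```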
Western Blotting Analysis
Western blotting was performed using 50 µg of total protein from cultured cells, as described previously [27]. Depending on size, proteins were resolved on 7.5-12.5% polyacrylamide gels. The resolved proteins were transferred to 0.22 µm polyvinylidene fluoride (PVDF) membranes and blocked with 5% bovine serum albumin (BSA) for 1 h at room temperature (RT). The membranes were incubated with primary antibodies against GALNT14 (1:1000, sc-393051, Santa Cruz Biotechnology, Dallas, TX, USA) and GAPDH (1:10,000, sc-32233, Santa Cruz Biotechnology) overnight at 4 °C. The membranes were then incubated with a peroxidase AffiniPure goat anti-mouse IgG secondary antibody (1:2000, 115-035-003, Jackson ImmunoResearch, Bar Harbor, ME, USA) for 2 h at RT, and visualized using the SuperSignal West Femto chemiluminescent substrate (Thermo Scientific, Rockford, IL, USA). GAPDH was used as an internal control. The results were quantitated using ImageJ software.
Statistical Analysis
Nonparametric analyses were performed, including the Mann-Whitney test for unpaired comparisons and the Wilcoxon signed-rank test for paired comparisons. One-way ANOVA was used for comparing three or more groups. Fisher's exact test and logistic regression, with analysis of the odds ratio (OR) and 95% confidence interval (95% CI), were performed using Prism 9.0 (GraphPad Software, version 9.0.0, Irvine, CA, USA) or the Statistical Package for the Social Sciences 12.0 (SPSS, Inc., Chicago, IL, USA). Survival curves were compared using the log-rank test. Differences between variants were considered significant at p < 0.05.
Clinical Characteristics of the Subjects in the GALNT14-rs9679162 Polymorphism Analysis
Subjects with incomplete medical records, PCR failures, or sequencing failures were excluded. A total of 199 HNSCC patients were included in this study, comprising 188 men and 11 women, with an average age of 57.33 ± 10.50 years. The subtypes of cancers included OSCC (113 cases), OPSCC (39 cases), LSCC (37 cases), and other cancers (10 cases). The top three types of head and neck cancer in Taiwan are oral, nasopharyngeal, and laryngeal cancers. Clinicopathological parameters, including subtype, age, sex, alcohol consumption, betel nut chewing, cigarette smoking habits, differentiation, tumor size, lymph node metastasis, AJCC 8th edition tumor stage, radiotherapy, chemotherapy, overall survival, and lesion site of oral cancer, are listed in Table S1. The case histories of some patients were missing, and some of them were categorized into stage BBB (no record). The most common primary sites in oral cancer subjects were the cheek and the tongue.
GALNT14-rs9679162 Genotype Frequency in HNSCC
The HNSCC cases included OSCC (57%), OPSCC (20%), LSCC (18%), and other cancers (5%). The GALNT14-rs9679162 fragment was amplified by PCR followed by direct sequencing. The base at the GALNT14-rs9679162 position is marked by the red box (Figure 1a). A single red peak or a single black peak indicates a homozygous TT or GG genotype, respectively, whereas one red and one black peak indicate a heterozygous TG genotype. Based on the sequencing data, the genotypes were identified as TT, GT, or GG (Figure 1a, red box). Among the 199 HNSCC cases, 59 cases were type TT (30%), 87 cases were type GT (44%), and 53 cases were type GG (26%). When all HNSCC samples were analyzed together, some clinically significant associations were obscured because of differences between the OSCC, OPSCC, and LSCC groups. The ratios of TT, TG, and GG genotypes within OSCC, OPSCC, LSCC, and other cancers are shown in Figure 1b. No significant differences in genotypic or allelic frequencies for GALNT14-rs9679162 were observed between patients with overall HNSCC and the individual cancer subtypes. The genotypic distribution of GALNT14-rs9679162 in HNSCC, OSCC, OPSCC, and LSCC patients did not deviate from the Hardy-Weinberg equilibrium (Figure 1b).
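A Hardy-Weinberg check of this kind can be sketched as follows (illustrative only, using the genotype counts reported above):

```python
import numpy as np
from scipy.stats import chi2

def hwe_chisq(n_tt, n_tg, n_gg):
    """Chi-square test of Hardy-Weinberg equilibrium (1 degree of freedom)."""
    n = n_tt + n_tg + n_gg
    p = (2 * n_tt + n_tg) / (2 * n)  # T allele frequency
    q = 1 - p
    expected = np.array([p * p * n, 2 * p * q * n, q * q * n])
    observed = np.array([n_tt, n_tg, n_gg])
    stat = ((observed - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=1)

# Using the genotype counts reported above (59 TT, 87 TG, 53 GG):
stat, pval = hwe_chisq(59, 87, 53)  # pval > 0.05: no deviation from HWE
```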
Comparison of Clinical Parameters between GALNT14-rs9679162 TT and Non-TT Genotypes in HNSCC
The association between GALNT14-rs9679162 genotypes and clinicopathological features was analyzed using binary logistic regression, and the p values, ORs, and 95% CIs are shown in Table 1. The distribution of the non-TT genotypes (GT and GG) was associated with treatment-survival status in HNSCC and with risk factors in patients with OSCC. The frequency of the non-TT genotype was significantly higher in HNSCC patients who died after radiotherapy and in OSCC patients who consumed alcohol, chewed betel nut, and smoked cigarettes.
Allelotypes of GALNT14-rs9679162 in HNSCC
The association between GALNT14-rs9679162 alleles and clinicopathological features was analyzed using binary logistic regression analysis, and the p values, ORs, and 95% CIs are shown in Table 2. The G allele distribution was associated with treatment survival in HNSCC and LSCC patients and with risk factors in OSCC patients. The frequency of the G allele was significantly higher in HNSCC patients who died following radiotherapy or chemotherapy, in OSCC patients who used betel nut or cigarettes, and in LSCC patients who died after radiotherapy or chemotherapy. The G allele frequency was also higher in OPSCC patients who died after radiotherapy; however, this difference was borderline (p = 0.0520; OR = 3.341), only approaching statistical significance.
Association between GALNT14-rs9679162 Genotype and Survival Rate in HNSCC Subjects
The association between GALNT14-rs9679162 genotypes and survival status following radiotherapy and chemotherapy, and the overall survival status, was investigated. The survival probability of the GALNT14-rs9679162 GG genotype was significantly lower in the radiotherapy group of HNSCC patients (Figure 1c), in the chemotherapy group of OSCC patients (Figure 1d), and in the radiotherapy and chemotherapy groups of LSCC patients (Figure 1f). However, the survival probability of the GALNT14-rs9679162 GG genotype was not significantly different in OPSCC patients (Figure 1e).
Clinical Characteristics of the Subjects in the GALNT14 mRNA Expression Analysis
A total of 68 HNSCC and non-cancerous matched tissue (NCMT) pairs were included in this analysis, from 65 males and 3 females with an average age of 57.85 ± 10.69 years. These included 27 oral cancer and NCMT pairs, 20 oropharynx cancer and NCMT pairs, 20 laryngeal cancer and NCMT pairs, and 1 laryngopharyngeal cancer and NCMT pair. The clinicopathological parameters of the overall HNSCC and subtype cancer cases are shown in Table S2.
GALNT14 mRNA Expression in HNSCC and Its Subtypes
This analysis included cancer and NCMT pairs from OSCC (27 pairs, 40%), OPSCC (20 pairs, 29%), LSCC (20 pairs, 29%), and other cancers (1 pair, 2%). The GALNT14 mRNA was reverse-transcribed and a 141 bp fragment was amplified using RT-qPCR, and the GALNT14 expression level is shown as −ΔCt. In Figure 2a, the symbols and lines show the expression levels of GALNT14 in each of the paired NCMT and cancer tissues. A slope of less than 0 indicates that GALNT14 mRNA expression is lower in cancer (T < N), and a slope greater than 0 indicates the opposite (T > N). GALNT14 mRNA expression was not significantly different in HNSCC, OSCC, OPSCC, and LSCC tissues compared to paired NCMTs (Figure 2a). It was also not significantly different among the various cancer subtypes within HNSCC (Figure 2b). Analysis of the data from the GPL96 platform (HG-U133) of the Gene Expression Database of Normal and Tumor Tissues (GENT2) [28] revealed that the expression of GALNT14 mRNA was not different in laryngeal and pharyngeal cancer tissues compared to unpaired normal tissues (Figure 2c). Analysis of the data from the GPL570 platform (HG-U133 plus 2) showed that the expression of GALNT14 mRNA was also not different in pharyngeal cancer tissues, but was lower in head and neck cancer and oral cancer tissues compared to unpaired normal tissues (Figure 2d). However, the results from the GENT2 database differed from those of our HNSCC subjects (Figure 2a). It is possible that the expression of GALNT14 mRNA differs between NCMTs and normal tissues. The carcinogenic risk factors for HNSCC also differed between the GENT2 subjects and the subjects in this study.
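The binary logistic regression used for the association analyses above (Tables 1 and 2) can be sketched as follows (a minimal statsmodels illustration; the toy data and column names, e.g. a 0/1 'dead_after_rt' outcome and a 0/1 'g_allele' carrier indicator, are hypothetical, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Hypothetical toy data: G-allele carriage and death after radiotherapy.
df = pd.DataFrame({"g_allele": rng.integers(0, 2, 200)})
df["dead_after_rt"] = rng.binomial(1, np.where(df["g_allele"] == 1, 0.45, 0.25))

fit = smf.logit("dead_after_rt ~ g_allele", data=df).fit(disp=0)
odds_ratio = np.exp(fit.params["g_allele"])               # OR for G carriers
ci_low, ci_high = np.exp(fit.conf_int().loc["g_allele"])  # 95% CI for the OR
print(f"OR = {odds_ratio:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f}), "
      f"p = {fit.pvalues['g_allele']:.4f}")
```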
Association between GALNT14 Expression and Clinical Parameters in HNSCC
We compared GALNT14 expression in 68 paired NCMT and HNSCC tissue samples. The mean age of the 68 patients from whom the paired tissue samples were obtained was 57 years. Patients whose clinical information was partially lost or who were categorized as stage BBB were excluded. The association between GALNT14 mRNA expression levels and clinicopathological features was analyzed using binary logistic regression analysis, and the p values, ORs, and 95% CIs are shown in Table 3. The frequency of GALNT14 upregulation was higher in the tumor tissues of HNSCC patients showing lymphatic metastasis (subjects with N > 0 in TNM; N staging shown in Table 3). Similarly, the frequency of GALNT14 upregulation was higher in the tumor tissues of OPSCC patients who died after radiotherapy or who died in the overall survival analysis. OPSCC patients who died after chemotherapy also showed a similar pattern, with the difference approaching significance. Though GALNT14 expression was not associated with any other clinical parameters in OSCC and LSCC subjects, it was significantly different between OSCC and OPSCC subjects. GALNT14 expression was not affected by alcohol, betel nut, or cigarette usage.
Correlation between GALNT14 Expression and Survival Rate in HNSCC
Of the 68 patients with HNSCC, 43 received radiotherapy and 15 did not; 44 received chemotherapy and 16 did not; and 36 received chemoradiotherapy. The radiotherapy survival curves, chemotherapy survival curves, and overall survival curves of HNSCC patients were analyzed using the log-rank test. The chemotherapy survival of patients with OSCC, the radiotherapy survival of patients with OPSCC, and the overall survival of patients with OPSCC were significantly different between the GALNT14 downregulation (N > T) and upregulation (N < T) groups. The GALNT14 upregulation group showed a better chemotherapy survival rate than the GALNT14 downregulation group in OSCC patients (Figure 2f). However, the GALNT14 downregulation group showed better radiotherapy survival and overall survival rates than the upregulation group in patients with OPSCC (Figure 2g). The survival curves were not significantly different between the GALNT14 downregulation and upregulation groups in HNSCC and LSCC patients (Figure 2e,h).
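For illustration, this kind of survival comparison can be sketched with the lifelines package (the toy data and column names such as 'months', 'event', and 'upregulated' are hypothetical placeholders, not the study's data):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical toy data: follow-up time, death event, and GALNT14 status.
df = pd.DataFrame({
    "months":      [12, 30, 45, 60, 8, 22, 50, 70],
    "event":       [1, 1, 0, 0, 1, 0, 1, 0],
    "upregulated": [True, True, True, True, False, False, False, False],
})
up, down = df[df["upregulated"]], df[~df["upregulated"]]

km = KaplanMeierFitter()
km.fit(up["months"], up["event"], label="GALNT14 upregulated (N < T)")
ax = km.plot_survival_function()
km.fit(down["months"], down["event"], label="GALNT14 downregulated (N > T)")
km.plot_survival_function(ax=ax)

res = logrank_test(up["months"], down["months"],
                   event_observed_A=up["event"], event_observed_B=down["event"])
print(f"log-rank p = {res.p_value:.3f}")
```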
Relation between OSCC Risk Factors and GALNT14-rs9679162 Genotype or GALNT14 mRNA Expression in OSCC
GALNT14-rs9679162 non-TT genotypes were associated with alcohol, betel nut, and cigarette use (Table 1). The frequency of GALNT14 mRNA downregulation was higher in OSCC cases than in OPSCC cases (Table 3). GALNT14 mRNA expression levels were not significantly different between alcohol, betel nut, and cigarette users and non-users in the OSCC group (Figure 4a-c). However, 7 of the 12 alcohol users (58.33%), 10 of the 15 betel nut users (66.67%), and 12 of the 17 cigarette users (70.59%) showed a downregulation of GALNT14 mRNA expression (−ΔΔCt < 0). There was no relationship between GALNT14 mRNA expression levels and genotypes in alcohol and cigarette users (Figure 4d-f). Seven oral cell lines were used to analyze GALNT14 expression levels across the three GALNT14-rs9679162 genotypes. The SG, SAS, and HSC3 cell lines had the TT genotype; the OECM-1 and A253 cell lines had the GT genotype; and the FaDu and SCC9 cell lines had the GG genotype. The top two cell lines with high GALNT14 expression were OECM-1 and A253 (Figure 4g). These data indicated that the actual GALNT14 mRNA and protein expression levels were not correlated with the GALNT14-rs9679162 genotype. The majority of the HNSCC patients used alcohol, betel nut, and cigarettes in combination. Hence, in order to reduce the confounding effects of the other variable factors and to examine the specific effects of betel nut, we used arecoline in the following experiment. After 24 h of arecoline treatment, GALNT14 mRNA and protein expression were downregulated in the SG, SAS, and HSC3 cell lines (Figure 4h-j) and upregulated in the OECM-1, A253, and SCC9 cell lines (Figure 4k,l,n), while no significant change was observed in FaDu (Figure 4m). These results showed that GALNT14 mRNA and protein expression might be inhibited in the TT genotype but enhanced in non-TT genotypes after arecoline treatment in OSCC cell lines. Full pictures of the Western blots and the densitometry scans are presented in Figure S1.
Figure 4 (panels d-n). (d-f) GALNT14 mRNA expression across the GALNT14-rs9679162 genotypes in alcohol-drinking, betel nut-chewing, and cigarette-smoking OSCC patients. (g) GALNT14 mRNA (−ΔCt, mean ± SD) and protein expression (fold) levels in seven OSCC cell lines. (h-n) GALNT14 mRNA and protein expression after treatment with vehicle, 12.5, or 100 µg/mL arecoline for 24 h in SG, SAS, HSC3, OECM-1, A253, FaDu, and SCC9 cell lines. One-way ANOVA was used to analyze the data in (d-n). * p < 0.05, ** p < 0.01, *** p < 0.001.
Relationship between Chemoresistance and GALNT14-rs9679162 Genotype or GALNT14 mRNA Expression in OSCC
GALNT14-rs9679162 genotype, allele, and GALNT14 mRNA expression seem to affect radiotherapy, chemotherapy, and overall survival status in OSCC and OPSCC. Hence, we re-analyzed the relationship between GALNT14-rs9679162 genotypes and GALNT14 mRNA in patients with OSCC and OPSCC who received radiotherapy and chemotherapy. GALNT14 mRNA expression was not significantly different between the surviving and deceased radiotherapy recipients in the OSCC group (Figure 5a). However, mRNA expression was significantly lower in the deceased chemotherapy recipients in the OSCC group than in the surviving ones (Figure 5b). GALNT14 mRNA expression was higher in the deceased radiotherapy and chemotherapy recipients in the OPSCC group than in the surviving ones (Figure 5c,d). However, the GALNT14 mRNA expression level was not significantly different between the GALNT14-rs9679162 genotypes within the deceased radiotherapy (Figure 5e,g) and chemotherapy recipient groups (Figure 5f,h) in the OSCC and OPSCC groups. Cytotoxicity analysis of these cell lines showed that HSC3 and SAS cells were more tolerant to docetaxel (Figure 5i), and OECM-1 and A253 were more tolerant to 5-Fu, compared to the other OSCC cell lines (Figure 5j).
Discussion
According to the database of single nucleotide polymorphisms (dbSNP) [29], the distribution of GALNT14-rs9679162 T:G alleles was 56%:44% in Americans, 37%:63% in Europeans, 66%:34% in Africans, 55%:45% in East Asians, and 70%:30% in Japanese, indicating that the GALNT14-rs9679162 distribution frequency differs among races. In this study, the T allele accounted for 51.5% and the G allele for 48.5%, which is close to the data on East Asians in the dbSNP database.
The non-TT genotype is associated with poor survival and chemotherapy response in HCC and gastrointestinal cancers, but is beneficial for survival and chemotherapy response in colorectal cancer and pancreatic ductal adenocarcinoma [7-13]. Our results showed that the non-TT genotype frequency was higher in the deceased HNSCC patients who received chemotherapy (Table 1), the GG genotype shortened the radiotherapy survival time (Figure 1c), and the G allele frequency was higher in the deceased patients in the radiotherapy, chemotherapy, and overall survival groups (Table 2). Thus, the GALNT14-rs9679162 genotype is indicative of the therapeutic survival status of patients with HNSCC (Tables 1 and 2, and Figure 1c). HNSCC includes OSCC, OPSCC, and LSCC, according to the site of the primary tumor. In OSCC, the non-TT genotype was associated with alcohol, betel nut, and tobacco use (Table 1); the G allele was associated with betel nut and tobacco use (Table 2); and the GG genotype shortened the chemotherapy survival time (Figure 1d). In OPSCC, the G allele was associated with death in radiotherapy recipients (Table 2). In LSCC cases, the G allele was associated with death in the radiotherapy and chemotherapy groups (Table 2), and the GG genotype shortened the radiotherapy and chemotherapy survival times (Figure 1f). Therefore, the survival rate was poor in HNSCC patients with the GALNT14-rs9679162 non-TT genotype or the G allele, and the survival time of HNSCC patients with the GG genotype was short. GALNT2 enhances the invasiveness of OSCC cells by modifying the O-glycans on the epidermal growth factor receptor (EGFR) [30]. GALNT14 mediates the initial step of mucin-type O-glycosylation, and extensive O-glycosylation of mucin 1 (MUC1) contributes to cell resistance to anoikis [24]. High GALNT14 mRNA expression might enhance the O-glycosylation of MUC1 or EGFR to promote lymphatic metastasis in HNSCC (Table 3). Although the NCMT and tumor tissues were all exposed to the same carcinogenic initiators and promoters, GALNT14 downregulation in the OSCC tissues compared to paired NCMT was consistent with the lower expression of GALNT14 in oral cancer tissues compared to normal oral tissues in the public GENT2 profile analysis (Figure 2c,d). GALNT14 expression was not affected by alcohol, betel nut, or cigarettes in this study (Table 3). This may be because most of the HNSCC patients had more than one of these risk habits, and different risk factor combinations interfere with GALNT14 expression. In the Xena Functional Genomics Explorer (TCGA) [31], in the GDC TCGA and TCGA Head and Neck Cancer profiles, GALNT14 expression was induced by alcohol consumption (Figure S2a,d) and upregulated in late-stage tumors (Figure S2b,e). The survival time of the bottom 25% GALNT14 expression group was significantly longer than that of the top 25% GALNT14 expression group (Figure S2c,f). Therefore, GALNT14 might be upregulated by alcohol consumption but downregulated by betel nut chewing in GALNT14-rs9679162 TT genotype patients (Figure 4). HPV-mediated OPSCC is fairly responsive to chemoradiotherapy and has a better prognosis than HPV-unrelated OPSCC [32]. Approximately 30% of OPSCC patients are HPV positive in Taiwan. In the Gene Expression Omnibus database (GEO) [33], in the GDS1667/219271_at profile, GALNT14 mRNA expression was higher in HPV-positive than in HPV-negative head and neck cancer cases (Figure S3a,b).
In the GDS3126/219271_at profile, patients with high GALNT14 expression also showed radiosensitivity compared to patients with low GALNT14 expression (Figure S3c,d). Upregulation of GALNT14 mRNA in tumors was more frequent in the dead patients in the radiotherapy and overall survival groups (Table 3) and reduced the radiotherapy and overall survival times in OPSCC (Figure 2g). The decreased survival rate after treatment may be related to patient age, tumor malignancy, tumor resistance, and the side effects of treatment. Tenofovir is a major negative regulator of GALNT14 substrates and an unfavorable anti-hepatitis B drug in patients with hepatocellular carcinoma receiving sorafenib [34]. Therefore, high expression of GALNT14 may also affect chemotherapy outcome (Figure 5d). Currently, there are no studies on GALNT expression in OPSCC. Whether HPV infection affects GALNT14 mRNA expression in OPSCC patients requires further exploration. The detailed medical records, risk factors, and chemoradiotherapy plan for each HNSCC patient require further investigation. GALNT14-rs9679162, located in intron 3 of the GALNT14 gene, has no effect on mRNA and amino acid sequences. However, introns have direct and indirect roles in regulating themselves or other genes. The direct functions include alternative splicing, enhanced gene expression, control of mRNA transport, chromatin assembly, and nonsense-mediated decay. The indirect functions include natural selection, serving as a source of new genes, and non-coding functional RNA genes [35]. We did not find a relationship between GALNT14-rs9679162 polymorphism or mRNA expression and the risk factors or the treatment-survival curves in the clinical data (Figures 3-5). In the in vitro model, GALNT14 mRNA was downregulated in four out of seven (57%) oral cell lines. However, GALNT14 mRNA and protein expression were downregulated in cell lines with the TT genotype and upregulated in cell lines with the non-TT genotype upon treatment with arecoline (Figure 4h-n). There have been no in vitro studies focusing on the different GALNT14-rs9679162 genotypes; therefore, the effect of the different genotypes on cell phenotype, genomics, and proteomics remains unclear. GALNT14 has two alternative splicing forms: one transcript contains exon 4 but not exons 2 and 3, and the other contains exons 2 and 3 but not exon 4 [36]. The direct and indirect roles of arecoline in regulating GALNT14 protein expression and GALNT14-rs9679162 genotype selection in betel nut users with OSCC require further study. mTPF-based chemotherapy and radiotherapy are often combined to treat OSCC. Sequential treatment with docetaxel and 5-Fu is commonly used to treat human oral cancer [37]. Compared with cisplatin and 5-Fu combination treatment, induction chemotherapy with the addition of docetaxel significantly enhanced progression-free and overall survival in patients with unresectable HNSCC [38]. Some studies have shown that high GALNT14 expression induces apoptosis and chemosensitivity. GALNT14 promotes the O-glycosylation of death receptors 4/5 (DR4/5) [39] and mediates tumor necrosis factor-related apoptosis-inducing ligand (TRAIL)-induced apoptosis in pancreatic carcinoma, non-small cell lung carcinoma, and melanoma cells [40]. GALNT14 protein expression was significantly higher in cell lines sensitive to dulanermin and drozitumab compared to that in resistant non-small cell lung cancer (NSCLC) cell lines [41].
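The −ΔCt and fold-change values quoted throughout rest on standard qPCR arithmetic; a minimal sketch of the Livak method, with hypothetical Ct values:

```python
# Sketch of the relative-expression arithmetic behind the -dCt values and
# fold changes reported for the qPCR experiments (Ct values hypothetical).
ct_target_treated, ct_ref_treated = 27.9, 18.1   # GALNT14 vs. housekeeping gene
ct_target_control, ct_ref_control = 26.4, 18.0

dct_treated = ct_target_treated - ct_ref_treated   # delta-Ct (treated)
dct_control = ct_target_control - ct_ref_control   # delta-Ct (control)
print("-dCt (treated):", -dct_treated)             # higher = more mRNA

ddct = dct_treated - dct_control                   # delta-delta-Ct
fold_change = 2 ** (-ddct)                         # Livak 2^-ddCt method
print(f"fold change vs. control: {fold_change:.2f}")  # <1 = downregulated
```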
However, some studies have shown that high GALNT14 expression induces chemoresistance in cancer cells. Overexpression of glycosylated P-glycoprotein (P-gp) in drug-treated cancer cells is one of the major causes of the failure of cancer chemotherapy. GALNT14 is associated with higher P-gp levels in adriamycin-resistant human breast cancer tissues [5]. To clarify whether the GALNT14 GG genotype or mRNA expression affects chemotherapy survival, cells were administered only a single drug for chemoresistance analysis. OSCC cell lines with the TT genotype were docetaxel resistant, and OSCC cell lines with high GALNT14 mRNA expression and the TG genotype showed 5-Fu resistance (Figure 5). If these preliminary results are not coincidental, there may be some correlation between GALNT14 genotype and GALNT14 expression. 5-Fu resistance is controlled by three major metabolic enzymes: thymidylate synthase (TS), dihydropyrimidine dehydrogenase (DPD), and thymidine phosphorylase (TP) [42,43]. Drug efflux mediated by transporter proteins, such as multidrug resistance 1 (MDR1) and MDR protein 5 (MRP5), plays a critical role in docetaxel resistance [44,45]. Based on the results of previous studies, it is speculated that the GALNT14-rs9679162 TT genotype and upregulation of GALNT14 enhanced the chemoresistance mechanisms against docetaxel and 5-Fu, respectively. Furthermore, GALNT14 mutations have been associated with neuroblastoma predisposition [46]. GALNT14 is more commonly mutated in the group with a non-complete response to neoadjuvant chemoradiotherapy in locally advanced rectal cancer [47]. Whether the mutation rate differs among the three GALNT14-rs9679162 genotypes is also an important issue for future studies. In this study, the frequency of GALNT14-rs9679162 genotypes and the expression of GALNT14 were analyzed using HNSCC tissues, HNSCC cell lines, and public HNSCC data platforms. The frequency of GALNT14-rs9679162 genotypes and GALNT14 expression differed among HNSCC sites. The GALNT14-rs9679162 non-TT genotype was associated with survival, and the GALNT14-rs9679162 allele was associated with alcohol consumption, betel nut consumption, and cigarette smoking. GALNT14 was upregulated in OPSCC but downregulated in OSCC and LSCC, which may be related to different carcinogenic risk factors. HPV infection or alcohol consumption in HNSCC may upregulate the expression of GALNT14, while betel nut chewing may downregulate GALNT14 in individuals with the TT genotype but upregulate it in individuals with the non-TT genotype. GALNT14-rs9679162 non-TT genotypes and high GALNT14 expression may enhance chemoresistance in HNSCC via different mechanisms. GALNT14-rs9679162 non-TT genotypes and GALNT14 expression can be used as indicators of prognosis and survival in HNSCC patients. In the future, the sample size from each HNSCC site should be increased to clarify the association of GALNT14-rs9679162 non-TT genotypes with GALNT14 expression and response to chemoradiotherapy. Neoadjuvant chemotherapy is administered preoperatively to reduce the tumor volume and to facilitate the main treatment, such as surgery or radiotherapy. The vascular bed surrounding the tumor provides efficient drug delivery. However, neoadjuvant chemotherapy also delays the main therapy, and the physician must ensure that the patient has a good response and that the tumor is not progressing during neoadjuvant chemotherapy.
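Cytotoxicity comparisons like the docetaxel and 5-Fu tolerance results above are typically summarized by an IC50 fitted to a dose-response (Hill) curve; a minimal sketch with hypothetical viability data:

```python
# Sketch: estimating an IC50 from a cytotoxicity assay by fitting a
# simplified two-parameter Hill curve. Values are hypothetical, in the
# spirit of the docetaxel/5-Fu resistance comparisons above.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, h):
    """Fraction of viable cells at drug concentration `conc`."""
    return 1.0 / (1.0 + (conc / ic50) ** h)

conc = np.array([0.01, 0.1, 1, 10, 100])            # uM, hypothetical
viability = np.array([0.98, 0.90, 0.55, 0.20, 0.05])

(ic50, h), _ = curve_fit(hill, conc, viability, p0=[1.0, 1.0])
print(f"IC50 = {ic50:.2f} uM, Hill slope = {h:.2f}")
```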
Therefore, predicting the patient's response to neoadjuvant chemotherapy can shorten the treatment time, reduce the side effects, and avoid the occurrence of drug resistance. This study showed that the GALNT14-rs9679162 genotype and GALNT14 mRNA expression are associated with post-treatment survival in head and neck cancer and can be used as indicators to predict the response to neoadjuvant chemotherapy.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers14174217/s1. Figure S1: All Western blot figures (uncropped blots), including densitometry readings/intensity ratios of each band. Figure S2: GALNT14 expression level was associated with alcohol consumption, tumor stage, and survival time in head and neck cancer. Figure S3: The relationship between GALNT14 expression and HPV positivity or radiosensitivity in head and neck cancer. Table S1: Clinical characteristics of the subjects in the GALNT14-rs9679162 polymorphism analysis. Table S2: Clinical characteristics of the subjects in the GALNT14 mRNA expression analysis.

Informed Consent Statement: Patient informed consent was waived because the tissue was obtained from a tissue bank and the data were de-identified.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
The Synergistic Effects of Curcumin and Chemotherapeutic Drugs in Inhibiting Metastatic, Invasive and Proliferative Pathways

Curcumin, the main phytochemical identified from the Curcuma longa L. family, is one of the spices used in alternative medicine worldwide. It has exhibited a broad range of pharmacological activities as well as promising effects in the treatment of multiple cancer types. Moreover, it has enhanced the activity of other chemotherapeutic drugs and radiotherapy by promoting synergistic effects in the regulation of various cancerous pathways. Despite all the literature addressing the molecular mechanism of curcumin on various cancers, no review has specifically addressed the molecular mechanism underlying the effect of curcumin in combination with therapeutic drugs on cancer metastasis. The current review assesses the synergistic effects of curcumin with multiple drugs and light radiation, from a molecular perspective, in the inhibition of metastasis, invasion and proliferation. A systematic review of articles published during the past five years was performed using MEDLINE/PubMed and Scopus. The assessment of these articles evidenced that combination therapy with various drugs, including doxorubicin, 5-fluorouracil, paclitaxel, berberine, docetaxel, metformin, gemcitabine and light radiation therapy, on various types of cancer is capable of inhibiting different metastatic pathways, which are presented and evaluated here. However, due to the heterogeneity of pathways and proteins in different cell lines, more research is needed to confirm the root causes of these pathways.

Introduction

Curcumin, extracted from the rhizome of Curcuma longa L., has been traditionally used as an alternative medicine for the treatment of different diseases. Beyond this traditional use, curcumin has various roles and functions. In fact, it is a strong anti-inflammatory, antioxidant and anti-proliferative agent [1][2][3]. Previous studies show that curcumin also exerts cytotoxicity on a broad range of cancer cell lines, such as lung, colon, breast and ovarian cancer [4][5][6][7]. However, despite its various functions, it has limited applications due to its low solubility in water, leading to low absorption and low oral bioavailability. Several experiments have been performed to improve the solubility of curcumin and enhance its therapeutic effects in different pathologies [8]. Curcumin targets several signaling pathways involved in the regulation of cell proliferation, invasion, metastasis and apoptosis via the regulation of different regulators, as detailed in Figure 1. Several studies have investigated the antitumor activity of curcumin on different cancer cells, revealing its mechanism of action on various signaling pathways. Recent reviews have summarized the dysregulation of the SIRT, JAK/STAT, MAPK, PI3K/Akt, Wnt/β-catenin, Notch and NF-κB pathways, which are highly involved in the regulation of cell proliferation, apoptosis, cell cycle arrest and oxidative stress, as well as invasion and metastasis.
An important signaling pathway that curcumin targets is the NF-κB signaling pathway (Figure 1). Nuclear factor kappa-light-chain activation suppresses apoptosis and induces cell invasion, metastasis and cell proliferation. In fact, activated NF-κB promotes the constitutive activation of IκB kinase, which causes the phosphorylation and degradation of IκBα (inhibitor of κB, α). Several studies on breast and prostate cancer cell lines show that curcumin inhibits the stimulation of the upstream regulator of NF-κB, thus reducing its signal and downregulating the expression of IκBα kinase, leading to cell apoptosis [14]. Moreover, curcumin downregulates the expression of NF-κB-regulated gene products, including IκBα, Bcl-2, Bcl-xL, cyclin D1, matrix metalloproteinases (MMP)-2 and -9 and urokinase-type plasminogen activator (uPA), in addition to interleukin (IL)-6 and IL-8 [15][16][17]. Important gene products of the NF-κB pathway involved in the regulation of tumor invasion and metastasis are uPA, MMP-2 and MMP-9. The uPA kinase binds to the uPA receptor and activates the protease plasmin, which degrades the extracellular matrix (ECM). Similarly, MMPs are endopeptidases that degrade the ECM, hence promoting tumor cell invasion and metastasis [18]. In this context, previous studies conducted on colon cancer cells revealed the potential role of curcumin in suppressing metastasis via AMPK activation and subsequent inhibition of NF-κB, uPA and MMP-9 [17]. Similarly, the anti-invasive properties of curcumin were demonstrated in MCF-7 breast cancer cells via a dose-dependent decrease in uPA protein levels [19]. The signal transducer and activator of transcription 3 (STAT3) signaling pathway is also involved in metastasis as well as migration and invasion into the ECM. Several cancer-derived cell lines depend on the constitutive activation of STAT3, which is overexpressed in tumor cells as a result of its phosphorylation by Janus kinases (JAKs).
A previous study suggests that STAT3 regulates cell proliferation and the expression of several proteins, namely c-myc, cyclin D, Bcl-2, vascular endothelial growth factor (VEGF) as well as MMP-2 and MMP-9 [20,21]. It was shown that curcumin is able to suppress the expression of STAT3 in various cell lines, including pancreatic, human non-small cell lung cancer, myeloid leukemia and breast cancer cells (Figure 1) [22][23][24][25]. Moreover, the expression of BCAT1 in cancerous cells plays a major role in the progression of myeloid leukemia. Curcumin was able to induce apoptosis in leukemia cells by suppressing the BCAT1 and mTOR pathways (Figure 1). In fact, mTOR can promote resistance of cells to different drugs, contributing to the proliferation and migration of cancerous cells to different tissues. Studies have indicated that curcumin is able to control proliferation through the cleavage of PARP1, a nuclear enzyme responsible for DNA repair whose upregulation has been reported in various human cancer cell lines [26]. Along with its extensive roles in regulating cancerous pathways in different cell lines and improving the well-being of patients, curcumin serves an important role in enhancing the chemotherapeutic effects of other drugs used in cancer therapy, such as doxorubicin, cisplatin and paclitaxel, by increasing the sensitivity of cell lines to these drugs, as previously detailed in a review published by Farghadani et al. [27]. The combination of curcumin with several drugs has reduced the toxic effects of the drugs themselves and has reduced resistance, making them more effective. In fact, previous studies reported that curcumin was able to enhance the sensitivity of cisplatin-resistant non-small cell lung carcinoma [28], while it was found effective in increasing the sensitivity of ovarian cancer cells to cisplatin when curcumin was combined with resveratrol [29]. The synergism between curcumin and many chemotherapeutic drugs has also enhanced beneficial effects in cancerous cells, such as the induction of apoptosis and the suppression of proliferation, invasion and metastasis. Metastasis is the process of the transformation of cells into a malignant form that involves a series of genetic and epigenetic modifications due to the increase in genomic instability. These alterations lead to irregular cell cycle control, resistance to apoptosis and the acquisition of unlimited replicative ability. This also enables the cells to invade, migrate and spread their properties to other cells and tissues [30]. Unfortunately, drugs that target metastatic pathways have shown long-term side effects that severely compromise the well-being of patients undergoing chemotherapy. For this reason, experts have resorted to the combination of these chemotherapeutic drugs with curcumin due to curcumin's ability to reverse the toxic effects of such drugs as well as its capability of enhancing the suppression of metastatic pathways, as described previously by Liu et al. [31]. The following review discusses the synergistic effects of curcumin combined with multiple drugs in the inhibition of various types of cancer cells' proliferation, invasion and metastasis, highlighting the underlying molecular pathways.
Results

The following section reviews the literature from the past five years reporting the promising effects of combining curcumin with various chemotherapeutic drugs or radiation on metastatic, invasive and proliferative cancer therapy in multiple cancer types. The combination of curcumin with each of 5-fluorouracil, doxorubicin, paclitaxel, metformin, docetaxel, berberine, gemcitabine and light radiation is described in the following subsections, and the data are summarized in a comprehensive supplementary table (Table S1).

The Hindrance of Metastatic/Proliferative/Invasive Pathways Using Curcumin and Light Radiation

The usage of radiotherapy in cancer treatment has shown potent activity against various cell lines and their respective metastatic pathways. Studies have shown that radiotherapy has become the standard treatment to achieve apoptotic and non-metastatic effects. However, these effects are not always guaranteed due to the heterogeneity of responses reported in different patients, in addition to the adverse side effects of radiation therapy on normal tissues. Increasing evidence has shown that many patients develop a relapse, which hinders the radiotherapeutic effect and promotes migration of the cancerous cells to other tissues [27]. For this reason, it is of great importance to resort to alternative therapies that avoid these drastic side effects by decreasing radiation doses while remaining effective. The combination of curcumin with light radiation has elicited promising effects in controlling metastasis and invasion through the suppression of multiple pathways, as summarized in Figure 2. In fact, curcumin has improved the efficacy of light radiation and has increased the sensitivity of the cancerous cells to radiation.
Studies have shown that radiation therapy induces EMT in cancer cells by downregulating E-cadherin and upregulating mesenchymal molecular markers. E-cadherin has a serious role in cancer since it stabilizes cell-cell adhesion in epithelial cells in addition to suppressing tumor transformation and growth. Moreover, MMP9 plays a determinant role in cancer invasion, as described above. Upon the addition of curcumin to radiation therapy, the expressions of E-cadherin, vimentin and SLUG, which are crucial EMT markers and promoters of invasion, were decreased, ultimately leading to the inhibition of EMT properties in A549 lung cancer cells. Moreover, the synergistic effect of curcumin and radiation suppressed MMP9 protein levels, which, in turn, inhibited E-cadherin levels, hence reducing the rate of invasion and metastasis of lung cancer cells [32].

According to another study, the combination of high concentrations of curcumin and light attenuated the adhesion and attachment of A498, Caki1 and KTCTL-26 cells to HUVECs (human umbilical vein endothelial cells). According to the study's results, the adhesive properties of Caki1 were completely blocked, while they were only partially blocked in KTCTL-26. Moreover, curcumin combined with light resulted in the complete downregulation of chemotaxis in Caki1, while upregulation of migration was observed in KTCTL-26 cells. Hence, a lower migration rate was not necessarily the outcome of low adhesion. The same study also portrayed the importance of evaluating cell surface proteins, such as integrins, due to their significant role in cell movement control. In all three cell lines, β1, β3, α3 and α5 were downregulated in the same manner by the synergistic effects of curcumin and light. β3 and α3 have been considered prognostic markers in renal cancer because they are associated with a higher spreading capacity of the cells. β1 can promote tumor growth as well as advance metastasis, while α5, which is most abundantly expressed on the cell surface, exerted paradoxical properties. However, the combination of light and curcumin was able to downregulate the expression of these integrins, thereby inhibiting cell invasion and metastasis [33]. Later, Rutz et al. demonstrated that curcumin plus light radiation inhibits cell growth, proliferation, adhesion and metastasis of DU145 and PC3 prostate cancer cells. The potent anti-invasion activity was demonstrated by the dysregulation of integrin subtype expression on DU145 and PC3 cells [34]. Another study reported on the expression of integrins and their effect on invasion and metastasis in bladder cancer.
The results showed that the decreased motility of RT112, UMUC-3 and TCCSUP cells was due to suppressed attachment of these cell lines to HUVECs. Moreover, the synergism of curcumin and light induced differences in integrin behavior, especially in α3, thus inhibiting cell adhesion. The results also revealed the role of α5 receptors in controlling both adhesion and chemotaxis in the three cell lines, whereas β1 solely acted on migration [35]. Another study, published recently in 2022, investigated the role of curcumin in enhancing radiation therapy efficacy on glioblastoma cells in vitro. The results showed that curcumin, when combined with high linear energy transfer (LET) radiation, was able to significantly suppress glioblastoma cell invasion when compared to cells treated with curcumin alone or curcumin in combination with low-LET γ radiation [36].

The Hindrance of Metastatic/Proliferative/Invasive Pathways by Curcumin and Doxorubicin

Doxorubicin (DOX) is derived from the bacterium Streptomyces peucetius var. caesius and belongs to the antibiotic family of anthracyclines. Doxorubicin mainly targets DNA molecules by intercalating between their base pairs, which ultimately leads to the inhibition of topoisomerase II. The DOX-mediated stabilization of topoisomerase II halts the replication process by inhibiting DNA resealing [37]. Currently, DOX is the most efficient chemotherapeutic agent for the treatment of breast cancer and has shown an approximate response rate of 35% in metastatic breast cancer [38]. However, DOX has exhibited some life-threatening effects that hinder its clinical use, with cardiotoxicity being the most critical side effect. Additionally, cancer cells in some patients may develop DOX resistance through the modification of various pathways, eventually promoting continuous growth and survival despite the chemotherapy. As such, using DOX for long-term treatment can be challenging [39].

A study reported that the high expression of Aurora A in MCF-7 cells is correlated with the promotion of tumorigenesis and with decreased sensitivity to chemotherapeutic drugs, such as doxorubicin. In fact, Aurora A is a regulatory kinase and an important regulator of proliferation, migration, invasion, metastasis and apoptosis, as detailed in a review published by Lin et al. [40]. After treating MCF-7 cells with CUR + DOX, Aurora A was downregulated in a time-dependent manner, accompanied by a drastic reduction in the proliferation rate. p21 levels were also measured to examine the relationship between Aurora A inhibition and this protein. The synergistic effects of curcumin and doxorubicin were also able to inhibit the expression of p53; however, the complete mechanism was not clarified [41]. Another study investigated the enhanced anti-cancer activity of doxorubicin when combined with curcumin on the gastric adenocarcinoma cell line (AGS) in vitro. The results revealed more prominent anti-proliferative, pro-apoptotic, anti-invasive and anti-metastatic activity of doxorubicin when co-administered with curcumin on AGS cells. In this study, DOX + CUR had a significantly higher effect on cell viability when compared to doxorubicin or curcumin alone. Similarly, alterations in cell morphology, such as membrane damage, reduced cell size and cell shrinkage, were significantly more pronounced in AGS cells treated with DOX + CUR than with doxorubicin or curcumin monotherapy.
This study further examined the enhanced activity of doxorubicin combined with curcumin on cell motility. The results showed a dose-dependent inhibition of invasion and migration, as revealed by the scratch wound-healing assay and the transwell migration assay performed on AGS cells in vitro [42].

Another study explored combination strategies by creating multi-pH-sensitive polymer-drug conjugates mixed with micelles for the efficient delivery of doxorubicin and curcumin. This strategy was tested on MDA-MB-231 breast cancer cells to investigate the efficient co-delivery of the drugs and the suppression of tumor metastasis in breast cancer cells. First, the polymeric micelles showed a synergistic anti-proliferative effect on MDA-MB-231 cells in a dose-dependent manner. Moreover, a significant inhibition of tumor cell invasion was observed upon treatment with DOX + CUR as compared to doxorubicin and curcumin alone. An important process promoting tumor metastasis is transendothelial migration (TEM). The results showed a significant inhibitory effect of DOX + CUR on the migration of MDA cells across HUVEC-coated wells. In vivo studies noted the superior role of doxorubicin combined with curcumin in the suppression of proliferation and pulmonary metastasis of cancer cells [43]. A summary of the molecular targets of DOX + CUR identified to date is depicted in Figure 3.
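The scratch wound-healing readout used in the AGS study above [42] is usually quantified as percent wound closure relative to the initial wound area; a minimal sketch, assuming hypothetical area measurements (e.g., pixel counts from ImageJ):

```python
# Sketch: percent wound closure from a scratch assay, as used to compare
# DOX + CUR against monotherapy above. Areas are hypothetical pixel counts
# measured at 0 h and 24 h; less closure indicates less migration.
wound_area = {
    "control": {"t0": 120_000, "t24": 30_000},
    "DOX":     {"t0": 118_000, "t24": 55_000},
    "DOX+CUR": {"t0": 121_000, "t24": 95_000},
}

for group, a in wound_area.items():
    closure = 100 * (a["t0"] - a["t24"]) / a["t0"]
    print(f"{group:8s} wound closure: {closure:5.1f} %")
```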
The Hindrance of Metastatic/Proliferative/Invasive Pathways by Curcumin and 5-Fluorouracil

5-Fluorouracil (5FU), a pyrimidine analogue of uracil, carries a fluorine atom at position 5 in place of a hydrogen atom. This agent has multiple therapeutic uses owing to its anti-metabolite and anti-cancer properties. In fact, once 5FU enters the cells, it is directly converted to several active metabolites. These active metabolites suppress proliferation by interfering with DNA synthesis through the inhibition of thymidine formation [27]. Despite its various clinical uses, 5FU has exhibited long-term cognitive side effects as well as nausea, cardiotoxicity and hepatotoxicity, which have restricted its usage in breast cancer patients. Another limitation of 5FU is the resistance developed by cells, which mitigates its clinical use against different types of cancers.

Studies have shown that the combination of curcumin with 5-fluorouracil reduces NNMT-related resistance and downregulates NNMT via the suppression of p-STAT3. Firstly, measurement of the IC50 of low-dose CUR + 5FU showed a drastic reduction in the values in the two cell lines SW480 and HT-29, indicating decreased proliferation and reduced resistance to 5FU. This synergy also downregulated the mRNA expression of NNMT. Even though the authors reported a decrease in p-STAT3 in the SW480 cells, one cannot conclude that NNMT inhibition is due to low p-STAT3 expression [44]. To further investigate the synergistic effects of CUR and 5FU, hepatocellular carcinoma cell lines, as well as mice, were used. Increasing doses of CUR with a constant concentration of 5FU in SMMC-7721 cells caused an increase and then a decrease in nuclear NF-κB, meaning that the synergism of both agents inhibited the transfer of NF-κB from the cytoplasm to the nucleus (Figure 4). Moreover, COX-2 protein was downregulated in SMMC-7721, Bel-7402, HepG-2, MHCC97H and L02 cells [45]. Curcumin was further investigated for its potent role in reducing CAF (cancer-associated fibroblast)-induced resistance to 5-FU in tumor cells through the suppression of the JAK/STAT3 signaling pathway [46]. Another study revealed that increasing the concentration of curcumin in HCT-116 cells resistant to 5FU increased sensitization and decreased proliferation rates, with increased expression of TET1, NKD2 and Vimentin, whereas downregulation of β-catenin, E-cadherin, TCF4 and Axin expression was reported. This study also mentioned that TET-1 is responsible for inhibiting the methyltransferase-catalyzed covalent bonding between cytosine and methyl groups, thereby contributing to demethylation. TET-1 was deemed a novel cancer suppressor gene: the increased levels in this study showed that it upregulated NKD2, which ultimately inhibited the WNT pathway. Pax-6 acted as a transcriptional mediator of TET-1 expression, with NKD2 and TET-1 also increasing upon its upregulation [47] (Figure 4).
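The IC50 shifts reported for CUR + 5FU above are one way to read synergy; another common summary is the Chou-Talalay combination index (CI < 1 indicating synergy), which the cited studies do not report directly. A minimal sketch with hypothetical dose values:

```python
# Sketch: Chou-Talalay combination index for a curcumin + 5-FU pair.
# d1, d2: doses of each drug in combination achieving a given effect
# (e.g., 50% inhibition); dx1, dx2: doses of each drug alone achieving
# the same effect. All values hypothetical.
def combination_index(d1, dx1, d2, dx2):
    return d1 / dx1 + d2 / dx2

ci = combination_index(d1=5.0, dx1=20.0, d2=2.0, dx2=12.0)  # uM, hypothetical
print(f"CI = {ci:.2f} ->", "synergy" if ci < 1 else "additivity/antagonism")
```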
The Hindrance of Metastatic/Proliferative/Invasive Pathways by Curcumin and Paclitaxel

Paclitaxel (C47H51NO14) is a well-known anti-cancer drug produced in the bark and needles of Taxus brevifolia. Paclitaxel has anti-tumor effects, mainly leading to mitotic arrest [48]. Paclitaxel promotes microtubule stabilization, thus preventing cell cycle progression, mitosis and growth of several types of cancers [49]. An increasing number of studies have revealed that curcumin, in combination with paclitaxel, promotes the inhibition of cell migration and thus the inhibition of mitosis in various cell lines (Figure 5).

One of the many studies on this combination investigated its effects on ovarian cancer, both in vitro and in vivo. There was significant inhibition of cell migration in the SKOV3 cell line in response to treatment with paclitaxel alone, as well as combined with curcumin. However, paclitaxel alone was not enough to affect the migration of the multi-drug-resistant cells of that same line. When combined with curcumin, however, the susceptibility of the multi-drug-resistant cell line to the treatment was restored, and a dose-dependent inhibition of metastasis was observed [50]. Even though the authors did not further investigate the molecular pathway underlying the inhibition of migration, previous studies in the literature provide a possible explanation: paclitaxel promoted the activation of NF-κB in breast cancer cells while curcumin inhibited its expression by inhibiting IκBα kinase activation. Furthermore, they revealed that this combination suppressed metastatic proteins, such as VEGF, MMP-9 and intercellular adhesion molecule-1, thus leading to the suppression of metastasis [51] (Figure 5).

Recently, new research has investigated the molecular pathways underlying the anti-metastatic potential of curcumin combined with paclitaxel [52]. Vascular endothelial growth factor (VEGF), cyclin D and STAT3, all involved in metastasis, were shown to be effectively suppressed by curcumin treatment as well as in synergy with paclitaxel. The results reveal a downregulation in the gene expression of the three factors, with an upregulation of the pro-apoptotic caspase 9 (Figure 5). Overall, a potent regulation of metastasis was observed when cancer cells were exposed to curcumin alone or in combination with paclitaxel as compared to the effect of the chemotherapeutic drug alone [52].
The Hindrance of Metastatic/Proliferative/Invasive Pathways by Curcumin and Berberine

Berberine is an isoquinoline alkaloid isolated from the Chinese herb Coptis chinensis. It has anti-proliferative activity reported in gastric cancer cells and anti-metastatic effects reported in breast cancer cells when combined with chemotherapy. A combination of curcumin, berberine and quercetin was shown to effectively inhibit the expression of epithelial E-cadherin, mesenchymal N-cadherin, β-catenin, the CD44 marker and MMP9 in triple-negative breast cancer cells (Figure 5). This cancer type is characterized by a lack of expression of estrogen, progesterone and human epithelial growth factor (HER2) receptors, making it difficult to treat by hormonal therapy and rendering it better managed by traditional cytotoxic chemotherapy. In the context of breast cancer, both E- and N-cadherins are involved in the EMT process and promote oncogenesis and metastasis. Similarly, the CD44 surface adhesion receptor facilitates the invasion of cancer cells and their migration. Thus, inhibition of these markers by the multi-compound combination has anti-proliferative and significant anti-migratory consequences that potentiate each compound's individual effect [53]. Another study aimed to evaluate the anti-cancer properties of curcumin, berberine and 5-Fu, alone or in combination, on breast cancer cells. The strongest effect on cell growth and invasion of MCF-7 cells was noticed when the cells were treated with CUR + BER + 5-Fu compared to a control or to each drug alone. This study demonstrated the potential pro-apoptotic and anti-invasive properties of curcumin in combination with berberine and 5-Fu [54].

The Hindrance of Metastatic/Proliferative/Invasive Pathways by Curcumin and Docetaxel

Docetaxel (DTX), sold under the brand name Taxotere, is part of the taxane family, a class of diterpenes originating from plants of the genus Taxus. It is a chemotherapeutic agent typically used in many cancer types, especially unresectable and metastatic cases. It interferes with microtubule assembly by binding β-tubulin, thus leading to cell cycle arrest at the G2/M phase. Docetaxel also downregulates the expression of the anti-apoptotic, pro-proliferative Bcl-2 protein. Treatment with docetaxel combined with curcumin demonstrated great efficiency, both in vitro and in vivo, in pancreatic cancer, lung cancer, glioma (brain tumor) and esophageal squamous cell carcinoma [55]. The molecular pathways involved are illustrated in Figure 6. Particularly, in PANC-1 pancreatic cancer cells, the combination led to the downregulation of MMP2 and MMP9, both pro-invasive and pro-metastatic metalloproteinases.
This downregulation is mediated through an upregulation of tissue inhibitor of matrix metalloproteinase 1 (TIMP-1), a natural inhibitor of MMPs. The same combination strategy was used to treat esophageal squamous cell carcinoma using ESCC KYSE150 and KYSE510 cells, demonstrating its ability to weaken the healing capacity of the cancer cells and inhibit their invasion [56].

Another study demonstrated a different intervention for the usage of DTX and CUR against pancreatic cancer. GE11-DTX-CUR NPs are nanoparticles synthesized to deliver DTX and CUR into tumor cells; they display the EGFR-targeting GE11 peptide on their surface. LNCaP cells were used in this study and were subjected to in vitro and in vivo assays. A low IC50, the concentration of a drug required to achieve 50% inhibition of cell proliferation, was observed. In addition, CUR inhibits the PI3K/AKT pathway involved in the proliferation, apoptosis and metastasis of the tumor. CUR also lowers endoplasmic reticulum stress, a pathway that allows tumor survival and expansion. Therefore, CUR and DTX introduced as GE11-DTX-CUR NPs constitute a synergistic antitumor treatment for pancreatic cancer [57]. Curcumin and docetaxel co-loaded poly(lactide-co-glycolide) (PLGA) nanoparticles were also used to target U87 glioma cells and bEND.3 endothelial cells [58]. These nanoparticles can cross the blood-brain barrier and deliver the drugs (DTX + CUR). A low IC50 was obtained, indicating the efficiency of the formulation against invasion and metastasis.

The Hindrance of Metastatic/Proliferative/Invasive Pathways by Curcumin and Metformin

Metformin was FDA-approved in 1994 and is best known as an antidiabetic agent used in type 2 diabetes mellitus patients. It can further be used in combination with other agents, such as curcumin, against cancer. Metformin is known as a suppressor of mTOR activity, acting by activating ataxia telangiectasia mutated (ATM), liver kinase B1 (LKB1) and adenosine-monophosphate-activated kinase (AMPK), thus preventing protein synthesis and cell growth. Several studies support the idea that metformin in combination with curcumin could be a potential treatment against different types of cancer. The HCC cell lines HepG2 and PLC/PRF/5, involved in hepatocellular carcinoma, were treated with metformin and curcumin.
An increase in the inhibition rate of invasion for HepG2 and PLC/PRF/5 cells and the downregulation of MMP2 and MMP9 in HepG2 cells resulted from the co-administration of curcumin and metformin. Moreover, an upregulation in the expression of the tumor suppressors PTEN and p53 was observed, as well as an inhibition of NF-κB levels upon treatment [59]. Therefore, a combination of metformin with curcumin could be used to affect invasion and metastasis by inhibiting MMP2 and MMP9 expression and the PTEN/PI3K/Akt/mTOR/NF-κB signaling pathway (Figure 6). A study on gastric adenocarcinoma also supported this combination therapy against cell proliferation. The treatment consisting of metformin plus curcumin played a key role by reducing cell migration and invasion in a dose- and time-dependent manner, thus affecting the metastatic potential of human AGS gastric cells [60].

The Hindrance of Metastatic/Proliferative/Invasive Pathways by Curcumin and Gemcitabine

Gemcitabine (GEM), a pyrimidine nucleoside antimetabolite, is a cytotoxic agent showing promising results when used in treatment against solid tumors. Since gemcitabine is a pyrimidine nucleoside analogue, it can be incorporated into DNA, thus interfering with and blocking DNA synthesis [61]. It has been shown that gemcitabine in combination with curcumin has synergistic activity against cancer with low toxicity. A study performed on pancreatic cancer cells evaluated the effect of GEM in combination with curcumin. The results showed a more efficient decrease in the number of invasive pancreatic cancer cells when exposed to GEM plus curcumin [55]. In fact, this combinatorial treatment was found to exhibit higher anti-proliferative and pro-apoptotic activity on PANC-1 cells in vitro. Further, wound healing assays were performed to determine the inhibitory effect of CUR + GEM on migration, and the findings were supported by the significant reduction in N-cadherin and Vimentin expression. The anti-invasion properties of curcumin in combination with GEM were then investigated in PANC-1 cells. The results showed a significant decrease in the rate of invasive cells along with an upregulation in the expression of TIMP1 in a dose-dependent manner. Moreover, a downregulation in the expression of MMP2 and MMP9 was noticed upon treatment with CUR + GEM. Overall, this study demonstrates the synergistic anti-proliferative, pro-apoptotic, anti-invasive and anti-migration effects of GEM when combined with curcumin (Figure 6). In addition, a positive correlation between GEM plus curcumin treatment and the preservation of quality of life in patients was displayed. In metastasized cancer, high baseline levels of IL-6 and sCD40L are encountered. IL-6, an immunosuppressive cytokine, exhibits a positive correlation with sCD40L (which activates T lymphocytes), thus allowing tumor growth. However, no increase in these biomarkers took place with the intake of this treatment [62]. Another study involving pancreatic cancer and treatment with GEM plus curcumin was conducted on HPAF-II and PANC-1 cell lines [63]. In this study, a superparamagnetic iron oxide nanoparticle-curcumin formulation combined with gemcitabine (SP-CUR + GEM) was used to enhance drug delivery into pancreatic cancer cells. This combinatorial treatment showed an increase in the expression of E-cadherin, which is involved in metastasis inhibition. An important pathway, the Sonic hedgehog (SHH) pathway, involved in the regulation of cell progression, was further studied.
Changes in key regulatory proteins of SHH, including Gli-1 and Gli-2, were determined upon treatment with curcumin and gemcitabine (Figure 6). Moreover, SP-CUR enhanced the uptake of GEM into the cells through the inhibition of the CXCL-12/CXCR-4 pathway, which is highly involved in the regulation of growth, survival and metastasis. Another study performed on GEM-resistant lung cancer cells demonstrated the role of curcumin in improving the sensitivity of A549 cells to gemcitabine. The combination treatment inhibited the invasion and migration of lung cancer cells, as shown by the downregulation of MMP9, Vimentin and N-cadherin, together with overexpression of E-cadherin [64] (Figure 6).

The Hindrance of Metastatic/Proliferative/Invasive Pathways by Curcumin and Carboplatin

Carboplatin, sold under the trade name Paraplatin, is a chemotherapy medication used to treat several types of cancer. However, the use of carboplatin has been limited by its side effects, such as myelosuppression. Interestingly, a previous study revealed the potent role of curcumin in reversing carboplatin-induced toxicity and myelosuppression [65]. The combination of curcumin with carboplatin showed prominent results on the inhibition of tumor cell proliferation, invasion and migration in lung and breast cancer cells. A study conducted by Kang et al. investigated the synergistic anti-proliferative and anti-metastasis properties of curcumin and carboplatin in lung cancer cells [66]. Based on the results obtained, the combination treatment revealed a synergistic, dose-dependent inhibition of cell proliferation in A549 lung cancer cells along with changes in cell morphology, including cell shrinkage, loss of cell-to-cell contact, membrane blebbing and partial detachment. The rates of migration and invasion were further investigated via wound healing and Matrigel cell invasion assays. The data suggest a significant inhibitory effect on migration and invasion upon co-treatment with curcumin and carboplatin. To further understand the anti-invasive activities of this combination, two different metastasis regulators, namely MMP2 and MMP9, were assessed. The combination of curcumin and carboplatin was found to significantly suppress metastasis through the inhibition of these two markers. Another study on breast cancer revealed that this combinatorial treatment inhibits cell proliferation and promotes apoptosis in CAL-51 and MDA-MB-231 cells in vitro. The effect of curcumin combined with carboplatin on colony formation was evaluated, and the results showed a dose-dependent inhibition of colony formation in both cell lines. At the molecular level, a downregulation of Rad51, a DNA repair gene, and an upregulation of γH2AX, a marker of DNA damage, were reported [67]. It was previously reported that the Rad51 protein is overexpressed in cancer cells. Dysregulation or inhibition of this protein was proposed to be responsible for the suppression of metastasis and invasion [68]. Taken together, the study by Wang et al. suggests a potent anti-metastasis activity of curcumin and carboplatin through the inhibition of Rad51 in breast cancer cells.
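The colony formation readout used in the carboplatin studies above is conventionally reported as a surviving fraction normalized to the plating efficiency of untreated cells; a minimal sketch with hypothetical counts:

```python
# Sketch: surviving fraction from a clonogenic (colony formation) assay.
# Counts are hypothetical; PE = plating efficiency of untreated cells.
seeded_control, colonies_control = 200, 120
seeded_treated, colonies_treated = 1000, 90   # curcumin + carboplatin arm

pe = colonies_control / seeded_control        # plating efficiency
sf = colonies_treated / (seeded_treated * pe) # surviving fraction
print(f"PE = {pe:.2f}, surviving fraction = {sf:.3f}")
```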
The Hindrance of Metastatic/Proliferative/Invasive Pathways by Curcumin and Other Drugs

A few recent studies from the past five years have investigated the superior role of curcumin when combined with other drugs in the inhibition of tumor cell proliferation, invasion and metastasis; these are summarized in Table 1. A recently published study evaluated the potential therapeutic effects of curcumin and resveratrol hybrids on colorectal cancer cells, namely SW620 and SW480. The results showed that different hybrid molecules induce apoptosis through the caspase-dependent pathway in both cell lines. Further, a direct interaction between the hybrids and MMP7 molecules was demonstrated, suggesting the modulation of MMP7 catalytic activity and hence the prevention of cancer cell progression [69]. Similarly, Panda et al. investigated several hybrids of curcumin and dichloroacetate (DCA), a pyruvate dehydrogenase kinase 1 (PDK1) inhibitor, able to promote apoptosis in breast cancer cells. In vitro and in vivo results revealed the anti-proliferative and anti-metastatic activity of these conjugates in breast cancer [70]. The data suggest that curcumin-DCA hybrids significantly reduced cell viability and colony formation in MDA-MB-231 and T47D cells in a dose-dependent manner. Moreover, the combination of two phytochemicals, curcumin and luteolin, showed promising anti-cancer activity in colon cancer [71]; curcumin and luteolin synergistically inhibit the proliferation and migration of colon cancer cells, as shown by the wound healing assay. Moreover, protein expression analysis revealed the suppression of Notch-1 and TGF-β in vitro and in xenograft mice. These data suggest that curcumin and luteolin are effective in suppressing colon cancer cell proliferation and metastasis via the Notch-1 and TGF-β signaling pathways. A different study evaluated the anti-cancer activity of curcumin when combined with aprepitant, a drug known for its antitumor properties in various cancers [72]. A study conducted on hepatocellular carcinoma demonstrated that liposome conjugates of curcumin and aprepitant are able to reduce ECM deposition through the inhibition of collagen synthesis. Wound healing assays showed a higher inhibition of the migration rate of SMMC-7721 cells when treated with the liposome conjugates of curcumin and aprepitant. These findings were further confirmed by the transwell migration assay. The results revealed that the combination of curcumin with aprepitant is able to suppress tumor cell growth and migration, as well as inhibiting lung metastasis in vivo [73]. The combination of curcumin was further studied with two other bioactive compounds, namely thymoquinone (TQ) and 3,3′-diindolylmethane (DIM), on A549 lung cancer cells and HepG2 liver cancer cells in vitro [74]. The authors assessed the anti-metastasis activity of curcumin combined with TQ and DIM, alone or together, in A549 cells by performing wound healing and colony formation assays. Their results revealed a significantly higher, dose-dependent inhibitory effect on migration as well as colony formation as compared to control samples. Furthermore, this combination upregulates caspase-3 protein levels while it significantly downregulates PI3K and AKT levels in A549 lung cancer cells. These data show that the combination therapy suppresses tumor cell proliferation as well as migration activity via the inhibition of the PI3K/Akt pathway in A549 and HepG2 cells. Additionally, Shao et al. assessed the synergistic anti-proliferative and anti-metastatic activity of curcumin combined with a new biflavonoid, wikstroflavone B (WFB), on nasopharyngeal carcinoma cells (NCC), namely CNE-1 cells, in vitro [75]. Migration assays revealed a significantly higher inhibition of CNE-1 tumor cell migration as compared to the control group.
Curcumin along with WFB inhibits tumor cell proliferation in a dose-dependent manner, as revealed by the dysregulation of several tumor growth markers, including survivin, cyclin D1, p53 and p21 gene expression. Further, the data revealed the modulation of tumor invasion and metastasis markers, such as MMP-2 and MMP-9, as well as FAK gene expression in CNE-1 cells. Since the qRT-PCR data suggested that FAK was one of the most highly regulated genes upon treatment with CUR and WFB, the authors investigated the regulation of the FAK/STAT3 pathway in CNE-1 cells. Western blot analysis showed a significant decrease in the protein expression levels of p-FAK and p-STAT3 in CNE-1 cells upon CUR + WFB treatment. To further confirm that the FAK/STAT3 pathway is involved in the previously determined anti-cancer activity, CNE-1 cells were pretreated with a FAK inhibitor; a more potent inhibitory effect on cell proliferation and migration was observed when the NCC were pre-exposed to the FAK inhibitor compared with CUR + WFB treatment alone. The data reported in this study suggest that the combination of curcumin and WFB plays a crucial role in the regulation of NCC growth, proliferation, invasion and metastasis in a FAK/STAT3-dependent manner.

Discussion

Combination therapy is an emerging approach that is effective in increasing the treatment efficacy of cancer by limiting the major drawbacks of various chemotherapeutic drugs. Many studies in the literature target the combination of curcumin with various therapeutic agents, revealing a synergistic inhibitory effect on several pathways involved in the regulation of cell proliferation, metastasis and invasion. The data presented in this review outline the underlying molecular mechanisms of curcumin combined with various drugs; the results provide solid evidence that curcumin combination chemotherapy interferes with multiple distinct cellular factors, as depicted in Table S1. In brief, this combinatorial treatment was found to be effective in the suppression of migration proteins, including MMPs, VEGF and cytokines, as well as several signaling pathways, such as the NF-κB and JAK/STAT pathways. However, the clinical therapeutic application of these combinatorial regimens has been hampered by major limitations: the hydrophobicity of curcumin results in poor bioavailability due to its low absorption into the plasma, non-uniform biodistribution and poor localization in the targeted cancerous tissues, hence increasing drug toxicity [76]. For this reason, a new perspective using curcumin along with polymer-based nanocarriers is currently under investigation. These conjugates provide effective drug delivery, reduce drug toxicity and increase drug stability [77]. Scientists are able to control several factors, including the shape, size and composition of the nanoparticles, to ensure effective co-delivery of selected chemotherapeutic drugs into the tumor microenvironment [78]. Recently, these nanocarriers have been thoroughly investigated in combination therapy with several of the pharmaceutical agents presented in the results section, paving the way for promising clinical applications. Multi-pH-sensitive conjugate micelles of curcumin and doxorubicin synergistically inhibited cell proliferation and invasion of breast cancer cells [43]. Similarly, a nanoparticle formulation of encapsulated paclitaxel and curcumin was able to inhibit tumorigenesis of ovarian cancer cells due to the efficient co-delivery of these drugs [50].
The various nanocarrier formulations, including polymeric nanoparticles, micelles, nanoliposomes, polymer-drug conjugates, dendrimers, hydrogels, nanocapsules and exosomes, are all promising strategies for the use of curcumin in combination with different anti-cancer agents [77]. It is noteworthy that, despite the recent advances in polymeric nanoparticle therapy, scientists still face several challenges, such as the high cost of the nanoformulations as well as their long-term toxicity, which require further investigation. However, all the current studies in this field show promising results and pave the way for in-depth clinical studies of curcumin nanoformulations in synergy with chemotherapeutic drugs to inhibit tumor metastasis and invasion.

Search Strategy

The PRISMA guidelines were followed to evaluate the literature on curcumin and its synergistic effects with multiple chemotherapeutic drugs. The MEDLINE/PubMed and Scopus databases were searched up to 2022. These two databases were chosen for their easy navigation and their coverage of our scope of review in cancer chemoprevention. The search keywords applied were: (curcumin) AND (metastasis OR metastatic pathways) AND (proliferation OR proliferative pathways) AND (invasion OR invasive pathways) AND (combination therapy OR in combination with chemotherapeutic agents) AND (synergism OR synergistic effects OR in synergy) AND (inhibition OR inhibiting OR prevention); a sketch of how such a query might be executed programmatically is given after the Data Extraction subsection below. During article searching, filters were applied to select studies published within the last five years, while older studies were also referenced in the introduction.

Inclusion Criteria

The inclusion criteria for the systematic review were: (i) mechanisms of suppressing the different pathways and proteins involved in metastasis, proliferation and invasion; (ii) published articles reporting original research; (iii) published articles written in English with free access to their full texts; (iv) in vitro human cancer cell line studies as well as in vivo animal studies; (v) combination therapy of curcumin with various chemotherapeutic drugs in inhibiting metastasis, proliferation and invasion; (vi) studies that solely used curcumin as a means of treating multiple cancer cell lines in different pathways.

Exclusion Criteria

The exclusion criteria were: (i) studies addressing other therapeutic effects of curcumin, such as its anti-inflammatory role or its role in ameliorating cardiovascular pathologies; (ii) studies that used the combination of curcumin and chemotherapeutic drugs to regulate other events in cancer, such as apoptosis, necrosis or cell cycle arrest; (iii) studies that used only curcumin as a therapeutic agent in inhibiting metastasis, proliferation and invasion; (iv) published articles that are reviews, letters to editors or authors, or commentaries.

Data Extraction

The data extracted for the included studies were the type of pharmacological intervention, the methods, the cell lines, the molecular outcomes and the study conclusions.
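As a practical illustration of the search strategy above, the following is a minimal sketch of running the stated query against PubMed with Biopython's Entrez wrapper. This is an assumption for illustration only: the authors do not state how the query was executed, and the e-mail address and date window are placeholders.

```python
from Bio import Entrez  # Biopython

Entrez.email = "your.name@example.org"  # NCBI requires a contact address (placeholder)

# The boolean query as stated in the Search Strategy above.
query = (
    "(curcumin) AND (metastasis OR metastatic pathways) "
    "AND (proliferation OR proliferative pathways) "
    "AND (invasion OR invasive pathways) "
    "AND (combination therapy OR in combination with chemotherapeutic agents) "
    "AND (synergism OR synergistic effects OR in synergy) "
    "AND (inhibition OR inhibiting OR prevention)"
)

# esearch returns matching PubMed IDs; mindate/maxdate restrict the publication
# window (here, an assumed five-year window ending in 2022).
handle = Entrez.esearch(db="pubmed", term=query, mindate="2017", maxdate="2022",
                        datetype="pdat", retmax=500)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records; first IDs:", record["IdList"][:5])
```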
Conclusions

The current review is the first to assess the various combination therapies between curcumin and multiple drugs that have shown inhibitory properties against metastatic pathways. The synergistic effects of curcumin in combination with synthesized chemical pharmaceuticals showed potential in suppressing different mechanisms and proteins involved in cell invasion, proliferation and metastasis, including SIRT, JAK/STAT, MAPK, PI3K/Akt, Wnt/β-catenin, Notch and NF-κB, as well as uPA, MMPs, VEGF and interleukins. However, the current studies are limited in that evidence is lacking on the long-term toxicity of such synergy and on the core proteins that drive the inhibition of these pathways. Further research is required to comprehend the complete, complex regulatory networks that contribute to the anti-cancer actions of curcumin in combination chemotherapy. Perhaps future research can expand on regulatory proteins in different metastatic pathways. Since the literature shows that the most significant anti-cancer properties are observed upon combination of curcumin with other drugs, it is highly possible to consider integrating curcumin into clinical practice.
Interactive effects of rice bran compost and chemical fertilizers on macronutrients, oil and protein content in sunflower (Helianthus annuus L.)

A field experiment was conducted at the research farm of Charfasson Govt. College, Bhola, Bangladesh in the rabi season of 2015-2016 to evaluate the impact of conjunctive use of chemical fertilizers with rice bran on nutrient concentration, uptake and seed quality of sunflower cv. BARI-2 (Keroni-2). The experiment was laid out in a randomized complete block design (RCBD) with sixteen treatments and three replications. The size of each plot was 3 m × 2 m. Treatments were T1: Control (no RB and no NPK); T2: 2.5 t RB ha-1; T3: 5.0 t RB ha-1; T4: 7.5 t RB ha-1; T5: N40P30K50 kg ha-1; T6: N80P60K100 kg ha-1; T7: N120P90K150 kg ha-1; T8: 2.5 t RB ha-1 + N40P30K50 kg ha-1; T9: 2.5 t RB ha-1 + N80P60K100 kg ha-1; T10: 2.5 t RB ha-1 + N120P90K150 kg ha-1; T11: 5.0 t RB ha-1 + N40P30K50 kg ha-1; T12: 5.0 t RB ha-1 + N80P60K100 kg ha-1; T13: 5.0 t RB ha-1 + N120P90K150 kg ha-1; T14: 7.5 t RB ha-1 + N40P30K50 kg ha-1; T15: 7.5 t RB ha-1 + N80P60K100 kg ha-1; T16: 7.5 t RB ha-1 + N120P90K150 kg ha-1. Results showed that the concentration, uptake and seed quality (oil and protein) of the crop increased significantly (P<0.05) over the control with increasing rate of the amendments, and the variation between treatments was also significant in most cases irrespective of the source of amendment. Generally, combinations of the treatments performed better than their individual application. Maximum values of NPKS concentration (%) in different organs of sunflower were 1.22, 0.35, 1.90 and 0.18 for stem; 1.17, 0.35, 2.41 and 0.16 for root; 3.98, 0.43, 4.28 and 0.24 for leaf; 1.04, 0.65, 3.00 and 0.22 for petiole; 2.16, 0.58, 2.21 and 0.26 for inflorescence; and 5.24, 0.83, 1.60 and 0.47 for seed, measured in treatments 5.0 t RB ha-1 + N120P90K150 kg ha-1 and 7.5 t RB ha-1 + N120P90K150 kg ha-1 in most cases. Uptake followed the same trend as concentration, with the highest values found in those treatments in most cases. The significantly (P<0.05) highest oil content in seed (51.1%) was measured in the treatment 5.0 t RB ha-1 + N120P90K150 kg ha-1, and the highest protein content (33.9%) in the treatment 5.0 t RB ha-1 + N80P60K100 kg ha-1. The lowest oil value was found in the control, and the lowest protein value in 2.5 t RB ha-1, which was lower than the control treatment. The overall findings of this study indicate that rice bran in combination with chemical fertilizers could be applied to achieve better nutrient concentration and uptake in different organs and higher oil and protein content in seeds of sunflower.

Introduction

Sunflower (Helianthus annuus L.) is one of the most important oilseed crops in the world's oilseed production because it offers advantages in crop rotation systems, such as high adaptive capability, suitability for mechanization, low labor requirements and ease of cultivation in different conditions and soils. In addition, it is a crop with high planting flexibility that produces high yields under stresses such as drought, salinity or temperature. The oil extracted from this crop (48-53%) is edible; about 80% of the oil is used for edible purposes, with the rest, being non-edible, used for industrial purposes. Agriculture plays an important role in the economy of developing countries such as Pakistan (Badar and Qureshi, 2014) and Bangladesh.
However, rapid crop production with inappropriate farming practices depletes soil organic matter, which reduces microbial activity and eventually affects the soil's physical, chemical and biological condition, leading to declining land productivity and crop yields. To solve this problem, synthetic fertilizers were long thought to be a good way to improve soil fertility and crop productivity, but unfortunately their excessive use creates a number of serious environmental and health risks (Badar and Qureshi, 2014). Agrochemicals deteriorate soil health and pollute the environment. Problems associated with continuous use of chemical fertilizers include nutrient imbalance, increased soil acidity, degradation of soil physical properties and loss of organic matter. Hence, the tendency to supply all plant nutrients through chemical fertilizer should be reconsidered in the future because of the deleterious effect on soil productivity on a long-term basis (Moyin-Jesu, 2015). To minimize these hazards, naturally occurring organic fertilizers, namely animal and plant manures, crop residues, and food and urban wastes, are a better alternative to commercially available fertilizers. Reports have shown that organic farming improves soil composition, fertility and soil fauna, which in the long run has a beneficial effect on crop production (Badar and Qureshi, 2014). Leguminous materials and rice bran (RB) supply mainly N, P and K together with micronutrients (Zn, Fe, Cu, Mn and B) that NPK 15-15-15 fertilizer does not possess. The applied organic materials (wood ash, rice bran and so forth) have beneficial residual effects on soil properties, in line with the growing concern for using environment-friendly fertilizer (Moyin-Jesu, 2015). Mahrous et al. (2014) reported that different organic nutrient management practices have been found to be technically and financially beneficial. Adding nutrients in the form of organic fertilizers has many advantages: they enhance soil biological activity, which improves nutrient mobilization; enhance root growth due to better soil structure; release nutrients slowly; and contribute to the pool of organic N and P in the soil. Organic fertilizers also reduce N leaching losses and P fixation; they can supply micronutrients, increase the organic matter content of the soil (thereby improving nutrient exchange capacity), increase soil water retention and promote soil aggregation. Sunflower is highly productive in sandy loam as well as clay loam soil. Therefore, farmers could cultivate this crop widely in both rabi and kharif seasons in the coastal areas of Bangladesh. Moreover, it reduces climate change vulnerability by assimilating large amounts of CO2 (Mahapatra and Sharma, 1989). For higher productivity and sustainability, integrated use of organic and inorganic sources of nutrients is very important (Rasool et al., 2013). Keeping these aspects in view, the present investigation was carried out to examine the impact of conjunctive use of rice bran and chemical fertilizer on nutrient concentration, uptake and seed quality of sunflower (Helianthus annuus L.). The doses were selected according to the Fertilizer Recommendation Guide of the Bangladesh Agricultural Research Council (BARC, 2012). At initial land preparation, rice bran was applied, and at final land preparation, N, P and K were applied as urea, triple super phosphate and muriate of potash, respectively. Seeds were sown on 29 December 2015.
Sixty seeds were sown in each plot. Row-to-row spacing was 40 cm and seed-to-seed spacing was 25 cm. Intercultural practices, i.e. weeding, spading, fencing and pesticide application, were carried out as and when needed. Plants were harvested 90 days after sowing, at maturity. Different organs of the sunflower plants, viz. stem, root, leaf, petiole, inflorescence and seed, were collected and dried in an oven at 65°C. The dry weights of the different plant parts and the seed weights were measured, and samples were kept in separate paper bags. The uptake of nutrients by the different parts of the sunflower plant was worked out by multiplying the nutrient concentration by the dry matter yield of the plant part. Oil content (%) in the seed samples was estimated by the Soxhlet fat extraction method (AOAC, 1990). Seed protein content was calculated by multiplying the N content of seed by a factor of 6.25 (a worked sketch of these calculations follows subsection (a) below). Analysis of variance was carried out with the SPSS program, and mean differences among treatments were evaluated by the LSD test at the 5% level.

(a) Concentration and uptake of NPKS in root

Effects of rice bran and NPK fertilizers on NPKS concentration and uptake in the root of sunflower were determined. The results showed that both concentration and uptake of NPKS in the root increased significantly (P<0.05) due to application of various combinations of rice bran and NPK fertilizers over the control (Table 1). The treatments generally showed an increase in both the concentrations and uptakes of NPKS in sunflower root with increasing rates of both rice bran and NPK fertilizers. When the concentrations and uptakes of the nutrients in the root were compared between treatments, statistically identical results were obtained in most cases. Rice bran applied alone gave better nitrogen and sulfur concentrations in the root than fertilizer alone, whereas phosphorus and potassium concentrations in the root were better in fertilizer-treated plants. Nitrogen concentration and uptake ranged from 0.52 to 1.17% and 5.2 to 187.2 mg plant-1 root, respectively. The highest nitrogen concentration was recorded in treatments 2.5 t RB ha-1 + N120P90K150 kg ha-1 and 5.0 t RB ha-1 + N120P90K150 kg ha-1, but the highest uptake was observed in the 5.0 t RB ha-1 + N120P90K150 kg ha-1 treatment. Like nitrogen, concentration and uptake of phosphorus in the root ranged from 0.09 to 0.35% and 0.9 to 56.0 mg plant-1 root, respectively (Table 1); both highest values were observed in the same treatment, 5.0 t RB ha-1 + N120P90K150 kg ha-1. Potassium concentration and uptake of sunflower root were 0.89 to 2.41% and 8.9 to 344.0 mg plant-1 root, respectively (Table 1), with the highest values recorded in treatments 5.0 t RB ha-1 + N40P30K50 kg ha-1 and 5.0 t RB ha-1 + N120P90K150 kg ha-1, respectively. Sulfur concentration and uptake of the root ranged from 0.07 to 0.16% and 0.7 to 19.2 mg plant-1 root, respectively (Table 1), with the highest values observed in treatments 7.5 t RB ha-1 and 5.0 t RB ha-1 + N120P90K150 kg ha-1, respectively.
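The uptake and protein figures reported here follow from the two simple formulas described in the methods above. A minimal Python sketch, in which the dry matter value is hypothetical (chosen only so that the example reproduces the reported root-nitrogen maximum):

```python
def uptake_mg_per_plant(concentration_pct, dry_matter_g):
    """Nutrient uptake (mg plant-1) = nutrient concentration x dry matter yield."""
    # concentration_pct/100 converts % to a mass fraction; *1000 converts g to mg
    return concentration_pct / 100 * dry_matter_g * 1000

def protein_pct(nitrogen_pct, factor=6.25):
    """Seed protein content estimated as N concentration x 6.25."""
    return nitrogen_pct * factor

# Example: a root with 1.17% N and an assumed 16 g dry matter per plant
print(uptake_mg_per_plant(1.17, 16.0))  # -> 187.2 mg plant-1, the reported maximum
print(protein_pct(5.24))                # -> 32.75% protein for 5.24% seed N
```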
(b) Concentration and uptake of NPKS in stem

Effects of rice bran and NPK fertilizers on concentration and uptake of NPKS in the stem of sunflower were determined. The results showed that both concentration and uptake of NPKS in the stem increased significantly (P<0.05) due to application of various combinations of rice bran and NPK fertilizers over the control (Table 2). When the concentrations and uptakes of the nutrients in the stem were compared between treatments, statistically identical results were obtained in most cases. Nitrogen, phosphorus and potassium concentrations in the stem responded more strongly to inorganic fertilizers than to rice bran; the opposite was observed for sulfur concentration in the stem. Nitrogen concentration and uptake varied from 1.07 to 1.22% and 85.6 to 787.4 mg plant-1 stem, respectively. The highest concentration was recorded in treatments N120P90K150 kg ha-1 and 5.0 t RB ha-1 + N40P30K50 kg ha-1, and the highest uptake in 7.5 t RB ha-1 + N120P90K150 kg ha-1. Phosphorus concentration and uptake ranged from 0.11 to 0.35% and 8.8 to 200.6 mg plant-1 stem, respectively (Table 2); the highest values of both were recorded in the same treatment, 5.0 t RB ha-1 + N120P90K150 kg ha-1. Potassium concentration and uptake varied from 0.62 to 1.90% and 49.6 to 998.6 mg plant-1 stem, respectively (Table 2); the maximum values were observed in treatments 7.5 t RB ha-1 + N40P30K50 kg ha-1 and 7.5 t RB ha-1 + N80P60K100 kg ha-1, respectively. Similarly, sulfur concentration and uptake varied from 0.07 to 0.18% and 5.6 to 111.7 mg plant-1 stem, respectively (Table 2); the highest values were recorded in treatments 2.5 t RB ha-1 + N120P90K150 kg ha-1 and 7.5 t RB ha-1 + N80P60K100 kg ha-1, respectively.

(c) Concentration and uptake of NPKS in leaf

Effects of rice bran and NPK fertilizer treatments on concentration and uptake of NPKS in the leaf were measured. The treatments showed significantly (P<0.05) positive effects of rice bran and NPK fertilizer doses on NPKS concentrations in sunflower leaf over the control (Table 3), although comparisons between treatments gave statistically identical results. Between rice bran and fertilizers, rice bran played the better role only in the case of sulfur concentration. Concentration and uptake of nitrogen and phosphorus in the leaf varied from 2.18 to 3.98% and 0.28 to 0.43%, and from 152.6 to 903.5 and 19.6 to 97.6 mg plant-1 leaf, respectively. The highest values for both nitrogen and phosphorus were recorded in the same treatment, 5.0 t RB ha-1 + N120P90K150 kg ha-1 (Table 3). Similarly, concentration and uptake of potassium and sulfur in the leaf ranged from 2.56 to 4.28% and 0.10 to 0.24%, and from 179.2 to 971.6 and 7.0 to 55.2 mg plant-1 leaf, respectively (Table 3). The highest values were observed in treatments N120P90K150 kg ha-1 for potassium and 7.5 t RB ha-1 + N120P90K150 kg ha-1 for sulfur.

(d) Concentration and uptake of NPKS in petiole

Effects of rice bran and NPK fertilizer treatments on concentration and uptake of NPKS in the petiole were measured. The treatments showed significantly (P<0.05) positive effects of rice bran and NPK fertilizer doses on nutrient concentrations in the petiole of sunflower over the control (Table 4), although comparisons between treatments gave statistically identical results. Between rice bran and fertilizers, rice bran played the better role only in the case of sulfur concentration.
The concentration and uptake of nitrogen and phosphorus in the petiole ranged from 0.53 to 1.04% and 0.14 to 0.65%, and from 6.9 to 59.3 and 1.8 to 37.1 mg plant-1 petiole, respectively (Table 4). The highest values for both concentration and uptake of nitrogen and phosphorus were observed in the same treatment, 7.5 t RB ha-1 + N120P90K150 kg ha-1. Likewise, concentration and uptake of potassium and sulfur in the petiole varied from 0.56 to 3.00% and 0.04 to 0.22%, and from 7.3 to 171.0 and 0.5 to 12.8 mg plant-1 petiole, respectively (Table 4). The highest concentration and uptake of potassium were found in the same treatment, 7.5 t RB ha-1 + N120P90K150 kg ha-1. The highest sulfur uptake was found in treatment 7.5 t RB ha-1 + N80P60K100 kg ha-1, while the highest sulfur concentration was found in treatment 5.0 t RB ha-1 + N120P90K150 kg ha-1.

(e) Concentration and uptake of NPKS in inflorescence

The concentration and uptake of NPKS in the inflorescence of sunflower improved due to application of rice bran and NPK fertilizers alone and in various combinations. All treatments increased the NPKS concentrations in the inflorescence significantly (P<0.05) over the control (Table 5). Concentrations of NPKS increased with increasing doses of rice bran and NPK fertilizers, and the differences between treatments were statistically identical in most cases. Nitrogen and phosphorus concentrations of the inflorescence ranged from 0.36 to 2.16% and 0.09 to 0.58%, respectively, with the highest values recorded in the same treatment, 7.5 t RB ha-1 + N120P90K150 kg ha-1. Uptake of nitrogen and phosphorus varied from 14.4 to 365.4 and 3.6 to 93.6 mg plant-1 inflorescence, respectively (Table 5); the highest uptakes of both were recorded in the same treatment, 5.0 t RB ha-1 + N120P90K150 kg ha-1. Potassium and sulfur concentrations varied from 0.75 to 2.21% and 0.05 to 0.26%, respectively (Table 5), and potassium and sulfur uptakes ranged from 30.0 to 338.1 and 2.0 to 41.4 mg plant-1, respectively (Table 5). The highest values of potassium concentration and uptake and of sulfur concentration in the inflorescence were recorded in the same treatment, 7.5 t RB ha-1 + N120P90K150 kg ha-1, and the highest sulfur uptake in the 5.0 t RB ha-1 + N120P90K150 kg ha-1 treatment.

(f) Concentration and uptake of NPKS in seed

The concentrations of NPKS in the seed of sunflower improved due to application of rice bran and NPK fertilizers alone and in various combinations. All treatments increased the NPKS concentrations in the seed significantly (P<0.05) over the control (Table 6). Concentrations of NPKS increased with increasing doses of rice bran and NPK fertilizers, and the differences between treatments were statistically identical in most cases. Nitrogen and phosphorus concentrations and uptakes of the seed varied between 3.17 and 5.24% and 0.43 and 0.83%, and between 231.4 and 2396.6 and 31.4 and 345.0 mg plant-1 seed, respectively. The highest nitrogen and phosphorus concentrations were recorded in treatments 5.0 t RB ha-1 + N120P90K150 kg ha-1 and 7.5 t RB ha-1, respectively. The maximum uptakes of both nutrients were observed in the same treatment, 7.5 t RB ha-1 + N80P60K100 kg ha-1.
Concentrations and uptakes of potassium and sulfur in seeds of sunflower varied between 0.57 and 1.60% and 0.13 and 0.47%, and between 41.6 and 547.2 and 9.5 and 115.0 mg plant-1 seed, respectively (Table 6). The highest values were observed in treatments 2.5 t RB ha-1 and 5.0 t RB ha-1 for concentration, and 2.5 t RB ha-1 + N40P30K50 kg ha-1 and 7.5 t RB ha-1 + N80P60K100 kg ha-1 for uptake, respectively. These findings are consistent with the observations of Badar and Qureshi (2014), who reported that composted rice husk improved mineral nitrogen and phosphorus contents of sunflower plants. Yankaraddi et al. (2009) also showed that application of FYM @ 10 t ha-1 + rice hull ash @ 2 t ha-1 + 100% RDF recorded the highest nutrient content (2.51% N, 0.61% P and 2.91% K) and nutrient uptake (170.66 N, 41.14 P and 209.76 K kg ha-1) in rice plants. In this context, Akter et al. (2017) further reported that the primary nutrients (NPK) of rice responded better in saline soil that received rice hull, rice straw and sawdust. Marr and Cresser (1983) concluded that the typical concentrations of elements in dried healthy foliage are N 0.8-3.0%, K 0.5-2.5%, Ca 1.5-2.8%, Mg 0.15-0.45%, P 0.08-0.35%, Fe 40-150 mg kg-1, Mn 30-100 mg kg-1, B 10-50 mg kg-1, Cu 5-12 mg kg-1, Zn 30-200 mg kg-1 and Mo 0.1-1.5 mg kg-1. The N, P and K concentrations in the present experiment are in agreement with Marr and Cresser (1983).

(g) Oil and protein content in seed

Application of rice bran and NPK fertilizers in different combinations significantly (P<0.05) influenced the oil content of sunflower seeds over the control, except in treatment 2.5 t RB ha-1 (Table 6). All treatments increased the oil content of seed with increasing rates of rice bran and NPK fertilizers, except 2.5 t RB ha-1, and the variations among treatments were significant in most cases. The highest oil content (51.1%) was observed in the 5.0 t RB ha-1 + N120P90K150 kg ha-1 treatment. The lowest oil content (39.0%) was observed in the 2.5 t RB ha-1 treatment, a value lower than the control. These results agree with the findings of Rasool et al. (2013), who reported that application of organic manure @ 10 and 20 t ha-1 increased the oil yield of sunflower by 11 and 5.4%, respectively, over no application of FYM. The authors further revealed that, with increased N dose, the oil content consistently decreased, but the oil yield of sunflower improved with the application of FYM with N in two experiments. Similarly, Mahrous et al. (2014) reported no significant effect of the interaction between varieties and various forms of fertilizer (organic, bio and mineral) application on most of the studied traits, except seed oil content of sunflower. Protein content of sunflower seed showed almost the same trend as oil (Table 6). The treatments showed a gradual increase in protein content with increasing doses of both rice bran and NPK fertilizers. Six treatments, namely 2.5 t RB ha-1, 5.0 t RB ha-1, 7.5 t RB ha-1, N40P30K50 kg ha-1, N80P60K100 kg ha-1 and N120P90K150 kg ha-1, showed a non-significant decrease in protein content compared with the control. The remaining treatments significantly (P<0.05) increased the protein content of sunflower seeds.
The highest (33.9%) and lowest (18.9%) protein contents were recorded in the 5.0 t RB ha-1 + N80P60K100 kg ha-1 and control treatments, respectively. Moreover, the variations among treatments were insignificant in most cases as far as protein content was concerned. These findings agree well with Badar and Qureshi (2014), who reported that decomposed rice husk improved total carbohydrate and protein contents of sunflower, possibly due to improved availability of nitrogen in the soil. Similar information was also put forward by Mahrous et al. (2014).
Norovirus and Foodborne Disease, United States, 1991–2000

Analysis of foodborne outbreaks shows how advances in viral diagnostics are clarifying the causes of foodborne outbreaks and determining the high impact of norovirus infections.

Foodborne infections are estimated to cause 76 million illnesses, 300,000 hospitalizations, and 5,000 deaths annually in the United States (1). Several high-profile outbreaks in the last 15 years have focused attention on the role of bacteria in severe foodborne illness (2-4) and led to serious efforts to prevent bacterial contamination of food during all levels of processing and handling, the "farm-to-fork" model. However, in more than two thirds of outbreaks of foodborne illness, no pathogen is identified (5). Noroviruses (NoV), previously known as "Norwalk-like viruses," have long been suspected to be a frequent cause of foodborne outbreaks (6-11). Until recently, diagnosis of NoV infection relied on methods that were insensitive (electron microscopy [12]), difficult to set up (serologic testing with human reagents [13]), and available only in research settings. In 1982, epidemiologic and clinical criteria were formulated to help attribute outbreaks to NoV in the absence of a simple diagnostic test (14). Despite these criteria, the absence of any routine diagnostic assay for NoV has discouraged investigations into outbreaks of suspected viral etiology and thus limited assessment of the true impact of gastroenteritis associated with these pathogens. In 2000, for example, a survey of public health professionals in Tennessee found that only 9% cited viruses as a major cause of foodborne illness (15). Not unexpectedly, therefore, of the 2,751 foodborne outbreaks reported to the Centers for Disease Control and Prevention (CDC) from 1993 to 1997, only 9 (0.3%) were confirmed as due to NoV (5). In the early 1990s, sensitive and simpler assays were developed to detect NoV by identifying viral RNA after reverse transcription-polymerase chain reaction (RT-PCR) (16). In 1993, RT-PCR was adopted at CDC for the routine detection of NoV (17), particularly in outbreaks in which specimens test negative for common bacteria. A number of state public health laboratories subsequently adopted similar assays or began sending specimens to CDC for NoV testing. When RT-PCR was used, a NoV was identified as the etiologic agent in 93% of outbreaks of nonbacterial gastroenteritis submitted for testing to CDC from 1997 to 2000 (18). However, this selection was of specimens from outbreaks with illness characteristic of viral infection, and the specimens usually had already tested negative for bacteria; the selection therefore introduces bias and does not permit an assessment of the true relative frequency of foodborne outbreaks of NoV disease. Therefore, we analyzed data from all foodborne outbreaks (irrespective of cause) reported to CDC by state health departments from 1991 through 2000 to assess how recent application of RT-PCR techniques might have improved understanding of the relative impact and role of NoV in these outbreaks in the United States.

Methods

We used 3 related datasets: 1) all foodborne outbreaks reported to CDC from 1991 through 2000 (N = 8,271), 2) a subset of these outbreaks reported from 1998 through 2000, when surveillance was enhanced and states began to use NoV diagnostics (N = 4,072), and 3) all foodborne outbreaks reported in 2000 in 6 selected states from which supplementary data on diagnostic testing were gathered (N = 600).
Foodborne Outbreak Reports, 1991-2000

Outbreaks of foodborne disease (excluding those on cruise ships) are voluntarily reported by state health departments to CDC for inclusion in the National Foodborne Outbreak Reporting System. Whether an outbreak is classified as foodborne is at the discretion of the state epidemiologist. Minimum data required for registering an outbreak report include the number of persons ill and the date of onset of the first case. The determination of outbreak cause is based on CDC's pathogen-specific guidelines (19). In 1998, the surveillance system was enhanced by annual data verification with states and solicitation of any unreported outbreaks. We reviewed records of 8,271 foodborne outbreaks reported to CDC from 1991 through 2000. We also noted the year in which state laboratories set up the RT-PCR assay for NoV, and by cross-referencing with CDC laboratory logs, we determined whether an outbreak had been confirmed as attributable to NoV at a laboratory in a state or at CDC.

Foodborne Outbreak Reports, 1998-2000

This subset of foodborne outbreaks was selected for further analysis because, in addition to enhanced surveillance in this period, state public health laboratories had begun to test routinely for NoV, and these reports therefore included most outbreaks of confirmed NoV disease. Available variables included the laboratory-confirmed cause; clinical data (symptoms, median incubation period, median duration of illness); food vehicle; whether a foodhandler was implicated; and the number of persons exposed, ill, requiring medical attention, or hospitalized. From January 1998 through December 2000, a total of 4,072 outbreaks were reported to CDC. We excluded 30 outbreaks involving multiple states and 10 occurring in the U.S. territories and further analyzed the remaining 4,032 outbreak reports. To assess the differences between states in outbreak reporting and laboratory testing, each state was classified into 1 of 5 groups on the basis of the number of NoV-confirmed outbreaks that the state reported in 1998 to 2000 (>20, 10-19, 5-9, 1-4, or none reported). The proportion of reported outbreaks with a known cause and the proportion confirmed to be due to NoV were calculated for each group. The number of reported outbreaks per 100,000 population per state for these 3 years was also calculated by using U.S. Census data for 2000 (a minimal sketch of these calculations appears at the end of this subsection). To characterize the severity of illness and the settings associated with NoV outbreaks, we selected the 305 NoV-confirmed outbreaks and analyzed those with complete information on medical care (n = 112) and setting (n = 278). We calculated the proportion of persons seeking care and the proportion hospitalized by using the number of case-patients interviewed as a denominator. To compare the epidemiologic and clinical features of outbreaks attributed to bacteria and viruses, we selected, from the 4,032 outbreaks of gastroenteritis, a subset of 1,216 reports with complete information on the number ill, duration of illness, incubation period, and the proportion of interviewed patients who reported vomiting or fever. Of these outbreaks, 136 were attributed to NoV, 173 to bacteria, and 907 to an undetermined cause. We further compared outbreak reports with information on implicated food types (n = 608) and on whether or not an ill foodhandler was thought by the outbreak investigators to be involved (n = 760).
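A minimal Python sketch of the state-level summary measures just described. The two states and all counts are hypothetical placeholders, included only to show the arithmetic:

```python
states = {
    # state: (total outbreaks 1998-2000, outbreaks with known cause,
    #         NoV-confirmed outbreaks, 2000 census population)
    "State A": (250, 70, 25, 8_000_000),
    "State B": (40, 21, 0, 4_500_000),
}

for name, (total, known, nov, pop) in states.items():
    pct_known = 100 * known / total    # proportion with a determined cause
    pct_nov = 100 * nov / total        # proportion confirmed as NoV
    per_100k = total / pop * 100_000   # 3-year reporting rate per 100,000 population
    print(f"{name}: {pct_known:.0f}% known cause, "
          f"{pct_nov:.0f}% NoV, {per_100k:.1f} outbreaks/100,000")
```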
Data on Specimen Screening from 6 States, 2000

Data on the pathogens screened in a single outbreak are not reported to CDC; therefore, to estimate the proportion of outbreaks that would be NoV-confirmed if collected specimens were tested routinely not only for bacteria but also for NoV, we gathered additional data on the testing of stools from foodborne outbreaks in 2000 from 6 states (Georgia, Minnesota, Ohio, Florida, Maryland, New York). These states were selected because they collected stools from a large number of outbreaks and had the laboratory capability to test specimens for NoV. We applied the proportion of all NoV-tested outbreaks that were NoV-positive in each state (≥1 positive specimen) to the number of outbreaks of undetermined etiology for which specimens had been gathered and had tested negative for bacteria but had not been tested for NoV. We then added this figure to the total actual number of NoV outbreaks to estimate the proportion of all outbreaks with specimens in that state that would be attributable to NoV had specimens from all outbreaks been tested fully.

Foodborne Outbreak Reports, 1991-2000

The number of foodborne outbreaks reported to CDC per year from 1991 to 2000 ranged from 411 outbreaks in 1992 to 1,414 in 2000 and increased markedly in 1998, when the reporting system was changed (Figure 1A). Of 8,271 outbreaks, 5,637 (68%) were of undetermined etiology. The number of NoV-confirmed outbreaks increased markedly from 11 outbreaks in 1996 to 164 (12% of all reported outbreaks) in 2000 (Figure 1B). This rise was initially due to laboratory confirmation of NoV by CDC, but by 2000, 100 (61%) of 164 NoV outbreaks were confirmed in state laboratories. Underreporting, however, remained an obvious problem since only 17 (34%) of 50 state public health laboratories tested for NoV, while the remaining 33 states (66%) either sent specimens to CDC for diagnosis (n = 12) or did not report any NoV outbreaks (n = 21).

Foodborne Outbreak Reports, 1998-2000

Of 4,032 outbreaks reported in this period of enhanced surveillance, only 1,146 (28%) were of determined cause and 2,886 (72%) were of undetermined etiology (Table 1). NoV-confirmed outbreaks comprised 305 (8%) of all 4,032 outbreaks, or 27% of the 1,146 outbreaks with a determined cause. These 305 NoV outbreaks accounted for 13,527 (18%) of all 74,481 sick persons in all 4,032 outbreaks, or 39% of the 34,539 sick persons in the 1,146 outbreaks of known cause.

NoV Reporting

A great disparity was observed in the reporting of NoV outbreaks. Of the 50 U.S. states and the District of Columbia, 15 (29%) reported no NoV outbreaks (Table 1 and Figure 2). Of the total of 305 NoV outbreaks, 232 (76%) were reported by 11 states, which each investigated >10 NoV outbreaks and together accounted for 613 (53%) of all 1,146 outbreaks of determined cause. We hypothesized that the proportion of outbreaks of determined cause reported in each state would be lowest in those states not reporting any NoV-confirmed outbreaks, but this hypothesis was not supported by the data. In fact, paradoxically, the 15 states that reported no NoV outbreaks in the study period determined a cause in 53% of all outbreaks, compared with 20%-45% in the 35 states that reported at least 1 NoV outbreak. The 11 states that reported >10 NoV outbreaks also reported, on average, more outbreaks per 100,000 population (2.3) compared with the 35 states that reported 0-10 NoV outbreaks (0.8-0.9).
The number of NoV outbreaks reported by states, however, was not simply a function of total outbreaks reported; the percentage of NoV outbreaks among outbreaks of determined etiology also increased significantly across the state groups, from 0% to 57% (chi-square for trend; p < 0.001), which suggests better outbreak investigation and testing for NoV.

Illness

Information on physician visits and hospitalization was complete in 112 (37%) of all 305 NoV outbreaks. Of 3,370 persons affected in these 112 outbreaks, 329 (10%) sought care from a physician, and 33 (1%) were hospitalized. NoV outbreaks were significantly larger than outbreaks of bacterial or unknown etiology (median number of cases per outbreak = 25 versus 15 and 7, respectively; Wilcoxon rank sum test: p < 0.001) (Table 2). Viral outbreaks had a shorter duration of illness than bacterial outbreaks but one similar to that of outbreaks of unknown etiology (median duration <48 hours in 82%, 40%, and 85% of outbreaks, respectively). Vomiting was more often a predominant symptom (reported by >50% of ill persons) in NoV outbreaks than in outbreaks of bacterial or unknown etiology (p = 0.001) and was reported in all 136 NoV outbreaks. Fever, however, was less often reported in outbreaks of NoV disease. The median incubation period was significantly longer in outbreaks of NoV gastroenteritis: 85% of these outbreaks featured a median incubation period >24 hours compared with 39% of outbreaks of bacterial cause and 43% of outbreaks of unknown etiology. This finding may be explained by outbreaks caused by preformed toxins from certain bacteria (Staphylococcus aureus, Clostridium perfringens, Bacillus cereus), which tend to have shorter incubation periods. NoV outbreaks were strongly associated with eating salads, sandwiches, and produce: these items were implicated in 56% of the 76 NoV outbreaks in which a food item was identified, compared with 19% of 124 bacterial outbreaks and 28% of 408 outbreaks of unknown etiology (chi-square test: p < 0.05) (Table 3). NoV outbreaks were significantly less often associated with meat dishes than bacterial outbreaks and outbreaks of unknown etiology (11% versus 44% and 34%, respectively: p < 0.05). A foodhandler was more likely to be implicated in a NoV outbreak (48% of 94 outbreaks with available data) than in either a bacterial outbreak (20% of 102 outbreaks) or an outbreak of unknown etiology (9% of 564 outbreaks) (p < 0.001).

Specimen Screening Data from 6 States, 2000

In the 6 states for which data on specimen testing were obtained, the percentage of NoV-tested outbreaks that were positive ranged from 44% to 100%, and the overall percentage across all 6 states was 79% (Table 4). Even in these states, NoV testing was much less likely to be performed than testing for bacteria. Of 220 outbreaks from which stool samples were collected, specimens from 85 (39%) were tested for NoV compared with 207 (94%) tested for bacteria. Specimens from 55 outbreaks (25%) tested negative for bacteria, but no further testing for viruses was performed. The overall percentage of all outbreaks with specimens that tested positive for NoV was 30%, but in the 2 states that tested all specimens for NoV (Georgia and Minnesota), the average percentage was 43%. Had specimens from all of these outbreaks been tested for NoV, 110 (50%) of the 220 outbreaks with specimens collected in all 6 states would have been confirmed as caused by NoV.
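The 6-state extrapolation described in the Methods reduces to simple proportion arithmetic. A minimal Python sketch using the aggregate figures above; the NoV-positive count of 67 is inferred from the reported 79% positivity among the 85 tested outbreaks and is therefore an assumption:

```python
outbreaks_with_specimens = 220   # outbreaks from which stools were collected
tested_for_nov = 85              # outbreaks whose specimens were tested for NoV
nov_positive = 67                # inferred: ~79% of tested outbreaks were positive
untested_bacteria_negative = 55  # bacteria-negative outbreaks never tested for viruses

positivity = nov_positive / tested_for_nov                   # ~0.79
projected = round(positivity * untested_bacteria_negative)   # ~43 additional outbreaks
estimated_total = nov_positive + projected                   # 110

print(f"Estimated NoV outbreaks: {estimated_total} "
      f"({estimated_total / outbreaks_with_specimens:.0%} of 220)")  # -> 110 (50%)
```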
Discussion

The introduction of RT-PCR in the 1990s increased the percentage of all outbreaks attributable to NoV in the United States from <1% in 1991 to 12% in 2000. Nonetheless, noroviruses remain grossly underestimated as a cause of gastroenteritis outbreaks. From 1998 through 2000, most NoV outbreaks (76%) were reported from 11 states; 36 states, generally those with no PCR capability, reported either few or no outbreaks. Using data from 6 states, we estimated that if all specimens were tested for viruses, half of all foodborne outbreaks in the United States could be attributable to NoV. Even in these 6 states, bacteria were more likely to be tested for than viruses; specimens from 25% of outbreaks were negative for bacteria but not further tested. We also show that NoV outbreaks affect almost 50% more persons than bacterial outbreaks (median = 25 versus 15 persons affected). Although NoV outbreaks were generally of short duration, symptoms were sufficiently severe to require medical care in 9.8% of patients and hospitalization in 1%. In addition to a historic lack of diagnostic assays, a further reason for underrecognition of NoV is a lack of specimens and epidemiologic information gathered from outbreaks that exhibit clinical features characteristic of viral gastroenteritis. We expected states that do not test for NoV to report more outbreaks of unknown etiology, but this was not the case. In fact, states that reported no NoV outbreaks also reported the lowest percentage of outbreaks with an undetermined etiology (47%, Table 1). This bias in the etiologic distribution of reported outbreaks toward bacterial causes that can be easily determined is further suggested by the lower number of outbreak reports in states with <10 NoV outbreaks from 1998 through 2000 (0.8-0.9 outbreaks/100,000 persons) compared with the states that reported >10 NoV outbreaks (2.3 outbreaks/100,000 persons). Genuine differences in the incidence of NoV disease (e.g., rural/urban) or different patterns of reporting disease among communities in different states are also possible. We found that >56% of foodborne NoV outbreaks were associated with eating salads, sandwiches, or fresh produce, which confirms that contamination of foods requiring handling but no subsequent heating is an important source of NoV infection (9,20-22). Despite their well-documented role in large multistate NoV outbreaks (23-25), oysters have not been frequently associated with NoV disease in the last 10 years in the United States. We excluded only 2 multistate NoV outbreaks from the analysis, 1 of which was linked to oysters. Restaurants or caterers were associated with 39% of NoV outbreaks, yet in >50% of NoV outbreaks, no foodhandler was implicated. This finding probably reflects a lack of positive evidence rather than the actual ruling out of a foodhandler's involvement. Although asymptomatic infections may play a role in transmission (26), and foodhandlers are likely to underreport illness, some outbreaks with no foodhandler implicated may be due to contamination of fresh produce at the source, as has been previously documented for NoV (21,27) and other foodborne viruses transmitted by the fecal-oral route (28). Our projected number of NoV outbreaks in each state may be overestimated because outbreaks that were tested for NoV were likely to have been more characteristic of NoV disease than those not tested. However, we applied the proportion of outbreaks positive for NoV (79%) only to outbreaks of unknown etiology that had already tested negative for bacteria.
Moreover, between them, the 2 states that tested all nonbacterial outbreaks for NoV found 43% of outbreaks attributable to NoV, consistent with our estimate from all 6 states. Biases in surveillance data complicate straightforward extrapolation of our estimate from outbreaks with specimens in 6 states to the group of reported outbreaks with no specimens collected in the same 6 states and in other states. Certain clinical characteristics of outbreaks of unknown etiology were similar to those of NoV outbreaks (e.g., percentage of patients vomiting); other epidemiologic characteristics were similar to those of bacterial outbreaks (e.g., implicated food). The etiologic make-up of outbreaks with no specimens collected is also likely to differ between states. Since specimens remain less likely to be collected from outbreaks of acute gastroenteritis of short duration, we think our estimate can be reasonably extrapolated to all outbreaks of unknown etiology. Only a few small studies have looked at the relative impact of NoV as a cause of foodborne illness (Table 5), and none have fully tested for NoV with PCR. A small study of enhanced surveillance during 1 year in a Swedish municipality found 6% of all foodborne outbreaks, but 38% of the 13 that were laboratory-confirmed, to be attributable to caliciviruses (30). Our estimate of 50% of foodborne outbreaks being attributable to NoV is higher than estimates that rely on epidemiologic criteria (33%-41%) (6,8), consistent with the low sensitivity of such criteria (CDC, unpub. data). Our estimate of the percentage of outbreaks attributable to NoV is lower than Mead's figure of 66% of all foodborne illness of known etiology being caused by NoV (1). However, our finding that NoV outbreaks are >50% larger than bacterial outbreaks suggests that the total number of cases associated with our estimate of outbreaks is comparable to Mead's estimate. We may have overestimated the size of NoV outbreaks and the proportion of persons seeking care, since larger outbreaks of more serious illness may be more likely to be reported. However, our estimates are not inconsistent with a study in the United Kingdom that reported the median size of NoV outbreaks to be 21 persons and the hospitalization rate to be 0.3% (32). The very low infective dose of NoV (33) allows for extensive transmission by means of contaminated food and subsequent person-to-person spread. Data on other variables may also be biased. For instance, that 61% of bacterial outbreaks would have a median incubation of <24 hours is surprising, given that 69% of the analyzed bacterial outbreaks were attributed to Salmonella spp., Shigella spp., Campylobacter spp., and E. coli, which have longer incubation periods. Finally, since no standard criteria are required for an outbreak to be classified as foodborne, and since NoV are more often spread from person to person than bacteria, the dataset from 6 states that we used may have resulted in an overestimate of the impact of foodborne NoV. Efforts are required to increase the capacity of states to investigate outbreaks, irrespective of suspected cause, including improved specimen collection and more widespread testing for viruses. Evaluation of epidemiologic criteria is needed to assess how these can best be used to guide testing strategies when laboratory resources are limited. Better appreciation of the exact causes of the large number of outbreaks of undetermined etiology will help better target measures to prevent foodborne disease.
Furthermore, to be able to identify novel and intentionally introduced pathogens, the ability of state health departments to quickly investigate outbreaks and rule out common causes is critical. "Real-time" collection systems for epidemiologic and sequence data from different outbreaks, such as those developed in Europe (34) and the United States, can provide insights into the epidemiology of NoV (35) and will allow rapid comparison of data to identify common risk factors (such as foods contaminated at the source) and implement control measures. While these initiatives are developed, however, the high disease impact of outbreaks of NoV illness should prompt food safety agendas to prioritize the development and implementation of prevention measures, such as foodhandler education.
Schistosoma mansoni venom allergen-like protein 6 (SmVAL6) maintains tegumental barrier function

Introduction

Adult male and female schistosome pairs are master modulators of their environment (DeMarco et al., 2010; Wilson, 2012b; Robinson et al., 2013) and display developmental features evolutionarily honed for survival in one of the most inhospitable biological settings, definitive host mammalian blood. One particular anatomical adaptation that enables both schistosome sexes to maintain long-term, intravascular residence is the syncytial tegument, a structure covered by a unique plasma membrane architecture comprised of multiple stacks of lipid bilayers (McLaren and Hockley, 1977; Skelly and Wilson, 2006). Effectively, the schistosome's tegument covers every host-interactive interface (body and blind-ending gut) and triply functions as: (i) a barrier to host immunological and physiological defences, (ii) a dynamic layer for nutrient acquisition and (iii) a regulator of metabolic waste (Skelly and Wilson, 2006; Faghiri et al., 2010). The glycoprotein composition of this important parasite structure includes transmembrane candidates found on or between the surface membranes as well as embedded within tegumental organelles (Mulvenna et al., 2010), glycosyl-phosphatidylinositol (GPI)-modified representatives (Koster and Strand, 1994; Castro-Borges et al., 2011) and numerous cytoplasmic constituents (van Balkom et al., 2005). The cytoplasm within this syncytial structure also contains mitochondria and two classes of secreted (from sub-tegumental cell bodies) inclusions termed discoid bodies and membranous bodies (Hockley, 1973). While the function of discoid bodies is unresolved (likely contributing to the maintenance of tegumental ground matter), the less numerous membranous bodies fuse with the apical tegumental membranes, contributing to their repair and maintenance (Wilson and Barnes, 1977). Due to the key importance of maintaining normal tegumental functions during schistosome lifecycle progression, the (glyco)proteins contained within it have often been the subject of detailed functional characterisation (Da'dara et al., 2012) and/or immunoprophylactic-based investigations (Wilson, 2012a). One particular protein found enriched in tegumental extracts is Schistosoma mansoni venom allergen-like protein 6 (SmVAL6) (van Balkom et al., 2005; Rofatto et al., 2012; Sotillo et al., 2015), an atypical member of a large schistosome protein family sharing sequence similarity to Sperm-Coating Protein/Tpx-1/Ag5/PR-1/Sc7 (SCP/TAPS) domain-containing representatives (Chalmers and Hoffmann, 2012). Previous studies have indicated that the gene encoding SmVAL6 is developmentally regulated, sex-associated (male > female) and alternatively spliced; the molecular processing is focused entirely on exons 3′ to those coding for the conserved SCP/TAPS domain (Chalmers et al., 2008; DeMarco et al., 2010; Rofatto et al., 2012). Interestingly, while some SmVAL6 isoforms have been linked to tegumental membranes, other variants are enriched in cytosolic fractions derived from the syncytium (Rofatto et al., 2012); at least one of these is also the target of human IgE responses (Farnell et al., 2015).
Collectively, these findings have led to the supposition that SmVAL6 confers a yet-to-be-identified adaptive advantage for adult schistosomes living in the definitive host vasculature. However, to date, a definitive role for any SmVAL6 isoform in schistosome tegumental (or wider biological) processes has yet to be revealed. Here, applying temporal and spatial gene expression analysis methods, loss-of-function RNA interference (RNAi) approaches and yeast two-hybrid (Y2H) assays, we have conducted the first known investigation exploring SmVAL6 function in adult schistosomes.

Ethics statement

All mouse procedures performed at Aberystwyth University (AU, United Kingdom) adhered to the United Kingdom Home Office Animals (Scientific Procedures) Act of 1986 (project licenses PPL 40/3700 and P3B8C46FD) as well as the European Union Animals Directive 2010/63/EU and were approved by the AU Animal Welfare and Ethical Review Body (AWERB). In adherence to the Animal Welfare Act and the Public Health Service Policy on Humane Care and Use of Laboratory Animals, all mouse procedures performed at the University of Texas Southwestern Medical Center, USA, were approved by the Institutional Animal Care and Use Committee (IACUC) (protocol approval number APN 2017-102092).

Parasite material

A Puerto Rican strain (NMRI) of S. mansoni was used in this study. Mixed-sex worms were perfused from percutaneously infected TO (HsdOla:TO, Tuck-Ordinary, Envigo, UK) or Swiss-Webster (Charles River, USA) mice challenged 7 weeks earlier with 180 cercariae (Duvall and Dewitt, 1967) and used for RNAi, whole mount in situ hybridisation (WISH) and endpoint reverse transcription (RT)-PCR.

SmVAL6 transcription profile

Data from the 37,632-element S. mansoni long oligonucleotide DNA microarray studies of Fitzpatrick et al. (2009) were interrogated to find the expression profile of Smval6 (Smp_124050) across 11 different lifecycle stages. Raw and normalised fluorescent intensity values are available via ArrayExpress under the experimental accession number E-MEXP-2094.

Whole mount in situ hybridisation (WISH) of Smval6

Adult worm fixation, processing and WISH were performed as previously described (Collins et al., 2013).

Short interfering RNA (siRNA)-mediated SmVAL6 silencing

Short interfering RNAs (siRNAs, obtained from Integrated DNA Technologies (IDT), USA) were used to silence Smval6 in both adult male and female schistosomes as previously described (Geyer et al., 2011; Geyer et al., 2018). siRNAs designed against firefly luciferase functioned as a negative control. All siRNA sequences used in this study are described in Supplementary Table S1. Smval6 transcript abundance was measured by quantitative RT-PCR (qRT-PCR) at 48 h post siRNA treatment. Qualitative assessment of SmVAL6 protein abundance (western blot analysis) and quantification of adult worm phenotypes (laser scanning confocal microscopy, LSCM) were performed at day 7 post siRNA treatment.

Dextran staining of adult worms and LSCM

Adult male and female worms, untreated (n = 3 for each sex) or treated with luc (n = 3 for each sex) or Smval6 (n = 3 for each sex) siRNAs for 7 days, were labelled with biotin-TAMRA-dextran reconstituted in either hypotonic (ultrapure water) or isotonic (DMEM) solution for 10 min prior to fixation and labelling with Alexa Fluor 488-conjugated phalloidin (Life Technologies) as previously described (Wendt et al., 2018).
Post-labelling fixation of worms and mounting onto microscope slides were also performed as previously described (Wendt et al., 2018). LSCM images were acquired using a Leica SP8 confocal microscope equipped with a HC PL APO 63×/1.20 lens (Leica Microsystems, Germany), accruing a total of 50 sections for each z-stack (step size of 0.365 µm). For each z-stack, the fluorescence intensity of the biotin-TAMRA-dextran channel was used to calculate the total volume (µm³) occupied by the fluorophore using the Surface tool in Imaris v8.2 (Bitplane). Volume measurements were taken from a 123 × 50 × 18 µm region located directly below the tegument. All voxels with an intensity over 10 arbitrary units (a.u.) or 15 a.u. were included for female and male worms, respectively.

Quantitative reverse transcription (qRT)-PCR and endpoint RT-PCR analyses

Schistosoma mansoni total RNA isolation and qRT-PCR analyses were performed as previously described (Chalmers et al., 2008; Geyer et al., 2011). A StepOnePlus thermocycler (Applied Biosystems) was used for all qRT-PCR assays, with Smval6 gene expression results normalised to α-tubulin (Smp_090120). Endpoint RT-PCR was performed using adult worm cDNA essentially as described (Fitzpatrick et al., 2008). Dideoxy chain termination DNA sequencing of endpoint PCR products was performed at the Institute of Biological, Environmental and Rural Sciences (IBERS) (Aberystwyth University, United Kingdom) translational genomics facilities.

Yeast assays

Truncated versions of SmVAL6 (Smp_124050) were sub-cloned into the Y2H GAL4 DNA-BD fusion vector pGBKT7 and sequence verified. Each pGBKT7-SmVAL 'bait' construct was introduced into yeast strain Y187 using the lithium acetate transformation protocol (Gietz et al., 1992) and tested for auto-activation, toxicity and expression as described in the Matchmaker™ Library Construction and Screening Kit manual (Clontech). Total protein extracts from transformed yeast cells were obtained using urea/SDS, phenylmethylsulfonyl fluoride (PMSF, Sigma-Aldrich) and a protease inhibitor cocktail tablet (Complete Mini, Roche) as described in the Yeast Protocols Handbook (Clontech). Protein extracts were subsequently analysed for SmVAL6 expression by standard SDS-PAGE and western blotting. Mating reactions were performed between the haploid pGBKT7-SmVAL6 Y187 yeast transformants and a 7-week mixed-sex adult worm pGADT7 'prey' library transformed in AH109 (donated by Professor Alex Loukas, James Cook University, Australia), plated onto triple dropout synthetic medium (TDO medium, SD/-Trp/-Leu/-His) and incubated at 30 °C for 4-7 days as previously described (Geyer et al., 2018). Replica patches of all colonies were streaked onto TDO medium and quadruple dropout medium (QDO medium, SD/-Trp/-Leu/-His/-Ade), and incubated for a further 4 days to confirm activation of the ADE2 reporter. Activation of the MEL1 reporter was assayed colorimetrically on QDO containing X-α-Gal. lacZ reporter activity of all positive colonies was then tested using colony-lift assays as described in the Yeast Protocols Handbook (Clontech). Prey plasmids of interest were rescued from yeast using the Easy Yeast Plasmid Isolation kit (Clontech) and propagated in α-Select Escherichia coli cells (Bioline). Prey clones were sequenced using either the 5′ or 3′ long distance (LD) amplimer described in the Matchmaker™ Library Construction and Screening Kit (Clontech). Sequences were queried against the reference S. mansoni genome (v7.0) using BLAST.
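As a sketch of what this sequence-querying step can look like in practice (not the authors' pipeline), the following uses the standard NCBI BLAST+ command-line tools against a local copy of the genome; the file names and the database path are hypothetical placeholders.

```python
# Minimal sketch: query rescued Y2H prey sequences against a local
# S. mansoni genome BLAST database (assumes NCBI BLAST+ is installed
# and a database was built with makeblastdb). File names and the
# database path are hypothetical placeholders.
import subprocess
import csv

QUERY_FASTA = "prey_clones.fasta"   # hypothetical: sequenced prey inserts
BLAST_DB = "smansoni_v7/genome"     # hypothetical: makeblastdb output prefix

def run_blastn(query, db, out="hits.tsv"):
    """Run blastn with tabular output (outfmt 6) and return parsed hits."""
    subprocess.run(
        ["blastn", "-query", query, "-db", db,
         "-outfmt", "6 qseqid sseqid pident length evalue bitscore",
         "-evalue", "1e-10", "-out", out],
        check=True,
    )
    with open(out) as fh:
        return list(csv.reader(fh, delimiter="\t"))

if __name__ == "__main__":
    for qid, sid, pident, length, evalue, bits in run_blastn(QUERY_FASTA, BLAST_DB):
        print(f"{qid}: hit {sid} ({pident}% identity over {length} bp, E={evalue})")
```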
All schistosome open reading frames (ORFs) were checked to ensure that they were in frame with the GAL4 activation domain (GAL4-AD), thereby ensuring correct expression of the fusion proteins in yeast. To confirm protein-protein interactions, a representative of each identified prey (Sm14 ΔE3/Smp_095360.2; dynein light chain/Smp_158660) and its respective bait (SmVAL6 T2, SmVAL6 T1) were co-transformed into the Y2HGold strain (Gietz et al., 1992). Full-length Sm14 (Sm14 FL/Smp_095360) was also co-transformed with SmVAL6 T1 into the Y2HGold strain. p53 + SV40 large T-antigen and lamin C + SV40 large T-antigen bait-prey combinations supplied with the Clontech kit were used as positive and negative controls, respectively. To test for prey auto-activation, the empty pGBKT7 vector was co-transformed with each isolated prey library construct. Co-transformants were selected for on SD/-Trp/-Leu (DDO, double dropout medium) following incubation at 30 °C for 3-4 days. Growing colonies were then replica streaked onto DDO, QDO/+X-α-Gal and QDO/+X-α-Gal/+Aureobasidin A selection media to confirm protein-protein interactions in yeast. Quantitative β-galactosidase assays were carried out to assess the relative strength of protein-protein interactions. Ortho-nitrophenyl-β-galactoside (ONPG) assays were carried out as described in Clontech's Yeast Protocols Handbook. Pellet X-β-Gal (PXG) assays were carried out as previously described (Möckli and Auerbach, 2004).

Structural modelling of SmVAL6 complexes

Structural models of a truncated form of Smp_124050/SmVAL6 (UniProt ID Q1XAN2 (UniProt, 2009)) and of Smp_158660/DLC (UniProt ID G4LZ86 (UniProt, 2009)) were derived by homology modelling using M4T (Fernandez-Fuentes et al., 2007). The crystal structures of GAPR-1 (PDB code 1smb (Berman et al., 2000; Serrano et al., 2004)) and of the DLC TcTex-1 (PDB code 1ygt (Williams et al., 2005)) were used as templates to model SmVAL6 and DLC, respectively. The expected value (E-value) and the percentage of conserved residues (shown in parentheses) between GAPR-1 and SmVAL6 and between TcTex-1 and DLC are 2e−21 (60%) and 1e−10 (55%), respectively, with sequence coverage above 90% in both cases. The quality and stereochemistry of the models were assessed using ProSA-II (Sippl, 1993) and PROCHECK (Laskowski et al., 1993), respectively. The structural modelling of protein complexes was done using rigid-body docking. The structural models of SmVAL6 and DLC described above and the crystal structure of Smp_095360/Sm14 (PDB code 1vyg (Angelucci et al., 2004)) were used as inputs to derive models for the binary complexes SmVAL6-DLC and SmVAL6-Sm14. The docking space was sampled using ZDOCK 3.02 (Mintseris et al., 2007), generating 10,000 docking poses for each of the complexes. The docking complexes were then ranked using pyDOCK (Cheng et al., 2007). The top 100 docking poses were used to compute the preferred SmVAL6 interface patches for both Smp_158660/DLC and Smp_095360/Sm14.

Smval6 temporal and spatial expression

While our earlier studies revealed developmentally regulated and gender-associated Smval6 expression in schistosomes (Chalmers et al., 2008; Rofatto et al., 2012) (confirmed here in Fig. 1A by both qRT-PCR and DNA microarray analyses), the spatial expression of this parasite gene product was not thoroughly described in those investigations. Therefore, to reveal where Smval6 expression was found throughout adult schistosome tissues, we conducted WISH assays in both male and female parasites (Fig. 1B).
Broadly supporting previous studies (Rofatto et al., 2012; Fernandes et al., 2017), adult male Smval6 expression was found to be concentrated in both the oral (Fig. 1B, red box) and ventral (Fig. 1B, yellow box) suckers. However, our results also demonstrated wide distribution of Smval6 expression throughout adult male mesenchymal tissues (Fig. 1B, black box) as well as in the anterior region (Fig. 1B, cyan box) of the oesophageal gland (AOG (Li et al., 2013)). A similar pattern was also observed in female schistosomes. However, as females express Smval6 at much lower levels compared with males (Fig. 1A), the mesenchymal, ventral sucker and AOG signals were much weaker in this sex. In fact, in this individual female, Smval6 expression was completely absent from the oral sucker. To further define which tissues throughout the mesenchyme were expressing Smval6, we consulted a single-cell RNA-seq (scRNA-Seq) atlas of adult schistosomes (Wendt et al., 2020). Although Smval6 expression was found in a variety of mesenchymal tissues (e.g. neurons, flame cells and neoblasts), it was particularly enriched in tegumental cell bodies (Fig. 1C and Supplementary Fig. S1).

Smval6 siRNA-mediated knockdown

Whereas functions for distantly related SmVALs have been linked to lipid binding (SmVAL4 (Kelleher et al., 2014)), extracellular matrix remodelling (SmVAL9 (Yoshino et al., 2014)) and plasminogen binding (SmVAL18 (Fernandes et al., 2018)), a role for SmVAL6 in any aspect of schistosome biology or host interactions has yet to be determined. Therefore, to assess the significance of Smval6 loss-of-function in both adult male and female schistosomes (lifecycle stages where Smval6 localisation is known; Fig. 1B and C), siRNA-mediated knockdown was employed (Fig. 2). Here, RNAi was reproducibly efficient in suppressing Smval6 transcript levels in both sexes (by 68% in males and 78% in females) compared with control worms (siLuc treated), as quantified by qRT-PCR (Fig. 2A). This siSmval6-mediated reduction in transcript levels also correlated with measurable decreases in protein abundance, as determined by western blot analyses (using a polyclonal antiserum raised against recombinant SmVAL6 (Rofatto et al., 2012)) of soluble male worm extracts (Fig. 2B). We were unable to detect SmVAL6 in soluble female extracts (regardless of the siRNAs used), presumably due to the low abundance of this protein ((Rofatto et al., 2012) and inferred from Fig. 1A). Consistent with the enriched expression of Smval6 in the parasite tegumental cell bodies, we noted that the surface membranes of siSmval6-treated parasites were noticeably affected in comparison to siLuc controls (Supplementary Fig. S2), suggesting a SmVAL6-regulated phenotype (membrane integrity). Therefore, to objectively quantify surface membrane integrity differences between siLuc- and siSmval6-treated worms, a recently described method for fluorescently labelling (using biotin-TAMRA-dextran) the tegument and sub-tegumental projections/cell bodies was utilised (Wendt et al., 2018) (Fig. 3). Here, using this live worm labelling technique, a clear and significant increase in dextran permeability through the tegument and into the sub-tegumental cell bodies of siSmval6-treated adult female (Fig. 3A) and male (Fig. 3B) worms was observed compared with control siLuc-treated parasites.
While this increase in surface permeability was not as dramatic as that seen in adult schistosomes exposed to hypotonic conditions (Supplementary Fig. S3), these results clearly illustrated the importance of SmVAL6 in mediating tegumental integrity.

SmVAL6 interacting partners

Identifying SmVAL6 interacting proteins or complexes may help further define the role of this particular SmVAL but, more importantly, may provide an explanation for the surface membrane damage observed in siSmval6-treated adult worms. To this end, we performed Y2H screens of adult schistosome cDNA libraries to search for potential SmVAL6 protein interactors (Fig. 4). Two different SmVAL6 constructs were created for these Y2H screens: SmVAL6 T1, containing only the SCP/TAPS domain (Fig. 4A), and a second truncated variant, SmVAL6 T2.

(Fig. 3 legend) Schistosoma mansoni venom allergen-like protein SmVAL6 regulates tegumental barrier function in adult schistosomes. Seven-week-old male and female schistosomes were electroporated with either short interfering (si)Smval6 or siLuciferase (siLuc) duplexes (as described in section 2 and the Fig. 2 legend). At 7 days, the worms were labelled with biotin-TAMRA-dextran, fixed, labelled with Alexa Fluor 488-conjugated phalloidin and mounted onto microscope slides as previously described (Wendt et al., 2018). Mounted worms were then subjected to LSCM as described in section 2. (A) Representative view of adult female worms (siLuc, n = 3; siSmval6, n = 3) together with a box-and-whisker chart of all collected data (siLuc sections from three worms, n = 67; siSmval6 sections from three worms, n = 55). (B) Representative view of adult male worms (siLuc, n = 3; siSmval6, n = 3) together with a box-and-whisker chart of all collected data (siLuc sections from three worms, n = 51; siSmval6 sections from three worms, n = 42). Alexa Fluor 488-conjugated phalloidin, green; biotin-TAMRA-dextran, pink (outside) and yellow (inside). Scale bars = 10 µm.

Here, we focused on SmVAL6 T1 for these PPI assays due to the greater affinity of this variant for the targets identified (Fig. 4B). Despite not being detected in the original Y2H screens (possibly related to low abundance of this variant in the Y2H library), PXG assays demonstrated that Sm14 FL was, indeed, capable of binding to SmVAL6. However, the Sm14 FL/SmVAL6 interaction strength was slightly less than that detected for Sm14 ΔE3/SmVAL6 interactions (not significant).

(Fig. 5 legend, partial) (A) Expression of smp_095360 (sm14), smp_158660 (dlc) and smp_124050 (Smval6) found within the 68 adult worm clusters generated from available scRNA-Seq data (Wendt et al., 2020). (B) Binary SmVAL6-DLC and SmVAL6-Sm14 complexes were created as described in section 2. The top 100 docking poses (a) were used to compute the preferred SmVAL6 interface patches for both Smp_158660/DLC (red spheres) and Smp_095360/Sm14 (blue spheres). The average preferred SmVAL6 interacting patch (b) is also indicated for DLC (red) and Sm14 (blue); notice how these occupy opposite faces of SmVAL6.

SmVAL6/Sm14 FL/DLC are co-expressed and are likely to form higher-order protein complexes in adults

Interrogation of available scRNA-Seq data was used to provide evidence in support of the SmVAL6/Sm14 FL and SmVAL6/DLC PPIs identified in the Y2H assays (Fig. 5A). In adult schistosomes, Sm14 expression was found disseminated throughout most cell types, including the mesenchyme/parenchyme clusters (Fig. 5A and Supplementary Fig. S3); this localisation broadly supported the Sm14 protein distribution reported by Moser et al. (1991) and Gobert (1998) in S. mansoni and S. japonicum, respectively.
While dlc was also found broadly distributed throughout adult schistosome tissues, it was less abundant than Sm14 (with the exception of flame cells and neurons). Importantly, clusters of Smval6+ cells (e.g. neurons, tegumental cells, neoblasts) were also found to be co-expressing both dlc and Sm14 (Fig. 5A and Supplementary Fig. S1). The scRNA-Seq data therefore indicated that Smval6, Sm14 and dlc are potentially co-expressed in a small proportion of cells throughout adult tissues. As the Y2H results also demonstrated that SmVAL6 T1 formed specific and direct interactions with both Sm14 FL and DLC (Fig. 4), we initiated molecular modelling with the three schistosome proteins to understand how these molecular complexes could be formed (Fig. 5B). Specifically, we used the SCP/TAPS region of SmVAL6 (SmVAL6 T1 only contained the SCP/TAPS domain; Fig. 4A) as well as full-length Sm14 and DLC to construct the models. Examining the top 100 (out of 10,000) docking poses illustrated that two distinct SmVAL6 interfaces were predominantly used for Sm14 and DLC interactions (Fig. 5Ba). While some overlap between the Sm14 and DLC interfaces existed in the predicted models, the dominant SmVAL6 interacting interfaces, derived from averaging the top 100 docking poses for Sm14 and for DLC, were on opposite faces of the SCP/TAPS domain (Fig. 5Bb).

Discussion

The SmVAL family contains both excreted/secreted (SmVAL group 1) as well as non-secreted (SmVAL group 2) members (Chalmers and Hoffmann, 2012). Despite temporal and spatial investigations revealing developmental (Chalmers et al., 2008) and tissue-associated patterns (Rofatto et al., 2012; Fernandes et al., 2017; Farias et al., 2019), how these proteins participate in parasite biology or host interactions remains largely enigmatic. Where functional studies have been performed, these have been restricted to the group 1 SmVALs and have indicated roles in host extracellular matrix reorganisation (SmVAL9) (Yoshino et al., 2014) as well as in plasminogen (SmVAL18 (Fernandes et al., 2018)) and lipid (SmVAL4 (Kelleher et al., 2014)) binding. Due to a dearth of information related to the biology of group 2 SmVALs, we present here the first functional investigation of a representative family member, SmVAL6. In addition to confirming previous studies describing the spatial distribution of Smval6 in the oral and ventral suckers as well as the tegumental cells of adult schistosomes (Rofatto et al., 2012; Fernandes et al., 2017), we additionally localise this transcript to the AOG in both sexes (Fig. 1). Intriguingly, another group 2 SmVAL (Smval13) localises to the AOG; this contrasts with the localisation of Smval7 (a group 1 SmVAL), which is enriched in the posterior region of the oesophageal gland (POG) (Fernandes et al., 2017). The only other transcripts localised (thus far) to the adult schistosome AOG are the microexon genes (MEGs) 12, 16 and 17 as well as phospholipase A2, while a total of 11 MEGs, two lysosomal hydrolases and one glycosyltransferase are localised to the POG (DeMarco et al., 2010; Li et al., 2013; Wilson et al., 2015). While the oesophageal gland contributes to erythrocyte/leukocyte lysis and digestion, the specific roles of SmVAL7, SmVAL13 and now SmVAL6 in this process are currently unknown.
However, as the AOG has been postulated to be a holding area for cells during schistosome feeding, in preparation for transport into the POG where cellular lysis occurs (Li et al., 2013), perhaps the SmVALs participate differentially in this critical process, with excreted/secreted group 1 SmVALs contributing to lysis and non-secreted group 2 SmVALs contributing to maintaining the structural characteristics required for receiving a cellular bolus. Nevertheless, the WISH/scRNA-Seq localisation of Smval6 to the adult worm AOG, suckers and tegumental/mesenchymal cells in the current study, combined with the proteomic characterisation of SmVAL6 in adult worm tegumental fractions (van Balkom et al., 2005; Rofatto et al., 2012), suggests that this group 2 SmVAL may have more than one function. To help shed light on this subject, siSmval6-mediated RNAi of adult worms and Y2H screening for SmVAL6 interactors were subsequently performed. The most striking phenotype observed in siSmval6-treated adult male and female worms was an increased distribution of biotin-TAMRA-dextran across the tegument and into the sub-tegumental cell bodies below the muscle layer (Fig. 3). This result indicates that SmVAL6 directly or indirectly participates in tegumental barrier function, in addition to oesophageal gland activities, in adult schistosomes. Two previous proteomics investigations identifying SmVAL6 in tegumental fractions (van Balkom et al., 2005; Rofatto et al., 2012), coupled with our WISH/scRNA-Seq localisation of Smval6 to sub-tegumental (amongst other) cell bodies (Fig. 1B), support this assertion by providing complementary spatial contexts for the role of SmVAL6 in tegumental biology. Within the tegument or sub-tegumental cell bodies, SmVAL6 (one or more of its alternatively spliced isoforms) may associate with cytoplasmic/organelle/surface membranes due to its predicted palmitoylation post-translational modifications (Rofatto et al., 2012). Interactions with organelle membranes have previously been described for Golgi-associated PR-1 protein (GAPR-1) (Eberle et al., 2002), a cytoplasmic human homolog of SmVAL6. While this interaction is facilitated by GAPR-1 myristoylation (a related protein lipidation), it is also dependent on interactions with caveolin-1 (Eberle et al., 2002). Further studies of GAPR-1 have demonstrated that this protein can also self-assemble into oligomeric fibrils (Eberle et al., 2002; Olrichs et al., 2014) and can interact with both beclin-1, a potent inducer of autophagy (Shoji-Kawata et al., 2013), and TMED7, a TRAM-TRIF signalling pathway inhibitor (Zhou et al., 2016). Taken together, group 2 SCP/TAPS domain-containing proteins such as SmVAL6 and GAPR-1 contain features that stabilise protein/lipid (perhaps similar to the group 1 SmVAL4 (Kelleher et al., 2014)) or protein/protein interactions. The surface membrane disruption phenotype induced by Smval6 RNAi (Fig. 3), as well as the interaction of SmVAL6 with both the fatty acid binding protein Sm14 (FL and ΔE3 variants; Smp_095360.1 and Smp_095360.2, respectively) and a DLC (Smp_158660) (Fig. 4), mutually support this contention. However, it is currently unknown whether the marginal difference in interaction strength found for SmVAL6 with the Sm14 ΔE3 versus Sm14 FL variants (Fig. 4) affects the competitive regulation of lipid transfer or membrane turnover within schistosomes. This, together with identifying how SmVAL6 mediates these particular or additional molecular interactions, requires more thorough investigation.
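For context, the relative interaction strengths compared here derive from the quantitative β-galactosidase readouts described in the methods. The sketch below shows how ONPG readings are conventionally converted to Miller units; the formula is the standard assay calculation rather than this paper's exact protocol, and all readings and sample labels are invented placeholders.

```python
# Minimal sketch: relative Y2H interaction strength from ONPG readings,
# using the standard Miller-unit formula. All values are invented.
def beta_gal_units(od420, minutes, culture_ml, od600):
    """Miller units = 1000 * OD420 / (t[min] * V[mL] * OD600)."""
    return 1000.0 * od420 / (minutes * culture_ml * od600)

# Hypothetical triplicate readings (OD420, time, volume, OD600) for two
# bait-prey combinations discussed in the text.
readings = {
    "SmVAL6_T1 + Sm14_dE3": [(0.42, 30, 1.0, 0.55), (0.45, 30, 1.0, 0.60), (0.40, 30, 1.0, 0.52)],
    "SmVAL6_T1 + Sm14_FL":  [(0.35, 30, 1.0, 0.58), (0.33, 30, 1.0, 0.54), (0.36, 30, 1.0, 0.57)],
}

for pair, reps in readings.items():
    units = [beta_gal_units(*r) for r in reps]
    mean = sum(units) / len(units)
    print(f"{pair}: {mean:.1f} Miller units (n={len(units)})")
```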
As an initial step in addressing how SmVAL6 mediates the molecular interactions of the proteins identified in this study, we undertook two different approaches. In the first approach, scRNA-Seq localisation of Smval6, Sm14 and dlc transcripts revealed the co-expression of all three transcripts in a small population of adult schistosome cells (i.e. tegumental cells, neurons and neoblasts) (Fig. 5A). These results provide in situ support for the Y2H findings and imply that SmVAL6-Sm14 and SmVAL6-DLC interactions could occur within adult schistosomes due to their spatial co-expression/co-localisation. For SmVAL6 and Sm14, additional experimental support for the spatial localisation of these two proteins was recently provided by the analysis of adult worm extracellular vesicles (EVs). In this previous investigation, the 15K EV pellet contained both SmVAL6 and Sm14 in sufficient quantities to be detected by the LC-MS/MS methodologies employed (Kifle et al., 2020). Interestingly, these two proteins were found within the peripheral membrane proteomes of the 15K EV pellet, indicating their association with lipid-rich compartments and further supporting a role for SmVAL6 in maintaining membrane integrity. An in silico modelling approach was subsequently used to predict how SmVAL6 mediates these distinct protein-protein interactions. Here, we found that opposing faces of the SCP/TAPS region of SmVAL6 were differentially used to drive interactions with Sm14 or DLC (Fig. 5B). This finding agrees with the supposition that parasite (and possibly other metazoan) SCP/TAPS domains may operate as flexible tertiary structures critical to roles (known and unknown) in diverse functional contexts (Hewitson et al., 2011). With regard to schistosome SmVALs, considering how opposing SCP/TAPS faces orchestrate diverse aspects of host interactions (predominantly group 1 SmVALs) and parasite-specific activities (predominantly group 2 SmVALs) should contribute to an increased functional understanding of this enigmatic platyhelminth protein family (Chalmers and Hoffmann, 2012). Taken together, we provide direct evidence that SmVAL6 is necessary for maintaining the barrier function of the tegument, likely through its interactions with both lipid and protein membrane constituents. Further roles in oesophageal function are implied, demonstrating that this group 2 SmVAL may participate in diverse functions critical to schistosome biology.
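As an illustration of the voxel-thresholding volume measurement that underpins the permeability phenotype reported above, here is a minimal sketch assuming the biotin-TAMRA-dextran channel of a z-stack has been exported as a NumPy array. The intensity cut-offs (10 a.u. for females, 15 a.u. for males) and the 0.365 µm z-step follow the values stated in the methods; the x/y pixel size and the array contents are placeholders standing in for real image data.

```python
# Minimal sketch: dextran-positive volume from a confocal z-stack.
# The array below is random placeholder data shaped roughly like the
# 123 x 50 x 18 um region described in the methods (50 z-sections at
# 0.365 um/step; x/y pixel size assumed equal to the z-step).
import numpy as np

rng = np.random.default_rng(0)
stack = rng.integers(0, 50, size=(50, 137, 337)).astype(float)  # (z, y, x), a.u.

VOXEL_UM3 = 0.365 ** 3  # assumes isotropic 0.365 um voxels

def dextran_volume(stack, threshold):
    """Total volume (um^3) of voxels above an intensity threshold (a.u.)."""
    return np.count_nonzero(stack > threshold) * VOXEL_UM3

print("volume at female threshold (>10 a.u.):", dextran_volume(stack, 10))
print("volume at male threshold   (>15 a.u.):", dextran_volume(stack, 15))
```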
Genetic programs can be compressed and autonomously decompressed in live cells

Fundamental computer science concepts have inspired novel information-processing molecular systems in test tubes [1-13] and genetically encoded circuits in live cells [14-21]. Recent research showed that digital information storage in DNA, implemented using deep sequencing and conventional software, can approach the maximum Shannon information capacity [22] of 2 bits per nucleotide [23]. DNA is used in nature to store genetic programs, but the information content of natural encoding rarely approaches this maximum [24]. We hypothesize that the biological function of a genetic program can be preserved while reducing the length and increasing the information content of its DNA encoding. Here we support this hypothesis by describing an experimental procedure for compressing a genetic program and its subsequent autonomous decompression and execution in human cells. As a test-bed we choose an RNAi cell classifier circuit [25] that comprises redundant DNA sequence and is therefore amenable to compression, as are many other complex gene circuits [15, 18, 26-28]. In one example, we implement a compressed encoding of a ten-gene four-input AND gate circuit using only four genetic constructs. The compression principles applied to gene circuits can enable fitting complex genetic programs into DNA delivery vehicles with limited cargo capacity, and storing compressed and biologically inert programs in vivo for on-demand activation.

Second, we confirmed the dependency of the process on sequence inversion by deleting one of the LoxP sites in the sensor module. This is expected to prevent the decompression process and lead to a single-input YES-type circuit logic, which is indeed the case (Supplementary Fig. 12). Finally, we verified that the mimics have only a very minor to no effect on the reverse-complement sensor (Supplementary Fig. 13). We further constructed compressed and control circuits implementing "miR-21 AND miR-20a" logic. We observed a similar dependency of decompression efficiency on the modifications described above, with a particularly strong effect of the polyA insertion between the targets (Supplementary Fig. 14).

Optimization of the decompression process of the three-input circuit

First, we used a second SV40-polyA to block transcription after the miR-20a target and employed LoxP and LoxFAS [5] heterospecific recombination sites, all implemented in the rtTA gene cassette, while the repressor cassette was kept in a split configuration. Regardless of the orientation of the terminator sequence, some input combinations resulted in a relatively low dynamic range (Supplementary Figs. 16c and 16d). Next, we tested another terminator sequence, GAPDH-polyA, which is known to promote cleavage of the pre-mRNA at the polyA site [6], resulting in complete insensitivity to the downstream miRNA target (Supplementary Fig. 17a) and a strong improvement of the overall performance (Supplementary Fig. 16e). To implement the three-input UTR in the repressor cassette, the spliced synthetic microRNA (miR-FF4) had to be relocated upstream of the terminator sequence and embedded in the LacI coding sequence, which did not affect the repression capacity of the system (Supplementary Fig. 17b) but was slightly detrimental to the dynamic range with some input combinations (Supplementary Fig. 16f). Careful analysis of the results indicated that high leakage was observed when the recombinase engaged the LoxFAS sites.
To address this, these recombination sites were exchanged for Lox5171 [7], resulting in a reduction of leakage in the off state with the problematic input combinations (Supplementary Fig. 16g).

Optimization of the decompression process of the four-input circuit

We tested four combinations of recombination sites and obtained the best results with LoxP, Lox5171 and LoxFAS (Supplementary Fig. 20), although the LoxFAS site recombines less optimally than LoxP and Lox5171 (Supplementary Figs. 16f and 16g). The GAPDH-polyA can also be used in the reverse orientation to block sensitivity to miRNA targets placed downstream of the transcriptional terminator (Supplementary Fig. 21).

Generalization of the compression and decompression procedure

First, we note that only circuits that contain redundant sequences can be compressed by our method. Further, our method is only suitable when the redundant and the unique sequences are long enough that they can be inverted with a recombinase. Lastly, certain types of redundancies, such as tandem repeats, cannot be compressed by the general procedure described in this work.

Generalized constructs and decompression kinetics: We show a generalization of the compression protocol in Supplementary Fig. 23a. It is an extension of the mechanism shown in Fig. 4 (single-plasmid approach). We show this in detail for a case where the redundant (recurring) component is a promoter and a coding sequence, and the varying sequence is in the 3'-UTR of the coding sequence (such as a miRNA target). In the uncompressed case, the circuit might contain N different constructs, each containing the promoter P, the coding sequence CDS and a unique 3'-UTR component "T", for a total of N different 3'-UTR elements T1, T2, T3, ..., TN. Let us assume that N is even; otherwise we use the last target site twice, in both the forward and the inverted orientation. In the compressed case (Supplementary Fig. 23a), a single construct can be built that utilizes N-1 heterospecific recognition sites for a site-specific recombinase, R1, R2, R3, ..., RN-1 (under the assumption that N-1 such sites can indeed be found). The 3'-UTR of the compressed construct consists of N/2 modules comprising pairs of target sites: one site in the correct, active orientation, and the second in the inverted orientation. The sites are separated by a spacer containing a polyA signal termed pA. The modules are separated by transcriptional terminator sequences ("Stop" signals). Half of the heterospecific sites, all facing in the same direction, are placed upstream of the modules (R1, R3, ..., RN-1). For each module, the pair of targets is flanked with a face-to-face pair of sites unique to the module (R2 for module 2, R4 for module 3, ..., RN-2 for module N/2; in the general case, R2(i-1) for module i, with 2≤i≤N/2). Lastly, each module is bracketed at the 3'-end by a recombination site R2i-1 (for 1≤i≤N/2) that faces a matching site in the sites panel located directly downstream of the stop codon. This configuration is called the starting configuration and corresponds to construct [1] in Supplementary Fig. 23a. From this configuration, the way to activate the site Ti, for even i, is to engage the recombinase with the sites Ri-1. This will position the site Ti in the correct orientation after the stop codon and prevent the rest of the sites from being engaged, due to the presence of the stop signal.
Next, the engagement of the module-specific recognition sites Ri-2 will invert the sequence and position the target Ti-1 in the active position. We note that engagement of the site pair Ri-1 temporarily removes all the targets T1, T2, ..., Ti-2 from being two steps away from decompression, because the recognition sites R1, R3, ..., Ri-3 are no longer upstream of the target modules. To engage these targets, the inversions need to proceed in a reverse sequence that returns the required forward-facing recombinase site (one of the sites R1, R3, ..., Ri-3) to the upstream position. This takes more steps for lower i, with the worst case being target T3, as shown in Supplementary Fig. 26f. However, the process is accelerated when a number of identical constructs are decompressed simultaneously in the same cell, a common occurrence in real-life delivery of DNA or viral vectors (Supplementary Fig. 26g).

Compression ratio simulations: To calculate the compression efficiency for different cases, we write down equations for the cumulative lengths of the compressed and the source circuits, with |X| representing the length of a fragment X in base pairs. The size of the compressed cassette [1] in Supplementary Fig. 23a with N targets (T's) is the sum of the lengths of its single promoter and CDS plus those of its N targets, its recombination sites, its polyA spacers and its stop signals, while the corresponding uncompressed circuit with N constructs has a total footprint of N times the length of one promoter-CDS-target unit (an illustrative arithmetic sketch of these relationships is given at the end of this supplement). When we include the size of the recombinase in the total size of the compressed cassette, we modify the formula for the compressed cassette accordingly. The compression ratio with the fixed component included, as a function of the number of compressed constructs (M), is shown in rows 7-9 of Supplementary Fig. 27. When we include the size of the recombinase in the total size of the compressed circuit, we modify the formula for the compressed circuit without changing the formula for the source circuit. The compression ratio when both the recombinase and the fixed component are taken into account, as a function of the number of compressed constructs, is shown in rows 10-12 of Supplementary Fig. 27.

Plasmid construction

Synthetic gene fragments and oligos used for plasmid constructs are listed in the Supplementary Tables. Recombination reactions were incubated with 1 unit of iCre recombinase (NEB) for 30 min at 37 °C, followed by heat-shock transformation. The plasmid harboring the proper recombination was identified by sequencing.

Plasmids reported previously

The following plasmids were reported previously in Lapique and Benenson (2014).

The mean of all areas under the ROC curve (AuROC) is calculated, followed by subtraction of 0.5, and shown on the plot (we call this the "advantage value"). To allow a direct visual comparison, the graph of the compressed circuit is rotated 180 degrees and juxtaposed over the graph of the control circuit. Note that "compressed circuit" refers to a compressed circuit that is co-delivered with iCre recombinase and decompressed in situ in live cells.

Supplementary Figure 2: Performance characterization of a compressed miRNA classifier. (a) Circuit diagrams of the compressed "miR-21 AND miR-146a" logic gate and its control. (b) Each row shows a different transfection percentile used to build a ROC curve relative to a specific off state (no input, miR-21 input or miR-146a input) vs. the on state (miR-21 & miR-146a inputs).
Areas between the ROC curves and the discrimination line (diagonal), which we term "advantage values", are shown in blue with the respective values in white: control circuits are in light blue with corresponding axes on the left and at the bottom, and compressed circuits are in dark blue with corresponding axes on the right and at the top. (c) On the left, population-averaged output values for different input states. Each bar shows the mean of a biological triplicate with the error bars showing ± 1 standard deviation. On the right, representative flow cytometry plots with indication of the transfection percentiles used in the ROC curve analysis. Note that "compressed circuit" refers to a compressed circuit that is co-delivered with iCre recombinase and decompressed in situ in live cells.

In vitro target recombination. On the left, the cartoon describes the protocol used to assess the recombination efficiency. In the middle, recombination efficiency calculated using 24 picked colonies for each construct following the indicated incubation time with iCre. Bar color coding matches the colors of the rectangles framing the schemes on the right, which depict the target configurations with and without insertion of a polyA between the miRNA targets. (c) Output levels of the "miR-146a AND miR-21" logic circuit measured in the miR-21 POS miR-146a NEG off state with varying composition of miR-21 and miR-146a sensor components, as indicated on the horizontal axis. On the left, output levels generated by circuits without a polyA between miRNA targets, and on the right, output levels generated by circuits with a polyA between miRNA targets. The dashed lines show the dependency of the output level of the compressed circuits (Y axis) on sensor composition (X axis). The red dot indicates the output obtained with the control circuit in the presence of iCre. The measurements were fitted to the power law y = 1173.9·x^(-0.54) on the left plot, and to the power law y = 1733.2·x^(-0.65) on the right plot. The error bars show ± 1 standard deviation of a biological triplicate. (d) On and off output measurements of single-input sensors in, respectively, the presence and absence of the corresponding miRNA mimics. The schemes at the top depict the different target configurations. On the left, performance of the miR-21 sensor. On the right, performance of the miR-146a sensor. Note that "compressed circuit" refers to a compressed circuit that is co-delivered with iCre recombinase and decompressed in situ in live cells. Each bar shows the mean of a biological triplicate with the error bars showing ± 1 standard deviation. The bars are color-coded to correspond to the colors of the rectangles framing the respective sensor configurations.

Supplementary Figure 5: Characterization of compressed and control two-input logic AND gate circuits with SV40-polyA between miRNA targets. At the top, schematics of circuit decompression. A miRNA targeting a particular structure with a blunted arrow indicates the cognate input for this sensor. Output levels of compressed and control circuits of the two-input AND gate and ROC curve analysis relative to the miR-21 POS miR-146a NEG input state are shown at the bottom. Each bar shows the mean of a biological triplicate with the error bars showing ± 1 standard deviation.
Areas between the ROC curves and the discrimination line (diagonal) are shown in blue with the respective values in white: control circuits are in light blue with corresponding axes on the left and at the bottom, and compressed circuits are in dark blue with corresponding axes on the right and at the top. Note that "compressed circuit" refers to a compressed circuit that is co-delivered with iCre recombinase and decompressed in situ in live cells. (a) Compressed and control AND gate circuits in the miR-21 POS miR-146a NEG off state. Shorthand circuit diagrams are shown at the top, with a miRNA targeting a particular structure with a blunted arrow indicating the cognate input for this sensor. Integrated output without transfection normalization is shown at the bottom at different time points. (b) Dynamic range calculated from miR-21 POS miR-146a NEG (off) vs. miR-21 POS miR-146a POS (on). Note that "compressed circuit" refers to a compressed circuit that is co-delivered with iCre recombinase and decompressed in situ in live cells.

At the top, schematic of circuit decompression using shorthand notation. miRNAs targeting the boxed shorthand sensor notations correspond to the cognate inputs of these sensors. Output levels of compressed and control circuits of the two-input AND gate and ROC curve analysis of the miR-21 POS miR-146a NEG state are shown at the bottom. Each bar shows the mean of a biological triplicate with the error bars showing ± 1 standard deviation. Areas between the ROC curves and the discrimination line (diagonal) are shown in blue with the respective values in white: control circuits are in light blue with corresponding axes on the left and at the bottom, and compressed circuits are in dark blue with corresponding axes on the right and at the top. Note that "compressed circuit" refers to a compressed circuit that is co-delivered with iCre recombinase and decompressed in situ in live cells.

Shorthand illustration of the decompression process beginning with a forward-facing miR-21 target and a backward-facing miR-146a target (T21-T146a Rev), or from the mirror configuration (T146a-T21 Rev). miRNAs targeting the boxed shorthand sensor notations correspond to the cognate inputs of these sensors. Below, we show the full truth table of the logic AND gate with corresponding measured output levels, and a detailed ROC analysis of the miR-21 POS miR-146a NEG input state for the miR-21 → miR-21 + miR-146a decompression and for the miR-146a → miR-21 + miR-146a decompression. Each bar shows the mean of a biological triplicate with the error bars showing ± 1 standard deviation. Areas between the ROC curves and the discrimination line (diagonal) are shown in blue and purple with the respective values in white: control circuits are in light blue with corresponding axes on the left and at the bottom, and compressed circuits are in dark blue or purple with corresponding axes on the right and at the top. Note that "compressed circuit" refers to a compressed circuit that is co-delivered with iCre recombinase and decompressed in situ in live cells. See also Fig. 2. (a) Comparison of decompression efficiency between four different circuit variants, represented by four heatmap blocks. Each variant is characterized by a particular miRNA target arrangement (columns) and the delay in output availability (rows). Within each heatmap, the rows correspond to the three different off states, and the columns to the percentile of transfected cells that were used to calculate the ROC curves and AuROC values.
The ratios of "advantage values" between the control and the compressed circuits are indicated using the colorbar shown at the bottom left of the heat map. A value of 1 (red) indicates identical classification performance between the control and the compressed circuits; a heatmap with a uniform red hue therefore indicates identical performance between the compressed and the control circuits across all conditions. (b) Detailed data showing the classification performance of the best circuit variant. On the left, population-averaged output values for different input states. Each bar shows the mean of a biological triplicate with the error bars showing ± 1 standard deviation. On the right, representative flow cytometry plots. Note that "compressed circuit" refers to a compressed circuit that is co-delivered with iCre recombinase and decompressed in situ in live cells.

Supplementary Figure 16: Optimization of three-input AND gate circuit compression using a single-module approach. (a) Schematics of the iterative recombination used to generate all the sensor variants of the three-input AND gate circuit from a single module, using shorthand notation. miRNAs targeting the boxed shorthand sensor notations correspond to the cognate inputs of these sensors. See also Fig. 3 for the detailed interconversion scheme. The program senses miR-X before decompression, and deploys miR-X, miR-Y and miR-Z sensors after addition of iCre. Each transcript is only capable of sensing one miRNA input thanks to a transcriptional terminator (STOP sign) inserted between the miRNA targets in the forward orientation. (b-g) Iterative circuit optimization process. Key design elements and performance characteristics of each variant are shown: diagrams of the compressed sensor genes rtTA and LacI (top), and the averaged output values for different input combinations together with the lowest dynamic range of the truth table evaluation (bottom). The control circuit is specific for each compressed design and comprises a premade mixture of in vitro generated recombination products. (b) Compression using a split module; (c-e) compression using split LacI/miR-FF4 cassettes and a single rtTA cassette with decompression encoded with LoxP & LoxFAS and a transcriptional block implemented with SV40 polyA in one (c) or the other (d) orientation, or with GAPDH polyA (e); (f-g) compression using a single LacI/miR-FF4 cassette and a single rtTA cassette with a transcriptional block implemented with GAPDH polyA and decompression encoded with LoxP & LoxFAS sites (f) or with LoxP & Lox5171 sites (g). Each bar shows the mean of a biological triplicate with error bars showing ± 1 standard deviation in the 2D plots and 1 standard deviation in the 3D plots.

Supplementary Figure 17: Three-input AND gate circuit compression using a single-module approach; evaluation of the transcriptional terminator and of the repression potency. (a) Sensitivity of the miRNA targets located downstream of the transcriptional block (brown rectangle). Average output of the sensor with a negative control mimic (left bar) or with miR-146a mimics (right bar). In this control experiment iCre was not added, to avoid recombination of the sensor module; accordingly, we used an output without the delay mechanism. In both cases the output is very low, indicating that miR-146a has no effect when the target is located downstream of the transcriptional block.
(b) Effect of the insertion of the miR-FF4 intron into the LacI transcript on the repression strength of the sensor module: average off-state sensor output with the miR-FF4 intron placed downstream of the LacI coding sequence (left bar), and average off-state sensor output with the miR-FF4 intron embedded in the LacI coding sequence (right bar). In both cases the output is very low, indicating strong repression by the combination of LacI and miR-FF4. Each bar shows the mean of a biological triplicate with the error bars showing ± 1 standard deviation.

Supplementary Figure 18: Performance appraisal of three-input AND gate circuit compression using a single-module approach. Compression using a single LacI/miR-FF4 cassette and a single rtTA cassette with a transcriptional block implemented with GAPDH polyA and decompression implemented with LoxP & Lox5171 sites. The heat map shows the discrepancy between the compressed circuit and the control circuit. The rows correspond to the seven different off states, and the columns to the percentile of transfected cells that were used to calculate the ratio of classification performance between the control and the compressed circuits. The ratio is color-coded and shown in the heat map, with a value of 1 (orange) indicating identical classification performance between the two circuit variants. AuROC values were calculated from all biological triplicates.

Supplementary Figure 21: Four-input AND gate circuit compression using a single-module approach: evaluation of the transcriptional terminator. Sensitivity of the miRNA targets located downstream of the GAPDH polyA in reverse orientation. Average off-state sensor output generated with a negative control mimic (left bar) or with miR-141 mimics (right bar). In this control experiment iCre was not added, to avoid recombination of the sensor module; accordingly, we used an output without the delay mechanism. Each bar shows the mean of a biological triplicate with the error bars showing ± 1 standard deviation. At the bottom, the heat map shows the performance discrepancy between the compressed circuit and the control circuit. The rows correspond to the fifteen different off states, and the columns to the percentile of transfected cells that were used to calculate the ratio of classification performance between the control and the compressed circuits. The ratio is color-coded as shown, with a value of 1 (orange) indicating identical classification performance between the two circuit variants. AuROC values were calculated from all biological triplicates.

Supplementary Figure 23: The detailed scheme of the compression generalization corresponding to the miRNA circuits described in this study. (a) The source set of constructs (not shown in the diagram) has N components that all contain the same promoter "Prom" and the same coding sequence "CDS", and a unique target site in the 3'-UTR of the gene termed Ti (1≤i≤N). Cassette [1] shows the compressed structure that is able to generate all N variants upon addition of a recombinase. Pairs of heterospecific recombinase recognition sites are depicted with triangles of different colours and labeled R1, R2, ..., RN-1. Red octagons indicate transcriptional stops and pA indicates a bidirectional polyA. Rectangles labeled with T on top indicate microRNA targets. Features with a bar on top of their label are inverted, and therefore inactive.
The conversions are exemplified for a site pair Ti-1/Ti and show the shortest path from the original compressed structure to the decompressed structures in which these targets are deployed to generate functional components. The detailed decompression procedure is explained in the Supplementary Text "Generalization of the compression and decompression procedure". (b) A similar compression scheme adapted to sets of constructs that share the same promoter but differ in their coding region sequences. Only the compressed variant is shown; the decompression is similar to the description in (a). (c) A compression scheme adapted to sets of constructs that share the same coding sequence but differ in their promoters. Only the compressed variant is shown; the decompression is similar to the description in (a).

Supplementary Figure 24: Example of XOR gate circuit compression. (a) Two-input XOR gate circuit. The source circuit diagram depicts a recently published synthetic gene network that also operates with miRNA inputs [8]. The mechanism of miRNA sensing is similar to the sensor architecture used in this work; however, the repressor layer does not contain LacI and is exclusively composed of an intronic miRNA. Accordingly, the repressor layer cannot be directly controlled by the miR input, because microRNA regulation occurs in the cytoplasm while the intron is already spliced in the nucleus [13]. Moreover, each microRNA sensor is built with a different activator and a different repressor; consequently, the redundancy between the sensors is very low. Nevertheless, the two activators share the same promoter and can be compressed using the compression procedure described in Supplementary Fig. 23b. Contrary to the AND gate circuit, the XOR gate shows strong redundancy in the output layer, which can be addressed by using the compression method described in Supplementary Fig. 23a. (b) A hypothetical three-input XOR gate circuit representing a scale-up of the approach in (a). The source circuit is composed of three activator cassettes, three repressor cassettes and four output cassettes. Similarly to the two-input gate, the three constructs in the activator layer (referred to as the source sensor in the diagram) can be compressed according to the procedure described in Supplementary Fig. 23b, and the four constructs in the output layer can be compressed into a single construct according to the procedure described in Supplementary Fig. 23a. NarLc is an activator described in Hansen et al. 2014 [14].

Supplementary Figure 25: Examples of compression using previously published circuits. (a) The diagram at the top describes a therapeutic gene circuit designed to correct insulin deficiency [15]. The original circuit contains three identical promoters, which can be compressed according to the compression scheme shown in Supplementary Fig. 23b. The source circuit is at the top and the compressed/decompressed circuit at the bottom. (b) The diagram at the top describes a gene circuit designed to perform division of arabinose concentration by AHL concentration in bacteria [16]. The black T shape in the cartoon stands for a bacterial transcription terminator. The original circuit contains three identical promoters, which are compressible according to the scheme shown in Supplementary Fig. 23b; moreover, the two identical CDSs can be compressed according to the strategy depicted in Supplementary Fig. 23c. The source circuit is at the top and the compressed/decompressed circuit at the bottom.
Supplementary Figure 26: Simulation of recombination of a multi-input compressed circuit. The simulation is based on the general compression procedure shown in Supplementary Fig. 23a. All simulation runs start from the original state depicted in (a). In total, four circuits were simulated, with respectively 4 (b), 6 (c), 8 (d) and 10 (e) miRNA inputs. Each simulation run calculates the number of recombination events required to obtain a specific target in a productive location (first position from the left). The histograms (b-e) show the results after 10,000 runs of simulation for each circuit. The X axis of the histograms represents the number of recombination events it takes to place a specific target in the productive position. The target number in the top right corner of each histogram refers to the target position in (a). (f) Each plot shows the median values of the above histograms for, respectively, the 4-, 6-, 8- and 10-input circuits. The X axis indicates the target position and the Y axis shows the median number of recombination events it takes to place the corresponding target in the first position. (g) The median number of steps it takes to reach a given target (Y axis) as a function of the number of identical copies of the compressed circuit (X axis). Each colored line represents a different target; the numbering corresponds to Supplementary Fig. 23a. The plot on the left shows the simulation for the 6-target construct and the plot on the right shows the simulation for the 10-target construct.

Supplementary Figure 27: Effect of the size of genetic elements on circuit compression ratios using the method described in Supplementary Fig. 23a. The ratios are colour-coded (see the vertical bar on the right) and displayed in heat maps that all share the same Y axis, representing the number of miRNA inputs of the circuit (from 2- to 10-input AND gate circuits). The heat maps are arranged in blocks of 3 × 6 plots, for a total of four blocks. Within each block there are six mutually independent parameters, arranged in columns, which refer respectively to variable sizes in base pairs of the promoter, CDS, recombination sites, miRNA targets, polyA sequences and transcriptional stop sequences. Column labels are shown at the bottom. Each row of a block corresponds to a different number of compressed components. For example, the circuit reported in this study is composed of two compression units: one is associated with the repressor component (LacI/miR-FF4) and the other with the activator component (rtTA). The two blocks at the top describe compression ratios of circuits composed exclusively of redundant gene cassettes. The two blocks at the bottom describe circuits composed of an uncompressible part of 3 kb in addition to the redundant gene cassettes. The first and third blocks from the top do not include the decompression program (the sequence of the recombinase) in the calculation, while the second and fourth blocks from the top consider the recombinase as part of the compressed circuit but not of the source. The formulas used to calculate the compression ratios are described in the Supplementary Text "Generalization of the compression and decompression procedure". See also Fig. 3b and Supplementary Figs. 16g and 18 for three-input circuit compression using a single module. Nanogram amounts of plasmid and picogram amounts of miR-mimics and LNA are indicated.
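To make the footprint arithmetic behind these compression-ratio heat maps concrete, the following is a minimal sketch under explicit assumptions: the element counts reflect our reading of the construct description for Supplementary Fig. 23a (one promoter and CDS, N targets, 2(N-1) recombination-site copies, N/2 polyA spacers and N/2 stop signals, with N even), and every size in base pairs, including the ~1.1 kb recombinase, is an illustrative placeholder rather than a value used for the published figures.

```python
# Minimal sketch: compressed vs. source footprint for the generalized
# scheme of Supplementary Fig. 23a. Element counts are our reading of
# the construct description (assumptions to verify against the original
# Supplementary Text); all sizes in base pairs are illustrative.
SIZES = {"prom": 500, "cds": 750, "target": 22, "rec_site": 34,
         "polyA": 130, "stop": 250}

def compressed_bp(n, s=SIZES, recombinase_bp=0):
    """Assumed footprint of one compressed cassette carrying N targets (N even)."""
    return (s["prom"] + s["cds"] + n * s["target"]
            + 2 * (n - 1) * s["rec_site"]   # heterospecific site pairs
            + (n // 2) * s["polyA"]         # spacers between paired targets
            + (n // 2) * s["stop"]          # module-separating terminators
            + recombinase_bp)

def source_bp(n, s=SIZES):
    """N separate constructs, each with promoter + CDS + one target."""
    return n * (s["prom"] + s["cds"] + s["target"])

for n in (2, 4, 6, 8, 10):
    # Include a ~1.1 kb recombinase CDS in the compressed circuit only.
    ratio = source_bp(n) / compressed_bp(n, recombinase_bp=1100)
    print(f"N={n:2d}: compression ratio ~{ratio:.2f}")
```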
Protein Expression Profile of HT-29 Human Colon Cancer Cells after Treatment with a Cytotoxic Daunorubicin-GnRH-III Derivative Bioconjugate

Targeted delivery of chemotherapeutic agents is a new approach for the treatment of cancer, which provides increased selectivity and decreased systemic toxicity. We have recently developed a promising drug delivery system, in which the anticancer drug daunorubicin (Dau) was attached via an oxime bond to a gonadotropin-releasing hormone-III (GnRH-III) derivative used as a targeting moiety (Glp-His-Trp-Lys(Ac)-His-Asp-Trp-Lys(Dau=Aoa)-Pro-Gly-NH2; Glp = pyroglutamic acid, Ac = acetyl, Aoa = aminooxyacetyl). This bioconjugate exerted an in vitro cytostatic/cytotoxic effect on human breast, prostate and colon cancer cells, as well as a significant in vivo tumor growth inhibitory effect on colon carcinoma bearing mice. In our previous studies, H-Lys(Dau=Aoa)-OH was identified as the smallest metabolite produced in the presence of rat liver lysosomal homogenate, which was able to bind to DNA in vitro. To get a deeper insight into the mechanism of action of the bioconjugate, changes in the protein expression profile of HT-29 human colon cancer cells after treatment with the bioconjugate or free daunorubicin were investigated by mass spectrometry-based proteomics. Our results indicate that several metabolism-related proteins, molecular chaperones and proteins involved in signaling are differentially expressed after targeted chemotherapeutic treatment, leading to the conclusion that the bioconjugate exerts its cytotoxic action by interfering with multiple intracellular processes.

Introduction

Receptor-mediated drug delivery is a promising approach for the treatment of cancer, which may provide increased selectivity and decreased systemic toxicity compared to classical chemotherapy (i.e., administration of free anticancer drugs) [1-3]. Considering that receptors for certain regulatory peptides, such as gonadotropin-releasing hormone (GnRH; also known as luteinizing hormone-releasing hormone, LHRH), are highly expressed on a variety of cancer cells with relatively limited expression in normal tissues, they represent important molecular targets in cancer therapy [4]. Thus, GnRH derivative peptides could be employed as targeting moieties for the attachment and subsequent specific delivery of chemotherapeutic agents to GnRH-receptor (GnRH-R) positive cancer cells. After their internalization by receptor-mediated endocytosis, the bioconjugates are generally processed in lysosomes, leading to the release of the free drug or to the formation of drug-containing metabolites [5,6]. A promising native GnRH analog to be used as a targeting moiety is lamprey GnRH-III (Glp-His-Trp-Ser-His-Asp-Trp-Lys-Pro-Gly-NH2), which binds to GnRH-Rs, has an insignificant endocrine effect in mammals and exerts a direct antiproliferative effect on both hormone-dependent and hormone-independent cancer cells [7-9]. In our previous work, various anthracycline-GnRH-III derivative bioconjugates have been designed, synthesized and biochemically characterized [10-12]. One of the most promising drug delivery systems developed to date in our laboratories consists of the anticancer drug daunorubicin (Dau) attached via an oxime bond to a GnRH-III derivative in which Ser in position 4 was replaced by Lys(Ac) [13].
Daunorubicin (Figure 1A) is a chemotherapeutic agent which interferes with cell proliferation and division by mechanisms such as DNA intercalation, inhibition of topoisomerase II, free radical formation, lipid peroxidation, etc. Despite its clinical benefits, the administration of free Dau is followed by toxic side effects, the most severe one being cardiotoxicity [14,15]. Therefore, the attachment of Dau to GnRH-based targeting moieties should provide increased selectivity and decreased systemic toxicity [12]. We have recently shown that the bioconjugate GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] (Figure 1B) exerted an in vitro cytostatic/cytotoxic effect on human breast, prostate and colon cancer cells, with IC50 values in the low µM range. It is important to mention that on HT-29 colon cancer cells, the bioconjugate exerted a higher cytostatic effect (IC50 = 7.4 ± 2.6 µM) than the parent bioconjugate in which Dau was attached to the native peptide hormone (IC50 = 27.8 ± 4.2 µM). Moreover, on colon carcinoma bearing mice, GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] exerted a significant in vivo tumor growth inhibitory effect (49.3% tumor growth inhibition relative to the untreated control group) [13]. Furthermore, H-Lys(Dau = Aoa)-OH was identified as the smallest drug-containing metabolite produced in the presence of rat liver lysosomal homogenate, which was able to bind to DNA in vitro [10,13], a result that could contribute to the understanding of the cytotoxic effect of the bioconjugate. In order to get a deeper insight into the mechanism of action of the GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] bioconjugate, changes in the protein expression profile of HT-29 human colon cancer cells after treatment with the bioconjugate or free Dau were investigated by mass spectrometry-based proteomics. Our results indicate that several metabolism-related proteins, molecular chaperones and proteins involved in signaling are differently expressed after targeted chemotherapeutic treatment, leading to the conclusion that the bioconjugate exerts its cytotoxic action by interfering with multiple intracellular processes. In vitro Cytotoxic Effect The in vitro cytotoxic effect of the GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] bioconjugate and free Dau was determined by MTT assay. 3×10³ cells per well were plated on 96-well plates. After 24 h of incubation at 37 °C, cells were treated for 6, 24, 48 and 72 h with the bioconjugate or free Dau dissolved in serum-free medium (concentration range: 2.6×10⁻⁴-10² µM). Cells treated with serum-free medium for the same periods of time were used as a control. After that, the MTT solution was added to each well. After 3.5 h of incubation, purple crystals had been formed by the mitochondrial dehydrogenase of living cells. Cells were centrifuged for 5 min at 1000 g and the supernatant was removed. The crystals were dissolved in dimethyl sulfoxide and the optical density (OD) was measured at λ = 540 and 620 nm using an ELISA reader (Labsystems MS reader, Finland). OD620 was subtracted from OD540 and the percent cytotoxicity was calculated using the following equation: Cytotoxicity (%) = [1 − (ODtreated/ODcontrol)] × 100, where ODtreated and ODcontrol correspond to the optical densities of treated and control cells, respectively. Cytotoxicity (%) was plotted as a function of concentration, fitted to a sigmoidal curve, and the IC50 value was determined on the basis of this curve. The IC50 represents the concentration of bioconjugate or Dau required to achieve 50% inhibition in vitro.
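The curve-fitting step can be illustrated with a short script. This is a sketch under assumed data, not the analysis actually used in the study; the four-parameter logistic form is one common choice for the sigmoid, and all numerical values are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

conc = np.array([0.01, 0.1, 0.3, 1, 3, 10, 30, 100])   # µM, hypothetical
cytotox = np.array([2, 5, 12, 28, 47, 70, 88, 95])     # %, hypothetical

params, _ = curve_fit(sigmoid, conc, cytotox,
                      p0=[0, 100, 3.0, 1.0], maxfev=10_000)
print(f"IC50 ≈ {params[2]:.2f} µM")   # the 50% inhibition concentration
```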
Preparation of Cell Lysates In order to prepare the cell lysates, 5×10⁵ HT-29 human colon cancer cells per well were plated on 6-well plates. After 24 h of incubation at 37 °C, the cells were treated for 72 h with the bioconjugate (at a concentration of 3 µM) or free Dau (at a concentration of 0.15 µM). Cells treated with cell culture medium for the same period of time were used as a control. After incubation, the cells were centrifuged for 5 min at 1000 rpm, washed with phosphate buffered saline, pH = 7.3, and then a volume of 300 µL of lysis buffer was added to each well. Samples were incubated for 30 min on ice and then the isolated protein mixtures were centrifuged at 16,000 g for 20 min at 4 °C. The protein content of the soluble fraction was determined by BCA assay according to the manufacturer's instructions. Protein Separation by Two Dimensional-sodium Dodecyl Sulfate-polyacrylamide Gel Electrophoresis (2D-SDS-PAGE) From the cell lysates, the proteins (500 µg/sample) were first precipitated with five volumes of ice-cold acetone at −28 °C for 4 h. After their solubilization in rehydration buffer containing 0.6% dithiothreitol (DTT), passive rehydration of 17 cm nonlinear IPG strips (pH 3-10) was performed for 14 h at RT. The isoelectric focusing (IEF) was carried out using a Bio-Rad Protean IEF cell instrument and the following parameters: (i) 0-150 V in 3 min, (ii) 150 V for 30 min, (iii) 150-300 V in 15 min, (iv) 300 V for 30 min, (v) 300-3500 V in 150 min, (vi) 3500 V for 12 h. For the second dimension, each IPG strip was first incubated for 20 min in equilibration buffer containing 1% DTT and for another 20 min in equilibration buffer containing 2.5% iodoacetamide (IAA). The strips were then placed on a 12% SDS gel and electrophoresis was performed in two steps: the current was first set to 25 mA/gel for ~30 min and then to 40 mA/gel, until the tracking dye reached the anodic part of the gel. After this separation step, the proteins were prefixed for 1 h with 12% trichloroacetic acid and stained overnight with a solution containing Coomassie Brilliant Blue G-250. After destaining the background with 25% methanol in water (v/v), the gels were scanned with a GS-800 calibrated imaging densitometer (Bio-Rad Laboratories GmbH, Munich, Germany) using the QuantityOne software. For each sample, three replicate 2D gels were comparatively analyzed using the PDQuest 8.0 software (Bio-Rad Laboratories GmbH, Munich, Germany), which allowed automatic detection, with manual corrections, and quantification of protein spots. The significance of differences between protein spots was evaluated by Student's t-test, and a p value lower than 0.05 was considered significant. An additional selection criterion was a fold change higher than two. In-gel Tryptic Digestion Selected protein spots were manually excised and subjected to in-gel tryptic digestion as previously reported [16]. Briefly, destaining of the protein spots was achieved by performing the following steps, which were repeated until the gel pieces were transparent: (i) incubation of the gel pieces with an acetonitrile-Milli-Q water (3:2 v/v) solvent mixture for 30 min; (ii) drying using a SpeedVac (Eppendorf AG, Germany); and (iii) rehydration with 20 mM ammonium bicarbonate (pH 8.0) for 15 min. After that, a freshly prepared trypsin solution (12.5 ng/µL of sequencing grade modified trypsin (Promega, Madison, WI, USA) in 20 mM ammonium bicarbonate, pH 8.0) was added to the dried gel pieces and incubated at 4 °C for 45 min.
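The spot-selection rule just described (Student's t-test with p < 0.05 plus a fold change above two across triplicate gels) is simple to restate in code. The intensities below are invented placeholders; PDQuest was the tool actually used.

```python
import numpy as np
from scipy import stats

# Hypothetical spot intensities from three replicate gels per condition.
control = {"P1": [1.00, 1.10, 0.95], "P2": [0.80, 0.85, 0.90]}
treated = {"P1": [0.30, 0.25, 0.35], "P2": [0.78, 0.88, 0.84]}

for spot in control:
    a, b = np.array(control[spot]), np.array(treated[spot])
    t, p = stats.ttest_ind(a, b)                    # Student's t-test
    fold = max(a.mean() / b.mean(), b.mean() / a.mean())
    selected = p < 0.05 and fold > 2                # the paper's criteria
    print(f"{spot}: fold={fold:.2f}, p={p:.3f}, selected={selected}")
```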
Then, the trypsin solution was replaced by 20 mM ammonium bicarbonate (pH 8.0) and incubated at 37 °C for 12 h. The tryptic peptides were extracted from the gel with a mixture of acetonitrile-0.1% TFA in Milli-Q water (3:2, v/v) at RT (3 × 60 min). Mass Spectrometric Analysis and Protein Identification Tryptic peptide mixtures were analyzed by reversed-phase liquid chromatography-nanospray tandem mass spectrometry (LC-MS/MS) using an LTQ-Orbitrap mass spectrometer (Thermo Fisher Scientific, Bremen, Germany) and an Eksigent nanoHPLC (CA, USA). The reversed-phase LC column was packed with 5 µm, 200 Å pore size C18 resin in a 75 µm i.d. × 10 cm long piece of fused-silica capillary (Hypersil Gold C18, Thermo Fisher Scientific, Bremen, Germany). After injecting the sample, the column was washed for 5 min with 90% eluent A (0.1% formic acid in water) and 10% eluent B (0.1% formic acid in acetonitrile). The peptides were eluted using a linear gradient of 10-50% eluent B in 25 min and then 50-80% eluent B in 5 min, at a flow rate of 300 nL/min. The LTQ-Orbitrap mass spectrometer was operated in a data-dependent mode in which each full MS scan was followed by five MS/MS scans, where the five most abundant molecular ions were dynamically selected and fragmented by collision-induced dissociation (CID) using a normalized collision energy of 35% in the LTQ ion trap. Dynamic exclusion was enabled. Tandem mass spectra were searched against the SwissProt protein database using Mascot (Matrix Science) with the following parameters: "Trypsin" cleavage with one missed cleavage, cysteine alkylation by iodoacetamide as a fixed modification, and methionine oxidation as a variable modification. Optimization of Cell Treatment Conditions with the Bioconjugate and Free Daunorubicin The cell treatment conditions (i.e., incubation time and concentration) for the GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] bioconjugate and free Dau were chosen on the basis of the in vitro cytotoxicity data. The HT-29 human colon cancer cells were treated either with the bioconjugate or with free Dau at different concentrations for 6, 24, 48 and 72 h. Free Dau exerted a cytotoxic effect even after 6 h, which became more pronounced with time. The lowest IC50 value (0.26 µM) was determined after 72 h of incubation. In contrast, the bioconjugate was cytotoxic only after 72 h (IC50 = 11.5 µM); therefore, the treatment time of 72 h was used in the further proteomics studies. The selected cell treatment concentrations were below the IC50 values, namely 0.15 µM for free Dau and 3 µM for the bioconjugate. It is important to note the different IC50 values, and consequently the different cytotoxic properties, of free and conjugated Dau, which could be explained by their mechanisms of cellular uptake: passive diffusion in the case of free Dau vs. receptor-mediated endocytosis, followed by intracellular processing, in the case of the GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] bioconjugate. Changes in the Protein Expression Profile of HT-29 Human Colon Cancer Cells after Chemotherapeutic Treatment After optimizing the treatment conditions, the HT-29 human colon cancer cells were treated for 72 h with the GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] bioconjugate or free Dau. Cell lysates were prepared according to the protocol described in the Materials and Methods section.
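The cleavage rule declared in the database search ("Trypsin" with one missed cleavage) can be made concrete with a small in-silico digestion: cleave C-terminal to K or R, but not before P, and also report peptides spanning one missed cleavage site. This sketch is purely illustrative and is not part of the Mascot pipeline; the sequence is an arbitrary example.

```python
import re

def tryptic_peptides(seq, missed=1):
    """In-silico tryptic digest: cut after K/R not followed by P,
    allowing up to `missed` missed cleavages."""
    cuts = [m.end() for m in re.finditer(r"[KR](?!P)", seq)]
    bounds = [0] + cuts + [len(seq)]
    frags = [seq[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
    peptides = set()
    for i in range(len(frags)):
        for j in range(i, min(i + missed, len(frags) - 1) + 1):
            peptides.add("".join(frags[i:j + 1]))
    return sorted(p for p in peptides if p)

print(tryptic_peptides("MKWVTFISLLFLFSSAYSRGVFRRDAHK"))
```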
The protein content of the supernatant fractions was determined by BCA assay and was on average 2.34 mg/mL for the untreated cells used as a control, 2.94 mg/mL for the Dau-treated cells and 3.18 mg/mL for the bioconjugate-treated cells. Proteins were then separated by 2D-gel electrophoresis using pH 3-10 nonlinear IPG strips and 12% gels. After Coomassie staining, the gel patterns were compared using the PDQuest 8.0 software. Differently expressed proteins were subjected to in-gel tryptic digestion, followed by nanoLC-tandem mass spectrometric analysis of the tryptic peptide mixtures and database searching. In Figure 2, the analyzed proteins are denoted with arrows and shown only on the control gel. The proteins found to be differently expressed after targeted chemotherapeutic treatment (fold change > 2; p < 0.05; Table 1 and Table S1) can be classified into the following functional categories: (i) molecular chaperones, (ii) metabolism-related proteins and (iii) proteins involved in signaling. Among molecular chaperones, the expression of heat shock 70 kDa protein 1A/1B (Hsp70) was found to be significantly decreased after treatment with the GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] bioconjugate, while free Dau had no marked effect on its expression in HT-29 human colon cancer cells compared to untreated cells (protein spot P1). Previous reports have indicated that the stress-induced Hsp70 is a molecular chaperone marginally expressed in unstressed normal cells, but overexpressed in different types of cancer cells. Moreover, elevated expression of Hsp70 in cancer cells has been associated with disease progression, metastasis, resistance to chemotherapy and generally poor patient prognosis. These functions may be explained by its anti-apoptotic properties, helping the cells to survive stressful conditions such as the action of chemotherapeutic agents [17][18][19]. The downregulation of Hsp70 has been found to inhibit cell proliferation and induce apoptosis. Thus, Hsp70 has been proposed as an important molecular target in cancer treatment, and efforts are being made to develop Hsp70 modulators (e.g., small-molecule inhibitors) [20][21][22]. In our work, the treatment of HT-29 human colon cancer cells with a single dose of free Dau did not affect the expression of the Hsp70 protein. Interestingly, the attachment of Dau to a GnRH-III derivative used as a targeting moiety resulted in a bioconjugate with inhibitory properties on Hsp70 (Table 1). Another protein with chaperone activity, which was found to be differently expressed in HT-29 cells after their treatment with the GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] bioconjugate, was Calreticulin (protein spot P2). The implications of this protein in cancer have recently been reviewed by Zamanian et al. [23]. Its overexpression has been reported in different types of cancer cells (e.g., breast, bladder, esophageal, pancreatic, colon and gastric cancer) and found to be associated with increased invasion, metastasis and poor prognosis [23]. Considering these previous findings, Calreticulin could also represent a cancer therapeutic target, whose decreased expression might provide a therapeutic benefit. In the present study, in contrast to the treatment with free Dau, which had no significant effect on Calreticulin expression, the bioconjugate exerted an inhibitory action (i.e., in comparison with the untreated cells, the level of Calreticulin in the bioconjugate-treated ones was on average four times lower; see Table 1).
Compared to untreated and Dau-treated HT-29 colon cancer cells, the treatment with the bioconjugate resulted in decreased expression of protein disulfide isomerase (PDI), a multifunctional protein which plays an important role in protein folding by catalyzing the formation, breaking and rearrangement of intramolecular disulfide bridges (protein spot P3). PDI is known to operate as a chaperone which inhibits the aggregation of incorrectly folded proteins. Furthermore, it has been found that the expression of PDI is elevated in cancer cells, and this protein has been proposed as a biomarker for certain types of cancer, such as colon cancer or mammary tumorigenesis [24,25]. Moreover, Goplen et al. have shown that PDI is strongly expressed on invasive glioma cells and plays an important role in tumor cell migration and invasion, actions that could be effectively inhibited by a PDI monoclonal antibody as well as by bacitracin [26]. In addition to molecular chaperones, the expression of metabolism-related proteins such as UDP-glucose 6-dehydrogenase (UGDH) was affected by the chemotherapeutic treatment (protein spot P4). While free Dau had a positive influence on UGDH expression, the application of the bioconjugate had an inhibitory effect. The latter might contribute to cancer therapy, since UGDH antagonists have been proposed as useful therapeutic agents [27,28]. This is based on the consideration that elevated glycosaminoglycan formation (e.g., hyaluronan), in which UGDH plays a key role, is involved in a variety of human diseases, including tumor progression [29]. Both the bioconjugate and free Dau significantly affected the expression of epidermal fatty acid-binding protein (E-FABP) (protein spot P5). The FABPs are multifunctional proteins involved in lipid metabolism, but also in the modulation of gene expression, growth and survival pathways, as well as inflammatory and metabolic responses [30]. Furthermore, altered FABP expression patterns have been described for different types of cancer, suggesting that FABPs play an important role in carcinogenesis. For instance, in a recent study by Li et al. it was found that E-FABP, as well as Liver-FABP and Heart- or Muscle-FABP, are involved in the development of invasive ductal breast cancer, since their levels were significantly elevated in ductal infiltrating carcinoma compared to fibroadenoma [31]. Increased expression of E-FABP was also detected in endometrial cancer and in chemoresistant pancreatic cancer cell lines [32,33]. Thus, E-FABP might represent another molecular target in cancer therapy. Although the change fell just at the significance threshold (2.21-fold change, p = 0.05), the expression of Ran-specific GTPase-activating protein (also called Ran-binding protein 1; RanBP1) (protein spot P6) decreased after treatment with the bioconjugate. Ran is a small GTPase that functions as a molecular switch by binding either GTP or GDP. One essential regulator of this process is the Ran-specific GTPase-activating protein (RanBP1), which also catalyzes the GTP hydrolysis of Ran. It has been found that Ran and the Ran-binding proteins are involved in a broad range of fundamental cellular processes (e.g., nucleocytoplasmic transport, mitotic spindle assembly, nuclear envelope and nuclear pore complex formation), as well as in cell death, cell proliferation, cell differentiation and malignant transformation (see [34] for a review).
Moreover, it has been reported that the expression of Ran and RanBP1 is increased in different types of cancer and that the abrogation of RanBP1 may lead to cell death. Taken together, these results suggest that both Ran and the Ran-specific GTPase-activating protein are good candidates as molecular targets in cancer therapy [34,35]. Compared to the untreated HT-29 colon cancer cells, the expression of guanine nucleotide-binding protein subunit beta-2-like 1 (GNB2L1, also known as receptor of activated protein kinase C 1, RACK1) (protein spot P7) was influenced by the treatment with the bioconjugate, but not significantly (2.25-fold change; p = 0.0773). However, the expression of this protein was significantly different in the bioconjugate- vs. Dau-treated cells (p = 0.0113), a lower amount being detected in the bioconjugate-treated cells. GNB2L1 has been found to play a crucial role in multiple intracellular signal transduction pathways. Regarding its possible implications in cancer development and progression, it has been shown that RACK1 promotes breast carcinoma proliferation and invasion/metastasis in vitro and in vivo, and its expression is associated with poor prognosis. Furthermore, reduction of RACK1 expression led to the inhibition of cell proliferation in vitro [36]. Similar results have also been reported in the case of other types of cancer, such as non-small cell lung and colon carcinoma [37]. In conclusion, the treatment with the daunorubicin-GnRH-III derivative bioconjugate resulted in changes in the protein expression profile of HT-29 colon cancer cells. In particular, molecular chaperones, metabolism-related proteins and proteins involved in signaling were affected by the targeted chemotherapeutic treatment, their expression being down-regulated in the bioconjugate-treated cells compared to the untreated and Dau-treated ones. Previous studies have demonstrated the implications of these proteins in cancer, indicating that their down-regulation might be of therapeutic benefit (e.g., Hsp70, protein disulfide isomerase, etc.). Recent progress in cancer therapy has suggested the importance of targeting more than one protein or signaling pathway. One possible approach to achieve this could be targeted cancer chemotherapy. On the basis of our results, it can be concluded that the GnRH-III[4Lys(Ac), 8Lys(Dau = Aoa)] bioconjugate exerts its cytotoxic action on HT-29 colon cancer cells by interfering with multiple intracellular processes and represents a promising targeted chemotherapeutic agent.
Modulation property of flexural-gravity waves on a water surface covered by a compressed ice sheet We study the nonlinear modulation property of flexural-gravity waves on a water surface covered by a compressed ice sheet of given thickness and density in a basin of constant depth. For weakly nonlinear perturbations, we derive the nonlinear Schrödinger equation and investigate the conditions when a quasi-sinusoidal wave becomes unstable with respect to amplitude modulation. The domains of instability are presented in the planes of the governing physical parameters; the shapes of the domains exhibit fairly complicated patterns. It is shown that under certain conditions the modulational instability can develop from shorter groups and within fewer wave periods than in the situation of deep-water gravity waves on a free water surface. The modulational instability can occur at conditions shallower than the threshold known for the free water surface, kh = 1.363, where k is the wavenumber and h is the water depth. Estimates of the parameters of modulated waves are given for the typical physical conditions of an ice-covered sea. Introduction Water waves in oceans, lakes and other water bodies covered by ice sheets have attracted increasing attention from researchers in recent years. This interest is caused by the exploration of the polar regions, which are rich in mineral resources. Work on ice-covered seas is required by the development of infrastructure, which includes the construction of dwellings, research stations monitoring weather and climate change, laboratories, aerodromes, barns, etc. In many countries, lakes and rivers are covered by ice in the winter period, which provides conditions for transport across the ice field; this also makes investigations of the properties of the ice-water system topical. In such ice-covered zones, dangerous extreme wave events are registered repeatedly; examples are given in (Liu and Mollo-Christensen, 1988; Marko, 2003; Collins et al., 2015). Another problem, which is similar to the ice-water system, is related to large floating artificial constructions (aerodromes, platforms, artificial islands, long tankers). The dynamic properties of such constructions are close to the properties of elastic ice sheets, and their description is based on the combination of the classical hydrodynamic equations with specific boundary conditions on the free surface which account for the elastic plate within the Kirchhoff-Love model (Forbes, 1986; Părău and Dias, 2002; Il'ichev, 2016) or a special Cosserat theory of hyperelastic shells (Plotnikov and Toland, 2011; Guyenne and Părău, 2014). There is a vast volume of publications devoted to the linear properties of flexural-gravity waves (FGWs); it is impossible to list all of them here, and therefore we mention only the most relevant monographs (Kheisin, 1967; Squire et al., 1996; Sahoo, 2012; Bukatov, 2017). Far fewer publications are devoted to the nonlinear processes occurring in the ice-water system. The weakly nonlinear theory was developed by Marchenko and Shrira (1992) using the Hamiltonian formalism and taking into account linear contributions to the pressure related to the rigidity of the ice and to the stresses due to external loads; in particular, the nonlinear Schrödinger (NLS) equation for directional waves in infinite depth was derived. Weakly nonlinear modulated waves were studied by (Părău and Dias, 2002; Guyenne and Părău, 2014) within the framework of the NLS equation, but neither the ice compression nor the ice-plate inertia was considered.
In the paper by Il'ichev (2016), a modulated solitary wave of arbitrary amplitude in the form of a "bright" soliton was obtained by taking into account both these effects. In the recent publication, Il'ichev (2021) considered a strongly nonlinear envelope solitary wave within the framework of the primitive Euler equations for the particular carrier wavelength that corresponds to the minimum of the phase speed. Then, a similar solution in the form of an NLS soliton was derived within the weakly nonlinear theory in finite-depth water, and it was shown that both solutions are close to each other for a water basin of moderate depth. However, to the best of our knowledge, a general analysis of the modulational instability with respect to the basic governing parameters has not been carried out so far. Below we fill this gap and derive the NLS equation for weakly nonlinear perturbations for a fluid of finite depth. Meanwhile, we point out misprints in the classic works by Ablowitz and Segur (1979, 1981) and emphasize an inconsistency of the theory developed in Liu and Mollo-Christensen (1988). Then, on the basis of the Lighthill criterion, we investigate the conditions when a quasi-sinusoidal wave becomes unstable with respect to amplitude modulation, i.e., for different relations of the parameters we derive the conditions when a small-amplitude modulation increases with time and becomes deeper due to the growth of side-bands in the spectrum. This allows us to determine zones on the plane of parameters where bright or dark envelope solitons can exist. As is well known, such formations play an important role in oceanic wave dynamics (see, for example, (Kharif et al., 2009; Osborne, 2010)). In the recent decade, bright and dark solitons, as well as breathers, were successfully reproduced in a series of laboratory experiments in hydrodynamic flumes with an open water surface (Chabchoub et al., 2011, 2012, 2013; Slunyaev et al., 2013). Long-lived envelope solitons embedded into fields of strongly irregular waves were found in numerical and laboratory simulations as well (see (Slunyaev, 2021) and numerous references therein). An observation of a giant nonlinear wave packet on the ocean surface was reported recently by Onorato et al. (2021). According to the numerical and laboratory simulations of hydrodynamic envelope solitons in open water (Slunyaev, 2009; Slunyaev et al., 2013), they are rather well described by soliton solutions of the weakly nonlinear NLS equation up to surprisingly large steepness, ka0 ~ 0.1-0.2. Local wave breaking begins when ka0 ~ 0.3. The values ka0 ~ 0.05-0.1 are frequently considered as the characteristic steepness of nonlinear wind waves in the open seas; therefore, the NLS equation often serves as a reasonable first-order approximation model. Similar wave phenomena can occur with flexural-gravity waves in the ice-covered ocean. Therefore, it is important to predict theoretically which amplitudes, widths, and speeds solitons can have, and at which combinations of ice-water parameters they can emerge. Our paper partially illuminates these issues. The paper is organized as follows. In Section 2, we consider a model of flexural-gravity waves in the ocean covered by compressed ice. In Section 3, we derive the NLS equation, and in Sections 4 and 5 we present the analysis of the linear and nonlinear properties of the flexural-gravity waves, respectively.
In the Discussion, we summarize the results obtained and present estimates for the typical parameters of modulated waves in an ice-covered sea. A model of surface waves in the presence of ice cover Let us consider plane waves which propagate along the horizontal x-axis, with the z-axis directed upward. We assume that the fluid is ideal and the flow irrotational, so that the velocity potential φ(x, z, t) can be introduced, v = ∇φ, where v is the two-dimensional vector of fluid velocity and ∇ is the Hamiltonian (gradient) operator in the (x, z)-plane. Then we obtain from the equation of mass conservation for the incompressible fluid the Laplace equation in the domain occupied by the fluid: ∇²φ ≡ ∂²φ/∂x² + ∂²φ/∂z² = 0, −h ≤ z ≤ η(x, t), (1) where η(x, t) is the water surface displacement beneath the thin ice plate. The water rest level corresponds to the horizon z = 0, while the flat bottom is at z = −h, where h denotes the constant water depth. The non-leaking bottom boundary condition requires that ∂φ/∂z = 0 at z = −h. (2) On the upper boundary, z = η, we set the traditional kinematic condition which reflects the equality of two definitions of the vertical velocity component at z = η: ∂η/∂t + (∂φ/∂x)(∂η/∂x) = ∂φ/∂z. (3) The other, dynamic boundary condition is the Bernoulli integral on the water surface z = η: ∂φ/∂t + (1/2)[(∂φ/∂x)² + (∂φ/∂z)²] + gη = −P/ρ, (4) where the role of the external pressure P on the right-hand side is played by the pressure produced by the bent elastic ice plate (see, for example, (Forbes, 1986; Squire et al., 1996; Sahoo, 2012; Stepanyants and Sturova, 2021)): P = [D K(η) + Q ∂²η/∂x² + M ∂²η/∂t²], (5) where ρ is the water density, D = Ed³/[12(1 − ν²)] is the coefficient of ice rigidity/elasticity, E is the Young modulus of the elastic plate, Q is the coefficient of longitudinal stress (Q > 0 corresponds to compression, and Q < 0 to stretching), and M = ρ₁d. The other parameters are: ν is the Poisson ratio, ρ₁ is the ice density, d is the thickness of the ice plate, and g is the acceleration due to gravity. The function K(η) describes the ice-plate curvature caused by the plate deflection; its explicit form is taken according to (Forbes, 1986) and (Il'ichev, 2016), with lower indices denoting partial derivatives with respect to x. Another model for the ice-plate curvature was suggested by Plotnikov and Toland (2011) and Guyenne and Părău (2014). In (Liu and Mollo-Christensen, 1988) the NLS equation was derived in the limit of infinite depth, and the term with the coefficient d in Eq. (5) was omitted. Since the parameter M is proportional to d, this leads to a different expression for the nonlinear coefficient of the evolution equation. The first term in the square brackets on the right-hand side of Eq. (5) describes the elastic property of the ice plate; the second term represents a longitudinal stress or strain of the plate; the third term describes the inertial property of the ice plate. Hereafter the coefficients M, Q, D and d will be treated as independent physical parameters. Below we consider quasi-monochromatic perturbations of the velocity potential and the related ice-plate deflection and derive the nonlinear Schrödinger equation for weakly nonlinear waves. Then, on the basis of the derived equation, we analyze the stability of such waves with respect to small amplitude modulations. The weakly nonlinear theory for a modulated wave To derive the nonlinear equation for long modulations of flexural-gravity waves from the system of governing equations (1)-(4), we use the asymptotic method employed in Slunyaev (2005). We choose the carrier wavenumber k and assume that the wave amplitude is small, such that kη = O(ε), where ε << 1 is a small parameter.
We also assume that the wave is quasi-monochromatic with a narrow spectrum around the peak value k and of width Δk, so that the relative spectrum width Δk/k = O(ε) is of the same order of smallness as the wave steepness kη. Let us expand the wave fields in series of wave harmonics, which will be treated separately: φ = Σₙ φₙ(x, z, t) Eⁿ, η = Σₙ ηₙ(x, t) Eⁿ, (6)-(7) where E(x, t) = exp(iωt − ikx). The term φ₀ represents the induced mean flow, and η₀ describes the long-scale surface displacement. The conditions φ₋ₙ = φₙ* and η₋ₙ = ηₙ* (the asterisk denotes complex conjugation) ensure that the physical fields are real-valued. Each harmonic is additionally decomposed in an asymptotic series in the small parameter ε << 1: φₙ = Σₘ εᵐ φₙₘ, ηₙ = Σₘ εᵐ ηₙₘ, (8)-(9) where fast and slow coordinates, and fast and multi-scale slow times, are introduced, such that the operations of differentiation in space and time act as follows: ∂/∂x → ∂/∂x + ε ∂/∂x₁, ∂/∂t → ∂/∂t + ε ∂/∂t₁ + ε² ∂/∂t₂. In Eq. (8), ηₙ₀ = 0 if n ≠ ±1; thus, the dominant term corresponds to the first harmonic. The functions in Eqs. (8) and (9) depend on the slow horizontal coordinate and times; only the function E(x, t) depends on the fast variables. Relying on the smallness of the surface displacements, the potential and its derivatives can be expanded into Taylor series in the vicinity of the surface z = 0. Then, the surface boundary conditions (3) and (4) give relations formulated near z = 0 (Eqs. (12)-(14)), introduced for the sake of convenience. The nonlinear function K(η) is also expanded in a Taylor series; for the NLS theory it is sufficient to consider the first three terms of the expansion. Here, as in Eq. (5), lower indices denote partial derivatives with respect to x. The series (6)-(9) are then substituted into the boundary conditions. The solution of the Laplace equation (1) with the bottom boundary condition (2) can be found explicitly for the given amplitudes Aₙₘ specified at the water surface; each harmonic has the vertical structure cosh[nk(z + h)]/cosh(nkh). The amplitudes of the velocity potentials Aₙₘ(x₁, t₁, t₂) in these equations are functions of the slow variables. The nonlinear surface boundary conditions (12)-(14) constrain the amplitudes; the corresponding equations for the amplitudes are considered below in each order of smallness with respect to ε. In the order ε¹E¹, the well-known dispersion relation for the FGWs is found as the compatibility condition: ω² = [gk + (D/ρ)k⁵ − (Q/ρ)k³] tanh(kh) / [1 + (M/ρ)k tanh(kh)]. (15) Simultaneously, we obtain the relation between the leading-order surface displacement η₁₀ and the leading-order amplitude of the velocity potential A₁₀ (Eq. (16)). Other harmonics are absent in the order O(ε). In the order O(ε²), three harmonics exist. At ε²E¹, the advection equation appears: ∂A₁₀/∂t₁ + V ∂A₁₀/∂x₁ = 0. (17) It can be readily confirmed that the coefficient V in this equation is the group velocity of the FGWs, V = dω/dk. In the order ε²E¹, the relation between the surface displacement and the velocity potential has the form of Eq. (18). The relation between the long-scale surface displacement and the amplitude of the wave-induced potential, which appears in the order ε²E⁰, is given by Eq. (19). The second harmonic is determined in the order ε²E² (Eq. (21)); the corresponding term of the surface displacement is given by Eq. (22). The nonlinearly induced flow, which is supposed to propagate with the group velocity V, is defined by the equation which appears in the order ε³E⁰ (Eq. (23)). In the order ε³E¹, we finally obtain the evolution equation for the carrier wave amplitude, which accounts for the effects of nonlinearity and dispersion (Eq. (24)); the bulky terms T2-T5 entering its nonlinear coefficient are written out in Eq. (26). The term T3 takes a compact form when the coefficient Q is eliminated using the dispersion relation (15). Note that in all interesting cases the ice is supposed to be thin compared to the wavelength, and then the parameter kd is small; under the assumption kd << 1, the terms T2, T4 and T5 can be neglected in Eq. (26).
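The dispersion relation (15) is straightforward to evaluate numerically. The sketch below computes ω(k) and the group velocity V = dω/dk by finite differences for illustrative ice-water parameters; all numerical values are assumptions chosen for the example, not the parameters used in the paper.

```python
import numpy as np

# Illustrative (assumed) parameters: depth h, water/ice densities rho, rho1,
# ice thickness d, Young modulus E, Poisson ratio nu, compression stress Q.
g, h, rho, rho1, d = 9.81, 50.0, 1000.0, 917.0, 1.0
E, nu, Q = 3e9, 0.3, 1e5
D = E * d**3 / (12 * (1 - nu**2))   # plate rigidity, Eq. (5)
M = rho1 * d                        # plate inertia per unit area

def omega(k):
    """FGW dispersion relation, Eq. (15); compression (Q > 0) softens the wave."""
    num = (g * k + (D / rho) * k**5 - (Q / rho) * k**3) * np.tanh(k * h)
    den = 1.0 + (M / rho) * k * np.tanh(k * h)
    return np.sqrt(num / den)

k = np.linspace(1e-3, 1.0, 2000)
w = omega(k)
V = np.gradient(w, k)                # group velocity dω/dk, finite differences
kmin = k[np.argmin(w / k)]           # wavenumber of the phase-speed minimum
print(f"phase-speed minimum near k = {kmin:.3f} 1/m")
```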
Using Eq. (23), we reduce Eq. (24) to a closed-form evolution equation for the complex amplitude (Eq. (28)). Finally, the NLS equation for the complex amplitude B(x, t) = η₁ = εη₁₀ + O(ε³) of the surface displacement can be obtained by combining Eqs. (17) and (28) and using the relation (16): i(∂B/∂t + V ∂B/∂x) + β ∂²B/∂x² + α|B|²B = 0, (30) which is accurate to the order O(ε³). Here the real ice-sheet displacement is recovered as η ≈ Re[B(x, t) E(x, t)], and the velocity potential follows from the relation (16). Explicit expressions for the main coefficients of the theory are given in the electronic supplement to the paper. The coefficients of Eq. (30) can be significantly simplified in the limit of infinite depth, kh → ∞; they are listed with the subindex ∞ (Eqs. (32)). The nonlinear coefficient α∞ can be further simplified assuming that kd is small. Note that due to the factor (gh − V²) on the left-hand side of Eq. (23), the induced mean flow becomes resonant when the group velocity V approaches the long-wave speed (gh)^(1/2); cf. Ablowitz and Segur (1979), and Trulsen and Dysthe (1996). Analysis of the dispersion relation of flexural-gravity waves For the analysis of the dispersion relation (15) it is convenient to use dimensionless variables, introducing the scaled quantities D̂ = Dk⁴/(ρg), Q̂ = Q/ρ and M̂ = M/ρ; the dispersion relation (15) is then rewritten in terms of kh, D̂, k²Q̂/g and kM̂. The Benjamin-Feir instability of flexural-gravity waves As is well known, waves described by the NLS equation (30) are affected by the modulational (alias Benjamin-Feir) instability when the Lighthill criterion αβ > 0 is fulfilled (Lighthill, 1965, 1978; Ablowitz and Segur, 1981; Ostrovsky and Potapov, 1999). The effect of surface tension on the modulational instability has also been studied; the diagram of the modulational instability of gravity-capillary waves can be found in the book by Ablowitz and Segur (1981) (see Fig. 4.15 there). In our notation, this problem formally corresponds to the following choice of parameters: D = 0, M = 0, d = 0, but Q ≠ 0. In Fig. 3 we present a similar stability diagram for the dimensionless parameters of the water depth kh and the longitudinal stress k²Q̂/g. The stability diagram in the book by Ablowitz and Segur (1981) corresponds to the leftward part of our Fig. 3a where Q ≤ 0, if Fig. 4.15 in the book is mirror-reflected with respect to the vertical axis. Shaded areas pertain to the domains of instability where αβ > 0. Unfortunately, using the expression for the nonlinear coefficient from (Liu and Mollo-Christensen, 1988), we failed to reproduce the result of Ablowitz and Segur (1981) in the limit kh → ∞; therefore, we do not examine our results against the ones from (Liu and Mollo-Christensen, 1988). (Note that Ablowitz and Segur (1979) claimed that the equations derived in their paper are equivalent to the equations derived by Djordjevic and Redekopp (1977) "except for the correction of a misprint". In fact, they corrected the misprint in Eq. (2.12) of Djordjevic and Redekopp (1977) (where it must be "2" rather than "2T" in the numerator of the fraction on the right-hand side) but made another typo in Eq. (2.24d) (or Eq. (4.3.26) in the cited book), where it must be (3 − σ²) rather than (2 − σ²). This misprint becomes obvious when the deep-water limit is considered; see Eq. (2.25) in Ablowitz and Segur (1979).) The domains of modulational instability in Fig. 3 show that in the range of flexural waves the situation is inverse. A qualitatively similar situation takes place for gravity-capillary waves, when M = 0, D = 0 and Q < 0. The detailed analytic study of the dispersion is rather complicated even in the infinite-depth limit.
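Since β = (1/2) d²ω/dk², the inflection points of the dispersion curve, which are relevant to the instability diagrams discussed below, can be located numerically. The sketch below does this for a few compression values; the physical parameters are again illustrative assumptions, not the paper's values.

```python
import numpy as np

# Assumed ice-water parameters (illustrative only).
g, h, rho, rho1, d, E, nu = 9.81, 50.0, 1000.0, 917.0, 1.0, 3e9, 0.3
D = E * d**3 / (12 * (1 - nu**2))
M = rho1 * d

def omega(k, Q):
    """FGW dispersion relation, Eq. (15), for a given compression Q."""
    num = (g * k + (D / rho) * k**5 - (Q / rho) * k**3) * np.tanh(k * h)
    return np.sqrt(num / (1.0 + (M / rho) * k * np.tanh(k * h)))

k = np.linspace(1e-3, 1.0, 4000)
for Q in (0.0, 1e5, 5e5):            # stress-free to strongly compressed
    beta = 0.5 * np.gradient(np.gradient(omega(k, Q), k), k)  # NLS dispersion coeff.
    flips = k[np.where(np.diff(np.sign(beta)) != 0)[0]]
    print(f"Q={Q:.0e}: beta changes sign near k =", np.round(flips, 3))
```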
The nonlinear coefficient changes its sign when it passes either through zero (shown by red curves) or through infinity (shown by black and black-yellow curves). Black-yellow curves correspond to the condition of synchronism between the waves and the nonlinearly induced long-scale flow, which results in an infinite value of the nonlinear coefficient (see Eq. (29)). It can be understood from (15) and (18) that this gives at most one curve for a fixed kh in the parametric diagrams in Figs. 4 and 5. Note that though the denominator of the second term in the curly brackets of Eq. (26) can vanish too (P2 = 0), this does not lead to a singularity of q1 due to the cancellation of zero terms in the denominator and the numerator. The singularity in the coefficient α∞ in Eqs. (32) remains in the limit of deep water. Note that the relations (36) and (37) depend on d explicitly. In practice, the parameter kd is small; therefore, its effect on the instability domains is not strong. Accordingly, moderate values of M do not change the diagrams qualitatively either. The sets of instability diagrams for zero ice thickness (Fig. 4) and an exaggerated value kd = 0.5 (Fig. 5) look generally similar. The most significant deformation of the instability diagram is observed in the case of negative Q, cf. Fig. 4f and Fig. 5f. The analysis of the zeros of the nonlinear coefficient α is fairly complicated due to the bulky terms T3, T4 and T5 in Eq. (26). As follows from Eq. (32), in the limit of deep water the nonlinear coefficient α∞ can vanish at two values of D̂. It also follows from Figs. 4 and 5 that the number of zeros of α can be larger in shallow water. In fact, the numerator of the coefficient α in the shallow-water limit contains the coefficient D̂ in power four, hence up to four roots are possible, which agrees with the diagrams in Figs. 4 and 5. The resonance conditions (36) and (37) are simpler to investigate. According to Figs. 4 and 5, waves can be modulationally unstable below the critical depth kh < 1.363 under certain combinations of the physical parameters. The instability expands down to the zero-depth limit if Q ≥ 0 (when the ice plate is compressed). It can be readily shown that the resonance condition (36) takes a simple form in the shallow-water limit kh << 1. The instability diagram for the limit of small depth is also shown in Fig. 6a; it exhibits the features discussed above. On the other hand, as follows from the diagrams, short waves are stable with respect to self-modulation when the rigidity parameter D̂ exceeds some value. As can be seen from Eq. (32), the group velocity in the limit kh ≫ 1 remains finite for any finite D̂. Therefore, for finite coefficients of rigidity and longitudinal stress, the nonlinear resonance with long waves does not occur in the deep-water limit. This conclusion agrees with the slowly growing black-yellow edges in Figs. 4 and 5 when D̂ increases; the corresponding curve is absent in Fig. 6d, which illustrates the limit kh → ∞. This also leads to the statement that the second branch of the long-wave resonance curve (the leftmost in Figs. 4c,d and 5c,d), which appears when k²Q̂/g > 3/4, must cross the axis D̂ = 0 at some large value of kh, though this point is beyond the limits of the shown graphs. As for the super-harmonic resonance condition (37), in the shallow-water limit kh ≪ 1 it reduces to a linear relation between D̂ and k²Q̂/g (see the black line in Fig. 6a). Therefore, it always has one root when Q ≥ 0 and has no solutions when Q < 0.
In the deep-water limit kh ≫ 1, the resonance condition (37) can be analyzed in a similar way. Besides the principal possibility of modulational instability, the issues of the spectral range of the instability and the maximum growth rate are important for it to be achievable under realistic conditions. It is well known [see, e.g., (Ablowitz and Segur, 1981)] that the most unstable modulation within the NLS theory (30) has the wavenumber Km = a0 (α/β)^(1/2) and the maximum growth rate Γm = αa0², where a0 is the real wave amplitude (39). The range of the wavenumbers K where the modulational instability occurs is 0 < K < Km√2. From the practical perspective, it is constructive to express the perturbation length, L = 2π/K, and the characteristic growth time, τ = 2π/Γm, in terms of the wave period, T = 2π/ω, and the wavelength, λ = 2π/k, respectively. Then, using Eq. (39), we obtain L/λ = n/ε and τ/T = δ/ε², (40) where ε = a0k is the wave steepness, which for realistic sea waves is of the order of 10⁻¹ or less. The coefficient n represents the normalized number of waves in an unstable group, and the coefficient δ is the normalized maximal growth time. (Caption to Figs. 7 and 8: frame (f): kM̂ = 1, k²Q̂/g = −1; the edges of the instability domains are shown by cyan lines; horizontal blue dotted lines designate the condition kh = 1.363; note the twice smaller limit of the horizontal axis here compared to Fig. 4.) Smaller values of n and δ correspond to shorter unstable wave groups (in Fig. 7) and faster modulation growth (in Fig. 8). As one can see, compared to the case of deep-water gravity waves without ice (the axis D̂ = 0 in Figs. 7a and 8a), the flexural-gravity waves can be characterized as less or more unstable depending on the ice parameters. According to Eq. (40), shorter wave groups become unstable when β approaches zero (inflection points of the dispersion relation), or when α goes to infinity. Large values of α also lead to faster development of the modulational instability. Interestingly, strongly unstable situations can be realized under much shallower conditions than kh = 1.363. As follows from Eq. (40), the conditions β ≈ 0 and α → ∞ result in a formally very strong nonlinear self-modulation regime with small values of n and δ. A more accurate theory should be derived to describe these degenerate conditions. To gain some insight into how the considered effects can manifest under realistic conditions, we present in Fig. 9 the corresponding diagrams of δ; the diagrams for n look qualitatively similar. The horizontal dotted lines in each frame show the boundary k0h = 1.363. It should be borne in mind that the genuine scaled time of the modulational growth is δε⁻², provided that ε = 0.1 or less. Discussion and Conclusion In this paper, we have studied the modulation property of flexural-gravity waves on a water surface covered by a compressed heavy ice sheet of a given thickness. We have derived the nonlinear Schrödinger equation with coefficients that depend on the ice parameters and water depth (they are given in the electronic supplement). The new theory is consistent with the earlier findings by Ablowitz and Segur (1979, 1981), though it does not agree with the result by Liu and Mollo-Christensen (1988). Conditions under which a quasi-sinusoidal wave becomes unstable with respect to amplitude modulation have been investigated. It is well known that the modulational instability in open oceans is most efficient when the ocean depth is infinite. Our analysis reveals that the presence of an ice cover not just allows the development of the modulational instability, but for some combinations of the ice and depth parameters can lead to even stronger nonlinear self-modulation in several senses.
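The practical quantities in Eqs. (39)-(40) follow directly from α, β and the steepness. A minimal sketch, with made-up coefficient values standing in for the expressions from the electronic supplement:

```python
import numpy as np

# Assumed NLS coefficients and carrier parameters (illustrative values only;
# the actual alpha, beta follow from the electronic supplement).
alpha, beta = 0.5, 0.2           # nonlinear and dispersion coefficients
k, omega0, a0 = 0.5, 1.8, 0.2    # carrier wavenumber, frequency, amplitude

if alpha * beta > 0:             # Lighthill criterion for instability
    K_m = a0 * np.sqrt(alpha / beta)     # most unstable modulation wavenumber
    gamma_m = alpha * a0**2              # maximum growth rate, Eq. (39)
    eps = a0 * k                         # wave steepness
    n = (k / K_m) * eps                  # normalized group length, Eq. (40)
    delta = (omega0 / gamma_m) * eps**2  # normalized growth time, Eq. (40)
    print(f"K_m={K_m:.3f}, growth={gamma_m:.4f}, n={n:.2f}, delta={delta:.2f}")
else:
    print("modulationally stable (alpha*beta <= 0)")
```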
This pertains to the condition on the minimum length of unstable wave groups, the maximum growth rate, and the threshold depth at which the instability occurs. As one can see from Figs. 3-6, the domains of modulational stability and instability are intricately interspersed depending on the combinations of the physical parameters. The modulational instability is related to the dangerous rogue wave phenomenon, when abnormally large waves can emerge from rather regular initial perturbations (Onorato et al., 2001; Kharif et al., 2009). When such rogue waves occur on top of an ice sheet, they can lead to ice destruction; this problem deserves further study. In the context of large floating artificial constructions, the obtained stability-instability diagrams may help to design safer constructions by choosing the structure characteristics which correspond to stable wave regimes. The importance of highlighting instability domains lies in understanding when bright and dark solitary waves can exist (schematic examples of such solitary waves are given in Fig. 10). The former can arise in the process of development of the modulational instability, whereas the latter can appear in the modulationally stable regions of the parameter plane. Solitons are long-lived coherent wave patterns that possess their own specific dynamical features. For example, envelope solitons demonstrate amplitude growth rates different from those of linear waves when they transform adiabatically in inhomogeneous conditions or due to pumping (Onorato and Proment, 2012; Slunyaev et al., 2015); also, they can form bound states with repeated extreme wave occurrence (Ducrozet et al., 2021), and so on. When bright solitons interact with significant background waves, they may be described by breather solutions of the nonlinear Schrödinger equation, which are the mathematical prototypes of oceanic rogue waves. The presence of soliton- or breather-type waves alters the wave statistics, so that it can remarkably deviate from Gaussian statistics (Randoux et al., 2016; Slunyaev and Kokorina, 2017). Apparently, bright solitons can emerge in the wakes behind moving pressure sources such as landing or taking-off aircraft, snowmobiles, skaters on frozen rivers or lakes, etc. A similar problem was considered by Berger and Milewski (2000), who discovered the formation of gravity-capillary lumps (two-dimensional solitons fully localized in space) in the wake behind a moving source on a thin water layer. The problem of the generation of NLS bright solitons by moving loads on a floating ice plate or by topographic effects due to a flow around underwater hills is very topical. Such problems are more complex but solvable; they still await solution. Note that the cubic NLS equation degenerates or becomes invalid, and higher-order theories should be developed, when some of the nonlinear or dispersion coefficients in this equation either vanish or become singular. These specific cases correspond to the boundaries between the domains of modulational stability and instability and yield the maximum unstable growth rates, which may be significantly reduced within a more accurate examination. As is known, wave attenuation due to dissipative effects can stabilize the modulational instability as well (Segur et al., 2005). A reliable theoretical model of flexural-gravity wave attenuation has not been developed yet.
According to experimental data (Squire, 2020), the decay rate of the wave amplitude is a power-law function of frequency, ∝ ω^p, where the exponent p varies in the range 1.9-3.6 depending on the ice properties. A well-known fact is that wave motion from the Southern Ocean can penetrate the Antarctic marginal ice zone up to 400 km (Squire, 2020). This suggests that the wave attenuation under certain conditions can be relatively weak. In the first approximation in wave amplitude, the dissipation effect can be taken into account by introducing a linear dissipation term in the NLS equation (Alberello and Părău, 2022); this leads to the downshifting of the spectral peak and a less than exponential energy decay. Our estimates show that the dissipative term is weak compared to the nonlinear and dispersive terms in the NLS equation of the Alberello and Părău (2022) model with the exponent p = 3 if the phase speed of the FGWs is relatively small, Vp << (ρg²A²/ρ₁dνw)^(1/3), where A is the wave amplitude and νw is the water viscosity. Using the ice-water parameters after Eq. (34) and setting A = 1 m, νw = 10⁻² m²/s, we obtain Vp << 1 m/s. Such values are quite realistic in the vicinity of the phase-speed minimum, especially in compressed ice (we do not present here the plot of the phase speed for FGWs, but it is qualitatively similar to the plot of the group speed shown in Fig. 2). From the formal point of view, wave self-modulation remains significant if the dissipation term is of the order of ε² or smaller. Consideration of the problem beyond the one-dimensional geometry should open even more intriguing perspectives. Under certain physical conditions, one may expect situations when a wave collapse becomes possible [see, for example, (Marchenko and Shrira, 1992)]; this can lead, apparently, to ice-cover buckling and breakup in finite time. All the aforementioned unsolved problems can be a challenge for future studies.
Comparative analysis on pore-scale permeability prediction on micro-CT images of rock using numerical and empirical approaches A variety of pore-scale numerical and empirical approaches have been proposed to predict rock permeability when the pore structure is known, for example, from microscopic computerized tomography (micro-CT) technology. A comparative study of these approaches is conducted in this paper. A reference dataset of nine micro-CT images of porous rocks, including artificial sandpacks, tight sandstone, and carbonate, is generated and processed. Multiple numerical and empirical approaches are used to compute the absolute permeability of the micro-CT images, including the image voxel-based solver (VBS), the pore network model (PNM), the Lattice Boltzmann method (LBM), the Kozeny-Carman (K-C) equation, and the Thomeer relation. The computational accuracy and efficiency of the different numerical approaches are investigated. The results indicate that good agreement among the numerical solvers is achieved for samples with a homogeneous structure, while the disagreement increases with increasing heterogeneity and complexity of the pore structure. The LBM and VBS solvers both have relatively higher computational accuracy, whereas the PNM solver is less accurate due to its simplification of the topological structure. The computational efficiency of the different solvers generally depends on the computational resources; the PNM solver is the fastest, followed by the VBS and LBM solvers. As expected, the empirical relations can over-estimate permeability by a factor of 50 or more, particularly for the strongly heterogeneous structures reported in this study. Nevertheless, empirical relations are still applicable to artificial rocks. The traditional way to obtain rock physical properties generally relies on laboratory tests on natural specimens. Due to the complex and disordered pore structure of natural rock specimens, the test results may vary from sample to sample even under a consistent test environment. Besides, the test sample will be damaged and polluted over time, which limits repeated use for parallel analysis. 5,6 Thus, the extraction and reconstruction of rock structure from natural rock specimens for petrophysical computation are valuable and have been regarded as an efficient approach to handle those issues. Digital rock petrophysics (DRP) has therefore been adopted to obtain macroscopic properties of rock, including strength and permeability, as well as acoustic, electrical, and thermal properties, by simulating the physical processes on these reconstructed models. [7][8][9][10] The most widely used DRP approaches for permeability prediction based on CT images include pore network modeling, the Lattice Boltzmann method, and the image voxel-based direct N-S solver. 11 PNM, which extracts a topologically representative pore-throat network from micro-CT images, has been regarded as an effective approach for absolute permeability calculation and multi-phase flow simulation. 12,13 Generally, the Hagen-Poiseuille law is adopted to compute the conductance between adjacent pore bodies, and the absolute permeability is derived from Darcy's law. 14 The corresponding computational methods for PNM can be classified into quasi-static and dynamic models.
The marked difference lies in that the quasi-static model simulates the equilibrium states of the drainage and imbibition processes controlled by capillary forces, while the dynamic model can simulate the intrusion process, which is time-dependent and controlled by both viscous and capillary forces. 15 Fatt 16 constructed the first network model, drawing on the analogy between electric current in a random resistor network and flow in a porous medium. Afterward, PNM has been used to study complex transport behavior in porous media considering uniform or mixed wettability conditions, including phase change, reactive transport phenomena, and non-Newtonian displacement. 17,18 However, PNM clearly makes approximations concerning the pore-throat geometry of the rock, which limits its application. Another tool for pore-scale transport property prediction is direct simulation, which is performed directly on the 3D-voxelized segmented image (e.g., the CT image). 13,19 One remarkable numerical method of this category is the LBM, which solves a discrete, mesoscale form of the Boltzmann equation directly on the voxel grid of the segmented image; thus, it is well suited to solving porous flow in complex geometries. Moreover, this method is easy to code and is ideally suited for parallel computing, although it is time-consuming even with a massively parallel implementation. 20 Recent research indicates that it is possible to compute relative permeability and the interfacial area in multi-phase flow. 21,22 However, the choice of the relaxation factor limits its use in single-factor analysis. For more details of LBM theories and applications, the literature can be referenced. [23][24][25] The other method in this category is the VBS, which solves the Stokes and Navier-Stokes equations directly on the voxel grid using the finite volume method (FVM) 26 accelerated by the fast Fourier transform (FFT). The VBS solver computes permeability, as well as velocity and pressure, on the 3D pore space image using an adaptive grid, rather than a regular grid, to reduce the number of grid cells. According to previous studies, this method solves fast for high-porosity structures but needs more iterations for low-porosity structures to reach the desired accuracy. 27 The convergence speed of the VBS solver, in general, depends on the complexity and heterogeneity of the porous medium, and the main challenge of this method is to simulate slip boundary conditions. Besides the DRP approaches, the absolute permeability can also be predicted by empirical relations derived from laboratory and engineering data, when the parameters of the pore structure, for example, porosity and surface area, are known. (FIGURE 1 CT imaging process and model reconstruction: sample preparation, imaging and data acquisition, reconstruction.) The most well-known is the K-C equation, which was originally derived for granular media but was later widely applied in geoscience. 28,29 Besides, there are also several improved empirical relations for permeability prediction. The initial forms of the permeability formulas can generally be expressed as functions of pore-geometry parameters and porosity; 30 later relations have considered more parameters to improve the accuracy and to fit specific rock types. 31,32 In this paper, nine rock samples are drilled and imaged using micro-CT technology. The absolute permeability of these samples is computed by different DRP approaches and empirical relations. The results are compared and analyzed for accuracy and efficiency.
| Micro-CT imaging and processing A total of nine microstructures of natural rock are extracted from micro-CT images and used as inputs for absolute permeability computation. The selected samples cover artificial sandpacks, unconsolidated sandstone, tight sandstone, and carbonate, and the porosity ranges from 10% to 31%. Considering the heterogeneity of the microscopic structure of rock, a total of nine samples with different pore size distributions are adopted to make the results more reliable and representative. 33 Mini-core plugs 3-6 mm in diameter and 7-10 mm in length are generally used for CT scanning to obtain high-resolution data. The standard process of CT scanning is illustrated in Figure 1. During the scanning, the sample stage is rotated in specified increments so that the core plug can be scanned by X-rays from different directions. 34 Some CT datasets (S1, S5, S6, and S7) used in this study were scanned by us, and the others are available from the open-access library at Imperial College London (http://www.imperial.ac.uk/earth-science/research/research-groups/perm/research/pore-scale-modelling/microct-images-and-networks/sand-pack-f42a/) and the Digital Rocks Portal (https://www.digitalrocksportal.org/projects/). The original CT images used in this paper are listed in Figure 2. | Reconstruction of pore-scale rock model Image processing of the raw images is needed before reconstruction, including denoising, filtering, segmentation, and binarization. The purpose is not only to remove the image defects, that is, noise and concentric shadows caused by the system device, but also to extract the target objects, that is, the pore space and mineral composition. 35 Then, the representative elementary volume (REV) of the rock sample is identified to optimize the computational demands. 36 The general work-flow of model reconstruction, including image processing and REV selection, is illustrated in Figure 3. On the basis of the extracted pore structure, pore space analysis including porosity, pore-size distribution, specific surface area, and coordination number can be performed. (FIGURE 6 Pore space extraction and analysis: (A) original CT image; (B)-(D) separated label field, extracted pore network model, and calculated pore-throat radius distribution of sample S1, respectively; the results for the rest of the samples are listed subsequently. The pore colors are assigned at random in the separated label field model.) These parameters are crucial to the permeability estimation using empirical relations, such as the K-C equation. The details will be given and discussed in subsequent sections. | Theoretical basis of rock physics computation The numerical approaches and empirical relations reported in this study are presented as follows. | VBS solver The VBS solver solves the Stokes and Navier-Stokes equations directly on a 3D-segmented voxelized image using the FVM with FFT. Under the assumption of an incompressible, Newtonian fluid in steady-state laminar flow, the N-S equations can be simplified and given as 37 : ∇⋅V = 0, μ∇²V = ∇P, (1) where ∇ and ∇⋅ are the gradient operator and divergence operator; ∇² and μ are the Laplacian operator and the dynamic viscosity; V and P are the velocity and pressure of the fluid. The no-slip condition is adopted at the solid-fluid interface. Once the equation system above is solved, Darcy's law is adopted to determine the absolute permeability coefficient.
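The Darcy post-processing step shared by all three solvers is a one-line formula once the flow rate is known. The sketch below uses invented numbers purely to show the unit handling; it is not part of any of the solvers discussed.

```python
# Darcy's law: K = Q * mu * L / (A * dP), applied after the flow solve.
mu = 1.0e-3        # dynamic viscosity of water, Pa·s
L = 2.0e-3         # sample length in the flow direction, m
A = 1.0e-6         # cross-sectional area, m^2
dP = 1.0e3         # applied pressure drop, Pa
Q = 5.0e-13        # computed volumetric flow rate, m^3/s (hypothetical)

K = Q * mu * L / (A * dP)                            # permeability, m^2
print(f"K = {K:.3e} m^2 = {K / 9.869e-16:.2f} mD")   # 1 mD ≈ 9.869e-16 m^2
```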
| PNM solver
The PNM solver computes the absolute permeability by simulating single-phase flow in the pore network extracted from CT images. 15 The pores and throats are commonly represented by idealized spheres and cylinders, respectively. In the pore network model, the flow rate $Q^a$ for single-phase flow between two connected pores i and j (as explained in Figure 4) is given by 15 :

$$Q^a = g^a_{p,ij}\left(p^a_i - p^a_j\right)$$

where $p^a_i$ and $p^a_j$ are the pressures in pores i and j, respectively, and $g^a_{p,ij}$ represents the conductance between two adjacent pores i and j for fluid a, which can be derived from the series combination:

$$\frac{L_{ij}}{g^a_{p,ij}} = \frac{L_{p,i}}{g_{p,i}} + \frac{L_t}{g_t} + \frac{L_{p,j}}{g_{p,j}}$$

where $L_{ij}$ is the distance from the pore-throat interface of pore i to j ($L_t$ represents the total length of the pore throat), and $L_{p,i}$ and $L_{p,j}$ are the radii of pore bodies i and j, respectively. For a given channel shape, the conductance $g_p$ can be derived from the Hagen-Poiseuille formula:

$$g_p = \frac{k\,A^2 G}{\mu_p}$$

where $\mu_p$ is the dynamic viscosity of fluid a, and A and G are the cross-sectional area and shape factor of the pore network element, respectively. k is a constant; for a circular, equilateral triangular, and square tube, the value is 0.5, 0.6, and 0.5623, respectively. 38 Adopting Darcy's law, the absolute permeability $K^a$ of the pore network model can be derived from:

$$K^a = \frac{Q^a \mu L}{A\,\Delta P}$$

where $Q^a$ is the total flow rate of fluid a through a pore network model of length L and cross-sectional area A with pressure drop ΔP.
| LBM solver
The LBM solver solves a discrete, mesoscale Boltzmann equation that reduces to the N-S equations in the low-Mach-number limit. 25 The LBM solver used in this study adopts the D3Q19 model, as shown in Figure 5, and Eqs. (6)-(8) constitute the iterative model of LBM. 39 The discrete velocity directions can be expressed as:

$$\vec{c}_i = \begin{cases} (0,0,0), & i = 0;\\ (\pm 1,0,0)c,\ (0,\pm 1,0)c,\ (0,0,\pm 1)c, & i = 1,\cdots,6;\\ (\pm 1,\pm 1,0)c,\ (\pm 1,0,\pm 1)c,\ (0,\pm 1,\pm 1)c, & i = 7,\cdots,18. \end{cases}$$

The evolution equation can be written as:

$$f_i\left(\vec{x}+\vec{c}_i\Delta t,\ t+\Delta t\right) - f_i\left(\vec{x},t\right) = -\frac{1}{\tau}\left[f_i\left(\vec{x},t\right) - f_i^{eq}\left(\vec{x},t\right)\right]$$

where $f_i(\vec{x},t)$ is the particle distribution function of lattice node $\vec{x}$ at time t in the i direction; $\tau$ is the relaxation time; and the term $f_i^{eq}$, also named the equilibrium distribution function, can be expressed as:

$$f_i^{eq} = w_i\,\rho\left[1 + \frac{3\left(\vec{c}_i\cdot\vec{u}\right)}{c^2} + \frac{9\left(\vec{c}_i\cdot\vec{u}\right)^2}{2c^4} - \frac{3\,\vec{u}^2}{2c^2}\right]$$

where $c = \Delta x/\Delta t$ is the lattice velocity; Δx and Δt are the lattice step and time step, respectively; and the term $w_i$, called the weight coefficient, takes the standard D3Q19 values $w_0 = 1/3$, $w_{1\text{-}6} = 1/18$, and $w_{7\text{-}18} = 1/36$.
| Empirical relation solver
The most well-known empirical relation for permeability prediction is the K-C equation, which can be written as 40 :

$$K = \frac{\phi^3}{c\,\tau^2 S^2}$$

where $\phi$ is the porosity of the porous medium (dimensionless); S represents the specific surface area, defined as the ratio of the surface area of all pores to the total volume of the specimen (m −1 ); c is the Kozeny constant, which depends on the geometry of the porous medium (for example, for cylindrical capillaries, c = 2); and $\tau$ is the tortuosity of the porous medium (dimensionless). For tortuosity, the empirical relation proposed by Saxena et al, 41 which estimates τ directly from porosity, can be used. Besides, Thomeer 42 proposed another empirical permeability model using the pore size distribution and mercury intrusion capillary pressure, expressed in terms of the mercury entry pressure $P_D$ and a constant G that reflects the pore shape and the impact of tortuosity (generally, G = 0.2 for siliciclastic rocks, dimensionless); the same relation can also be recast in terms of the pore diameter D (in μm).
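As an illustration of the PNM formulation above, the following sketch solves single-phase flow on a toy three-pore network: mass conservation at each pore yields a linear system for the pore pressures, and Darcy's law then gives the permeability. The network, conductances, and sample dimensions are invented for demonstration and are not taken from the paper.

```python
import numpy as np

mu = 1.0e-3                         # dynamic viscosity, Pa*s
# throats as (pore_i, pore_j, conductance g_ij in m^3/(Pa*s));
# note g_ij already contains 1/mu via the Hagen-Poiseuille formula
throats = [(0, 1, 2.0e-15), (1, 2, 1.5e-15), (0, 2, 0.5e-15)]
n_pores = 3
p_in, p_out = 1.0, 0.0              # Dirichlet pressures at inlet/outlet pores

# assemble the conductance (graph Laplacian) matrix: sum_j g_ij (p_i - p_j) = 0
A = np.zeros((n_pores, n_pores))
b = np.zeros(n_pores)
for i, j, g in throats:
    A[i, i] += g; A[j, j] += g
    A[i, j] -= g; A[j, i] -= g
for node, value in [(0, p_in), (2, p_out)]:   # impose boundary pressures
    A[node, :] = 0.0; A[node, node] = 1.0; b[node] = value
p = np.linalg.solve(A, b)

# total flow leaving the inlet pore, then Darcy's law K = Q*mu*L/(A*dP)
Q = sum(g * (p[i] - p[j]) for i, j, g in throats if i == 0)
L, A_cross = 1.0e-4, 1.0e-8         # assumed network length and cross-section
K = Q * mu * L / (A_cross * (p_in - p_out))
print(f"p = {p}, Q = {Q:.3e} m^3/s, K = {K:.3e} m^2")
```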
F I G U R E 7 Thin-section and SEM observations of pore types: A, abundant dissolution pores due to calcite dissolution (red arrows), plane-polarized light (PPL), thin section; B, micropores and inter-granular pores (red arrow) in "cluster" quartz aggregates (red-dotted ellipse); C, dissolution pores associated with calcite, with dissolution residue of irregular morphology (red arrows); D, secondary dissolution pore with irregular morphology observed in a calcite grain (red arrow)
| Quantitative analyses of pore structure characteristics
The pore space analysis, including porosity and pore size distribution, is performed on the pore structure model extracted from CT images, as shown in Figure 6(A). To reduce the impact of the exterior region on image processing and to decrease the data size, a cubic region of interest (ROI) located in the center is generally selected for model reconstruction. Then, a separation operation is performed on the extracted pore space to generate a label field, as shown in Figure 6(B), which splits the whole pore space into numerous parts for generating the pore network model (Figure 6C) and for pore size distribution analysis, as shown in Figure 6(D). The pore network extraction from the separated label field is implemented in Avizo™. 43 In contrast to the maximal ball (MB) algorithm, 15 in which the pore radius is the radius of the inscribed sphere, the radius of the pore network model in Avizo is computed by an equivalent-volume method. The main difference between these two approaches to pore-throat size computation lies in the treatment of throats. The results of the pore space analysis are listed in Table 1. More details of pore type, morphology, and petrography can be seen in the thin sections and scanning electron microscope (SEM) images in Figure 7.
| Computed absolute permeability
The absolute permeability is calculated by simulating single-phase flow under the assumption of an incompressible, Newtonian fluid in steady-state laminar flow.
| The first category-VBS solver
The VBS solver reproduces the experimental condition by simulating fluid flow between two opposite faces while the other four faces are sealed with a one-voxel-wide grid (as shown in Figure 8). In this case, the Stokes equations are solved directly on the segmented 3D images of the nine rock samples. The computed permeability values using the VBS solver for all nine rock samples are presented in Table 2.
| The second category-PNM solver
The extracted PNM models are shown in Figure 9, in which the spheres and cylinders represent pores and throats, respectively. The single-phase flow simulation is performed on all extracted pore networks to compute absolute permeability. In this case, the conductance between connected pores is given analytically by the Hagen-Poiseuille formula, while isolated pores do not participate in fluid flow. The absolute permeability is derived from Darcy's law, as listed in Table 3.
| The third category-LBM solver
The LBM solver treats fluid flow as a continuous streaming-and-collision process between fluid particles, described by a collision model. Similarly, a pressure drop is applied to two opposite faces to force fluid in and out, and the remaining four faces are sealed with a one-voxel-wide grid. To ensure second-order accuracy, a curved boundary condition is adopted between pore and grain. 44 The computed permeability using the LBM solver is listed in Table 4.
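For readers unfamiliar with the streaming-and-collision cycle, the sketch below implements a deliberately simplified 2D (D2Q9) single-relaxation-time analogue of the D3Q19 solver: force-driven flow in a straight channel with full bounce-back walls. The grid size, relaxation time, and body force are arbitrary assumptions chosen only so the result can be checked against the analytic Poiseuille solution; the paper's solver additionally uses pressure boundaries and a curved pore-grain boundary condition.

```python
import numpy as np

nx, ny = 32, 34              # periodic in x; solid walls at y = 0 and y = ny-1
tau = 0.9                    # BGK relaxation time; nu = (tau - 0.5) / 3
F = 1.0e-6                   # body force along x, standing in for dP/L

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])          # D2Q9 velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)                # D2Q9 weights
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])                 # reversed dirs

solid = np.zeros((nx, ny), bool)
solid[:, 0] = solid[:, -1] = True

def feq(rho, ux, uy):        # second-order equilibrium distribution
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2
                                     - 1.5 * (ux**2 + uy**2))

f = feq(np.ones((nx, ny)), 0.0, 0.0)                        # start from rest
for _ in range(20000):
    rho = f.sum(0)
    ux = (f * c[:, 0, None, None]).sum(0) / rho
    uy = (f * c[:, 1, None, None]).sum(0) / rho
    # collision with a simple (Shan-Chen style) forcing velocity shift
    f -= (f - feq(rho, ux + tau * F / rho, uy)) / tau
    for i in range(9):                                      # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], 0), c[i, 1], 1)
    f[:, solid] = f[opp][:, solid]                          # full bounce-back

nu = (tau - 0.5) / 3.0
H = ny - 2                                                  # channel width
print("computed u_max  :", ux[:, 1:-1].max())
print("Poiseuille u_max:", F * H * H / (8 * nu))
```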
Since the LBM solver and VBS solver are both performed on image voxel grids, we visualize the normalized local velocity of both the LBM and VBS simulations for further comparative analyses, as illustrated in Figure 10. According to the colormap of the velocity field, the global flow field and the main flow channel distribution of the LBM (left side) agree well with those of the VBS solver (right side).
| The fourth category-empirical relation
In this study, the empirical K-C equation and the empirical relation proposed by Thomeer are used to predict the absolute permeability of all nine rock samples. Empirical permeability relations have been reported to be the least accurate approach compared with the other solvers, which has been confirmed in previous studies. 30,41 The empirical relations are included for comparison in this study. The estimated permeability is listed in Table 5. Compared with the simulation results, both of the empirical relations over-estimate the permeability of the rock samples, especially the Thomeer relation.
| Comparative analysis
A comparative analysis is carried out to inform solver selection in DRP analyses, considering computed deviation and computation time, as well as solver variability across different types of rock. No laboratory-measured permeability for the natural rocks is reported in this study, which means a direct assessment of the computation accuracy of the various numerical solvers against experiment is impossible. The reasons lie in the environmental differences and scaling effects between experimental tests and simulation, as well as the solid-fluid interaction, which is generally ignored in single-phase flow simulations. Thus, it is meaningful for this study to focus on the comparison among the various numerical approaches and empirical relations. As shown in Figure 11, the computed permeability values of the three numerical solvers are generally in good agreement with each other, whereas the values estimated by the empirical relations are more scattered and higher. For samples with strong heterogeneity, such as tight sandstone and carbonate, the permeabilities estimated using empirical relations are less accurate. Compared with the Thomeer relation, the K-C equation comprehensively considers the specific surface area and tortuosity of the pore structure, and the values estimated using the K-C equation are closer to the simulation results. For sample S2 (an artificial sandpack with a resolution of 9.996 μm) in Figure 11, the digitally computed permeability values of the three different solvers are in close agreement with the K-C equation value. Considering that the K-C equation was originally derived for a granular medium using laboratory-tested results, this can be regarded as a validation of the numerical solvers. However, the Thomeer relation still over-estimates the permeability for sandpacks. Because the permeabilities computed by the different approaches agree well with each other, the mean computed permeability is used as the reference, and the ratio of the absolute permeability computed by each numerical solver to this reference is presented in Figure 12. It can be found that the PNM solver shows relatively larger differences with respect to the reference (mean) value, whereas the relative deviations of the VBS solver are less than 0.1 except for rocks S8 and S9. The results presented in Figure 12 demonstrate that the VBS solver shows the best agreement with the reference value, followed by the LBM and PNM solvers.
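The normalization behind Figure 12 is easy to reproduce: for each sample, the mean of the permeabilities from the three numerical solvers serves as the reference, and each solver is reported as a relative deviation from it. The sketch below demonstrates the bookkeeping on invented numbers; the paper's actual values are those in Tables 2-4.

```python
import numpy as np

samples = ["S1", "S2", "S3"]
K = {                         # permeability per solver, mD (made-up examples)
    "VBS": np.array([120.0, 4500.0, 2.1]),
    "PNM": np.array([150.0, 4300.0, 3.0]),
    "LBM": np.array([125.0, 4600.0, 2.3]),
}
reference = np.mean(list(K.values()), axis=0)   # per-sample mean of solvers
for solver, values in K.items():
    rel_dev = np.abs(values - reference) / reference
    for s, d in zip(samples, rel_dev):
        print(f"{solver} {s}: relative deviation {d:.3f}")
```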
It should be noted that the computed values of the VBS and LBM solvers are close to each other. Only the values computed by the PNM solver are noticeably scattered. Especially for carbonate, the rock with strongly heterogeneous structure, the disagreements are large, exceeding 0.5 with a maximum value of 0.66. It can be concluded from Figures 11 and 12 that the VBS and LBM solvers show relatively higher computation accuracy, whereas the PNM solver is less accurate. However, the PNM solver requires the minimum computation time, followed by the VBS and LBM solvers. Computation efficiency is also of significant interest in image-based rock physics simulation. The numerical solvers described in this study are distinct in their governing equations, as well as in their implementation algorithms. Thus, we implemented the solving processes on the same platform to quantitatively analyze the time consumption of the different solvers. The solving processes of the different solvers on the nine rock samples reported in this study are all implemented on PCs equipped with Core i7-8700 or i7-2600K processors and 32 GB RAM. The comparisons of the adopted platform, running time, boundary condition setups, and convergence criteria for the various solvers are listed in detail in Table 6. For the LBM solver, constant pressure is applied at the inlet and outlet, a no-slip boundary condition at the side walls, and a curved boundary condition between pore and grain; the model is implemented with a single relaxation time (SRT, set to 1.5 in this study), and its running time ranges from 4 to more than 24 hours depending on the processors called in the calculation.
F I G U R E 9 The extracted pore network model from sample S9. In this case, gray represents the grain region, and the colored spheres and cylinders represent pores and throats, respectively; the color varies with the size of the spheres and cylinders, which is determined by the normalized radius of the pores and throats
Overall, the computation time generally depends on the model structure and size. The PNM solver is relatively fast compared with the VBS and LBM solvers but also relatively less accurate. In addition, the extra time spent on the pore network extraction process is not considered. The VBS solver generally runs faster on a GPU card than on a CPU, especially for models with complex structure. The LBM solver has the advantage of large-scale parallel implementation but is the slowest solver on the computation platform reported in this study.
| CONCLUSIONS
Image-based rock permeability computation using numerical approaches contributes to reducing subsurface uncertainties. Compared with laboratory-test-based measurements, the computing efficiency and accuracy of the selected numerical solver are of great concern in the DRP technique; thus, the efficiency and convergence performance of these numerical solvers become relevant. The work described in this study presents a comprehensive comparison between multiple numerical solvers and empirical relations in terms of the computational efficiency and accuracy of permeability. It can be concluded that the results of the three different numerical solvers agree well with each other, whereas the empirical models generally over-estimate the permeability, by a factor of 50 or more, especially for samples with complex and heterogeneous structure. The PNM solver needs the shortest computation time, whereas the LBM solver requires the longest running time.
The PNM solver has relatively lower accuracy compared with the other two approaches owing to its simplification of the topological structure, especially for samples with strongly heterogeneous structure, that is, carbonate.
Proteomic analysis of the secretome of Leishmania donovani
Analysis of Leishmania-conditioned medium resulted in the identification of 151 proteins apparently secreted by the parasitic protozoan Leishmania donovani and suggested a vesicle-based secretion system.
Background
Leishmania spp. are the causative agents of a group of tropical and subtropical infectious diseases termed the leishmaniases. These infections disproportionately affect poorer peoples in developing areas of the world. Because of the debilitating and disfiguring results of infection, these diseases are a great barrier to socioeconomic progress in endemic areas. As of 2001, it was estimated that 12 million people worldwide had been infected with leishmania, and 2 million new cases are believed to occur each year [1]. Recent environmental changes such as urbanization, deforestation, and new irrigation schemes have expanded endemic regions and have led to sharp increases in the number of reported cases [2-4]. In addition, visceral leishmaniasis is establishing itself in previously unaffected areas by piggy-backing on the spread of the HIV epidemic [5]. Leishmania co-infection with HIV has become a serious global health threat. The two infections are involved in a deadly synergy, because leishmania infection exacerbates the immunocompromised state of infected individuals, thereby promoting HIV replication and resulting in earlier onset of AIDS [6]. The combination of HIV co-infection, expansion of endemic regions, and evolving drug resistance [7] has created a great need for more effective antileishmanial drugs and other control measures. Progress in controlling the leishmaniases requires improved appreciation of the biology of the parasite to allow novel treatment strategies to be designed. Members of the genus Leishmania are digenetic protozoans. The organisms exist either as flagellated, motile promastigotes within the alimentary canal of their phlebotomine sandfly vector or as nonmotile amastigotes that reside within phagolysosomes of mammalian mononuclear phagocytes. Promastigote surface coat constituents have been the focus of considerable interest [8-10], and many of these - including glycoproteins, proteoglycans, and glycolipids - have been shown to play protective roles [8,11,12]. Surface-associated molecules are considered to make up the vast majority of leishmania secreted material [9]. Through these studies, it has become evident that there are a number of unusual features that typify exocytosis by this group of trypanosomatids. For example, in these highly polarized cells, regulated secretion is thought to occur solely at the flagellar pocket, a deep invagination of the plasma membrane from which the single flagellum of leishmania emerges [9,13]. Leishmania are known to synthesize and traffic most surface molecules, such as lipophosphoglycan and leishmanolysin GP63, along the classical endoplasmic reticulum-Golgi apparatus-plasma membrane pathway [9]. As mentioned, these surface molecules are ultimately delivered to the flagellar pocket, and it is thought that the pocket retains its role as the primary if not sole site of secretion in nonflagellated amastigotes [9]. Thus far, no leishmania candidate virulence factors have been shown to traffic through the flagellar pocket. This is not surprising, however, given that no ultrastructural work has accompanied descriptions of leishmania candidate virulence factors, and little attention has been paid to their intracellular or extracellular trafficking pathways.
Whether leishmania use a classical amino-terminal signal sequence peptide to direct the export of most secreted proteins through the flagellar pocket, or a different mechanism, is unclear. Two leishmania surface glycoproteins, a proteophosphoglycan and GP63, are initially synthesized with a cleavable amino-terminal signal sequence [9]. However, the vast majority of characterized leishmania secreted proteins have no identifiable secretion signal sequence, with the exception of those that are initially membrane bound [9,14,15]. The lack of a clear amino-terminal secretion signal sequence among the majority of characterized leishmania secreted proteins suggests the existence of important nonclassical pathways of secretion. Despite the potential importance of protein secretion by leishmania, only a small number of leishmania proteins have been examined in detail from this perspective [14,16-18]. Ideally, one would like to know the identities of all of the components of any complex system in order to fully comprehend functionality. Consequently, we set out to identify all, or as many as possible, of the proteins secreted by leishmania. To this end, we designed a quantitative proteomic approach based on SILAC (stable isotopic labeling of amino acids in culture) [19-21]. SILAC involves culturing cells with either normal isotopic abundance amino acids or with stable isotope-enriched amino acids (for instance, L-arginine versus 13C6-L-arginine) until essentially all proteins of the cell are labeled. The two populations or samples to be compared are then mixed and analyzed by nanoflow liquid chromatography-tandem mass spectrometry (LC-MS/MS). We used this approach to analyze the extent to which any given leishmania protein was secreted into promastigote conditioned medium (Cm) by relating it to the level of the same protein that remained cell associated (CA). In this report, we identified 358 proteins in combined Cm/CA mixtures from Leishmania donovani and, based on a quantitative analysis, we conclude that 151 were actively secreted. The general properties of the identified secreted proteins allowed us to postulate potential mechanisms of secretion as well as functional roles within the context of infection. Analysis of proteins released into leishmania culture medium has previously been hampered by the presence of degradation products and by the requirement of the cells for serum [14,22]. In light of these complexities, we first included a nontoxic protease inhibitor, soybean trypsin inhibitor, in the promastigote culture medium during collection and isolation of Cm to minimize degradation of secreted proteins by proteases. Secondly, we reduced Cm collection time to 6 hours or less in order to allow culture of promastigotes under serum-free conditions. Pulse-chase labeling of leishmania with 35S-methionine followed by isolation of serum-free Cm showed clearly that leishmania secreted numerous proteins (Figure 1a). Here, an equal number of trichloroacetic acid-precipitated counts/minute of Cm and whole cell lysate (WCL) were analyzed, allowing us to compare directly the intensities of protein bands from Cm and WCL. The results show that some of the leishmania-secreted proteins (arrows in Figure 1a) were clearly enriched in the Cm. It is also important to note that the clearly distinct protein separation patterns of leishmania Cm and WCL indicate that the proteins detected in Cm were unlikely to be artifacts present due to lysis of cells during culture or processing (Figure 1a).
To control further for the possibility of false positive protein detection in Cm caused by inadvertent lysis of promastigotes, either spontaneously (due to programmed cell death) or during isolation of Cm, we used an enzymatic assay to measure the amount of the cytosolic marker glucose-6-phosphate dehydrogenase (G6PD) [23] present in Cm. The total amount of G6PD activity detected in Cm was compared with the activities found to be associated with serial dilutions of the total mass of promastigotes that was used to generate the Cm. As shown in Figure 1b, the amount of G6PD detected in Cm never exceeded the total enzyme activity associated with 5% of the promastigotes used to generate the Cm. Notably, there was also no difference in the amount of G6PD detected in Cm collected from promastigotes that had been grown in either stable isotope or normal isotopic abundance culture medium (data not shown) during the SILAC analysis described below.
Quantitative mass spectrometry identifies a wide array of leishmania-secreted proteins
Serum-free leishmania Cm collected from stationary phase promastigotes was fractionated either by one-dimensional SDS-PAGE or by in-solution isoelectric focusing and analyzed by LC-MS/MS using a linear trapping quadrupole-Fourier transform hybrid mass spectrometer (see Materials and methods, below).
Figure 1 Leishmania Cm contains enriched proteins and is minimally contaminated by incidental cell lysis. (a) Leishmania promastigotes were metabolically labeled, as described in Materials and methods. Conditioned medium (Cm) from labeled cells and the cells themselves were collected in parallel, and the proteins present in the Cm and corresponding whole cell lysate (WCL) of promastigotes were precipitated in 10% trichloroacetic acid (TCA). Equal numbers of TCA-precipitated counts/minute of Cm and WCL were fractionated on a 5% to 20% gradient polyacrylamide gel. Arrows indicate proteins specifically enriched in leishmania Cm. The autoradiograph shown is representative of three independent experiments. (HMW) High molecular weight marker. (b) To control for inadvertent lysis of organisms during collection of Cm, glucose-6-phosphate dehydrogenase (G6PD) activity in Cm collected from isotopically labeled and nonlabeled cells was measured as described in Materials and methods and compared with the activity associated with deliberately lysed promastigotes. The data shown are the means of measurements from three independent experiments. 0.01 units of G6PD were assayed as a control in each experiment. The asterisks indicate a significant difference compared with Cm (P < 0.001), calculated by one-way analysis of variance followed by Bonferroni's correction for multiple comparisons (GraphPad Prism 4.0).
We set three criteria that had to be met for any protein detected by mass spectrometry to be included in the leishmania 'secretome'. First, we only considered proteins to be identified if at least two unique tryptic peptide sequences from that protein were detected (see Materials and methods for peptide criteria limits). Second, we required a particular protein to be observed in at least three out of four independent experiments. This resulted in the identification of 358 proteins (listed in Additional data file 1) in the pooled Cm and CA samples, with an estimated false discovery rate of less than one protein in 200.
Interestingly, by these criteria we did not detect G6PD in any of the LC-MS/MS analyses, probably because the amount of G6PD was below the detection limit of the mass spectrometer. The method of preparation of Cm for LC-MS/MS analysis did not provide sufficient amounts of protein to allow reliable use of standard methods for measuring total protein concentration (see Materials and methods, below), so we estimated the protein content of Cm samples from an initial LC-MS/MS analysis and mixed these with an equal amount of oppositely labeled CA protein. Because this method of equalization is imprecise, we normalized all Cm/CA ratios within an experiment to histone H2B (GeneDB:LmjF19.0050). H2B was consistently detected in Cm, most likely as a result of both general cell lysis and apoptosis [24,25]. After normalization, the values were log e transformed (Additional data file 2) and Cm/CA ratios for all identified proteins were calculated as the mean Cm/CA ratio for all peptides from that protein across all experiments (Additional data file 2) [26,27]. These SILAC ratios reflected the degree of enrichment of individual protein species in leishmania Cm, and a frequency distribution is shown in Figure 2. Across all experiments the overall mean ± standard deviation Cm/CA value for the 358 proteins was 1.35 ± 0.85 (Figure 2).
Figure 2 Quantitation of leishmania secreted proteins in Cm. Conditioned medium (Cm)/cell associated (CA) ratios from each of four independent analyses were normalized to the ratio for histone H2B, followed by natural log (ln) transformation (see Additional data file 2).
We used the Cm/CA ratio of histone H2B to define the third criterion for inclusion in the secretome. We considered leishmania proteins with a mean Cm/CA peptide ratio at least two standard deviations (1.7) above the ratio for histone H2B (after transformation = 0) to be actively secreted by leishmania (Figure 2, solid line). In choosing this rather conservative yet arbitrary cut-off, we reasoned that if H2B was representative of proteins externalized by apoptosis then, by allowing a significant margin of error around it, the proteins (numbering 151 in total) with Cm/CA ratios of 1.7 or greater were likely to be bona fide secreted proteins. This conservative approach provided a high level of specificity for 'secretion' at the expense of sensitivity. We used Western blotting to examine a select group of proteins in paired Cm and CA samples to determine the extent to which this orthogonal method of detection would correlate with the SILAC/mass spectrometry analysis. Here we examined four proteins: heat shock protein (HSP)70, with a Cm/CA value of 1.86, above the cut-off of histone H2B plus two standard deviations (or +1.70); HSP83/HSP90, with a Cm/CA ratio of 1.50, falling just below the cut-off; elongation factor-1α (EF-1α), with a ratio of 0.69; and secreted acid phosphatase (SAcP), which was not detected by LC-MS/MS. As shown in Figure 3, SAcP was detected as a dispersed band in Cm, but it was completely absent from the aliquots of WCL analyzed (lanes 1 and 2). Both HSP70 and HSP90 were also clearly enriched in Cm, with HSP70 to a greater extent than HSP90 (compare Cm with WCL lane 2). On the other hand, the bulk of EF-1α was retained intracellularly (Figure 3). This qualitative analysis indicated that the SILAC/LC-MS/MS results correlated closely with conventional protein detection by Western blotting with respect to providing a semiquantitative estimate of protein secretion by leishmania.
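The normalization and cut-off logic described above can be summarized in a few lines of code. The sketch below uses invented per-experiment Cm/CA ratios (chosen to echo the reported means for HSP70, H2B, and EF-1α) and applies the rule that a protein is called secreted when its mean H2B-normalized ln ratio exceeds the H2B reference (0 after transformation) by two standard deviations; in the actual analysis, the standard deviation (0.85) was computed across all 358 proteins.

```python
import numpy as np

# raw Cm/CA ratios per experiment (invented values for illustration)
raw = {"HSP70": [6.4, 6.2, 7.1, 6.0],
       "H2B":   [1.0, 0.9, 1.1, 1.0],
       "EF1a":  [2.0, 1.8, 2.2, 2.1]}
h2b = np.array(raw["H2B"])

# normalize each experiment to H2B, then take the natural log
norm_ln = {p: np.log(np.array(v) / h2b) for p, v in raw.items()}
means = {p: vals.mean() for p, vals in norm_ln.items()}

sd = np.std(list(means.values()))     # spread across proteins (here only 3)
cutoff = means["H2B"] + 2.0 * sd      # H2B sits at 0 after normalization
secreted = sorted(p for p, m in means.items() if m >= cutoff)
print(f"cutoff = {cutoff:.2f}; secreted: {secreted}")
```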
Additionally, these findings indicated that the arbitrary third criterion for inclusion in the secretome was both valid and in fact highly rigorous, because HSP90 - a protein falling just below the secretome cut-off (Cm/CA of 1.7; Figure 2) - was clearly found to be enriched in Cm by Western blotting (again compare Cm with WCL lane 2). The results for SAcP, both by mass spectrometry and Western blotting, were of particular interest and appeared to be a special case. Whereas this ecto-enzyme, which was previously reported to have an amino-terminal secretion signal [28], was highly enriched in Cm (Figure 3), its absence from the LC-MS/MS analysis suggested that its absolute abundance must be quite low. This is addressed further under Discussion (below). The results of the Western blotting also indicated that there was minimal contamination of Cm by incidental lysis. Figure 3 shows the protein profile of 5% of the cells (selected based on the maximum amount of lysis that may have occurred according to the results of the G6PD analysis; Figure 1b) to be markedly distinct from that of the leishmania Cm (compare Cm with WCL lane 1). The distinct profiles of Cm and WCL observed in the metabolic labeling experiment (Figure 1a) also indicated that contamination of Cm through lysis was negligible.
Figure 3 Leishmania HSPs are enriched in Cm. Leishmania conditioned medium (Cm) was collected from a culture containing about 2 × 10 9 promastigotes. The proteins contained therein were precipitated and then solubilized directly in Laemmli sample buffer. One half of this volume (containing a known amount of protein) was loaded into the lane labeled 'Cm'. From the 2 × 10 9 promastigotes recovered from the culture, 5% (1 × 10 8) were removed and processed in parallel with the remaining 95%. The 5% figure was chosen based upon the estimated (see Figure 1) maximum number of organisms that may have undergone incidental lysis during the incubation period and the collection centrifugation. Proteins contained in the whole cell lysate (WCL) prepared from the 1 × 10 8 cells were precipitated and resolubilized in a volume equal to that used to resolubilize the proteins precipitated from the Cm collection. To allow a direct comparison to be made, half of this volume was then loaded into WCL lane #1. Protein was also precipitated from the WCL prepared from the remaining 95% of the cells and, after solubilization in sample buffer, an amount of protein equal to that loaded into the lane labeled Cm was loaded into WCL lane #2. After transfer to nitrocellulose membrane, blots were probed, stripped, and reprobed with the indicated antibodies. The data shown are representative of results obtained in at least three identical experiments. EF1alpha, elongation factor-1α; Hsp, heat shock protein; SAcP, secreted acid phosphatase.
Gene Ontology analysis of the leishmania secretome
To develop an understanding of how protein secretion by leishmania might be related to specialized functions or processes, we used the Leishmania Genome [29] and the Gene Ontology (GO) [30] databases in conjunction with the Blast2GO analysis tool [31] to determine whether any classes of proteins were more likely to be found among the leishmania secreted proteins. This analysis resulted in 85% of the proteins detected in leishmania Cm having one or more GO term assignments (Additional data file 3).
After tallying the number of leishmania secreted proteins assigned to each GO term, it was clear that many of the secreted proteins (Figure 4a) were involved in turnover and synthesis of protein and nonprotein macromolecules. In fact, 27 out of the 151 secreted proteins (18%) were predicted to be involved in protein translation (GO:0006412), which was more than in any other discrete biologic process (Figure 4a). Beyond this, as shown in Figure 4a, the leishmania secreted proteins identified by LC-MS/MS were found to be involved in a wide array of processes, including proteolysis (GO:0006508), protein folding (GO:0006457), and biologic regulation (GO:0065007). Consistent with the biological process GO analysis, a full 50% of leishmania secreted proteins were involved in protein binding interactions, for example binding to ATP (GO:0005524), ions (GO:0043167), or other proteins (GO:0005515; Figure 4b). Other highly represented functions included pyrophosphatase activity, hydrolase activity, and oxidoreductase activity (GO:0016462, GO:0016787, and GO:0016491). It is noteworthy that nearly 20 proteins that fell below the secretion cut-off were annotated as having transporter activity (GO:0005215), whereas no such activity was found for the secreted proteins (Additional data file 3). Of interest, there appeared to be a trend toward concentration of a distinct set of processes and functions in the group of 151 leishmania proteins making up the leishmania secretome. As shown in Figure 5a, when compared with the total group of 358 proteins consistently identified in Cm, there appeared to be enrichment of proteins involved in processes related to growth (GO:0040007), RNA metabolism (GO:0016070), and biopolymer modification (GO:0043412), including protein amino acid phosphorylation (GO:0006468). Consistent with these biologic process assignments, molecular functions such as kinase activity, peptidase activity, and translation factor activity (GO:0016301, GO:0008233, and GO:0003746, respectively) appeared to be more prevalent among the group of 151 leishmania secreted proteins than among the total group of 358 proteins consistently identified in Cm (Figure 5b). We used the GOSSIP [32] statistical framework to determine whether any GO terms were significantly enriched in the secreted proteins when compared to other Cm proteins. Many of the processes and functions discussed and depicted in Figure 5 had significant (P < 0.05) single-test P values. However, after correcting for multiple testing using both a false discovery rate (the most common correction method) and a family-wise error rate (which is more correct in this context because there was no a priori basis for an association between the secreted proteins and any GO term) [32], no terms were found to be significantly enriched in the group of 151 secreted proteins. This may be due to our small sample size, with individual GO terms associated with at most 358 proteins. In contrast, these statistical tests are regularly carried out on sample sizes in the tens of thousands [32] of genes or proteins. In addition, statistical significance may not have been achieved because we were comparing two datasets with a high probability of overlap, because we looked for enrichment of GO terms associated with the group of 151 proteins in the leishmania secretome compared with GO terms associated with the total group of 358 Cm proteins.
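For context, the kind of per-term enrichment test performed by GOSSIP can be sketched as follows: each GO term defines a 2 × 2 contingency table (secreted versus non-secreted Cm proteins, with versus without the term), tested with Fisher's exact test and then corrected for multiple testing by both FDR and family-wise error rate. The counts below are placeholders consistent with the totals in the text (151 secreted of 358 Cm proteins), not the real annotation data.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# (secreted with term, secreted without, non-secreted with, non-secreted without)
terms = {
    "GO:0006412 translation":     (27, 124, 30, 177),
    "GO:0016301 kinase activity": (12, 139,  8, 199),
}
names, pvals = [], []
for name, (a, b, c, d) in terms.items():
    _, p = fisher_exact([[a, b], [c, d]])
    names.append(name); pvals.append(p)

_, p_fdr, _, _ = multipletests(pvals, method="fdr_bh")        # FDR
_, p_fwer, _, _ = multipletests(pvals, method="bonferroni")   # FWER
for name, p, pf, pw in zip(names, pvals, p_fdr, p_fwer):
    print(f"{name}: raw p = {p:.3f}, FDR = {pf:.3f}, FWER = {pw:.3f}")
```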
In fact, some of the Cm proteins below the cut-off may be actively secreted and certainly were found to be exported by some mechanism, including cell death. For these reasons, we consider that the apparent concentration of GO associations shown in Figure 5 may in fact be meaningful. In addition to members of the secretome having pleiotropic functions, they were also predicted to have a variety of subcellular localizations. Nearly one-third of leishmania secreted proteins were predicted to be cytoplasmic (GO:0005737) by GO, and these had associations with both membrane-bounded (GO:0043227) and non-membrane-bounded intracellular organelles (GO:0043228), including ribosomal proteins, nuclear proteins, mitochondrial proteins, and glycosomal proteins (Additional data file 3). Only five secreted proteins were predicted to be integral membrane proteins, and none of the secretome proteins were predicted to be associated with the endoplasmic reticulum.
Bioinformatics analysis of secreted proteins in the leishmania genome
We screened the leishmania genome database for proteins containing a classical amino-terminal secretion signal peptide, in order to generate a putative list of classically secreted proteins for comparison with the proteins identified by LC-MS/MS. We modified a bioinformatics approach previously used to identify proteins secreted by Mycobacterium tuberculosis [33] and applied it to the genome of Leishmania major [34]. Proteins were considered highly likely to be secreted if the sequence included a classical amino-terminal secretion signal peptide and lacked additional transmembrane (TM) domains. Additional TM domains would have suggested that the protein was membrane bound and therefore unlikely to be released from the cell. The majority of leishmania surface expressed proteins are associated with the plasma membrane via a glycophosphatidylinositol (GPI) lipid attachment [9], and some of these GPI-attached surface proteins, such as GP63, are known to disassociate from the membrane and can be detected in Cm [35]. In light of this, as a final step we screened the proteins positive for a signal sequence and negative for multiple TM domains for GPI-linkage attachment sites and considered positive proteins to be secreted (Additional data file 4).
Figure 4 High prevalence GO assignments in the leishmania secretome. The secretome sequences were categorized according to (a) biological process and (b) molecular function. Nonredundant processes and functions assigned to at least ten leishmania-secreted protein sequences are displayed. Bars indicate the number of protein sequences found under each Gene Ontology (GO) term expressed as a percentage of the total 151 actively secreted proteins.
Using these parameters, we found that the leishmania genome encodes 217 proteins that contain a classical secretion signal peptide, of which 141 are annotated as hypothetical proteins (Additional data file 4). Of the remaining 76 proteins, approximately one-third appear to be gene duplications, leaving 50 unique leishmania proteins with a known or putative classical eukaryotic secretion signal peptide. It is of interest that only one of the proteins we predicted to be secreted via an amino-terminal secretion signal - LmjF16.0790, a chitinase - has previously been demonstrated to be secreted by leishmania promastigotes [16,36], although we did not detect this protein in our LC-MS/MS analysis.
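The three-step genome screen described above amounts to a simple filter over precomputed predictions. The sketch below shows the logic on a toy table; the column values and the last two gene identifiers are invented, and real signal peptide/TM/GPI predictions would be generated with the respective tools beforehand.

```python
# toy prediction table: signal peptide, number of additional TM domains,
# and predicted GPI-anchor attachment site for each gene model
rows = [
    {"gene_id": "LmjF16.0790", "signal_peptide": True,  "tm_domains": 0, "gpi": False},
    {"gene_id": "LmjF04.0310", "signal_peptide": True,  "tm_domains": 0, "gpi": False},
    {"gene_id": "LmjF00.0001", "signal_peptide": True,  "tm_domains": 3, "gpi": False},  # invented
    {"gene_id": "LmjF00.0002", "signal_peptide": False, "tm_domains": 0, "gpi": False},  # invented
]

predicted_secreted = []
for r in rows:
    # keep proteins with a classical N-terminal signal peptide and no
    # additional TM domains; GPI-anchored hits among these are retained,
    # since GPI proteins such as GP63 can shed into the medium
    if r["signal_peptide"] and r["tm_domains"] == 0:
        predicted_secreted.append((r["gene_id"], r["gpi"]))

print(len(predicted_secreted), "putative classically secreted:",
      predicted_secreted)
```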
Our analysis also suggests that SAcP does not contain a classical secretion signal, contrary to a previous report [37]. Based upon the Von Heijne algorithm [34], the latter study predicted the presence of a 23-amino-acid amino-terminal 'signal peptide'. Subsequently, this leader peptide was shown to be sufficient for secretion of a green fluorescent protein fusion construct expressed in L. donovani [27]. The SignalP algorithm we used is the updated version of the 1985 Von Heijne algorithm. The lack of concordance in these predictions highlights the limitations of bioinformatics, while reinforcing the well known fact that signal sequences are highly variable. Our bioinformatics analysis also confirmed the annotation in the Leishmania Genome database [29] that none of the histidine secretory acid phosphatases found in the genomes of L. major or L. donovani infantum have classical amino-terminal secretion signals. Interestingly, only the membrane bound acid phosphatases of L. major are annotated as containing classical secretion signal peptides, whereas the same is not true of the orthologs in L. donovani infantum, and these membrane bound proteins would have been excluded by our TM domain screen. Only 14 of the proteins predicted to be secreted through a classical signal sequence-dependent mechanism were detected in leishmania Cm by MS, and only two of these, GeneDB:LmjF04.0310 and LmjF36.3880, had sufficiently high SILAC ratios to be included in the secretome (Additional data file 4). Although there are several possible explanations for failing to detect a protein by LC-MS/MS, the lack of correlation between the measured and the in silico predicted secretomes suggests that leishmania utilize nonclassical secretion signals and pathways to regulate the export of the majority of secreted proteins.
Evidence that proteins released by leishmania may originate in exosome-like vesicles, apoptotic vesicles, and glycosomes
Somewhat unexpected was the finding that leishmania Cm contained all of the proteins identified previously to be associated with exosomes isolated from both B lymphocytes and dendritic cells, with the exception of those for which the leishmania genome does not contain an ortholog (Additional data file 5). In fact, more than 10% of the proteins found in the leishmania secretome were previously detected in exosome-like microvesicles released from other eukaryotic cells (Table 1), including B lymphocytes [38], dendritic cells [24], and adipocytes [39]. Recently, mammalian adipocytes were shown to secrete microvesicles, which were referred to as adiposomes [39]. These adiposomes contained 98 proteins, 13 of which we concluded to be actively secreted (Table 1). At least 25 additional adiposome proteins were detected in leishmania Cm with relative abundances lower than the secretome cut-off (Additional data file 5). The concordance of the proteomic data between these higher eukaryotic secreted microvesicles and the leishmania secretome is remarkable. These findings suggested that leishmania secrete exosome/adiposome-like microvesicles carrying proteomic cargo that is similar in composition to host microvesicles. In support of this, using scanning electron microscopy, we observed 50 nm microvesicles specifically located at the mouth of the leishmania promastigote flagellar pocket (Figure 6a,b), as well as evenly distributed across the cell surface of cells with the apparent morphology of amastigotes undergoing differentiation axenically (Figure 6c).
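Comparisons like the exosome/adiposome overlap reported above reduce to set intersections once ortholog mapping is done. A trivial sketch, with invented identifier sets standing in for the secretome and for a published vesicle proteome mapped to leishmania orthologs:

```python
secretome = {"LmjF19.0050", "LmjF35.2420", "LmjF27.0760", "LmjF31.2790"}
adiposome_orthologs = {"LmjF35.2420", "LmjF31.2790", "LmjF00.0003"}  # invented

shared = secretome & adiposome_orthologs
print(f"{len(shared)}/{len(secretome)} secretome proteins "
      f"({100 * len(shared) / len(secretome):.0f}%) overlap: {sorted(shared)}")
```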
Figure 6 Microvesicles budding from the flagellar pocket and plasma membrane of leishmania. Stationary phase leishmania promastigotes were fixed and coated for scanning electron microscopy as described in Materials and methods. (a) A leishmania promastigote, (b) 10× magnification of the exposed flagellar pocket region of panel a (square) after stage rotation, and (c) a promastigote in the process of differentiating into an amastigote. Arrowheads point to microvesicles.
Surprisingly, DNA-binding histone proteins were reliably detected by LC-MS/MS in Cm of stationary phase promastigotes (Additional data files 1 and 5). Histone proteins have been detected in dendritic cell exosomal preparations and were shown to be enriched in these preparations after the cells were treated with an apoptosis-inducing agent [24]. The dendritic cell vesicles containing histone proteins were more electron dense and migrated to a slightly higher sucrose density than the exosomes [24]. This led the authors to conclude that the histone-containing vesicles were indeed a distinct population of vesicles, termed apoptotic vesicles or blebs [24]. The detection of histones in Cm of stationary phase leishmania (Additional data files 1 and 5), along with the significant number of apoptotic leishmania known to be present in a stationary phase population (approximately 43 ± 5%) [25], suggests that promastigotes may have been releasing apoptotic vesicles as well as exosomes. In addition to exosomal and apoptotic vesicle-associated proteins, we also found that the leishmania secretome included many of the major glycolytic enzymes that normally reside in glycosomes of kinetoplastid organisms [40] (Table 1 and Additional data file 5). Relevant to these findings, leishmania have been shown to utilize peroxisomal targeting signals (PTSs; PTS1 and PTS2) to direct proteins to the glycosome [41], and a screen of the leishmania genome identified approximately 100 proteins with either a PTS1 or a PTS2 targeting signal [42]. Remarkably, our MS analysis of leishmania Cm detected nearly half of these predicted glycosomal proteins, with ten being detected at high enough relative abundance to be considered bona fide secreted proteins (Table 1). These findings suggest that leishmania release either whole glycosomes or glycosomal cargo into the extracellular environment.
Discussion
Our quantitative proteomic analysis showed that L. donovani released a wide array of proteins when in the stationary phase of growth (Additional data file 1). Based on previous studies concerned with the pathogenesis of leishmania as well as other intracellular pathogens [17,43], we anticipated that leishmania may secrete virulence effectors into their extracellular environment, including the cytosolic compartment of infected host cells. By examining the composition of the leishmania secretome and generating quantitative information concerning the relative enrichment of secreted proteins, we expected to identify candidate leishmania effector proteins that may be involved in virulence. As expected, protein export was found to be heterogeneous, with some proteins exported to a higher degree than they were retained by the cell, whereas for others the opposite was true (Figure 2). It was our assumption that proteins with higher Cm/CA ratios were more likely to be actively secreted than they were to be externalized as a result of either incidental lysis or apoptosis.
In light of this, we used the relative abundance data and a rigorous statistical cut-off (Cm/CA values greater than the ratio for H2B by at least two standard deviations) to define proteins actively secreted by leishmania. Based on this analysis, we consider 151 proteins in this dataset to be bona fide members of the leishmania secretome. On the other hand, we recognize that in implementing this rigorous cut-off we probably sacrificed some sensitivity. Thus, it is probable that at least some proteins with ratios falling below the cut-off are actively secreted as well. Next, we inspected the leishmania secretome for potential virulence factors. Candidate virulence factors were divided into four categories: proteins putatively involved in intracellular survival; proteins with known immunosuppressive functions; proteins involved in signal transduction; and proteins involved with transport processes (Tables 2 and 3, and Additional data file 1), of which many had high Cm/CA values. In addition, proteolysis was one of the most common GO terms assigned to the leishmania secreted proteins. Although the frequency of this term did not reach statistical significance (see Gene Ontology analysis of the leishmania secretome, under Results, above), this term appeared to be somewhat over-represented among the proteins in the upper half of the ratio distribution (Figures 4a and 5a). It seems likely that the secretion of at least some of these proteins may be part of a stress response. On the other hand, some of these proteins may be involved in pathogenesis. One potential mechanism is the direction of their proteolytic activities toward degradative enzymes resident in phagolysosomes to promote intracellular survival. A second possibility might involve direction of their proteolytic activities to degrade major histocompatibility complex class I and II molecules, thereby preventing antigen loading and reducing the efficiency of antigen presentation, as has been described for leishmania-infected cells [44]. These findings suggest that secreted leishmania proteins with proteolytic activities may contribute to pathogenesis, and further investigation of this is warranted. Also likely to be involved in intracellular survival are secreted antioxidants, and more generally proteins with oxidoreductase activity, such as iron superoxide dismutase (GeneDB:LmjF32.1820). Other examples of these were found in the leishmania secretome (Figure 5b and Additional data file 1), and these may provide protection from intracellular free radical attack. In addition, some members of the secretome, such as the putative 14-3-3 protein, are known to have powerful antiapoptotic properties in other systems [45]. That leishmania infection inhibits host cell apoptosis is well known [46,47], and these antiapoptotic secreted proteins may be active in prolonging the lifespan of infected host cells. Important inclusions in the category of proteins with functional roles in intracellular survival were nucleases, such as GeneDB:LmjF23.0200, an endoribonuclease, which was found to be the second most highly secreted protein (Table 3). This endoribonuclease belongs to a class of proteins that act on single-stranded mRNA and are thought to be inhibitors of protein synthesis [48]. These nucleases may aid in purine salvage, which is obligatory for leishmania because they are incapable of de novo purine synthesis [49].
Myo-inositol-1-phosphate synthase (GeneDB:LmjF14.1360), the protein with the highest relative abundance ratio (Table 3) and therefore the most enriched in the Cm, may also play a role in intracellular survival (Table 2). Leishmania myo-inositol-1-phosphate synthase has been shown to be essential for growth and survival in myo-inositol-limited environments [50]. Leishmania myo-inositol-1-phosphate synthase knockouts were found to be completely avirulent [50] in mice, suggesting that the phagolysosomal lumen may be a myo-inositol-limited environment. Myo-inositol-1-phosphate synthase is required for de novo biosynthesis of myo-inositol, a precursor of vital inositol phospholipids such as those found in the GPI membrane anchors of nearly all leishmania surface proteins and other glycoconjugates such as GP63 and lipophosphoglycan. The massive export of this essential enzyme into Cm is intriguing and warrants further study. The leishmania secreted protein kinetoplastid membrane protein-11 (GeneDB:LmjF35.2210), identified in the SILAC/mass spectrometry analysis, was previously characterized as having immunomodulatory effects on host cells during leishmania infection [51]. Furthermore, we found that the leishmania secretome contains an ortholog of the mammalian macrophage migration inhibitory factor (GeneDB:LmjF33.1750), a protein with known immunosuppressive and immunomodulatory properties [52] in humans. It is possible that this leishmania ortholog could share these functions and affect host immune responses during leishmania infection. Manipulation of host cell function via interference with signaling pathways is a well known virulence tactic of intracellular pathogens [53-57]. After internalization, leishmania-infected macrophages exhibit defective signaling in response to various stimuli [54,55,57]. Based on our analysis, we estimate that at least ten secreted leishmania proteins are predicted to be involved in some manner in signal transduction (Table 2 and Additional data file 3). In this regard, we found that kinase activity was concentrated in the upper half of the secretome ratio distribution (Figure 5b). Secreted leishmania signaling intermediates such as the mitogen-activated protein kinases 3 and 11 (GeneDB:LmjF33.1380 and LmjF10.0490) and the protein tyrosine phosphatase-like protein (GeneDB:LmjF16.0230) have the potential to affect macrophage cell signaling after internalization [56]. Another interesting signaling-related protein, the putative phosphoinositide-binding protein (GeneDB:LmjF35.2420), was one of the most highly secreted proteins (Table 3). This protein might influence macrophage cell signaling through its potential binding of inositol-containing signaling intermediates that are products of phosphatidylinositol 3-kinase. Notably, GO analysis identified this putative phosphoinositide-binding protein (GeneDB:LmjF35.2420) as a sorting nexin 4-like protein (Additional data file 3). Sorting nexins are known to be involved in coordinating intracellular vesicle trafficking processes, including both endocytosis and exocytosis [58]. As such, this putative sorting nexin may be considered to be a leishmania candidate virulence factor for its potential to modulate vesicle trafficking in infected cells (Table 2).
Somewhat unexpected was the finding in leishmania Cm of proteins known to be involved in vesicular transport (Tables 1 and 2), such as the phosphoinositide-binding protein discussed above, the small GTP-binding protein Rab1 (GeneDB:LmjF27.0760), and a putative ADP-ribosylation factor (GeneDB:LmjF31.2790). We have classified these proteins as candidate virulence factors because, although these transport vesicle regulatory proteins may normally regulate vesicle trafficking in leishmania, following secretion they may act ectopically and have the potential to affect vesicle trafficking in infected cells. For example, it is tempting to speculate that these leishmania secreted proteins could directly affect phagosome maturation through modulating transport to and fusion with host multivesicular bodies, endosomes, and lysosomes. Another interesting and unexpected aspect of the leishmania secretome was the presence of numerous proteins related to translational machinery (Figure 4a and Additional data file 3). The functional basis for this is unclear at this time. Perhaps the turnover of these proteins is extremely high and excess machinery is disposed of via secretion in addition to the reported processes of ubiquitination and proteasome-mediated degradation.
Table 2 Leishmania candidate virulence factors enriched in Cm
Interestingly, clathrin-coated vesicles isolated from rat liver [59] were found to contain more than 30 of the same translation-related proteins we found in leishmania Cm, including a putative leishmania eukaryotic translation initiation factor 1A (GeneDB:LmjF16.0140), the protein with the fifth highest enrichment ratio (Table 3). Appreciation of the multifunctional nature of proteins is increasing, and the possibility exists that these proteins, perhaps purposely packaged in leishmania secretory vesicles, may play ancillary roles in pathogenesis or pathogen survival, similar to what appears to be the case for EF-1α [17]. The protein secretion pathways utilized by leishmania are not well understood. According to our analysis, only two of the 151 proteins in the leishmania secretome contain a classical amino-terminal secretion signal (Additional data files 2 and 4). The fact that more than 98% of the secretome lacks a targeting signal indicates that nonclassical secretion pathways are probably the dominant means by which leishmania proteins are secreted. In support of this argument, the leishmania secretome included a large number of proteins previously identified as components of exosomes secreted from various higher eukaryotic cell types (Table 1). Leishmania Cm also contained many proteins shown to be cargo of clathrin-coated vesicles. Rat liver clathrin-coated vesicles were found to contain a total of 346 proteins, and in addition to the 30 translation-related proteins mentioned above, a further 30 of these proteins were detected in leishmania Cm, including clathrin (GeneDB:LmjF36.1630) and HSP70. Significantly, both clathrin and HSP70 have been found in exosomes released from various human cells [24,38,39]. In fact, the proteomes of these clathrin-coated vesicles and that of mammalian exosomes were strikingly similar [24,59-62]. Furthermore, leishmania have been shown to form clathrin-coated vesicles [63], and clathrin-directed trafficking in leishmania was shown to be essential for survival in macrophages [64].
Taken together, these findings suggest that leishmania may use clathrin-coated vesicles as a transport mechanism to direct vesicle trafficking at least, if not the exocytosis of proteins from endosomal compartments to the extracellular milieu. Based on these findings, we propose that leishmania protein secretion probably involves the release of exosome-like vesicles, which may or may not be clathrin-coated. Moreover, we suggest that at least three distinct vesicular secretion processes contribute to the secretome, including exosomes, apoptotic vesicles, and glycosomes (Table 1). Exosomes are small vesicles, 50 to 100 nm in diameter, which are released by fusion of either multivesicular endosomes or secretory lysosomes with the plasma membrane of eukaryotic cells [65-67]. Exosomes were initially described in reticulocytes as a mechanism for shedding organellar proteins and excess transferrin receptor during differentiation into mature, nucleus-free red blood cells [68]. Somewhat later, the proteomes of B lymphocyte and dendritic cell exosomes were described [24,38]. Dendritic cell exosomes have attracted a significant amount of attention because of their immunostimulatory properties as cell-free, peptide-based vaccines [69-72]. The striking correspondence between the leishmania secretome and these exosomes strongly suggests that protein secretion by leishmania involves the release of intraluminal vesicles originating from either the tubular lysosome [73] or multivesicular endosomes, or both. It is tempting to speculate that leishmania exosomes, like dendritic cell exosomes [69,70], may be capable of modulating the host immune response, although it is likely that their properties may be quite distinct. The formation of membrane blebs at the plasma membrane of apoptotic mammalian cells and their subsequent release are phenomena that have attracted significant attention [24,66,74]. As mentioned above, these apoptotic vesicles have been found to contain histone proteins and cytochrome c oxidase subunits. That leishmania undergo apoptosis is well established [25], and our finding that they release cytochrome c oxidase subunits and histones into Cm (Additional data files 1 and 5) suggests that they release apoptotic vesicles. Moreover, it has been shown that cultures of stationary phase leishmania promastigotes contain up to 43% apoptotic cells, and when the latter are removed by sorting, the remaining nonapoptotic population is incapable of establishing and maintaining an infection [25]. These findings, taken together with our detection of apoptotic vesicle marker proteins, histones 1 through 4, in leishmania Cm (Additional data files 1 and 5), strongly suggest the possibility that leishmania apoptotic vesicles may be involved in pathogenesis. This could take the form of immune evasion, wherein (similar to activation of the 'silent phagocytosis' pathway used to internalize and clear very early apoptotic cells by mammalian macrophages [75]) these apoptotic vesicles would promote inhibition of macrophage activation before invasion by viable leishmania promastigotes. Somewhat more difficult to explain from our findings is the suggestion of whole glycosome release, based upon both the characterized and the putative glycosomal proteins we detected in leishmania Cm (Table 1).
Notably, many of the leishmania Cm proteins that were bioinformatically predicted to be glycosomal by the presence of PTS1 or PTS2 have been identified in purified glycosomes of the closely related kinetoplastid Trypanosoma brucei brucei [76]. Our identification of the two most prevalent leishmania glycosomal membrane proteins in promastigote Cm (Additional data file 5) suggests that intact glycosomes were being exported from the cell. This contrasts with a model in which these organelles fuse with the flagellar pocket to release their cargo; in that case we would not have expected to detect glycosomal membrane proteins per se. As we suggested above to potentially explain the secretion of translation machinery proteins, release of glycosomal proteins may be related to a stress response, but the targeted release of glycosomes with a more specialized function remains a possibility. As previously stated, it is our hypothesis that the proteins with higher relative abundance in leishmania Cm are more likely to play an active role in pathogenesis than those proteins secreted to a lesser extent. Following this logic, export of proteins with lower Cm abundance may be related to either routine waste disposal or apoptotic blebbing, and these may be less likely to contribute to pathogenesis. Although this is a reasonable working model, it is not absolute and does not mean that proteins secreted in lesser abundance may not be of interest. In fact, EF-1α, a candidate virulence factor that has been shown to inhibit macrophage activation [17], had a Cm/CA peptide ratio in the lowest 20% of the ratio distribution (Figure 2 and Additional data files 2 and 5), well below the cut-off for active secretion used to define the secretome. These data, especially when combined with the findings that apoptotic leishmania are required for leishmania disease development [25], support the interpretation that many of the proteins found in leishmania Cm are potential candidates for unique and essential roles in leishmania virulence, and further analysis will be required to prioritize those that should receive additional attention. It should be mentioned that three leishmania proteins previously described to be secreted, namely SacP [14], chitinase [36], and silent information regulator (SIR)2 [18], were not identified in this LC-MS/MS analysis of leishmania Cm. One possible explanation is that they have extremely low intracellular concentrations, with nearly all of the synthesized protein being secreted. Under these conditions other proteins present in the cell at higher concentrations could mask the CA peptide signals in the mass spectrometry. Importantly, the SILAC/mass spectrometry analysis was designed to compute ratios of simultaneously detected spectra from mixed Cm and CA samples. The absence of a CA signal in the mass spectrometry would have provided a denominator of zero, thereby precluding computation of a meaningful Cm/CA ratio and leading to exclusion from the analysis. Thus, no matter how abundant these peptides might be in Cm, without a comparable CA signal these proteins would not be included in the leishmania secretome, as defined by this study. This explanation may also account for why chitinase and SIR2 were not identified in the secretome, especially considering that neither has a characterized intracellular function. Finally, we conducted these experiments using L. donovani donovani. Sequencing of the L.
donovani genome is currently underway. We therefore used the completed L. major genome to assign protein identities to the mass spectra gathered from leishmania Cm. Although the genomes of these two species are thought to be very similar, as their similar life cycles, biology, and expression profiles would indicate [77], it is possible that genomic differences between the species prevented identification of some Cm proteins. Examination of the secretome led to several additional findings worth noting. First, the number of proteins known to be associated with small vesicles far outstripped the number of identified proteins that had classical secretion signals. This finding suggests that the main secretory route for leishmania involves the release of small vesicles. Second, for the majority of candidate virulence factors that were identified, it seems most likely that they function to influence the survival of leishmania within the phagolysosome, although this remains to be formally tested. As the collection time for Cm was limited because of the need to culture organisms in the absence of serum, proteins in the secretome that may be involved in pathogenesis are likely to act during early stages of infection. During this early stage, they may contribute to the observed delay of phagosome maturation [78]. It has been proposed that delayed phagosome maturation represents a window of opportunity during which internalized promastigotes can differentiate into the more acid-tolerant amastigotes [79,80]. Whether the amastigote secretome is similar to or distinct from that of stationary phase promastigotes is not known at this time. However, given the relatively low stage-specific differences in gene expression that have been described [81], we do not regard significant differences as likely. Third, targeting of virulence factors into the host cell cytosol has been shown to be an effective strategy used by intracellular pathogens to remodel the environment and to influence host cell function [17,82,83,84,85]. After invading their macrophage hosts, leishmania have been shown to block cell activation, to inhibit microbicidal activity [86,87,88], and to attenuate antigen-presenting cell function [57,89,90]. A broad picture of the proteins secreted by leishmania in cell-free culture provides a basis for investigation of effector proteins that may be active in host cells, either within the phagolysosome or within the host cytosol.
Conclusion
This quantitative proteomic analysis identified a large and diverse pool of proteins in leishmania Cm and allowed us to define the leishmania secretome based on measurements of relative protein abundance in Cm that could only be explained by active secretion. The identities of proteins within the secretome revealed many candidates for further studies concerned with potential contributions to virulence and pathogenesis, as well as with mechanisms of secretion. Moreover, the data also indicate clearly that leishmania use predominantly nonclassical targeting mechanisms to direct protein export. This leads us to propose a model in which protein export occurs largely through the release of microvesicles, perhaps including exosome-like vesicles, apoptotic vesicles, and glycosomes.
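To make the secretome definition used above concrete, the following is a minimal sketch of the enrichment rule described for Additional data file 5 (ratios normalized to histone H2B and retained when more than two standard deviations above it). All numeric values here are invented for illustration; the real values are in Additional data files 2 and 5.

```python
# A minimal sketch of the secretome cut-off logic described above.
# The ln(Cm/CA) values below are invented; real ones are in Additional
# data files 2 and 5. Ratios are assumed to be pre-normalized to histone H2B.

ln_ratios = {
    "GeneDB:LmjF16.0140": 2.9,  # eIF1A-like protein, highly enriched in Cm
    "GeneDB:LmjF36.1630": 1.6,  # clathrin
    "GeneDB:LmjF27.0760": 1.4,  # Rab1
    "HistoneH2B": 0.0,          # reference protein after normalization
}
h2b_sd = 0.35  # assumed replicate standard deviation of the H2B ratio

# Keep proteins whose mean ln(Cm/CA) lies more than two standard deviations
# above that of histone H2B; these are the "actively secreted" candidates.
cutoff = ln_ratios["HistoneH2B"] + 2 * h2b_sd
secretome = sorted(pid for pid, r in ln_ratios.items() if r > cutoff)
print(f"cut-off = {cutoff:.2f}; candidate secretome: {secretome}")
```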
Isolation of promastigote Cm
Stationary phase promastigotes that had been grown either in medium containing normal isotopic abundance arginine and lysine or in medium containing ¹³C₆-arginine and ²H₄-lysine were collected by centrifugation at 300 × g for 10 minutes in a Beckman GS-6R centrifuge (Beckman-Coulter, Fullerton, CA, USA) and washed in Hanks balanced salt solution. Organisms were then concentrated tenfold by re-suspension in medium M199 without FBS, supplemented with 2 mmol/l L-glutamine, 10 mmol/l HEPES, 10 μg/ml soya bean trypsin inhibitor (Sigma-Aldrich), and either normal isotopic arginine and lysine or ¹³C₆-Arg and ²H₄-Lys at the concentrations given above, for 4 to 6 hours at 26°C. Cm was isolated from cells by centrifugation at 300 × g for 10 minutes in a Beckman GS-6R. The supernatant was then subjected to centrifugation once more to ensure that no cells remained in suspension. Cm and cell pellets were either used immediately for enzymatic analysis or stored at -20°C for mass spectrometry analysis. A minimum of 5 × 10⁸ promastigotes in culture was required to generate Cm with signals of adequate strength for mass spectrometry analysis. Four times as many stationary phase organisms were required to generate sufficient Cm for detection of proteins by either metabolic labeling and autoradiography or by Western blotting. Two billion organisms were cultured in M199 containing normal isotopic arginine and lysine (Sigma-Aldrich). For autoradiography, cells were collected and washed as above, and then starved of methionine by resuspension in RPMI-1640 medium without methionine and cysteine (Sigma-Aldrich) with 1% FBS. After 1 hour, 50 μCi/ml of ³⁵S-methionine (Sigma-Aldrich) was added and cells were cultured for a further 2 hours to allow labeling to occur. After washing to remove serum, cells were incubated for 4 hours in serum-free RPMI-1640 medium without methionine and cysteine, containing 10 mmol/l L-glutamine, 1 mmol/l HEPES, and 10 μg/ml soya bean trypsin inhibitor, at which point the cells were separated from the Cm by low-speed centrifugation to avoid mechanical lysis of cells. Pelleted cells were lysed on ice in lysis buffer (50 mmol/l Tris [pH 7.4], 1% Triton X-100, 0.15 mol/l NaCl, 1 mmol/l EGTA, 1 mmol/l phenylmethylsulfonyl fluoride, 10 μg/ml aprotinin, and 10 μg/ml leupeptin). Cell lysates were clarified by centrifugation in a microcentrifuge at maximum speed for 20 minutes at 4°C. The resulting whole cell lysate (WCL) supernatants and the Cm were precipitated with trichloroacetic acid at 10% final concentration. The precipitates were solubilized in Laemmli sample buffer and equal counts/minute of Cm and WCL were separated by SDS-PAGE (5% to 20% gradient) followed by autoradiography. For Western blotting, Cm was collected as above, but organisms were concentrated in normal isotopic M199. After separating Cm from the cells, WCLs were generated by sonicating the cell pellets to mimic lysis that may have occurred inadvertently during culture or centrifugation. Briefly, cell pellets were solubilized in 0.5 mmol/l Tris Laemmli sample buffer without SDS, bromophenol blue, or β-mercaptoethanol, but including the protease inhibitors leupeptin and aprotinin, both at 1 μg/ml, and 10 μg/ml phenylmethylsulphonyl fluoride. The solution was sonicated three times at a power setting of 3 for 10 seconds. The lysate was cleared of insoluble material by centrifugation for 5 minutes at 10,000 × g. Following clarification, the supernatant proteins were precipitated following the procedure below.
The pellet was resuspended in Laemmli sample buffer without β-mercaptoethanol or bromophenol blue.
Protein precipitation
For Western blotting and metabolic labeling analysis, proteins present within promastigote Cm were precipitated using pyrogallol red, as described previously [91]. Briefly, sodium deoxycholate was added to Cm to a final concentration of 0.02% and the solution was mixed for 30 minutes at 4°C to facilitate precipitation. Cm was then mixed with an equal volume of pyrogallol red solution (containing 0.05 mmol/l pyrogallol red, 0.16 mmol/l sodium molybdate, 1.0 mmol/l sodium oxalate, 50 mmol/l succinic acid, and 20% methanol [vol/vol]) and the pH adjusted to 2.0 with 2N HCl. The resulting solution was incubated at room temperature for 1 to 2 hours followed by 12 to 24 hours at 4°C. The Cm protein precipitates were harvested by centrifugation at 11,000 × g for 60 minutes at 4°C followed by two washes with ice cold acetone. The pellets were allowed to air dry before solubilization in Laemmli sample buffer without β-mercaptoethanol or bromophenol blue at 95°C for 30 minutes. Protein concentrations of the Cm and WCLs were measured using the BioRad DC Protein Assay (BioRad Laboratories Inc., Hercules, CA, USA).
G6PD assay
Promastigote cell pellets were lysed by sonication to generate a WCL in 1 ml medium M199 with the appropriate concentrations of either normal isotopic abundance or heavy stable isotope arginine and lysine, 1 mmol/l L-glutamine, 1 mmol/l HEPES, 10 μg/ml soya bean trypsin inhibitor, the protease inhibitors leupeptin and aprotinin, both at 1 μg/ml, and 10 μg/ml phenylmethylsulphonyl fluoride. After clearance by centrifugation at 11,000 × g, serial twofold dilutions of the lysate were made in medium M199 supplemented as above to yield final concentrations of 50%, 25%, 10%, 5%, and 1% (vol/vol). The concentrations of G6PD in 100 μl of Cm and in serial dilutions of WCL were assayed in 55 mmol/l Tris-HCl and 3.3 mmol/l MgCl₂ buffer at pH 7.8, containing 3.3 mmol/l glucose-6-phosphate and 2 mmol/l NADP. Enzyme was obtained from the Sigma Chemical Company as a positive control. To generate a reference, 0.01 units of G6PD were stabilized in 5.0 mmol/l glycine with 0.01% bovine serum albumin (pH 8.0) and assayed along with sample and WCL dilutions. Enzyme reactions were carried out at 30°C and the change in absorbance at 340 nm over 5 minutes, reflecting NADPH formation, was measured.
LC-MS/MS of promastigote conditioned medium and data analysis
To identify proteins specifically secreted by leishmania into culture medium, direct quantitative comparisons of protein abundance in Cm versus CA were made on a protein-by-protein basis. The Cm was collected from leishmania grown in medium containing heavy isotopes of arginine and lysine, and compared with cell-associated material prepared from promastigotes grown in medium containing normal isotopic abundance amino acids. In some cases the reciprocal analysis was also carried out, with identical results. Approximately equal amounts of labeled and unlabeled protein (estimated from a preliminary LC-MS/MS analysis) from Cm and CA were mixed together and analyzed either by gel-enhanced LC-MS/MS exactly as described previously [92] or by peptide-level isoelectric focusing (IEF) combined with LC-MS/MS.
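Before the separation details below, a minimal sketch of the quantitative comparison just described: per-protein Cm/CA ratios assembled from peptide-level heavy/light measurements. The data structures and numbers are invented for illustration; the actual ratio extraction was done with MSQuant as described below.

```python
import math

# Hypothetical peptide-level heavy(Cm)/light(CA) ratios keyed by protein;
# real ratios were extracted with MSQuant (see below).
peptide_ratios = {
    "GeneDB:LmjF16.0140": [14.2, 18.9, 16.5],   # strongly Cm-enriched
    "GeneDB:LmjF27.0760": [3.1, 2.7, 3.4],
    "HistoneH2B":         [0.9, 1.1, 1.0],      # cell-associated reference
}

# Per-protein summary: mean of ln-transformed peptide ratios, as described
# for Additional data file 2 (normalization to H2B omitted for brevity).
mean_ln = {pid: sum(math.log(r) for r in rs) / len(rs)
           for pid, rs in peptide_ratios.items()}
for pid, m in sorted(mean_ln.items(), key=lambda kv: -kv[1]):
    print(f"{pid}: mean ln(Cm/CA) = {m:.2f}")
```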
For IEF, the protein mixture was solubilized in digestion buffer (50 mmol/l NH₄OH, 1% sodium deoxycholate, pH 8.0), denatured by heating to 99°C for 5 minutes, reduced by incubation with 1 μg dithiothreitol for 30 minutes at 37°C, alkylated with 5 μg iodoacetamide for 30 minutes at 37°C and finally digested by the addition of 1 μg porcine trypsin (Promega, Madison, WI, USA) overnight at 37°C. After digestion, the sample was acidified by addition of an equal volume of sample buffer (3% acetonitrile, 1% trifluoroacetic acid, and 0.5% acetic acid) and the deoxycholate that fell out of solution was pelleted at 16,100 × g for 5 minutes. Peptide mixtures were then desalted on STop-And-Go Extraction (STAGE) tips [93] before being resolved into 24 fractions from pH 3 to 10 on an OFFGEL IEF system (Agilent Technologies, Santa Clara, CA, USA), in accordance with the manufacturer's instructions. Fractions from the IEF were diluted with an equal volume of sample buffer, and each was desalted again on a STAGE tip. Each gel or OFFGEL fraction was analyzed on a linear trapping quadrupole-Fourier transform tandem mass spectrometer, as described previously [19]. Fragment spectra were extracted with ExtractMSN.exe (v3.2) using the default parameters (ThermoFisher Scientific, Ottawa, ON, Canada); monoisotopic peak assignments were corrected with DTASuperCharge (default parameters [94]); and the resulting peak list was searched against the protein database for L. major plus the sequences of all human keratins and porcine trypsin (5 November 2006 version, 8,324 sequences) using Mascot (v2.1 [95]). MSQuant [94] was used to parse Mascot result files, to recalibrate mass measurements, and to extract quantitative ratios. The final nonredundant list of proteins was generated using finaList.pl, an in-house script available on our website [96]. The false discovery rate for protein identifications, based on two or more peptides with a measured mass accuracy under 3 ppm (the overall average was 0.61 ppm), a Mascot score of 25 or greater, and a length of 8 residues or more, was estimated to be less than 0.5% using reversed database searching. All identified peptides with their associated parameters can be found in Additional data file 1. SILAC ratios were extracted exactly as described previously [19]. The mean ln-transformed ratios from four independent analyses and the relative standard deviations can be found in Additional data file 2.
Western blotting
Following isolation of Cm, lysis of the corresponding cell pellet, and precipitation of proteins in both fractions, equivalent amounts of protein from the Cm and WCL were fractionated by SDS-PAGE. Proteins were transferred to nitrocellulose and probed with anti-EF-1α (Upstate Biotechnologies Inc., Lake Placid, NY, USA) following the manufacturer's instructions, as well as with leishmania-specific antibodies to histidine secreted acid phosphatase [97] and against HSP70 and HSP90 [98] (a kind gift from Dr Joachim Clos).
Scanning electron microscopy
Stationary phase promastigotes were washed in phosphate-buffered saline and fixed in 2.5% glutaraldehyde in 0.1 mol/l sodium cacodylate buffer (pH 7.2) containing 0.146 mol/l sucrose and 5 mmol/l CaCl₂ at 22°C under vacuum in a microwave: 2 minutes at 100 W, 2 minutes without microwaves, 2 minutes at 100 W, and then repeated.
Subsequently, fixed organisms were rinsed in the same buffer in the microwave twice for 40 seconds at 100 W and post-fixed in 1% OsO₄ in 0.1 mol/l sodium cacodylate containing 2 mmol/l CaCl₂ and 0.8% potassium ferricyanide (Polysciences, Warrington, PA, USA) at 22°C under vacuum in a microwave, following the same steps used in the glutaraldehyde fixation. Cells were washed in distilled water at room temperature and allowed to adhere to poly-L-lysine (Sigma) coated coverslips. Subsequently the coverslips were dehydrated through an ascending ethanol series from 50% to 100%, each step for 40 seconds at 100 W in a microwave. The fixed cells were critical point dried with liquid CO₂ in a Balzers 020 Critical Point Dryer (Balzers Union Ltd, Liechtenstein) and coated with gold palladium using a Nanotech SEMPrep II sputter coater (Nanotech Ltd., Prestwick, UK). Samples were observed and imaged using a Hitachi S-2600 VPSEM (Hitachi High Technologies, Finchampstead, Wokingham, Berkshire, UK) at the University of British Columbia Bioimaging Facility.
Bioinformatics screen of the genome of Leishmania major to identify candidate secreted proteins
The genome of L. major was accessed at the GeneDB L. major database [29]. Predictions of signal peptides and signal peptidase cleavage sites were made by SignalP [99]. Once these were provisionally identified, a filter was applied to remove those that contained more than one TM region predicted by TMpd [100]. Proteins with just one TM region were again screened to filter out those whose single TM domain did not overlap with the signal peptide coordinates. Finally, these putative classically secreted, non-TM proteins were screened for GPI attachment sites at the carboxyl-terminus using the GPI prediction program GPI-SOM [101] (a schematic sketch of this filtering logic follows the list of additional data files below). Gene Ontology (GO) [30] annotations were performed using Blast2GO [31]. A nonredundant database was used as reference for Blastp searches with an expectation value threshold of 1 × 10⁻³ and a high scoring segment pair cut-off of 33. Annotations were made with default parameters. Briefly, the pre-eValue-Hit-Filter was 1 × 10⁻⁶, the Annotation cut-off was 55, and the GO Weight was 5. The statistical framework GOSSIP [32] was used to identify statistically enriched GO terms associated with leishmania secreted proteins when compared with the GO terms associated with all of the proteins identified in leishmania Cm. GOSSIP generates 2 × 2 contingency tables for each GO term in the test group and uses Fisher's exact test to calculate P values for each term. The P values are then adjusted for multiple testing by calculation of the false discovery rate and the family wise error rate [32].
Statistical analysis
Statistical analyses of Cm/CA ratios and G6PD concentrations were performed using GraphPad Prism version 4.00 for Windows (GraphPad Software, San Diego, CA, USA).
Authors' contributions
SKC helped in the design and implementation of the bioinformatics screen. DPR was involved in LC-MS/MS data collection and analysis. DD provided leishmania-specific antibodies to secreted acid phosphatase. DN helped with the design of biochemical analyses. LJF was involved in study design, data collection and analysis, and manuscript preparation. NER was involved in study design, interpretation of results, and manuscript preparation.
Additional data files
The following additional data files are available with the online version of this paper. Additional data file 1 is a table listing all the proteins, and the peptides contributing to their identification, detected in leishmania Cm.
Additional data file 2 is a table showing a complete list of the SILAC ratios calculated for each Cm protein in each experiment, including the means of the four experiments. Additional data file 3 is a table listing all the GO terms associated with the leishmania Cm proteins. Additional data file 4 is a table listing the proteins predicted by bioinformatics to be secreted under the control of an amino-terminal secretion signal peptide; also shown here are the proteins with predicted GPI attachment sites and those proteins determined to be present in leishmania Cm by the SILAC LC-MS/MS analysis. Additional data file 5 is a table listing the leishmania Cm proteins, their mean SILAC ratios, and any documented microvesicle associations for these proteins.
Additional data file 1: The proteome of leishmania Cm. 358 proteins had at least two nonoverlapping peptides that were detected and quantified in three or more individual analyses of leishmania Cm proteins. The peptides corresponding to each identification are shown. Protein identities were determined as described in Materials and methods and for Tables 1 to 3.
Additional data file 2: Cm/CA peptide ratios of leishmania Cm proteins. After determining which proteins were to be considered for analysis (as described in Materials and methods and for Additional data file 1), the measured Cm/CA ratios were normalized to the measured value of histone H2B in each independent experiment. The normalized values were then natural log (ln) transformed (mean ln-transformed Cm/CA ratio, experiments [Exps] 1 to 4) to reduce the spread of the data. The means of the ln-transformed ratios for each protein identity were then calculated (mean ln-transformed values). The relative standard deviations of the peptide ratios for each analysis are included.
Additional data file 3: GO analysis of leishmania Cm proteins. GO annotation of the proteins detected in leishmania Cm. *Proteins with amino-terminal secretion signal peptides, and †proteins shown to be antigenic. GO IDs lists the GO identification number associated with each protein, and GO Term lists the term associated with each GO ID. C, cellular compartment; F, molecular function; P, biologic process.
Additional data file 4: Bioinformatics analysis of classically secreted leishmania proteins. Leishmania proteins predicted to be classically secreted by a genome-wide screen for proteins containing an amino-terminal secretion signal peptide. MS, proteins detected in the SILAC/mass spectrometry analysis; §, proteins detected by mass spectrometry with ratios above the secretome cut-off; GPI, proteins found to contain a GPI attachment site; *, proteins previously reported to be secreted by leishmania.
Additional data file 5: Microvesicle associations of leishmania Cm proteins. Proteins with mean Cm/CA peptide ratios greater than two standard deviations above that of histone H2B were considered enriched. *Proteins with amino-terminal secretion signal peptides, and †proteins shown to be antigenic. Microvesicle Association displays the vesicles associated with the protein ID. AP, adipocyte adiposome; BC, B-cell lymphocyte exosome; DC, dendritic cell exosome; Gly, glycosome.
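As noted in Materials and methods, here is a schematic sketch of the classical-secretion screen. It is not the actual pipeline: the SignalP, TM-predictor, and GPI-SOM outputs are assumed to be available as precomputed per-protein annotations, and the gene IDs are placeholders.

```python
# Schematic sketch of the genome screen for classically secreted proteins.
# Tool outputs (SignalP, TM predictor, GPI-SOM) are mocked as fields; the
# gene IDs below are placeholders, not real GeneDB entries.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ProteinAnnotation:
    gene_id: str
    signal_peptide: Optional[Tuple[int, int]]  # SignalP span, None if absent
    tm_regions: List[Tuple[int, int]]          # predicted TM spans
    gpi_site: bool                             # GPI-SOM call

def passes_screen(p: ProteinAnnotation) -> bool:
    if p.signal_peptide is None:
        return False                           # needs an N-terminal signal
    if len(p.tm_regions) > 1:
        return False                           # drop multi-TM proteins
    if len(p.tm_regions) == 1:
        (ts, te), (ss, se) = p.tm_regions[0], p.signal_peptide
        if te < ss or ts > se:                 # TM span outside the signal
            return False
    return True

annotations = [
    ProteinAnnotation("LmjF.test.0001", (1, 22), [], False),
    ProteinAnnotation("LmjF.test.0002", (1, 22), [(5, 21)], False),
    ProteinAnnotation("LmjF.test.0003", (1, 22), [(80, 100)], True),
]
# GPI-positive proteins are flagged separately, as in Additional data file 4.
print([(p.gene_id, p.gpi_site) for p in annotations if passes_screen(p)])
```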
A SIR-based model for contact-based messaging applications supported by permanent infrastructure
Marina Murillo-Arcila, Institut Universitari de Matemàtiques i Aplicacions de Castelló (IMAC), Escuela Superior de Tecnología y Ciencias Experimentales, Universitat Jaume I, Spain
Abstract. In this paper we focus on the study of coupled systems of ordinary differential equations (ODEs) describing the diffusion of messages between mobile devices. Communications in mobile opportunistic networks take place upon the establishment of ephemeral contacts among mobile nodes using direct communication. SIR (Susceptible, Infected, Recovered) models make it possible to represent the diffusion of messages using an epidemiologically based approach. The question we analyse in this work is whether the coexistence of a fixed infrastructure can improve the diffusion of messages and thus justify the additional costs. We analyse this case from the point of view of dynamical systems, finding and characterising the admissible equilibria of this scenario. We show that a centralised diffusion is not efficient when people density reaches a sufficient value. This result supports the interest in developing opportunistic networks for occasionally crowded places to avoid the cost of additional infrastructure.
1. Introduction. In this paper, we are concerned with the study of coupled systems of ordinary differential equations (ODEs) that describe the diffusion of messages between mobile devices. We base our model on Population Processes, a method commonly used to model the dynamics of biological populations [12]. More concretely, we use the so-called SIR (Susceptible, Infectious and Recovered) models, which are used to describe the spreading of human epidemic diseases. The study of the asymptotic stability of the disease-free and the endemic equilibrium is an active recent area of research; see for instance [6,10,14]. These biological models have a strong connection with message spreading and have recently been widely studied. Under the formulation of a system of ODEs, Haas and Small in [8] developed a model based on epidemiological processes for a network that used animals (whales) as data carriers to store and transfer messages. Zhang et al. [17] presented a rigorous, unified framework to study epidemic routing and its variations. The authors of [5] introduced a mathematical approach for message diffusion in opportunistic networks using the Epidemic protocol.
One of the main conclusions of their analysis (a mathematical model and its respective simulation) is that SIR models are quite accurate for the average behaviour of epidemic DTNs (Delay Tolerant Networks). In [16] the authors proposed a detailed analytical model to study epidemic information dissemination in mobile social networks. It was based on SIR models including rules related to users' behaviour, especially when their interests change according to the information type, and these changes can affect the dissemination process. Other approaches for modeling P2P communications can be found in [11]. Our research is motivated by the recent development of new contact-based messaging applications. One example is Firechat, a messaging application designed for festivals, which became popular in 2014 in Iraq due to government restrictions on Internet use¹, and after that during the Hong Kong protests². There are other examples, such as the secure messaging application Briar (see https://briarproject.org) or CoCam [13], an application for image sharing at events. Experience shows that these messaging applications seem to be operative in open places with a moderate to high density of people. Nevertheless, they still have to be tied to audits of cloud data storage [15]. In our paper, we propose new models that describe a class of contact-based messaging applications which are based on establishing short-range communication directly between mobile devices, and on storing the messages in these devices to achieve their full dissemination. For these models we studied their equilibria and obtained analytical expressions for their resolution. Moreover, we performed numerical simulations to validate our results. The evaluations show that these models can reproduce the dynamics of message diffusion. The paper is organised as follows: in Section 2 we introduce some preliminaries about dynamical systems and the basic epidemic model. The diffusion of messages following an epidemic model for an open area, where people can enter and leave, is described in Section 3. The case where the birth and death rates coincide is discussed in full detail. In Section 4, we introduce a fixed infrastructure that contributes to the diffusion of the messages and a study parallel to the one in the previous section is conducted. The performance evaluation of the previous two models is shown in Section 5, and in Section 6 we summarise the main conclusions of the work. 2. Preliminaries. In this section we recall some notions of dynamical systems and we formally introduce the basic epidemic model on which we base our approach. A continuous-time dynamical system is given by ẋ = f(x(t)) with a function f : Ω ⊆ Rⁿ → Ω. Given x₀ ∈ Ω, we can define its orbit under the dynamical system as the solution to the corresponding Cauchy problem with initial condition x(0) = x₀. We will pay special attention to the case when f : [0, N] → [0, N] is a continuous function which is also differentiable in ]0, N[, with N > 0. For the sake of completeness, we recall some basic fundamentals of dynamical systems. We say that x* is a fixed point or an equilibrium point if f(x*) = 0, which yields a constant orbit for x*, x(t) = x* for all t ≥ 0. An equilibrium point is said to be stable if for all ε > 0 there exists δ > 0 such that for all x₀ ∈ [0, N] with |x₀ − x*| < δ we have that |x(t) − x*| < ε for all t ≥ 0.
We say that x* is an attractor if there exists some δ > 0 such that for all x₀ ∈ [0, N] with |x₀ − x*| < δ the corresponding orbit satisfies x(t) → x* as t → ∞. We recall that an equilibrium point is hyperbolic if f′(x*) ≠ 0; otherwise x* is a non-hyperbolic point. It is well known that a hyperbolic equilibrium point is an attractor if f′(x*) < 0 and a repulsor point if f′(x*) > 0. Further information on dynamical systems can be found in [4]. First, we present the basic epidemic model on which we base our research, which has already been introduced in the present frame in [9]. It is given by:

I′(t) = λS(t)I(t), S′(t) = −λS(t)I(t), (1)

for all t ≥ 0, where I(t) denotes the class of infected nodes at time t and S(t) the class of susceptible nodes to be infected, and λ > 0 stands for the growth rate at which the number of infected nodes increases, proportionally to the number of infected and non-infected ones. This shows that the transmission of messages follows epidemic diffusion, a concept similar to the spreading of infectious diseases, where an infected node (the one that has a message) contacts another node to infect it (transmit the message) [1]. Each node has a limited buffer where the messages in transit can be stored and when two nodes establish a pair-wise connection, they exchange the messages they have in their buffer, and check whether some of the newly received messages are suitable for notification to the user. It is important to point out that we assume that all nodes which have the messaging application store and forward messages and that the contact between two nodes lasts long enough for transferring the whole message. We will assume that the population remains constant over time: N₀ = S(t) + I(t), t ≥ 0. This permits us to reduce this system to the one-dimensional logistic equation:

I′(t) = λI(t)(N₀ − I(t)).

Once we discretize the derivative for some small h > 0, we get the following difference equation:

I(t + h) = I(t) + hλI(t)(N₀ − I(t)).

A similar description can be given for the nodes which are susceptible to receiving the message. The discretized models will be needed to perform numerical simulations. 3. Epidemic model for an open area. In this section, we extend the model given in (1) to take into account that people can enter and leave an open area (e.g., a public square, a shopping mall, etc.). This model is further extended in the following section to consider a dual message diffusion model, that is, contact-based diffusion together with centralised diffusion. Contact-based messaging applications considered in this work are based on establishing short-range communication directly between mobile devices and storing the messages in these devices to achieve their full dissemination. Nodes move freely in a given area with a given contact rate between pairs λ > 0, and new nodes come to the place with an arrival rate β > 0; a newly arrived node is a susceptible node (it does not have the message). We suppose that nodes leave the place with an exit rate of δ > 0. These are equivalent to the birth and death rates of epidemiological models. Thus, the number of nodes (population) in the place at time t, N(t), depends on the initial number of nodes in the place, N(0), and the rates of arrival and exit. We assume a short-range communication scope (for example, Bluetooth), so network congestion and interference do not have a strong impact. In our model, we consider that both susceptible and infected nodes can leave the area, as if it were natural mortality in an SIR model; see for instance [2,3]. Therefore, the final exit rate for each one of these classes is proportional to the relative number of susceptible and infected nodes.
Thus, the number of nodes is not constant over time and it can be obtained as N(t) = N₀ + (β − δ)t. The system can be expressed using a deterministic model based on the following system of coupled ODEs:

S′(t) = β − λS(t)I(t) − δS(t)/N(t),
I′(t) = λS(t)I(t) − δI(t)/N(t), (4)

for all t ≥ 0, with initial conditions S(0) = S₀, I(0) = I₀, and N(0) = N₀ tied to S₀ + I₀ = N₀. 3.1. Dynamics for the epidemic model in an open area. Figure 1 represents the evolution of the infected nodes I(t) and the number of nodes N(t) as a function of time for different values of the arrival and exit rates. They have been obtained by using the Euler method with step h = 0.001. All plots start with the same number of nodes N₀ = 100, one infected node I₀ = 1, and contact rate λ = 0.001. Analyzing the dynamics of this system, we see that, when there is no arrival and exit rate (β = δ = 0), we have the basic epidemic model, so the system is stable and all nodes get the message, as we can see in Figure 1a. In contrast, when the system has the same arrival and exit rate (Figure 1c with β = δ = 1), the system reaches a fixed point, but not all the nodes get the message (I(t) < N(t)). If β > δ, then the number of nodes increases indefinitely, as shown in Figure 1b. Finally, when β < δ, all the nodes leave the place, and N(t) falls to 0, as shown in Figure 1d. We now proceed to study analytically the equilibrium points of the model. When the system reaches an equilibrium point at time tₛ, this implies that S(t), I(t), N(t) are constant for t > tₛ, so their derivatives are 0. From equations (4), we get β = δ and the number of nodes N(t) remains constant at N₀ for all t ≥ 0. In this case, there is a renewal of nodes, with rate β = δ. If we consider the I′(t) equation, we obtain

I′(t) = λI(t)(N₀ − I(t)) − (β/N₀)I(t) = bI(t) − λI(t)², (5)

where b = λN₀ − β/N₀. The solution of this differential equation when I(0) = 1, i.e. a single mobile device with the message, is:

I(t) = K e^{bt} / (K − 1 + e^{bt}), with K = b/λ = N₀ − β/(λN₀). (6)

We can obtain the delivery time T_d, that is, the time when the message arrives to a given number of nodes M. Using equation (6), setting I(t) = M and solving for t, we have:

T_d = (1/b) ln( M(K − 1) / (K − M) ). (7)

We can also obtain the number of infected nodes when the system reaches the equilibrium. From equation (4), we can study the equilibrium points (S_e, I_e) of the unidimensional dynamical system obtained when taking into account that in the equilibrium β = δ and I(t) + S(t) = N₀ for all t ≥ 0. In order to calculate the fixed points we solve the quadratic equation

λS² − (λN₀ + β/N₀)S + β = 0,

with solution S_e = N₀ or S_e = β/(λN₀). We can now study the behaviour near the fixed points S_e = N₀ and S_e = β/(λN₀). Writing the dynamics of S as S′ = f(S) = β − λS(N₀ − S) − βS/N₀ and computing the derivative of f, we have f′(S) = 2λS − λN₀ − β/N₀, so f′(N₀) = (λN₀² − β)/N₀. As a consequence, if the condition

λN₀² − β > 0 (10)

holds, then N₀ is a repulsor point, and if λN₀² − β < 0, then N₀ is an attractor. When λN₀² = β, then f′(N₀) = 0 and we cannot conclude anything based on the former results. Nevertheless, a weaker criterion based on the values of f can be used: since in this case f(S) = λ(S − N₀)² > 0 for all S < N₀, we also get that N₀ is an attractor. On the other hand, for S_e = β/(λN₀) we have f′(β/(λN₀)) = (β − λN₀²)/N₀. We recall that this second fixed point only appears in [0, N₀] when (10) holds, and in that case it is an attractor. Thus, if (10) does not hold we have a unique fixed point at N₀ that is an attractor, and as λN₀² crosses β it bifurcates into two fixed points: N₀, which becomes a repulsor, and β/(λN₀), which is now the attractor. In both cases, the basin of attraction of the attractor point is the whole interval [0, N₀].
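A minimal numerical check of this analysis, under the parameter values quoted for Figure 1 (N₀ = 100, I₀ = 1, λ = 0.001, β = δ = 1): the Euler iteration below should settle at the predicted attractor I_e = N₀ − β/(λN₀) = 90. The step size and horizon are choices of this sketch, not values from the paper.

```python
# Euler integration of system (4) in the renewal case beta = delta,
# using the Figure 1 parameters; step size and horizon are sketch choices.
lam, beta, delta = 0.001, 1.0, 1.0
h, steps = 0.01, 100_000           # integrate up to t = 1000
S, I = 99.0, 1.0
for _ in range(steps):
    N = S + I                      # stays at N0 = 100 since beta = delta
    dS = beta - lam * S * I - delta * S / N
    dI = lam * S * I - delta * I / N
    S, I = S + h * dS, I + h * dI

N0 = 100.0
I_eq = N0 - beta / (lam * N0)      # attractor predicted when lam*N0**2 > beta
print(f"simulated I -> {I:.2f}, predicted I_e = {I_eq:.2f}")  # both ~90
```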
From this one-dimensional analysis of the behaviour of the variable S, and due to the tie S(t) + I(t) = N₀ for all t ≥ 0, we can directly extend these results to the two-dimensional case. As a consequence, (N₀, 0) is the unique attractor if (10) does not hold; if it holds, we have that (N₀, 0) is a repulsor and

(S_e, I_e) = (β/(λN₀), N₀ − β/(λN₀)) (11)

is an attractor whose basin of attraction is all the points (S, I) ∈ [0, N₀]² satisfying S + I = N₀. 4. Epidemic model for an open area with fixed nodes. In this model we assume the same hypotheses as in the previous one, but we now add a new consideration: the existence of fixed nodes with a greater communication range (for example, WiFi) that can store and send the messages in the place. The number of fixed nodes will depend on the area of the place and on the nodes' communication range. All fixed nodes send the message at a given rate ρ, which will depend on message size and bandwidth. The nodes that are in the place can receive the message from these fixed nodes, so the number of infected nodes increases with rate ρ. To design our model we take into account the following transitions: • (→ S, β): new nodes enter the place at rate β. • (S → I, λSI): nodes get the message when contacts occur. • (S → I, ρ): nodes receive the message from the fixed nodes. • (I →, δI/(I + S)): nodes with the message leave the place. The system can be expressed using a deterministic model based on ODEs:

S′(t) = β − λS(t)I(t) − ρ − δS(t)/N(t),
I′(t) = λS(t)I(t) + ρ − δI(t)/N(t). (12)

We point out that the coefficients β and δ can be obtained, for instance, from turnstiles or cameras at control access points. Clearly, all new nodes arriving at a rate β will be included in the category of S(t). However, nodes leaving the place at rate δ can either be carrying the message or not. The factors δS(t)/N(t) and δI(t)/N(t) separate nodes leaving the place into both categories, proportionally to the number of existing nodes of each category in the place. 4.1. Dynamics for the epidemic model for an open area with fixed nodes. As in the previous model, we proceed to study (12) in depth. It is clear that when the system reaches the equilibrium N(t) = N₀, so β = δ. In this case, if we consider the I′(t) equation from (12), and replace N(t) with N₀ and S(t) with N₀ − I(t), we have:

I′(t) = λI(t)(N₀ − I(t)) + ρ − (β/N₀)I(t) = −λI(t)² + bI(t) + ρ, (13)

which is a Riccati differential equation. To simplify the notation we denote b = λN₀ − β/N₀. The general solution of (13) will be given by I(t) = I_p + 1/z(t), where I_p denotes a particular (constant) solution of (13) defined as:

I_p = (b + √(b² + 4λρ)) / (2λ).

On the other hand, z(t) denotes the solution of the linear differential equation z′(t) = (2λI_p − b)z(t) + λ. Solving this equation and considering the initial condition I(0) = 1 we get:

z(t) = C e^{dt} − λ/d, with C = 1/(1 − I_p) + λ/d,

where d = 2λI_p − b = √(b² + 4λρ). Using this equation we can obtain the delivery time T_d for M nodes, setting I(t) = M and solving for t:

T_d = (1/d) ln( (1/(M − I_p) + λ/d) / (1/(1 − I_p) + λ/d) ). (17)

We now study the equilibrium of the model. From equation (13), we can find the equilibrium points (S_e, I_e) of the discrete unidimensional system obtained when taking into account that in the equilibrium β = δ and I(t) + S(t) = N₀. The equilibrium points are given as solutions of β − ρ − λS(N₀ − S) − βS/N₀ = 0, that is,

λS² − (λN₀ + β/N₀)S + (β − ρ) = 0.

To simplify the notation let d = √((λN₀ − β/N₀)² + 4λρ). Then the equilibrium points are given by

S₁ = (λN₀ + β/N₀ + d) / (2λ), S₂ = (λN₀ + β/N₀ − d) / (2λ).

Finally, we analyse the behaviour of the fixed points S₁ and S₂. The derivative of f for the discretized system is given by f′(S) = 1 + h(2λS − λN₀ − β/N₀). First, for S₁ we have f′(S₁) = 1 + hd. As a consequence, |f′(S₁)| > 1 and then S₁ is a repulsor point. On the other hand, f′(S₂) = 1 − hd. As a consequence, |f′(S₂)| < 1 for h small, and then S₂ is an attractor.
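The roots S₁, S₂ and the equilibrium they induce are easy to evaluate numerically; a minimal sketch, assuming the quadratic in S and the definition of d given above. With ρ = β it returns S₂ = 0 and I_e = N₀, matching the full-coverage case discussed next.

```python
import math

# Equilibrium of the fixed-node model at beta = delta: roots of
# lam*S^2 - (lam*N0 + beta/N0)*S + (beta - rho) = 0, with
# d = sqrt((lam*N0 - beta/N0)**2 + 4*lam*rho).
def equilibrium(lam: float, beta: float, rho: float, N0: float):
    d = math.sqrt((lam * N0 - beta / N0) ** 2 + 4 * lam * rho)
    s1 = (lam * N0 + beta / N0 + d) / (2 * lam)  # repulsor, lies above N0
    s2 = (lam * N0 + beta / N0 - d) / (2 * lam)  # attractor, physical root
    return s1, s2, N0 - s2                       # (S1, S2, I_e)

# With rho = beta the coverage is full: expect S2 = 0 and I_e = N0 = 100.
print(equilibrium(lam=0.001, beta=1.0, rho=1.0, N0=100.0))
```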
As in the previous model, and due to the fact that S(t) + I(t) = N₀ for all t ≥ 0, we can directly extend these results to the two-dimensional case. As a consequence, and since in our model S(t), I(t) ≥ 0, the only equilibrium point that will exist is the one obtained from S₂; that is, the equilibrium point is given by:

(S_e, I_e) = ( (λN₀ + β/N₀ − d)/(2λ), N₀ − (λN₀ + β/N₀ − d)/(2λ) ). (21)

It is important to remark that this point will only make sense when ρ ≤ β. We now evaluate this equilibrium point depending on ρ > 0, comparing these results with the dynamic evaluation of the system in Figure 2, which shows the evolution of the infected nodes I(t) and the number of nodes N(t) as a function of time. All graphs start with the same number of nodes, N₀ = 100, and one infected node, I₀ = 1. We also plot in each graph I(t) when ρ = 0 (that is, the model analysed in Section 3) and when λ = 0, that is, when there are no contacts and the diffusion of the message is strictly performed by the fixed nodes. Thus, we have the following cases (omitting the previously studied case ρ = 0): • When 0 < ρ < β, the equilibrium point has S_e > 0 and I_e > 0 and, as can be observed in Figure 2a, the number of infected nodes is always positive and it stabilises at I_e. • When ρ = β, the equilibrium point is (S_e, I_e) = (0, N₀), that is, all nodes are infected, confirming the experimental evaluation of the equation (see Figure 2b). • If ρ > β then (S_e, I_e) will not appear, but S′(t) < 0 and then S(t) is a strictly decreasing function. Then the number of infected nodes will increase until it attains the value N₀, as can be observed in Figure 2c. When ρ is higher (as in Figure 2c), we can see that the diffusion is mainly performed by fixed nodes. Summing up, we can see two important effects when ρ increases: first, a reduction in the diffusion time, and, second, an increase in the final number of nodes that get the message. Moreover, when ρ ≥ β, all nodes finally receive the message. Thus, introducing fixed nodes in a place, we can get a full diffusion of a message even when nodes can enter and leave the place. 5. Performance evaluation. The models introduced in Sections 3 and 4 allow us to evaluate the dynamics of message diffusion in a bounded area. When the system reaches an equilibrium point we can obtain characteristic parameters such as the number of infected nodes and the diffusion time. Here, our evaluation considers that the system is in an equilibrium state, that is, we assume that the arrival and exit rates are the same. From now on, we will jointly refer to both rates as the renewal rate. We consider a bounded rectangular area with side l = 100 m, with N₀ initial individuals that can move freely, entering and leaving the place with a renewal rate β = δ and carrying a mobile device that can establish pairwise connections using short-range communications. In order to make the experiments independent of both the number of nodes and the area size, we chose to use the factor people density, obtained as N₀/l². In a bounded area, as shown in [7], the contact rate is λ ≈ 2.7366 rE[V]/l² when r ≪ l, where r denotes the communication range and E[V] the mean speed of the nodes. In our model, we will consider r = 7.5 m and E[V] = 0.5 m/s, obtaining a contact rate λ = 0.001 s⁻¹, that is, a pairwise contact rate of about 3.6 contacts/h. The diffusion rate of the fixed nodes is set to ρ = 1 message per second. We first evaluate the message coverage of the diffusion.
We define message coverage as the final percentage of nodes that receive the message when the system reaches the equilibrium. This value is obtained by evaluating the factor 100 · I_e/N₀ using expressions (11) and (21). Figure 3 includes two contour plots of the message coverage depending on people density and renewal percentage. The (relative) renewal percentage (RR) is defined as the percentage of nodes that are renewed in the place every second: RR = 100 · β/N₀. In Figure 3a, where we plot the results for contact-based distribution only, we can clearly see the impact of people density. When density increases, the percentage of nodes that receive the message increases, as the effect of the fixed renewal percentage is reduced, reaching practically 100% of nodes when density is very high. For low densities and higher renewal percentages the diffusion is reduced to values below 50% of nodes. In Figure 3b we can see the results for contact-based and fixed nodes diffusion for ρ = 1. We observe the effect of fixed nodes diffusion when people density is low, increasing the coverage of the diffusion compared to the results of contact-based diffusion alone. Nevertheless, this effect vanishes when people density increases. We can see that, when the renewal percentage is less than ρ = 1, the message reaches 100% of the nodes. Summing up, a centralised diffusion is not efficient when people density is high, so a contact-based diffusion is a better approach. We now evaluate the delivery time of a message using expressions (7) and (17). In Figure 4 we plot the delivery time depending on people density and for several renewal rates. As reaching 100% of nodes is only possible when ρ > β, we plot the delivery time for lower message coverages (specifically, 95% and 75%). In Figure 4a, we can see that using fixed nodes reduces the delivery time when people density is low. Specifically, for the case when the renewal rate is 1 and it is equal to ρ (that is, δ = β = ρ = 1), we obtain a very reduced delivery time when density is very low (note also that for these densities the number of nodes in the place is very low, so a centralised approach can quickly disseminate the message). When the density increases, all the curves converge to the same delivery time, so the effects of ρ and the renewal rate vanish. The results for a lower message coverage (Figure 4b) show a similar pattern, although the values are lower, as they represent the time when the message reaches fewer nodes. 6. Conclusions. In this paper we focused on the study of coupled systems of ordinary differential equations (ODEs) to describe the diffusion of messages between mobile devices. The question we analysed was whether the coexistence of a fixed infrastructure can improve the diffusion of messages and thus justify the additional costs. We analysed this case from the point of view of dynamical systems, finding and characterising the admissible equilibria of this scenario. We showed that a centralised diffusion is not efficient when people density reaches a sufficient value. This result supports the interest in developing opportunistic networks for occasionally crowded places to avoid the cost of additional infrastructure. The performance of contact-based diffusion depends mainly on people density and the renewal ratio. Using only contact-based diffusion, when people density is low, the message coverage is low and the diffusion time high. Introducing fixed nodes diffusion, we can increase the performance of the diffusion.
Nevertheless, a centralised diffusion is not efficient when people density is higher, so a contact-based diffusion is a better approach. Funding. This work was partially supported by Ministerio de Economia y Competitividad, Spain (Grants TEC2014-
Assessing EFL Students' Language Proficiency in Secondary School Classes in Benin
This paper presents a study on assessing English as a Foreign Language (EFL) students' language proficiency in Benin secondary schools. Assessment and evaluation are indispensable components of English language teaching. Assessing students is crucial to both learners and teachers themselves in the sense that its basic function is to improve learning. However, little awareness has been raised about the key roles of assessment in Benin secondary schools. This study aims at investigating how effective teachers' assessment of their learners' language abilities is in the EFL classroom and at exploring EFL learners' attitudes towards assessment. Using qualitative and quantitative methods, 56 EFL teachers and 458 lower-intermediate and upper-intermediate EFL learners in the Atlantic region in Benin participated in this study. The field study revealed that many teachers wrongly mistake assessment for testing and thus use both terms to mean the same thing. Teachers have not been able to see tests as a way of assessing their teaching methods and upgrading their students' language skills. Besides, most teachers stated that testing is the only tool they use in assessing their learners' language proficiency, and it is mostly for the purpose of assigning grades at the end of the term. Students declared that most tests consist of grammar questions. As this study draws attention to the close relationship between assessment and teaching, training workshops have been recommended to guide and train teachers on how to effectively assess their students so that teaching and learning in the EFL classroom meet expected objectives and goals. I. INTRODUCTION Assessing one's learners is an everyday practice for a good teacher. Assessment and evaluation are important aspects of language teaching and learning because they enable teachers to measure the effectiveness of their teaching methods in accordance with specific learning objectives. In the process of implementing teaching procedures or classroom instruction, teachers are constantly assessing their learners' achievements, strengths, and weaknesses, either intentionally or incidentally. In other words, there is a constant interaction between assessment and learning. Although assessment, testing and evaluation are terms often used interchangeably, there are differences among these three terms. Evaluation, which is judgemental, measures the overall performance of learners by finding out what students are able to do with the language. According to Gultom [1], assessment and testing are subsets of evaluation, which means that evaluation is broader in function. While the term assessment refers to "a variety of ways of collecting information on a learner's language ability or achievement" [2], Brown [3] states that a test is a tool that measures students' ability or knowledge in a given area. In addition, Richards [4] gives a clear distinction between assessment and testing, stating that while assessment refers to the procedures a teacher uses to determine their students' learning and evaluate their teaching methods, a test is one form of assessment that measures learners' learning at a specific point in time and involves collecting information in numerical form. With reference to what has been mentioned above, it is clear that assessment plays a significant role in the teaching and learning process. However, in the Benin context, assessment has not received much attention on the part of teachers and learners.
This article explores how well the language proficiency of learners is assessed by EFL teachers. It intends to find out the various forms of assessment used by teachers and to what extent teachers are aware of the importance of assessment in their students' learning process. Following a mixed methods approach, data were collected through classroom observation and a questionnaire administered to EFL teachers and students. II. PROBLEM STATEMENT With the advent of communicative language teaching, teaching methods have shifted from teacher-centredness to learner-centredness in most educational systems. Consequently, traditional forms of assessment have been dropped in favour of other forms of assessment. Alternative forms of assessment are learner-centred and more efficient in the language classroom. Based on real-life tasks, they focus on what students can integrate and produce with the language rather than on what students can recall and write down (Macias, as cited in Ouahiani [5]). Also, in the era of communicative language testing, teachers "attempt to test real life language use, and use tasks where skills were integrated" [6]. Despite this significant change in language assessment, the language proficiency of EFL learners is not fully assessed by Beninese teachers. A great number of Beninese EFL teachers do not understand the basic function of assessment, which is to help students improve their learning. In most schools, especially public ones, tests are generally given to students for the sole purpose of grading. Moreover, teachers test what is easy to test instead of testing what is important to test. Teachers assess their learners only for the ultimate goal of nation-wide examinations. This has not helped in building students' skills for real-life communication purposes; it has only resulted in teachers' focussing on what students have to do to go up to the next level rather than students' grasping and using the target language itself. In addition, it has been observed that teachers do not see tests as a tool that helps them to assess and improve their teaching methods. This paper thus explores teachers' and students' awareness of the importance of assessment in learning and identifies ways in which teachers can effectively assess their students in the EFL classroom. III. PURPOSE OF THE STUDY This study aims at investigating the effectiveness of teachers' assessment of EFL learners' language proficiency. This research work has determined the effectiveness of EFL teachers' assessment in their classes, explored learners' attitudes towards teachers' assessment and found out how positive or negative the backwash effect is in EFL classes in Benin. IV. RESEARCH QUESTIONS To reach the expected objectives, three questions have been framed for this study: 1. How effective is teachers' assessment in EFL classes in Benin? 2. What are EFL learners' attitudes towards assessment? 3. Is the backwash effect positive or negative in the EFL classrooms in Benin? V. LITERATURE REVIEW The literature review pinpoints previous research on the differences among assessment, testing and evaluation. It highlights the various types of assessment as well as criteria for a good test. A. What is Assessment? Assessment is a general term that refers to various methods, such as tests and observations, that are used to gather information about students' learning. According to Valva and Gokaj [7], "assessment is an ongoing process which lies in a much wider domain.
Every time a student answers a question, gives a comment, or tries to pronounce a new word, phrase or concept, the teacher unconsciously makes an assessment of the student's performance". Assessment is the systematic collection, review and use of information regarding educational programs undertaken, for the purpose of improving learning and development (Palomba & Banta, as cited in [8]). Assessment is an ongoing process that measures progress and is not limited to final or summative tests. It is not limited to the final achievement at the end of the term and can measure learners' learning in diverse ways [9]. The main purpose of assessment is to find out how well students' learning matches the course objectives. It enhances teachers' effectiveness. Unfortunately, assessment is often seen by most teachers as only quizzes or end-of-term tests. The National Research Council [10] stresses this point by stating that while assessment often conjures up images of end-of-unit tests, examinations or quarterly report cards, these general aspects of assessment do not really capture the full extent of how assessment operates in the classroom. Classroom assessment is more about the everyday opportunities and interactions offered to teachers and students for gathering information about students' achievements and using this information to improve both teaching and learning. In other words, assessment "is a natural part of classroom life, that is a world away from formal examinations, both in spirit and in purpose" [11]. B. Why do Teachers Assess? The reasons for which teachers should conduct assessment, according to Dr Alorvor and el Sadat [12], can be grouped into three. These are: helping the students, improving the teaching, and providing information for interested people. § Helping the students • Diagnose learning difficulties. • Guide students and teachers. • Determine the progress and potentialities of students. • Allocate students to groups for special attention. § Improving Teaching • Determine the extent to which objectives are achieved. • Compare progress of pupils under different teachers and different curricula. § Providing Information • Screen individuals for promotion. • Certify students. • Inform parents about performance. • Inform institutions of higher learning and employers of students' attainment. C. What should Teachers Test in a Language Class? Teachers often give a ''grade'' in terms of marks to their students in classrooms, which shows their ability in English (this may be expressed in various ways, e.g., 'C+', 'tenth in the class', or 'above average'). This situation does not really tell either the teacher or the students very much unless they know exactly what the grade is based on. It is not very useful to talk in general about 'ability in English': one student may be very good at listening but bad at writing; another student may speak fluently but make many grammar mistakes, and so on. So, in order to comment on a student's progress, we need to test particular skills and abilities. We can test language (to find out what students have learnt): Grammar, Vocabulary, Spelling, Pronunciation. We can also test skills (to find out what students can do): Listening, Reading, Speaking, Writing. Which of these are the most important for your students? Which are the easiest to test? Obviously, there is no single 'correct' answer: it depends on the type of class, what the students expect to do in the future, the examination system, etc.
Get teachers to give their own ideas, and try to bring out these points: 1. Tests often focus on grammar and vocabulary; but if we expect students to develop the ability to understand and use English, it is important to test skills as well as knowledge of the language. 2. Most people would regard all the skills as useful in some way: • Listening, for understanding spoken English on radio and television; • Reading, for study purposes (books, journals, etc.), and for understanding written instructions in English; • Speaking, for social contact with foreigners; • Writing, probably only for study purposes. 3. In deciding what is important, we should not only consider students' future needs; for example, most school students may never need to write English after they leave school, but writing is still important because it helps in learning the language. 4. The receptive skills (listening and reading) are especially important because they will enable students to continue learning the language on their own [13]. D. Educational Uses of Language Tests It is important for the teacher, before he can even begin to plan a language test, to establish its purpose or function. According to Harris [14], language tests have many uses in educational programs, and quite often the same test will be used for two or more related purposes. The following list of six categories summarizes the chief objectives of language testing, which, for simplicity, can be grouped under three headings: aptitude, general proficiency, and achievement. These three general types of language tests may be defined in the following manner: An aptitude test serves to indicate an individual's facility for acquiring specific skills and learnings. A general proficiency test indicates what an individual is capable of doing now (as the result of his cumulative learning experiences), though it may also serve as a basis for predicting future attainment. An achievement test indicates the extent to which an individual has mastered the specific skills or body of information acquired in a formal learning situation. As students progress through the various stages of learning English, they are usually given formal tests and examinations from time to time. But in addition to these formal tests, the teacher can also give regular informal tests to measure students' language performance. E. Assessment and Testing Test is often used as a byword for assessment. This has raised heated debates among researchers and teachers' advisers. Assessment is often mistaken for testing in educational practice [15]. A test is a "procedure designed to elicit certain behaviour from which one can make inferences about certain characteristics of an individual" [16]. A test is just an instrument for measuring students' capabilities or proficiencies. However, according to Durairajan [6], tests do not always really measure the capacities of students because our judgement is based on just one particular test. For that reason, the test sample needs to be representative so that the inference is valid. Unfortunately, test samples are not always representative of the actual capability of the learners. It follows that assessments do not always measure students' capabilities. Likewise, Karter [17] held that "assessment and testing are two different things". 
Understanding the differences between these two terms and their functions helps teachers to get the most out of both and enables them to see how well they (the teachers) are doing and how well their students are doing [11]. Furthermore, researchers such as Dendrinos [18] have pointed out clearly that tests are not synonymous with assessment. According to Dendrinos [18], assessment, which is a more "encompassing" term than testing, is primarily concerned with providing teachers or students with feedback information in order to plan the next learning step. Also, Sah [15] argues that during teaching periods, teachers knowingly or unknowingly keep observing students' interest and performance. Students themselves judge their improvements with regard to those of their peers, make comments on teachers' teaching methods, ask questions and so on. All these activities and measures, which are geared towards improving further teaching and learning, are known as assessments. Therefore, one can conclude that tests are just one of the various forms of classroom assessment in EFL teaching. F. Purpose and Types of Assessment Assessment is very important in teaching. In fact, there is no effective teaching and learning without assessment. That is why "the primary purpose of the assessment is to inform better teaching and more efficient learning" [19]. In the same vein, Alvarez [20] states that assessment is necessary to monitor learners' outcomes in a qualitative way and to establish a summative evaluation bearing not only on the learners but also on decision-making for educational programs. Generally, there are two main types of assessment in language teaching: formative and summative assessment. - Formative Assessment: According to [21], "assessment is formative when teachers use it to check on the progress of their students, to see how far they have mastered what they should have learned, and then use this information to modify their future teaching plans" (p. 5). In other terms, "assessment refers to observations which allow one to determine the extent to which students know or are able to perform a given task" [18]. It refers to all those activities (assigned by teachers and performed by students) that provide information used as feedback so that teaching may meet students' needs. It includes teacher assessment, feedback and feed-forward [11]. - Summative Assessment: Summative assessment is often carried out at the end of a course for the purpose of providing aggregate information on program outcomes to educational authorities (Chan as cited in [9]). Dendrinos [18] held that summative assessment is usually carried out at the end of a unit or units of instruction, an activity or a plan, to assess the knowledge and skills acquired at that particular point in time. It usually serves the purpose of giving a grade or making a judgment about students' achievements in the course. In conclusion, formative assessment aims at providing information about students' strong and weak areas so that teachers can improve their teaching methods or instructional procedures. On the other hand, summative assessment aims at determining the extent to which students have been able to master the overall learning outcomes at the end of the instruction. G. Steps in Designing a Good Language Test According to Dr Alorvor and el Sadat [12], there are six steps to go through when designing a good test. Step 1: Define the purpose of the test. The basic question to answer is, "Why am I testing?" Classroom tests serve several purposes. 
The nature of the test items is influenced by the purpose. Step 2: Determine the item format to use. Test items can be either essay type or objective type. Objective types include Multiple Choice, Short Answer, Matching, and True or False. Step 3: Prepare a test specification table (blueprint). A specification table matches course content with instructional objectives. To prepare the table, the specific topics and sub-topics covered during the instructional period are listed. The major course objectives are also specified, and the instructional objectives defined. The total number of test items is then distributed among the course content and instructional objectives (behaviours). Step 4: Write the test items. In writing the individual items, the specific principles guiding the construction of each type of test must be followed for the chosen item format. Step 5: Review the items. Each individual item must be examined carefully, at least a week after writing the test items. Items which are ambiguous or poorly constructed, as well as items that do not match the objectives, must be reworded or removed. Generally, bad items must be eliminated. After review, the items should be compiled in their final form for administration. Step 6: Write the directions. Clear, concise, and specific directions should be written. Directions must include the number of items to respond to, credit for orderly presentation of materials (where necessary), and the mode of identification of respondents. Students must be aware of the rules and regulations covering the conduct of the test. Penalties for malpractices such as cheating should be clearly spelt out. The sitting arrangement must allow enough space so that candidates cannot copy each other's work. H. Criteria for a Good Language Test There are different ways and methods of assessing learners. However, the most common method used by teachers is the test. Sah [15] held that testing, as part of assessment, measures learners' achievement. A test "best fulfils its function as part of the learning process if correct performance is immediately confirmed and errors are pointed out" [22]. Therefore, it is important to outline what a test should represent and the criteria it has to fulfil before being administered to learners. Validity, reliability, and practicability are the basic features of a good test. • Validity: Validity of a test refers to the "extent to which it measures what it is supposed to measure and nothing else" [23]. Validity also refers to how well test scores correspond to some criteria, such as behaviour, personal accomplishment or a characteristic that reflects the attribute that the test is designed to gauge [24]. The validity of a test includes face and content validity. Content validity determines whether the test is representative of the entire set of skills or areas the test seeks to measure, while face validity considers how suitable the content of a test seems to be on the surface [25]. • Reliability: According to Madsen (as cited in [26]), a reliable test is "the one that produces essentially the same results consistently on different occasions when the conditions of the test remain the same." In other words, a good test must produce the same or similar results when administered to different testees and scored by different markers or testers. • Practicability: A good test has to be practical or usable. When constructing a test, practical considerations need to be taken into account. 
Rehman [27] stated that a practical test has to be easy to administer, easy to interpret, and economical, and the time allocated for its administration has to be appropriate. Also, according to Benmostefa [28], a test is impracticable when: a) it requires considerable financial means and therefore a considerable budget; b) it is time-consuming, in the sense that it takes hours to complete; c) it cannot be administered on a one-to-one basis to hundreds of people with only a limited number of examiners; d) it takes a few minutes for a student to complete and several hours for the examiner to grade; e) it is too complex and sophisticated to the extent of not being of practical use to the teacher. In short, the literature review has highlighted the difference between assessment and testing, the various forms of assessment, the reasons why teachers should assess in a language class, the items teachers should test in a language class, the educational uses of language tests, the different steps in designing a good language test and the qualities of a good test. The next section explains the methodology used for this study. VI. RESEARCH METHODOLOGY This study aims at finding out teachers' understanding of assessment and the approaches they use in assessing their learners. Moreover, the research work identifies students' attitudes and perceptions towards assessment and evaluates the washback effect in the language classroom. In this regard, a mixed methods design has been used in carrying out this study. Fifty-six (56) EFL teachers and four hundred and fifty-eight (458) EFL students participated in this study. The participants were randomly selected from nine (9) private and public schools in the Atlantic region in Benin. Questionnaires and classroom observation were the research instruments used to gather information about the topic under study. The teachers' questionnaire consisted of ten (10) items, including two (2) open-ended questions. The students' questionnaire was originally written in French, and learners' responses were translated into English by the researchers. Both the teachers' and students' questionnaires were designed to find out how assessment is perceived by both teachers and students and how learners' language proficiency is assessed by EFL teachers. The second research instrument for data collection was classroom observation. In order to fully investigate teachers' assessment methods and to discover learners' attitudes towards these methods, classroom observations were conducted in six (6) EFL intermediate classes in both private and public schools. The data gathered from the questionnaires have been carefully analysed and interpreted. Tables and figures were designed to display and analyse the numeric data. The findings from the classroom observations have also been reported. VII. FINDINGS The findings from the various questionnaires and the results of the classroom observations carried out in the various schools are displayed below. It is important to highlight the fact that, of the fifty-six (56) EFL teachers who participated in the study, only forty-nine (49) teachers filled in the questionnaire and returned it. The remaining participants were therefore excluded from the analysis presented below. According to Fig. 1, 78% of the EFL teachers have academic qualifications while only 22% of them have professional qualifications. This indicates that most of the teachers do not have the required training for the teaching profession. 
This could be a challenge for effective teaching and proper assessment in secondary schools. This figure also discloses the teaching experience of the respondents. Only 29% of them have more than five years of experience teaching the English language, and the majority (71%) have been teaching English for less than five years. A. Data related to Questionnaire Administered to EFL Teachers In response to the next question, on assessment frequency and methods, all forty-nine (49) teachers stated clearly that they assessed their learners. The assessment frequencies are shown in the chart below. The results show that 57.14% of teachers mostly assess their learners through tests. 6.12% revealed they prefer oral presentations or projects. 12.24% of the teachers claimed they prefer assessing their learners through individual assignments, whereas 24.48% declared that they assess their students through group work activities. This shows that tests are the most common form of assessment EFL teachers use in their classrooms. Table 2 shows that 42.85% of the respondents assess their learners in order to assign them grades. However, 32.65% stated they conduct assessment to provide their students with relevant feedback regarding their learning and, finally, 24.49% said they assess in order to evaluate the effectiveness of their own teaching. Table 3 shows the aspects of language proficiency that are most often targeted by EFL teachers in assessing learners. In fact, the majority of the respondents (34.69%) focus on grammar. 10.20% assess writing skills and, unfortunately, only 12.24% of the 49 teachers assess speaking. Also, 22.24% of the EFL teachers focus on reading skills and 20.40% give priority to vocabulary acquisition. Grammar is tested by most of the respondents simply because grammar test papers are easier to grade than test papers on other skills and macro-skills. B. Data Related to Questionnaire Administered to EFL Students Apart from the data collected from EFL teachers, the researchers also sought students' opinions about assessment. With regard to the frequency of assessments, their opinions are presented in Fig. 2 below. From Fig. 2, 21% of the participants stated they are rarely assessed by their teachers, whereas most of the students (43%) acknowledged that they are often assessed. However, 24% of the learners claimed their teachers assessed them most of the time, while only 12% of the students revealed they are constantly assessed by their teachers. Since tests are commonly used by teachers as a means of assessment, learners in lower intermediate and upper EFL classes were asked what effects tests generally have on them. A large number (49.34%) of the students stated that they feel motivated to learn their lessons when they have tests, while almost the same percentage (48.25%) said they get nervous when they have a test, which makes them unable to concentrate and read. Yet, 2.40% of the learners stated that tests do not have any effect on them. From Table 4, one can notice that there are two nearly equal groups of students: those who view tests in a positive light and those who think that tests make them so nervous that they are unable to concentrate. For this last group, the nervousness certainly has a negative impact on their grades. On the issue of the most frequently asked questions, 41.26% of the respondents indicated that grammar questions are the most frequent, while 23.58% mentioned vocabulary as the most common type of exercise they are given during assessments. 
Still, 18.77% cited exercises on reading comprehension as the most frequent, while 16.37% of the students declared their teachers lay stress on writing when testing them. Table 6 shows learners' preferences regarding who should assess their learning performance. A large number of students (81%) preferred being assessed by their teachers. This could be due to the general belief that teachers alone are in charge of students' learning. Nevertheless, 13.97% of the students stated they preferred self-assessment, while only 5.02% held that they preferred peer assessment. These results suggest that teachers are still viewed by their learners as the central authorities when it comes to assessment. C. Data Related to Classroom Observation The classroom observations revealed that teachers do not often prepare students psychologically for tests. In most classrooms observed, as soon as teachers entered, they asked the learners to take out a piece of paper, without any introduction to announce the assessment. After this announcement, students were stressed, because it was at that moment that some would start begging their classmates for a piece of paper or even a pen for the test. Some students could not even manage to find a piece of paper before the teacher finished writing the quiz on the board. In some cases, the quiz was administered as a punishment for students' misbehaviour in the classroom. On one occasion, the teacher entered the classroom and, just because the students did not rise to greet him, he admonished them for failing to greet him and afterwards asked them to take out a piece of paper. Most of the students were sweating and looked very concerned. Their concern might be due to the fact that, under the influence of anger, the teacher would give them questions that were very difficult to answer. The quiz lasted fifteen minutes, and he kept intimidating the students, shouting out the number of minutes remaining and threatening to expel from the classroom any student who might cheat. Out of the 10 classrooms observed, four teachers administered a quiz. It is also important to mention that three out of the four quizzes were on grammar, which confirms the data from the questionnaire on the frequency of grammar testing in EFL classes. VIII. DISCUSSION OF THE FINDINGS This section discusses the findings of the field investigation. It provides answers to the research questions stated at the beginning of this paper. A. Effectiveness of Teachers' Assessment in EFL Classrooms in Benin One of the aims of this work was to discover the extent to which learners' language proficiency is assessed by EFL teachers. In light of the findings of the study, teachers' understanding of assessment is rather narrow. This is because the majority of the teachers do not have the required professional qualifications. All the teachers who participated in the study stated that they assessed their students on a regular basis. However, their conception of assessment is limited to summative assessment (mainly tests) intended to assign grades to the learners. A few mentioned oral presentations, assignments or group work as means of assessing their students. This means that tests are viewed as the major, if not the only, type of assessment by EFL teachers. 
Moreover, 21 out of the 49 teachers who answered the questionnaire disclosed that they assessed their learners in order to assign grades to be taken into account in the computation of means or averages in their subjects at the end of the semester. Nevertheless, a reasonable number of them stated that assessment is carried out to improve student learning, and a few of them claimed they assessed their learners to evaluate how effective their teaching methods are. This implies that some respondents do take formative assessment into account in their instructional practices. In addition, despite new approaches to testing, most teachers still adopt the traditional grammar and writing tests. Most of the respondents revealed that they focus on grammar assignments, while others mentioned vocabulary, writing, and reading comprehension. None of the teachers mentioned listening and speaking, which are very important because the ultimate goal of teaching EFL is to enable learners to communicate orally in the language. Teachers teach and assess what students need to pass their exams and not what they need to speak the language. Moreover, findings from a study by Ouahiani [5] suggested that assessment cannot be effective "unless the teacher takes time to assess students gradually following a set of steps and appropriately designed procedures." The idea here is that teachers need to follow specific procedures to administer assessment. That is the reason why the behaviours observed during the classroom observations are not to be promoted or encouraged. B. Learners' Attitudes towards Assessment and Backwash Effect The study has revealed that EFL learners' perceptions of assessment are not accurate. Most of the intermediate students stated that their teachers often assessed them through tests, which confirms what the teachers stated. 48.25% of the learners view testing as the only way their teachers could assess their learning. Besides, learners also revealed that grammatical structures are the most tested items. On the other hand, the study has shown that tests have certain effects on learners. Although a significant percentage (49.34%) of the learners stated that tests motivate them to learn their lessons more, a comparable percentage (48.25%) revealed that they get so nervous during tests that they find it difficult to concentrate. As stated earlier, this nervousness is partly due to teachers' behaviour during the administration of the tests and also to the significant influence of tests on learners' promotion from one form to the next. The fear of getting a poor mark and ultimately failing at the end of the school year may also account for the nervousness experienced by learners during tests. It can also be inferred that the washback of the tests is negative. This means that if tests are viewed by teachers and students as the main means of assessing learners' proficiency, then there is a need to be cautious, because these learners might be proficient, but owing to the negative effect that tests have on them, they will perform poorly on those tests. Besides, most students stated they preferred teacher assessment to self- or peer assessment. A lack of appropriate feedback has also been highlighted by the learners. Feedback plays a critical role after assessments because it provides guidance to students. Beyond the grades assigned in summative assessments such as quizzes and tests, it is important for teachers to draw students' attention to their strengths and weaknesses. 
By providing feedback, teachers give their learners the opportunity to make adjustments in their learning styles and become better learners. IX. CONCLUSION AND RECOMMENDATIONS The objectives of this study were to determine the effectiveness of EFL teachers' assessment in their classes, to explore learners' attitudes towards teachers' assessment, and to find out how positive or negative the backwash effect is in EFL classes in Benin. In order to collect relevant data, a questionnaire was administered to learners and another one to EFL teachers. Classroom observations were also conducted. The results of the study revealed that tests are the most common form of assessment. Teachers often assess their learners in order to assign marks, and most tests are grammar-based. Teachers have been encouraged to view assessment in a positive light, that is, as a way of ensuring that teaching and learning are in line with expected goals. The findings also raised issues related to the training of teachers in the field of language testing. It was found that almost all of the teachers are not well prepared to face the challenges related to language proficiency assessment, because some of them did not have the opportunity to learn how to do so. The study found that English language teachers still have very limited knowledge about the educational value and use of assessment in English language classes. On the basis of the findings, it is recommended that teachers be sensitized to the importance of language testing and evaluation in the teaching and learning processes, mainly at secondary school level. Teachers' awareness should also be raised regarding the need to assess their learners on a regular basis in order to make the necessary adjustments in their instructional procedures. Finally, teachers should be encouraged to vary the forms of assessment administered to their learners in order to cater for different learning styles in their classrooms.
Hypoxia-inducible factor-mediated induction of WISP-2 contributes to attenuated progression of breast cancer Hypoxia and the hypoxia-inducible factor (HIF) signaling pathway trigger the expression of several genes involved in cancer progression and resistance to therapy. Transcriptionally active HIF-1 and HIF-2 regulate overlapping sets of target genes, and only a few HIF-2-specific target genes are known so far. Here we investigated oxygen-regulated expression of Wnt-1 induced signaling protein 2 (WISP-2), which has been reported to attenuate the progression of breast cancer. WISP-2 was hypoxically induced in low-invasive luminal-like breast cancer cell lines at both the messenger RNA and protein levels, mainly in a HIF-2α-dependent manner. HIF-2-driven regulation of the WISP2 promoter in breast cancer cells is almost entirely mediated by two phylogenetically and only partially conserved functional hypoxia response elements located in a microsatellite region upstream of the transcriptional start site. High WISP-2 tumor levels were associated with increased HIF-2α, decreased tumor macrophage density, and a better prognosis. Silencing WISP-2 increased anchorage-independent colony formation and recovery from scratches in confluent cell layers of normally low-invasive MCF-7 cancer cells. Interestingly, these changes in cancer cell aggressiveness could be phenocopied by HIF-2α silencing, suggesting that direct HIF-2-mediated transcriptional induction of WISP-2 gene expression might at least partially explain the association of high HIF-2α tumor levels with prolonged overall survival of patients with breast cancer. Introduction Temporally and spatially variable tissue hypoxia is characteristic of solid tumors.1 Hypoxia-inducible factors (HIFs) allow cancer cells to adapt to microenvironmental tissue hypoxia, affecting all aspects of tumor progression, including metabolism, proliferation, inflammation, angiogenesis, and metastasis.2-5 Importantly, HIFs are also involved in resistance to cancer therapy, and overall survival correlates with HIFα levels in a cancer type-specific manner.6-8 Transcriptionally active HIFs are heterodimers usually composed of a constitutively expressed β subunit and either a HIF-1α or a HIF-2α subunit, the stability and activity of which are regulated by oxygen-dependent protein hydroxylation.9,10 Despite their high structural similarity and identical DNA sequence recognition, several studies have identified specific roles for HIF-1 and HIF-2 in tumorigenesis.11-13 We and others previously showed in a variety of cancer cell lines that HIF-1α protein levels decrease under prolonged hypoxia while HIF-2α levels increase, suggesting HIFα isoform-specific kinetics of target gene expression.8,14,15 In contrast with the many known HIF-1 and HIF-1/HIF-2 target genes,8,14,16 only a few genes have been reported to be regulated exclusively by HIF-2, including erythropoietin, ephrin A1, VE-cadherin, protein tyrosine phosphatase receptor-type Z polypeptide 1, amphiregulin, and Wnt-1 induced signaling protein 2 (WISP-2).8,14,17-21 WISP-2 is a secreted protein member of the connective tissue growth factor/cysteine-rich 61/nephroblastoma overexpressed (CCN) family, and is also known as CCN5.22,23 WISP-2 has been detected in adult skeletal muscle, colon, and ovary, and in the fetal lung, as well as in the stroma of breast tumors derived from Wnt-1 transgenic animals.24 
WISP-2 expression has been shown, in most studies, to correlate inversely with the aggressiveness of breast, pancreatic, and colon cancer, suggesting tumor suppressor-like activity.25,26 WISP-2 shows transiently elevated levels during the progression of breast cancer; while it is almost undetectable in normal human mammary epithelial cells, it is highly expressed in estrogen receptor-positive noninvasive breast cancer cell lines (including MCF-7, BT-474, ZR-75-1, and T-47D), and is again undetectable in highly invasive estrogen receptor-negative cells (including MDA-MB-231, MDA-MB-468, BT-20, and DU-4475).27-30 Loss of WISP-2 in MCF-7 cells induced estrogen-independent growth and promoted epithelial-to-mesenchymal transition, consistent with a more invasive phenotype, whereas forced WISP-2 expression in MDA-MB-231 cells reduced proliferation and invasiveness.30 WISP-2 expression is induced by estrogens as well as by epidermal growth factor and insulin-like growth factor in estrogen receptor-positive cells, and WISP-2 is necessary for estrogen-induced and insulin-like growth factor-induced proliferation.31-35 A functional estrogen response element has been identified in the WISP2 promoter which is required for inducibility of the WISP2 gene by estrogen.29 Estrogen receptor-α recruits the histone acetyl transferase cAMP response element (CRE) binding (CREB) protein as well as the cyclin-dependent kinase inhibitor p21 WAF1/CIP1 to the WISP2 promoter, suggesting cooperative control of WISP2 gene expression.29,36 Hypoxia has been identified as another stimulus of WISP2 gene expression, mediated specifically by HIF-2α in cooperation with the ETS oncogene family member ETS-like gene 1 (ELK-1) in MCF-7 cells.14,37 We previously demonstrated that the WISP2 promoter is induced specifically by HIF-2α in MCF-7 cells.8 Here, we identified the hypoxia response elements (HREs) and characterized a microsatellite region responsible for HIF-2α-specific induction of the WISP2 promoter. Furthermore, we assessed the impact of hypoxia and HIF-2α on WISP-2-mediated cell proliferation, clonogenic growth, and motility. Messenger RNA (mRNA) and protein detection Total cellular RNA was extracted as previously described.39 Total RNA (2 µg) was reverse transcribed using AffinityScript reverse transcriptase (Agilent, Santa Clara, CA, USA) and complementary DNA (cDNA) levels were estimated by quantitative polymerase chain reaction (PCR) using a SYBR® Green quantitative PCR reagent kit (Sigma-Aldrich) in an MX3000P light cycler (Agilent). Transcript levels were calculated by comparison with a calibrated standard and expressed as ratios relative to ribosomal protein L28 mRNA levels. Immunoblots were performed as previously described.40 Antibodies against the following proteins were used: WISP-2 (Abcam, Cambridge, UK), HIF-1α (BD Transduction Laboratories, Allschwil, Switzerland), HIF-2α (Novus Biologicals, Littleton, CO, USA), Sp1 (Santa Cruz Biotechnology, Dallas, TX, USA), and β-actin (Sigma-Aldrich). Breast cancer tissue microarray analysis has been described previously.8 Cellular proliferation, colony formation and motility assays To determine cell proliferation and viability, 10^5 cells per well were seeded into six-well plates, allowed to adhere overnight, exposed to normoxia or hypoxia for 0 to 72 hours, detached by trypsin/EDTA, and counted using a Vi-cell™ XR cell viability analyzer (Beckman-Coulter, Brea, CA, USA). 
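The relative-quantification step described above (transcript levels read from a calibrated standard curve and expressed as ratios to L28) can be illustrated with a minimal sketch. This is not the authors' actual analysis pipeline; the Ct values and standard-curve parameters below are invented purely for illustration.

import numpy as np

def quantity_from_standard_curve(ct, slope, intercept):
    # A qPCR standard curve is fitted as Ct = slope * log10(quantity) + intercept;
    # inverting it converts measured Ct values into arbitrary copy numbers.
    ct = np.asarray(ct, dtype=float)
    return 10 ** ((ct - intercept) / slope)

# Invented example values; a slope near -3.32 corresponds to ~100% PCR efficiency.
wisp2 = quantity_from_standard_curve([24.1, 22.3], slope=-3.32, intercept=38.0)  # normoxia, hypoxia
l28 = quantity_from_standard_curve([18.5, 18.4], slope=-3.35, intercept=36.5)    # reference transcript

ratios = wisp2 / l28                   # WISP-2 expressed relative to L28 mRNA
fold_induction = ratios[1] / ratios[0]  # hypoxic vs normoxic induction factor
print(ratios, fold_induction)

Normalizing to a reference transcript such as L28 corrects for differences in input RNA and reverse-transcription efficiency between samples, which is why induction factors rather than raw quantities are reported.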
For low cell density colony-forming assays, 2 × 10^3 cells per well were plated into six-well plates, allowed to adhere overnight, and exposed to normoxia or hypoxia for 10 days, with the medium replaced every 3 days. The colonies were fixed with methanol, stained with 0.5% crystal violet, and counted. For anchorage-independent colony formation assays, 10^4 cells were resuspended in 2 mL of 0.4% low melting agarose (Sigma-Aldrich) in Dulbecco's Modified Eagle's Medium, poured on top of a 2% low melting agarose layer in six-well plates, and allowed to settle overnight. Following exposure to normoxia or hypoxia for 14 days, the soft agar was washed with phosphate-buffered saline, and the colonies were stained with 0.005% crystal violet in methanol for one hour at room temperature and counted. For the scratch assay, cells were allowed to grow to 100% confluency in 12-well plates. Following crosswise scratching with a 200 µL pipette tip, the cells were exposed to normoxia or hypoxia in fetal calf serum-free Dulbecco's Modified Eagle's Medium for 24 hours. The cell-free area was measured and converted to percent recovery. Statistical analysis If not indicated otherwise, unpaired Student's t-tests were applied. Differences between two values at the P < 0.05 level were considered to be statistically significant. Results Hypoxic induction of WISP-2 was observed in the low-invasive breast cancer cell lines tested (Figure 1A). Although WISP-2 immunoblotting is notoriously difficult,42 probably because the WISP-2 protein is partially secreted, we managed to detect WISP-2 protein in MCF-7 cells, where it was found to be upregulated following hypoxic stimulation (Figure 1B), as previously published for the WISP-2 mRNA levels.8 Immunoblot analyses revealed that all six cell lines expressed both HIF-1α and HIF-2α in a hypoxia-inducible manner (Figure 1C). The variable relative molecular weight of HIF-1α, between approximately 98 kDa and 120 kDa, as observed in the different cell lines, is due to varying degrees of phosphorylation, as reported previously.43 HIF-1α and HIF-2α were stably knocked down by viral transduction of shRNA constructs. Exogenous shRNA expression efficiently reduced HIFα levels (Figure 1C). In the absence of HIF-1α, increased HIF-2α protein levels could be observed in most cell lines, a currently unexplained phenomenon that we and others have described previously.8,44 As in MCF-7 cells,8 hypoxic WISP-2 mRNA induction was mainly HIF-2α-dependent in most of the other cell lines (Figure 1D). In MCF-7 cells, HIF-2α shRNA also prevented the hypoxic induction of WISP-2 but not of prolyl-4-hydroxylase domain 2 protein levels, the latter being a HIF-1α target (Figure 1B). However, in BT-474 cells, hypoxic WISP-2 mRNA induction was HIF-1α-dependent but not HIF-2α-dependent, and in ZR-75-1 cells hypoxic WISP-2 mRNA induction was attenuated in the absence of both HIF-1α and HIF-2α (Figure 1D). [Figure 1 legend (partial): (B) Cells were cultured in normoxic conditions or exposed to hypoxia for 24 hours and analyzed by immunoblotting using antibodies against WISP-2, the HIF target prolyl-4-hydroxylase domain 2 (PHD2), HIF-2α, or the constitutively expressed control protein β-actin. (C) Cells were stably transfected with either control shRNA or shRNAs targeting HIF-1α (shH1a) or HIF-2α (shH2a) and analyzed by immunoblotting of nuclear extracts using antibodies against HIF-1α, HIF-2α, or the constitutively expressed control transcription factor Sp1. (D) Hypoxic WISP-2 mRNA induction was determined in the indicated cell lines stably transfected with either control or shRNA constructs as in (C). WISP-2 mRNA levels were quantified by reverse transcription quantitative polymerase chain reaction and normalized to the mRNA levels of the ribosomal protein L28. Shown are mean values ± the standard errors of the mean of three independent experiments. For statistical evaluation of the hypoxically exposed cells, the effects of HIF-1α or HIF-2α silencing were compared with the control shRNA-transfected cells. *P < 0.05; **P < 0.01; ***P < 0.001. Abbreviations: ctrl, control; HIF, hypoxia-inducible factor; WISP-2, Wnt-1 induced signaling protein 2; mRNA, messenger RNA.] These data demonstrate that WISP-2 is regulated by hypoxia in low-invasive luminal-like breast cancer cell lines and that cancer-specific HIFα isoforms confer hypoxic WISP-2 induction. Potential enhancer elements involved in HIF-inducible WISP2 promoter activity To further investigate the HIFα isoform-specific effect on WISP-2 transcription, we tested various WISP2 promoter truncations driving firefly luciferase reporter gene expression in MCF-7 cells. All constructs ranging from −1929/+16 to −422/+16 relative to the transcriptional start site of the human WISP2 gene displayed similar promoter activities. In contrast with the robust hypoxic WISP-2 mRNA induction (Figure 1C), only weak responses to hypoxia could be observed with these reporter gene constructs (Figure 2A). Whereas moderate increases in promoter activity, mainly under hypoxic conditions, were observed after HIF-1α overexpression, strong increases in normoxic as well as hypoxic promoter activities followed HIF-2α overexpression, suggesting that HREs likely are present on these constructs but are not sufficient to confer full hypoxic inducibility. Additional truncations of the −422/+16 construct resulted in a complete loss of promoter activity in the absence of the region close to the TATA-like box (construct −422/−75), demonstrating that the first 75 base pairs are essential for basal promoter activity. Normal basal activity but unresponsiveness to HIFα overexpression was observed for construct −83/+16 (Figure 2B). Three single putative HREs (HRE1, HRE2, and HRE3) and a double HRE (HRE4) were identified in the WISP2 promoter region (Figure 2C). Of note, the three single HREs were all located within microsatellite (MS) regions. HRE1 was found in a CA repeat region denominated MS I, HRE2 was located at the transition between a GT and a GC repeat, collectively named MS II, and HRE3 was identified within the GC repeat of MS II (Figure 2C). Furthermore, a phylogenetic footprint analysis of the promoter region revealed a strong sequence conservation around the TATA-like box, comprising a potential CRE, a binding site for ELK-1, and two WNT response elements (WREs, Figure 2C). However, when tested in MCF-7 shH2a cells with suppressed endogenous HIF-2α levels, WRE1 mutation affected neither basal promoter activity nor HIFα responsiveness. Mutation of the CRE site strongly decreased promoter activity but partially retained HIF-2α-mediated induction. Similarly, the ELK and WRE2 mutations reduced overall promoter activity and slightly affected HIF-2α responsiveness (Figure 2D). In conclusion, these conserved elements are mainly required for basal promoter activity but do not seem to confer HIF-2α-dependent induction of the WISP2 promoter, although we cannot exclude a partial cooperation between these cis-acting elements and the HREs. 
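Because the putative HREs were identified by inspecting the promoter sequence, a simple motif scan can illustrate the idea. The sketch below searches both strands of a promoter sequence for the well-established RCGTG core consensus bound by HIF heterodimers; the example sequence is invented and is not the actual WISP2 promoter.

import re

HRE_CORE = re.compile(r"[AG]CGTG")  # RCGTG, the HIF-binding HRE core consensus

def revcomp(seq):
    # Reverse complement of a DNA string restricted to A/C/G/T.
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def scan_hres(promoter):
    # Candidate HRE cores on the plus strand...
    hits = [("+", m.start(), m.group()) for m in HRE_CORE.finditer(promoter)]
    # ...and on the minus strand, with positions mapped back to plus-strand coordinates.
    rc = revcomp(promoter)
    n = len(promoter)
    hits += [("-", n - m.end(), m.group()) for m in HRE_CORE.finditer(rc)]
    return sorted(hits, key=lambda h: h[1])

# Invented toy sequence: a CA repeat followed by GT/GC repeats, loosely
# mimicking the MS I / MS II arrangement described in the text.
promoter = "CACACACACACGTGCACAGTGTGTGTGCGCGCGCACGTGGC"
for strand, pos, core in scan_hres(promoter):
    print(strand, pos, core)

A scan of this kind only nominates candidates; as the reporter-gene experiments in this study show, each putative element still has to be tested functionally, since most RCGTG occurrences in a genome are not bound by HIF.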
Two HREs within an MS repeat regulate HIF-dependent WISP-2 transcription To identify the HRE(s) responsible for HIF-mediated WISP-2 regulation, MS I (containing HRE1) was removed by a −252/+16 promoter truncation (∆MS I), and MS II (containing HRE2 and HRE3) was removed by a promoter deletion (∆MS II), as indicated in Figure 2C. Overexpression of both HIFα isoforms in stable HIF-2α knockdown MCF-7 cells was used to drive reporter gene expression. HIFα-dependent induction of the ∆MS I construct in stable HIF-2α knockdown MCF-7 cells was indistinguishable from the −411/+16 construct, indicating that MS I does not contain a functional HRE. In contrast, HIFα-dependent induction of reporter gene activity was strongly reduced when MS II was deleted (Figure 3A). Deletion of MS II resulted in basal reporter gene expression comparable with minimal WISP2 promoter (−83/+16) activity (Figure 3B). These experiments were repeated in wild-type MCF-7 cells with a similar outcome (data not shown). Furthermore, the −83/+16 minimal promoter construct was extended with oligonucleotides containing either wild-type (−112/+16) or mutated (−112/+16mut) HRE2 and HRE3, as indicated in Figure 2C. While no differences in activity were observed between the −112/+16mut and the −83/+16 constructs, a partial but significant reconstitution of HIF-2α-mediated WISP2 promoter induction was observed with the −112/+16 construct under hypoxic conditions (Figure 3B). In conclusion, these results demonstrate that HRE2 and/or HRE3 within MS II mainly confer HIF responsiveness to the WISP2 promoter. Because MS instability is a hallmark of cancer,45 we tested whether differences in the length of the CA repeats in MS II might influence HIFα-inducible promoter activity. Therefore, genomic DNA from T-47D cells was amplified and cloned. CA repeat length variants CA12, CA13, CA17, and CA18 were obtained and compared with the parental −422/+16 construct derived from MCF-7 cells (Figure 3C). However, no change in reporter gene activity could be observed, suggesting that MS instability does not affect HIFα-mediated WISP2 promoter activity. Unexpectedly, sequencing of the WISP2 upstream regulatory region revealed the presence of HRE2 in all breast cancer cell lines analyzed (Figure 1), as well as in Hep3B and HepG2 hepatoma, SK-N-MC neuroepithelioma, and HeLa cervical carcinoma cells, but not in three published sequences (GRCh37.p2, HuRef, and Hs_Celera). Only UT-7 megakaryoblastic leukemia cells lacked HRE2 and showed the same sequence as the database entries (data not shown). Therefore, the −252/+16 region was amplified from UT-7 cells (−252/+16_UT7) and compared with the promoter constructs derived from MCF-7 cells. Only a small reduction in the response to HIFα overexpression was observed with the −252/+16_UT7 construct (Figure 3D). Also, mutation of HRE3 (∆HRE3) resulted in a modest reduction in reporter gene activity, and a double mutation of HRE2 and HRE3 (∆HRE2/∆HRE3) further abrogated reporter gene activity, comparable with that observed after deletion of MS II (Figure 3E). Taken together, these results indicate that the two HREs within MS II mediate, at least partially, the HIFα-dependent induction of the WISP2 promoter. WISP-2 negatively correlates with tumor infiltration by macrophages We previously reported that high WISP-2 levels correlate positively with high HIF-2α levels and prolonged overall survival in patients with breast cancer.8 
Because tumor macrophage infiltration, associated with the hypoxic tumor microenvironment and high HIFα levels, represents not only an important step during the progression of breast cancer but also an important prognostic marker,46,47 we analyzed whether WISP-2 expression levels are related to tumor-associated macrophage counts. Therefore, over 300 breast cancer samples were immunostained and scored for the macrophage markers CD68 and CD163 and the pan-leukocyte marker CD45 (Figure 4A). Interestingly, a negative correlation between WISP-2 and tumor-associated macrophage counts was found (Figure 4B). As expected, the overall survival of these patients was negatively associated with macrophage infiltration (Figure 4C). These results suggest that HIF-2α-mediated WISP-2 expression is a marker for (and may even be causally involved in) a breast cancer progression stage with low cancer cell proliferation/invasiveness as well as low macrophage infiltration, both contributing to an improved prognosis. HIF modulates the WISP-2-suppressed motility of MCF-7 cells Silencing of WISP-2 in noninvasive MCF-7 cells has been reported to enhance motility and modulate the expression of genes involved in cancer invasiveness.30,48,49 To investigate the functional consequences of HIF-mediated WISP-2 regulation, we stably transfected MCF-7 cells with WISP-2 shRNA, resulting in a robust knockdown of WISP-2 mRNA using two independent shRNA constructs (shW#1 and shW#2) under normoxic as well as hypoxic conditions (Figure 5A). Proliferation of both WISP-2 knockdown cell lines was slightly decreased (Figure 5B). As determined by automated trypan blue exclusion and video microscopy analysis, no difference in gross cell morphology or viability could be observed (data not shown). Low cell density colony formation was somewhat attenuated by knockdown of both HIF-2α and WISP-2 (Figure 5C). In contrast, anchorage-independent colony formation was increased, especially in normoxic cells (Figure 5D). Similarly, recovery from scratches in confluent cell layers was significantly increased by knockdown of both HIF-2α and WISP-2 (Figure 5E). Probably owing to the remaining weak WISP-2 induction (see Figure 5A), under hypoxic conditions slightly impaired anchorage-independent colony formation (Figure 5D) as well as scratch recovery of MCF-7 cells was observed (Figure 5E), although the values were still higher than in the hypoxic control cells. In summary, these results are consistent with a role of HIF-2 in WISP-2-mediated suppression of MCF-7 anchorage-independent growth and cell motility, which are two hallmarks of cancer progression. Discussion In this study, we could corroborate the previously reported hypoxic induction of WISP-2,8 which was HIF-2-dependent in most but not all low-invasive breast cancer cell lines tested, suggesting that currently unknown cell type-specific cofactors determine the HIFα isoform responsible for hypoxic WISP-2 induction, rather than intrinsic gene selectivity of the HIFα isoforms themselves. Previous studies have suggested a cooperation of HIF-2α, but not HIF-1α, with ETS factors resulting in target selectivity,14,50 but whether a different composition of ETS factors explains the HIF-1α-specific WISP-2 expression in BT-474 cells remains to be explored. Although we observed lower basal levels of WISP2 promoter activity following mutation of the conserved CRE, ELK-1, and WRE sites, HIF-2α-mediated WISP-2 regulation remained unaffected, suggesting that none of these factors confers HIFα isoform selectivity. 
Two functional HREs were identified which are essential for HIF-2α-specific induction of WISP2 promoter activity in MCF-7 cells, but even promoter constructs extending up to 1,919 base pairs upstream of the transcriptional start site did not fully recapitulate the hypoxic induction factors determined at the mRNA level. Of note, in a genome-wide chromatin immunoprecipitation study, both HIFα isoforms were reported to bind to a region within the first intron of WISP-2,51 suggesting that these elements might contribute to endogenous regulation of WISP-2. However, at least in MCF-7 cells, we could not detect any further increase in hypoxic WISP-2 transcription by this region in combination with the upstream regulatory elements (data not shown). The two functional HREs are located within the highly polymorphic MS II region of the WISP2 promoter. Such unstable polymorphic dinucleotide repeats are known to play an important role in tumor progression. MS instability seems to be mostly due to the loss of a functional mismatch repair machinery, and is implicated in prognosis and therapy response because it alters the expression levels of the affected genes.45,52 However, we could not find any role for MS II length in regulating reporter constructs driven by the WISP2 promoter. By immunohistochemical analysis of breast cancer samples, we recently demonstrated that HIF-2α and WISP-2 levels correlate with a more differentiated tumor cell type and, consistently, with a better prognosis.8 These results are in line with the findings presented herein showing that WISP-2 negatively correlates with tumor macrophage invasion, which provides an additional marker for a better tumor prognosis. Silencing of WISP-2 elevated two parameters of cancer cell aggressiveness, i.e., anchorage-independent colony formation and recovery from scratches in confluent cell layers. These effects could not simply be explained by changes in proliferation rates and anchorage-dependent colony formation, which were actually slightly decreased rather than increased. Interestingly, the increased anchorage-independent colony formation and scratch recovery following WISP-2 knockdown could be phenocopied by HIF-2α silencing. While we cannot exclude that additional signaling pathways are recruited by the hypoxic tumor microenvironment, our data strongly suggest that HIF-2-mediated WISP-2 induction contributes to a less aggressive tumor type. Conclusion Taken together, our data suggest that the previously reported association between high HIF-2α levels and an increased overall survival rate in patients with breast cancer could be explained at least partially by HIF-2α-mediated direct induction of WISP-2, maintaining a less aggressive breast cancer phenotype.
Exercise capacity and body mass index - important predictors of change in resting heart rate Background Resting heart rate (RHR) is an obtainable, inexpensive, non-invasive test, readily available on any medical document. RHR has been established as a risk factor for cardiovascular morbidity, is related to other cardiovascular risk factors, and may possibly predict them. Change in RHR over time (ΔRHR) has been found to be a potential predictor of mortality. Methods In this prospective study, RHR and ΔRHR were evaluated at baseline and over a period of 2.9 years during routine check-ups in 6683 subjects without known cardiovascular disease from the TAMCIS (Tel-Aviv Medical Center Inflammation Survey). Multiple linear regression analysis with three models was used to examine ΔRHR. The first model accounted for possible confounders by adjusting for age, sex and body mass index (BMI). The second model added smoking status, baseline RHR, diastolic blood pressure (BP), dyslipidemia, high-density lipoprotein (HDL) and metabolic equivalents of task (MET), and in the last model the change in MET and the change in BMI were added. Results RHR decreased with age, even after adjustment for sex, BMI and MET. The mean change in RHR was −1.1 beats/min between two consecutive visits, in both men and women. This ΔRHR was strongly correlated with baseline RHR, age, initial MET, and the changes occurring in MET and BMI (P < 0.001). Conclusions Our results highlight the need to examine individual patients' ΔRHR, reinforcing that a positive ΔRHR is an indicator of poor adherence to a healthy lifestyle. Background Resting heart rate Resting heart rate (RHR) is an obtainable, inexpensive, non-invasive test. It is quick, painless, and requires no additional equipment. It is readily available on any chart or medical document. RHR has been established as a risk factor for cardiovascular morbidity, is related to other cardiovascular risk factors, and may possibly predict them [1-4]. RHR is associated with metabolic disorders, and is elevated in individuals with increased glucose levels, triglycerides, cholesterol levels and body mass index (BMI). RHR is also elevated in individuals who fulfill the standard criteria for metabolic syndrome (MetS), suggesting possible shared pathophysiological mechanisms for both RHR and the MetS [1-3]. RHR appears to be an independent risk factor for heart failure [5,6]. Some studies have shown RHR to be predictive of all-cause mortality and some link RHR to malignancies [7-9]. Many factors affect RHR. Some are non-modifiable determinants such as age, sex, height and race. Others are physiological factors, such as the influence of the circadian cycle, posture and blood pressure, or lifestyle factors like smoking, alcohol, and mental stress. Physical fitness also affects RHR, and may be expressed by the metabolic equivalent of the task (MET), which is commonly used to express the oxygen requirement of the work rate during a stress test and reflects the level of fitness [1,10-15]. Change in resting heart rate over time Some studies have examined the effect of change in RHR over time by measuring RHR at baseline and after a period of time (ΔRHR). Most of these studies, including large recent ones, examined the effect of change in heart rate over time on morbidity and mortality at the population level, and it has been found to be a potential predictor of both. Floyd et al. 
examined 1991 older subjects without known cardiovascular disease and found that 262 subjects had an incident MI event (13%) and 1326 died (67%) during 12 years of median follow-up, concluding that an increase in mean RHR and variation in RHR over a period of several years represent a potential predictor of long-term mortality among older persons free of cardiovascular disease [16]. Jiang et al. performed a large cross-sectional and longitudinal study which found that RHR is an independent risk factor for existing metabolic syndrome (MetS) and a predictor of the future incidence of MetS, supporting the results of previous studies [3,13,17]. Fewer studies have examined the characteristics and risk factors of individual patients in relation to the change in their RHR (ΔRHR) over the years. The HARVEST study found ΔRHR to be an independent predictor of the development of hypertension and of weight gain in young persons screened for stage 1 hypertension [18,19]. Jouven et al. examined middle-aged Frenchmen employed by the Paris Civil Service between 1967 and 1972 and found ΔRHR to be related to age, tobacco consumption, current sport activity, diabetes mellitus, and blood pressure [20]. In light of evidence that elevated resting heart rate is a risk factor for MetS, heart failure, cardiovascular morbidity and possibly overall mortality [1-9], the goal of our study is to further examine change in RHR over time in individuals and to determine the factors which affect this change in apparently healthy individuals. Study design The study was reviewed and approved by the Tel-Aviv Medical Center institutional Helsinki Committee (chairpersons: Marcel Topilsky and Shmuel Kivity; numbers: 0491-17 and 02-049; January 2002). The data used in this study were collected as part of the TAMCIS (Tel Aviv Medical Center Inflammation Survey). Study participants (n = 19,385) were apparently healthy, employed individuals attending a center for periodic health examinations for a routine health examination during the years 2002-2014, who gave their written informed consent for participation according to the instructions of the local ethics committee. The routine annual check-ups included a physician's interview and examination, blood and urine tests, and an exercise stress test. Resting heart rate was measured following at least 10 min of rest and before the exercise test, using electrocardiography (ECG; Quinton® Q-Stress, Cardiac Science, Bothell, WA, USA). MET was evaluated using the Quinton Q-Stress (Cardiac Science). Participants were recruited individually by an interviewer while waiting their turn for the clinical examination. They represent 91.6% of the examinees during this period. We systematically checked for non-response bias and found that non-participants did not differ from participants on any of the socio-demographic or biomedical variables (see supplementary material for details). Dyslipidemia was defined as serum triglycerides (TG) > 150 mg/dl or use of lipid-lowering medications, and low HDL as HDL < 40 mg/dl for men and < 50 mg/dl for women or use of lipid-lowering medications. Of this baseline cohort, 7735 (42%) subjects arrived for a second routine check-up by May 2014. We further excluded any subjects who changed their smoking status between check-ups (327 subjects who stopped smoking and 233 subjects who started smoking) and the top and bottom 0.3% of ΔRHR. 
Therefore, our cohort for heart rate change between visits included 6683 subjects (4569 men, 68.4%, and 2114 women, 31.6%). Statistical analysis All data were summarized and displayed as mean ± standard deviation (SD) for the continuous variables and as the number of patients plus the percentage in each group for categorical variables. For all categorical variables, the Chi-square statistic was used to assess the statistical significance of differences between sexes. All of the above analyses were considered significant at p < 0.05 (two-tailed). Multiple linear regression analyses were used to test heart rate increase over time (ΔRHR = RHR at follow-up visit minus RHR at baseline). Three models were examined. The first model accounted for possible confounders by adjusting for age, sex and BMI. In the second model we added smoking status, baseline RHR, diastolic blood pressure (BP), dyslipidemia, high-density lipoprotein (HDL-c) and MET (metabolic equivalents), and in the last model the change in MET and the change in BMI were entered into the model. The general linear model for heart rate change (Fig. 3) included age, sex, basal heart rate, change in MET by quartiles, and decrease or increase in BMI. Results The mean change in RHR was −1.1 beats/min between two consecutive visits for a routine health examination (mean 2.9 ± 1.7 years between visits). As expected from the literature, women presented higher RHR than men (73.0 vs. 69.8 beats/min, p < 0.001), lower BMI and blood pressure, lower MET and an improved lipid profile (all p < 0.001), but ΔRHR was similar between the sexes (p = 0.925). Population characteristics are presented in Table 1, and the distribution of ΔRHR is shown in Fig. 1 for men and women, with x-axis reference lines dividing the cohort into quartiles of ΔRHR, so that patients in the first two quartiles (left side of the distribution) had lower RHR at the follow-up visit compared to the baseline visit (their RHR decreased with age), while the RHR of those on the right side of the distribution was higher at the follow-up measurement than at the baseline measurement (it increased over time). RHR was very weakly correlated with age and BMI (r = −0.06, r = 0.07, p < 0.001) but moderately correlated with MET (r = −0.39, p < 0.001). Controlling for MET revealed a stronger negative correlation between RHR and age (r = −0.205, p < 0.001). These results remained significant when we split the analysis by sex. Next, we adjusted RHR for sex, BMI, and MET and plotted the residuals against age (r = −0.221, p < 0.001), meaning that even after controlling for these 3 confounders, RHR decreased with age (Fig. 2). Determinants of increase in resting heart rate In order to search for possible determinants of an increase in annual RHR, we divided subjects into quartiles of ΔRHR and compared their characteristics (see Fig. 1 for the distribution). Subjects with increased ΔRHR (an increase between their baseline measurement and follow-up measurement) presented with lower baseline RHR and diastolic blood pressure, a higher frequency of smokers, and elevated MET (p for trend < 0.05, Table 2). Follow-up time differed significantly between quartile groups, but the group with the lowest ΔRHR had a similar follow-up time compared to the highest ΔRHR group (post hoc p value = 0.384). Of special interest was the group of patients with diabetes mellitus (DM); as expected, we found that the DM patients had higher RHR compared to DM-free patients (73.5 ± 12.5 vs. 70.3 ± 11.8, p < 0.001). 
One hundred thirty-three patients became diabetic during the follow-up period. This group already had relatively high RHR at the baseline visit (p < 0.001), and their ΔRHR remained similar to that of the DM-free patients at both visits (p = 0.387). To assess which variables best explain the variability in heart rate increase, we performed linear regressions (Table 3). The analysis confirmed that age, female sex, baseline RHR, HDL-c and MET have a beneficial effect on heart rate change (decreased ΔRHR), while increasing body weight (expressed as ΔBMI) and the presence of dyslipidemia have an adverse effect (increased RHR). Figure 3 presents the inverse trends of MET and BMI with ΔRHR, demonstrating that individuals who increased their exercise capacity by the follow-up visit (x = 3 and 4) showed decreased heart rate compared to those with decreased exercise capacity (x = 1), who showed increased heart rate. Reduction of BMI also resulted in decreased heart rate, and vice versa. Discussion In this cohort study of healthy, employed adults without known cardiovascular disease, we found that RHR decreased with age, even after adjustment for sex, BMI and MET. Since RHR values tend to decrease with aging (mean change in RHR was −1.1 beats/min per 2.9 years of follow-up), RHR increases are the exception, and we chose to focus on the correlates of this change, emphasizing the importance of recognizing increased annual heart rate as another risk factor for cardiovascular disease and mortality. We showed that age, female sex, baseline RHR, HDL-c and MET have a beneficial effect on heart rate change (decreased RHR), while increasing body weight (expressed as ΔBMI) and the presence of dyslipidemia have an adverse effect (increased RHR). Elevated RHR has been established as a risk factor for cardiovascular morbidity and mortality. Some studies have shown RHR to be predictive of all-cause mortality, and some link RHR to malignancies [7][8][9]. Our study found that RHR decreased with age. The effect of age on RHR is less well-established than that of other non-modifiable determinants [23][24][25]. In some studies there appears to be a decrease in RHR with age [12,[26][27][28][29]. Some show no change in RHR, or a decrease followed by a plateau [12,[29][30][31][32]. Others found a decrease in RHR in women but not in men [8,30]. Palatini et al. found an increase in RHR with age in patients with isolated systolic hypertension [33]. We found annually increased heart rate to be correlated with a decrease in MET. The metabolic equivalent of the task (MET) is commonly used to express the oxygen requirement of the work rate during a stress test and reflects the level of fitness. Decreased MET, therefore, implies less efficient heart function and worse cardiovascular fitness. Generally, cardiac output and heart rate decrease with age [34,35]; therefore, individuals with an increasing trend of RHR should be encouraged to choose and maintain a healthy lifestyle in order to decrease cardiovascular risk. Nevertheless, the clinical significance of individual ΔRHR remains to be examined in larger-scale cohorts. Furthermore, our findings highlight the importance of fitness and heart rate tracking. The development of personal digital devices has made heart rate information more accessible than ever, providing an opportunity to use delta heart rate as an early marker for disease prevention. However, there are currently no general recommendations to guide the general public in this area [36].
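As a concrete illustration of the analysis pipeline described above, the following is a minimal sketch (not the authors' code) of how ΔRHR could be computed from two visits and the three nested linear models fitted with standard Python tooling; the file name and all column names are hypothetical placeholders for the TAMCIS variables.

```python
# Minimal sketch (not the authors' code): compute delta-RHR between two
# visits and fit the three nested linear models described in the text.
# File and column names are hypothetical placeholders for TAMCIS variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tamcis_visits.csv")  # hypothetical per-subject file
df["delta_rhr"] = df["rhr_followup"] - df["rhr_baseline"]

# Trim the top and bottom 0.3% of delta-RHR, mirroring the exclusion criteria.
lo, hi = df["delta_rhr"].quantile([0.003, 0.997])
df = df[df["delta_rhr"].between(lo, hi)]

# Model 1: age, sex, BMI. Model 2 adds smoking status, baseline RHR,
# diastolic BP, dyslipidemia, HDL-c and MET. Model 3 adds the changes
# in MET and in BMI between the two visits.
m1 = smf.ols("delta_rhr ~ age + sex + bmi", data=df).fit()
m2 = smf.ols("delta_rhr ~ age + sex + bmi + smoker + rhr_baseline"
             " + dbp + dyslipidemia + hdl + met", data=df).fit()
m3 = smf.ols("delta_rhr ~ age + sex + bmi + smoker + rhr_baseline"
             " + dbp + dyslipidemia + hdl + met + delta_met + delta_bmi",
             data=df).fit()
print(m3.params)  # sign of each coefficient: beneficial vs. adverse effect
```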
Heart rate is controlled by the autonomic nervous system, with sympathetic stimulation increasing the rate and parasympathetic stimulation decreasing it. Athletes have lower RHR compared with the general population, and training results in reduced RHR. This is explained by changes in the sympathovagal balance of the sinus node. In response to regular exercise, changes occur in the cardiovascular system, such as increased contractility due to cardiac muscle fiber hypertrophy and increased muscle mass of the ventricles. Oxygen and nutrient delivery to muscles is improved by enhanced capillary capacity for blood flow, which causes a decrease in total peripheral resistance [37]. The importance of fitness to health is well known and studied, and there is an inverse relationship between fitness and mortality. Kokkinos et al. examined MET in 18,102 men who were followed for a median of 10.8 years and found that for each 1-MET increase in exercise capacity, mortality risk was reduced by 12% [38]. We also found annually increased heart rate to be correlated with an increase in BMI. The HARVEST study found ΔRHR to be an independent predictor of the development of weight gain in young persons screened for stage 1 hypertension, and several studies link increased RHR and metabolic syndrome. High BMI is a well-known cardiovascular risk factor, even in metabolically healthy overweight people [39]. Interestingly, the increase in BMI seen in our study population remained within the normal range of BMI, and our cohort was not overweight or obese. (Fig. 3: plot of the estimated effect on delta heart rate of both MET change (x-axis, 1 = lowest, 4 = highest MET change) and change of BMI (color coded, decreased = black, increased = red); the y-axis indicates the estimated effect on delta heart rate, based on a general linear regression model adjusted for age, sex and basal heart rate.) Therefore, the increase in RHR was affected by an increase in BMI even in individuals with normal BMI. This study analyzed the correlates of annual ΔRHR; it does, however, have limitations. While the initial study population included 18,083 subjects after applying the exclusion criteria (see "Methods"), at the second time point we were able to follow up on only 7735 (42%) subjects, who actually arrived at our center for annual routine checkups. We systematically checked for non-response bias and found that non-participants did not differ from participants on any of the socio-demographic or biomedical variables. Furthermore, a high rate of participation was observed (91.6% of those who arrived at the center and were asked to participate). Another limitation is that our cohort consisted of participants in a health screening program and is not a population-based sample. However, the mean BMI in our study is very similar to that published by the National Health Survey from Israel [40]. Second, mean follow-up was 3 years, which is a relatively short time period for the investigation of RHR trajectories. Nevertheless, we were able to show a significant link between the biomarker studied and RHR trends in this cohort of apparently healthy individuals. We expect the observed trend to strengthen with time unless lifestyle modifications are made. Another possible limitation is that the use of baseline RHR measurements introduced potential bias with respect to regression to the mean, whereby individuals with extreme RHR at the baseline measurement tend to measure closer to the mean at the second measurement [41]. The very large spread in the change in RHR (Fig.
1) is most likely not due to biological factors but to other factors such as measurement error, changes in measuring circumstances, regression towards the mean, and others. Last but not least, it was previously shown that among employees, white-collar workers (professional, managerial, clerical) had slightly higher resting heart rates than blue-collar workers, suggesting a possible effect of greater physical fitness among blue-collar workers [42]. However, in our study we did not collect data regarding the type and amount of physical activity during work hours. Conclusions Our results confirm that RHR decreases with age and strengthen the need to identify patients with an annual RHR increase as an at-risk population. Determinants of increased annual RHR include lower initial exercise capacity, dyslipidemia, and an increase in BMI or decrease in MET. Our findings may be useful in identifying easily, and without any additional cost or time, asymptomatic individuals at risk, who could benefit from primary prevention (lifestyle changes or medication) in order to reduce cardiovascular risk. Abbreviations ΔRHR: delta resting heart rate, representing change in resting heart rate over a period of time; BMI: body mass index; BP: blood pressure; DM: diabetes mellitus; HDL-c: high-density lipoprotein cholesterol; LDL-c: low-density lipoprotein cholesterol; MET: metabolic equivalents of task; MetS: metabolic syndrome; MI: myocardial infarction; RHR: resting heart rate
Systematic study of fusion barrier characteristics within the relativistic mean-field formalism Background: The nuclear interaction potential, and hence the fusion barrier formed between the interacting nuclei, is the key to understanding the complex dynamics of the fusion process. Purpose: This work intends to explore the fusion barrier characteristics of different target-projectile combinations within the relativistic mean-field (RMF) formalism. Methods: The density distributions of the interacting nuclei and the microscopic R3Y NN interaction are obtained from the relativistic mean-field (RMF) formalism for the non-linear NL1, NL3, and TM1 parameter sets and from the relativistic-Hartree-Bogoliubov (RHB) approach for the DDME2 parameter set. The fusion and/or capture cross-section for the different reaction systems is calculated using the well-known ℓ-summed Wong model. Results: The barrier heights and positions of 24 heavy-ion reaction systems are obtained for different nuclear density distributions and effective NN interaction potentials. The fusion and/or capture cross-sections obtained from the ℓ-summed Wong model are compared with the available experimental data. Conclusions: The phenomenological M3Y NN potential is observed to give higher barrier heights than the relativistic R3Y NN potential for all the reaction systems. The comparison of results obtained from different relativistic parameter sets shows that the densities from the NL1 and TM1 parameter sets give the lowest and highest barrier heights, respectively, for all the systems under study. We observed higher barrier heights and lower cross-sections for the DDR3Y NN potential as compared to the density-independent R3Y NN potentials obtained for the considered non-linear NL1, NL3 and TM1 parameter sets. According to the present analysis, it is concluded that the NL1 and NL3 parameter sets provide comparatively better overlap with the experimental fusion and/or capture cross-sections than the TM1 and DDME2 parameter sets. I. INTRODUCTION The study of the underlying physics involved in low-energy heavy-ion fusion reactions is essential for a better understanding of the characteristics of nuclear forces, nuclear structure, superheavy nuclei (SHN), magic shell closures, drip lines, and other related phenomena [1][2][3][4][5][6][7][8][9][10]. The interaction potential, and consequently the fusion barrier formed between the projectile and target nuclei, provides the basis for understanding the dynamics involved in these fusion reactions. The characteristics of the fusion barrier, such as the barrier height, position and oscillator frequency, are used further to calculate related physical quantities such as the fusion probability and cross-section [11][12][13]. Since the fusion barrier is not directly measurable in experiments, theoretical modelling is required to extract its characteristics [14]. The origin of the fusion barrier is the interplay between the attractive short-range nuclear potential and the repulsive long-range Coulomb potential. The Coulomb potential is a known quantity and has a well-established formula. For the calculation of the nuclear potential, different theoretical approaches are available in the literature [2,[15][16][17][18][19][20][21][22][23][24][25][26]. These theoretical models differ from each other in their basic assumptions and the parameters used.
As a consequence, the calculated fusion barrier characteristics also vary and depend strongly on the adjustment of the parameters used in a given theoretical formalism [2,[15][16][17][18][19][20][21][22][23][24][25][26]. The phenomenological proximity potentials based on the proximity theorem [16,17] are widely used to estimate the nuclear interaction potential in terms of the mean curvature of the interacting surfaces and a universal function of the separation distance [2,[16][17][18][19]. The Bass potential, based on the liquid drop model, also provides a simple exponential form for the nuclear interaction potential [2,[20][21][22]. Moreover, the semi-microscopic approaches describe the nuclear interaction potential as the difference between the energies of the interacting nuclei at infinite separation and at a distance where they overlap; examples are the asymmetric two-center shell model and the models based on the energy density formalism (EDF) [23][24][25][27][28][29][30][31]. Furthermore, the phenomenological double folding approach has also been applied successfully to deduce the interaction potential between two colliding heavy ions [32][33][34][35]. In this approach, the nuclear optical potential is obtained using nuclear density distributions and an effective nucleon-nucleon interaction. The model has been widely adopted to provide the real and imaginary parts of the optical potentials between colliding ions in elastic and inelastic scattering, as well as in the study of nuclear fusion characteristics [32][33][34][35][36][37] and references therein. All these relativistic parametrizations have been developed against the frequent experimental measurements of various bulk properties and the constraints on nuclear matter observables, including highly dense and isospin-asymmetric systems. Each parametrization in the relativistic mean-field model has its own identity and certain limitations; for more details see Ref. [65]. In the present study, we have considered two different kinds of parametrizations, namely the non-linear NL1, NL3, and TM1 parameter sets within the relativistic mean-field model and the density-dependent DDME2 parameter set within the relativistic-Hartree-Bogoliubov (RHB) approach, to study the characteristics of nuclear fusion in terms of the nuclear density distributions and the NN potential. The medium-dependent R3Y (named DDR3Y) NN potential, given in terms of density-dependent nucleon-meson couplings, was introduced for the first time in a fusion study in Ref. [66]. Along with the relativistic R3Y and DDR3Y potentials, we have also employed the widely adopted M3Y potential [32,33] for comparison. A systematic analysis is carried out in three steps to study the effect of the nucleon-nucleon interaction potential and the density distributions of the fusing nuclei on the fusion barrier characteristics and consequently on the fusion and/or capture cross-section. In the first step, a comparison is made between the widely used M3Y and the recently developed relativistic R3Y effective nucleon-nucleon (NN) potentials in terms of the nuclear potential within the double folding approach. Moreover, the density-dependent R3Y (DDR3Y) NN interaction potential obtained for the DDME2 parameter set within the RHB approach is also taken into account in this analysis.
In the second step, the effect of the RMF nuclear density distributions obtained for the non-linear NL1, NL3, TM1, and density-dependent DDME2 parameter sets is analyzed with respect to the fusion characteristics. Finally, in the third step, we study the effect of the R3Y NN potential obtained for the non-linear NL1, NL3, and TM1 parameter sets within the RMF model, as well as of the DDR3Y NN potential obtained for the density-dependent DDME2 parameter set within the RHB approach, on the fusion barrier characteristics and consequently on the fusion and/or capture cross-section. We have chosen 24 different combinations of light-mass projectiles and heavy-mass targets from various exotic regions of the nuclear chart in the present analysis. The even-even 48 [41,72,73]. It is worth mentioning that all these isotopes are neutron-rich, and 304 120 is also predicted to be a doubly magic nucleus within various theoretical models [41,57,72,[74][75][76][77][78]. Hence, it is of interest to study the effect of different nuclear density distributions and nucleon-nucleon interaction potentials on the fusion and/or capture cross-sections for these systems. The ℓ-summed Wong model [79] is employed to deduce the fusion and/or capture cross-section, and comparison with the experimental cross-section is made wherever available. The paper is organized as follows: the theoretical formalism for the nuclear potential using the relativistic mean-field approach and the double folding procedure is explained in Section II, along with a brief description of the ℓ-summed Wong model used to estimate the fusion and/or capture cross-section. The results obtained from the calculations are discussed in Section III. Section IV presents the summary and conclusions of the present work. II. NUCLEAR INTERACTION POTENTIAL FROM RELATIVISTIC MEAN FIELD FORMALISM A complete description of the total interaction potential is crucial to estimate the fusion probability of two colliding nuclei. This interaction potential comprises three parts: the nuclear interaction potential, the Coulomb potential, and the centrifugal potential. The total interaction potential $V_T(R)$ between the projectile and target nuclei can be written as,

$V_T(R) = V_n(R) + V_C(R) + V_\ell(R)$. (1)

Here, $V_C(R) = Z_p Z_t e^2/R$ and $V_\ell(R) = \frac{\hbar^2 \ell(\ell+1)}{2\mu R^2}$ are the Coulomb and centrifugal potentials, respectively, $\mu$ is the reduced mass, and $R$ is the separation distance. The nuclear potential $V_n(R)$ is calculated here within the double folding approach [32],

$V_n(\vec{R}) = \int\!\!\int \rho_p(\vec{r}_p)\, \rho_t(\vec{r}_t)\, V_{\rm eff}\big(|\vec{r}_p - \vec{r}_t + \vec{R}|\big)\, d^3r_p\, d^3r_t$. (2)

Here, $\rho_p$ and $\rho_t$ are the total density (sum of proton and neutron densities) distributions of the projectile and target nuclei, respectively, and $V_{\rm eff}$ is the effective nucleon-nucleon (NN) interaction. Several expressions for the effective NN interaction potential are available in the literature. One of the best known is the M3Y (Michigan 3 Yukawa) potential [33]. As the name suggests, it consists of three Yukawa terms [32,33,36,40] and is given by,

$V_{\rm eff}^{\rm M3Y}(r) = 7999\,\frac{e^{-4r}}{4r} - 2140\,\frac{e^{-2.5r}}{2.5r} + J_{00}(E)\,\delta(r)$. (3)

Here, $J_{00}(E)\delta(r)$ is the zero-range exchange pseudo-potential, which accounts for the possibility of nucleon exchange between the projectile and target nuclei. As mentioned before, the characteristics of the total interaction potential play a crucial role in determining the various fusion properties. The barrier characteristics, such as the barrier height ($V_B$) and barrier position ($R_B$), can be determined from Eq.
(1) using the following conditions:

$\left.\frac{dV_T(R)}{dR}\right|_{R=R_B} = 0$, (4)

$\left.\frac{d^2V_T(R)}{dR^2}\right|_{R=R_B} < 0$. (5)

Moreover, the barrier curvature $\hbar\omega$ is evaluated at $R = R_B$, corresponding to the barrier height $V_B$, and is given as,

$\hbar\omega = \hbar\left[\left|\frac{d^2V_T(R)}{dR^2}\right|_{R=R_B}\Big/\mu\right]^{1/2}$. (6)

To determine these quantities, we need a complete description of the nuclear interaction potential, which is calculated here using the double folding integral given by Eq. (2). The main inputs to the double folding integral are the nuclear density distributions and the effective nucleon-nucleon interaction potential. The well-known relativistic mean-field (RMF) formalism and the relativistic-Hartree-Bogoliubov (RHB) approach have been employed to determine these density distributions and the NN interaction potential. The RMF formalism has been applied successfully to describe the properties of nuclear matter as well as of finite nuclei [36, 37, 41-45, 47-49, 56]. A phenomenological description of the nucleon interaction through the exchange of mesons and photons is given by the RMF Lagrangian density [36, 37, 41-45, 47-49, 56], which can be written as,

$\mathcal{L} = \bar{\psi}\left(i\gamma^\mu\partial_\mu - M\right)\psi + \frac{1}{2}\partial^\mu\sigma\,\partial_\mu\sigma - \frac{1}{2}m_\sigma^2\sigma^2 - \frac{1}{3}g_2\sigma^3 - \frac{1}{4}g_3\sigma^4 - g_\sigma\bar{\psi}\psi\sigma - \frac{1}{4}\Omega^{\mu\nu}\Omega_{\mu\nu} + \frac{1}{2}m_\omega^2\,\omega^\mu\omega_\mu + \frac{1}{4}\xi_3\left(\omega^\mu\omega_\mu\right)^2 - g_\omega\bar{\psi}\gamma^\mu\psi\,\omega_\mu - \frac{1}{4}\vec{B}^{\mu\nu}\!\cdot\!\vec{B}_{\mu\nu} + \frac{1}{2}m_\rho^2\,\vec{\rho}^{\,\mu}\!\cdot\!\vec{\rho}_\mu - g_\rho\bar{\psi}\gamma^\mu\vec{\tau}\psi\!\cdot\!\vec{\rho}_\mu - \frac{1}{4}F^{\mu\nu}F_{\mu\nu} - e\,\bar{\psi}\gamma^\mu\frac{(1-\tau_3)}{2}\psi A_\mu$. (7)

Here $\psi$ denotes the Dirac spinor for the nucleons, i.e., proton and neutron. $m_\sigma$, $m_\omega$ and $m_\rho$ signify the masses of the isoscalar-scalar $\sigma$, isoscalar-vector $\omega$ and isovector-vector $\rho$ mesons, respectively, which mediate the interaction between the nucleons of mass $M$. $g_\sigma$, $g_\omega$ and $g_\rho$ are the linear coupling constants of the respective mesons, whereas $g_2$, $g_3$ and $\xi_3$ are the non-linear self-interaction constants of the scalar $\sigma$ and vector $\omega$ mesons, respectively. The mass of the $\sigma$ meson and the meson coupling constants are fitted to reproduce the saturation properties of infinite nuclear matter and the bulk properties of magic shell nuclei. For the present study, we have considered three parameter sets, namely NL1 [42], NL3 [43], and TM1 [44]. In the NL1 and NL3 parameter sets, only the $\sigma$-meson self-coupling non-linear terms (i.e., the associated coupling constants $g_2$ and $g_3$) are taken into account, whereas in the TM1 parameter set the self-coupling term of the vector $\omega$ meson ($\xi_3$) is also considered. The terms $\tau$ and $\tau_3$ in Eq. (7) symbolize the isospin and its third component, respectively. $\Omega_{\mu\nu}$, $\vec{B}_{\mu\nu}$ and $F_{\mu\nu}$ are the field tensors for the $\omega$, $\rho$ and photon fields, respectively, and are given as,

$\Omega_{\mu\nu} = \partial_\mu\omega_\nu - \partial_\nu\omega_\mu$, (8)
$\vec{B}_{\mu\nu} = \partial_\mu\vec{\rho}_\nu - \partial_\nu\vec{\rho}_\mu$, (9)
$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$. (10)

The quantity $A_\mu$ here denotes the electromagnetic field, and the arrows symbolize vectors in isospin space. The equations of motion for the Dirac nucleon and the mesons are obtained from the Lagrangian density given in Eq. (7) using the Euler-Lagrange equations under the mean-field approximation. The field equations for the nucleons and the ($\sigma$, $\omega$, $\rho$) mesons are given as,

$\left\{-i\boldsymbol{\alpha}\cdot\nabla + \beta\left[M + g_\sigma\sigma(r)\right] + g_\omega\omega_0(r) + g_\rho\tau_3\rho_0(r)\right\}\psi = \varepsilon\,\psi$,
$\left(-\nabla^2 + m_\sigma^2\right)\sigma(r) = -g_\sigma\rho_s(r) - g_2\sigma^2(r) - g_3\sigma^3(r)$,
$\left(-\nabla^2 + m_\omega^2\right)\omega_0(r) = g_\omega\rho_B(r) - \xi_3\,\omega_0^3(r)$,
$\left(-\nabla^2 + m_\rho^2\right)\rho_0(r) = g_\rho\rho_3(r)$, (11)

where $\rho_s$, $\rho_B$ and $\rho_3$ denote the scalar, baryon and isovector densities, respectively. It is to be noted here that the terms with $\sigma^3$ and $\sigma^4$ account for the self-coupling among the scalar sigma mesons, and similarly the term with $\omega^4$ takes care of the self-coupling among the vector mesons. These non-linear self-coupling terms take care of the saturation properties and also soften the equation of state of nuclear matter [39,[42][43][44]. The properties of finite nuclei, such as the binding energy and charge radius ($r_{ch}$), estimated from the Lagrangian density containing the non-linear $\sigma$-$\omega$ terms also give a satisfactory match with the experimental values [42-45, 47, 49, 56]. The alternative approach to introduce density-dependent nucleon-meson couplings within the relativistic mean field is the relativistic-Hartree-Bogoliubov (RHB) approach [53-55, 60, 61]. In this framework, the couplings of the $\sigma$, $\omega$ and $\rho$ mesons to the nucleon fields (i.e.,
$g_\sigma$, $g_\omega$ and $g_\rho$) are defined as [53-55, 60, 61],

$g_i(\rho) = g_i(\rho_{\rm sat})\,f_i(x)$, for $i = \sigma, \omega$, (12)

where

$f_i(x) = a_i\,\frac{1 + b_i(x + d_i)^2}{1 + c_i(x + d_i)^2}$, (13)

and

$g_\rho(\rho) = g_\rho(\rho_{\rm sat})\,e^{-a_\rho(x-1)}$. (14)

Here, $x = \rho/\rho_{\rm sat}$, with $\rho_{\rm sat}$ the baryon density of symmetric nuclear matter at saturation. The five constraints $f_i(1) = 1$, $f_i''(0) = 0$, and $f_\sigma''(1) = f_\omega''(1)$ reduce the number of independent parameters in Eq. (13) from eight to three. All the independent parameters (the mass of the $\sigma$ meson and the coupling parameters) are fitted to the ground-state properties of finite nuclei as well as to the properties of symmetric and asymmetric nuclear matter. In the present analysis we have adopted the well-known DDME2 parameter set [61] to study the fusion barrier characteristics, and we compare the results with those obtained using the non-linear NL1 [42], NL3 [43], and TM1 [44] parameter sets. A. Medium-Dependent Relativistic R3Y Potential The nucleon-nucleon interaction potential analogous to the M3Y potential (see Eq. (3)) has also been derived by solving the mean-field equations in Eq. (11) in the limit of one-meson exchange [37][38][39][40]. The relativistic NN potential, entitled the R3Y potential [37][38][39][40], can be written as,

$V_{\rm eff}^{\rm R3Y}(r) = \frac{g_\omega^2}{4\pi}\frac{e^{-m_\omega r}}{r} + \frac{g_\rho^2}{4\pi}\frac{e^{-m_\rho r}}{r} - \frac{g_\sigma^2}{4\pi}\frac{e^{-m_\sigma r}}{r} + \frac{g_2^2}{4\pi}\,r\,e^{-2m_\sigma r} + \frac{g_3^2}{4\pi}\frac{e^{-3m_\sigma r}}{r} + J_{00}(E)\,\delta(r)$. (15)

It is worth mentioning here that for the medium-independent R3Y NN potential obtained for the NL1, NL3 and TM1 parameter sets, the nucleon-meson couplings ($g_\sigma$, $g_\omega$ and $g_\rho$) appearing in Eq. (15) are independent of the density. However, in the case of the medium-dependent DDR3Y NN potential for the DDME2 parameter set, $g_\sigma$, $g_\omega$ and $g_\rho$ are density-dependent [given in Eqs. (12)-(14)], and the non-linear self-interaction constants ($g_2$, $g_3$, $\xi_3$) are zero. The density $\rho$ entering Eqs. (12)-(15) is obtained within the relaxed density approximation (RDA) [80,81] at the midpoint of the inter-nucleon separation distance and can be written as,

$\rho = \rho_p + \rho_t = \left(\rho_{N_p} + \rho_{P_p}\right) + \left(\rho_{N_t} + \rho_{P_t}\right)$. (16)

Here, $\rho_{N_p}$ ($\rho_{P_p}$) and $\rho_{N_t}$ ($\rho_{P_t}$) are the neutron (proton) densities of the projectile and target nuclei, respectively, with $N_{p(t)}$ and $A_{p(t)}$ being the neutron and mass numbers of the projectile (target) nuclei. More details about the validity of the RDA in obtaining the DDR3Y NN potential can be found in one of our recent works [66], where a comprehensive analysis of the fusion cross-sections obtained using the density-dependent M3Y [32,82,83] and R3Y NN potentials is carried out. In the present analysis, a systematic study of the fusion barrier characteristics obtained using different RMF density distributions and the relativistic R3Y, DDR3Y and non-relativistic M3Y NN potentials is performed for 24 isospin-asymmetric reaction systems forming heavy and superheavy nuclei. In open-shell nuclei, pairing plays a significant role in describing the structure properties, including the density distributions. Since the nuclei considered in the present study lie near the β-stable region of the nuclear chart, the simple BCS pairing approach is used to take care of the pairing correlations [36,37,84,85], and a blocking procedure is used to treat the odd-mass nuclei [36,37,86,87]. The relativistic R3Y NN potentials for the different parameter sets and the analogous M3Y potential are shown in Fig. 1. The R3Y NN potential for DDME2 is plotted here with the coupling constants of Eq. (15) evaluated at the saturation density, $\rho_{\rm sat} = 0.152\ {\rm fm}^{-3}$ [61]. It can be observed from Fig. 1 that the curves of the R3Y NN interaction potential for the NL1, NL3, TM1, and DDME2 parameter sets show trends similar to the M3Y NN potential. The R3Y NN potential for the DDME2 parameter set at saturation density is observed to show the deepest pocket.
However, the actual DDR3Y NN potential for the DDME2 parameter set used to obtain the nuclear potential within the double folding approach is density-dependent and is not identical to the one plotted at $\rho_{\rm sat} = 0.152\ {\rm fm}^{-3}$ in Fig. 1. A more detailed inspection shows that the R3Y NN potentials for the NL3 and NL1 parameter sets have a slightly deeper pocket than the M3Y NN potential, whereas the R3Y potential for the TM1 parameter set shows a slightly shallower pocket, which can be connected with the non-linear self-coupling term ($\xi_3$) in the $\omega$ field. More details can be found in Refs. [37][38][39][40]. The total potential, i.e., the barrier characteristics obtained from Eq. (1), is used further to obtain the fusion and/or capture cross-section within the ℓ-summed Wong model. B. ℓ-summed Wong model The fusion and/or capture cross-section of two colliding nuclei is given in terms of ℓ-partial waves by [36,37,79,88]

$\sigma(E_{\rm c.m.}) = \frac{\pi}{k^2}\sum_{\ell=0}^{\ell_{\max}}(2\ell+1)\,P_\ell(E_{\rm c.m.})$. (17)

Here, $E_{\rm c.m.}$ is the center-of-mass energy of the target-projectile system and $k = \sqrt{2\mu E_{\rm c.m.}/\hbar^2}$. $P_\ell$ is the penetration probability, which describes the transmission through the barrier given in Eq. (1). Using the Hill-Wheeler [89] approximation, $P_\ell$ can be written in terms of the barrier height ($V_B^\ell$) and curvature ($\hbar\omega_\ell$) as,

$P_\ell = \left[1 + \exp\left(\frac{2\pi\left(V_B^\ell - E_{\rm c.m.}\right)}{\hbar\omega_\ell}\right)\right]^{-1}$. (18)

Eq. (17) describes the fusion and/or capture cross-section of two interacting nuclei as a summation over partial waves. C. Y. Wong [88] replaced this summation by an integration, using the approximations $\hbar\omega_\ell \approx \hbar\omega$ and $V_B^\ell \approx V_B + \frac{\hbar^2\ell(\ell+1)}{2\mu R_B^2}$. These approximations lead to a simple formula for estimating the fusion and/or capture cross-section in terms of the barrier characteristics. This simplified Wong formula [88] can be written as,

$\sigma(E_{\rm c.m.}) = \frac{R_B^2\,\hbar\omega}{2E_{\rm c.m.}}\ln\left\{1 + \exp\left[\frac{2\pi\left(E_{\rm c.m.} - V_B\right)}{\hbar\omega}\right]\right\}$. (19)

However, using only the ℓ = 0 barrier and ignoring the modifications arising from the ℓ-dependence of the potential causes the Wong formula to overestimate the fusion and/or capture cross-section at above-barrier energies. Gupta and collaborators addressed this problem [36,37,79] by using the more precise ℓ-summed formula given in Eq. (17). III. CALCULATIONS AND DISCUSSION In this section, the fusion and/or capture cross-sections of heavy-ion reactions are studied within the ℓ-summed Wong model supplemented with the self-consistent relativistic mean-field and relativistic-Hartree-Bogoliubov formalisms. As is well known, the values of the nuclear matter observables and the structural properties of finite nuclei obtained from the RMF and RHB formalisms depend upon the force parameter set. In parallel, a dependence of the fusion characteristics upon these RMF parameter sets can be anticipated. In the present analysis, we systematically study the dependence of the nuclear density distributions and the effective nucleon-nucleon interaction potential on the different parameter sets and, consequently, their effect on the fusion barrier characteristics. The nuclear density distributions for all the interacting nuclei (targets and projectiles) are obtained within the relativistic mean-field formalism for the three force parameter sets NL1, NL3 and TM1, and within the relativistic-Hartree-Bogoliubov approach for the DDME2 parameter set. We have considered three types of effective nucleon-nucleon interactions: (i) the widely used non-relativistic M3Y potential given by Eq. (3); (ii) the density-independent relativistic R3Y NN potential, described in terms of the masses and coupling constants of the $\sigma$, $\omega$ and $\rho$ mesons and obtained for the three relativistic parameter sets NL1, NL3 and TM1;
(iii) the medium-dependent R3Y NN potential, in which the medium dependence is taken into account via the density-dependent meson-nucleon coupling terms obtained from the DDME2 parameter set. From Fig. 1, it can be noted that the M3Y and R3Y NN potentials for the NL1, NL3, TM1, and DDME2 sets show a similar trend but with different depths. A systematic study is carried out of the effect of these NN interactions on the nuclear potential ($V_n$) calculated within the double folding approach. As mentioned above, the R3Y NN potential for the DDME2 parameter set is plotted at saturation density ($\rho_{\rm sat} = 0.152\ {\rm fm}^{-3}$) in Fig. 1, whereas the actual density-dependent R3Y (DDR3Y) obtained within the RHB approach is used for calculating the nuclear potential and the fusion and/or capture cross-section. The calculations are done in three steps: (i) folding the nuclear density distributions with the medium-independent relativistic R3Y potential for the three parameter sets NL1, NL3 and TM1, and with the DDR3Y NN potential for the DDME2 parameter set, to obtain the nuclear potential; the nuclear density distributions are also folded with the phenomenological M3Y potential for the sake of comparison; (ii) in the second step, the R3Y potential is fixed for one parameter set and folded with the density distributions obtained for the four considered parameter sets; (iii) in the last step, the density is fixed for one parameter set and folded with the R3Y NN potentials obtained for the NL1, NL3, TM1, and DDME2 parameter sets. Nuclear Density Distributions: The RMF formalism successfully describes bulk properties such as the binding energy, quadrupole deformation, and nuclear density distributions throughout the nuclear chart. The total density (sum of the proton and neutron number densities, i.e., $\rho_T = \rho_P + \rho_N$) as a function of the nuclear radius ($r$) is plotted in Fig. 2 for the representative case of (a) odd-mass 31 [90][91][92]. Moreover, the density distributions of the intermediate and heavy mass target nuclei show a comparatively flattened curve in the central region that falls rapidly in the surface region. The figure shows that NL1 and TM1 give, respectively, the lowest and highest magnitudes of the central density for all the nuclei under study. In heavy-ion fusion reactions, it is the density at the tail/surface region that plays the most crucial role in the fusion process [93]. The insets in Fig. 2 show a magnified view of the tail region of the densities. A slight difference is observed among the NL1, NL3, TM1 and DDME2 density distributions at the surface/tail region of all the interacting nuclei. A systematic and quantitative study is carried out in the following subsections to analyze the effects of this slight difference on the fusion characteristics. A. Folding RMF densities with R3Y, DDR3Y and M3Y NN potentials: In the first analysis, a comparison is made between the widely adopted non-relativistic M3Y, the density-independent relativistic R3Y and the density-dependent relativistic R3Y (DDR3Y) NN potentials within the double folding approach. The RMF nuclear density distributions obtained for the non-linear NL1, NL3, TM1 and density-dependent DDME2 parameter sets (see Fig. 2) are folded with the M3Y, R3Y and DDR3Y NN potentials to obtain the nuclear optical potential. In our earlier work, the density-independent M3Y and R3Y nucleon-nucleon potentials were used to study the fusion hindrance phenomenon in a few Ni-based reactions [36] and to study the cross-sections for the synthesis of heavy and superheavy nuclei [37,94].
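To make the comparison of pocket depths in Fig. 1 concrete, the following is a minimal sketch that evaluates the direct part of the M3Y potential of Eq. (3) against a schematic R3Y form of Eq. (15). The coupling constants and meson masses used here are illustrative round numbers, not the fitted NL1/NL3/TM1/DDME2 values, and the zero-range $J_{00}(E)\delta(r)$ term and the non-linear $g_2$/$g_3$ contributions are omitted.

```python
# Minimal sketch: compare the direct M3Y part of Eq. (3) with a schematic
# one-meson-exchange R3Y form of Eq. (15). Couplings and meson masses are
# illustrative placeholders, NOT fitted RMF parameter-set values.
import numpy as np

HBARC = 197.327  # MeV fm

def v_m3y(r):
    # Direct part of the M3Y potential in MeV (r in fm);
    # the zero-range exchange term J00(E)*delta(r) is left out.
    return 7999.0 * np.exp(-4.0 * r) / (4.0 * r) \
         - 2140.0 * np.exp(-2.5 * r) / (2.5 * r)

def v_r3y(r, gs, gw, gr, ms_mev, mw_mev, mr_mev):
    # Repulsive omega/rho Yukawas minus the attractive sigma Yukawa;
    # the non-linear g2/g3 terms of Eq. (15) are omitted here.
    def yukawa(g, m_mev):
        m = m_mev / HBARC                    # meson mass MeV -> fm^-1
        return (g * g / (4.0 * np.pi)) * np.exp(-m * r) / r * HBARC
    return yukawa(gw, mw_mev) + yukawa(gr, mr_mev) - yukawa(gs, ms_mev)

r = np.linspace(0.4, 4.0, 181)               # fm
pocket_m3y = v_m3y(r).min()
pocket_r3y = v_r3y(r, gs=10.2, gw=12.9, gr=4.5,
                   ms_mev=508.0, mw_mev=783.0, mr_mev=763.0).min()
print(f"deepest point: M3Y {pocket_m3y:.1f} MeV, R3Y {pocket_r3y:.1f} MeV")
```

With these illustrative inputs the R3Y pocket comes out deeper than the M3Y one, in line with the qualitative statement above that the R3Y interaction is comparatively more attractive.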
In Refs. [36,37], the fusion and/or capture cross-sections obtained using the non-relativistic M3Y NN potential were compared with the relativistic R3Y NN potential for the NL3* parameter set only. A more systematic study, including different non-linear RMF parameter sets as well as the explicit medium dependence of the R3Y NN potential, is carried out in the present analysis to investigate the effect of different nucleon-nucleon potentials and nuclear density distributions on the fusion and/or capture cross-sections of 24 different exotic reaction systems. The reaction systems involve projectiles with a higher N/Z ratio, synthesizing neutron-rich heavy and superheavy nuclei. We have considered 17 target-projectile combinations for the synthesis of different isotopes of the SHN with Z = 120. The fusion barrier characteristics (barrier height, barrier position, frequency, etc.) are obtained within the double folding approach for both the M3Y and R3Y NN potentials, and the fusion and/or capture cross-section is then calculated from the well-known ℓ-summed Wong model. Total Interaction Potential: As discussed above, the interaction potential at ℓ = 0 (sum of the nuclear and Coulomb potentials) formed between the target and projectile nuclei plays the most crucial role in determining the fusion characteristics of a system. We have calculated the nuclear interaction potential from the nuclear density distributions folded with the M3Y, R3Y and DDR3Y effective NN interactions. The total interaction potential at ℓ = 0 [$V_T(R) = V_n(R) + V_C(R)$] is obtained for all 24 reaction systems. Fig. 3 shows the barrier region of the total interaction potential (MeV) at ℓ = 0 as a function of the radial separation R (fm) for all 24 reaction systems. The dashed lines signify the phenomenological M3Y NN potential folded with the nuclear densities obtained for the NL1 (light blue), NL3 (orange), TM1 (black), and DDME2 (grey) parameter sets, while the solid lines signify that the relativistic R3Y and DDR3Y potentials, along with the corresponding mean-field densities, are used to obtain the nuclear potential within the double folding approach. From Fig. 3, one can notice that the M3Y NN potential gives a relatively higher barrier than the relativistic R3Y and DDR3Y NN potentials for all the considered systems. This signifies that the microscopic R3Y effective NN potential, given in terms of the meson masses and their coupling constants, yields a comparatively more attractive interaction potential than the M3Y NN potential, described as the sum of three Yukawa terms. Comparing the barrier heights for the different density distributions (NL1, NL3, TM1, and DDME2) folded with the M3Y potential, it is observed that the TM1 and NL1 sets give the highest and lowest barrier heights, respectively. For the M3Y NN potential, the NL3 and TM1 densities give a higher barrier than the DDME2 parameter set, whereas for the R3Y NN potential the DDME2 set gives the highest barrier. This is because the R3Y NN potential for the DDME2 parameter set is density-dependent. The relaxed density approximation (RDA) is used here to include the density dependence of the microscopic R3Y NN potential in terms of the nucleon-meson couplings. Thus, the barrier height is observed to increase with the inclusion of the in-medium effects of the R3Y NN potential. The Bass potential [20][21][22] is a simpler, well-known form of the nuclear potential.
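Before turning to the comparison with the Bass potential, note that the barrier characteristics discussed here can be extracted numerically from any tabulated total potential. The following minimal sketch applies the stationarity conditions of Eqs. (4)-(5) and the curvature definition of Eq. (6) to a toy Woods-Saxon nuclear part standing in for a folded RMF potential; the depth, radius and diffuseness are illustrative assumptions, not fitted values.

```python
# Minimal sketch: locate the barrier height V_B and position R_B from a
# tabulated total potential V_T(R) = V_n(R) + V_C(R) at l = 0, via the
# stationarity conditions Eqs. (4)-(5), and evaluate the curvature
# hbar-omega of Eq. (6). The Woods-Saxon V_n is a toy stand-in for a
# double-folded RMF potential; its parameters are illustrative only.
import numpy as np

Zp, Zt, Ap, At = 20, 96, 48, 248       # e.g. 48Ca + 248Cm
E2 = 1.44                              # e^2 in MeV fm
HBARC = 197.327                        # MeV fm

R = np.linspace(9.0, 16.0, 1401)       # fm, outside the deep-overlap region
Vn = -120.0 / (1.0 + np.exp((R - 11.0) / 0.65))   # toy nuclear part (MeV)
Vc = Zp * Zt * E2 / R                  # Coulomb part (MeV)
VT = Vn + Vc

i = np.argmax(VT)                      # dV_T/dR = 0 with d2V_T/dR2 < 0
VB, RB = VT[i], R[i]

mu = 931.5 * Ap * At / (Ap + At)       # reduced mass (MeV/c^2)
d2V = np.gradient(np.gradient(VT, R), R)[i]       # MeV/fm^2 at R_B
hw = HBARC * np.sqrt(abs(d2V) / mu)    # hbar-omega in MeV
print(f"V_B = {VB:.1f} MeV, R_B = {RB:.2f} fm, hbar*omega = {hw:.2f} MeV")
```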
Here, we have studied the variation in the fusion barrier characteristics, i.e., the barrier height $V_B$ and position $R_B$ obtained from the M3Y, R3Y and DDR3Y NN potentials, with respect to (w.r.t.) those obtained from the Bass potential. Fig. 4 shows the percentage change in barrier height (upper panel) and barrier position (lower panel) as a function of the charge product $Z_pZ_t$ for all 24 systems, where $Z_p$ and $Z_t$ are the atomic numbers of the projectile and target nuclei, respectively. M3Y-NL1, M3Y-NL3, M3Y-TM1, and M3Y-DDME2 signify that the nuclear density distributions obtained for the non-linear NL1, NL3, TM1 and density-dependent DDME2 parameter sets, respectively, along with the M3Y NN potential, are used within the double folding procedure to calculate the nuclear potential. Similarly, NL1-NL1, NL3-NL3, and TM1-TM1 signify that the RMF density distributions and the relativistic R3Y NN potential are used within the double folding approach, while DDME2-DDME2 signifies that the DDR3Y NN potential and the density distributions obtained within the RHB approach for the DDME2 parameter set are used. The Coulomb potential is then added to this nuclear potential, and the barrier characteristics ($V_B$, $R_B$) are obtained using Eqs. (4) and (5). It can be noted from Fig. 4 that the nuclear potential calculated for the M3Y NN potential shows a ≤ 2% change in barrier height and a ≤ 1% change in barrier position for the considered reactions w.r.t. the Bass potential. However, in the case of the nuclear potentials obtained for the R3Y and DDR3Y NN potentials, the barrier height decreases by up to ≈ 5% and the barrier position shifts by up to ≈ 8% towards larger separation distances w.r.t. the Bass potential. Moreover, this percentage change in the barrier characteristics is minimum for the nuclear density distributions obtained for the TM1 parameter set and maximum for those of the NL1 parameter set. This shows that the inclusion of the vector meson self-coupling term (∝ ω⁴) in the RMF Lagrangian results in a stronger repulsive core of the NN interaction potential. The characteristics of the fusion barrier have a direct impact on the fusion cross-section of the reaction systems: the higher the barrier calculated for a system, the lower its cross-section. The effect of the different RMF densities and NN interaction potentials on the fusion and/or capture cross-section is studied using the well-known ℓ-summed Wong model in the following subsection. Fusion and/or capture Cross-Section: The characteristics of the total interaction potential (barrier height, position, and frequency) are used further to estimate the fusion probability and cross-section. We have calculated the fusion and/or capture cross-sections for all 24 reaction systems within the well-known ℓ-summed Wong model described in detail in the previous section. Fig. 5 shows the cross-section σ (mb) as a function of the center-of-mass energy E_c.m. (MeV) for all the target-projectile systems, including the even-even ones [5,[67][68][69][70][71]. The ℓ_max values are calculated using the sharp cut-off model [95] for the reaction systems with available experimental data. Since experimental fusion and/or capture cross-sections are not available for the systems leading to the formation of the SHN Z = 120, except for the 64Ni+238U reaction [71], the sharp cut-off model is not applicable to them. To extract the ℓ_max values for these systems we have used the polynomial between E_c.m.
/V_B and the ℓ_max values constructed using the 64Ni+238U data in our earlier work [94]. The dashed lines in Fig. 5 show the cross-sections estimated by employing the nuclear potential calculated by folding the M3Y NN interaction over the nuclear density distributions obtained for the NL1 (light blue), NL3 (orange), TM1 (black) and DDME2 (grey) parameter sets. The solid lines in Fig. 5 signify that the nuclear potential calculated by folding the relativistic R3Y and DDR3Y NN potentials with the spherical densities obtained for the NL1 (light blue), NL3 (orange), TM1 (black) and DDME2 (grey) parameter sets is employed to calculate the cross-section. Comparing in Fig. 5 the cross-sections from the different relativistic force parameter sets, we find that the DDME2 parameter set gives the lowest, whereas the NL1 set gives the highest, cross-section value for a given system. This difference is most evident at center-of-mass energies below and around the fusion barrier; at above-barrier energies, the cross-sections from all the parameter sets almost overlap. The reason for this behaviour is that the structure effects get diminished at above-barrier energies, where the angular momentum part of the total potential dominates [36]. Comparison of the results for the different relativistic parameter sets with the experimental data shows that the NL1 parameter set is superior to the NL3, TM1 and DDME2 parameter sets. Among the NL3, TM1 and DDME2 sets, the NL3 parameter set is observed to fit the experimental data better. This is because the TM1 parameter set, which includes the self-coupling terms of the ω mesons, gives a comparatively repulsive NN interaction, underestimating the fusion and/or capture cross-section. However, for three reactions, namely 46K+181Ta (Fig. 5(b)), 64Ni+238U (Fig. 5(d)) and 48Ca+248Cm (Fig. 5(g)), TM1 gives better overlap than the other parameter sets. Moreover, the DDME2 density folded with M3Y is observed to give a higher cross-section than the NL3 and TM1 densities. On the contrary, the DDME2 density folded with the DDR3Y NN potential gives a lower cross-section than the R3Y NN potential folded with the NL1, NL3 and TM1 densities. This indicates that including the density dependence of the microscopic R3Y NN potential in terms of the DDME2 parameter set decreases the cross-section. In the case of the 26Mg+248Cm system (Fig. 5(h)), both the M3Y and R3Y NN potentials underestimate the experimental cross-section at below-barrier energies. This deviation between the experimental and theoretically calculated cross-sections is caused by the structural deformations of the fusing nuclei, which are not considered in the present study. For the reaction systems leading to the formation of different isotopes of the SHN with Z = 120, experimental data are available only for 64Ni+238U and, as discussed above, show a good overlap with the results obtained with the R3Y NN interaction for the TM1 parameter set. Among all the systems for Z = 120, the difference between the cross-sections obtained for the NL1, NL3, TM1, and DDME2 parameter sets is a little more prominent for the 58Fe+244Pu system (see Fig. 5(v)). In the case of the 26Mg+248Cm system, the experimental data are available at center-of-mass energies far below its Bass barrier (at 126.833 MeV). For the 48Ca+248Cm system, the R3Y NN potential for TM1 gives a better fit to the experimental data than for the other systems.
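For reference, the following is a minimal sketch of how cross-sections such as those in Fig. 5 follow from the ℓ-summed Wong model of Eqs. (17)-(18) with the Hill-Wheeler penetrability. The barrier inputs $V_B$, $R_B$ and $\hbar\omega$ below are illustrative, not values folded from RMF densities, and the ℓ-dependent barrier is approximated in the Wong manner by $V_B^\ell = V_B + \hbar^2\ell(\ell+1)/(2\mu R_B^2)$.

```python
# Minimal sketch: the l-summed Wong cross-section, Eqs. (17)-(18), with
# Hill-Wheeler penetrability. V_B, R_B and hbar-omega are illustrative
# inputs, not values obtained from folded RMF densities.
import numpy as np

HBARC2 = 197.327**2                     # (hbar c)^2 in MeV^2 fm^2

def sigma_lsummed(Ecm, VB, RB, hw, mu, lmax):
    k2 = 2.0 * mu * Ecm / HBARC2        # k^2 in fm^-2 (mu in MeV/c^2)
    l = np.arange(lmax + 1)
    # Wong approximation for the l-dependent barrier height:
    VBl = VB + HBARC2 * l * (l + 1) / (2.0 * mu * RB**2)
    Pl = 1.0 / (1.0 + np.exp(2.0 * np.pi * (VBl - Ecm) / hw))
    return (np.pi / k2) * np.sum((2 * l + 1) * Pl) * 10.0   # fm^2 -> mb

mu = 931.5 * 48 * 248 / (48 + 248)      # reduced mass of 48Ca + 248Cm
for Ecm in (200.0, 210.0, 220.0):       # energies around a ~210 MeV barrier
    s = sigma_lsummed(Ecm, VB=210.0, RB=12.3, hw=4.5, mu=mu, lmax=150)
    print(f"E_c.m. = {Ecm:.0f} MeV  ->  sigma = {s:.1f} mb")
```

The rapid rise of the cross-section as E_c.m. crosses V_B, and its saturation controlled by the (2ℓ+1) weighting up to ℓ_max, reproduce the qualitative behaviour discussed above.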
Based on all these observations, a more systematic investigation of the effects of the NN potential and RMF density distributions on the cross-section is carried out only for these three reaction systems in the following subsections. We have dropped the M3Y NN potential from the further investigation of the fusion characteristics, as it gives comparatively poor overlap with the experimental data. Also, since the NL1 and NL3 RMF parameter sets give comparatively better results than the TM1 parameter set, the R3Y NN potential and the RMF density distributions in the second and third steps are fixed for these parameter sets to explore their effects on the fusion characteristics. B. Fixing the relativistic R3Y NN interaction and varying the density: After comparing the barrier characteristics and fusion and/or capture cross-sections obtained from the non-relativistic M3Y and the relativistic R3Y and DDR3Y NN interaction potentials, we next investigate the effects of the RMF nuclear density distributions obtained from different force parameter sets on the fusion characteristics. In Fig. 2, one notices a small difference at the surface region of the interacting nuclei among the nuclear densities given by the NL1, NL3, TM1, and DDME2 parameter sets. Since nuclear fusion is a surface phenomenon, the tail region of the density distributions plays the most crucial role in the fusion cross-section [93]. To study the effect of the density distributions on the nuclear potential and consequently on the fusion characteristics, we fix the effective NN interaction in the double folding approach and then vary the densities of the fusing nuclei. First, we fix the relativistic R3Y NN potential for the NL1 parameter set and fold it with the density distributions obtained for the NL1, NL3, TM1, and DDME2 parameter sets to estimate the nuclear potential from Eq. (2); the same procedure is then repeated for the R3Y NN potential obtained for the NL3 parameter set. The total interaction potential, in terms of the barrier height and position, and the fusion and/or capture cross-section are then investigated for the different density distributions. Total Interaction potential: The total interaction potential is calculated for all the nuclear potentials using the same procedure described in the previous subsection. The barrier region of the total interaction potential as a function of the radial separation is shown in Fig. 6 for the systems (a) 48Ca+248Cm, (b) 26Mg+248Cm and (c) 58Fe+244Pu, and the corresponding barrier characteristics are listed in Table I. Here, NL3-TM1 signifies that the R3Y NN interaction potential obtained for the NL3 parameter set is folded with the RMF density distributions obtained for the TM1 parameter set; similarly, NL1-NL3 signifies that the R3Y NN interaction potential obtained for the NL1 parameter set is folded with the RMF density distributions obtained for the NL3 parameter set. The same notation is used in all the figures (including the captions of Figs. 6 and 7, where the nuclear potentials are obtained by fixing the NN potential for the NL1 and NL3 parameter sets and varying the densities) and in their discussion from here onwards. The inspection of Fig. 6 and Table I shows that the densities obtained for the TM1 and NL1 parameter sets give the highest and lowest fusion barriers, respectively.
Comparing the barrier characteristics for NL1-NL1, NL1-NL3, NL1-TM1, and NL1-DDME2, i.e., for the same effective NN interaction potential, we find that the densities obtained for the NL3, TM1 and DDME2 parameter sets raise the fusion barrier compared to NL1. The barrier height increases by ≈ 1 MeV, and the barrier position shifts by ≈ 0.1 fm towards smaller radial distance, when the NL1 densities are replaced with those of NL3. Similar behaviour is also observed for the nuclear potentials NL3-NL3, NL3-NL1, NL3-TM1, and NL3-DDME2. In a nutshell, the density distributions obtained for the TM1 parameter set give the highest fusion barrier, and those for the NL1 parameter set give the lowest. The density distributions given by the NL1 parameter set are more extended in the surface region than the NL3, TM1 and DDME2 densities. It can be inferred that a small increase in the density of the fusing nuclei at the surface region lowers the barrier height by approximately 1 MeV. This suppression of the barrier height enhances the fusion and/or capture cross-section at center-of-mass energies around the barrier. To study the effect of the density distributions more clearly, we have calculated the fusion and/or capture cross-section using all six nuclear potentials shown in Fig. 6 and listed in Table I. Fusion and/or capture Cross-Section: To investigate the effects of the RMF nuclear density distributions obtained from the different parameter sets on the fusion mechanism, the fusion and/or capture cross-sections for the systems (a) 48Ca+248Cm, (b) 26Mg+248Cm and (c) 58Fe+244Pu are next calculated as functions of the center-of-mass energy E_c.m. (MeV). The available experimental data (black spheres) [69,70] are also given for comparison. Comparison of the cross-sections calculated using the different nuclear potentials shows that the TM1 density distributions decrease the cross-section, whereas the NL1 densities increase the fusion and/or capture cross-section. The difference between the cross-sections obtained with the different nuclear potentials increases at energies below the barrier and becomes more prominent as the mass number of the projectile nucleus increases. For the 26Mg+248Cm system (Fig. 7(b)), the cross-section curves for the different nuclear potentials almost overlap with each other, whereas a larger difference is observed for the 58Fe+244Pu system (Fig. 7(c)), which results in the formation of the SHN Z = 120. This shows that the structure effects become more and more crucial as we move towards the heavier mass region of the periodic table. Comparison between the experimental and theoretical data shows that for the 26Mg+248Cm system the NL1-NL1 combination is more suitable than the others, whereas for 48Ca+248Cm a nice fit to the experimental cross-section is observed for NL3-TM1. All these observations indicate that even a small difference in the density distributions at the surface region significantly impacts the fusion and/or capture cross-section, and that this effect becomes more prominent towards the superheavy region of the nuclear chart. C. Fixing Density and varying R3Y NN potential The double folding optical potential depends upon the nuclear density distributions and the effective nucleon-nucleon interaction. In the previous subsection, we investigated the effect of different nuclear density distributions on the optical nuclear potential and, consequently, on the fusion characteristics.
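Since both ingredients of Eq. (2) have now been discussed, the following minimal sketch evaluates the double-folding integral for spherical densities in momentum space, where the six-dimensional folding collapses to a one-dimensional integral over Fourier-Bessel transforms. Two-parameter Fermi profiles with illustrative half-density radii and diffuseness stand in for the RMF densities, and the two M3Y Yukawa terms are folded via their analytic transforms.

```python
# Minimal sketch: double-folding integral of Eq. (2) for spherical
# densities, done in momentum space. Fermi-profile parameters are
# illustrative stand-ins for RMF densities, not RMF output.
import numpy as np

r = np.linspace(1e-3, 15.0, 600)        # fm
k = np.linspace(1e-3, 6.0, 600)         # fm^-1

def fermi_density(A, R0, a):
    rho = 1.0 / (1.0 + np.exp((r - R0) / a))
    rho *= A / np.trapz(4.0 * np.pi * r**2 * rho, r)   # normalize to A
    return rho

def ft_spherical(f):
    # F(k) = 4*pi * int dr r^2 j0(kr) f(r), with j0(x) = sin(x)/x
    j0 = np.sinc(np.outer(k, r) / np.pi)
    return np.trapz(4.0 * np.pi * r**2 * f * j0, r, axis=1)

def m3y_ft(k):
    # Analytic transform of V0 e^{-m r}/(m r): 4*pi*V0 / (m (k^2 + m^2))
    return 4*np.pi*7999.0/(4.0*(k**2 + 16.0)) \
         - 4*np.pi*2140.0/(2.5*(k**2 + 6.25))

rho_p = fermi_density(48,  3.95, 0.52)   # toy 48Ca profile
rho_t = fermi_density(248, 7.05, 0.54)   # toy 248Cm profile
Fp, Ft = ft_spherical(rho_p), ft_spherical(rho_t)

R = np.linspace(8.0, 16.0, 81)           # fm
j0R = np.sinc(np.outer(R, k) / np.pi)
Vn = np.trapz(k**2 * Fp * Ft * m3y_ft(k) * j0R, k, axis=1) / (2.0 * np.pi**2)
print(f"V_n at R = 12 fm: {np.interp(12.0, R, Vn):.1f} MeV")
```

In an actual calculation the Fermi profiles would be replaced by the RMF (or RHB) densities and, for the DDR3Y case, the transform of the NN interaction would acquire the density dependence of Eqs. (12)-(14) through the RDA density of Eq. (16).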
To examine the effects of the nucleon-nucleon (NN) interaction on the fusion barrier characteristics, we next fix the nuclear densities, change the effective NN interaction in the double folding approach, and study the resulting fusion characteristics. First, the R3Y NN interaction potentials obtained for the NL1, NL3 and TM1 parameter sets and the DDR3Y NN potential obtained for the DDME2 parameter set are folded with the nuclear densities obtained for the NL1 parameter set; the same procedure is then repeated with the densities obtained for the NL3 parameter set. (In Fig. 8, NL3-NL1 signifies that the R3Y NN potential from the NL3 parameter set and the density distributions from the NL1 parameter set are used within the folding procedure to obtain the nuclear potential; the same convention applies to the other cases labeled in the figure.) In this way we again obtain eight nuclear potentials, denoted NL1-NL1, NL3-NL1, TM1-NL1, DDME2-NL1, NL1-NL3, NL3-NL3, TM1-NL3, and DDME2-NL3, where the notation has the same meaning as in the previous subsection. The calculations of the total interaction potential and the fusion and/or capture cross-section are then carried out using these nuclear potentials. Total Interaction Potential: Fig. 8 displays the barrier region of the total interaction potential at ℓ = 0 as a function of the radial separation. As in the previous subsection, it is found here again that the NL1-NL1 combination gives the lowest value of the barrier height. The inclusion of in-medium effects in the microscopic R3Y NN potential, in terms of the density-dependent nucleon-meson coupling parameters, is observed to raise the potential barrier significantly. Also, the relativistic R3Y NN potential calculated with the TM1 parameter set, which accounts for the self-coupling of the isoscalar-vector ω meson, is observed to increase the barrier height compared to the NL3 and NL1 parameter sets. It is worth mentioning that the modification of the fusion barrier height caused by changing the effective NN interaction is comparatively more significant than that observed on changing the density distributions. The values of the barrier heights and positions for all 24 considered reaction systems are listed in Table II. On replacing the R3Y NN potential obtained for NL1 with that obtained for TM1 at a fixed density distribution, the barrier height increases by ≈ 1.5 MeV. The barrier height increases further, by up to 5 MeV, on replacing the R3Y NN potential obtained for the TM1 parameter set with the DDR3Y NN potential obtained for the DDME2 parameter set using the relaxed density approximation. This difference between the barrier heights given by the R3Y and DDR3Y NN potentials is slightly larger when folded with the NL3 density than with the NL1 density. Also, the change in the barrier characteristics w.r.t. the RMF parameter sets becomes more significant as the mass of the compound nucleus increases. For further exploration of the consequences of the different relativistic force parameter sets in terms of the NN potential, the fusion and/or capture cross-sections for all eight combinations of the nuclear potential given in Fig. 8 are investigated. Fusion and/or capture Cross-Section: The fusion and/or capture cross-sections for the systems (a) 48Ca+248Cm, (b) 26Mg+248Cm and (c) 58Fe+244Pu are obtained as functions of the center-of-mass energy and are shown in Fig. 9. The effects of the interaction potential characteristics are observed directly in the fusion and/or capture cross-section.
We obtained the highest cross-section for the combination NL1-NL1, whereas the lowest cross-section was observed for the DDME2-NL3 combination. The DDR3Y NN potential obtained for the DDME2 parameter set gives a lower cross-section than the medium-independent R3Y NN potentials obtained for the non-linear NL1, NL3, and TM1 parameter sets, indicating that the inclusion of in-medium effects in the microscopic NN potential decreases the cross-section. The structure effects of the interaction potential are observed to be diminished at energies above the fusion barrier. For the 48Ca+248Cm system (Fig. 9(a)), the DDR3Y NN potential obtained for the DDME2 parameter set folded with the NL1 density gives a better fit to the experimental data. However, for the system with the lighter projectile, i.e., 26Mg+248Cm (Fig. 9(b)), the NL1 parameter set gives better results. The effects of varying the effective NN interaction are observed to be more prominent than those of varying the nuclear density distributions. Comparing the barrier characteristics and the fusion and/or capture cross-sections obtained for the M3Y and R3Y NN potentials folded with the nuclear density distributions for the four different parameter sets, it is concluded that the relativistic R3Y NN potentials give a better description of the experimental data. Also, compared to the TM1 and DDME2 parameter sets, which were introduced to include the self-coupling of the vector ω mesons and the density dependence of the nucleon-meson couplings, respectively [44,61], the NL1 and NL3 parameter sets give better overlap with the experimental fusion and/or capture cross-sections. Moreover, NL1 is superior to NL3 in reproducing the experimental cross-sections. However, for nuclear matter properties, the NL1 parameter set produces a large value of the asymmetry parameter [42,43], and it also fails to fit the neutron skin thickness of nuclei away from the β-stability line [58,59]. On the other hand, the NL3 parameter set improves the value of the asymmetry parameter without increasing the number of phenomenological parameters. In the present study, the NL3 parameter set gives a better fit to the fusion and/or capture cross-sections than the TM1 parameter set, which includes the self-coupling terms of the vector meson in the RMF Lagrangian [44]. Taking all these facts into account, it can be concluded that the NL3 parameter set is suitable for describing both the bulk and fusion characteristics of finite nuclei (including heavy and superheavy nuclei with higher N/Z ratios) and the properties of infinite nuclear matter. The TM1 parameter set, which was introduced to incorporate the vector self-coupling in order to soften the equation of state of nuclear matter [44], gives a comparatively repulsive nuclear potential in terms of the nuclear density distributions and effective NN interaction potential, and consequently underestimates the fusion and/or capture cross-section. The inclusion of density dependence in the R3Y NN potential within the relativistic-Hartree-Bogoliubov approach for the DDME2 parameter set is observed to decrease the cross-section w.r.t. the density-independent R3Y NN potentials obtained for the NL1, NL3 and TM1 parameter sets. In the present analysis, we have considered only isospin-asymmetric reaction systems, i.e., target-projectile combinations forming a neutron-rich compound nucleus. A systematic study of isospin-symmetric (N = Z) reaction systems will be carried out in the near future. IV.
IV. SUMMARY AND CONCLUSIONS

A systematic study is carried out to assess the effect of the nuclear density distributions and the effective NN interaction on the fusion barrier characteristics. The fusion barrier properties and cross-sections of 24 different target-projectile combinations of even-even nuclei, among them ⁷⁶Ge + ²²⁸Ra, are investigated within the relativistic mean-field formalism. The nuclear density distributions for all interacting nuclei are obtained by employing the RMF formalism for the NL1, NL3 and TM1 parameter sets and the RHB approach for the DDME2 parameter set. The effective NN interactions are obtained using the well-known M3Y potential and the relativistic R3Y and DDR3Y potentials. The R3Y NN potential is obtained for the three relativistic mean-field parameter sets considered (NL1, NL3 and TM1), and the DDR3Y NN potential is obtained within the RHB approach for the DDME2 parameter set. In the first step, the M3Y, R3Y and DDR3Y NN potentials are compared by calculating the fusion barrier characteristics from the nuclear potential within the double-folding approach. It is concluded that the relativistic R3Y and DDR3Y NN interaction potentials give a relatively better fit to the experimental data than the M3Y potential. It is also observed that the NL1 and NL3 parameter sets fit the experimental data better than the TM1 and DDME2 parameter sets; however, for the systems ⁴⁶K + ¹⁸¹Ta, ⁴⁸Ca + ²⁴⁸Cm and ⁶⁴Ni + ²³⁸U, the TM1 parameter set works better. Secondly, the effective NN interaction is fixed and the nuclear density distributions are changed in the folding procedure to study their effect on the fusion characteristics. It is noticed that the nuclear densities obtained for the NL3, TM1 and DDME2 parameter sets give a comparatively repulsive nuclear potential and consequently decrease the fusion and/or capture cross-section. In the last step, we studied the effects of the effective NN interaction on the fusion characteristics by fixing the nuclear densities. We find that the TM1 parameter set gives a repulsive R3Y NN interaction potential and thus decreases the fusion probability. All these observations lead to the conclusion that the inclusion of the vector self-coupling term (∝ ω⁴) in the RMF Lagrangian increases the magnitude of the repulsive core of the NN interaction, which consequently underestimates the cross-section. Moreover, the DDR3Y NN potential, calculated in terms of density-dependent nucleon-meson couplings within the relativistic Hartree-Bogoliubov (RHB) approach for the DDME2 parameter set, is observed to give a higher barrier height and a lower fusion cross-section than the medium-independent R3Y NN potentials obtained within the RMF formalism for the NL1, NL3 and TM1 parameter sets. From the comparison of the theoretical cross-sections with the available experimental data, it is concluded that both the densities and the R3Y NN potential obtained for the NL1 and NL3 parameter sets give comparatively better overlap than the TM1 and DDME2 parameter sets. However, considering the overall description of the bulk properties and fusion characteristics of finite nuclei together with the properties of infinite nuclear matter, NL3 becomes the favourable choice. It is worth mentioning that the shape degrees of freedom, i.e. the nuclear deformations of the interacting nuclei, are not considered in the present analysis.
Hence, including the nuclear shape degrees of freedom within the relativistic mean-field formalism may change the results slightly without affecting the overall predictions; this extension will be carried out shortly.
Center Vortices and the Gribov Horizon

We show how the infinite color-Coulomb energy of color-charged states is related to an enhanced density of near-zero modes of the Faddeev-Popov operator, and calculate this density numerically for both pure Yang-Mills and gauge-Higgs systems at zero temperature, and for pure gauge theory in the deconfined phase. We find that the enhancement of the eigenvalue density is tied to the presence of percolating center vortex configurations, and that this property disappears when center vortices are either removed from the lattice configurations, or cease to percolate. We further demonstrate that thin center vortices have a special geometrical status in gauge-field configuration space: thin vortices are located at conical or wedge singularities on the Gribov horizon. We show that the Gribov region is itself a convex manifold in lattice configuration space. The Coulomb gauge condition also has a special status; it is shown to be an attractive fixed point of a more general gauge condition, interpolating between the Coulomb and Landau gauges.

Introduction

Of the many different ideas that have been advanced to explain quark confinement, more than one may be right, or at least partially right, and these should be related in some way. As in the old story of six blind men describing an elephant, those of us concerned with the QCD confinement mechanism might benefit from unifying some of our separate impressions, to form a better image of the entire beast. In this article we investigate the relationship between center vortices (for a review, cf. ref. [1]) and the Gribov horizon in Coulomb gauge, whose relevance to confinement has been advocated by Gribov and Zwanziger [2]. We begin with the simple fact that in a confining theory, the energy of an isolated color charge is (infrared) infinite, and this energy is a lower bound on the color-Coulomb energy [3] (defined below). This fact implies an enhancement of the density of near-zero eigenvalues $\lambda_n$,

  $M \varphi^{(n)} = \lambda_n \varphi^{(n)}$,  (1.1)

of the Faddeev-Popov (F-P) operator in Coulomb gauge, $M = -\nabla \cdot D(A)$, where $D(A)$ is the covariant derivative. The F-P eigenvalue density is an observable which we are able to calculate numerically (section 2), via lattice Monte Carlo and sparse-matrix techniques. Applying standard methods [4], we are able to separate any thermalized lattice configuration into vortex-only and vortex-removed components, and we find that the enhancement of the F-P eigenvalue density can be entirely attributed to the vortex component of the gauge fields. Vortices are associated with an enhancement in the density of F-P eigenvalues λ near λ = 0; this enhancement is key to the divergence of the color-Coulomb energy in Coulomb gauge. It is absent in the Higgs phase of a gauge-Higgs system (section 3), where remnant gauge symmetry is broken [5] and vortices cease to percolate. In particular, we compare the F-P eigenvalue density found in the Higgs phase to the corresponding density for configurations in the confined phase, with vortices removed; these densities are identical in form. The density of F-P eigenvalues in the high-temperature deconfined phase of pure gauge theory is examined in section 4. We find that the linearly rising, unscreened color-Coulomb potential, which is present in the deconfined phase [5], is associated with an enhanced density of F-P eigenvalues, and that this enhancement is again attributable to the vortex component of the gauge field.
Divergence of the color-Coulomb energy is a necessary but not sufficient condition for color confinement, and this phase provides an interesting example where the infinite color-Coulomb potential gets screened. Although the horizon scenario was invented to describe confinement, it nicely accounts for the divergence of the color-Coulomb energy in the deconfined phase, as explained in the Conclusion. It is also shown (section 5) how an array of center vortices, set up "by hand" to simulate some aspects of a percolating vortex configuration, leads to an accumulation of F-P eigenvalues near λ = 0. In section 7 we demonstrate that center configurations (equivalently: thin center vortices) have some remarkable geometrical properties in lattice configuration space. First, as already shown in ref. [5], center configurations lie on the Gribov horizon. It is known that the Gribov horizon is a convex manifold for continuum gauge fields [6] − a result which we extend here (section 6) to lattice gauge fields − and one might therefore suspect that the Gribov horizon is also smooth and differentiable. In fact, thin vortex configurations turn out to be distinguished points on the Gribov horizon, where the manifold acquires conical or "wedge" singularities. Finally, in section 8, we point out that the Coulomb gauge condition also has a special status, in that Coulomb gauge is an attractive fixed point of a more general interpolating gauge condition, which has the Coulomb and Landau gauge conditions as special cases.

The F-P eigenvalue density

The energy of an isolated color charge is (infrared) infinite in the confined phase, even on the lattice where ultraviolet divergences are regulated. We will consider charged states in Coulomb gauge which, for a single static point charge, take the simple form

  $\Psi_q^\alpha = \overline{q}^{\,\alpha}(x)\,\Psi_0$,  (2.1)

where α is the color index for a point charge in color group representation r, and Ψ₀ is the Coulomb gauge ground state. The gauge-field excitation energy E_r of this state, above the ground-state energy, is due entirely to the non-local color-Coulomb part H_coul of the hamiltonian, which involves the kernel $M^{-1}(-\nabla^2)M^{-1}$. The excitation energy of the charged state in SU(N) gauge theory, eq. (2.2), is then obtained from the expectation value of H_coul in the state (2.1); its group-theoretic content involves the color group generators {L^a}, the dimension d_r, and the quadratic Casimir C_r of representation r of the color charge. Therefore, the excitation energy E_r is the energy of the longitudinal color-electric field due to the static source, which we can identify as the color-Coulomb self-energy. This energy is ultraviolet divergent in the continuum, but of course that divergence can be regulated with a lattice cut-off. The more interesting point is that in a confining theory, E_r must still be divergent at infinite volume, even after lattice regularization, due to infrared effects. The excitation energy (2.2) represents the (infrared-infinite) energy of unscreened color charge in the state (2.1), which in general is a highly excited state that is not an eigenstate of the hamiltonian. States of this kind are useful for extracting the self-energy of an isolated charge due to its associated color-Coulomb field. On the other hand, the minimal free energy E_s of a state containing a static external charge, at inverse temperature T, is obtained from the value of the Polyakov loop, P ∼ exp(−E_s T). This minimal energy may be infrared finite in an unconfined phase even if E_r is not, provided the external charge can be screened by dynamical matter fields, or by high-temperature effects.
The infrared divergence of E_r must be understood as a necessary, but not sufficient, condition for confinement. We note in passing that the charged state (2.1) in Coulomb gauge corresponds, in QED in temporal gauge, to the well-known dressed-charge form of eq. (2.5). The investigation of this type of "stringless" state with external charges in non-abelian theories, using perturbative methods, was undertaken some time ago by Lavelle and McMullan in ref. [7]. The exponential prefactor in eq. (2.5) can be identified as the gauge transformation taking an arbitrary configuration A_k(x) into Coulomb gauge. This feature generalizes to non-abelian theories, and "stringless" states with static charges in temporal gauge can be formally expressed in terms of the gauge transformation to Coulomb gauge, as shown in ref. [5].

We now proceed to the lattice formulation, with an SU(2) gauge group. The link variables can be expressed in terms of the lattice gauge potential A_k(x), and (when the lattice version of the Coulomb gauge condition ∇ · A = 0 is satisfied) the lattice Faddeev-Popov operator is the matrix $M^{ab}_{xy}$ of eq. (2.7), where the indices x, y denote lattice sites at fixed time. Denote the Green's function corresponding to the F-P operator by its spectral representation,

  $G^{ab}_{xy} = \sum_n \frac{\varphi^{a(n)}_x\, \varphi^{b(n)*}_y}{\lambda_n}$,

where $\varphi^{a(n)}_x$, $\lambda_n$ are the n-th normalized eigenstate and eigenvalue of the lattice F-P operator $M^{ab}_{xy}$. Defining the representation-independent factor $\mathcal{E} \equiv E_r/(g^2 C_r)$ in the Coulomb self-energy, we find

  $\mathcal{E} = \frac{1}{3V_3}\Big\langle \sum_n \frac{F_n}{\lambda_n^2} \Big\rangle$,  (2.10)

where V₃ = L³ is the lattice 3-volume. Also defining

  $F_n \equiv \varphi^{(n)*} \cdot (-\nabla^2)\, \varphi^{(n)}$,  (2.11)

we note that the Faddeev-Popov operator, on the lattice, is a 3V₃ × 3V₃ sparse matrix; the number of linearly independent eigenstates is therefore 3V₃. Let N(λ, λ + Δλ) be the number of eigenvalues in the range [λ, λ + Δλ]. We define, on a large lattice, the normalized density of eigenvalues

  $\rho(\lambda) = \frac{\langle N(\lambda, \lambda + \Delta\lambda)\rangle}{3V_3\,\Delta\lambda}$.  (2.12)

Then, as the lattice volume tends to infinity,

  $\mathcal{E} = \int d\lambda\, \frac{\rho(\lambda)\, F(\lambda)}{\lambda^2}$,  (2.13)

where it is understood that the integrand is averaged over the ensemble of configurations. From this we derive a condition for the confinement phase: the excitation energy $E_r = g^2 C_r \mathcal{E}$ of a static, unscreened color charge is divergent if, at infinite volume,

  $\lim_{\lambda \to 0} \frac{\rho(\lambda)\, F(\lambda)}{\lambda} > 0$.  (2.14)

In perturbation theory, at zeroth order in the gauge coupling, the Faddeev-Popov operator is simply a laplacian, whose eigenstates are plane waves. Then λ = k², where k is the momentum, and from the volume element of momentum space it is easy to see that, to zeroth order in the limit of infinite lattice volume, $\rho(\lambda) \sim \sqrt{\lambda}$ (2.15), which obviously does not satisfy the confinement condition (2.14). At zeroth order we also have $F(\lambda) = \lambda$, so that $\rho(\lambda)F(\lambda)/\lambda \sim \sqrt{\lambda} \to 0$ as λ → 0 (2.16).

We now recall briefly some aspects of the Gribov horizon scenario [2]. The lattice version of the Coulomb gauge-fixing condition ∇ · A = 0 is satisfied by any lattice configuration U such that the quantity $R[U] = \sum_{x,k} \mathrm{Tr}[U_k(x)]$ is stationary at the trivial gauge transformation g = I. The Faddeev-Popov operator is obtained from the second derivative of R with respect to gauge transformations, so if R is a local maximum, then all the eigenvalues of the Faddeev-Popov operator are positive. The subspace of configuration space satisfying this condition is known as the Gribov region, and it is bounded by the Gribov horizon, where M develops a zero eigenvalue. In principle, the functional integral in minimal Coulomb gauge should be restricted to a subspace of the Gribov region in which the gauge fields are global maxima of R; this subspace is known as the "Fundamental Modular Region." Part of the boundary of the fundamental modular region lies on the Gribov horizon.
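To make the convergence logic of eqs. (2.13)-(2.16) concrete, the toy calculation below integrates the self-energy integrand down to a shrinking infrared cutoff for two assumed spectral behaviors: the zeroth-order case ρ ∼ √λ, F = λ, and an enhanced case with small powers chosen to satisfy condition (2.14). The exponents 0.25 and 0.38 in the enhanced case are illustrative assumptions of roughly the size suggested by fits of this kind, not exact results of the paper.

    import numpy as np

    # Toy check of the confinement condition (2.14): integrate the self-energy
    # integrand rho(l) F(l) / l^2 down to a shrinking infrared cutoff.
    def self_energy(rho_exp, F_exp, lam_ir, lam_max=1.0, n=5000):
        lam = np.logspace(np.log10(lam_ir), np.log10(lam_max), n)
        return np.trapz(lam ** (rho_exp + F_exp - 2.0), lam)

    for lam_ir in (1e-2, 1e-4, 1e-6):
        free     = self_energy(0.5,  1.0,  lam_ir)  # rho*F/l -> 0: converges
        enhanced = self_energy(0.25, 0.38, lam_ir)  # rho*F/l -> inf: diverges
        print(f"IR cutoff {lam_ir:.0e}:  free = {free:6.3f}"
              f"   enhanced = {enhanced:9.1f}")

The "free" column saturates as the cutoff is lowered, while the "enhanced" column grows without bound, which is the infrared divergence of the Coulomb self-energy in miniature.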
The dimension of lattice configuration space is of course very large, on the order of the number of lattice sites, and it has been proven that, in contrast to an abelian theory, the Gribov region is bounded and convex (cf. [6] and section 6 below). A suggestive analogy is that of a sphere in N dimensions, which has a volume element proportional to r^(N−1) dr, so that most of the volume is concentrated close to the surface of the sphere. By this analogy, it is reasonable to suppose that the volume of the Gribov region is concentrated near the Gribov horizon. If that is so, then typical configurations generated by lattice Monte Carlo, fixed to Coulomb gauge by standard over-relaxation techniques, will also lie very close to the Gribov horizon, and the F-P operator for such configurations will acquire near-zero eigenvalues. This is not enough by itself to ensure confinement; even the laplacian operator will have a spectrum of near-zero modes at large lattice volumes. The conjecture is that typical configurations near the Gribov horizon may also have enhanced values for ρ(λ) and F(λ) (compared to the perturbative expressions) at λ → 0, such that the confinement condition (2.14) is satisfied. Our task is to check, numerically, whether this enhancement exists or not.

Observables

We apply the ARPACK routine [8], which employs a variant of the Arnoldi procedure for sparse matrices, to evaluate the lowest 200 eigenvectors and corresponding eigenvalues of the F-P matrix $M^{ab}_{xy}$ in eq. (2.7), for configurations generated by lattice Monte Carlo. The first three eigenvalues are zero for SU(2) gauge theory, regardless of the lattice configuration, due to the fact that the eigenvector equation (1.1) is trivially satisfied by three linearly independent, spatially constant eigenvectors $\varphi^{a(n)}$ with zero eigenvalue. The existence of these trivial zero modes is related to the fact that physical states with non-zero total color charge cannot exist in a finite volume with periodic boundary conditions. This is true even for an abelian theory, and the reason is simple: the Gauss law cannot be satisfied for total non-zero charge in a finite periodic lattice. In such cases, the electric flux lines diverging from point sources have nowhere to end. This means that the F-P operator (or the laplacian, in an abelian theory) is non-invertible on a periodic lattice. It is precisely the existence of the trivial zero modes which makes the F-P operator non-invertible; there is no such difficulty in an infinite volume. In order to extrapolate our results on finite volumes to infinite volume, which allows non-zero total charge, there are two possibilities. First, we could get rid of zero modes by imposing non-periodic (e.g. Dirichlet) boundary conditions on the finite lattice. Second, we could perform our Monte Carlo simulations on finite periodic lattices as usual, but drop the trivial zero modes before extrapolating our results to infinite volume. In this article we choose the second approach, and exclude the trivial zero modes from all sums over eigenstates. The average eigenvalue density ρ(λ) is obtained from the remaining 197 eigenvalues in each thermalized configuration (there are L such configurations in a given L⁴ lattice, one at each time-slice). The range of eigenvalues is divided into a large number of subintervals, and eigenvalues are binned in each subinterval to determine the expected number N(λ, λ + Δλ) of eigenvalues per configuration falling into each bin.
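As a hedged illustration of this numerical procedure (not the code used in the paper), the sketch below assembles the zeroth-order F-P operator, i.e. the free lattice laplacian −δ^{ab}∇² on an L³ spatial lattice, as a sparse matrix, extracts its low-lying spectrum with scipy's ARPACK-based eigsh routine, and bins the eigenvalues as described above to estimate ρ(λ).

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    L = 8                                   # spatial lattice extension
    V3 = L**3

    def minus_laplacian_1d(L):
        # 1-d periodic second-difference matrix
        m = sp.diags([2.0 * np.ones(L), -np.ones(L - 1), -np.ones(L - 1)],
                     [0, 1, -1], format="lil")
        m[0, L - 1] = m[L - 1, 0] = -1.0    # periodic boundary conditions
        return sp.csr_matrix(m)

    I1 = sp.identity(L, format="csr")
    m1 = minus_laplacian_1d(L)
    lap3 = (sp.kron(sp.kron(m1, I1), I1) + sp.kron(sp.kron(I1, m1), I1)
            + sp.kron(sp.kron(I1, I1), m1))

    # Zeroth-order F-P operator: color-diagonal, 3V3 x 3V3 (3 = N^2 - 1 for SU(2))
    M = sp.kron(sp.identity(3, format="csr"), lap3)

    # Lowest 200 eigenvalues via ARPACK shift-invert (sigma below the spectrum)
    vals = np.sort(spla.eigsh(M, k=200, sigma=-0.5, which="LM",
                              return_eigenvectors=False))

    nontrivial = vals[3:]                   # drop the 3 trivial zero modes
    hist, edges = np.histogram(nontrivial, bins=20)
    rho = hist / (3 * V3 * np.diff(edges))  # normalization of eq. (2.12)
    print("lowest non-trivial eigenvalue:", nontrivial[0],
          " expected 4 sin^2(pi/L) =", 4 * np.sin(np.pi / L) ** 2)

For thermalized configurations, the only change is that M carries the gauge-field links; the spectral extraction and binning are the same.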
Substituting the binned count into the definition (2.12) of the normalized density of states, we obtain an approximation to ρ(λ) for λ values in the middle of each subinterval. We also compute the expectation values of the n-th eigenvalue λ_n and of the corresponding quantity

  $F_n \equiv \varphi^{(n)*} \cdot (-\nabla^2)\, \varphi^{(n)}$,  (2.18)

for the n = 4-200 non-trivial eigenvectors. Our plots of F(λ) vs. λ, shown below, are obtained by plotting F(λ_n) ≡ F_n vs. λ_n. Finally, we calculate the average contribution ε of the low-lying eigenstates with λ_n < 0.15 to the energy $\mathcal{E}$ of unscreened color charge,

  $\epsilon = \frac{1}{3V_3}\Big\langle \sum_{0 < \lambda_n < 0.15} \frac{F_n}{\lambda_n^2} \Big\rangle$.

For our purposes, the precise value of the upper limit in the sum is not too important. We have chosen the upper limit λ = 0.15 in order that the 200 lowest eigenvalues, on each lattice volume we have studied, include the range 0 < λ ≤ 0.15.

Center projection and vortex removal

Not every configuration on the Gribov horizon satisfies the confinement condition (2.14). For example, any purely abelian configuration in a non-abelian theory lies on the Gribov horizon (after gauge transformation to Coulomb gauge [5]) and therefore has a (non-trivial) zero eigenvalue, but not all such configurations will disorder Wilson loops, or lead to an F-P eigenvalue spectrum satisfying eq. (2.14). The center vortex theory of confinement holds that a particular class of field configurations dominates the vacuum state at large scales, and is responsible for the linear rise of the static quark potential. If so, then these same configurations should be responsible for the pileup of F-P eigenvalues near λ = 0, resulting in the infinite energy of an isolated color charge. This is the connection which we think must exist between the center vortex and Gribov horizon confinement scenarios. To investigate this connection, we apply standard methods to factor a lattice configuration into its vortex and non-vortex content. This is done by first fixing to direct maximal center gauge, which is the Landau gauge condition in the adjoint representation, using an over-relaxation technique. The lattice configuration is then factored as

  $U_\mu(x) = Z_\mu(x)\, \widetilde{U}_\mu(x)$,

where $Z_\mu(x) = \mathrm{sign}\,\mathrm{Tr}[U_\mu(x)]$ is the center-projected configuration, and $\widetilde{U}_\mu(x)$ is the "vortex-removed" configuration. The center-projected (thin vortex) configuration Z_µ(x) carries the fluctuations which give rise to an area law for Wilson loops. The asymptotic string tension of the vortex-removed configuration $\widetilde{U}_\mu(x)$ vanishes, as do its chiral condensate and topological charge. The numerical evidence supporting these statements is reviewed in ref. [1]. In our procedure, each thermalized lattice configuration is transformed to direct maximal center gauge and factored as above into a center-projected configuration Z_µ(x) and a vortex-removed configuration $\widetilde{U}_\mu(x)$. These are then transformed separately into Coulomb gauge. Of course, any center-projected configuration Z_µ(x), with links = ±I, trivially fulfills the Coulomb gauge condition

  $\sum_k \mathrm{Tr}\big[\sigma^a\big(U_k(x) + U^\dagger_k(x - \hat{k})\big)\big] = 0$,  (2.23)

but in general such configurations are far from the minimal Coulomb gauge, and are normally not even in the Gribov region. So in practice we perform a random gauge transformation on Z_µ(x), and then fix to a gauge copy in the Gribov region by the usual over-relaxation method. We will refer to such copies as "vortex-only" configurations. Applying the Arnoldi algorithm to calculate the F-P eigenvectors and eigenvalues, we compute the observables ε, ρ(λ), F_n, λ_n for both the vortex-only and vortex-removed configurations. Any purely center configuration lies on the Gribov horizon.
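A hedged sketch of the factorization just described, assuming the links are already fixed to maximal center gauge and stored as 2×2 SU(2) matrices (random links stand in here for a thermalized, gauge-fixed ensemble):

    import numpy as np

    # Hedged sketch of center projection and vortex removal for SU(2) links.
    # links[mu, n] holds 2x2 SU(2) matrices, assumed already transformed to
    # direct maximal center gauge (that over-relaxation step is not shown).
    rng = np.random.default_rng(0)
    sigma = np.array([[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]])

    def random_su2():
        q = rng.normal(size=4)
        q /= np.linalg.norm(q)
        return q[0] * np.eye(2) + 1j * np.einsum("i,ijk->jk", q[1:], sigma)

    L = 4
    links = np.array([[random_su2() for _ in range(L**4)] for mu in range(4)])

    # Center projection: Z_mu(x) = sign Tr U_mu(x), the closest center element
    Z = np.sign(np.real(np.trace(links, axis1=2, axis2=3)))

    # Vortex removal: U~_mu(x) = Z_mu(x) U_mu(x); since Z = +/-1, this flips
    # the sign of every link whose trace is negative and leaves the rest alone
    links_removed = Z[..., None, None] * links
    assert np.all(np.real(np.trace(links_removed, axis1=2, axis2=3)) >= 0)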
By "center configuration" we mean a lattice all of whose link variables can be transformed, by some gauge transformation, to center elements of the gauge group, and the "vortex-only" configurations are center configurations in this sense. It was shown in ref. [5] that for the SU(2) gauge group, such configurations have in general three non-trivial F-P zero modes, in addition to the three trivial, spatially constant zero modes (2.17). In computing ε for the vortex-only configurations, we therefore exclude the first six eigenvalues.

Results

Most of our simulations have been carried out at β = 2.1. This is not such a weak coupling, but it allows us to use modestly sized lattices whose volumes are nonetheless quite large compared to the confinement scale, and to study the volume dependence. The F-P observables are calculated in the full configurations, the thin-vortex configurations, and the vortex-removed configurations, each of which has been transformed to Coulomb gauge. Figures 1 and 2 show our results for ρ(λ) and F(λ) for the full configurations, on a variety of lattice volumes ranging from 8⁴ to 20⁴ (to reduce symbol overlap near λ = 0, we do not display the entire set of data points in F(λ)). The apparent sharp "bend" in ρ(λ) near λ = 0 becomes increasingly sharp, and happens ever nearer λ = 0, as the lattice volume increases. The impression these graphs convey is that in the limit of infinite volume, both ρ(λ) and F(λ) go to positive constants as λ → 0. However, for both ρ(λ) and F(λ) we cannot exclude the possibility that the curves behave like λ^p, λ^q near λ = 0, with p, q small powers. If we assume that the low-lying eigenvalue distribution scales with the total number of eigenvalues (3L³) in the manner suggested by random matrix theory, then it is possible to deduce, from the probability distribution of the lowest non-zero eigenvalue, the power dependence of ρ(λ) near λ = 0. This analysis is carried out in Appendix A, and gives us estimates of these exponents at small λ and large volume (α ≈ 0.25 for ρ(λ) ∼ λ^α in the full configurations), with perhaps a 20% error in the exponents. With this behavior the Coulomb confinement condition is satisfied, and the Coulomb self-energy is infrared divergent. In Fig. 3 we plot ε vs. lattice extension L, together with a best straight-line fit through the points at L = 10-20. The cut-off energy ε rises with L, and this rise is consistent with linear, although the data is not really good enough to determine the precise L-dependence.

[Figure 2: F(λ), the diagonal matrix element of (−∇²) in F-P eigenstates, plotted vs. the F-P eigenvalue.]

One might wonder how it is possible that a pair of quark-antiquark charges, in a global color-singlet combination, can have a finite total Coulomb energy in an infinite volume, given that each charge has a divergent self-energy. This question was addressed in ref. [5], which computed Coulomb energies from timelike link correlators. The answer is that both the quark self-energies, and the energy due to instantaneous one-gluon exchange between separated sources, contain an infrared-divergent constant. It can be shown that for global color singlets these constants precisely cancel, while in non-singlets the self-energy is not entirely cancelled, and the total energy is infrared divergent. Next we consider the F-P observables for the "vortex-only" configurations, consisting of thin vortex configurations (in Coulomb gauge) which were extracted from thermalized lattices as described above.
Our data for ρ(λ) and F(λ) over the same range (8⁴-20⁴) of lattice volumes are shown in Figs. 4 and 5. This non-zero limit for ρ(λ), F(λ) at λ → 0 is supported by an analysis of the low-eigenvalue universal scaling behavior as a function of L, which is reported in Appendix A. Once again, the confinement criterion (2.14) is obviously satisfied. Figure 6 shows our data for ε(L), again with a linear fit through the data points at L = 10-20, although the linear dependence is not really established. Finally, we consider the F-P observables of the vortex-removed configurations $\widetilde{U}$ transformed to Coulomb gauge. Our results are shown in Fig. 7 for ρ(λ), and Fig. 8 for F(λ). The behavior of these observables in the vortex-removed configurations is strikingly different from what is seen in the full and vortex-only configurations. A graph of the eigenvalue density, at each lattice volume, shows a set of distinct peaks, while the data for F(λ) is organized into bands, with a slight gap between each band. Closer inspection shows that the eigenvalue interval associated with each band in F(λ) precisely matches the eigenvalue interval of one of the peaks in ρ(λ). In order to understand these features, we consider the eigenvalue density of the F-P operator $M^{ab}_{xy} = \delta^{ab}(-\nabla^2)_{xy}$ appropriate to an abelian theory (or a non-abelian theory at zeroth order in the coupling). Although we can readily derive the result ρ(λ) ∼ √λ (2.15) at infinite volume, this result is slightly misleading at finite volume, where the eigenvalue density is actually a sum of delta-functions,

  $\rho(\lambda) = \frac{1}{3V_3} \sum_k N_k\, \delta(\lambda - \lambda_k)$.  (2.26)

In the sum, the index k labels the distinct eigenvalues of the lattice laplacian −∇², and N_k is the degeneracy of λ_k. Explicitly, to each distinct eigenvalue λ_k on an L⁴ lattice there is a set of integers n₁₋₃ such that

  $\lambda_k = 4 \sum_{i=1}^{3} \sin^2\!\Big(\frac{\pi n_i}{L}\Big)$.

The first few values of λ_k, and their degeneracies, are listed in Table 1.

Table 1: Eigenvalues λ_k of the zero-field lattice F-P operator −δ^{ab}∇², and their degeneracies N_k.

  k | N_k | n_i       | λ_k
  1 |   3 | (0, 0, 0) | 0
  2 |  18 | (1, 0, 0) | 4 sin²(π/L)
  3 |  36 | (1, 1, 0) | 8 sin²(π/L)
  4 |  24 | (1, 1, 1) | 12 sin²(π/L)
  5 |  18 | (2, 0, 0) | 4 sin²(2π/L)

We have compared these degeneracies N_k of the zeroth-order F-P eigenvalues with the number of eigenvalues per lattice configuration found inside the k-th "peak" of ρ(λ) and the k-th "band" of F(λ). We find there is a precise match. This leads to a simple interpretation: the vortex-removed configuration $\widetilde{U}_\mu$ can be treated as a small perturbation of the zero-field limit $\widetilde{U}_\mu = I$. This perturbation lifts the degeneracy of the λ_k, spreading the degenerate eigenvalues into bands of finite width in λ. At least for small k, these bands do not overlap. Likewise, the perturbation broadens the infinitely narrow δ-function peaks in the density of eigenstates, eq. (2.26), into the peaks of finite width seen in Fig. 7. Because both the density of eigenvalues and the data for F(λ) seem to be only a perturbation of the corresponding zero-field results, it appears most unlikely that the no-vortex configurations lead to a divergent Coulomb self-energy. In fact, we find that the low-lying eigenvalue spectrum scales with lattice extension L as λ ∼ 1/L², just as in the zero-field (or zeroth-order) case. We have not plotted ε(L) for the vortex-removed case, because for the smallest lattice volume no eigenvalues actually lie in the given range 0 < λ < 0.15, and even at L = 10, 12 only a few of the lowest eigenvalues are in this range.
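The entries of Table 1 are easy to verify by brute force; the short sketch below enumerates the free-laplacian eigenvalues and their degeneracies (including the 3-fold color factor of SU(2)).

    import numpy as np
    from collections import Counter

    # Brute-force check of Table 1: eigenvalues of the free lattice operator
    # -nabla^2 are lambda = 4 sum_i sin^2(pi n_i / L); each momentum carries
    # a 3-fold color degeneracy for SU(2).
    L = 12
    counts = Counter()
    for n1 in range(L):
        for n2 in range(L):
            for n3 in range(L):
                lam = 4.0 * sum(np.sin(np.pi * n / L) ** 2 for n in (n1, n2, n3))
                counts[round(lam, 10)] += 3

    for lam, N in sorted(counts.items())[:5]:
        print(f"lambda = {lam:.6f}   degeneracy N_k = {N}")
    # prints degeneracies 3, 18, 36, 24, 18, matching Table 1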
We do note, however, that for the vortex-removed configurations the values of ε on the large lattices are roughly two orders of magnitude smaller than the corresponding values for the full configurations. Vortex removal is in some sense a minimal disturbance of the original lattice configuration. The procedure only changes field strengths at the location of P-vortex plaquettes, and the fraction of P-plaquettes out of the total number of plaquettes on the lattice drops exponentially to zero with increasing β. Nevertheless, this modest change of the lattice configuration is known to set the string tension and topological charge to zero, and to remove chiral symmetry breaking. At β = 2.1, we have found that vortex removal also drastically affects the density of low-lying eigenvalues, and one sees, in the multi-peak structure of ρ(λ), the remnants of the delta-function peaks of the free theory. We have used this comparatively low value of β so that we may probe large physical distances, as compared to the distance scale set by the string tension, on rather modestly sized (up to 20⁴) lattices. This allows us to carry out the finite-volume scaling analysis reported in Appendix A. However, using such a small β has a price, in terms of our confidence in the effects of vortex removal. At β = 2.1 roughly 17% of all plaquettes are P-vortex plaquettes (cf. Fig. 20 in ref. [1]), and one may object that in this case vortex removal is not such a small disturbance of the lattice configuration. Perhaps the drastic effect of vortex removal on the eigenvalue density is simply an artifact of the substantial number of plaquettes modified. In order to address this concern, we have computed ρ(λ) and F(λ) at β = 2.3 and β = 2.4, where the P-vortex densities have dropped to around 9% and 4%, respectively, of the total number of plaquettes. The results are shown in Fig. 9, where we display the data for the unmodified, vortex-only, and vortex-removed configurations on the same plot. Only two lattice volumes are shown at each coupling. The effect of vortex removal is seen to be much the same at these higher β values as at β = 2.1. Again we see a multi-peak structure in ρ(λ), and a band structure in F(λ) at the low-lying eigenvalues (although the gap between bands narrows as β increases). In each configuration, we have checked that the number of eigenvalues in each peak of ρ(λ) matches the number of eigenvalues in each band of F(λ), and that this number is again equal to the degeneracy N_k of the corresponding zeroth-order eigenvalue. Therefore, the interpretation proposed for the no-vortex data at β = 2.1 appears to apply equally well at the higher β values: these data are simply perturbations of the zero-field result, for which the eigenvalue density is a sum of delta-functions. The data for the vortex-only and unmodified configurations are also qualitatively similar to the results we have obtained at the lower value β = 2.1, although the lattice volumes, in physical units, are considerably smaller than the corresponding lattice volumes at β = 2.1. We conclude that it is the vortex content of the thermalized configurations which is responsible for the enhancement of both the eigenvalue density and F(λ) near λ = 0, leading to an infrared-divergent Coulomb self-energy.

Gauge-Higgs theory

Next we consider a theory with the gauge field coupled to a scalar field of unit modulus in the fundamental representation of the gauge group. For the SU(2) gauge group, the lattice action can be written in the form [9]

  $S = \beta \sum_{plaq} \tfrac{1}{2}\mathrm{Tr}[UUU^\dagger U^\dagger] + \gamma \sum_{x,\mu} \tfrac{1}{2}\mathrm{Tr}[\phi^\dagger(x)\, U_\mu(x)\, \phi(x + \hat{\mu})]$,

with φ an SU(2) group-valued field.
Strictly speaking, this theory is non-confining for all values of β, γ. In particular, there is no thermodynamic phase transition (non-analyticity in the free energy) from the Higgs phase to a confinement phase; this is the content of a well-known theorem by Osterwalder and Seiler [10], and Fradkin and Shenker [11]. However, the Osterwalder-Seiler-Fradkin-Shenker (OS-FS) theorem is not the last word on phase structure in the gauge-Higgs system. In fact, there is a symmetry-breaking transition in this theory. If one fixes to Coulomb gauge, there is still a remaining freedom to perform gauge transformations which depend only on the time coordinate. It is therefore possible, in an infinite volume, that this symmetry is broken on any given time-slice (where the remnant symmetry is global). The order parameter for the symmetry-breaking transition is the modulus of the spatially averaged timelike link on a time slice,

  $Q = \Big\langle\, \Big| \frac{1}{L^3} \sum_{\vec{x}} U_0(\vec{x}, t) \Big| \,\Big\rangle$.

If Q → 0 as L → ∞, then the remnant gauge symmetry is unbroken, the Coulomb potential (as opposed to the static potential) between quark-antiquark sources rises linearly, and the energy of an isolated color-charge state of the form (2.1) is infrared divergent. Conversely, if Q > 0 at infinite volume, then the remnant symmetry is broken, the Coulomb potential is asymptotically flat, and the energy of an isolated color-charge state is finite. These matters are explained in some detail in ref. [5], where we report on a sharp transition between a symmetric ("confinement-like") phase and a broken ("Higgs") phase of remnant gauge symmetry. There is no inconsistency with the OS-FS theorem, which assures analyticity of local, gauge-invariant order parameters; the order parameter Q, when expressed as a gauge-invariant observable, is highly non-local. The transition line between the symmetric and broken phases is also not the location of a thermodynamic transition, in the sense of locating non-analyticity in the free energy. Rather, it is most likely a Kertész line, of the sort found in the Ising model at finite external magnetic field, which identifies a percolation transition [12]. In the gauge-Higgs case, the objects which percolate in the confinement-like phase, and cease to percolate in the Higgs phase, turn out to be center vortices [13] (see also [14,15]). In the previous section we investigated the effect, on the F-P observables, of removing center vortices from lattice configurations by hand. The gauge-Higgs system gives us the opportunity of suppressing the percolation of vortices by simply adjusting the coupling constants; we can then study the F-P observables in screened phases with and without percolation. We note that the "confinement-like" phase is a screened phase, rather than a true confinement phase, in that the energy of a static color charge is not infinite, because it can be screened by the dynamical Higgs particles. Nevertheless, in this phase the Coulomb potential is confining, and the Coulomb energy of an isolated charged state of the form (2.1) (an unscreened charge) is infrared infinite [5]. In Figs. 10 and 11 we display the results for ρ(λ) and F(λ) on a 12⁴ lattice at β = 2.1 and γ = 0.6, which is inside the symmetric (or "confinement-like") phase. Data extracted from the full-lattice, vortex-only, and vortex-removed configurations are shown on each of these plots; the vortex-removed data resemble those of a gauge theory in the Higgs phase. Vortex removal in the Higgs phase does not affect the data for ρ(λ) appreciably.
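For orientation, the toy calculation below illustrates how the order parameter Q distinguishes the two phases. The "thermalized" links are replaced by uniform random SU(2) elements (symmetric phase) or by elements frozen near the identity (Higgs-like phase); this substitution, and the operationalization of the modulus via sqrt(det), are assumptions of the sketch rather than a simulation of the actual gauge-Higgs dynamics.

    import numpy as np

    # Toy illustration of the remnant-symmetry order parameter.  For SU(2),
    # any average of group elements equals c*V with V in SU(2) and
    # c = sqrt(det(mean)) >= 0, which serves as the modulus |...|.
    rng = np.random.default_rng(1)
    sigma = np.array([[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]])

    def su2(q):
        q = q / np.linalg.norm(q)
        return q[0] * np.eye(2) + 1j * np.einsum("i,ijk->jk", q[1:], sigma)

    def Q_of_slice(timelike_links):
        return np.sqrt(np.abs(np.linalg.det(timelike_links.mean(axis=0))))

    for L in (6, 10, 14):
        disordered = np.array([su2(rng.normal(size=4)) for _ in range(L**3)])
        frozen = np.array([su2(np.array([5.0, *rng.normal(size=3)]))
                           for _ in range(L**3)])
        print(f"L={L:2d}:  Q(random) = {Q_of_slice(disordered):.3f}"
              f"   Q(near identity) = {Q_of_slice(frozen):.3f}")
    # Q(random) falls with volume (symmetric phase); Q(near identity) stays
    # near 1 (broken phase), mimicking the Q -> 0 vs Q > 0 criterion above.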
Returning to the F-P observables: it seems safe to conclude that this behavior, found in the Higgs phase, is consistent with an infrared-finite Coulomb self-energy.

Eigenvalue density in the deconfined phase

It was reported in ref. [5] that the instantaneous color-Coulomb potential V_coul(r) is linearly rising at large separation, i.e. V_coul(r) ∼ σ_coul r, where σ_coul is a Coulomb string tension, in the high-temperature deconfined phase of pure gauge theory. This fact is surprising at first sight, because the free energy V(r) of a static quark pair in the deconfined phase is not confining. But it is not paradoxical, because V_coul(r) is the energy of unscreened charges, and provides only an upper bound on V(r) [3]. Confining behavior in V_coul(r) is a necessary but not sufficient condition for confinement, as we have already noted. In fact, the confining behavior of V_coul(r) in the deconfined phase was to be expected. The color-Coulomb potential is derived from the non-local term H_coul in the Coulomb-gauge Hamiltonian; this term depends on the Faddeev-Popov operator via the expression $M^{-1}(-\nabla^2)M^{-1}$, and the F-P operator, on the lattice, depends only on spacelike links at a fixed time. But we know that even at very high temperatures, spacelike links at any fixed time form a confining D = 3 dimensional lattice, and spacelike Wilson loops have an area-law falloff just as in the zero-temperature case [16]. Since V_coul(r) depends only on the three-dimensional lattice, it is natural that V_coul(r) confines at any temperature, and that remnant gauge symmetry (as opposed to center symmetry) is realized in the unbroken phase. The role of center vortices in producing an area law for spacelike loops at high temperatures has been discussed in refs. [17]. If the confinement property of spacelike links is eliminated by vortex removal, then by the previous reasoning we would expect σ_coul to vanish as a consequence. This consequence was also verified in ref. [5]. Given these results, it may be expected that the color-Coulomb self-energy E_r of an isolated static charge is infrared divergent in the deconfined phase, and that this is associated with an enhancement of the eigenvalue density ρ(λ) of the Faddeev-Popov operator, and of F(λ), the expectation value of (−∇²) in F-P eigenstates. To test this we have evaluated ρ(λ) and F(λ) at β = 2.3 on 16³ × 2 and 20³ × 2 lattices, which are inside the deconfined phase. We have done this for the full configurations, and also for the vortex-only and vortex-removed configurations, defined as in the zero-temperature case. The results are shown in Figs. 14 and 15. The striking feature of these figures is their strong resemblance to the corresponding figures in the confined phase at zero temperature, namely Figs. 1 and 2 for the full configurations, Figs. 4 and 5 for the vortex-only configurations, and Figs. 7 and 8 for the vortex-removed configurations. Although we have not attempted to determine the critical exponents in the deconfined phase, it is clear that there is an enhanced density of low-lying eigenvalues of the Faddeev-Popov operator, and that this is associated with the center-vortex content of the configurations. It is hard to avoid the conclusion that the Gribov scenario and the vortex-dominance theory apply in both the confined and deconfined phases.

Thin vortices and the eigenvalue density

In the preceding sections we have studied the eigenvalue distribution in thin vortex configurations, extracted via center projection from thermalized lattices.
It is also of interest to study the eigenvalue spectrum of thin vortex configurations of some definite geometry, such as an array. A single thin vortex, occupying two parallel planes in the four-dimensional lattice, can be constructed in the following way: begin with the zero-field configuration, i.e. all links U_µ(x) = I, and then flip to the other center element, −I, those x-direction links lying at one fixed value of x and within an interval of y values, for all z and t. This creates two P-plaquettes in every xy plane, which extend to two vortex lines (stacks of P-plaquettes along a line on the dual lattice) at a fixed time, and to two vortex planes when extended also in the t-direction of the lattice. The two planes bound a connected (Dirac) 3-volume, and will therefore be referred to as a single closed vortex. Generalizing slightly, we create N vortices parallel to the zt plane by repeating this construction at N well-separated values of x. We will consider a class of configurations characterized by the pair of integers (N, P), where P is the number of planar orientations (i.e. P = 1 means only vortices parallel to the zt plane, while P = 3 means vortices parallel to the xt, yt and zt planes), and N is the number of closed vortices in each planar orientation. As already noted, any configuration consisting of links equal to ±I trivially fulfills the Coulomb gauge condition, but also generally lies outside the Gribov region. We therefore perform a random gauge transformation on each thin vortex configuration, and fix to a Coulomb gauge copy in the Gribov region by over-relaxation. Then the F-P eigenvalue spectrum (the same at any time-slice) is extracted by the Arnoldi algorithm. The results are shown in Fig. 16 for the first 20 F-P eigenvalues obtained on a 12⁴ lattice for the zero-field (0,0), one-vortex (1,1), three-vortex (1,3), and nine-vortex (3,3) configurations. As mentioned previously, it can be shown analytically that arrays of thin vortices can have some additional zero modes, beyond the three trivial zero modes which exist for any lattice configuration. These additional modes are seen numerically in our calculation of the vortex spectrum. Apart from these extra zero modes, we see that the low-lying eigenvalue spectrum of the one-vortex configuration is not much different from the zero-field result. But the magnitudes of the low-lying eigenvalues drop abruptly, compared to the laplacian (0,0) eigenvalues, upon going to a three-vortex configuration, and drop still further in a nine-vortex array. This is only a qualitative result, but it does illustrate quite clearly the connection between vortices and the Gribov horizon. When the vortex geometry is chosen to imitate various features of percolating vortices, e.g. piercing planes in all directions and distributed throughout the lattice, then the low-lying eigenvalues have very small magnitudes as compared to the zero-field (or, in perturbation theory, zeroth-order) result. This implies a pileup of F-P eigenvalues near λ = 0, which we know is required for confinement.

Convexity of FMR and Gribov regions in SU(2) lattice gauge theory

In the section that follows this one, we will show that vortex configurations play a special role in the geometry of the fundamental modular region Λ and of the Gribov region Ω. But first we interrupt our narrative to establish a very general convexity property of these regions in lattice gauge theory. We start by recalling the well-known convexity of Λ and Ω in continuum gauge theory [6]. The Gribov region Ω is the set of configurations that are transverse, ∇ · A = 0, and for which the Faddeev-Popov operator M(A) = −∇ · D(A) is positive.
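The thin-vortex construction just described can be sketched directly at the level of Z₂ link variables (a hedged toy; the sheet coordinates x0, y1, y2 are illustrative choices):

    import numpy as np

    # Hedged sketch of the (N, P) thin-vortex arrays with Z2 links: start from
    # all links = +1 and flip x-links to -1 at fixed x0, for y1 <= y < y2 and
    # all z, t.  The two resulting sheets bound the Dirac 3-volume and form a
    # single closed vortex.
    L = 12
    links = np.ones((4, L, L, L, L))          # [mu, x, y, z, t]

    def add_vortex(links, x0, y1, y2):
        links[0, x0, y1:y2, :, :] *= -1.0     # flip U_x on the Dirac 3-volume
        return links

    links = add_vortex(links, x0=3, y1=2, y2=8)

    def xy_plaquette(lk, x, y, z, t):
        return (lk[0, x, y, z, t] * lk[1, (x + 1) % L, y, z, t]
                * lk[0, x, (y + 1) % L, z, t] * lk[1, x, y, z, t])

    n_pierced = sum(xy_plaquette(links, x, y, z, t) < 0
                    for x in range(L) for y in range(L)
                    for z in range(L) for t in range(L))
    print("P-plaquettes per xy plane:", n_pierced / L**2)   # -> 2.0

Further calls to add_vortex at separated x values (larger N), or analogous flips of other link directions (larger P), build up the arrays whose F-P spectra are reported in Fig. 16.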
The Gribov region is also the set of all relative minima, on all gauge orbits, of the functional $F_A(g) = \|{}^g A\|^2$, where $\|A\|$ is the Hilbert norm of the configuration A, and ${}^g A$ is its gauge transform by g. The fundamental modular region Λ is the subset of the Gribov region, Λ ⊂ Ω, which consists of all absolute minima, $F_A(1) \le \|{}^g A\|^2$ for all local gauge transformations g(x). It is well known that in continuum gauge theory both of these regions are convex: namely, if A₁ and A₂ are configurations in Λ (or Ω), then so is A = αA₁ + βA₂, where 0 < α < 1 and β = 1 − α. Geometrically, A lies on the line segment that joins A₁ and A₂. We shall establish a similar, but slightly weaker, property that holds in SU(2) lattice gauge theory. This is quite surprising, because convexity is a linear concept, whereas lattice configurations are elements of a non-linear space. We parametrize SU(2) configurations by

  $U_i(x) = b_i(x)\,\mathbf{1} + i\,\vec{a}_i(x) \cdot \vec{\sigma}$, with $b_i(x) = \pm\sqrt{1 - \vec{a}_i^{\,2}(x)}$.

The $\vec{a}_i(x)$ would be a complete set of coordinates, but for the sign ambiguity of b_i(x) above, corresponding to the northern and southern hemispheres. We call U⁺ the set of configurations U where all link variables lie on the northern hemisphere for all x and i. The $\vec{a}_i(x)$ are a complete set of coordinates in U⁺. In minimal Coulomb gauge the fundamental modular region Λ has the defining property that a configuration U in Λ satisfies

  $\sum_{x,i} \mathrm{Tr}[U_i(x)] \ge \sum_{x,i} \mathrm{Tr}[{}^g U_i(x)]$  (6.2)

for all g(x). Thus the gauge choice makes all the U_i(x) as close to the identity as possible, in an equitable way over the whole lattice. In this gauge, the link variables U_i(x) for equilibrated configurations lie overwhelmingly in the northern hemisphere, especially in the continuum limit β → ∞. We define Λ⁺ to be the restriction of Λ to U⁺, the set of configurations whose link variables all lie in the northern hemisphere, and we call it the restricted fundamental modular region. We expect that these are the important configurations in the continuum limit. This is in fact necessary if the gauge-fixed lattice theory possesses a continuum limit: in that case the coordinates $\vec{a}_i(x)$ are proportional to $a\,\vec{A}_i(x)$, where a is the lattice spacing and $A_i(x)$ is the continuum gauge connection. In the minimal lattice Coulomb gauge, the $\vec{a}_i(x)$ satisfy the lattice transversality condition

  $\sum_i \big[\,\vec{a}_i(x) - \vec{a}_i(x - \hat{\imath})\,\big] = 0$,

which is a linear condition on the coordinates $a^b_i(x)$. This suggests that we identify configurations in U⁺ with the space of coordinates $a^b_i(x)$, and that we add configurations by adding coordinates,

  $\vec{a}_i(x) = [\alpha\, \vec{a}_1 + \beta\, \vec{a}_2]_i(x) = \alpha\, \vec{a}_{1,i}(x) + \beta\, \vec{a}_{2,i}(x)$.  (6.5)

This yields a well-defined configuration U ∈ U⁺ only if $\vec{a}_i^{\,2}(x) \le 1$ for all links. This is assured in the case of interest to us, where 0 < α < 1 and β = 1 − α: indeed, by the triangle inequality we have $|\vec{a}_i(x)| \le |\alpha\, \vec{a}_{1,i}(x)| + |\beta\, \vec{a}_{2,i}(x)| \le \alpha + \beta = 1$. With addition so defined, Λ⁺ is convex; the proof is given in Appendix B. We may establish the same convexity property for the Gribov region Ω, namely that Ω⁺ ≡ Ω ∩ U⁺ is convex. The Gribov region has the defining property

  $(\omega,\, M(U)\,\omega) \ge 0$

for all ω. The proof for Ω is the same as for Λ, because this inequality has the same structure as (6.2), being linear in $\vec{a}_i(x)$ and $b_i(x)$.

Vortices as vertices

We have seen that Ω⁺ is convex, so one might think it has a simple oval shape. We shall show, however, that when thin center vortex configurations are gauge transformed into minimal Coulomb gauge, they are mapped into points U₀ on the boundary ∂Ω of the Gribov region Ω where this boundary has a wedge-conical singularity of a type described below.
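A quick numerical illustration of this addition rule (a hedged sketch, with links drawn at random from the northern hemisphere): combining two U⁺ configurations coordinate-wise always yields a valid U⁺ configuration, exactly as the triangle-inequality argument guarantees.

    import numpy as np

    # Numerical spot-check of the convex combination (6.5): draw a-coordinates
    # uniformly from the unit ball (northern-hemisphere links), combine two
    # configurations coordinate-wise, and verify the result stays in U+.
    rng = np.random.default_rng(2)
    n_links, alpha = 1000, 0.3

    def northern_a(n):
        a = rng.normal(size=(n, 3))
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        return a * rng.uniform(size=(n, 1)) ** (1.0 / 3.0)  # uniform in |a| <= 1

    a = alpha * northern_a(n_links) + (1.0 - alpha) * northern_a(n_links)
    norms = np.linalg.norm(a, axis=1)
    b = np.sqrt(1.0 - norms**2)     # real and >= 0: still northern hemisphere
    assert norms.max() <= 1.0
    print(f"max |a| after combination: {norms.max():.4f} (<= 1, as guaranteed)")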
This mapping illustrates the intimate relation between dominance by configurations in the neighborhood of the Gribov horizon and dominance by configurations in the neighborhood of thin center vortex configurations.

Previous results

We call a "thin center vortex configuration" (or center configuration) one where, in maximal center gauge, all link variables are center elements, U_i(x) = Z_i(x). [Such a configuration may be characterized in a gauge-invariant way by the statement that all holonomies are center elements. (A holonomy is a closed Wilson loop.) This definition also holds in continuum gauge theory.] When any configuration is gauge transformed by a minimization procedure, such as used here, it is mapped into the Gribov region Ω. In [5] it was shown: when a thin center vortex configuration is gauge transformed into minimal Coulomb gauge, it is mapped onto a configuration U₀ that lies on the boundary ∂Ω of Ω. Moreover, its Faddeev-Popov operator M(U₀) = −∇ · D(U₀) has a non-trivial null space that is (N² − 1)-dimensional. (This is in addition to the (N² − 1)-dimensional trivial null space consisting of constant eigenvectors. Here and below we do not count trivial eigenvectors that are generators of global gauge transformations.) Likewise, when an abelian configuration is gauge-transformed into the Gribov region, its Faddeev-Popov operator has a non-trivial R-dimensional null space, where R is the rank of the group. The reason is that the gauge orbit of a center or abelian configuration is degenerate, so the gauge-invariant equation D_i(U₀)ω = 0, which holds for i = 1, 2, 3 simultaneously, has N² − 1 or R non-trivial solutions, respectively.

Tangent plane to the Gribov horizon at a regular point

We continue to use the coordinates $a^b_i(x)$ introduced above, in terms of which Ω⁺ (see above) is convex. We write U₀ + δU = U(a₀ + δa). Here δa is an arbitrary (transverse) small variation of the coordinates at a₀. Such a variation is a tangent vector at a₀, and the space of tangent vectors constitutes the tangent space at a₀. Let U₀ be a configuration in Coulomb gauge that lies on the boundary ∂Ω of the Gribov region. By definition, the corresponding Faddeev-Popov operator has a non-trivial null eigenvector,

  $M(U_0)\,\omega_0 = 0$,  (7.1)

all other eigenvalues being non-negative. We are interested in the case of a center configuration in Coulomb gauge, where the non-trivial null eigenvalue of M(U₀) is (N² − 1)-fold degenerate, but for orientation we first consider the case where the non-trivial null eigenvalue is non-degenerate. Let U₀ + δU be a neighboring point that is also on the Gribov horizon, so M(U₀ + δU) also possesses a null vector. We wish to find the condition on δa that holds whenever U(a₀ + δa) also lies on the Gribov horizon ∂Ω. We have $M(U_0 + \delta U) = M(U_0) + \delta M$, and to first order in small quantities the null vector is $\omega_0 + \delta\omega$. Contracting the perturbed null-vector equation with ω₀, and using (7.1), we obtain

  $(\omega_0,\, \delta M\, \omega_0) = 0$.  (7.7)

Geometrically, this is the statement that $\delta a^b_i(x)$ is perpendicular to the vector $(\omega_0,\, \partial_{xib} M_0\, \omega_0)$, so this vector is the normal to ∂Ω at U₀ = U(a₀). It defines the hyperplane that is tangent to the Gribov horizon at a₀. Now consider instead a point U₀ where the non-trivial null eigenvalue is P-fold degenerate. As noted above, this happens for every gauge copy in Coulomb gauge of a thin center vortex configuration, where P = N² − 1. Under the small perturbation M(a₀ + δa) = M(a₀) + δM, where δM is given above, the P-fold degenerate null space splits into P levels with eigenvalues δλ_n, for n = 1, ..., P, that depend linearly on the δa,

  $\delta\lambda_n = \sum_{x,i,b} C^b_{n,i}(x)\, \delta a^b_i(x)$.
For a point a₀ on the boundary ∂Ω, we define the Gribov region of the tangent space at a₀ to be the set of tangent vectors that point inside Ω. We designate it by $\Omega_{a_0}$. More formally, it is the set of tangent vectors δa at a₀ such that a₀ + δa lies in Ω,

  $\Omega_{a_0} \equiv \{\delta a : a_0 + \delta a \in \Omega\}$.  (7.10)

Recall that by definition the Gribov region Ω consists of (transverse) configurations U(a) such that all eigenvalues of M(a) are positive. It follows that $\Omega_{a_0}$ is the set of δa such that all P eigenvalues δλ_n(δa) are positive,

  $\Omega_{a_0} \equiv \{\delta a : \delta\lambda_n(\delta a) > 0 \text{ for } n = 1, \ldots, P\}$.  (7.11)

This condition is quite restrictive because, for generic δa, some number ν of the eigenvalues δλ_n are negative while P − ν are positive, where ν = 0, 1, ..., P. As a result, the boundary $\partial\Omega_{a_0}$ of the Gribov region at a₀ is not simply a tangent plane through a₀, as before. Rather, it is a high-dimensional wedge-conical vertex whose shape we shall find.

Degenerate perturbation theory

The eigenvalues in an infinitesimal neighborhood of a point U₀ on the Gribov horizon are determined by the secular equation of degenerate perturbation theory,

  $\sum_n \delta a_{mn}\, c_n = \delta\lambda\, c_m$, where $\delta a_{mn} \equiv (\omega_m,\, \delta M\, \omega_n)$

is the matrix of the perturbation in a basis ω_m of the degenerate null space. Abstractly, δa is a tangent vector at the point a₀ in lattice configuration space; the components of this vector, in a suitable basis, are denoted $\delta a^b_i(x)$. We may also regard the set of numbers δa_mn as components of δa in some other basis. The eigenvalue equation has P solutions δλ_n. The condition that they all be positive, which determines $\Omega_{a_0}$, the Gribov region at a₀, is the condition that the matrix δa_mn define a strictly positive form. Such a form is characterized by the Sylvester criterion: det δa_mn must be positive, together with the determinants of all principal minors (diagonal square submatrices) of this matrix. The boundary of this region is determined by the condition that one eigenvalue vanish, which happens when the determinant vanishes,

  $\det \delta a_{mn} = 0$.  (7.16)

Two-fold degeneracy

We first analyze 2-fold degeneracy. In this case, positivity of the two principal minors and of the determinant reads

  $\delta a_{11} > 0$, $\delta a_{22} > 0$, $\delta a_{11}\,\delta a_{22} - \delta a_{12}^2 > 0$.  (7.17)

In terms of $\delta a_+ \equiv \tfrac{1}{2}(\delta a_{11} + \delta a_{22})$ and $\delta a_- \equiv \tfrac{1}{2}(\delta a_{11} - \delta a_{22})$, the last condition reads

  $\delta a_+^2 - \delta a_-^2 - \delta a_{12}^2 > 0$.

The three inequalities define the interior of the "future" cone in the 3 variables δa₊, δa₋ and δa₁₂, with vertex at the origin. Thus the boundary of the Gribov region at a₀ is a cone in these 3 variables. Taking account of the remaining components of δa, the conical singularity can be viewed as a kind of wedge in higher dimensions.

Three-fold degeneracy

For SU(2) gauge theory, the thin-vortex configurations are 3-fold degenerate points on the Gribov horizon. In this case the positivity of the principal minors and the determinant reads

  $\delta a_{11} > 0$, $\delta a_{22} > 0$, $\delta a_{33} > 0$,
  $\delta a_{11}\delta a_{22} - \delta a_{12}^2 > 0$, $\delta a_{11}\delta a_{33} - \delta a_{13}^2 > 0$, $\delta a_{22}\delta a_{33} - \delta a_{23}^2 > 0$,
  $\det \delta a_{mn} > 0$.

These 7 inequalities characterize the Gribov region $\Omega_{a_0}$ of the tangent space at a point a₀ on the Gribov horizon that is 3-fold degenerate. From our discussion of 2-fold degeneracy we know that the positivity of each of the 2-by-2 determinants and of its diagonal elements defines a "future" cone. We call these future cones F₁₂, F₁₃ and F₂₃, respectively; for example, F₁₂ is defined by $\delta a_{11} > 0$, $\delta a_{22} > 0$ and $\delta a_{11}\delta a_{22} - \delta a_{12}^2 > 0$. Each is a cone of opening half-angle π/4, and we have shown that $\Omega_{a_0}$ is contained in the intersection of the 3 future cones,

  $\Omega_{a_0} \subset F_{12} \cap F_{13} \cap F_{23}$.

This condition and the 3-by-3 determinantal inequality det δa_mn > 0 characterize the Gribov region $\Omega_{a_0}$ in the tangent space of a point U₀ = U(a₀) that is 3-fold degenerate.
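The equivalence between positivity of the δλ_n, the Sylvester criterion, and the future-cone description is easily spot-checked numerically in the 2-fold degenerate case (a hedged sketch over random symmetric perturbation matrices):

    import numpy as np

    # Spot-check, for 2-fold degeneracy, that three descriptions of the
    # tangent-space Gribov region coincide: positive eigenvalues of da_mn,
    # the Sylvester criterion (7.17), and the future-cone condition
    # da_+ > sqrt(da_-^2 + da_12^2).
    rng = np.random.default_rng(3)
    for _ in range(10_000):
        m = rng.normal(size=(2, 2))
        m = 0.5 * (m + m.T)                                 # symmetric da_mn
        eig_pos   = bool(np.all(np.linalg.eigvalsh(m) > 0))
        sylvester = bool(m[0, 0] > 0 and np.linalg.det(m) > 0)
        da_p = 0.5 * (m[0, 0] + m[1, 1])
        da_m = 0.5 * (m[0, 0] - m[1, 1])
        cone = bool(da_p > np.hypot(da_m, m[0, 1]))
        assert eig_pos == sylvester == cone
    print("eigenvalue positivity == Sylvester == future cone, on 10^4 samples")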
Overall picture of the Gribov horizon and its center-vortex singularities

We have seen that Ω⁺ is convex and that thin center vortex configurations are wedge-conical singularities on the boundary ∂Ω. Those that are on ∂Ω⁺ are extremal elements, like the tips on a very high-dimensional pineapple. Indeed, each center configuration is an isolated point: if one moves a small distance from a center configuration, it is no longer a center configuration. Its gauge transform U(a₀) in Coulomb gauge is likewise an isolated point on the Gribov horizon. Thus the wedge in the boundary ∂Ω at a₀ that we have just described occurs at an isolated point, where the Gribov horizon may be said to have a "pinch". In SU(2) gauge theory there are $2^{dV}$ center configurations (where d is the number of dimensions of space and V is the volume of the lattice), because there are dV links in the lattice and there are 2 center elements in SU(2). These are related by $2^V$ gauge transformations, so there are $2^{(d-1)V}$ center orbits. The absolute minimum of each of these orbits lies on the common boundary of the fundamental modular region ∂Λ and the Gribov region ∂Ω. So there are at least $2^{(d-1)V}$ tips on the above-mentioned pineapple, a truly enormous number. Moreover, for each such orbit there are many Gribov copies, all lying on ∂Ω (the spin-glass problem). These are all singular points of the Gribov horizon of the type described. For SU(2) there may not be any other singular points on ∂Ω. It is possible that the thin center vortex configurations provide a rather fine triangulation of ∂Ω.

Note: the gauge transform in Coulomb gauge of an abelian configuration also lies on the Gribov horizon. However, the SU(2) group is of rank 1, so, for SU(2) gauge theory, such a configuration is invariant under a one-parameter group of transformations only. As a result, the corresponding null space, M(U₀)ω₀ = 0, is only one-dimensional, and the present considerations do not indicate that these are singularities of ∂Ω.

Coulomb gauge as an attractive fixed point of the interpolating gauge

The data reported above have been obtained by numerically gauge fixing to the minimal Coulomb gauge, using a well-defined numerical minimization procedure on a lattice of finite volume. However, not every lattice gauge corresponds to a well-defined continuum gauge, so one may well ask whether our results have a continuum analog, especially since in continuum theory the Coulomb gauge is a singular gauge. Indeed, in Coulomb gauge, some propagators of the form $1/\vec{k}^2$ appear instead of $1/(k_0^2 + \vec{k}^2)$, which leads to unrenormalizable divergences $\int dk_0$ in some closed loops. This has been treated by using the action in phase space (first-order formalism), which allows a systematic cancellation of these divergences [18], or by using an interpolating gauge [19] with gauge condition

  $a\,\partial_0 A_0 + \nabla \cdot \vec{A} = 0$.  (8.1)

This gauge interpolates between the Landau gauge, a = 1, and the Coulomb gauge, a = 0, and may also be achieved by a numerical minimization procedure. For a > 0 it is a regular continuum gauge, and the gauge parameter a provides a regularization of the divergences of the Coulomb gauge, which is obtained in the limit a → 0 of the interpolating gauge.
However, as a possible obstacle to this regularization, it must be noted that the quantities that appear in the interpolating gauge condition (8.1) are unrenormalized quantities, and the unrenormalized gauge parameter depends on the ultraviolet cut-off Λ, a = a(Λ/Λ_QCD). For the Coulomb-gauge limit of the interpolating gauge to be a smooth limit, it is necessary that the gauge parameter a(Λ/Λ_QCD) flows toward (or at least not away from) the Coulomb-gauge value a = 0 as the ultraviolet cut-off is removed, Λ → ∞. In Appendix C its dependence on Λ is calculated in the neighborhood of a = 0, to one-loop order in the perturbative renormalization group, with the result that in pure SU(N) gauge theory a = 0 is indeed an attractive ultraviolet fixed point.

Conclusions

We have found that the low-lying eigenvalues of the Faddeev-Popov operator, in thermalized lattices, tend towards zero as the lattice volume increases. This means that in the infinite-volume limit, thermalized configurations lie on the Gribov horizon. That fact alone would not allow us to make any strong conclusions about the energy of unscreened color charge. However, the data also indicate that the density ρ(λ) of F-P eigenvalues goes as a small power of λ, at infinite volume, as λ → 0. Together with the behavior of F(λ) at λ → 0, we conclude that the energy of an unscreened color charge is infrared divergent, and that this divergence can be attributed to the near-zero modes of the Faddeev-Popov operator. This evidence clearly supports the Gribov horizon scenario advocated by Gribov and Zwanziger in ref. [2]. This scenario was invented to account for confinement, and the reader may be surprised to find a linearly rising color-Coulomb potential in the deconfined phase. However, this is nicely explained by the horizon scenario in Coulomb gauge, as we now explain. In Coulomb gauge the gauge fixing is done independently on each 3-dimensional time slice, and may be done on a single time slice. According to the horizon scenario, on each time slice, 3-dimensional configurations A_i(x) are favored that lie near the horizon of a 3-dimensional gauge theory, and this enhances the instantaneous color-Coulomb potential. This is true for every temperature T, including in the deconfined phase, because temperature determines the extent of the lattice in the fourth dimension. Thus, the horizon scenario provides a framework in which confinement may be understood, but it is not detailed enough to tell us under what conditions the infinite color-Coulomb potential may be screened to give a finite self-energy, as measured by the Polyakov loop. By factoring thermalized lattices into vortex-only and vortex-removed components, we have also been able to show that the constant density of low-lying F-P eigenvalues can be entirely attributed to the vortex component. We find that the eigenvalue density of the vortex component is qualitatively similar to that of the full configuration. The eigenvalue density of the vortex-removed component, on the other hand, is drastically different from that of the full configuration. This density can be interpreted as simply a small perturbation of the zero-field (or zeroth-order) result, and it is identical in form to the (non-confining) eigenvalue density of lattice configurations in the Higgs phase of a gauge-Higgs theory. These findings establish a firm connection between the center vortex and Gribov horizon confinement scenarios. According to the center vortex doctrine, fluctuations in the vortex linking number are responsible for the area-law falloff of Wilson loops.
It now appears that vortex configurations are also responsible for the enhanced density of near-zero F-P eigenvalues, which is essential to the Gribov horizon picture. This result is consistent with previous results in ref. [20], where it was found that vortex removal also removes the Coulomb string tension of the color-Coulomb potential. It is also consistent with recent investigations of Gattnar, Langfeld, and Reinhardt [21] in Landau gauge.

The F-P eigenvalue spectrum at high temperatures, with and without vortices, turns out to be quite similar to the corresponding results at low temperature. This similarity was to be expected, since the F-P operator depends only on spacelike links at a fixed time, and even at high T these form a confining three-dimensional ensemble for spacelike Wilson loops. The Gribov scenario must therefore be relevant to physics in the deconfined phase; cf. ref. [22] for a recent application.

We also report a result which supports the consistency of Coulomb gauge itself in the continuum limit. Coulomb gauge is very singular in continuum perturbation theory, and one method of making it better defined is to view Coulomb gauge as a non-singular limit of the more general gauge condition a∂₀A₀ + ∇·A = 0. The success of this approach depends on whether the Coulomb-gauge limit, a = 0, is an attractive ultraviolet fixed point of the renormalization-group flow. Here we have shown that this requirement is satisfied.

Finally, we have uncovered an intriguing geometrical property of thin vortices in lattice configuration space. In ref. [5] it was shown that thin vortices (gauge equivalent to center configurations) lie on the Gribov horizon. The Gribov horizon is a convex manifold, and we have shown here that thin vortices are conical singularities on that manifold. Percolating thick vortices appear to be ubiquitous in thermalized lattice configurations at or near the Gribov horizon; it is conceivable that the special geometrical status of thin vortices is in some way related to the ubiquity of their finite-thickness cousins.

A. Finite-volume scaling of low-lying F-P eigenvalues

In N × N random matrix models, the tail of the eigenvalue distribution often displays a universal scaling behavior with N. This fact has found important applications in the study of chiral symmetry breaking, where this sort of universal scaling is found in the density of near-zero eigenvalues of the Euclidean Dirac operator as a function of lattice volume (which is proportional to the number of eigenvalues) [23]. In our case, we are also interested in the density of near-zero eigenvalues in the infinite 3-volume limit. The statement in this case is as follows (cf., e.g., ref. [24]): Suppose the normalized density of low-lying eigenvalues, at very large volumes, goes as

ρ(λ) ≈ κ λ^α,   (A.1)

where κ is some constant. Then the density of eigenvalues, the average spacing between the low-lying eigenvalues, and the probability density P(λ_n) of the n-th low-lying eigenvalue agree for every lattice 3-volume V₃, if the eigenvalues themselves are rescaled according to

z = λ V₃^p,  with  p = 1/(1 + α).   (A.2)

To see this, note that the number of eigenvalues in an interval Δλ near zero is approximately V₃ ρ(λ) Δλ ≈ V₃ κ λ^α Δλ; written in terms of the rescaled variable z, this is proportional to V₃^{1−p(1+α)} κ z^α Δz. If we require that this number of eigenvalues depends only on the rescaled variables z, Δz, then it is necessary that p = 1/(1 + α), and eq. (A.2) follows. Our strategy is to plot the probability density P(λ_min), rescaled by a factor V₃^{−1/(1+α)}, as a function of the variable z_min = λ_min V₃^{1/(1+α)}, where λ_min is the lowest non-zero eigenvalue. The rescaling of the probability density ensures that its integral over z_min is unity.
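The collapse test just described is easy to state algorithmically. The following is a minimal numerical sketch, with synthetic λ_min samples standing in for the lattice data (the sampling distribution, volumes, and all names are illustrative; this is not the analysis code used for Fig. 17):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lambda_min(volume, alpha_true, n_samples=20000):
    """Draw synthetic lowest-eigenvalue samples whose scale shrinks as
    V^(-1/(1+alpha)), mimicking rho(lambda) ~ lambda^alpha near zero."""
    scale = volume ** (-1.0 / (1.0 + alpha_true))
    # A Weibull with shape (1 + alpha) has density ~ x^alpha near zero.
    return scale * rng.weibull(1.0 + alpha_true, size=n_samples)

def collapse_mismatch(samples_by_volume, alpha):
    """Rescale each sample set by V^(1/(1+alpha)) and measure how badly
    the rescaled distributions fail to coincide."""
    bins = np.linspace(0.0, 3.0, 40)
    hists = []
    for volume, lam in samples_by_volume.items():
        z = lam * volume ** (1.0 / (1.0 + alpha))
        h, _ = np.histogram(z, bins=bins, density=True)
        hists.append(h)
    return np.sum(np.var(np.array(hists), axis=0))  # zero for a perfect collapse

# Synthetic "lattice data" at several 3-volumes, generated with alpha = 0.25.
volumes = [8**3, 12**3, 16**3, 20**3]
data = {V: sample_lambda_min(V, alpha_true=0.25) for V in volumes}

# Scan alpha and report the best-collapsing value.
alphas = np.linspace(-0.2, 0.8, 51)
best = min(alphas, key=lambda a: collapse_mismatch(data, a))
print(f"best-collapse alpha ~ {best:.2f} (true value 0.25)")
```

The best-collapse criterion used here (minimal bin-by-bin variance across volumes) is one simple choice; any measure of distributional agreement would serve the same purpose.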
P(λ_min) is computed on 8⁴ to 20⁴ lattice volumes, at various values of α. If we can find a value of α for which the (rescaled) probability densities P(λ_min) coincide at all lattice sizes, then this determines α, and in turn the behavior of the eigenvalue density near λ = 0. The results for rescaled P(λ_min), for both the full configurations (λ_min = λ₄) and the vortex-only configurations (λ_min = λ₇), are displayed in Fig. 17. We show only three values of α in each case, which include our best estimate for the scaling α. We find that α = 0.25 ± 0.05 for the full configurations and α = 0.0 ± 0.05 for the vortex-only configurations seem to give the best scaling results; the error estimate is subjective.

The lattice data for L = 20 in the vortex-only configurations call for some comment. This distribution does not match the distributions obtained at smaller L at any α. This could mean that the scaling hypothesis for low-lying eigenvalues is invalid, but in our opinion the mismatch at L = 20 has a different explanation: we believe the problem is connected to difficulties we have encountered with gauge fixing on these large lattices. As lattice size is increased, typical values of λ_min become smaller, and the number of over-relaxation steps required to satisfy our Coulomb-gauge convergence criterion increases. In general, on a given lattice size, the number of over-relaxation steps required to gauge fix is inversely correlated with the size of λ_min in the gauge-fixed configuration. It happens occasionally that even after 10,000 gauge-fixing steps on a given time slice, the spatial lattice configuration has not converged to Coulomb gauge. When this happens, we perform a random gauge transformation at that time slice and gauge fix a second time, to a different Gribov copy (note that in Coulomb gauge, time slices can be, and were, gauge-fixed independently). This procedure most likely biases the result towards higher average values of λ_min. In doing things this way, we are almost certainly modifying the probability distribution in the Gribov region, giving lower measure to Gribov copies which are closer to the horizon. On the other hand, simply excluding these hard-to-gauge-fix lattices would probably introduce an even worse bias.

On smaller (L < 20) vortex-only lattices, and on the full, unmodified lattices, problems with convergence to Coulomb gauge are uncommon. However, on the L = 20 vortex-only lattice there is a convergence failure on almost 38% of all time slices; on these slices we have performed a random gauge transformation to move to a different Gribov copy. The rate of convergence failure on vortex-only 16⁴ lattices is five times lower, and the rate of failure on unmodified 20⁴ lattices is eight times lower, than on the vortex-only 20⁴ lattice. For this reason, we believe that the bias towards larger eigenvalues is by far the worst on the vortex-only 20⁴ lattices, and this is in fact where we see the mismatch, in P(λ_min) vs. z_min at α ≈ 0, with the other vortex-only lattice volumes.

Note that for the vortex-only configurations, α = 0 is consistent with a finite, non-zero value for ρ(0), as shown in eq. (2.25). On the other hand, it is not excluded that α could in fact be slightly negative, in which case ρ(λ) actually diverges at λ = 0. Since the eigenvalue density in Fig. 4 does appear to actually rise as λ → 0, before suddenly falling, a divergent behavior at λ = 0 in the infinite-volume limit is not at all excluded. Next we consider F(λ) as λ → 0.
We have fit the average value of F at the lowest non-zero eigenvalue, ⟨F(λ_min)⟩, to a simple fitting form in λ_min. Since λ_min → 0 at infinite volume, this gives F(0) = 1 for the vortex-only configurations, as stated in eq. (2.25). For the full configurations, since we estimate α ≈ 0.25, with the consequence that λ_min ∼ 1/L^{2.4}, it is reasonable to guess the corresponding behavior near λ = 0, and we have therefore tried a fit of the same form.

B. Convexity of Λ⁺

Let g_i(x) ≡ g(x + î) g(x)⁻¹. We write g_i(x) = c_i(x) + i σ · g⃗_i(x), where c_i(x) = ±[1 − g⃗_i(x)²]^{1/2}, and the defining property reads as in eq. (B.2), in which the last factor is positive, 1 − c_i(x) ≥ 0. Given that this property holds for two configurations a₁ and a₂, consider the convex combination a_i(x) ≡ α a_{1,i}(x) + β a_{2,i}(x), with α, β ≥ 0 and α + β = 1. The defining property (B.2) that we wish to establish for a will be proven if we can show that the inequality (B.11) holds on each link (x, i). Parametrizing the links by angles with 0 ≤ θ_{1,i}(x) ≤ π/2, 0 ≤ θ_{2,i}(x) ≤ π/2, and 0 ≤ θ_i(x) ≤ π/2, these angles are related in a way that, using also the convexity of the sine function, yields the inequality θ_i(x) ≤ α θ_{1,i}(x) + β θ_{2,i}(x), from which (B.11) follows. This proves the convexity of Λ⁺.

C. Renormalization-group flows toward Coulomb gauge

We compute the flow of the gauge parameter a that appears in the interpolating gauge condition (8.1) in the neighborhood of a = 0. Lorentz invariance is not manifest in the interpolating gauge, and A₀ and A renormalize independently,

A₀ = Z_{A₀} A₀^R,   A = Z_A A^R,

where the superscript R designates renormalized quantities. The renormalized gauge parameter a^R is defined by a = Z_a a^R, where the renormalization constant Z_a is determined by the renormalized gauge condition,

a^R ∂₀A₀^R + ∇·A^R = 0.

The γ-functions are defined by ∂_t ln Z_a = γ_a(g), where t ≡ ln Λ and Λ is the ultraviolet cut-off; the derivative is taken at fixed renormalized coupling constant g^R. We have

∂_t ln Z_a = −∂_t ln Z_{A₀} + ∂_t ln Z_A,   (C.5)

which gives

γ_a(g) = −γ_{A₀}(g) + γ_A(g).   (C.6)

From the perturbative expansions

Z_a = 1 + c_a ln Λ (g^R)² + ...,   Z_{A₀} = 1 + c_{A₀} ln Λ (g^R)² + ...,   Z_A = 1 + c_A ln Λ (g^R)² + ...,   (C.7)

we obtain

γ_a = c_a g² + ...,   γ_{A₀} = c_{A₀} g² + ...,   γ_A = c_A g² + ...,   (C.8)

and

c_a = −c_{A₀} + c_A.   (C.9)

To one-loop order the renormalization constant satisfies

∂_t ln Z_a = c_a g² + ... = c_a/(2b₀ t) + ...,   (C.10)

where b₀ is the first coefficient of the β-function. In pure SU(N) gauge theory it has the value

b₀ = (11/3) N/(16π²).   (C.11)

We obtain, to one-loop order,

Z_a = const × t^{c_a/2b₀},   (C.12)

which, with t = ln Λ, gives the leading dependence of the bare gauge parameter on the cut-off Λ,

a(Λ/Λ_QCD) = const × [ln(Λ/Λ_QCD)]^{c_a/2b₀}.   (C.13)

Clearly a = 0 is a fixed point of the renormalization group. To see if it is also a stable fixed point, we may evaluate the coefficient c_a at a = 0, namely in Coulomb gauge, assuming that c_a is smooth in the neighborhood of a = 0. The renormalization constants Z_{A₀} and Z_A in Coulomb gauge are given to one-loop order in eq. (B.37) of [18].
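To spell out the integration step from (C.10) to (C.13), and the stability criterion it implies (a short derivation under the stated one-loop assumptions; the actual sign of c_a must be taken from the one-loop constants of [18], which are not reproduced here):

```latex
\[
\partial_t \ln Z_a \;=\; c_a\, g^2(t) \;\simeq\; \frac{c_a}{2 b_0\, t}
\;\;\Longrightarrow\;\;
\ln Z_a \;=\; \frac{c_a}{2 b_0}\,\ln t \;+\; \mathrm{const},
\qquad t = \ln \Lambda .
\]
% With a = Z_a a^R held at fixed renormalized a^R, the bare parameter runs as
\[
a(\Lambda) \;\propto\; \bigl[\ln(\Lambda/\Lambda_{\mathrm{QCD}})\bigr]^{\,c_a/2b_0}
\;\xrightarrow{\;\Lambda \to \infty\;}\; 0
\quad\text{iff}\quad c_a < 0 ,
\]
```

since b₀ > 0 in pure SU(N) gauge theory. Thus a = 0 is an attractive ultraviolet fixed point precisely when the one-loop coefficient c_a is negative, which is the requirement stated in the Conclusions to be satisfied.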
Surviving the host: Microbial metabolic genes required for growth of Pseudomonas aeruginosa in physiologically-relevant conditions

Pseudomonas aeruginosa, like other pathogens, adapts to the limiting nutritional environment of the host by altering patterns of gene expression and utilizing alternative pathways required for survival. Understanding the genes essential for survival in the host gives insight into pathways that this organism requires during infection and has the potential to identify better ways to treat infections. Here, we used a saturated transposon insertion mutant pool of P. aeruginosa strain PAO1 and transposon insertion sequencing (Tn-Seq) to identify genes conditionally important for survival under conditions mimicking the environment of a nosocomial infection. Conditions tested included tissue culture medium with and without human serum, a murine abscess model, and a human skin organoid model. Genes known to be upregulated during infections, as well as those involved in nucleotide metabolism and cobalamin (vitamin B12) biosynthesis, were required for survival in vivo and in host-mimicking conditions, but not in the nutrient-rich lab medium Mueller Hinton broth (MHB). Correspondingly, mutants in genes encoding proteins of nucleotide and cobalamin metabolism pathways were shown to have growth defects under physiologically-relevant media conditions, in vivo, and in in vivo-like models, and were downregulated in expression under these conditions when compared to MHB. This study provides evidence for the relevance of studying P. aeruginosa fitness in physiologically-relevant host-mimicking conditions and identified metabolic pathways that represent potential novel targets for alternative therapies.

Introduction

Pseudomonas aeruginosa is a highly drug-resistant, hospital-acquired, opportunistic pathogen that thrives in the restricted nutrient environments within the host.
This organism can rapidly alter its gene expression patterns, surviving with limited resources despite the onslaught of defence mechanisms by the host (Fung et al., 2010; Kruczek et al., 2016; Quinn et al., 2018). For example, P. aeruginosa displays altered transcriptional regulation in physiologically-relevant media conditions mimicking growth in wound exudate or blood (Belanger et al., 2020). Wounds and blood infections in immunocompromised patients represent a substantial proportion of hospital-acquired P. aeruginosa infections and can critically lead to sepsis (Hattemer et al., 2013). In this regard, determining differences in bacterial physiology and susceptibility under conditions that are relevant to human infections is crucial to advance our understanding of how to treat drug-resistant infections by ESKAPE (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, P. aeruginosa, Enterobacter sp.) pathogens.

RNA-Seq has proven to be a useful tool to explore the bacterial physiology and altered gene expression that occur in different growth environments such as blood (Malachowa et al., 2011; Kruczek et al., 2016), serum (Hammond et al., 2010; Kruczek et al., 2014; Quinn et al., 2018), lung sputum (Fung et al., 2010), and wound models (Turner et al., 2014). Studies have used minimal medium supplemented with bovine serum or albumin to show that serum can inhibit short-term biofilm formation on indwelling devices (Hammond et al., 2010), and that iron-regulated bacterial virulence genes are also upregulated in response to serum albumin (Kruczek et al., 2012). Turner et al. (2014) analyzed the transcriptome of P. aeruginosa in two murine models, an acute dermal burn wound model and a chronic excision wound infection model. They found that many virulence genes, such as LPS O-antigen biosynthesis genes, were downregulated in vivo when compared to the defined minimal medium MOPS, while pyoverdine synthesis and T3SS genes were upregulated in vivo. Furthermore, growth of P. aeruginosa in tissue culture medium with human serum can alter the expression of ~39% of the genome when compared to growth in the nutrient-rich laboratory medium Mueller Hinton broth (MHB) (Belanger et al., 2020). This altered gene expression in physiologically-relevant media appeared to increase membrane permeability and susceptibility to the antibiotic azithromycin. More profoundly, we previously demonstrated that the global transcriptome of P. aeruginosa grown in tissue culture medium with human serum displayed significant overlap with the transcriptomes of P. aeruginosa from wound and lung infections (Belanger et al., 2020; Cornforth et al., 2020). This finding demonstrated that these media represent an easy screening method for determining how gene expression changes with altered growth or susceptibility in physiologically-relevant conditions.

Although it can infer a great deal about an organism's physiology and how the organism changes its gene expression in response to an environmental cue, RNA-Seq cannot be used to determine which genes are required or important for survival of a bacterium under particular conditions. Instead, a method called transposon insertion sequencing (Tn-Seq) can be used to identify genes that are important for conditional fitness of an organism, including genes that, when mutated, cause lethal growth defects under physiologically-relevant conditions (Skurnik et al., 2013; Chao et al., 2016; Poulsen et al., 2019).
Tn-Seq utilizes a promiscuous transposon (Tn) to create saturated pools of Tn-insertion mutants that can be grown in selective or challenge conditions to eliminate mutants with decreased fitness in the environment of choice (Chao et al., 2016; Cain et al., 2020). Sequencing the regions adjacent to the Tn for each mutant that survives in the pool after selection determines which mutants were eliminated, indicating that the disrupted genes were required for growth in the condition of interest (a minimal sketch of this logic appears at the end of this section). For example, genes that are considered essential in an organism grown in MHB are not necessarily required under conditions such as wound exudate or infected blood, and vice versa. Such conditional essentiality means that genes required for growth in, e.g., the host but not in rich lab media might be strong candidates as targets for therapeutics that have not previously been discovered. However, such genes cannot be discovered by studying essentiality under standardized nutrient-rich conditions.

There are several in vivo models designed to mimic human infections such as wounds (Bobrov et al., 2022). Burn wound and chronic wound/skin abscess animal models have been adapted for use with all ESKAPE pathogens (Turner et al., 2014; Pletzer et al., 2017a) and offer relatively simple primary screening methods for the establishment and/or treatment of bacterial infections. In wound/abscess models, bacteria are injected into the subdermal tissue below the skin or into skin that has been previously burned. By treating the abscess or injecting various strains of an organism, the resulting effects on abscess size and colony-forming units (CFU) of bacteria in the abscess can help screen for effective antimicrobial agents to treat chronic infections, or for bacterial survival requirements in chronic infections (Turner et al., 2014; Mansour et al., 2016; Pletzer et al., 2017a,b). Additionally, in vitro organoid or air-liquid interface models, such as a recently-described human skin organoid model, are addressing some of the cost-associated and ethical concerns of animal models by using human cell lines or cells directly obtained from patients (de Breij et al., 2018; Wu et al., 2021). In this humanized system, keratinocytes are differentiated in vitro into dermal tissues that are infected with bacteria and can be treated topically with antimicrobial agents.

Alternatively, in vitro physiologically-relevant media conditions represent a method to screen for survival and susceptibility of pathogens while requiring much less cost and time than in vivo and in vivo-like models. Media such as Roswell Park Memorial Institute 1640 medium (RPMI) and RPMI with human serum represent a means to study P. aeruginosa in an environment meant to mimic human infection, and the applicability of these conditions to host infection has been demonstrated in RNA-Seq studies (Belanger et al., 2020). Tn-Seq studies in these media conditions could further validate their use as physiologically-relevant growth environments, while providing an affordable and easy screening method to discover novel therapeutic targets specific to the host environment. Here we designed and utilized Tn-Seq to identify genes required in vitro and determined how similar fitness determinants in host-mimicking conditions were to an in vivo environment.
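To make the Tn-Seq logic just described concrete, here is a minimal sketch of conditional-essentiality calling from insertion read counts (all counts and gene names are invented for illustration; real pipelines such as the Tradis and Transit tools used below apply far more careful statistics):

```python
# Toy illustration of Tn-Seq conditional-essentiality calling: a gene whose
# insertion mutants are present in the input (T0) pool but effectively absent
# after growth under a selective condition is called conditionally essential
# in that condition. All counts and gene names below are invented.

t0_counts = {          # reads mapped per gene in the unselected pool
    "geneA": 1520, "geneB": 980, "geneC": 2210, "geneD": 1105,
}
condition_counts = {   # reads per gene after growth in the test condition
    "geneA": 1385, "geneB": 0, "geneC": 1960, "geneD": 3,
}
rich_medium_counts = { # reads per gene after growth in a rich control medium
    "geneA": 1490, "geneB": 870, "geneC": 2100, "geneD": 1010,
}

def depleted(counts, min_reads=5):
    """Genes whose mutants effectively vanish from the selected pool."""
    return {gene for gene, n in counts.items() if n < min_reads}

# Conditionally essential = lost under the test condition but not in the
# rich-medium control (and present in the T0 pool to begin with).
present_at_t0 = {g for g, n in t0_counts.items() if n > 0}
conditional = (depleted(condition_counts) - depleted(rich_medium_counts)) & present_at_t0
print(sorted(conditional))  # -> ['geneB', 'geneD']
```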
The in vitro physiologically-relevant media RPMI and RPMI with human serum were chosen to screen for potential pathways that are essential for survival in nosocomial wound or blood infections. Genes unique to these media, compared to nutrient-rich lab media, were subsequently compared to conditionally important genes in the murine abscess model and the human skin organoid model. We identified functional pathways that were uniquely important for survival under host-mimicking in vitro conditions and in vivo, to inform upon how P. aeruginosa survives during wound infections in ways that are different from survival under laboratory conditions. We report on the characterization of the Tn-mutant pool in P. aeruginosa PAO1 and the identification of genes required for survival under these physiologically-relevant conditions. This study has allowed us to identify genes in key pathways that impact on survival under a range of conditions mimicking the host infectious environment, and has implications for in vitro screening and identification of novel targets for therapeutics.

Materials and methods

Strains and bacterial growth conditions

P. aeruginosa PAO1 strain H103 (Zhang et al., 2000) was used as a WT control and for Tn-Seq studies. The Tn-insertion mutants used for susceptibility and growth-defect validations were taken from an ordered Tn-insertion library in P. aeruginosa PAO1 (harboring Tn5 IS50L derivative Tn-insertions ISlacZ/hah and ISphoA/hah; Jacobs et al., 2003). E. coli strain SM10 λpir was used as a donor strain in Tn-Seq pool generation. Media included lysogeny broth (LB); the experimental control condition MHB; RPMI-1640 supplemented with 5% MHB (referred to as RPMI); and RPMI-1640 supplemented with 5% MHB and 20% human serum pooled from anonymous donors under ethics approval [certificate number H04-70232] (referred to as RPMI/serum). RPMI and RPMI/serum were designed to be physiologically relevant to wound exudate or blood, and their preparation was described in detail previously (Belanger and Hancock, 2021).

Plasmids and DNA constructs

The pBT20 plasmid was used for generation of the Tn-Seq pool and was obtained from the Filloux lab, Imperial College London (Kulasekara et al., 2005). This is a mariner Tn vector carrying the Himar1 Tn, which inserts randomly into Thymine-Adenine (TA) sites within the genome. The plasmid has the following characteristics: Δbla, TelR; the TelR cassette, consisting of the kilA and telAB genes, is flanked by two identical FRT sequences. Primers used in this study are summarized in Supplementary Table S1.

Constructing the Tn-Seq pool in PAO1

A Tn-Seq pool was constructed in P. aeruginosa strain PAO1 using the vector pBT20. First, E. coli SM10 λpir + pBT20 was grown overnight at 37°C on LB agar containing 100 μg/ml ampicillin (Sigma-Aldrich). P. aeruginosa PAO1 was grown at 42°C overnight on LB agar with no antibiotic. The bacteria were scraped off the plates and mixed in equal volumes at OD600 values of 40 (donor, E. coli) and 20 (recipient, P. aeruginosa). Conjugation mixtures were spotted onto LB agar plates and incubated for 2 h at 37°C. All conjugation spots were then scraped into LB and plated on LB agar with 25 μg/ml irgasan (Sigma-Aldrich) and 25 μg/ml gentamicin (Sigma-Aldrich). The cells were grown overnight, then counted and collected in LB, and stored in 20% glycerol at −80°C. The procedure was repeated until at least 200,000 mutants were collected.
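A quick way to see why 200,000 mutants constitute a usefully saturated pool is a Poisson coverage estimate (a back-of-the-envelope sketch; the number of insertable TA sites is an assumption chosen to be consistent with the roughly two-fold oversampling noted in the Results):

```python
import math

# Poisson estimate of Tn-pool saturation: with `mutants` independent
# insertions spread over `sites` possible TA positions, the chance that a
# given site is never hit is exp(-mutants/sites).
mutants = 200_000   # colonies collected (from the text)
sites = 90_000      # assumed number of insertable TA sites, for illustration

fraction_hit = 1.0 - math.exp(-mutants / sites)
print(f"expected fraction of TA sites represented: {fraction_hit:.1%}")
```

Under this assumption the pool is expected to sample the large majority of insertable sites, which is the practical meaning of "saturated" here.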
The collected mutants were thawed at 37°C, normalized to equalize the number of colonies per ml per conjugation, and then pooled and recovered at 37°C. They were recovered by adding 0.5 ml of pooled cells into 50 ml LB with 15 μg/ml gentamicin. Glycerol (20%) was added to the final pool, which was aliquoted into 1 ml volumes and flash-frozen in liquid nitrogen before storage at −80°C.

Preparation of the Tn-Seq libraries for sequencing

To determine which mutants existed in the Tn-Seq pool at baseline and after any selective pressure, total genomic DNA was extracted from the samples and the regions adjacent to the Tn-insertions were amplified and sequenced. Primers used for Tn-Seq are summarized in Supplementary Table S1.

Bioinformatic analysis of Tn-Seq results

The quality of raw sequence reads was assessed using FastQC (v0.11.6; Andrews, 2010) and MultiQC (v1.6; Ewels et al., 2016). Conditionally essential genes were determined using two complementary bioinformatics tools: TraDIS v1.4.5 (referred to as Tradis; Barquist et al., 2016) and TRANSIT v2.0.0 (Transit; DeJesus et al., 2015). The sequence reads were aligned against the P. aeruginosa PAO1 reference genome from the Pseudomonas Genome Database v17.1 (accession GCF_000006765.1; PseudoCap version 138). Tradis was run using default parameters, aligning reads against the reference genome using SMALT and determining insertion counts per gene. Transit used the TPP pre-processing tool to map raw reads against the reference with BWA v0.7.17 (Li and Durbin, 2009), with default parameters used to tabulate the counts of reads per TA site. The counts were passed through the Gumbel method of Transit, which calculates the probability of essentiality of each gene using a Bayesian model. Genes predicted to be essential by Gumbel or Tradis in at least 4 out of 5 replicates (for experimental replicates) or 2 out of 3 replicates (for the T0 pool) were compiled into a final list of essential genes. Statistical significance of overlap between the Tn-Seq pool selected in different physiologically-relevant media was assessed using hypergeometric distribution calculations and Fisher's exact test with the GeneOverlap v1.32.0 package from Bioconductor release 3.15 (Shen and Sinai, 2022) in R v4.1.2 (R Core Team, 2021). Overlaps in essential genes found between two physiologically-relevant media conditions were considered significant, with an odds ratio greater than 1, if p < 0.05. Functional class enrichment was performed using functional class annotations from PseudoCap (Winsor et al., 2016); hypergeometric distribution calculations were done using phyper from the stats package v4.3.0 in R (Johnson et al., 1992; R Core Team, 2021), correcting for multiple testing using false discovery rate (FDR) corrections. Functional classes considered significantly enriched within a comparison were those with an adjusted p value < 0.05.

Experiments in physiologically-relevant in vitro conditions

Tn-insertion pools were inoculated at 1 × 10⁷ CFU/ml in 25 ml cultures in MHB, RPMI and RPMI/serum, in five separate biological replicates performed over multiple days. The cultures were grown to 5 × 10⁸ CFU/ml at 37°C and bacteria were harvested from 1 ml of each. Total genomic DNA was extracted, libraries were prepared and sequenced, and essentiality analysis was performed using Tradis and Transit. Conditionally essential genes were compared between media (physiologically-relevant conditions vs. MHB), and essential genes in MHB were removed from the physiologically-relevant media gene lists before comparing to other host-mimicking conditions.
Validation experiments were performed with individual Tn-insertion mutants grown in physiologically-relevant in vitro media in 96-well plates. Wells containing RPMI/serum or MHB were inoculated with a minimum of 3 replicates of each mutant in each medium at 1 × 10⁷ CFU/ml and grown for 16 h in a plate reader (BioTek) with constant shaking and heating to 37°C. Select mutants with interesting growth defects were then grown in 25 ml flask cultures in RPMI/serum. Cultures were grown at 37°C for 25 h, and samples were taken and plated for CFU every 3 h for the first 9 h and then at the final time point. Growth cultures were performed with PAO1 WT grown in parallel to the mutant strains.

Murine abscess model experiments

Animal experiments were performed following Canadian Council on Animal Care (CCAC) guidelines and were approved by The University of British Columbia Animal Care Committee [certificate number A14-0363]. The mice used in this study were seven-week-old female outbred CD-1 mice, purchased from Charles River Laboratories, Inc. (Wilmington, MA). The average weight was about 25 ± 3 g at the time of the experiments. Isoflurane (1–3%) was used to anesthetize the mice, and mice were euthanized with carbon dioxide. For murine abscess Tn-Seq, an aliquot of the PAO1 Tn-Seq pool was grown to mid-log phase, washed, and resuspended in PBS to an OD600 of 0.5. The pool was administered to mice subcutaneously with a total bacterial inoculum of 2 × 10⁷ CFU. Abscesses were allowed to form for 18 h, at which time the mice were sacrificed and the abscesses removed. The experiment was repeated with five replicates. Enzymatic digestion of the abscess tissue and separation of bacterial cells followed a previously published protocol with modifications (Garcia-Garcerà et al., 2013). The abscess tissue was homogenized in 1 ml sterile PBS with sterile glass beads, and 10 μl were removed for CFU counting. The tissue was then enzymatically digested at 37°C for 45 min, inactivated, and filtered. The final, filtered, washed cells were pelleted and used for DNA extraction using the same method as for Tn-Seq in host-mimicking in vitro conditions. Library preparation, sequencing, and essentiality analysis were performed as described above. Survival experiments with ordered Tn-insertion mutants for genes identified through Tn-Seq in the murine abscess model were performed in a similar fashion to the Tn-Seq experiments.

Human skin air-liquid interface model experiments

Tn-Seq was also performed in a human skin in vivo-like model (Wu et al., 2021). The human skin organoid (skin) model used N/TERT keratinocyte cells cultured on filter inserts in 12-well plates containing growth factors and tissue culture medium. Over approximately 3 weeks, the cells grew, differentiated, and stratified into a dermal-like tissue onto which bacteria could be inoculated. The Tn pool was grown in LB to mid-log phase, washed with PBS, inoculated onto the skin at 1 × 10⁶ CFU in 5 μl of PBS, and allowed to develop for 48 h. The skin was then washed with PBS, and the entire filter with skin and bacteria attached was removed from the tissue culture wells, homogenized, and plated to measure survival. The skin was then digested following the same protocol as for abscess tissue. Whole genomic DNA extraction was performed and the Tn-genome junctions were amplified and sequenced as above.
The experiment was repeated with four replicates from different skin batches. Competition between PAO1 ordered Tn-insertion mutants for genes identified through Tn-Seq and wild-type (WT) PAO1 was measured after 48 h of bacterial growth on the skin. A control Tn-insertion mutant was included in which the insertion is at the end of the generally essential gene dnaG; this insertion does not demonstrate an insertional effect and would not be expected to confer a growth advantage or disadvantage compared to WT. The skin was inoculated with 1 × 10⁶ CFU in 5 μl of PBS containing a 50% WT and 50% mutant population. The infection was left for 48 h, then the skins were washed with sterile PBS, homogenized using glass beads, and plated on LB agar (for growth of all bacteria) and on tetracycline (50 μg/ml; for growth of Tn-insertion mutants). The ratios of mutant to WT were measured for all mutants tested (Macho et al., 2007). Competitive fitness indices (CI) were assessed using the formula CI = (M_O/WT_O)/(M_I/WT_I), where M is the CFU/ml of the mutant, WT is the CFU/ml of the wild type, subscript O denotes the output counts after a biofilm was established on the skin, and subscript I denotes the input counts in the inoculum.

Network analysis of Tn-Seq and RNA-Seq data

The web-based application PaIntDB (Castillo-Arnemann et al., 2021) was utilized to create networks in order to visualize RNA-Seq and Tn-Seq data together. To explore genes essential under physiologically-relevant conditions and the transcriptional patterns of genes in the same pathways, RNA-Seq data for P. aeruginosa grown in RPMI/serum (Belanger et al., 2020) were combined with the list of genes conditionally essential in RPMI and/or RPMI/serum. Subnetworks of significantly enriched pathways were exported and edited for visualization using Cytoscape (v3.8.2; Shannon, 2003). Using the ontologies tool in PaIntDB, subnetworks were constructed for pathways that were significantly enriched in these datasets, using Fisher's exact test with Benjamini-Hochberg correction and a 0.05 significance threshold.

Results

Characterization of the PAO1 Tn-Seq pool determined generally essential genes are comparable to previous studies

A Tn-Seq pool was generated in P. aeruginosa strain PAO1 grown in LB, using the mariner Tn Himar1 that randomly inserts into TA sites within the genome (Lampe et al., 1998). Over 200,000 mutants were collected, to generate a Tn pool that was more than two-fold larger than the number of possible insertion sites in the genome. To identify essential genes in LB, before expansion experiments were performed, three individual aliquots (biological replicates) of the Tn-insertion pool (denoted T0) were sequenced. On average 4.4 million reads mapped to the genome, with between 90,000 and 100,000 unique insertion sites (Supplementary Dataset 1). Nineteen genes were excluded from the analysis as they contained no TA sites (Supplementary Dataset 1). To determine essential genes, a pipeline was constructed to utilize the analysis applications Tradis (Barquist et al., 2016), which determined essentiality by assessing insertions per gene, and Transit (DeJesus et al., 2015), which calculated the probability of essentiality of each gene using reads per TA site and a Bayesian model. By integrating two distinct analysis tools, the pipeline took into account variation in results that might occur due to differing algorithms, in an attempt to reduce the risk of reproducibility issues. Genes were termed essential if they were identified as essential by either tool in at least 2 out of 3 replicate aliquots, as sketched below.
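Schematically, this consensus rule can be expressed as follows (a sketch only; the per-replicate call sets are hypothetical stand-ins for real Tradis/Transit output):

```python
# Consensus essentiality: a gene is kept if EITHER tool (Tradis or Transit)
# calls it essential in at least `min_replicates` of the replicate aliquots.
# The per-replicate call sets below are invented placeholders.

def consensus_essential(tradis_calls, transit_calls, min_replicates):
    """tradis_calls / transit_calls: lists of sets of genes, one per replicate."""
    def frequent(calls_per_replicate):
        tally = {}
        for replicate in calls_per_replicate:
            for gene in replicate:
                tally[gene] = tally.get(gene, 0) + 1
        return {g for g, n in tally.items() if n >= min_replicates}
    return frequent(tradis_calls) | frequent(transit_calls)

# Example with 3 replicate aliquots and a 2-of-3 threshold (as for T0).
tradis = [{"dnaA", "gyrB"}, {"dnaA"}, {"dnaA", "rpoB"}]
transit = [{"gyrB", "rpoB"}, {"rpoB"}, {"gyrB"}]
print(sorted(consensus_essential(tradis, transit, min_replicates=2)))
# -> ['dnaA', 'gyrB', 'rpoB']
```

For the experimental conditions described below, the same logic applies with a 4-of-5 replicate threshold.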
A total of 654 genes were predicted as essential in the T0 pool in this study. To analyze the robustness of the Tn-Seq pool, the essential genes identified in our experiment were compared to essential genes identified in previous Tn-Seq and ordered Tn-mutant studies in P. aeruginosa, to assess similarities between the libraries (Table 1). Previous studies of P. aeruginosa essentiality have predicted between 336 and 1,394 generally essential genes when this organism is grown under rich growth-media conditions (Table 1; Liberati et al., 2006; Skurnik et al., 2013; Lee et al., 2015; Turner et al., 2015; Poulsen et al., 2019). Overall, there were 147 genes predicted as essential in rich medium in our T0 pool as well as in all six published essentiality studies (Supplementary Dataset 1). In addition to those predicted in all studies, another 135 were essential in the T0 pool and in five other essentiality studies, for a total of 282 common, frequently identified essential genes. Using hypergeometric distribution statistics, we determined that our T0 essential genes had significant overlap with all previous studies examined, with the highest correlation with Lee et al. (2015), Poulsen et al. (2019), and Turner et al. (2015) (Supplementary Dataset 1). Additionally, we found that 10/26 PseudoCap functional classes (Winsor et al., 2016) showed common significant enrichment of essential genes in five, six, or seven of the studies examined here (Supplementary Table S2). These results indicate that the bacterial growth condition or method used to determine essentiality might strongly impact the number and categories of genes identified as essential. Despite this, we determined significant overlapping functional classes with previous studies, and found our T0 pool to be a good representative of the generally essential genes of P. aeruginosa in nutrient-rich LB.

Tn-Seq of Pseudomonas aeruginosa grown under in vitro and in vivo/in vivo-like conditions identified significant overlap in conditionally required genes between host-mimicking conditions

Following the characterization of the PAO1 Tn-insertion pool, this library was used to determine which genes were conditionally important for survival in physiologically-relevant in vitro media (Belanger et al., 2020), in an in vivo murine abscess model (Pletzer et al., 2017b), and in an in vivo-like human skin organoid model (Wu et al., 2021). From this, we aimed to understand whether genes required for survival of P. aeruginosa in host-mimicking media were also required in established abscess and skin models, and to discover unexplored pathways that might be important for survival in wound models. By focusing on genes determined to be conditionally required in each medium (mutants completely absent after expansion) and excluding genes contributing to fitness but not absolutely required (mutants with decreasing numbers), we identified the genes likely to impact to the greatest extent on survival and fitness in the host environment.

In vitro in physiologically-relevant media conditions

We grew the Tn-Seq pool in RPMI and RPMI/serum as in vitro mimics of physiologically-relevant conditions, as well as in MHB as a nutrient-rich control. The media were inoculated with 1 × 10⁷ CFU/ml and grown until mid-log phase (5 × 10⁸ CFU/ml), with the final bacterial concentration representing 1,000-fold the number of mutants present in the T0 pool.
Five replicates of the pool were grown in the physiologically-relevant media and MHB, and genes were deemed essential if they were predicted by either Tradis or Transit in at least four replicates. After growth in physiologically-relevant in vitro media, there were 280 conditionally essential genes that were not identified as essential for growth in MHB (Supplementary Figure S1; Supplementary Dataset 2A). Of these 280 genes, 70 were predicted to be conditionally essential for survival in both RPMI/serum and RPMI (Supplementary Table S3), 27 were specifically important for survival in RPMI/serum only, and 183 genes were important for survival in RPMI only. Importantly, the genes that were essential in either physiologically-relevant medium largely belonged to the same PseudoCap (Winsor et al., 2016) functional classes (Supplementary Figure 1B). These data suggested that serum components may provide important factors affecting survival in host conditions, but that RPMI alone is a more nutritionally exacting condition, leading to the identification of a greater number of conditionally important genes. The combined 280-gene set for essentiality in host-mimicking in vitro media was used in further comparative analyses against other host-like conditions.

Murine abscess model

Next, genes conditionally important for survival in vivo were determined using the murine abscess model of P. aeruginosa infection (Pletzer et al., 2017b). The Tn-Seq pool was injected at an inoculum of 2 × 10⁷ CFU into the subdermal tissue of mice, and abscesses were established for 18 h, resulting in an average of 2 × 10⁹ CFU/abscess. To determine if genes important for survival in the murine abscess model were also important in the physiologically-relevant in vitro media conditions, we compared essential genes between each condition, excluding genes identified as essential in MHB. Among 354 genes conditionally essential for P. aeruginosa survival in the murine abscess model and 280 in physiologically-relevant in vitro media conditions, we identified 169 genes that were essential in both (Figure 1A; Supplementary Dataset 2B). This overlap was determined to be highly significant (p = 4.6 × 10⁻¹⁴⁶, Fisher's exact test), with an odds ratio of 43.0.

Human skin organoid model

Tn-Seq was also performed to identify predicted essential genes in P. aeruginosa PAO1 grown on a human skin organoid model (Wu et al., 2021). The PAO1 Tn-Seq pool was inoculated at 1 × 10⁶ CFU onto the surface of a differentiated dermal tissue layer on a filter insert in a tissue culture plate. Low inoculation concentrations were used in order to prevent P. aeruginosa from penetrating the tissue layer and growing in the medium below. Only experiments where P. aeruginosa cultures remained on the surface of the skin were included in the analysis. Bacteria growing on the skin after 48 h (on average 4 × 10⁷ CFU) were harvested, their DNA was extracted, and the Tn-genome junctions were sequenced. We compared essential genes between the skin model and the other physiologically-relevant models, with genes essential in MHB excluded, to identify genes specifically important in the host. There were 311 genes found to be conditionally important in the human skin model cf. MHB; 141 of these were also identified in physiologically-relevant in vitro media conditions (odds ratio of 31.3, p = 1.5 × 10⁻¹¹³, Fisher's exact test) and 177 were also essential in the abscess model (odds ratio of 38.8, p = 1.5 × 10⁻¹⁴⁷, Fisher's exact test; Figure 1A; Supplementary Dataset 2C).
These data support that there is significant overlap between genes required for survival in in vitro host-mimicking conditions and in vivo. In the next sections, we contrast and compare functional classes of genes identified in common between the host-mimicking conditions to explore the functions important for survival in the host environment.

Genes unique to physiologically-relevant in vitro or in vivo media might represent limitations of in vitro host-mimicking conditions

Comparing the predicted essential gene lists between conditions, we found 67 genes identified as conditionally important for survival in both the abscess and skin models that were not identified in RPMI or RPMI/serum. Interestingly, this included a larger number of genes involved in adaptation and protection, chaperones and heat-shock proteins, transcription, translation, and DNA repair (Figure 1B; Supplementary Table S4). Among the genes involved in adaptation and protection that were found only in the murine abscess and skin models were the type VI secretion system gene tsi2, genes encoding the soluble bacteriocin pyocin S4, chemotaxis genes, and the Psl polysaccharide synthesis gene pslI. Alternatively, there were 80 genes required for survival of P. aeruginosa in physiologically-relevant in vitro media only, many of which belonged to functional classes that were also predicted as essential in vivo, e.g., those influencing energy metabolism, amino acid and nucleotide metabolism, membrane integrity, and iron acquisition (Supplementary Table S5; Figure 1B). It is likely that these functional classes are not discriminatory and are required for both in vitro and in vivo/in vivo-like conditions, but any differences might reflect differences in the wiring of metabolic pathways. These data supported that genes identified in physiologically-relevant in vitro media provide a good representation of the conditionally important genes in the host but likely do not provide an exhaustive list of all essential functions, and therefore have limitations.

Genes shared between all physiologically-relevant conditions represent a robust example of genes required to survive in physiologically-relevant media

To determine trends in P. aeruginosa gene requirements in the infection models utilized, we compared conditionally required genes identified in vitro, in the murine abscess model, and in the human skin model, and found 110 genes shared between the three conditions when excluding genes essential in MHB (Figure 1; Supplementary Dataset 2). Classification of these 110 genes into PseudoCap functional classes identified membrane-related functions (14 genes), amino acid metabolism (8), biosynthesis of cofactors and prosthetic groups (4), nucleotide biosynthesis and metabolism (6), transcriptional regulation (6), and transport of small molecules (11) as the most represented functional classes.

Tn-Seq identified virulence and membrane transport genes previously implicated as important for survival in the host environment

Previous studies examining genes important for survival of P. aeruginosa grown under physiologically-relevant growth conditions have indicated that virulence, secretion, and iron acquisition are important factors involved in survival and infection (Reimmann et al., 2001; Galle et al., 2012; Turner et al., 2014; Lee et al., 2015; Turner et al., 2015; Poulsen et al., 2019).
We found that several genes encoding proteins involved in transport, secretion, and membrane integrity were predicted as essential in all physiologically-relevant conditions in this study (Table 2). This included genes important for the transport of divalent and metallic cations such as Fe³⁺, e.g., adjacent genes encoding the pyochelin synthesis proteins PchG and PchF (Supplementary Dataset 2). Identifying genes involved in iron uptake is noteworthy, since P. aeruginosa has multiple systems for synthesizing siderophores, which are diffusible molecules that bind and take up iron. For this reason, we would expect such mutants to be complemented in trans. However, in our models, pyochelin synthesis mutants in the Tn-Seq pools did not survive. This being said, iron acquisition genes have also been identified in previous Tn-Seq studies in murine chronic wound models (Supplementary Table S6; Turner et al., 2014), and it is well known that they play an important role in infection and virulence.

Figure 1. Genes predicted as conditionally essential for growth in murine abscess and human skin models and in physiologically-relevant media (RPMI and/or RPMI/serum) cf. MHB, as determined using Tn-Seq. (A) Venn diagrams showing essential genes that are unique or shared between particular host-like conditions (RPMI or RPMI/serum: pink; human skin organoid model: amber; murine abscess model: blue), as determined by either Tradis or Transit cf. MHB. (B) Functional classes of essential genes of Pseudomonas aeruginosa grown in vitro (in RPMI and/or RPMI/serum), in murine abscesses, or on human skin cf. MHB.

Other genes involved in membrane functions and transport of small molecules that were predicted as essential in host-mimicking conditions in this study (Table 2; Supplementary Table S6), as well as in previous studies, included (a) genes encoding the ferredoxin protein PA2297 and the nitrate reductase proteins NapF/NapE, which were also found to be essential in murine wound models (Turner et al., 2014); (b) sodium/hydrogen antiporter proteins (ShaDF), implicated as important in sodium homeostasis and virulence of P. aeruginosa in vivo (Kosono et al., 2005) and previously found to be essential in murine wounds, SCFM, and lung sputum (Turner et al., 2014, 2015; Lee et al., 2015); (c) the copper transport gene copA2, previously found to be essential in wound, bovine serum, and SCFM models (Turner et al., 2014, 2015; Poulsen et al., 2019); and (d) translocase genes coding for the TatABC and Psc operons and the HxcUW pseudopilins, important for the type 2 (T2SS) and type 3 (T3SS) secretion systems and previously implicated as essential in wound, bovine serum, and SCFM models (Turner et al., 2014, 2015; Morgan et al., 2019; Poulsen et al., 2019).

Since many virulence and transport genes had overlapping essentiality between this study and previous studies, we examined whether these genes also had similar transcriptional patterns in RPMI/serum to those observed in studies exploring physiologically-relevant conditions. To do this, previously published RNA-Seq data collected from P. aeruginosa grown in RPMI/serum cf. MHB (Belanger et al., 2020) were compared to qRT-PCR analysis in the murine abscess model (Pletzer et al., 2017b) and to transcriptomic data from chronic and acute murine wound infections (Supplementary Table S7; Turner et al., 2014).
Both previous abscess/wound studies focused on pyoverdine and phenazine iron uptake pathways, regulation of T3SS and T2SS, alginate synthesis, Psl polysaccharide production, motility, and rhamnolipid production as markers for virulence in vivo. Transcriptomics of P. aeruginosa grown in vitro in RPMI/serum cf. MHB (Belanger et al., 2020) revealed increased expression of selected pyoverdine genes and of T2SS and T3SS genes, as was observed in vivo, but no alteration of the expression of other genes involved in motility and rhamnolipid synthesis (Supplementary Table S7). This comparison implies similar regulatory patterns in secretion systems and virulence factors between host-like conditions, but differences in the behavior of specific genes involved in motility and attachment. By observing essentiality and expression patterns of genes involved in virulence in both in vitro and in vivo host-mimicking conditions compared to MHB, we found that some of the genes identified in this study had also been identified in previous studies (Lee et al., 2014; Turner et al., 2014, 2015; Pletzer et al., 2017b; Poulsen et al., 2019) in different host environments. These observations support that the host-mimicking conditions used here reflect a sufficient representation of the genes considered essential for infection by P. aeruginosa in the host environment.

Genes involved in microbial metabolic functions were required for growth under physiologically-relevant conditions

In addition to virulence factors and membrane transport genes, we found that a large proportion of the classified essential genes in host-mimicking conditions belonged to pathways involved in microbial metabolic functions. These genes were of interest since they might indicate altered metabolism and potential metabolic targets of P. aeruginosa in infection environments. Amino acid metabolism genes that were required for survival in both the abscess and skin models as well as in physiologically-relevant in vitro media included genes encoding proteins for phenylalanine, tyrosine, lysine, histidine, and methionine biosynthesis and utilization (Table 2; Klem and Davisson, 1993; Myers et al., 1993; Xie et al., 1999). The gene carA, which encodes a subunit of carbamoyl-phosphate synthetase, involved in catalyzing the biosynthesis of a precursor for arginine and pyrimidines (Cunin et al., 1986), and nagZ, which codes for a beta-N-acetyl-D-glucosaminidase involved in β-lactam resistance (Zamorano et al., 2010), were also required. In this study, we also found that the genes pyrF, purE, and carA, belonging to pyrimidine and purine metabolism operons (Supplementary Figure S2), were required for survival in all media conditions (Table 2). Additionally, pyrD, purF, purH, and the hypothetical pyrimidine biosynthesis gene PA3505 were predicted essential in 3 out of 4 host-like conditions. Genes belonging to purine and pyrimidine metabolism have previously been suggested to play an essential role in P. aeruginosa survival in host environments (Supplementary Table S6) such as blood (Samant et al., 2008), human serum (Weber et al., 2020), in vivo infection models (Turner et al., 2014), bovine serum (Poulsen et al., 2019), and SCFM (Turner et al., 2015).
Genes belonging to operons involved in carbon catabolism and the biosynthesis of cofactors had interesting overlaps between in vivo and in vitro host-mimicking conditions. The malonate decarboxylase genes mdcC and mdcE were required for survival in all physiologically-relevant conditions (Table 2) and were also previously identified as important in murine wound models (Turner et al., 2014). Genes involved in the biosynthesis of pyrroloquinoline quinone (PQQ) and cobalamin (Cob; Supplementary Figure S3) were required in some or all physiologically-relevant conditions. PQQ is a cofactor involved in energy metabolism, previously demonstrated to be induced during growth on ethanol, 1-propanol, 1,2-propanediol, and 1-butanol (Gliese et al., 2010). Cobalamin, or vitamin B12, is a complex cofactor containing a central chelated cobalt ion. CobC, cobD, cobO, and cobQ were required in all physiologically-relevant conditions, and others such as cobE and cobH were essential in only certain physiologically-relevant media. Similar to iron acquisition, it was surprising to identify cobalamin biosynthesis as essential, since one would expect that a loss of cobalamin biosynthesis would be complemented by other mutants in the pool. Nevertheless, the levels of cobalamin in human serum and the other conditions tested appear to be limited (Supplementary Table S8), and this pathway has also been demonstrated to be essential in other in vivo models (Turner et al., 2014), which could make cobalamin a limiting nutrient for survival in these conditions.

Integration of essential Pseudomonas aeruginosa pathways involved in nucleotide metabolism and cobalamin synthesis with gene expression in RPMI/serum shows similar patterns

The availability of RNA-Seq data for P. aeruginosa grown under the in vitro physiologically-relevant conditions of RPMI and RPMI/serum (Belanger et al., 2020) gave us the ability to compare patterns of significantly enriched gene expression in these conditions to the Tn-Seq data collected in this study. It was hypothesized that there would be overlapping enrichment in the gene expression of pathways that were essential for survival of P. aeruginosa in the host environment. Conditionally essential genes in RPMI and/or RPMI/serum were integrated with RNA-Seq data from P. aeruginosa grown in RPMI (Belanger et al., 2020) using a new web tool, PaIntDB (Castillo-Arnemann et al., 2021), that maps such data onto protein-protein interaction (PPI) networks in P. aeruginosa. PPI networks comprise nodes (circles) representing gene-encoded proteins connected by lines/edges that represent known or extrapolated PPIs reflecting physical, metabolic, or regulatory interactions. Using the RNA-Seq and essential Tn-Seq genes, a zero-order PPI network of 1,921 genes/proteins was constructed, involving only direct interactions between these nodes and including 1,856 nodes from the RNA-Seq data and 175 from the Tn-Seq data. Using the ontologies tool in PaIntDB, subnetworks were constructed for pathways that were significantly enriched in these datasets. There were a total of 150 significantly enriched gene ontology (GO) terms found (p < 0.05). A subnetwork was identified containing purine and pyrimidine biosynthesis and metabolism pathways with 76 enriched genes, 12 of which were conditionally essential in RPMI or RPMI/serum cf. MHB (Figure 2). The majority (10/12) of these essential genes were also observed to be essential in the murine abscess model, while five were essential in the skin model. Five genes were essential in both the abscess and skin models.
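The enrichment logic applied throughout this analysis (a hypergeometric test per functional class or GO term, followed by multiple-testing correction) can be sketched as follows; the gene counts below are invented for illustration, and PaIntDB performs the equivalent steps internally:

```python
from scipy.stats import hypergeom

def enrichment_p(total_genes, class_size, selected, overlap):
    """P(X >= overlap) when drawing `selected` genes from `total_genes`,
    of which `class_size` carry the annotation (hypergeometric test)."""
    return hypergeom.sf(overlap - 1, total_genes, class_size, selected)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p values (FDR correction)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Invented example: 3 functional classes tested against a 110-gene hit list
# drawn from a genome of roughly 5,600 genes (both numbers illustrative).
tests = {
    "membrane functions":  enrichment_p(5600, 420, 110, 14),
    "nucleotide metabolism": enrichment_p(5600, 150, 110, 6),
    "transcription":       enrichment_p(5600, 300, 110, 6),
}
names, raw = list(tests), list(tests.values())
for name, p_adj in zip(names, benjamini_hochberg(raw)):
    print(f"{name}: adjusted p = {p_adj:.3g}")
```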
Interestingly, most of the genes in these pathways were downregulated by up to 4-fold when compared to MHB, which was consistent with the findings of Turner et al. in murine chronic and acute wounds compared to MOPS-succinate (Turner et al., 2014). Although a smaller number of genes involved in the biosynthesis of cofactors, prosthetic groups, and carriers were indicated to be essential for survival in vivo, cobalamin biosynthesis was conditionally important and was also enriched in the RNA-Seq data (Belanger et al., 2020) in host-mimicking in vitro media (Figure 3). A subnetwork of 15 genes/proteins was obtained; of these, six were conditionally required in RPMI and/or RPMI/serum, two in both abscess and skin, and one in only abscess or skin, respectively. Notably, the cobD, cobQ, cobO, and cobC genes were essential for growth in multiple physiologically-relevant conditions tested, with cobD found to be essential in all conditions cf. MHB (Table 2). These genes were also downregulated under physiologically-relevant conditions when compared to MHB. Therefore, integration of the RNA-Seq and Tn-Seq data indicated the global importance of the nucleotide and cobalamin synthesis pathways under physiological conditions, likely accentuated by the general downregulation of these pathways.

Alternative nucleotide biosynthesis genes are upregulated in physiologically-relevant conditions

Since nucleotide metabolism was both essential and downregulated in physiologically-relevant media, we hypothesized that a lack of the precursor metabolites required for the biosynthesis of nucleotides could explain the requirement for, and downregulation of, genes from the nucleotide biosynthetic pathways under physiologically-relevant conditions when compared to MHB. We explored the available RNA-Seq data (Belanger et al., 2020) and found that genes required for the utilization of precursor metabolites for pyrimidines and purines were indeed upregulated (Supplementary Table S9). These included the arcC (Baur et al., 1989) and ansB (Holcenberg et al., 1978) genes, encoding elements of the arginine metabolic pathway catalyzing the production of carbamoyl phosphate from glutamine, which can then be used in pyrimidine metabolism, as well as the glutamate catabolic genes pauABCD (Yao et al., 2011) and the histidine catabolic genes hutGHIU, which are involved in the production, from proteins, of amino acids previously proposed to be available in the host environment (Turner et al., 2014). Interestingly, genes involved in histidine metabolism were also essential under physiologically-relevant conditions, and it is possible that increased catabolism of arginine and histidine could be shuttled into increased glutamine production, which could then be used in purine and pyrimidine metabolism. Similarly, an alternative gene proposed to be involved in pyrimidine synthesis, panE, was also upregulated under these conditions, providing a potential alternative path to pyrimidine synthesis in the host environment. These data are thus consistent with the importance of nucleotide biosynthesis in vivo and support the idea that alternative pathways for nucleotide metabolism might be utilized in the host environment due to nutrient limitations.
Confirmation of growth defects in mutants of genes involved in nucleobase and cobalamin biosynthesis in host mimicking in vitro media and in vivo

To validate growth defects of selected essential genes belonging to the same pathways as genes identified as conditionally essential, we utilized ordered Tn-insertion mutants from the PAO1 library (Liberati et al., 2006). We selected all mutants for genes in nucleotide metabolism and cobalamin biosynthesis that were found in the ordered Tn library. These included pyrC, pyrD, pyrE, pyrQ, pyrR, purK, and carA for nucleotide metabolism, and cobD, cobL, and cobP for cobalamin metabolism. In addition, the iron acquisition gene pchF and the secretion/toxin system genes exsE, toxA, and lipA were included to validate the importance of iron acquisition and virulence. Mutants were first tested for growth in physiologically-relevant in vitro media cf. MHB in a 96-well microtitre growth setup (Figure 4). Growth defects in RPMI/serum were observed in seven of the 16 mutants, spanning genes involved in toxin secretion and in the metabolism of nucleobases, vitamin B12, and iron, with the largest defects observed for pyrD, pyrE, tadD, cobD, cobL and pchF. The remaining nine mutants had no growth defects in either medium. The lack of a growth defect despite conditional essentiality might be due to a competitive disadvantage in the Tn-Seq studies, cf. other mutants in the pool, that was not present when the strains were grown as isolated mutants.

[Figure 2 caption] Nucleotide metabolism genes were predicted as conditionally essential, enriched and differentially expressed in RPMI cf. MHB. Conditionally essential genes in RPMI and/or RPMI/serum were integrated with RNA-Seq from P. aeruginosa grown in RPMI, and mapped to PPI networks in P. aeruginosa. A network of enriched gene ontology (GO) terms for nucleotide metabolism and biosynthesis is visualized. Genes indicated with a square were essential in RPMI and/or RPMI/serum but not MHB. Genes that were also essential in abscesses cf. MHB and/or the human skin model cf. MHB are indicated with orange arrows.

Four mutants of interest were selected for more detailed assessment, including mutants for the nucleobase metabolism genes pyrD::lacZ and pyrE::phoA, the cobalamin metabolism gene cobD::phoA, and the negative regulator of T3SS, exsE::lacZ, as a virulence control. Growth kinetics were first investigated by inoculation of each of these mutants and WT PAO1 into media flasks at a starting concentration of 1 × 10^6 CFU/ml. Cultures were grown for 20 h at 37°C and plated after 3, 6, 9 and 20 h to assess CFU/ml (Figure 5A). The exsE mutant showed no significant change in growth, while the pyrD, pyrE and cobD mutants had significant defects in growth in RPMI/serum, with 5.0-fold (p = 0.006), 3.9-fold (p = 0.005), and 1.7-fold (p = 0.02) less bacteria cf. WT, respectively. Additionally, 2.5 × 10^7 CFU of each mutant or WT were inoculated subdermally in mice to determine their ability to survive and form an abscess after 18 h (Figures 5B,C). The exsE mutant showed no growth defect and in fact demonstrated a slight but significant advantage in both CFU counts (2-fold more than WT, p = 0.03) and pathology (1.4-fold larger abscess than WT, p = 0.02). The pyrD and pyrE mutants demonstrated significantly decreased survival in vivo (2.8-fold, p = 0.02, and 29.4-fold, p = 0.005, less bacteria in abscesses cf.
WT, respectively) as well as decreased pathology, based on their ability to form abscesses (1.7- and 2.3-fold smaller abscesses than WT for the pyrD (p = 0.0001) and pyrE (p = 7 × 10^-5) mutants, respectively). Despite the fact that the cobD insertional mutant had only slightly decreased growth in vitro when compared to WT, in the murine model it exhibited 2.1-fold (p = 0.0008) and 7.9-fold (p = 0.006) decreases in abscess size and bacterial counts, respectively. To determine the effects that these mutations had on survival in the human skin organoid model, Tn mutants were mixed with WT PAO1 at a starting ratio of 1:1 and inoculated onto developed skin. The ratio of mutant to WT bacteria was used to assess the competitive index (CI) of the mutants after 48 h of incubation on the skin (Figure 5D). A CI lower than one indicates a relative competitive fitness defect for that organism. As a negative control, we utilized a dnaG::ISlacZ/hah Tn mutant with an insertion at the end of this gene that did not disrupt the activity of this generally essential gene; it demonstrated no effect on growth rate in the skin organoid model, with a CI of 0.82 that was not significantly different from WT (using a one-sided t-test). The two mutants for nucleobase metabolism genes had the greatest growth defects in the human skin organoid model, with CIs of 0.02 for pyrD::lacZ (p = 0.0002) and 0.01 for pyrE::phoA (p = 0.00005). The cobD mutant was also significantly reduced in growth in the skin model compared to WT, with a CI of 0.21 (p = 0.004).

[Figure 3 caption] Cobalamin metabolism genes were predicted as conditionally essential, enriched and differentially expressed in RPMI cf. MHB and identified as conditionally essential for growth in physiologically-relevant conditions. Conditionally essential genes in RPMI and/or RPMI/serum were integrated with RNA-Seq from P. aeruginosa grown in RPMI cf. MHB, and mapped to PPI networks in P. aeruginosa. A network of enriched GO terms for cobalamin metabolism and biosynthesis is visualized. Genes that were also essential in abscesses cf. MHB and/or the human skin model cf. MHB are indicated with orange arrows.

Discussion

This study contributes to our understanding of genetic functions in P. aeruginosa that are needed to survive in physiologically-relevant in vitro media, in a murine chronic wound model, and in human skin infections. Exploration of these essential functions in P. aeruginosa was achieved by first generating a saturated Tn-Seq pool in PAO1 to enable comparison of genes conditionally important for survival in vitro and in vivo. Sequencing of the T0 Tn-Seq pool predicted 654 essential genes in P. aeruginosa when combining the two analysis tools Transit and Tradis. This number was larger than those found in three of six published datasets on P. aeruginosa (Lee et al., 2015; Turner et al., 2015), but was similar to that for three other studies (Liberati et al., 2006; Skurnik et al., 2013; Poulsen et al., 2019) examining essentiality of P. aeruginosa under different conditions (Table 1). Despite this study being performed in different media, and using a different strain and different analysis tools than some previous studies, all essential gene lists in this study and in previous research were considered to have significant overlap (Supplementary Dataset 1) and were composed of genes with similar overall functional properties (Supplementary Table S2). These data suggest that essential genes in P.
aeruginosa are primarily involved in core metabolism, transcription and translation, and cell integrity, and that the analysis method used in this study is a robust predictor of classes of genes important for survival.

[Figure 4 caption] Growth defects of Tn-insertion mutants from the ordered PAO1 Tn mutant library in genes identified as important for survival in physiologically-relevant media and in murine abscess or the human skin model. Mutants harbouring Tn5 IS50L derivative Tn insertions ISlacZ/hah or ISphoA/hah (indicated as Tn5ISlacZ or Tn5ISphoA) were grown in both RPMI/serum and MHB for 16 h in 96-well plates and OD600 was measured every 30 min. Growth curves in RPMI/serum for mutants that showed a growth defect in RPMI/serum but not in MHB (indicated by the green boxes on the left) are shown on the right and compared to WT PAO1 grown in the same media.

Genes conditionally important in the physiologically-relevant media RPMI and/or RPMI/serum, in the murine abscess in vivo model, and in the human skin organoid model (Figure 1) were predicted by identifying all surviving mutants of the Tn pool in each of these conditions, compared to that obtained in the nutrient rich laboratory medium MHB, commonly used for measuring antimicrobial susceptibility (Wiegand et al., 2008). A key finding from this research was that the genes conditionally required for survival in host mimicking in vitro conditions and in vivo were significantly overlapping, and that many of these genes were also identified in previous studies (Table 2; Supplementary Table S6). A total of 110 genes were predicted to be important for survival under all of our physiologically-relevant conditions, but were not essential in MHB (Figure 1).

[Figure 5 caption] Deficiencies of mutants identified as important for growth in RPMI/serum, murine abscess and/or a human skin model. (A) Growth defect compared to WT when grown in RPMI/serum. *p < 0.05, **p < 0.01, ***p < 0.001 indicate significantly different from WT using two-way ANOVA. (B) Survival and (C) abscess formation in the murine abscess model. *p < 0.05, **p < 0.01, ***p < 0.001 indicate significantly different from WT using one-way ANOVA. (D) Competitive fitness of mutants in the human skin organoid model, cf. WT, measured as competitive index after inoculation with equal numbers of WT and mutant on the skin. *p < 0.05, **p < 0.001 indicate significantly different from 1 using a one-sample t-test.

This number represented approximately one third of the genes uniquely important in each of the skin model and the murine abscess model, when compared to MHB, and almost half of the genes uniquely important for survival in physiologically-relevant in vitro media. The commonly important genes included those encoding proteins involved in membrane integrity, transport of small molecules, secretion, amino acid and nucleotide metabolism, and synthesis of cofactors. Remarkably, many of the genes in these functional classes that were implicated as important for in vivo survival have also been directly or indirectly implicated as being important for antibiotic resistance or susceptibility in rich medium (Poole, 2007; Breidenstein et al., 2008; Kong et al., 2010; Zamorano et al., 2010), and have also been implicated in previous studies performed on both laboratory and clinical strains in other host environments (Turner et al., 2014; Poulsen et al., 2019).
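The overlap analysis behind Figure 1 reduces to simple set operations on the per-condition hit lists; a minimal sketch follows, with placeholder gene identifiers rather than the study's actual lists.

    # Sketch of the overlap logic: genes conditionally required in every
    # physiologically-relevant condition (110 in the study) and genes unique
    # to a single condition. Gene sets below are invented placeholders.
    rpmi_hits = {"pyrD", "pyrE", "carA", "cobD", "mdcC", "pchF"}
    rpmi_serum_hits = {"pyrD", "pyrE", "carA", "cobD", "mdcC", "exsE"}
    abscess_hits = {"pyrD", "pyrE", "carA", "cobD", "mdcC", "toxA"}
    skin_hits = {"pyrD", "pyrE", "carA", "cobD", "mdcC", "tadD"}

    conditions = {"RPMI": rpmi_hits, "RPMI/serum": rpmi_serum_hits,
                  "abscess": abscess_hits, "skin": skin_hits}

    common = set.intersection(*conditions.values())
    print("required in all conditions:", sorted(common))

    for name, hits in conditions.items():
        others = set.union(*(h for n, h in conditions.items() if n != name))
        print(name, "unique:", sorted(hits - others))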
These overlapping observations support the idea that the altered antimicrobial susceptibility that bacteria exhibit under in vitro host mimicking conditions (Belanger et al., 2020), and that observed in humans (Cornforth et al., 2018), is somehow linked to a common requirement for certain functional pathways and genes for survival under these conditions. Virulence and transport genes predicted to be essential in vivo were dysregulated during growth in the murine abscess model (Pletzer et al., 2017b), and we found that secretion systems, iron acquisition genes and toxins were similarly upregulated between in vivo and host mimicking in vitro conditions by comparison with previously published studies (Supplementary Table S7). Furthermore, Turner et al. (2014) demonstrated that genes involved in amino acid metabolism, transport of organic ions, and energy conversion and production were also significantly dysregulated in acute and chronic wounds when compared to minimal MOPS-succinate medium. The alterations in gene expression under the host mimicking conditions utilized here shared the greatest similarity with the chronic wound RNA-Seq data (Supplementary Table S7). Additionally, Tn-Seq predicted that genes involved in the T3SS were important for survival in RPMI/serum, abscesses, and the human skin model (Table 2). Consistent with this, the importance of the T2SS and T3SS in virulence in vivo has been well demonstrated in P. aeruginosa (Ball et al., 2002; Pletzer et al., 2017b). Intriguingly, an insertional inactivation mutant for the negative regulator of T3SS, exsE (Rietsch et al., 2005), did not show significantly altered growth rates in RPMI/serum (Figure 4) but led to increased virulence and abscess size in the murine abscess model (Figure 5). This supports the conclusion that the regulation of T3SS is essential for prolonged survival/virulence of P. aeruginosa during infection, and that a complete lack of negative regulation by ExsE can result in extremely virulent infections. Microbial metabolic pathways engaged in the survival of P. aeruginosa in the host environment included the metabolism of amino acids, nucleotides, and cofactors. The estimated concentrations of amino acids in blood plasma and serum differ widely depending on the study, the diet of individuals, and the time at which samples are taken (Krebs, 1950; Fukuda et al., 1984; Corso et al., 2017). However, compared to nutrient rich MHB (composed of beef extract, casein, and starch), most if not all amino acids are severely lacking (Supplementary Table S8). This suggests that there are crucial differences in amino acid availability and utilization in host mimicking conditions cf. MHB that are of interest in comparisons to genes essential in vivo. Furthermore, it was previously proposed (Turner et al., 2014) that certain metabolites were poorly available or not available in the host environment during bacterial wound infections, including the amino acids glutamine, tyrosine, phenylalanine, aspartic acid and asparagine, purine nucleotides and potentially pyrimidine nucleotides, lysine, and methionine. Indeed, we identified here that pathways involved in the biosynthesis of most of the least-available amino acids in vivo, as well as purines and pyrimidines, were required for P. aeruginosa survival, both in vivo and under physiologically-relevant in vitro conditions cf. MHB. The importance of purine and pyrimidine metabolism genes found here corroborates previous studies, which include those performed on P.
aeruginosa grown in acute and chronic murine wounds (Turner et al., 2014) and in fetal bovine serum (Poulsen et al., 2019), on K. pneumoniae grown in human serum (Weber et al., 2020), and on Escherichia coli grown in human blood (Samant et al., 2008). P. aeruginosa is capable of de novo pyrimidine synthesis using genes in the Pyr and Car pathways (Supplementary Figure S2) and also contains homologs (PA4396-PA4399) of genes from the secondary pyrimidine pathway of Salmonella (Frodyma and Downs, 1998). Purine biosynthesis uses the same backbone as pyrimidines (phosphoribosyl pyrophosphate; PRPP), and production of both pyrimidine and purine nucleotides is positively regulated by glutamine and negatively regulated by intermediates of arginine and nucleotide biosynthesis (O'Donovan and Neuhard, 1970). Additionally, we demonstrated that synthesis of cobalamin, a cofactor that was previously indicated to be potentially available in the host (Turner et al., 2014), was conditionally important for survival in our in vivo and in vivo-like models and under physiologically-relevant in vitro medium conditions when compared to nutrient rich medium. Vitamin B12, or cobalamin, is important for enzymatic activities in bacterial cells, including transmethylation, methionine synthesis, ribonucleotide reduction, and anaerobic ethanolamine, glycerol and propanediol fermentation (Rodionov et al., 2003). In P. aeruginosa, certain ribonucleotide reductases that catalyze the formation of deoxyribonucleotides from ribonucleotides also require a cobalamin cofactor (Crespo et al., 2018). Pseudomonas can aerobically synthesize cobalamin de novo using aminolaevulinic acid (ALA), threonine and dimethylbenzimidazole as precursors, and is also predicted to be able to salvage it from cobinamide (Fang et al., 2017; Supplementary Figure S3). Analysis of previous Tn-Seq datasets identified that cob genes were also conditionally important for survival in murine wound models (Turner et al., 2014), but were not required in bovine serum (Poulsen et al., 2019) or sputum-containing media (Lee et al., 2015; Turner et al., 2015; Supplementary Table S6). By combining Tn-Seq data with previously published RNA-Seq data (Belanger et al., 2020), we observed that the enriched genes involved in nucleotide biosynthesis and cobalamin biosynthesis were significantly downregulated in RPMI/serum when compared to MHB (Figures 2, 3). This downregulation was also observed in murine wounds, where nucleobase synthesis genes were previously considered to be important for fitness and a purF mutant was deficient for virulence in acute infections (Turner et al., 2014). Intuitively, one would expect that genes that are important for survival under particular circumstances might also be upregulated in that condition; however, research comparing global expression and phenotypic data has repeatedly found that there is no consistent correlation either way (Turner et al., 2014; Evans, 2015; Cain et al., 2020). Although these pathways are important for survival in host conditions, the nutrient limiting environment of the host lacks precursors for nucleotide and cobalamin synthesis via the traditional routes (Jaishankar and Srivastava, 2017). Analysis of RNA-Seq data (Belanger et al., 2020) from P. aeruginosa grown in RPMI/serum cf. MHB indicated that histidine and glutamine catabolism were upregulated, possibly in an effort to produce metabolites that could be utilized in nucleotide synthesis.
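The "required yet downregulated" pattern discussed here is, in data terms, a cross-tabulation of Tn-Seq essentiality against RNA-Seq fold change. A minimal sketch of that classification follows; the gene sets and fold-change values are illustrative assumptions, not the study's data.

    # Sketch: cross-tabulate conditional essentiality (Tn-Seq) against
    # differential expression (RNA-Seq). All values below are invented.
    def classify(gene, essential_set, log2fc):
        if gene in essential_set and log2fc < 0:
            return "required & downregulated"     # e.g., pyr/car/cob pattern
        if gene in essential_set:
            return "required, up/unchanged"
        return "dispensable in this condition"

    essential_in_rpmi_serum = {"pyrD", "pyrE", "carA", "cobD", "cobQ"}
    log2fc_vs_mhb = {"pyrD": -1.8, "pyrE": -2.0, "carA": -1.1,
                     "cobD": -0.9, "exsE": 0.3}

    for gene, fc in log2fc_vs_mhb.items():
        print(gene, "->", classify(gene, essential_in_rpmi_serum, fc))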
Alternative nucleotide metabolism genes were conversely upregulated in RPMI/serum when compared to MHB (Supplementary Table S9). Downregulation of cobalamin synthesis pathways in RPMI/serum compared to MHB could be a response to the limitation of precursor metabolites and to the downregulation of processes requiring enzymes that utilize cobalamin as a cofactor. This downregulation might also explain why there were no rescue phenotypes in the Tn-Seq libraries grown in these conditions. Nevertheless, in these nutrient limiting environments, production of nucleotides and cobalamin were essential functions, and secondary screening experiments validated the importance of these genes for growth in RPMI/serum, murine abscesses, and the human skin organoid model (Figures 4, 5). This research demonstrates that in physiologically-relevant media containing human serum, P. aeruginosa exhibits strongly overlapping gene requirements for survival when compared to in vivo infections. The medium also accurately reflects the in vivo dysregulation of genes involved in iron uptake, secretion systems and toxin production when compared to nutrient rich conditions. Pathways involved in amino acid and nucleotide biosynthesis were validated for their importance in vivo, and vitamin B12/cobalamin biosynthesis was demonstrated to be a conditionally important function for P. aeruginosa survival under physiologically-relevant conditions and in vivo. In addition to increasing our understanding of pathogenesis and survival in P. aeruginosa infection, this study highlights opportunities for progress in therapeutic development. Overlapping essentiality between in vivo and host mimicking conditions supports the use of physiologically-relevant media as a robust screen for antimicrobials (Belanger and Hancock, 2021).

Data availability statement

The data presented in the study are deposited in the Gene Expression Omnibus as a BioProject with the accession number GSE214167.

Ethics statement

The animal study was performed following Canadian Council on Animal Care (CCAC) guidelines and was reviewed and approved by The University of British Columbia Animal Care Committee (certificate number A14-0363).

Author contributions

CB and RH conceived the study. Most of the experimental design, experimentation, data analysis, writing of the paper and editing was performed by CB. MD significantly contributed to experimental design and Tn-Seq library construction. Library preparation and sequencing of Tn-Seq experiments were performed by CB, MD, and RF. Bioinformatic analysis of Tn-Seq results was performed with the assistance of TB, AL, and BD. NR, DP, CB, and MD performed murine experiments. NA, BW, CB, and MD performed human skin organoid model experiments. JC-A assisted with PaIntDB analysis. CH assisted with Tn-Seq project design. RH obtained funding for this study and contributed to the writing of the paper. All authors edited and approved the manuscript.

Funding

We gratefully acknowledge funding to RH from the Canadian Institutes for Health Research grant FDN-154287. RH holds a UBC Killam Professorship. CB was funded by a Cystic Fibrosis Canada graduate fellowship award #498801. MD was supported by the Graduate Award Program of the Centre for Blood Research, UBC.
Reflection principle and Ocone martingales

Let $M=(M_t)_{t\geq 0}$ be any continuous real-valued stochastic process. We prove that if there exists a sequence $(a_n)_{n\geq 1}$ of real numbers which converges to 0 and such that $M$ satisfies the reflection property at all levels $a_n$ and $2a_n$ with $n\geq 1$, then $M$ is an Ocone local martingale with respect to its natural filtration. We state the subsequent open question: is this result still true when the property only holds at levels $a_n$? Then we prove that the latter question is equivalent to the fact that for Brownian motion, the $\sigma$-field of the events invariant by all reflections at levels $a_n$, $n\geq 1$, is trivial. We establish similar results for skip-free $\mathbb{Z}$-valued processes and use them for the proof in continuous time, via a discretisation in space.

Introduction and main results

Local martingales whose law is invariant under all integral transformations preserving their quadratic variation were first introduced and characterized by Ocone [2]. Namely, a continuous real-valued local martingale $M=(M_t)_{t\geq 0}$ with natural filtration $F=(\mathcal F_t)_{t\geq 0}$ is called Ocone if
$$\int_0^{\cdot} H_s\,dM_s \overset{\mathcal{L}}{=} M \qquad\qquad (1.1)$$
for every $H$ in the class $\mathcal H$ of $F$-predictable processes taking values in $\{-1,+1\}$. In the primary paper [2], the author proved that a local martingale is Ocone whenever it satisfies (1.1) for all processes $H$ belonging to the smaller class $\mathcal H_1$ of deterministic processes with values in $\{-1,+1\}$. A natural question, for which we sketch out an answer in this paper, is to describe minimal sub-classes of $\mathcal H$ characterizing Ocone local martingales through relation (1.1). For instance, it is readily seen that the subset $\{(1\!{\rm I}_{[0,u]}(t)-1\!{\rm I}_{]u,+\infty[}(t))_{t\geq 0},\ u\in E\}$ of $\mathcal H_1$ characterizes Ocone martingales if and only if $E$ is dense in $[0,\infty)$.

Let us denote by $\langle M\rangle$ the quadratic variation of $M$. In [2] it was shown that for continuous local martingales, (1.1) is equivalent to the fact that, conditionally on the $\sigma$-algebra $\sigma\{\langle M\rangle_s,\ s\geq 0\}$, $M$ is a Gaussian process with independent increments. Hence a continuous Ocone local martingale is a Brownian motion time changed by any independent nondecreasing continuous process. This is actually the definition we will refer to all along this paper. When the continuous local martingale $M$ is divergent, i.e. $\langle M\rangle_\infty=+\infty$ P-a.s., (1.1) is also equivalent to each of the conditions (i)-(iii), where (ii) concerns every $F$-predictable process $H$, measurable for the product $\sigma$-field $\mathcal B(\mathbb R_+)\otimes\sigma(\langle M\rangle)$ and such that $\int_0^\infty H_s^2\,d\langle M\rangle_s<\infty$ P-a.s., and (iii) concerns every deterministic function $h$ of the form $\sum_{j=1}^n\lambda_j 1\!{\rm I}_{[0,a_j]}$. It can be easily shown that the equivalence between (1.1) and (i), (ii), (iii) also holds in the case when $M$ is not necessarily divergent. This fact will be used in the proof of Theorem 1. We also refer to [8] for further results related to the Girsanov theorem and different classes of martingales.

In [3], the authors conjectured that the class $\mathcal H_1$ can be reduced to a single process, namely that (1.1) is equivalent to the invariance of the law of $M$ under its Lévy transform (1.3) [3]. Hence if the Lévy transform of Brownian motion is ergodic, then $B^M$ and $\langle M\rangle$ are independent and (1.3) implies that $M$ is an Ocone local martingale. The converse is also proved in [3]: if (1.3) implies that $M$ is an Ocone local martingale, then the Lévy transform of Brownian motion is ergodic. Several other approaches have been proposed to prove ergodicity of the Lévy transform, but this problem is still open. Among the most accomplished works in this direction, we may cite the papers by Malric [6], [7], who studied the density of zeros of the iterated Lévy transform.
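The displayed formula attached to (1.3) did not survive extraction; for orientation, the Lévy transform and the invariance statement it plausibly denotes can be written in the following standard form (a reconstruction consistent with the surrounding discussion, not a verbatim quotation of [3]):
$$\Theta(M):=\int_0^{\cdot}\operatorname{sign}(M_s)\,dM_s\,,\qquad (1.3)\colon\quad \Theta(M)\overset{\mathcal L}{=}M\,.$$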
Let us also mention that in the discrete time case this problem has been treated in [4], where the authors proved that an equivalent of the Lévy transform for the symmetric Bernoulli random walk is ergodic. In this paper we exhibit a new sub-class of $\mathcal H_1$ characterizing continuous Ocone local martingales, which is related to first passage times and the reflection property of stochastic processes. If $M$ is the standard Brownian motion and $T_a(M)$ the first passage time at level $a$, i.e.
$$T_a(M)=\inf\{t\geq 0: M_t=a\}\,, \qquad\qquad (1.4)$$
where here and in all the remainder of this article we make the convention that $\inf\{\emptyset\}=+\infty$, then for all $a\in\mathbb R$ the path reflected at time $T_a(M)$ has the same law as $M$. It is readily checked that this identity in law actually holds for any continuous Ocone local martingale. This property is known as the reflection principle at level $a$ and was first observed for symmetric Bernoulli random walks by André [1]. We will use this terminology for any continuous stochastic process $M$ and, when no confusion is possible, we will denote by $T_a=T_a(M)$ the first passage time at level $a$ by $M$, defined as above. Let $(\Omega,\mathcal F,F,{\rm P})$ be the canonical space of continuous functions endowed with its natural right-continuous filtration $F=(\mathcal F_t)_{t\geq 0}$ completed by the negligible sets of $\mathcal F=\bigvee_{t\geq 0}\mathcal F_t$. The family of transformations $\Theta_a$, $a\geq 0$, is defined for all continuous functions $\omega\in\Omega$ by
$$\Theta_a(\omega)_t=\omega_t\,1\!{\rm I}_{\{t\leq T_a(\omega)\}}+(2a-\omega_t)\,1\!{\rm I}_{\{t>T_a(\omega)\}}\,,\quad t\geq 0\,. \qquad\qquad (1.5)$$
Note that $\Theta_a(\omega)=\omega$ on the set $\{\omega: T_a(\omega)=\infty\}$. When $M$ is a local martingale, $\Theta_a(M)$ can be expressed in terms of a stochastic integral, i.e. $\Theta_a(M)=\int_0^{\cdot}(1\!{\rm I}_{\{s\leq T_a\}}-1\!{\rm I}_{\{s>T_a\}})\,dM_s$, so the corresponding family of integrands is a subclass of $\mathcal H$ which provides a family of transformations preserving the quadratic variation of $M$, and we will prove that it characterizes Ocone local martingales. But the fact that the transformations $\omega\mapsto\Theta_a(\omega)$ are defined for all continuous functions $\omega\in\Omega$ allows us to characterize Ocone local martingales in the whole set of continuous stochastic processes, as our main result shows.

Theorem 1. Let $M=(M_t)_{t\geq 0}$ be a continuous stochastic process defined on the canonical probability space, such that $M_0=0$. If there exists a sequence $(a_n)_{n\geq 1}$ of positive real numbers such that $\lim_{n\to\infty}a_n=0$ and for all $n\geq 1$
$$\Theta_{a_n}(M)\overset{\mathcal L}{=}M \quad\text{and}\quad \Theta_{2a_n}(M)\overset{\mathcal L}{=}M\,,$$
then $M$ is an Ocone local martingale with respect to its natural filtration. Moreover, if $T_{a_1}<\infty$ a.s., then $M$ is a divergent local martingale.

In an attempt to identify the sequences $(a_n)_{n\geq 1}$ which characterize Ocone local martingales, we obtained the following theorem. Let $a=(a_n)_{n\geq 1}$ be a sequence of real numbers with $\lim_{n\to\infty}a_n=0$ and let $\mathcal I_a$ be the sub-$\sigma$-field of the sets invariant by all the transformations $\Theta_{a_n}$, i.e. $\mathcal I_a=\{A\in\mathcal F:\ \Theta_{a_n}^{-1}(A)=A\ \text{for all}\ n\geq 1\}$.

Theorem 2. The following statements are equivalent: (i) every continuous stochastic process $M$ on the canonical space with $M_0=0$ satisfying $\Theta_{a_n}(M)\overset{\mathcal L}{=}M$ for all $n\geq 1$ is an Ocone local martingale; (ii) the sub-$\sigma$-field $\mathcal I_a$ is trivial for the Wiener measure on the canonical space $(\Omega,\mathcal F)$.

Remark 2. It follows from Theorems 1 and 2 that if the sequence $(a_n)$ contains a subsequence $(2a_{n'})$ (this holds, for instance, when $(a_n)$ is a dyadic sequence), then the sub-$\sigma$-field $\mathcal I_a$ is trivial for the Wiener measure on $(\Omega,\mathcal F)$. So, our open question is equivalent to: is the sub-$\sigma$-field $\mathcal I_a$ trivial for any sequence $(a_n)$ decreasing to zero?

In the next section, we prove analogous results for skip free processes. We use them as preliminary results to prove Theorem 1 in Section 3. In Section 2.2, we give counterexamples in the discrete time setting, related to Theorem 3. Finally, in Section 4, we prove Theorem 2.

2 Reflecting property and skip free processes

2.1 Discrete time skip free processes

A discrete time skip free process $M$ is any measurable stochastic process with $M_0=0$ and, for all $n\geq 1$, $\Delta M_n=M_n-M_{n-1}\in\{-1,0,1\}$. This section is devoted to an analogue of Theorem 1 for skip free processes.
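Before developing the discrete theory, a small numerical illustration (not from the paper) of the reflection property for the symmetric Bernoulli random walk may help: reflecting each path after its first passage time at a level leaves the law of the walk unchanged, so simple statistics of the reflected ensemble should match those of the original.

    # Empirical check that reflection at a level preserves the law of a
    # symmetric Bernoulli random walk. Parameters are arbitrary choices.
    import random

    def walk(n):
        # Symmetric Bernoulli random walk of length n, started at 0.
        s, path = 0, [0]
        for _ in range(n):
            s += random.choice((-1, 1))
            path.append(s)
        return path

    def reflect(path, a):
        # Discrete reflection at level a: unchanged before T_a, mirrored after.
        try:
            T = next(k for k, x in enumerate(path) if x == a)
        except StopIteration:
            return path[:]                  # T_a = +infinity: path unchanged
        return path[:T + 1] + [2 * a - x for x in path[T + 1:]]

    random.seed(1)
    paths = [walk(50) for _ in range(20000)]
    mean_end = sum(p[-1] for p in paths) / len(paths)
    mean_end_reflected = sum(reflect(p, 2)[-1] for p in paths) / len(paths)
    print(mean_end, mean_end_reflected)     # both close to 0: same law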
To each skip free process $M$ we associate the increasing process
$$[M]_n=\sum_{k=1}^n(\Delta M_k)^2\,,$$
which is called the quadratic variation of $M$. In this section, since no confusion is possible, we will use the same notations for discrete processes as in the continuous time case. For every integer $a\geq 0$, we denote by $T_a$ the first passage time by $M$ of the level $a$, $T_a=\inf\{k\geq 0: M_k=a\}$. We also introduce the inverse process $\tau$, which is defined by $\tau_0=0$ and, for $n\geq 1$, $\tau_n=\inf\{k\geq 0:\ [M]_k=n\}$, and we set $S^M=(M_{\tau_n})_{n\geq 0}$. We recall that skip free martingales are just skip free processes which are martingales with respect to some filtration. It is well known that for any divergent skip free martingale $M$, that is, one satisfying $\lim_{n\to+\infty}[M]_n=+\infty$ a.s., the process $S^M$ is a symmetric Bernoulli random walk on $\mathbb Z$. This property is the equivalent of the Dambis-Dubins-Schwartz theorem for continuous martingales. In discrete time, the proof is quite straightforward and we recall it now. A first step is the equivalent of Lévy's characterization for skip free martingales: any skip free martingale $S$ such that $S_{n+1}-S_n\neq 0$ for all $n\geq 0$ (or equivalently, whose quadratic variation satisfies $[S]_n=n$) is a symmetric Bernoulli random walk. Indeed, for $n\geq 1$, $S_1, S_2-S_1,\dots,S_n-S_{n-1}$ are i.i.d. symmetric Bernoulli r.v.'s if and only if for any subsequence $1\leq n_1\leq\cdots\leq n_k\leq n$
$$\mathbb E\bigl[(S_{n_1}-S_{n_1-1})\cdots(S_{n_k}-S_{n_k-1})\bigr]=0\,,$$
and this identity can be easily checked from the martingale property. Finally, call $F=(\mathcal F_n)_{n\geq 0}$ the natural filtration generated by $M$. Since $[M]_n$ is an $F$-adapted process, from the optional stopping theorem $S^M$ is a martingale with respect to the filtration $(\mathcal F_{\tau_n})_{n\geq 0}$ and, since its increments cannot be 0, we conclude from Lévy's characterization. We recall also the following important property: any skip free process which is a symmetric Bernoulli random walk time changed by an independent nondecreasing skip free process is a local martingale with respect to its natural filtration. This leads to the definition:

Definition 1. A discrete Ocone local martingale is a symmetric Bernoulli random walk time changed by any independent increasing skip free process.

We emphasize that in this particular case Definition 1 coincides with the general definition of Ocone [2]. It should also be noticed that the symmetric Bernoulli random walk of Definition 1 is not necessarily the same as in (2.7). It coincides with $S^M$ if $M$ is a divergent process. If $M$ is not divergent, then it can be obtained from the initial one by pasting of an independent symmetric Bernoulli random walk (see Lemma 3); otherwise the independence can fail. A counterpart of the transformations $\Theta_a$ defined in (1.5) is given for skip free processes, for all integers $a\geq 0$, by
$$\Theta_a(M)_n=M_n\,1\!{\rm I}_{\{n\leq T_a\}}+(2a-M_n)\,1\!{\rm I}_{\{n>T_a\}}\,,\quad n\geq 0\,.$$
Again, in the following discrete time counterpart of Theorem 1, we characterize discrete Ocone local martingales in the whole set of skip free processes.

Theorem 3. Let $M$ be any discrete skip free process. Assume that for all $a\in\{0,1,2\}$, $\Theta_a(M)\overset{\mathcal L}{=}M$; then $M$ is a discrete Ocone local martingale with respect to its natural filtration. If in addition $T_1<\infty$ a.s., then $M$ is a divergent local martingale.

The proof of Theorem 3 is based on the following crucial combinatorial lemma concerning the set $\Lambda_m$ of sequences of partial sums of elements in $\{-1,+1\}$ with length $m\geq 1$, i.e. $\Lambda_m=\{s=(s_0,s_1,\dots,s_m):\ s_0=0,\ \Delta s_k\in\{-1,+1\}\}$. For each sequence $s\in\Lambda_m$ and each integer $a$, we define $T_a(s)=\inf\{k\geq 0: s_k=a\}$ and the subset $\Lambda_m^a$ of $\Lambda_m$.

Lemma 1. Let $m\geq 1$ be fixed. For any two elements $s$ and $s'$ of the set $\Lambda_m$ such that $s\neq s'$, there are integers $a_1,a_2,\dots,a_k\in\{0,1,2\}$, depending on $s$ and $s'$, such that (2.10) $\Theta_{a_k}\cdots\Theta_{a_1}(s)=s'$. Moreover, the integers $a_1,\dots,a_k$ can be chosen so that $s\in\Lambda_m^{a_1}$ and …

Proof.
The last property follows from the simple remark that, for $s\in\Lambda_m$, we have $\Theta_a(s)=s$ if and only if $s\in\Lambda_m^a$. So, for the rest of the proof we suppose that all transformations used verify the above property. Let $\bar s^{(m)}$ be the sequence of $\Lambda_m$ defined by $\bar s^{(m)}_1=1$ and … First we prove that the statement of the lemma is equivalent to the following one: for any sequence $s$ of $\Lambda_m$ such that $s\neq\bar s^{(m)}$, there are integers $c_1,\dots,c_q\in\{0,1,2\}$ such that (2.11) $\Theta_{c_q}\cdots\Theta_{c_1}(s)=\bar s^{(m)}$. Indeed, suppose that the latter property holds and let … We notice that the transformations $\Theta_a$ are involutive, i.e. for all $x\in\Lambda_m$, $\Theta_a(\Theta_a(x))=x$, which implies (2.10). The fact that (2.10) implies (2.11) is obvious.

Now we prove (2.11) by induction on $m$. It is not difficult to see that the result is true for $m=1$, 2 and 3. Suppose that the result is true up to $m$ and let $s\in\Lambda_{m+1}$ be such that $s\neq\bar s^{(m+1)}$. For $j\leq m$, we call $s^{(j)}$ the truncated sequence $s^{(j)}=(s_0,s_1,\dots,s_j)\in\Lambda_j$. From the hypothesis of induction, there exist $b_1,b_2,\dots,b_p\in\{0,1,2\}$ satisfying (2.14). Then, let us consider separately the case where $m$ is even and the case where $m$ is odd. If $m$ is even and $\Delta s_m\Delta s_{m+1}=-1$, then we obtain the required identity directly: from (2.14), none of the transformations $\Theta_{b_{i-1}}\cdots\Theta_{b_1}$, $i=2,\dots,p$, affects the last step of $s$, so the identity follows from (2.13). If $m$ is even and $\Delta s_m\Delta s_{m+1}=1$, then from the hypothesis of induction there exist $d_1,d_2,\dots,d_r\in\{0,1,2\}$ which, from the above remark, may be chosen so that (2.15) holds. Then, by applying the transformation $\Theta_2$, we obtain … Hence, from (2.15), and since none of the transformations $\Theta_{d_{r-i}}\cdots\Theta_{d_r}$, $i=0,1,\dots,r-1$, affects the last step of $(s^{(m-1)},2,1)$, we have … Finally, from (2.13) and (2.14), the induction hypothesis is true at the order $m+1$ when $m$ is even.

The proof when $m$ is odd is very similar and we will pass over some of the arguments in this case. If $m$ is odd and $\Delta s_m\Delta s_{m+1}=-1$, then we obtain the identity directly. If $m$ is odd and $\Delta s_m\Delta s_{m+1}=1$, then from the hypothesis of induction there exist $d_1,d_2,\dots,d_r\in\{0,1,2\}$ such that … and, by performing the transformation $\Theta_1\Theta_0\Theta_1=\Theta_{-1}$, we conclude from (2.13) and (2.14), which ends the proof of the lemma.

In the proof of Theorem 3, for technical reasons, we have to consider two cases: $T_1<\infty$ a.s. and ${\rm P}(T_1=\infty)>0$. Lemma 2 proves that in the first case $M$ is a divergent process. The random time $T$ is defined with the convention $\inf\{\emptyset\}=+\infty$. We denote the extension of the process $M$ by $X$, where for all $k\geq 0$ … Note that $X=M$ on the set $\{T=\infty\}$.

Proof. We show that the reflection property holds for $X$. To this aim, we consider the two processes $Y$ and $Z$ such that for all $k\geq 0$ …, and we write the same kind of decomposition for $X$. In view of (2.23) and (2.24), to obtain $X\overset{\mathcal L}{=}\Theta_a(X)$ it is sufficient to show that for all bounded and measurable functionals $F$ … Since reflection is a transformation which preserves the quadratic variation of the process, the random time $T$ can be defined as a functional of $Y$ as well as a functional of $Z$. So we see that the last equality is equivalent to …, and hence the process $M$ is itself an Ocone martingale by Definition 1.

2.2 Counterexamples

A similar representation is valid for $M$. To see that the laws of $\Theta_1(M)$ and $M$ are equal, it is convenient to pass to the increments of the corresponding processes. Actually, $M$ is constructed as follows: $M_0=0$, $M_1=\varepsilon_0$ and, for all $k\geq 1$ and $n\in[2^k,2^{k+1}-1]$, the increments $M_n-M_{n-1}$ have the sign of $\varepsilon_k$.
In particular, the increments of $(M_n)$ are $-1$ or $1$ and since, from the discussion at the beginning of Section 2, the only skip free local martingale with such increments is the symmetric Bernoulli random walk, it is clear that $M$ is not an Ocone local martingale.

2.3 Continuous time lattice processes

As a preliminary result for the proof of Theorem 1, we state an analogue of Theorem 3 for continuous time lattice processes. We say that $M=(M_t)_{t\geq 0}$ is a continuous time lattice process if $M_0=0$ and if it is a pure jump càdlàg process whose jumps $\Delta M_t=M_t-M_{t-}$ verify $|\Delta M_t|=\eta$ for some fixed real $\eta>0$. We denote by $(\tau_k)_{k\geq 1}$ the jump times of $M$, i.e. $\tau_0=0$ and, for $k\geq 1$, $\tau_k=\inf\{t>\tau_{k-1}:\ M_t\neq M_{\tau_{k-1}}\}$ with $\inf\{\emptyset\}=\tau_{k-1}$, for all $t\geq 0$ and P-a.s. The quadratic variation of $M$ is given by
$$[M]_t=\sum_{s\leq t}(\Delta M_s)^2\,.$$
Note that $\tau_k$ admits the equivalent definition $\tau_k=\inf\{t\geq 0:\ [M]_t=k\eta^2\}$. We define the time changed discrete process $S^M$ by $S^M=(M_{\tau_k})_{k\geq 0}$, which has values in the lattice $\eta\mathbb Z$. In particular, we obtain that $S^M$ is a symmetric Bernoulli random walk on the lattice $\eta\mathbb Z$ which is independent of $[M]$. It means that it is a local martingale with respect to its own filtration. Finally, when $T_\eta<\infty$ a.s., $M$ is a divergent local martingale since $N$ is so.

3 Proof of Theorem 1

Let $(\Omega,\mathcal F,F,{\rm P})$ be the canonical space of continuous functions with filtration $F$ satisfying the usual conditions. Let $M$ be a continuous stochastic process which is defined on this space and satisfies the assumptions of Theorem 1. Without loss of generality, we suppose that the sequence $(a_n)$ is decreasing.

Proof of Theorem 1. First of all, we note that since the map $x\mapsto\Theta_x(\omega)$ is continuous on $C(\mathbb R_+,\mathbb R)$, the hypotheses of this theorem imply that $\Theta_0(M)\overset{\mathcal L}{=}M$, i.e. $M$ is a symmetric process. Now, fix a positive integer $n$. We define the continuous lattice valued process $M^n$ by using a discretisation with respect to the space variable. To this aim, we introduce the sequence of stopping times $(\tau^n_k)_{k\geq 0}$, i.e. $\tau^n_0=0$ and, for all $k\geq 1$,
$$\tau^n_k=\inf\{t>\tau^n_{k-1}:\ |M_t-M_{\tau^n_{k-1}}|=a_n\}\,,$$
with $\inf\{\emptyset\}=\tau^n_{k-1}$. Then $M^n=(M^n_t)_{t\geq 0}$ is defined by $M^n_t=M_{\tau^n_k}$ for $\tau^n_k\leq t<\tau^n_{k+1}$. We can easily check that $M^n$ is a continuous time lattice process verifying the assumptions of Proposition 1. Therefore, according to this proposition, $M^n$ is a continuous time lattice Ocone local martingale. From the construction of $M^n$ we have the almost sure inequality (3.31): $\sup_{t\geq 0}|M_t-M^n_t|\leq a_n$. Since the properties (i) and (iii) given in the introduction are equivalent, it is sufficient to verify that for every deterministic function $h$ of the form $\sum_{j=1}^k\lambda_j 1\!{\rm I}_{]t_{j-1},t_j]}$ with $t_0=0<t_1<\cdots<t_k$ we have (3.32). Then, in order to obtain (3.32), we will show by straightforward calculations that (3.33) holds. To prove (3.33) we first write (3.34), where $\Delta M^n_{t_j}=M^n_{t_j}-M^n_{t_{j-1}}$, $\Delta S_{r_j}=S_{r_j}-S_{r_{j-1}}$ and $r_j=a_n^{-2}[M^n]_{t_j}$, $1\leq j\leq k$. Since $S$ and $[M^n]$ are independent and $\mathbb E[\exp(ia\Delta S_k)]=\cos(a)$ for all $a\in\mathbb R$, we have the corresponding product expression, where $u^n_j=\lfloor a_n^{-2}\omega_{t_j}\rfloor$, $j=0,1,\dots,k$, and $\lfloor x\rfloor$ is the lower integer part of $x$. Moreover, it is not difficult to see that (3.35)
$$\lim_{n\to\infty}\ \prod_{j=1}^k\,[\cos(\lambda_j a_n)]^{(u^n_j-u^n_{j-1})}=\exp\Bigl(-\tfrac12\sum_{j=1}^k\lambda_j^2(\omega_{t_j}-\omega_{t_{j-1}})\Bigr)$$
uniformly on compact sets of $\mathbb R^k_+$. Then, the expression (3.34) and the convergence relations (3.31), (3.35) imply (3.33).

4 Proof of Theorem 2

In what follows we assume, without loss of generality, that the process $M$ is divergent. We begin with the following classical result of ergodic theory, a proof of which may be found in [3], Lemma 1.
Let $(\Omega,\mathcal F,F,{\rm P})$ be the canonical space of continuous functions endowed with its natural right-continuous filtration $F=(\mathcal F_t)_{t\geq 0}$ completed by the negligible sets of $\mathcal F=\bigvee_{t\geq 0}\mathcal F_t$.

Lemma 4. Let $\Theta$ be a measurable transformation of $\Omega$ to $\Omega$ which preserves ${\rm P}$. A random variable $X\in L^2(\Omega,\mathcal F,{\rm P})$ is a.s. invariant by $\Theta$ if and only if $\mathbb E[X\,(Y\circ\Theta)]=\mathbb E[XY]$ for all $Y\in L^2(\Omega,\mathcal F,{\rm P})$.

Let $\Theta_n$, $n\geq 1$, be a family of transformations defined on the canonical space of continuous functions $(\Omega,\mathcal F,F,{\rm P})$. Let $\mathcal I$ be the sub-$\sigma$-algebra of the events invariant by all the transformations $\Theta_n$, $n\geq 1$, i.e. $\mathcal I=\{A\in\mathcal F:\ \Theta_n^{-1}(A)=A\ \text{for all}\ n\geq 1\}$. The following lemma extends Theorem 1 in [3].

Proof. The proof follows almost along the lines of that of Theorem 1 in [3]. We first prove that (j) implies (jj).

Proof of Theorem 2. If (ii) holds then, from Lemma 5, $B^M$ and $\langle M\rangle$ are independent, so (i) holds. Let us prove that (i) implies (ii). Suppose that (ii) fails. We show that (i) fails, too. Namely, we show that one can construct a continuous martingale $M=B_A$, where $B$ is a standard Brownian motion and $A$ is a non-decreasing continuous adapted process, such that $M$ verifies the reflection properties of (i) but is not an Ocone martingale. Let $X$ be a non-trivial $B^{-1}(\mathcal I_a)$-measurable bounded random variable. Call $(\mathcal F^B_t)$ the natural filtration generated by $B$. Let $N_t=\mathbb E(X\,|\,\mathcal F^B_t)$ for all $t\geq 0$ and $N=(N_t)_{t\geq 0}$. We remark that $N$ is an $(\mathcal F^B_t)$-martingale invariant by all the transformations $(\Theta_{a_n})$. Now, we can construct a finite non-constant stopping time $T$ which is invariant by all the transformations $\Theta_{a_n}$ by setting $T=\inf\{t\geq t_0\ |\ N_t\in K\}$, where $t_0$ is large enough and $K$ is a suitable Borel set. For instance, we can choose $K$ such that ${\rm P}(X\in K)\geq 2/3$. Since $N_t\to X$ a.s. as $t\to\infty$, we can find $t_0$ such that for $t\geq t_0$, ${\rm P}(N_t\in K)\geq 1/2$. Finally, for $\alpha>0$, let us define the following increasing process
ON LEUCAENA LEUCOCEPHALA AS ONE OF THE MIRACLE TIMBER TREES

Leucaena leucocephala trees are commonly known as the White Lead tree. The species is native to Southern Mexico and Northern Central America and has spread across many tropical and sub-tropical locations. It has multipurpose uses, such as the generation of firewood, timber, greens, fodder, and green manure, as well as providing shade and controlling soil erosion. It has been used for medicinal purposes because it possesses multiple pharmacological properties. Studies have shown the presence of various secondary metabolites such as alkaloids, cardiac glycosides, tannins, flavonoids, saponins, and glycosides in this species. In traditional medicine, it is used to control stomach ache and as contraception and abortifacient. In the present study, the global distribution, taxonomy, chemical composition, pharmacological activities, and potential uses of Leucaena leucocephala are discussed.

INTRODUCTION

Leucaena leucocephala (Family: Fabaceae) is a small, fast-growing tree that has multiple common names by which it is known, such as White Lead tree, White Popinac, Jumbay, and Wild Tamarind [1]. It is native to Southern Mexico and Northern Central America and diffused in over 35 countries across all continents except Antarctica (table 1) [2].

Botanical description

The leaves of L. leucocephala are compound pinnate, with the pinnular rachis 5-10.2 cm long in general; they are bipinnate with 6-8 pairs of pinnae bearing 9-20 pairs of leaflets, linear-lanceolate, 8-15 mm long, 2-4.5 mm wide, slightly asymmetric, acute at the tip, linear-oblong to weakly elliptic, glabrous except on the margins, rounded to obtuse at the base. L. leucocephala leaves fold up due to heat, cold or lack of water [10]. The paired inflorescences of axillary globose heads measure between 12 and 20 mm in diameter, with the peduncle length measuring between 2 and 3 cm, and numerous flowers are produced. The axillaries are on long stalks, white in color, in dense globose heads measuring 1-2 cm across; the fruit is a pod with a raised border, flat and thin, becoming dark brown and hard when mature, 10-15 cm long, 1.6-2.5 cm wide, dehiscent at both sutures, and each legume contains 15-20 hard, shiny, brown seeds that are flat and tear-drop shaped. This species is a polyploid with 2n = 104 chromosomes [11-15]. Flowering phenology of L. leucocephala varies widely among varieties and with respect to their growing location. However, it can flower all year round [16]. L. leucocephala starts flowering within 4 to 6 mo of seed germination. Usually, the flowering period is seasonal or twice a year. The spherical whitish flower heads are 2 to 2.5 cm in diameter, with 100 to 180 flowers per head, 2 to 6 in leaf axils per group, arising on actively growing young shoots. The color of the flowers can be white or pale cream-white, and they are borne on stalks 2 to 3 cm long at the ends or sides of twigs [8,16]. Pods measure from 11 cm to 19 cm long and 15 mm to 21 mm wide, number 5 to 20 per flower head, and are linear-oblong in shape, rounded at the apex, flat, 8- to 18-seeded, mid- to orange-brown, glabrous and slightly lustrous, with white velvety hairs, papery, opening along both margins. The seeds are hard, dark brown with a hard shining testa, measuring from 6.7 mm to 9.6 mm long and 4 mm to 6.3 mm wide, aligned transversely in the pod. From the first year onward, the fruits grow in abundance. The seeds are small (8 mm long), shiny, teardrop-shaped, flat and dark brown with a thin but fairly durable seed coat. There are about 17,000 to 21,000 seeds per kilogram [17,18].
The dispersal agents of the seeds in pastures are legumes, wind, ruminants, and non-ruminants. Legumes can release the seeds while they are still on the tree. The wind can carry the seeds some distance. Ruminants and non-ruminants can eat the legumes and then disperse the seeds through their fecal matter [17,18].

Harvested fruits and germination requirements

Leucaena leucocephala fruits are harvested from branches when they change color to dark brown, before dehiscence. The fruits are sun-dried after harvest and then threshed to release the seeds by beating the dried legumes in cloth bags [16]. L. leucocephala seeds can then be stored as un-scarified or scarified seeds. Unscarified seeds can be stored for more than one year under dry conditions at ambient temperature and for up to 5 y when stored dried at 2 °C to 6 °C. In contrast, scarified seeds can be stored for 6 mo to a year [16,19,20]. The harvested seeds of L. leucocephala should be decontaminated from larvae by fumigation, exposing the seeds to 32 g/m³ methyl bromide for 2 h at 27 °C. Although seeds can be sown directly after harvest without pre-sowing treatment, seed germination in that scenario is very low. Therefore, to increase the rate of germination, one of the pre-sowing treatments such as scarification, hot water treatment and/or sulfuric acid treatment should be used. Soaking L. leucocephala seeds in 100 °C water for 20 s and subsequently in water at room temperature for 48 h gave the highest seed germination rate, higher cumulative germination percentage (CGP) and a shortened complete dormancy period (CDP) when compared to the germination rate of seeds soaked for only 24 h or untreated seeds [21]. Soaking L. leucocephala seeds in hot water at 80 °C for 3-4 min followed by soaking in water at room temperature for 12 h, or soaking L. leucocephala seeds in concentrated sulfuric acid for 15-30 min, are the best pre-sowing treatments that can be used to increase seed germination of L. leucocephala. However, scarification is the most effective treatment that can be used with pre-sowing inoculation of seeds, as it facilitates good field establishment of nitrogen-fixing rhizobium bacteria in soil devoid of rhizobia strains [20]. Germination percentages of L. leucocephala seeds are 50% to 98% for fresh seeds [8,19]. The complete dormancy period is 6 to 10 d after sowing for scarified seeds and 6 to 60 d after sowing for unscarified seeds [20]. The sowing of L. leucocephala seeds should be on or near the soil surface, and not any deeper than 2 cm. For growth in the nursery, the growing medium should be well drained, have proper nutrient and water holding capacity, and have a pH between 5.5 and 7.5 [16]. Light shade is recommended during seedling development and full sun thereafter [16]. In young seedlings, taproot development is rapid, and seedlings reach a plantable size of 20 cm in height in 2 to 3 mo [16,22]. Weeding in plantations, until the seedlings outgrow competing grasses or herbaceous competitors in plant biomass, is recommended [16]. Direct seeding, planting of container seedlings, bare-root seedlings and stem cuttings of 2 to 5 cm in diameter can be used as methods of plantation development [23]. It grows moderately rapidly, but not as fast as the giant variety for which most of the data are available [24]. The animal feeding pattern of Leucaena in Indonesia was studied by Lowry [27]. L. leucocephala has been known as a high-potential fodder for several centuries.
Its nutritional value is comparable or superior to that of alfalfa (Medicago sativa), with a high ß-carotene content [24]. The leaves of L. leucocephala are most commonly used to feed chickens and pigs and are processed as a pellet for freshwater fish. The dry matter digestibility (DMD) of L. leucocephala was 57.7%, and crude protein on a dry matter basis was 29.5% [28]. Forage quality of L. leucocephala is higher than that of other Leucaena species such as L. pallida and L. diversifolia, as stated by Castillo et al. [29]. Leaves of L. leucocephala contained 6.70% moisture, 22.76% crude protein, 22.29% crude fibre, 4.60% fat, and 9.73% ash [30]. In another study, by El-Baha [31], leaves were reported to contain the highest percentage of minerals (12.5% and 14.0%), pods the highest percentage of crude protein (33.0% and 30.9%), twigs the highest percentage of crude fiber (31.5% and 37%) and calcium (1.9% and 2.1%), and dry seeds the highest percentage of crude fat (7.2% and 10.1%) and nitrogen-free extract (55.9% and 58.8%) for the 2- and 4-year-old plants, respectively.

Use of Leucaena leucocephala as ruminant feeds

Forage containing 40% to 60% L. leucocephala leaves gave maximum weight gain in rabbits, goats, sheep, and cows. Rushkin [32] reported that "L. leucocephala is palatable forage, digestible and serves to increase milk output in both the humid and the monsoonal tropics for ruminants and non-ruminants. However, when L. leucocephala is fed at levels above 7.5% (dry mass) of the diet, nonruminants lost weight and had general health problems due to the mimosine toxicity." When L. leucocephala leaves are used in a rationed manner for fattening cattle, they are equivalent to cottonseed cake [33] and superior to groundnut cake [34]. In Queensland, Australia, very high live-weight gains were recorded using L. leucocephala leaves [32,34,35], and the same has been observed in several other places [36]. Several reports showed that L. leucocephala could be a substitute for the imported protein supplements fed to dairy cows [32]. Dairy cattle produce well when fed with L. leucocephala [32,37]. Henke and Morita [38] reported that dairy cows produce milk with a higher fat content when they are fed with L. leucocephala compared to similar cows fed on pasture and concentrates or ammoniated rice straw in a grass-based diet. In Australia, Hawaii and Indonesia, annual milk production of 5,000 to 9,700 L/ha was recorded [32]. Feeding cows and buffaloes on L. leucocephala foliage at 10% of their diets produced a milk yield 20% higher than that of the control group [39]. Jones [40] reported that feeding dairy cows on L. leucocephala foliage increases milk fat and protein contents and also increases milk production by 14% on average. Dairy cows grazing Brachiaria decumbens with L. leucocephala produce higher milk yields than cows fed only grass. However, the use of L. leucocephala for cattle feeding has problems due to mimosine toxicity. Symptoms of mimosine toxicity include infertility, decreased weight gain, goiter, cataracts in young animals, and loss of hair [41]. Cattle fed completely on L. leucocephala will not die but may lose some of their coarse hairs. However, newborn calves have shown signs of enlarged thyroids, which may result in death within a few days if their mothers show signs of toxicity [42]. In addition, thyroxine levels were reported to be higher in a group (10 mo of age) fed an exclusive diet of L. leucocephala for 23 mo [43]. In sheep, L.
leucocephala provides very palatable forage. Higher performance in sheep was noted when they were fed dried L. leucocephala leaves at levels between 25% and 50% of grass hay [44,45]. In periods of diet scarcity, sheep can be fed higher amounts of dried L. leucocephala leaves [45,46]. Leaf meal or fresh leaves increase DM intake, protein intake and N retention, thus improving the growth performance of sheep, and are therefore suitable to replace concentrate or ammoniated rice straw in the grass-based diet of sheep [47,48]. Feeding ruminant animals on L. leucocephala foliage increased survival rate and growth rate, for instance in lambs [49-51], rams [52,53], and ewes [54]. In goats, L. leucocephala provides very palatable, digestible, and nutritious forage. L. leucocephala gives better dry matter intake, weight gain, and reproductive performance than other legumes such as alfalfa, Lablab purpureus, and Gliricidia sepium [54-57]. In a grass-based diet for goats, L. leucocephala dry matter foliage can be included at 30% to 75% [58-60], and it does not affect the goats' growth and milk production [60]. Fresh or wilted L. leucocephala is better than dried L. leucocephala leaves in terms of dry matter intake, growth rate and nitrogen utilization [61]. Angora goats fed on natural pastures with 45% L. leucocephala leaf meal showed higher crude protein intake, weight gain and fibre growth [62,63].

Leucaena leucocephala as non-ruminant feeds

Ruminant animals are better able to tolerate mimosine than non-ruminants; therefore, L. leucocephala cannot be a major portion of the non-ruminant diet. Non-ruminants can tolerate rations that contain up to 5% to 10% L. leucocephala (dry weight) [32]. The best rations for growing pigs were 5% to 10% L. leucocephala leaf meal [64,65]. To improve nitrogen retention, L. leucocephala was treated with acetic acid (30 g/kg) or zeolite (5%), after which up to 20% L. leucocephala leaves or leaf meal can be used to feed pigs [66,67]. Using up to 40% of L. leucocephala leaves in camel rations reduced feed conversion efficiency [67]. In poultry, 5%, 20% and 30% of L. leucocephala leaf meal in the diet caused a decline in feed intake, weight gain and egg production [68-70]. These low performances may be due to mimosine toxicity or poor amino acid digestibility [71]. Inclusion of 5% L. leucocephala leaf meal in broiler rations gave higher feed conversion [72]. Up to 15% of roasted L. leucocephala leaves can be included in rations with no decline in animal performance [73]. In laying hens, 6% of L. leucocephala leaf meal in rations is recommended [74]. L. leucocephala can be used to reduce feed costs and to improve animal performance and yolk colour through the xanthophyll extracted from L. leucocephala leaves [75]. Feeding rabbits on fresh or dried L. leucocephala or leaf meal improves animal performance. The inclusion of 24% to 40% fresh L. leucocephala leaves is recommended for growing or fattening rabbits [76-81]. L. leucocephala can replace alfalfa (Medicago sativa) concentrate in the diet of rabbits [82]. L. leucocephala is more palatable than Arachis pintoi; 25% of L. leucocephala leaf meal can be included in a diet supplemented with cassava peels and Gliricidia sepium, and 30% to 40% with Arachis pintoi [83]. However, when more than 10% to 15% dried L.
leucocephala was included in the diet, replacing wheat bran, growth in rabbits decreased [84]. Inclusion of 20% to 25% fresh L. leucocephala leaves in the diet resulted in up to 55% mortality of female and young rabbits [85,86]. For fish, a few studies have used L. leucocephala leaf meal as a protein source in fish feeds, and the data obtained are conflicting. Hossain et al. [87] revealed improved growth responses of Clarias gariepinus (African catfish) on diets containing 30% L. leucocephala leaf meal. However, Santiago et al. [88] obtained a slow growth rate of C. macrocephalus (Asian catfish) on diets in which 30% of the fish meal was replaced by L. leucocephala leaf meal.

Leucaena leucocephala as human food

Almost every part of the L. leucocephala species has been consumed as human food since the era of the Mayans [6]. In Indonesia, Thailand, and Central America, people eat the young leaves, flowers, and young pods in soups [6]. In the Philippine Islands, the young pods are cooked as a vegetable, and roasted seeds are used as a substitute for coffee. The young dry seeds are popped like popcorn [6]. In Indonesia, Thailand, Mexico and Central America, people also eat the young leaves, flowers, and young pods as ingredients for soups and salads. Seeds are being considered as non-conventional sources of protein, together with other leguminous seeds [6]. In addition, it is one of the medicinal plants used to control stomach ache and as contraception and abortifacient.

Phytochemical studies

The phytochemical screening of leaf extract of L. leucocephala revealed the presence of various secondary metabolites such as alkaloids, cardiac glycosides, tannins, flavonoids, saponins and glycosides [3]. Bioactivity studies on this plant revealed its anthelmintic, antibacterial, anti-proliferative and antidiabetic activities [8]. The L. leucocephala leaves possess many biological properties, such as antimicrobial, anticancer, cancer-preventive, diuretic, anti-inflammatory, antioxidant, antitumor, antihistaminic, nematicidal, pesticidal, antiandrogenic, hypocholesterolemic and hepatoprotective activities (table 2) [6]. L. leucocephala seeds have great medicinal properties and are used to control stomachache and as contraception and abortifacient. The seed gum is used as a binder in tablet formulation [6]. A sulfated glycosylated form of polysaccharides from the seeds was reported to possess significant cancer chemo-preventive and antiproliferative activities [1]. The extracts of the seeds have been reported to be anthelmintic and antidiabetic and to have broad-spectrum antibacterial activity [1]. Recently, the seed oil was used in engineering as a novel bio-device useful in biomembrane modelling for lipophilicity determination of drugs and xenobiotics [1]. The plant is reported to be a worm repellent. L. leucocephala seed extracts have antioxidant activity, which is likely due to their phenolic content. Applications of this extract should be considered carefully, as it can affect renal function by reducing the levels of albumin, ALP and total protein [91].

Antidiabetic activity

L. leucocephala has been reported to possess medicinal properties that control stomach diseases, facilitate abortion and provide contraception, and it is often used as an alternative, complementary treatment for diabetes [25]. Leaf and seed extracts also have antidiabetic activity [90]. An aqueous extract derived from its boiled seeds was taken orally to treat Type-2 diabetes [92]. The seed extract from L.
leucocephala inhibits elevated blood glucose and lipid levels and increases the number of pancreatic islets [93]. Active fractions from L. leucocephala seeds have been reported to have antidiabetic activity [94]. Moreover, the seed extract exhibits antidiabetic and antioxidant activities and can be used for the treatment of diabetes without affecting hepatic function, although there is an impact on renal function [91]. In Indonesia, an aqueous extract derived from boiling the seeds of L. leucocephala is taken orally to treat type-2 (NIDDM) diabetes and is claimed to be efficacious [95]. Antimicrobial activity L. leucocephala seed oil extract showed concentration-dependent activity against both Gram-positive (Staphylococcus aureus, Bacillus subtilis) and Gram-negative (Pseudomonas aeruginosa, Escherichia coli) bacteria, and a lotion formulation with an emulsifying agent had good pharmaceutical properties [96]. The crude extract of L. leucocephala leaves exhibits anti-tubercular activity, which supports the folkloric use of this plant [97]. Anti-inflammatory activity Anti-inflammatory properties of the chloroform, ethyl acetate and methanol extracts of L. leucocephala leaves have been reported [6]. Antitumor activity Hexane, petroleum ether, ethyl acetate and methanol extracts of L. leucocephala leaves showed antitumor activity [6]. Wood uses of L. leucocephala Its uses have expanded to gum production, furniture and construction timber, pole wood, and pulpwood [3,24]. CONCLUSION L. leucocephala is one of the miracle timber trees, with multipurpose uses including beneficial pharmacological properties. Phytochemical studies revealed the presence of various secondary metabolites such as alkaloids, cardiac glycosides, tannins, flavonoids, saponins and glycosides. Its seeds have great medicinal value and are used to control stomachache and as a contraceptive and abortifacient. The seed gum is used as a binder in tablet formulation, and seed extracts are used as anthelmintics and antidiabetics and have broad-spectrum antibacterial activity. To date, no information is available about the pharmacological activities of the flower, fruit, bark, branch wood, stem and root of L. leucocephala, which warrant further study.
Semiclassical shell-structure micro-macroscopic approach for the level density
Level density $\rho(E,A)$ is derived for a one-component nucleon system with a given energy $E$ and particle number $A$ within the mean-field semiclassical periodic-orbit theory, beyond the saddle-point method of the Fermi gas model. We obtain $\rho \propto I_\nu(S)/S^\nu$, with $I_\nu(S)$ being the modified Bessel function of the entropy $S$. Within the micro-macro-canonical approximation (MMA), for a small thermal excitation energy $U$ with respect to rotational excitations $E_{\rm rot}$, one obtains $\nu=3/2$ for $\rho(E,A)$. In the case of excitation energy $U$ larger than $E_{\rm rot}$ but smaller than the neutron separation energy, one finds the larger value $\nu=5/2$. The role of the fixed spin variables for rotating nuclei is discussed. The MMA level density $\rho$ reaches the well-known grand-canonical ensemble limit (Fermi gas asymptotics) for large $S$, related to large excitation energies, and also reaches the finite micro-canonical limit for small combinatorial entropy $S$ at low excitation energies (the constant "temperature" model). Fitting the MMA $\rho(E,A)$ to the experimental data at low excitation energies, taking into account shell and, qualitatively, pairing effects, one obtains for the inverse level density parameter $K$ a value which differs essentially from that derived from data on neutron resonances.
I. INTRODUCTION
Many properties of heavy nuclei can be described in terms of the statistical level density. A well-known old example is the description of neutron resonances using the level density. Usually, the level density ρ(E, A), where E and A are the energy and nucleon number, respectively, is given by the inverse Laplace transformation of the partition function Z(β, α). Within the grand canonical ensemble, the standard saddle-point method (SPM) is used for integration over all variables, including β, which is related to the total energy E [2,4]. This method assumes large excitation energies U, so that the temperature T is related to a well-determined saddle point in the integration variable β for a finite Fermi system of large particle number. However, data from many experiments for energy levels and spins also exist in regions of low excitation energy U, where such a saddle point does not exist. For the presentation of experimental data on nuclear spectra, the cumulative level-density distribution (the cumulative number of quantum levels below the excitation energy U) is often conveniently used for statistical analysis [24-26] of the experimental data on collective excitations [26-29]. For calculations of this cumulative level density, one has to integrate the level density over a large interval of the excitation energy U. This interval extends from small values of U, where there is no thermodynamic equilibrium (and the temperature has no meaning), to large values of U, where the standard grand canonical ensemble can be successfully applied in terms of the temperature T in a finite Fermi system. Therefore, to simplify the calculations of the level density ρ(E, A), we will, in the following, carry out the integration over the Lagrange multiplier β in the inverse Laplace transformation of the partition function Z(β, α) more accurately, beyond the SPM [30-32]. However, for a nuclear system with large particle number A, one can apply the SPM for the variable α, related to A.
The case of neutron-proton asymmetry of the Fermi system will be worked out separately. Thus, for the remaining integration over β we shall approximately use the micro-canonical ensemble, which assumes neither a temperature nor the existence of thermodynamic equilibrium. Notice that there are other methods to overcome the divergence of the full SPM in the low excitation-energy limit U → 0; see Refs. [18,21,33-35]. The well-known method suggested in Ref. [34] is applied successfully to the partition function of the extended Thomas-Fermi (ETF) theory at finite temperature to obtain the smooth level density and free energy; see also Refs. [35] and [36], and references therein. For the formulation of the unified microscopic canonical and macroscopic grand-canonical approximation (MMA) to the level density, we will find a simple analytical approximation for the level density ρ which satisfies two well-known limits. One of them is the Fermi gas asymptote, ρ ∝ exp(S), for large entropy S. The other limit is the combinatorics expansion in powers of S for a small entropy S or excitation energy U, always at large particle numbers A; see Refs. [2,7,37,38]. The empiric formula ρ ∝ exp[(U − E_0)/T], with free parameters E_0 and T and a preexponent factor, was suggested for the description of the excited low energy states (LESs) in Ref. [3]. Later, this formula was named the constant "temperature" model (CTM), where the "temperature" is considered an "effective temperature" related to the excitation energy (with no direct physical meaning of temperature for LESs); see also Refs. [21,22]. We will show below that the MMA has the same power expansions as the CTM for LESs at small excitation energies U. We will also show that, within the MMA, the transition between these two limits is sufficiently rapid when considered over the dimensionless entropy variable S. Therefore, our aim is to derive approximately a simple, statistically averaged analytical expression for the level density ρ(S) with the correct two limits, mentioned above, for small and large values of S. Such an MMA for the level density ρ was suggested in Refs. [30,31], in terms of the modified Bessel function of the entropy variable, for the case of small excitation energy U as compared to the rotational energy E_rot. The so-called "classical rotation" of a spherical or axially symmetric nucleus, i.e., an alignment of nucleon angular momenta along the symmetry axis, was considered on the basis of the periodic-orbit theory (POT) with a fixed angular momentum and its projection (see Ref. [39]), in contrast to the collective rotation around a perpendicular axis [40,41]. The yrast line was defined to be at zero excitation energy for a given angular momentum within the cranking model [42,43]. One of the important characteristics of the yrast line is the moment of inertia (MI). The Strutinsky shell-correction method (SCM) [44,45], extended by Pashkevich and Frauendorf [46] to the description of nuclear rotational bands, was applied [30,31] to studying the shell effects in the MI near the yrast line. We will extend the MMA approach [30], which considers the yrast line as a minimum of the nuclear level density (minimum excitation energy), to the description of shell and collective effects in terms of the level density itself for larger excitation energies U. The level density parameter a is one of the key quantities under intensive experimental and theoretical investigation; see, e.g., Refs. [1-5, 7-9, 14, 23].
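Although the coefficients ρ_ν are derived only later in the paper, the claimed two-limit behavior of the Bessel-function form ρ ∝ I_ν(S)/S^ν can be checked numerically. The following is a minimal Python sketch, for illustration only; the overall normalization ρ_ν is left arbitrary:

```python
import numpy as np
from scipy.special import iv, gamma

def rho_mma(S, nu, rho_nu=1.0):
    """MMA level density shape rho ~ rho_nu * I_nu(S) / S**nu (arbitrary units)."""
    return rho_nu * iv(nu, S) / S**nu

S = np.array([0.01, 0.1, 1.0, 5.0, 20.0])
for nu in (1.5, 2.5):
    exact = rho_mma(S, nu)
    # Small-S (combinatorial) limit: I_nu(S)/S^nu -> 1 / (2^nu Gamma(nu+1)), finite
    small = np.full_like(S, 1.0 / (2.0**nu * gamma(nu + 1.0)))
    # Large-S (Fermi gas) asymptote: exp(S) / (sqrt(2*pi*S) * S^nu)
    large = np.exp(S) / (np.sqrt(2.0 * np.pi * S) * S**nu)
    print(nu, exact, small, large)
```

For S ≪ 1 the exact values approach the finite combinatorial constant, while for S ≫ 1 they approach the exponential Fermi gas asymptote, which is exactly the two-limit unification described above.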
Mean values of a are largely proportional to the particle number A. The inverse level density parameter K = A/a is conveniently introduced to exclude this basic mean A dependence of a. Smooth properties of K as a function of the nucleon number A have been studied within the framework of the self-consistent ETF approach [9,20]. However, shell effects in the statistical level density, for instance, are still an attractive subject. This is due to the major shell effects in the distribution of single-particle (s.p.) states near the Fermi surface within the mean-field approach. The nuclear shell effects influence the statistical level density of a heavy nucleus, which is especially important near magic numbers; see Refs. [7,8] and references therein. In the present study, for simplicity, we shall first work out the derivations of the level density ρ(E, A) for a one-component nucleon system, taking into account the shell, rotational and, qualitatively, pairing effects. This work concentrates on LESs of nuclear excitation-energy spectra below the neutron resonances. The paper is organized as follows. The level density ρ(E, A) is derived within the MMA by using the POT in Sec. II. We extend the MMA to large excitation energies U, up to about the neutron separation energy, essentially taking into account the shell effects. Several analytical approximations, in particular for the spin dependence of the level density, are presented in Sec. III. Illustrations of the MMA for the level density ρ(E, A) and the inverse level density parameter K versus experimental data, discussed for typical heavy nuclei, are given in Sec. IV. Our conclusions are presented in Sec. V. The semiclassical POT is described in Appendix A. The level density ρ(E, A, M), derived by accounting for the rotational excitations with fixed projection of the angular momentum M and spin I of nuclei in the case of spherically or axially symmetric mean fields, is given in Appendix B. The full SPM level density derivations, generalized by shell effects, are described in Appendix C.
II. MICROSCOPIC-MACROSCOPIC APPROACH
For a statistical description of the level density of a nucleus in terms of the conservation variables, the total energy E and nucleon number A, one can begin with the micro-canonical expression for the level density,
ρ(E, A) = Σ_i δ(E − E_i) δ(A − A_i) = (2πi)^{−2} ∫ dα ∫ dβ exp[S(β, α)],   (1)
where E_i and A_i represent the system spectrum, and S = ln Z(β, α) + βE − αA is the entropy. Using the mean-field approximation for the partition function Z(β, α), one finds [4]
ln Z(β, α) = Σ_i ln[1 + exp(α − βε_i)] ≈ ∫ dε g(ε) ln[1 + exp(α − βε)],   (2)
where ε_i are the s.p. energies of the quantum states in the mean field. In the transformation from the sum to an integral, we introduced the s.p. level density g(ε) as a sum of the smooth, g̃(ε), and oscillating shell, δg(ε), components, using the SCM (see Refs. [44,45]):
g(ε) = g̃(ε) + δg(ε).   (3)
Within the semiclassical POT [36,50], the smooth and oscillating parts of the s.p. level density g(ε) can be approximated, with good accuracy, by the sum of the ETF level density, g̃ ≈ g_ETF, and the semiclassical PO contribution, δg(ε) ≈ δg_scl, Eq. (A.5). In integrating over α in Eq. (1) for a given β by the standard SPM, we use the expansion of the entropy S(β, α) near the saddle point α = α*,
S(β, α) = S(β, α*) + (1/2) (∂²S/∂α²)_{α=α*} (α − α*)² + … .   (4)
The first-order term of this expansion disappears because the Lagrange multiplier α* is defined by the saddle-point condition
(∂S/∂α)_{α=α*} = 0,  i.e.,  A = (∂ ln Z/∂α)_{α=α*}.   (5)
Introducing, for convenience, the potential Ω = −ln Z/β, one can use its SCM decomposition in terms of the smooth part and shell corrections for the level density g, see Eq. (3) and Ref.
[30], through the partition function ln Z, Eq. (2):
Ω = Ω̃ + δΩ.   (6)
Here, Ω̃ ≈ Ω_ETF is the smooth ETF component [23,30], and Ẽ ≈ E_ETF is the nuclear ETF energy (or the liquid-drop energy). For a given β, the chemical potential λ = α*/β is a function of the particle number A, according to Eq. (5), and λ ≈ λ̃ is approximately equal to the SCM smooth chemical potential. With the help of the POT [36,50,51], one obtains [30] for the oscillating (shell) component δΩ in Eq. (6) the canonical free-energy equivalence
δΩ ≈ δF_scl.   (8)
For the semiclassical free-energy shell correction δF_scl (see Appendix A), we incorporate the POT expression
δF_scl = Σ_PO E_PO x_PO/sinh(x_PO),   (9)
where
x_PO = π t_PO/(ħβ),   (10)
and
E_PO = (ħ/t_PO)² g_PO(λ).   (11)
Here, t_PO = k t_PO^{k=1}(λ) is the period of particle motion along the PO (taking into account its repetition, or period number k), and t_PO^{k=1} is the period of the particle motion along the primitive (k = 1) PO. The period t_PO (and t_PO^{k=1}) and the partial oscillating level-density component g_PO, given by Eq. (A.6), are taken at the chemical potential ε = λ; see also Eqs. (A.5) and (A.6) for the semiclassical s.p. level-density shell correction δg_scl(ε) (see Refs. [36,50]). Notice that the equivalence of the variations of the grand-canonical and canonical ensemble potentials, Eq. (8), is valid approximately, in the corresponding variables, for large particle numbers A. This equivalence has to be valid in the semiclassical POT. Expanding, then, x_PO/sinh(x_PO), Eq. (10), in the shell correction δΩ [Eqs. (8) and (10)] in powers of 1/β² up to the quadratic terms, ∝ 1/β², one obtains
Ω ≈ E_0 − λA − a/β².   (12)
Here E_0 is the ground-state energy, E_0 = Ẽ + δE, and δE is the energy shell correction of a cold nucleus, δE ≈ δE_scl, Eq. (A.14). In Eq. (12), a is the level density parameter, where ã ≈ a_ETF and δa are the ETF and shell-correction components,
a = ã + δa,   (13)
with
ã = (π²/6) g̃(λ),  δa = (π²/6) δg_scl(λ).   (14)
Note that for the ETF components one commonly accounts for self-consistency using Skyrme interactions; see Refs. [20,23,32,36,53,54]. For the semiclassical POT level density δg_scl(λ), one employs the method of Eqs. (A.5) and (A.6); see Refs. [36,40,49-52]. Note that in the grand canonical ensemble the level density parameter a, Eqs. (13) and (14), is a function of the chemical potential λ. We may, generally speaking, include the collective (rotational) component in E_0; see Sec. III E and Appendix B. Substituting Eq. (4) into Eq. (1), and taking the error integral over α in the extended infinite limits including the saddle point α*, one obtains
ρ(E, A) ≈ (1/2πi) ∫ dβ [β/(2πJ)]^{1/2} exp(βU + a/β),   (15)
where U = E − E_0 is the excitation energy, and a is the level density parameter, given by Eqs. (13) and (14). In Eq. (15), J is the one-dimensional Jacobian determinant [a c number, J(λ)] taken at the saddle point over α at α = α* = λβ, Eq. (5):
J(λ) = −(∂²Ω/∂λ²)_*.   (16)
The asterisk means the saddle point for integration over α for any β (here and in the following we omit the superscript asterisk in J). Differentiating the potential Ω, Eq. (6), over λ within the grand-canonical ensemble, we obtain for the smooth part of the Jacobian J̃ = −(∂²Ω_ETF/∂λ²)_* ≈ g_ETF(λ). We note that, for not too large thermal excitations, the main contribution of the oscillating potential component δΩ as a function of λ comes from the differentiation of the sine function in the PO energy shell-correction factor E_PO, Eq. (11), through the PO action phase S_PO(λ)/ħ of the PO level-density component g_PO(λ), Eq. (A.6). The temperatures T = 1/β*, when the saddle point β = β* exists, are assumed to be much smaller than the chemical potential λ.
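The reduction of integrals like Eq. (15) to modified Bessel functions rests on a standard inverse-Laplace identity, quoted here for convenience; this is textbook material, and it is presumably the content of the transformation referred to later as Eq. (B.12):

```latex
% Standard inverse-Laplace identity (Re c > 0), which, after the substitution
% tau = 1/beta, reduces integrals of the type of Eq. (15) to Bessel functions:
\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\! d\tau\,
   \tau^{-\nu-1}\, e^{\,a\tau + U/\tau}
 = \Big(\frac{a}{U}\Big)^{\nu/2} I_\nu\!\big(2\sqrt{aU}\big)
 = (2a)^{\nu}\,\frac{I_\nu(S)}{S^{\nu}},
 \qquad S \equiv 2\sqrt{aU}.
```

Each power of the integration variable in the expanded integrand of Eq. (15) thus produces exactly one term of the form I_ν(S)/S^ν, which is the origin of the MMA structure quoted in the abstract.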
The reason for this assumption is that for large particle numbers A the semiclassical large parameter, S_PO/ħ ∼ A^{1/3}, appears. This leads to a dominating contribution, much larger than that coming from the differentiation of the other terms: the β-dependent function x_PO(β) and the PO period t_PO(λ). Using Eqs. (8), (A.16), and (A.17), one approximately obtains for the oscillating Jacobian part δJ(λ), Eq. (16), the expression
δJ(λ) ≈ Σ_PO g_PO(λ) x_PO/sinh(x_PO),   (17)
where x_PO(β, λ) [through t_PO(λ)] is the dimensionless quantity, Eq. (10), proportional to 1/β. The total Jacobian J(λ) as a function of λ can be presented as
J = J̃ (1 + ξ),   (18)
where ξ(β, λ) is defined by [see also Eqs. (16) and (12)]
ξ = δJ/J̃.   (19)
This approximation was derived for the case when the smooth (E)TF part can be neglected. Notice that the rotational excitations can be included in the ETF part and shell corrections of the potential Ω; see Sec. III E and Appendix B. In that case, Eq. (18) will be similar, but with more complicated expressions for the two-dimensional Jacobian J, especially for its shell component δJ. Substituting now λ, found from Eq. (5) for a given particle number A, one can obtain relatively small thermal and shell corrections to the smooth chemical potential λ̃(A) of the SCM [45]. For simplicity, neglecting these correction terms for large particle numbers, A^{1/3} ≫ 1, one can consider λ as a constant related to the particle-number density of nuclear matter; see Sec. 2.3 of Ref. [4]. Therefore, λ is independent of the particle number A for large values of A.
III. MMA ANALYTICAL EXPRESSIONS
In the linear approximation in 1/β², one finds from Eq. (19) for ξ, using Eq. (10), the expression given by Eqs. (20) and (21); in Eq. (21), D_sh ≈ λ/A^{1/3} is the distance between major shells; see Eq. (A.15). For convenience, introducing the dimensionless energy shell correction E_sh, in units of the smooth ETF energy per particle, E_ETF/A, one can rewrite Eq. (21) in the form of Eq. (22). In the applications below we will use ξ > 0 and E_sh > 0 if δE < 0. The smooth ETF energy E_ETF in Eq. (22) [see Eq. (A.10)] can be approximated as E_ETF ≈ g̃(λ)λ²/2. The energy shell correction δE was approximated, for a major shell structure, with the semiclassical POT accuracy [see Eqs. (A.14) and (11), and Refs. [36,50-52]] by
δE ≈ (D_sh/2π)² δg_scl(λ).   (23)
The correction ∝ 1/β⁴ of the expansion of the Jacobian (18) in 1/β, through the oscillating part δJ, Eq. (17), is relatively small for β which, at the saddle-point values T = 1/β*, is related to the chemical potential λ as T ≪ λ. The higher-order, ∝ 1/β⁴, term of this expansion can be neglected under the condition given by Eq. (24). Using the typical values of the parameters λ = 40 MeV, A = 200, K ≈ 10 MeV, and 1/g̃ ≈ 0.1-0.2 MeV (see Ref. [20]), we may approximately evaluate the right-hand side of Eq. (24) as about 20 MeV. For simplicity, small shell and temperature corrections to λ(A) from the conservation equation (5) are neglected, by using linear shell effects of the leading order [45] and a constant particle-number density of nuclear matter, ρ_0. Taking ρ_0 = 2k_F³/(3π²) = 0.16 fm⁻³, one finds an approximately constant λ = ħ²k_F²/(2µ) ≈ 40 MeV, where µ is the nucleon mass. In the derivation of the condition (24) we used the POT distance between major shells, D_sh, Eq. (A.15). The evaluation of the upper limit for the excitation energy at the saddle point β = β* = 1/T is justified, because this upper limit is always so large that this point certainly exists.
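The numerical estimates just quoted are easy to reproduce. Here is a small Python check of the chemical potential and the major-shell spacing under the stated constants; the nucleon mass value is an assumption of this sketch:

```python
import numpy as np

HBARC = 197.327   # MeV fm
MU_C2 = 938.9     # nucleon rest energy, MeV (assumed standard value)

rho0 = 0.16                                 # fm^-3, nuclear matter density
kF = (3.0 * np.pi**2 * rho0 / 2.0)**(1/3)   # from rho0 = 2 kF^3 / (3 pi^2)
lam = (HBARC * kF)**2 / (2.0 * MU_C2)       # lambda = hbar^2 kF^2 / (2 mu)

for A in (100, 200):
    D_sh = lam / A**(1/3)                   # distance between major shells, Eq. (A.15)
    print(f"A={A}: kF={kF:.3f} fm^-1, lambda={lam:.1f} MeV, D_sh={D_sh:.1f} MeV")
```

This reproduces λ ≈ 40 MeV and D_sh ≈ 7-10 MeV for heavy nuclei, consistent with the values used throughout the text.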
Therefore, for consistency, one can neglect the quadratic, 1/β² (temperature T²), corrections to the Fermi energy ε_F in the chemical potential, λ ≈ ε_F, for large particle numbers A. Under the condition of Eq. (24), one can obtain simple analytical expressions for the level density ρ(E, A) from the integral representation (15), because the Jacobian factor J^{−1/2} in its integrand can be simplified by expanding in small values of ξ or of 1/ξ [see Eq. (20)]. Notice that one has two terms in the Jacobian J, Eq. (18). One of them is independent of the integration variable β and the other is proportional to 1/β². These two terms are connected to those of the potential Ω, Eq. (12), by the inverse Laplace transformation (1) of the partition function (2) and the corresponding direct transformation. Expanding the square root J^{−1/2} in the integrand of the integral representation (15), for small and large ξ at linear order in ξ and 1/ξ, respectively, one arrives at two different approximations, marked below as cases (i) and (ii), respectively. At each finite order of these expansions, one can take the inverse Laplace transformation exactly. The convergence of the corresponding corrections to the level density, Eq. (15), after applying the inverse transformation, Eq. (B.12), will be considered in the next subsections. In the case (i) of small ξ, one has at linear order
J^{−1/2} = [J̃ (1 + ξ)]^{−1/2} ≈ J̃^{−1/2} (1 − ξ/2).   (25)
Substituting this expression for the Jacobian factor J^{−1/2} into Eq. (15), one obtains two terms, which are related to those of the last equation in (25). Due to the transformation of the integration variable β to τ = 1/β in the first term, and using β directly as the integration variable in the second term, they are reduced to the analytical inverse-Laplace form (B.12) for the transformation from τ to the a variable [55]. Thus, one can approximately represent the level density ρ(E, A) as a superposition of two Bessel functions of the orders 3/2 and 1/2,
ρ(E, A) ≈ ρ_{3/2} [S^{−3/2} I_{3/2}(S) + r_1 S^{−1/2} I_{1/2}(S)].   (26)
Here, the coefficient ρ_{3/2} and the small correction amplitude r_1 are given by Eq. (27), where ξ is given in Eq. (21), K = A/a, a is the level density parameter, Eq. (13), and
S = 2√(aU).   (28)
This expression is associated with an entropy in the mean-field approximation because of its two clear asymptotic limits for large and small excitation energies U [both asymptotic limits, in terms of the level density ρ(E, A), will be discussed below]. The relative contribution of the second term in Eq. (26) decreases with the shell effects E_sh, the inverse level density parameter K, and the excitation energy U. In the case (i), referred to below as the MMA1 approach, up to these corrections (small r_1), one arrives approximately at expression (11) of Ref. [32]:
ρ(E, A) ≈ ρ_{3/2} S^{−3/2} I_{3/2}(S),   (29)
where ξ > 0, Eq. (20) (for δE < 0). In the opposite case (ii) of large ξ, the expansion of the Jacobian factor at linear order in 1/ξ reads
J^{−1/2} ≈ (J̃ ξ)^{−1/2} [1 − 1/(2ξ)].   (30)
Substituting this approximate expression for the Jacobian factor into Eq. (15), and transforming the integration variable β to τ = 1/β in the integral representation for the level density ρ(E, A), we obtain, by using the inverse Laplace transformation (B.12) from τ to the a variable,
ρ(E, A) ≈ ρ_{5/2} [S^{−5/2} I_{5/2}(S) + r_2 S^{−9/2} I_{9/2}(S)],   (31)
with the coefficient ρ_{5/2} given by Eq. (32), where ξ is given by Eqs. (21) and (22), and the correction amplitude r_2 by Eq. (33). In contrast to case (i), the relative contribution of the second term on the r.h.s. of Eq. (31) [case (ii)] has the opposite behavior in the values of the parameters E_sh and K, and is almost independent of U. Up to the small contribution of the second term in Eq. (31), one arrives approximately at
ρ(E, A) ≈ ρ_{5/2} S^{−5/2} I_{5/2}(S),   (34)
where ρ_{5/2} is given by Eq. (32). This approximation is referred to below as the MMA2 approach. Figure 1 shows good convergence of the different approximations to their main term (n = 0) for ρ(E, A).
Here we accounted for the first (n = 1) analytical and second (n = 2) numerical corrections in the expansion of the Jacobian factor J^{−1/2} [see Eq. (18) for the Jacobian J], over 1/ξ (MMA2) and over ξ (MMA1), as functions of the excitation energy U. Calculations are carried out for typical values of the parameters: the inverse level density parameter K, the relative energy shell correction E_sh, and a large particle number A. The results of the analytical MMA1 approach, Eq. (26), and MMA2 approach, Eq. (31), with the first correction terms, are compared with those of Eqs. (29) and (34) without the first correction terms, respectively, using different values of these parameters. The contributions of these corrections to the simplest analytical expressions, Eqs. (29) and (34), become smaller with decreasing excitation energy U for the MMA1 and with increasing U for the MMA2, such that a transition between the approaches, Eqs. (26) and (31), takes place with increasing U; see Fig. 1. We also demonstrate good convergence to the leading terms (n = 0) by taking into account numerically the next-order (n = 2 in this figure) corrections in the direct calculations of the integral representation (15). For the MMA1, such a convergence improves at smaller U, with increasing inverse level density parameter K and decreasing relative energy shell correction E_sh. The opposite behavior takes place for the MMA2 approach. In particular, a good convergence with increasing excitation energy U is clearly seen with n = 1 and 2 for the MMA1 in panels (a) and (c); see, e.g., panel (c) for larger values of both K and E_sh. Notice that in the case (ii), when the shell effects are dominating, the derivatives are relatively large, a″(λ)λ²/a ≫ 1, but at the same time the shell corrections E_sh can be small. In this case, referred to below as the MMA2b approach, the coefficient ρ_{5/2} is given by Eq. (35). Here, in the calculation of ρ_{5/2} given by Eq. (32), we used the TF evaluation of the level density, g̃ ∝ A/λ, and its derivatives over λ in the first equation of (21) for ξ.
C. Disappearance of shell effects with temperature
As is well known (see, for instance, Refs. [30,36,40,50]), with increasing temperature T the shell component δΩ, Eq. (8), disappears exponentially as exp(−2π²T/D_sh) in the potential Ω or in the free energy F; see also Eqs. (9) and (10). This occurs at temperatures T ≈ D_sh/π = 2-3 MeV (D_sh = λ/A^{1/3} = 7-10 MeV in heavy nuclei). For such large temperatures, with excitation energies U near or larger than the neutron resonance energies, one can approximate the Jacobian factor J^{−1/2} in Eq. (15) by Eqs. (36) and (37), where J̃ ≈ g̃ and x_PO = πt_PO/(ħβ), Eq. (10). With this approximation, using the transformation of the integration variable β to τ = 1/β in Eq. (15), one can take the inverse Laplace integral [Eq. (B.12)] for the level density analytically. Finally, one obtains ρ = ρ̃ + δρ, with the smooth and oscillating components given by Eq. (38). Here, a_sh = ã − πt_PO/ħ is the level density parameter shifted by the shell effects, and S_sh = 2√(a_sh U) is the correspondingly shifted entropy. For a major shell structure, one arrives at Eqs. (39) and (40) [see Eq. (23)]. Hence, the shifted inverse level-density parameter is K = A/a_sh = K̃(1 + ∆K/K̃), where the relative shift can be estimated as
∆K/K̃ ≈ 2π² K̃/(D_sh A).   (41)
This is approximately equal to ∆K ≈ 1-2 MeV for K̃ = 10 MeV (see Refs. [20,23,56,57]) at the typical parameters λ = 40 MeV and A = 100-200 (∆K ≈ 6-9 MeV for K̃ = 20 MeV). We note that an important shift of the inverse level density parameter K for double magic nuclei near the neutron resonances is due to a strong shell effect.
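To make the numbers of this subsection concrete, the following Python sketch evaluates the shell-damping temperature scale T ≈ D_sh/π and the shift ∆K, using the estimate reconstructed in Eq. (41) above (so the ∆K line inherits the uncertainty of that reconstruction):

```python
import numpy as np

lam = 40.0                         # chemical potential, MeV
for A, Ktil in ((200, 10.0), (200, 20.0)):
    D_sh = lam / A**(1/3)          # major-shell spacing, MeV
    T_vanish = D_sh / np.pi        # scale at which exp(-2 pi^2 T / D_sh) kills shells
    dK = 2.0 * np.pi**2 * Ktil**2 / (D_sh * A)   # Delta K from Eq. (41) as given above
    print(f"A={A}, K~={Ktil} MeV: D_sh={D_sh:.1f} MeV, "
          f"T~{T_vanish:.1f} MeV, Delta K={dK:.1f} MeV")
```

With these inputs, one recovers T ≈ 2 MeV and ∆K in the 1-2 MeV (K̃ = 10 MeV) and 6-9 MeV (K̃ = 20 MeV) ranges quoted in the text.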
D. General MMA
All final results for the level density ρ(E, A) discussed in the previous subsections can be approximately summarized as
ρ(E, A) ≈ ρ_ν f_ν(S),  f_ν(S) = S^{−ν} I_ν(S),   (42)
with the corresponding expressions for the coefficient ρ_ν (see above). For large entropy S, one finds the asymptote
ρ ≈ ρ_ν exp(S)/(√(2π) S^{ν+1/2}) [1 − (4ν² − 1)/(8S) + …].   (43)
At small entropy, S ≪ 1, one also obtains from Eq. (42) the finite combinatorics power expansion [2,7,37,38]
ρ ≈ ρ_ν/[2^ν Γ(ν + 1)] [1 + S²/(4(ν + 1)) + …],   (44)
where Γ(x) is the gamma function. This expansion over powers of S² ∝ U is the same as that of the "constant temperature model" (CTM) [3,21,22], often used for level density calculations at small excitation energies U, but here we obtain it without free parameters. In order to clarify Eq. (43) for the MMA level density at large entropy, one can directly obtain a more general full SPM asymptote, including the shell effects, by taking the integral over β in Eq. (15) using the SPM (see Appendix C). We arrive at the generalized Fermi gas (GFG) expression, Eq. (45), where ξ* is the ξ of Eq. (19) at the saddle point β = β*, which is, in turn, determined by Eq. (C.2), i.e., Eq. (46). We took the factor J^{−1/2}, obtained from the Jacobian J of Eq. (18), off the integral (15) at β = β* = 1/T. The Jacobian ratio ξ* = δJ/J̃ at the saddle point β = β* (λ = λ* = α*T is the standard chemical potential of the grand-canonical ensemble), Eq. (46), is the critical quantity for these derivations. The quantity ξ* is approximately proportional to the semiclassical POT energy shell correction δE, Eq. (23), through E_sh, Eq. (22), to the excitation energy U = aT², and to the small semiclassical parameter A^{−1/3} squared for heavy nuclei (see Ref. [32] and Appendix A). For typical values of the parameters, λ = 40 MeV and A ≈ 200, within the SCM and ETF approaches [36], these values are given relative to the realistic smooth energy E_ETF, for which the binding energy approximately equals E_ETF + δE [58]. Accounting for the shell effects, Eq. (45) is a more general large-excitation-energy asymptote with respect to the well-known Bethe expression [1]
ρ ≈ [√π/(12 a^{1/4} U^{5/4})] exp(2√(aU)),   (47)
in which such effects were neglected; see also Refs. [2-4]. This expression can alternatively be obtained as the limit of Eq. (45) at large excitation energy, U → ∞, up to shell effects [small ξ* of the case (i)]. This asymptotic result is the same as that of expression (29), proportional to the Bessel function I_ν of the order ν = 3/2 [the case (i)], at the main, zero-order expansion in 1/S; see Eq. (43). For the large-entropy S asymptote, we also find that the Bessel solution (34) with ν = 5/2 in the case (ii) (ξ* ≫ 1), at zero-order expansion in 1/S, coincides with that of the general asymptote (45). The asymptotic expressions, Eqs. (43), (45), and, in particular, (47), for the level density are obviously divergent at U → 0, in contrast to the finite MMA limit (44); see Eq. (42) and, e.g., Eqs. (29) and (34). Our MMA results will also be compared with the popular Fermi gas (FG) approximation, Eq. (48), to the level density ρ(E, N, Z) as a function of the neutron N and proton Z numbers near the β-stability line, (N − Z)²/A² ≪ 1 [2,3,14]. Notice that in all our calculations of the statistical level density ρ(E, A) [also ρ(E, N, Z), Eq. (48)] we did not use the popular assumption of small spins at large excitation energy, which is valid for the neutron resonances. For typical values of spin I ≳ 10, moment of inertia Θ ≈ Θ_TF ≈ 2µR²A/5, Eq. (A.12), radius R = r_0 A^{1/3} with r_0 = 1.14 fm, and particle number A ≲ 200, one finds that, for large entropy, the applicability condition (B.10) is, strictly speaking, not valid.
In these estimates, the corresponding excitation energies U of LESs are essentially smaller than the neutron resonance energies. However, near the neutron resonances the excitation energies U are large, the spins are small, and Eq. (48) is well justified. We should also emphasize that the MMA1 approximation for the level density ρ(E, A), Eq. (29), and the Fermi gas approximation, Eq. (47), can also be applied for large excitation energies U, with respect to the collective rotational excitations, if one can neglect shell effects, ξ* ≪ 1. Thus, with increasing temperature T ≳ 1 MeV (if it exists), or excitation energy U, where the shell effects are still significant, one first obtains the asymptotic expression (45) at ξ* ≫ 1, i.e., the asymptote of Eq. (34). Then, with further increasing temperature to about 2-3 MeV and the disappearance of shell effects (Sec. III C), one gets the transition to the Bethe formula, i.e., to the large-entropy asymptote (47) of Eq. (29). In Fig. 2 we show the dependence of the level density ρ(S), Eq. (42), on the entropy variable S, for ν = 3/2 in (a) and ν = 5/2 in (b), together with the corresponding asymptotes. In this figure, the small- [S ≪ 1, Eq. (44)] and large- [S ≫ 1, Eq. (43)] entropy behaviors are presented. For the small-S expansion we take into account the quadratic approximation, curves "2", where S² ∝ U, which is the same as the linear expansion within the CTM [3,21]. For large S ≫ 1 we neglected the corrections of the inverse-power entropy expansion of the preexponent factor in the square brackets of Eq. (43), lines "3", and took into account the corrections of the first [ν = 3/2, (a)] and up to second [ν = 5/2, (b)] order in 1/S (thin solid lines "4") to show their slow convergence to the accurate MMA result "1", Eq. (42). It is interesting to find an almost constant shift of the results of the simplest SPM asymptotic approximation, ρ ∝ exp(S)/S^{ν+1/2}, at large S (dotted lines "3") with respect to the accurate MMA results of Eq. (42) (solid lines "1"). This may clarify one of the phenomenological models, e.g., the back-shifted Fermi-gas (BSFG) model for the level density [8,14,59]. Figure 3 shows the shell effects in the main approximations derived in this section, Eqs. (29), (34), and (45), for two essentially different values of E_sh, the finite value 2.0 and the much smaller 0.002, between which one can basically find the values given by Ref. [58]. For convenience, we show these results as functions of the entropy S in panel (a), and of the excitation energy U in panel (b), taking the value of the averaged inverse level density parameter K found in Ref. [20]; see also Ref. [23]. As expected, the shell effect is very strong for the MMA2 approach, as can be seen from the difference between the solid and dotted black lines 1, which depend on the second derivatives of strongly oscillating functions of λ, a″(λ) ≈ δa″ ∝ δg″(λ) [see Appendix A around Eq. (A.17) and Sec. III below Eq. (23)]. This is not the case for the full SPM asymptotic GFG approach, Eq. (45). As seen from this figure, the MMA1, Eq. (29), independently of E_sh, converges rapidly to the GFG with increasing excitation energy U, as well as to the Bethe formula (47). They all coincide down to small values of U, about 0.5 MeV, particularly for E_sh = 0.002. The Bethe approach is everywhere very close to the GFG line at E_sh = 0.002 and is therefore not shown in this figure. Notice also that the MMA2 at this small E_sh is close to the MMA1 everywhere.
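For orientation, the divergence of the Bethe formula (47) at U → 0, in contrast to the finite Bessel form, can be checked numerically; the normalizations in this Python sketch are arbitrary, so only the U → 0 trends are meaningful:

```python
import numpy as np
from scipy.special import iv

def rho_bethe(U, A, K):
    """Standard Bethe formula, Eq. (47): sqrt(pi)/(12 a^(1/4) U^(5/4)) exp(2 sqrt(aU))."""
    a = A / K
    return np.sqrt(np.pi) / (12.0 * a**0.25 * U**1.25) * np.exp(2.0 * np.sqrt(a * U))

def rho_mma1_shape(U, A, K):
    """Leading MMA1 shape, Eq. (29): I_{3/2}(S)/S^{3/2}, S = 2 sqrt(aU) (unnormalized)."""
    S = 2.0 * np.sqrt((A / K) * U)
    return iv(1.5, S) / S**1.5

for U in (0.01, 0.1, 1.0, 5.0):   # MeV
    print(f"U={U}: Bethe={rho_bethe(U, 208, 10.0):.3e}, "
          f"MMA1 shape={rho_mma1_shape(U, 208, 10.0):.3e}")
```

The Bethe values grow without bound as U decreases, while the MMA1 shape tends to a finite constant, which is the behavior discussed next.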
Again, one can see that the MMA1 and MMA2 have no divergence in the zero excitation energy limit, U → 0, while the full SPM asymptotic GFG, Eq. (45), and, in particular, the Bethe approach, Eq. (47), both diverge at U → 0.
E. The spin-dependent level density
Assuming that there are no external forces acting on an axially symmetric nuclear system, the total angular momentum I and its projection M on a space-fixed axis are conserved, and states with a given energy E and spin I are (2I + 1)-fold degenerate. As shown in Appendix B, for the "parallel" rotation around the symmetry axis Oz, i.e., an alignment of the individual angular momenta of the particles along Oz (see Ref. [30] for the spherical case), in contrast to the "perpendicular-to-Oz" collective rotation (see, e.g., Ref. [41]), one can derive the level density ρ(E, A, M) within the MMA approach in the same analytical form as for ρ(E, A), Eq. (42),
ρ(E, A, M) ≈ ρ_ν f_ν(S),   (49)
with the corresponding coefficients ρ_ν given by Eqs. (50) and (51). In Eq. (49), the argument of the Bessel-like function f_ν(S) ∝ I_ν(S), Eq. (42), is the entropy S(E, A, M), Eq. (28), with the M-dependent excitation energy U. Indeed, in the adiabatic mean-field approximation, the level density parameter a in Eq. (28) is given by Eq. (14). For the intrinsic excitation energy U in Eq. (28), one finds
U = E − E_0 − E_rot = E − E_0 − ħ²M²/(2Θ),   (52)
where E_0 = Ẽ + δE is the same intrinsic (nonrotating) shell-structure energy as in Eq. (12). With the help of the conservation equation (B.3) for the saddle point, κ* = ωβ, we eliminated the rotation frequency ω, obtaining the second equation in Eq. (52); see Appendix B. For the moment of inertia (MI) Θ one has a similar SCM decomposition,
Θ = Θ̃ + δΘ,   (53)
where Θ̃ is the (E)TF MI component, which can be approximated largely by the TF expression, Eq. (A.12), and δΘ is the MI shell correction, which can finally be presented, for the spherically symmetric mean field, by Eq. (B.5). As mentioned above, Eqs. (49)-(53) are valid for the "parallel" rotation (an alignment of nucleon angular momenta along the symmetry axis Oz); see Appendix B for the specific derivations, assuming a spherical symmetry of the potential. In these derivations we used Eq. (52). Equation (49), with M = K, if it exists, can be used for the calculation of the level density ρ(E, A, K), where K is the projection of the total angular momentum I on the symmetry axis of the axially symmetric potential [31] (K in the notation of Ref. [60]). We note that it is common in applications [1,2,4] to use the level density as a function of the spin I, ρ(E, A, I). We will consider here only the academic case of an axially symmetric potential, which can be realized practically for the spherical or axial symmetry of a mean nuclear field in the "parallel" rotation mentioned above. Using Eq. (49), under the same assumption of a closed rotating system and, therefore, conservation of the integrals of motion, the spin I and its projection M on the space-fixed axis, one can calculate the corresponding spin-dependent level density ρ(E, A, I), for a given energy E, particle number A, and total angular momentum I, by employing the Bethe formula [1,4,7,8]
ρ(E, A, I) = ρ(E, A, M = I) − ρ(E, A, M = I + 1).   (54)
For this level density ρ(E, A, I), one obtains from Eqs. (49) and (52) the expression of Eq. (55), where S is given by Eq. (28). For small angular momenta I, Eq. (56), one finds the standard separation of the level density ρ_MMA(E, A, I) into the product of a dimensionless spin-dependent Gaussian multiplier R(I) and a spin-independent factor. Finally, for the case (i) (ν = 2), one finds the expression of Eq. (57). The spin-dependent factor R(I) is given by Eq. (58), where q² = Θ√(U_0/a)/ħ² is the dimensionless spin dispersion.
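Since the exact prefactor of Eq. (58) is not reproduced here, the following Python sketch uses a common Gaussian spin-cutoff normalization consistent with the dispersion q² = ΘT/ħ² quoted above; treat it as a generic illustration rather than the paper's Eq. (58):

```python
import numpy as np

def spin_factor(I, q2):
    """Generic Gaussian spin-cutoff factor with dimensionless dispersion q2.

    A common normalization is assumed here; the exact prefactor of Eq. (58)
    in the text may differ.
    """
    q = np.sqrt(q2)
    return (2*I + 1) / (2.0 * np.sqrt(2.0 * np.pi) * q**3) \
        * np.exp(-(I + 0.5)**2 / (2.0 * q2))

q2 = 25.0   # assumed value of Theta*sqrt(U0/a)/hbar^2 for a heavy nucleus
print({I: round(spin_factor(I, q2), 5) for I in (0, 5, 10, 20)})
```

The factor peaks at moderate spins of order q and is exponentially suppressed for I ≫ q, which is why the small-spin assumption works well for neutron resonances, as discussed below.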
The dispersion q at the saddle point, β* = 1/T = √(a/U_0), is the standard spin dispersion ΘT/ħ²; see Refs. [1,2]. Similarly, for the ν = 3 (ii) case one obtains the expression of Eq. (59). Note that the power dependence of the preexponent factor of the level density ρ(E, A, I) on the excitation energy, U_0 = E − E_0, differs from that of ρ(E, A, M); see Eqs. (49) and (43). The exponential dependence, ρ ∝ exp(2√(a(E − E_0))), at large excitation energy E − E_0 is the same for ν = 2 (i) and 3 (ii), but the pre-exponent factor is different; cf. Eqs. (57) and (59). A small angular momentum I means that the condition of Eq. (56) was applied in Eqs. (57) and (59). General derivations of the equations applicable to axially symmetric systems (a "parallel" rotation) in this section are specified in Appendix B by using the spherical potential, in order to present explicitly the expressions for the shell-correction components of several POT quantities. However, the results for the spin-dependent level density ρ(E, A, I) of this section, Eqs. (55)-(59), cannot be immediately applied for comparison with the available experimental data on rotational bands in the collective rotation of a deformed nucleus. Those data are presented within the unified rotation model [60] in terms of the spin I and its projection K on the internal symmetry axis of the deformed nucleus. We are going to use the ideas of Refs. [60-64] (see also Refs. [7,8]) concerning another definition of the spin-dependent level density ρ(E, A, I), in terms of the intrinsic level density and the collective rotation (and vibration) enhancement, in a forthcoming work. The level density ρ(E, A, K), e.g., Eq. (49) at M = K, depending on the spin projection K on the symmetry axis of an axially symmetric deformed nucleus, can be helpful in this work.
IV. DISCUSSION OF THE RESULTS
In Fig. 4 and Table I we present the results of theoretical calculations of the statistical level density ρ(E, A) (on a logarithmic scale) within the MMA, Eq. (42), and Bethe, Eq. (47), approaches, as functions of the excitation energy U, compared with experimental data. The results of the popular FG approach, Eq. (48), and of our GFG, Eq. (45), are very close to those of the Bethe approximation and are therefore presented only in Table I. All the presented results are calculated by using the values of the inverse level density parameter K obtained from least mean-square fits (LMSF) to the experimental data for several nuclei. The data shown by dots with error bars in Fig. 4 are obtained for the statistical level density ρ(E, A) from the experimental excitation energies U and spins I of the state spectra [65] by using the sample method: ρ_i^exp = N_i/U_s, where N_i is the number of states in the ith sample, i = 1, 2, ..., N_tot; see, e.g., Refs. [6,8]. The dots are plotted at the mean positions U_i of the excitation energies of each ith sample. Convergence of the sample method with respect to the equivalent sample-length parameter U_s of the statistical averaging was studied under statistical plateau conditions for all plots in Fig. 4. The sample lengths U_s play a role similar to that of the averaging parameters in the Strutinsky smoothing procedure for the SCM calculations of the averaged s.p. level density [44,45]. This plateau means an almost constant value of the physical parameter K within large enough energy intervals U_s. A sufficiently good plateau was obtained in a wide range around the values near U_s for the nuclei presented in Fig. 4 and Table I [19,65].
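The sample method just described, ρ_i^exp = N_i/U_s with relative errors 1/√N_i, is simple to implement. A Python sketch on a hypothetical toy spectrum follows; the real analysis unfolds the (2I + 1) spin degeneracies from the data of Ref. [65] and scans U_s for a plateau in K:

```python
import numpy as np

def sample_level_density(levels, U_s):
    """Sample method: rho_i = N_i / U_s over bins of length U_s (MeV).

    `levels` is a 1D array of excitation energies, with degeneracies already
    unfolded into repeated entries. Returns bin-mean energies, densities,
    and relative errors 1/sqrt(N_i) for non-empty samples.
    """
    levels = np.sort(np.asarray(levels))
    edges = np.arange(0.0, levels.max() + U_s, U_s)
    U_mean, rho, err = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = levels[(levels >= lo) & (levels < hi)]
        if sel.size:                       # keep only non-empty samples
            U_mean.append(sel.mean())
            rho.append(sel.size / U_s)
            err.append(1.0 / np.sqrt(sel.size))
    return np.array(U_mean), np.array(rho), np.array(err)

# hypothetical toy spectrum with density increasing with U (illustration only):
rng = np.random.default_rng(0)
toy = 8.0 * rng.power(3.0, 400)            # MeV, pdf ~ U^2 on [0, 8)
print(sample_level_density(toy, U_s=0.5)[:2])
```

Repeating the call for a range of U_s values and refitting K each time gives the plateau check described in the text.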
Some values of U_s are given in the caption of Fig. 4. Therefore, the results of Table I, calculated at the same values of the found plateau, do not depend, within the statistical accuracy, on the averaging parameter U_s inside the plateau. This is similar to the result that the energy and density shell corrections are independent of the smoothing parameters in the SCM. The statistical condition, N_i ≫ 1 at N_tot ≫ 1, determines the accuracy of our calculations. Microscopic details are neglected under these conditions, but one obtains simpler, more general, analytical results, in contrast to a micro-canonical approach. As in the SCM, in our calculations by the sample method with good plateau values of the sample lengths U_s (see the caption of Fig. 4), one obtains a sufficiently smooth statistical level density as a function of the excitation energy U. We require such a smooth function because statistical fluctuations are neglected in our theoretical derivations. The relative quantity σ of the standard LMSF (see Table I), which determines the applicability of the theoretical approximations ρ(U_i) (Sec. III) for the description of the experimental data ρ_i^exp [65], is given by
σ² = (1/N_tot) Σ_i [y(U_i) − y_i^exp]²/(∆y_i)²,   (60)
where y = ln ρ and ∆y_i ≈ 1/√N_i. For the theoretical approaches one has the conditions of applicability assumed in their derivations. We consider the commonly accepted Fermi gas asymptote, Eqs. (43) and (45). In a forthcoming work we will use the asymptotes of Eqs. (43) and (45), and the sample method, for evaluations of the statistical accuracy of the experimental data at relatively large excitation energies (near and higher than the neutron resonances). This is especially helpful in the case of low-resolution dense states at sufficiently large excitation energies. The examination using the value of σ obtained by the LMSF is an additional procedure for checking these theoretical conditions against the available experimental data. Notice also that the application of the sample method for determining the experimental statistically averaged level density from the nuclear spectra in terms of σ² differs essentially from the methods employed in previous works (see, e.g., Ref. [14]), by using the statistical averaging of the nuclear level density and accounting for the spin degeneracies of the excited states. We do not use empirical free parameters in any of our calculations, in particular for the FG results shown in Table I. The commonly accepted nonlinear FG asymptote (43) could be a critical (necessary but, of course, not sufficient) theoretical guide which, with a given statistical accuracy, is helpful for understanding the spectrum completeness of the experimental data at large excitation energies, where the spectrum is very dense. Figure 4 shows two opposite situations concerning the state distributions as functions of the excitation energy U. We show results for the spherical magic 144Sm (a) and double magic 208Pb (c) nuclei, with maximal (in absolute value but negative) shell correction energies, in terms of the positive E_sh; see Table I and Ref. [58]. In these nuclei there are almost no states with extremely low excitation energies, in the range U ≲ 1-2 MeV [65]. In Table I we also present results for the deformed nucleus 148Sm, where only a few levels exist in such a range, which yields entropies S ≲ 1.
[Fig. 4 caption fragment: Experimental dots (with error bars, ∆ρ_i/ρ_i = 1/√N_i) are obtained directly from the spectrum of excitation states (with spins and their degeneracies) [65] for the nuclei shown (Table I).]
The situation is different for the significantly deformed nucleus 230Th [Fig. 4(d)], which has a complicated strong shell structure including subshell effects [58]. Thus, we also present the results for two deformed nuclei, 166Ho and 230Th, from both sides of the desired heavy particle-number interval A ≈ 140-240. In Fig. 4, the results of the MMA approaches (1 and 2) are compared with those of the well-known "Bethe3" asymptote [1], Eq. (47); see also Ref. [58]. The results of the shell-structure MMA2 (ii) approach are shown versus those of the small-shell-effects approach MMA1 (i), Eq. (29) (ξ* ≪ 1 at β = β*). For a very small value of E_sh, but still within the values of the case (ii), Eq. (34) with Eq. (35) (in particular, large ξ*), we have the approach named MMA2b. Results for the MMA2b approach are also shown in Fig. 4. The results of calculations within the full SPM GFG asymptotic approach, Eq. (45), and within the popular FG approximation, Eq. (48), which are in good agreement with the standard Bethe3 approximation, are presented only in Table I. For finite realistic values of E_sh, the results of the MMA2a approach are closer to those of the MMA1 approach. Therefore, since the MMA2b approach, Eq. (34) with Eq. (35), is the limit of the MMA2 at a very small E_sh within the case (ii), we conclude that the MMA2 approach is a more general shell-structure MMA formulation of the statistical level-density problem. The results of the MMA2b approach are also summarized in Table I.
[Table I caption fragment: the relative shell corrections E_sh (third column) [58]; the inverse level density parameter K (fifth and eighth columns), found by the LMSF with the precision given by the standard expression for σ, Eq. (60) (sixth and ninth columns), using the sample method and the experimental data from Ref. [65], are shown for the version of the approximation indicated in the fourth and seventh columns. The MMA1 and MMA2b (with the same notations for the different MMA as in Fig. 4) are the MMA approaches (29) (ν = 3/2) and (34) (ν = 5/2 at extremely small E_sh); GFG is the general full Fermi gas asymptote (45). The MMA2a is the more general MMA, Eq. (34), at different relative shell corrections E_sh [58]. The asterisks denote the MMA1 and MMA2a approaches shifted along the excitation energy U axis by the assumed pairing condensation energies E_cond ≈ 1.1 and 2.2 MeV.]
In contrast to the 166Ho excitation-energy spectrum, with many very low LESs below about 1 MeV, for 144Sm (a) and 208Pb (c) one finds no such states. For the MMA2b [MMA2 for very small E_sh, but within the case (ii)] approach we have larger values of σ, σ ≫ 1 for 144,148Sm and a little larger for 208Pb, than those of the other approximations.
In particular, for the MMA1 (i) and the other asymptotic approaches, Bethe, FG, and GFG, one finds almost the same σ, of the order of one, which is in better agreement with the data [19,65]. We obtain basically the same for the MMA2a (ii) with realistic values of E_sh. Notice that for the 144,148Sm and 208Pb nuclei, the MMA2a [Eq. (34)] at realistic E_sh is close to the MMA1 (i), Bethe, FG, and GFG approaches. The MMA1 and MMA2a (at realistic values of E_sh), as well as the Bethe, FG and GFG approaches, are obviously in much better agreement with the experimental data [65] for 144Sm (or 148Sm) and 208Pb [Fig. 4(a) and (c)], for which one has the opposite situation: a very small number of states in the LES range. We note that the results of the MMA1 and MMA2a with excitation energies shifted by constant condensation energies, U → U_eff = U − E_cond > 0 with E_cond ≈ 1.1 and 2.2 MeV, shown by arrows in Fig. 4 for 144Sm and 208Pb, respectively, may indicate the pairing phase-transition effect due to the disappearance of the pairing correlations [7,8,69]. With increasing U, one can see a sharp jump in the level density for the double magic 208Pb nucleus within the shown spectrum range. In 144Sm, one finds such a phase transition a little above the presented range of excitation energies. This effect could be related to the pairing phase transition near the critical temperature T_cr = 0.47 MeV in 208Pb (0.57 MeV in 144Sm), i.e., at the corresponding critical effective excitation energy [7,8,66-69]. (For the disappearance of the pairing gap, the critical temperature is T_cr = γ∆_0/π, where γ is defined through the Euler constant, ln γ = 0.577… . Evaluating the condensation energy, E_cond = g∆_0²/4 = 3A∆_0²/(2π²K), one arrives at the effective excitation energy U_eff = U − E_cond, with a condensation energy E_cond ≈ 1 MeV.) This procedure is a self-consistent calculation. Starting from a value of the condensation energy E_cond, one can obtain the inverse level density parameter K. Then one evaluates a new E_cond and iterates until convergence in the values of K and E_cond is achieved, at least in order of magnitude. This can be realized with the MMA1 for 144Sm and the MMA2a for 208Pb; see Table I and Fig. 4(a) and (c). The phase-transition jump is well seen in plot (c), but is not seen in plot (a), being above the shown excitation-energy range, at both of the effective excitation energies U_eff mentioned above. One of the reasons for the exclusive properties of 166Ho [Fig. 4(b)], as compared to both 144Sm (a) and 208Pb (c), might be the nature of the excitation energy in these nuclei. Our MMA (i) or (ii) approaches could clarify the nature of the excitations [see Sec. III E and Appendix B for the rotational contribution, which can be included in E_0 of Eq. (12), as done in Eq. (B.6)]. Since the results of the MMA2b (ii) approach are in much better agreement with the experimental data than those of the MMA1 (i) approach for 166Ho, one could presumably conclude that for 166Ho one finds mostly thermal excitations, U ≫ E_rot, Eq. (24), for the LESs. For 144Sm and 208Pb one observes more regular excitation contributions (large spins owing to the alignment), with dominating rotational energy E_rot, Eq. (B.10); see Ref. [30]. The latter effect is much less pronounced in 208Pb than in 144Sm, but all the inverse level density parameters K are significant for states below the neutron resonances; see Table I.
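The pairing estimates of the parenthetical remark above can be verified numerically. In this Python sketch, the gap ∆_0 = 12/√A MeV and the K values are assumptions chosen for illustration; with these choices, one reproduces the quoted T_cr ≈ 0.47 and 0.57 MeV and E_cond ≈ 2.2 and 1.1 MeV:

```python
import numpy as np

GAMMA_E = np.exp(0.5772156649)   # the factor gamma with ln(gamma) = 0.577...

def pairing_estimates(A, K, delta0=None):
    """Critical temperature and condensation energy from the relations above.

    delta0 defaults to the standard empirical gap 12/sqrt(A) MeV, an
    assumption of this sketch.
    """
    if delta0 is None:
        delta0 = 12.0 / np.sqrt(A)                        # MeV
    T_cr = GAMMA_E * delta0 / np.pi                       # T_cr = gamma*Delta0/pi
    E_cond = 3.0 * A * delta0**2 / (2.0 * np.pi**2 * K)   # = g*Delta0^2/4
    return T_cr, E_cond

for A, K in ((208, 10.0), (144, 20.0)):   # K values assumed for illustration
    T_cr, E_cond = pairing_estimates(A, K)
    print(f"A={A}: Delta0={12/np.sqrt(A):.2f} MeV, "
          f"T_cr={T_cr:.2f} MeV, E_cond={E_cond:.2f} MeV")
```

The self-consistent iteration described in the text then alternates between refitting K with the shifted U_eff = U − E_cond and re-evaluating E_cond until both stabilize.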
However, taking into account the pairing effects, even qualitatively, the thermal contribution (ii) is also important for 208Pb, while regular nonthermal motion might dominate in 144Sm. In any case, the shell effects are important, especially for the case (ii), which does not even exist without taking them into account. For 230Th [Fig. 4(d)], the experimental LES data lie between the two limiting cases MMA1 (i) and MMA2b (ii). This agrees also with the intermediate number of very low LESs in this nucleus. As shown in Fig. 4(d) and Table I, the MMA2a approach at realistic values of E_sh is in good agreement with the data. The shell structure is, of course, not as strong in 230Th as in the double magic nucleus 208Pb, but it is of the same order as in the other presented nuclei. Notice also that, in contrast to the spherical nuclei in Figs. 4(a) and (c), the nuclei 166Ho (b) and 230Th (d) are significantly deformed, which is also important, in particular because of the large angular momenta of their LES excitation-spectrum states. We do not use the free empirical parameters of the BSFG, spin-cutoff FG, and empirical CTM approaches [14]. As an advantage, one has only the parameter K, with the physical meaning of the inverse level density parameter. The variations in K are related, e.g., to those of the mean-field parameters through Eq. (28). All the densities ρ(E, A) compared in Fig. 4 and Table I do not depend on the spin-cutoff factor and the moment of inertia, because of the summation (integration) over all spins (however, with account of the degeneracy factor 2I + 1). In line with the results of Ref. [18], the obtained values of K for the MMA2 approach can be essentially different from the MMA1 ones and from those (e.g., FG) found, mainly, for the neutron resonances (NRs). However, the level densities with the excitation energy shifted by constant condensation energies, due to pairing, for 208Pb (c) and 144Sm (a) in Fig. 4, notably improve the comparison with the data [65]. These densities correspond to inverse level-density parameters K even smaller than those obtained in the FG approach, which agreed with the NR data. We note that for the MMA1 approach one finds values of K which are of the same order as those of the Bethe, FG and GFG approaches. These values of K are mostly close to the NR values in order of magnitude. For the FG approach, Eq. (48), in accordance with its indirect derivation through the spin-dependent level density ρ(E, A, I), Eq. (57) (Sec. III E), this is obviously because the neutron resonances occur at large excitation energies U and small spins; see Eqs. (24) and (56). Large deformations, neutron-proton asymmetry, the spin dependence for deformed nuclei, and pairing correlations [2,7,8,12,13,21,22] in rare-earth and actinide nuclei should also be taken into account to improve the comparison with experimental data.
V. CONCLUSIONS
We derived the statistical level density ρ(S) as a function of the entropy S within the micro-macroscopic approximation (MMA), using the mixed micro- and grand-canonical ensembles, beyond the standard saddle-point method of the Fermi gas model. The obtained level density can be applied for small and for relatively large entropies S, or excitation energies U, of a nucleus. For a large entropy (excitation energy), one obtains the exponential asymptote of the standard SPM Fermi gas model, but with significant corrections in powers of 1/S. For small S, one finds the usual finite combinatorics expansion in powers of S².
Functionally, the MMA in the linear approximation of the S² ∝ U expansion, at small excitation energies U, coincides with the empirical constant "temperature" model, except that it is obtained without free fitting parameters. Thus, the MMA unifies the commonly accepted Fermi gas approximation with the empirical CTM, for large and small entropies S, respectively, in line with the suggestions in Refs. [3,21]. The MMA clearly manifests an advantage over the standard full SPM approaches at low excitation energies, because it does not diverge in the limit of small excitation energies, in contrast to all full SPM approaches, e.g., the Bethe and FG asymptotes. Another advantage appears for nuclei with many more states in the very-low-energy-state range. The values of the inverse level density parameter K were compared with those of experimental data for the LESs below the neutron resonances (NRs) in the spectra of several nuclei. The MMA results, with only one physical parameter in the least mean-square fit, the inverse level density parameter K, were usually better the larger the number of extremely low-energy states, and certainly much better than the results of the FG model in this case. The MMA values of the inverse level density parameter K for the LESs can be significantly different from those of the neutron resonances within the FG model. We found significant shell effects in the MMA level density in the nuclear LES range within the semiclassical periodic-orbit theory. In particular, we generalized the known SPM results for the level density in terms of the full SPM GFG approximation, accounting for the shell effects using the POT. The exponential disappearance of shell effects with increasing temperature was studied analytically within the POT for the level density. Shifts in the entropy S and in the inverse level density parameter K due to the shell effects were also obtained and given in explicit analytical forms. These shifts occur at temperatures much lower than the chemical potential, near the NR excitation energies. Simple estimates of pairing effects in spherical magic nuclei, through the shift of the excitation energy by the pairing condensation energy, significantly improve the comparison with experimental data. Pairing correlations essentially influence the level density parameters at low excitation energies. We found an attractive description of the well-known jump in the level density within our MMA approach, using the pairing phase transition. Other analytical reasons for the excitation-energy shifts in the BSFG model are found by using a more accurate expansion of the modified Bessel expression for the MMA level density at large entropies S, taking into account higher-order terms in 1/S. This is important in both the LES and NR regions, especially for the LESs. We presented a reasonable description of the LES experimental data for the statistically averaged level density obtained by the sample method within the MMA, with the help of the semiclassical POT. We have emphasized the importance of the shell and pairing effects in these calculations. We obtained values of the inverse level density parameter K for the LES range which are essentially different from those of the NRs. These results can basically be extended to the level density dependence on the spin variables for nuclear rotations around the symmetry axis of the mean field, due to the alignment of the individual nucleon angular momenta along the symmetry axis. Our approach can be applied to the statistical analysis of experimental data on collective nuclear states.
As the semiclassical POT MMA becomes more accurate with larger particle numbers in a Fermi system, one can also apply this method to the study of metallic clusters and quantum dots in terms of the statistical level density, and to problems in nuclear astrophysics. The neutron-proton asymmetry, large nuclear angular momenta and deformation for collective rotations, additional consequences of pairing correlations, as well as other perspectives, will be taken into account in future work in order to significantly improve the comparison of the theoretical results with experimental data on the level density parameter, in particular below the neutron resonances. So far we did not specify the model for the mean field. For nuclear rotation, it can be associated with the alignment of the individual angular momenta of nucleons, called a "classical rotation" in Ref. [30]: rotation parallel to the symmetry axis Oz, in contrast to the collective rotation perpendicular to the Oz axis [41]. In particular, in the case of the "parallel" rotation, one has for a spherically and axially symmetric potential the explicit partition function expression

ln Z(β, λ, ω) = Σ_i ln{1 + exp[β(λ + ℏω m_i − ε_i)]},   (A.1)

where ε_i and m_i are the s.p. energies and projections of the angular momentum on the symmetry axis Oz of the quantum states in the mean field. In the transformation from the sum to an integral, we introduced the s.p. level density g(ε, m) as a sum of the smooth and oscillating (shell) components,

g(ε, m) = g̃(ε, m) + δg_scl(ε, m).   (A.2)

The Strutinsky-smoothed s.p. level density g̃ can be well approximated by the ETF level density g_ETF, g̃ ≈ g_ETF. For the spherical case, the s.p. level density in the TF approximation is given by [70]

g ≈ g_TF = (µ d_s/πℏ) ∫_{|m|}^{ℓ_0} dℓ ∫_{r_min}^{r_max} dr [2µ(ε − V(r)) − ℏ²ℓ²/r²]^{−1/2},   (A.3)

where µ is the nucleon mass, d_s is the spin (spin-isospin) degeneracy, ℓ_0 is the maximum possible angular momentum of a nucleon with energy ε in the spherical potential well V(r), and r_min and r_max are the turning points. For the oscillating component δg_scl(ε, m) of the level density g(ε, m), Eq. (A.2), we use, in the spherical case, the semiclassical expression [30] derived in Ref. [39], in which the sum is taken over the classical periodic orbits (POs) with angular momenta ℓ_PO ≥ |m|. In this sum, g_PO(ε) is the partial contribution of the PO to the oscillating part g_scl(ε) of the semiclassical level density g(ε) (without limitations on the projection m of the particle angular momentum); see Eq. (3). Here, S_PO(ε) is the classical action along the PO, µ_PO is the so-called Maslov index, determined by the catastrophe points (turning and caustic points) along the PO, and φ_0 is an additional phase shift coming from the dimension of the problem and the degeneracy of the POs. The amplitude A_PO(ε) in Eq. (A.6) is a smooth function of the energy ε, depending on the PO stability factors [36,50,52]. For the spherical cavity one has the well-known explicit analytical formula [36,45,50]. The Gaussian local averaging of the level-density shell correction δg_scl(ε), Eq. (A.5), over the s.p. energy spectrum ε_i near the Fermi surface ε_F can be done analytically by using the linear expansion of the relatively smooth PO action integral S_PO(ε) near ε_F as a function of ε, with the Gaussian width parameter Γ [36,50,52]:

δg_Γ(ε) = Σ_PO g_PO(ε) exp[−(t_PO Γ/2ℏ)²],   (A.7)

where t_PO = ∂S_PO/∂ε is the period of particle motion along the PO. All the expressions presented above, except for Eqs. (A.3) and (A.4), can be applied to axially symmetric potentials, e.g., the spheroidal cavity [51,52,71] and the deformed harmonic oscillator [36,72].
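For orientation, the generic structure of the PO expansion referred to in Eqs. (A.5) and (A.6) can be written in the standard Gutzwiller-type form; this is a plausible reconstruction assembled from the quantities named in the text (action S_PO, Maslov index µ_PO, extra phase φ_0, amplitude A_PO), with the overall phase convention being our assumption rather than a quotation of this paper's equations:

δg_scl(ε) = Σ_PO g_PO(ε),   g_PO(ε) = A_PO(ε) sin[S_PO(ε)/ℏ − (π/2)µ_PO + φ_0].

For the m-resolved density δg_scl(ε, m) of Eq. (A.4), the same sum is restricted to orbits with ℓ_PO ≥ |m|, as stated above; Gaussian averaging with width Γ then multiplies each term by the damping factor exp[−(t_PO Γ/2ℏ)²] of Eq. (A.7).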
Let us now use the decomposition of Ω ≡ −ln Z/β, with the corresponding variables, within the SCM POT into its smooth part, Ω̃ ≈ Ω_ETF, and the shell correction δΩ,

Ω = Ω̃ + δΩ ≈ Ω_ETF + δΩ.

The smooth (in the sense of the SCM [44,45]) ground-state energy of the nucleus is given by Ẽ = ∫_0^λ̃ dε ε g̃(ε), where g̃(ε) is the smooth level density, approximately equal to the ETF level density, g̃ ≈ g_ETF. The smooth chemical potential λ̃ in the SCM is the root of the equation A = ∫_0^λ̃ dε g̃(ε), and λ ≈ λ̃ in the POT. The chemical potential λ (or λ̃) is approximately the solution of the corresponding particle-number conservation equation,

A = ∫_0^λ dε g(ε).   (A.11)

The quantity Θ_ETF in Eq. (A.9) is the ETF (rigid-body) moment of inertia for the statistical equilibrium rotation, where ρ̃ ≈ ρ_ETF(r) is the ETF particle density. For the "parallel" rotation, m̃² denotes the smooth component of the square of the angular momentum projection m² of a nucleon. Here and below we neglect the small change in the chemical potential λ due to the internal nuclear thermal and rotational excitations, so that it can be approximated by the Fermi energy ε_F, λ ≈ ε_F. The quantity δF_scl is the semiclassical free-energy shell correction of a nonrotating nucleus (ω = 0); see Eqs. (9) and (10). In deriving the expressions for the free-energy shell correction δF_scl and the potential shell correction δΩ_scl, the action S_PO(ε) in their integral representations over ε, with the semiclassical level-density shell correction δg(ε) of Eqs. (A.5) and (A.6), was expanded as a function of ε near the chemical potential λ. Then, we integrated by parts over ε, as in the semiclassical calculations of the energy shell correction δE_scl [36,50]. We used the expansion of δΩ(β, λ, ω) in the relatively small rotation frequency ω, ℓ_F ω/λ ≪ 1, up to quadratic terms. Nonadiabatic effects at large ω, considered in Ref. [30] for the spherical case, are beyond the scope of this work. In Eq. (A.13), the period of motion along a PO, t_PO(ε) = ∂S_PO(ε)/∂ε, and the PO angular momentum of the particle, ℓ_PO(ε), are taken at ε = λ. For large excitation energies, β = β* = 1/T (T is the temperature), one arrives from Eqs. (9), (10), and (A.13) at the well-known expression for the semiclassical free-energy shell correction of the POT [30,36], δF = δΩ (in their specific variables); see also Ref. [10] for the magnetic-susceptibility shell corrections. These shell corrections decrease exponentially with increasing temperature T. In the opposite limit, toward the yrast line (zero excitation energy U, β^{−1} ∼ T → 0), one obtains from δΩ, Eq. (A.13), the well-known POT approximation [36,50] to the energy shell correction δE, modified, however, by the frequency-ω dependence. The POT shell component of the free energy, δF_scl, Eqs. (9) and (10), is related in the nonthermal and nonrotational limit to the energy shell correction of a cold nucleus, δE_scl [36,40,50,52],

δE_scl = Σ_PO E_PO,

where E_PO is the partial PO component [Eq. (11)] of the energy shell correction δE. Within the POT, δE_scl is determined, in turn, by the oscillating level density δg_scl(λ); see Eqs. (A.5) and (A.6). The chemical potential λ can be approximated by the Fermi energy ε_F, up to small excitation-energy and rotational-frequency corrections (T ≪ λ for the saddle-point value T = 1/β*, if it exists, and ℓ_F ω/λ ≪ 1). It is determined by the particle-number conservation condition, Eq. (B.4), which can be written in the simple form (A.11) with the total POT level density g(ε) ≅ g_scl = g_ETF + δg_scl.
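Before turning to the solution of Eq. (A.11), we note for reference that the exponential suppression of shell corrections mentioned above has a standard explicit form in the POT literature, quoted here as an assumption consistent with the text rather than as this paper's own equation: each PO contribution to the free-energy shell correction is damped by a temperature factor,

δF_scl ≈ Σ_PO E_PO x_PO/sinh(x_PO),   x_PO = π t_PO T/ℏ,

so that for x_PO ≳ 1 the damping behaves as 2x_PO e^{−x_PO}. With t_PO ≈ 2πℏ/D_sh for the shortest dominant orbits, the shell corrections disappear at temperatures T of order D_sh/(2π²), i.e., well below the chemical potential, as stated above.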
One now needs to solve equation (A.11) for a given particle number A to determine the chemical potential λ as a function of A, since λ is needed in Eq. (A.14) to obtain the semiclassical energy shell correction δE_scl. If one were to use in Eq. (A.11) the exact (SCM) level density, g(ε) ≈ g_SCM = g̃ + δg_Γ(ε), where g̃ is the Strutinsky smooth s.p. level density, g̃ ≈ g_ETF, and δg_Γ is the averaged level-density shell correction with Gaussian width Γ, one would obtain a steplike dependence of the needed chemical potential λ (Fermi energy ε_F) on the particle number A. Using the semiclassical level density g_scl(ε), Eq. (3), with δg_scl(ε) given by Eqs. (A.5) and (A.6), similar discontinuities would appear. To avoid such behavior, one can apply the Gaussian averaging, e.g., Eq. (A.7), to the level density g_Γ(ε) in Eq. (A.11) or, what amounts to the same, to the quantum SCM state density, with, however, a width Γ = Γ_0. This Gaussian width should be much smaller than that used in a shell-correction calculation, Γ = Γ_sh, with Γ_0 ≪ Γ_sh ≪ D_sh, where D_sh is the distance between major shells. Because of the slow convergence of the PO sum in Eq. (A.5), it is, however, more practical to use in Eq. (A.11) the SCM quantum density, g(ε) ≈ g_SCM(ε), averaged with Γ_0, to determine the function λ(A). For a major shell structure near the Fermi surface, ε ≈ λ, the POT shell correction δE_scl [Eq. (A.14)] is in fact approximately proportional to δg_scl(λ) [Eqs. (A.5) and (A.6)]. Indeed, the rapid convergence of the PO sum in Eqs. (A.14) and (11) is guaranteed by the factor in front of the density component g_PO, Eq. (A.6), a factor which is inversely proportional to the square of the period t_PO(λ) of motion along the PO. Therefore, only POs with short periods, which occupy a significant phase-space volume near the Fermi surface, will contribute. These orbits are responsible for the major shell structure, which is related to a Gaussian averaging width Γ ≈ Γ_sh that is much larger than the distance between neighboring s.p. states but much smaller than the distance D_sh between major shells near the Fermi surface. According to the POT [36,50,52], the distance between major shells, D_sh, is determined by the mean period t_PO of the shortest and most degenerate POs [36,50]:

D_sh ≈ 2πℏ/t_PO.   (A.15)

Taking the factor in front of g_PO in the energy shell correction δE_scl, Eq. (A.14), off the sum over the POs, one arrives at Eq. (23) for the semiclassical energy shell correction [40,50-52]. Differentiating Eq. (A.14), using (A.6), with respect to λ and keeping only the dominating terms coming from the differentiation of the sine of the action phase argument, S/ℏ ∼ A^{1/3}, one finds a useful relationship between δE_scl and the derivatives of δg_scl(λ): each such differentiation brings down a large factor t_PO/ℏ from the phase. By the same semiclassical arguments, the dominating contribution to g″(λ) for the major shell structure is obtained in the same way. Again, as in the derivation of Eqs. (23) and (A.16), for the major shell structure we take the averaged smooth characteristics of the main shortest POs, which occupy the largest phase-space volume, off the PO sum.

Appendix B: MMA spin-dependent level density

For the statistical description of the level density of a nucleus in terms of the conserved variables, the total energy E, the nucleon number A, and the projection M of the angular momentum onto a space-fixed axis Oz, one can begin with the micro-canonical expression for the level density,

ρ(E, A, M) = Σ_i δ(E − E_i) δ_{A,A_i} δ_{M,M_i},   (B.1)

where E_i, A_i, and M_i, respectively, represent the quantum spectrum of the system.
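Returning to the determination of λ(A) from the particle-number condition (A.11): a minimal numerical sketch of the procedure described above, with a toy single-particle spectrum standing in for the quantum SCM states (the spectrum, widths, and particle number here are purely illustrative), is:

import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

rng = np.random.default_rng(0)
eps_i = np.sort(rng.uniform(0.0, 60.0, 400))   # toy s.p. spectrum (MeV)

def N_of_lambda(lam, gamma0=0.5):
    """Number of states below lam for a Gaussian-broadened discrete spectrum.

    gamma0 plays the role of the small averaging width Gamma_0, with
    Gamma_0 << Gamma_sh << D_sh; the broadening removes the step-like
    dependence lambda(A) produced by the exact (unaveraged) spectrum."""
    return float(np.sum(0.5 * (1.0 + erf((lam - eps_i) / gamma0))))

def lambda_of_A(A, gamma0=0.5):
    """Solve the particle-number condition N(lambda) = A for lambda."""
    return brentq(lambda lam: N_of_lambda(lam, gamma0) - A,
                  eps_i[0] - 5.0, eps_i[-1] + 5.0)

print(lambda_of_A(100))   # a smooth function of A once gamma0 exceeds the level spacing

Integrating a normalized Gaussian of width Γ_0 up to λ gives the error-function terms used here; choosing Γ_0 larger than the mean level spacing but much smaller than D_sh reproduces the smooth λ(A) described in the text.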
This level density can be identically rewritten in terms of the inverse Laplace transformation of the partition function Z(β, α, κ) over the corresponding Lagrange multipliers β, α, and κ; see, e.g., Refs. [4,7,8]:

ρ(E, A, M) = (2πi)^{−3} ∫ dβ ∫ dα ∫ dκ Z(β, α, κ) exp(βE − αA − κM).   (B.2)

We will calculate by the SPM the integrals in this equation over the restricted set of Lagrange multipliers α and κ, related to A and M, respectively. However, as in Sec. II, the last integral in Eq. (B.2), over the variable β related to the energy E, will be calculated more accurately, beyond the SPM approach. The saddle points over the other variables (marked by asterisks; see below) are determined by the saddle-point equations

(∂ ln Z/∂α)_{α=α*} = A,   (∂ ln Z/∂κ)_{κ=κ*} = M.   (B.3)

The asterisks mean that α = α* and κ = κ*. These equations can also be considered as conservation laws for a given set of M and A. Equations (B.3) for the saddle-point values α* = λβ and κ* = ℏωβ, in terms of the chemical potential λ and rotation frequency ω, in the case of axially symmetric (or spherical) mean fields for the "parallel" rotation (Sec. III E), can be written in a more explicit way, where F_PO is given by Eqs. (10) and (11). In deriving Eq. (B.5) we explicitly used the spherical symmetry of the mean field, as in Eq. (A.4) for the oscillating level density δg_scl(ε, m) and Eq. (A.13) for the potential shell correction δΩ_scl. For small excitation energies and major shell-structure averaging of δg, g̃^{−1} ≪ Γ ≪ D_sh, these oscillating components are much smaller than the average rigid-body value Θ̃ [Eq. (A.12)], δΘ/Θ̃ ≈ δg/3g̃ ≈ 2π²E_sh/3A^{1/3} ≪ 1; see Eqs. (22) and (23). In the derivation of Eq. (A.13) we used the conservation conditions for the particle number and angular momentum projection, Eq. (B.4). Here, λ ≡ α*/β, ω ≡ κ*/ℏβ, and J is the two-dimensional Jacobian for the transformation between the two sets of variables shown. Finally, one arrives at the potential Ω(β, λ, ω) for any value of the integration variable β (α = λβ and κ = ℏωβ). It is the well-known potential of the grand canonical ensemble when taken at all the saddle points, Ω* = Ω(β*, λ*, ω*), where β* = 1/T, with T being the system temperature, which, if it exists, can be defined using λ* = α*T and ω* = κ*T/ℏ. We also have E = Ω* + (β∂Ω/∂β)* + λ*A. Note that within the grand canonical ensemble, the quantities λ* and ω* are the standard chemical potential and rotational frequency, respectively. Below we consider λ = α*/β and ω = κ*/ℏβ (for any value of β) as the generalized chemical potential and rotational frequency. The potential Ω(β, λ, ω), Eq. (52), contains two contributions: the thermal intrinsic excitation energy, U(β*) = aT², related to the entropy production, and the rotational excitation energy, E_rot(ω) = Θω²/2. We assume a thermal excitation energy U ∝ 1/β² (i.e., aT² in the asymptotically large excitation energy limit) that is small with respect to the rotational one, E_rot (i.e., Θω²/2 in the adiabatic approximation), but large compared to the mean distance between neighboring level energies, for the validity of the statistical and semiclassical arguments; this condition, written at β ≈ β*, is Eq. (B.10). The level density parameter a is given by Eq. (13), modified, however, by the rotational ω² corrections:

a ≈ (π²/6)[g(λ) + (ω²/6) Σ_PO g_PO(λ) t_PO² ℓ_PO²].   (B.11)

The second term in the square brackets is explicitly presented for the spherical potential. Note that the condition (B.10) is satisfied for small nuclear excitation energies, U ≲ 3 MeV, at typical rotational excitation energies ω ≲ 1 MeV; cf. Eq. (24). The same limit U ≳ 1/g(λ) in Eqs. (42) and (24) and on the left-hand side of Eq.
(B.10) is due to the fact that, in the calculation of the quantity Ω(β, λ, ω), Eqs. (B.9) and (A.1), the sum over the s.p. states was approximately replaced by an integral, and the continuous s.p. level-density approximation for g(ε, m), Eqs. (A.2)-(A.6), was used. In Eq. (B.10), for a typical rotation energy ω ≲ 0.1 MeV, one has 0.2 ≲ U ≲ 3 MeV (λ ≈ 40 MeV). Under the condition (B.10) of case (i) (see also Sec. III A), one takes the two-dimensional Jacobian J, Eq. (16), J ≈ J̃, as a smooth quantity, off the integral over β in Eq. (B.7). Then, in the calculation of this integral, we used the transformation of variables β = 1/τ to arrive at the integral representation of the modified Bessel functions I_ν of order ν (e.g., ν = 2). This representation is the well-known inverse Laplace transformation [55],

(1/2πi) ∫_{c−i∞}^{c+i∞} dp p^{−ν−1} e^{pt + a/p} = (t/a)^{ν/2} I_ν(2√(at)),

where I_ν(z) is the same modified Bessel function of order ν as used in Eqs. (42) and (49). In these transformations we assumed that the integrand in Eq. (B.7) is an analytical function of the integration variable τ = 1/β to the right of the imaginary axis (c > 0). This means that there are no equilibrium states (poles) for the excitation energy U > 0. Notice that the Jacobian J can also be taken off the integral over β at β = β* within the full SPM if the saddle point β* exists; see Ref. [4], where the assumption of a constant s.p. level density near the Fermi surface was used. In the following derivations, we neglect small thermal and rotational corrections to the chemical potential λ as compared to the Fermi energy ε_F. Excitation energies satisfying the approximate condition, Eq. (24), should also be smaller than the distance between major shells, D_sh, Eq. (A.15), in the adiabatic approximation for rotational excitations. At the same time, we neglect the oscillating β dependence of the Jacobian, δJ (the Jacobian subscript is ∞ in Ref. [30]), under the condition of case (i) [see Eq. (B.10) and Sec. III E for the typical rotational energy ω ≲ 0.1 MeV]. Thus, one finally arrives at Eq. (49) for ν = 2 in case (i). For the coefficient ρ_ν in case (i), but for arbitrary ν, one finds the expression (B.13). The superscript 2ν−2 of the smooth part of the Jacobian, J̃^(2ν−2), Eq. (16), gives the number of integrals of motion beyond the first one (the energy E). In the considered case of n = 3 integrals of motion, one has ν = (n + 1)/2 = 2, and the corresponding smooth Jacobian is given by J̃^(2) ≈ g_ETF(λ)Θ/ℏ². Note that the expressions (49) and (B.13) for case (i) are presented in a general form for axially symmetric potentials and an arbitrary number of integrals of motion n. They are valid under the condition (B.10), e.g., for n = 3 and ν = (n + 1)/2 = 2 in this appendix, the same as in Ref. [30]. For the specific case n = 2, case (i) (ν = 3/2) of Sec. III A, one obtains Eq. (29), with Eq. (26) for the constant ρ_{3/2}, and its Bethe asymptote (47). In the opposite case (ii) (Sec. III B), for a small rotational energy E_rot with respect to the thermal excitations
Optoelectric spin injection in semiconductor heterostructures without ferromagnet

We have shown that electron spin density can be generated by a dc current flowing across a pn junction with an embedded asymmetric quantum well. Spin polarization is created in the quantum well by radiative electron-hole recombination when the conduction electron momentum distribution is shifted with respect to the momentum distribution of holes in the spin-split valence subbands. Spin current appears when the spin polarization is injected from the quantum well into the n-doped region of the pn junction. The accompanying emission of circularly polarized light from the quantum well can serve as a spin polarization detector.

One of the most important problems in spintronics is the efficient injection of spin currents into semiconductor structures. A possible way is to inject spin polarization from a ferromagnet by passing a dc current across the interface [1]. If the ferromagnet is metallic, the efficiency of such injection is controversial [2], and the observed weak spin current has been attributed to the large mismatch of spin diffusion constants between the adjacent semiconductor and metal [3]. Spin injection can be enhanced by using an appropriate interface tunneling barrier [4]. On the other hand, a rather high degree of spin current polarization has been detected when the spin is injected from magnetic semiconductors [5], although the high efficiency is restricted to low temperatures. In this Letter we propose a new method that uses a dc current to inject spin polarization, not from a ferromagnetic material, but from a quantum well (QW) embedded into a pn junction. The spin-polarized current is generated during the radiative electron-hole recombination in the QW, accompanied by the emission of circularly polarized light. Our new injection mechanism can be easily explained. It is well known that in a QW which is asymmetric along the growth direction, the spin degeneracy of the hole subbands is removed at each finite wave vector k. The corresponding splitting of the hole energies increases with k and can reach quite high values. For example, in a p inversion layer of a GaAs-AlGaAs heterojunction, the splitting of the topmost heavy-hole subband was calculated [6] to be about 5 meV at k = 2·10^6 cm^-1. Each of the split states is a linear combination of four angular momentum eigenstates, which are specified by the z components J_z = ±3/2 and ±1/2. The resulting mean spins of the hole states are parallel to the QW interfaces. For a given k, the spins in each pair of spin-split subbands have opposite directions, and in a given spin-split subband the mean spin at k is equal in magnitude and opposite in direction to that at -k. The spin orientations in the topmost heavy-hole subband are shown in Fig. 1, where the Fermi energy µ_e (or µ_h) of the quasiequilibrium degenerate electron gas (or hole gas) is indicated.
When a σ-spin electron in the k state of the conduction band makes a radiative transition to the topmost heavy-hole subband, as marked by the downward vertical arrow in Fig. 1, the probability of this process depends on σ because the hole population of the k state in the split (+)-subband may differ from that in the split (-)-subband. Hence, at k, the conduction band electron spin will be polarized along the σ direction if electrons with (-σ)-spin have a higher recombination rate. However, since the hole spin orientation at -k is reversed with respect to that at k, the spin polarization of the conduction band electrons at the -k state will be along the -σ direction. Consequently, if the momentum distributions of both the electron gas and the hole gas are isotropic, there will be no net spin polarization of the conduction band electrons. On the other hand, if the momentum distributions of the electron gas and/or the hole gas are anisotropic, the generation of spin polarization becomes possible. We explain the generating process with the help of Fig. 1, where the anisotropic momentum distribution is indicated by a shift δk of the quasiequilibrium momentum distribution of electrons with respect to that of holes. State 3 in the (+)-subband and state 4 in the (-)-subband are equally occupied by holes with spins in opposite directions. Hence, the total probability of recombination of a conduction electron with the two holes in states 3 and 4 is independent of the spin of the electron. However, a hole appears in state 1 in the (+)-subband but not in state 2 in the (-)-subband. Hence, the conduction electrons in the shaded region can recombine only with the holes in the (+)-subband, and the corresponding recombination probability is spin dependent. Such processes create a nonequilibrium electron spin polarization in the QW. When this polarization diffuses out of the QW, a spin current is generated in the n-doped region. One way to realize the situation shown in Fig. 1 is to apply an electric field E parallel to the interfaces. Then, both electrons and holes acquire their respective drift velocities. Let v be the drift velocity of the electrons relative to that of the holes. If we ignore the drift of the low-mobility holes, the resultant band positions are as illustrated in Fig. 1. The shift is simply δk = m*v/ℏ = m*µE/ℏ, where µ is the electron mobility. The suggested spin current injector device, a QW embedded into a pn junction, is illustrated in Fig. 2. A bias voltage V_b is applied along the z axis, and the transverse voltage source V_t creates an electric field E along the x axis. The potential profile along the z axis and the spin current are also shown schematically in Fig. 2. At low temperatures, for degenerate electron and hole gases, excitonic recombination processes must be considered besides the band-to-band electron transitions. To avoid this complicated analysis, which would not change the essential physics, in this Letter we neglect the excitonic effect on the spin generation. For the valence band we define |J_z⟩ as the zone-center Bloch state and ψ^±_{k,n,Jz}(z) as the n-th confined state in the QW associated with the two-dimensional wave vector k and the hole spin projection J_z. Then, the wave functions Ψ^±_{n,k}(z) of the n-th (±) split valence subbands can be expressed as the sum of ψ^±_{k,n,Jz}(z)|J_z⟩ over J_z = ±3/2 and ±1/2. For electrons in the lowest conduction subband, we can similarly define |σ⟩ and χ_k(z).
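To get a feel for the magnitudes, a minimal numerical estimate of the shift δk = m*µE/ℏ follows; the GaAs-like effective mass and the field strength are our illustrative assumptions, not values taken from this Letter:

import scipy.constants as sc

m_eff = 0.067 * sc.m_e    # GaAs-like conduction-band effective mass (assumed)
mobility = 5.0            # electron mobility, m^2/(V s), i.e. 5e4 cm^2/(V s)
E_field = 1.0e4           # in-plane electric field, V/m (~100 V/cm, assumed)

v_drift = mobility * E_field          # relative drift velocity of electrons, m/s
dk = m_eff * v_drift / sc.hbar        # shift of the momentum distribution, 1/m

print(f"v = {v_drift:.2e} m/s, delta_k = {dk:.2e} 1/m = {dk/100:.2e} 1/cm")
# For these numbers delta_k is about 3e5 1/cm, i.e. roughly 15% of the thermal
# wave vector k_T ~ 2e6 1/cm quoted later in the text for GaAs at room temperature.

Even a modest in-plane field thus produces a distribution shift that is a sizeable fraction of k_T, which is what makes the recombination asymmetry of Fig. 1 non-negligible.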
If we neglect the small spin-orbit splitting, the electron wave functions Ψ_k(z) can be written as a linear combination of the two degenerate states with normalized amplitudes C_{±1/2}. When an electron and a hole recombine from their respective quantum states to emit a photon of wave vector q and polarization vector e^λ_q, the quantum probability amplitude of this process can, to lowest order, be easily derived; in this amplitude, ν = ± labels the (±) split valence subbands, p_{σ,Jz} ≡ ⟨J_z|p|σ⟩ is the interband matrix element of the electron momentum operator p, and A_q ≡ 2πℏc/nq. Knowing the recombination probability of the conduction band electrons with a specific spin, the spin generation during the recombination process can be calculated. The resultant spin density in the conduction band is expressed through the Pauli matrices 2s_{σ'σ} and the matrix elements G_{σσ'}. The matrix elements G_{σσ'} contain a numerical factor A and the momentum distribution functions f(k) of electrons and f^ν_n(k) of holes. Within the perturbation theory, f(k) and f^ν_n(k) are assumed to be spin independent. A relevant dimensionless parameter which measures the efficiency of spin generation is P = G_s/G, where G ≡ G_{1/2,1/2} + G_{-1/2,-1/2} is the number of radiative recombinations per unit time. As mentioned earlier, we consider temperatures high enough that the excitonic effect can be neglected. Hence, each momentum distribution function is the sum of the equilibrium Boltzmann distribution and the nonequilibrium correction due to the electric field E. Since the drift of holes can be ignored, the distributions are written in terms of ε(k), the dispersion of the lowest conduction subband, and ε^ν_n(k), those of the (±) split valence subbands. The zero reference energy is set at the bottom of the lowest conduction subband. We need the valence subband wave functions ψ^±_{k,n,Jz}(z) for calculating M^{λ,ν}_{σ,n}(k). Let the growth direction z be along the [001] crystal axis. The wave functions can be obtained by applying a proper unitary transformation [6] which block-diagonalizes the Luttinger Hamiltonian. They can be expressed in a general form in which the real functions ξ^±_{nh}(z) and ξ^±_{nl}(z) represent the partial amplitudes of heavy and light holes in the (+)- and the (-)-state. The phases are φ_k = 3η_k = -3ϕ_k/2, with ϕ_k = cos^{-1}(k_x/k). To obtain (5) we have neglected the band warping. For convenience we set the x axis along the drift velocity direction. With the above equations (1)-(5) we are ready to calculate the efficiency of spin generation P = G_s/G. In terms of the overlap integrals R^±_{x,nk} and R^±_{y,nk} (with R^±_{z,nk} = 0), P is then derived as in Eq. (6), where F_{k,n,ν} = exp{-[ε^ν_n(k) + ε(k)]/k_BT}. As expected, only the anisotropic part (the v·k term) of the electron momentum distribution function f(k) contributes to the generated polarization. Based on this formula, P can be investigated numerically in a broad range of parameters. However, qualitative results can be derived from (6) analytically for kd ≪ 1, where d is the typical length of hole confinement in the QW. We notice that in (6), at a temperature T, a characteristic wave vector k_T can be defined as the mean value of k with respect to the Boltzmann factor F_{k,n,ν}. Then, an order-of-magnitude evaluation of P can be obtained for k_Td ≪ 1. Our calculation indicates that in the region kd ≪ 1, the energy splitting ∆_n(k) = ε^+_n(k) - ε^-_n(k) is much less than the separation of two adjacent hole subbands, and b^+_{nl} ≃ b^-_{nl} as well as b^+_{nh} ≃ b^-_{nh}.
Hence, R^+_{nk} ≃ R^-_{nk}, and the term (R^+_{nk}F_{k,n,+} - R^-_{nk}F_{k,n,-}) to be summed in the numerator of (6) becomes proportional to 1 - exp[-∆_n(k)/k_BT]. With this simplification we can analyze the physical processes which contribute to P. Let us consider the contribution of the lowest hole subband, n = 1. If this is a heavy-hole subband, then the small admixture of light-hole states to this subband allows us to evaluate b_{1l}, obtaining b_{1l} ∝ (kd)² b_{1h}. As a result, we have R_{y,1k} ≫ R_{x,1k}, and the generated spin polarization is oriented perpendicular to the drift direction of the electrons. On the other hand, if the n = 1 subband corresponds to the light hole, a similar analysis gives b_{1h} ∝ (kd)² b_{1l} and R_{x,1k} ≫ R_{y,1k}. Consequently, the spin polarization so generated is oriented parallel to the electron drift direction. In the range of temperatures where ∆_1(k_T)/k_BT ≪ 1, we derive from (6) the estimate (7), where β = 2 (or β = 0) if the lowest subband is of heavy- (or light-) hole type. Hence, in the region of small characteristic wave vectors, the optoelectric generation of spin polarization is more efficient if the lowest hole subband is of the light-hole type. We should mention that in III-V semiconductor QWs, the lowest hole subband is of light-hole type if the QW is sufficiently strained. One example is the InAs/GaAs QW. In (7) the factor vk_T/k_BT is due to the anisotropic momentum distribution of the electrons. The other factor, ∆_1(k_T)(k_Td)^β/k_BT, originates from the ratio of the two summations in (6) and increases with T through k_T and ∆_1(k_T). For example, in GaAs at room temperature, k_T ≃ 2·10^6 cm^-1, and then k_Td ≃ 1 for d = 100 Å. In this situation, ∆_1(k_T) becomes comparable to the quantization energy of the hole subbands [6], and (7) is no longer valid. However, a simple scaling analysis shows that around k_Td ≃ 1, Eq. (7) can still be used to evaluate P, with the factor (k_Td)^β replaced by a numerical factor of the order of unity. We have shown that spin polarization of conduction electrons can be generated in a quantum well by the radiative electron-hole recombination. Let |e|I be the electric current across the pn junction, and η = τ_nr/(τ_nr + τ_r) the luminescence quantum efficiency, where τ_r (τ_nr) is the radiative (nonradiative) electron-hole recombination time. Then, the number of radiative recombinations per unit time is G = Iη. The spin polarization so generated in the conduction band can either diffuse out of the QW into the n-doped region of the pn junction, or relax within the QW with a spin relaxation time τ_sw. In III-V semiconductor QWs the dominant process is the D'yakonov-Perel' spin relaxation [8], although the Bir-Pikus mechanism [8] can be efficient due to the presence of a large number of holes. In the steady state, the spin current which flows out of the QW is given by Eq. (8), where S is the spin polarization and 1/τ = 1/τ_r + 1/τ_nr. We choose the spin quantization axis parallel to the direction of the vector P defined in (6). Then, S is simply (n_{1/2} - n_{-1/2})/2, where n_σ is the sheet density of σ-spin electrons in the QW. To relate S to I, we need to specify the transport process between the QW and the n-doped region, which involves spin diffusion and relaxation in the n-doped semiconductor. To avoid such complications, we consider a simple model of thermionic transport over the barrier, which is suitable for not too low temperatures.
In this case the σ-spin current i_σ is determined by the balance of Richardson currents emitted from the two sides of the potential barrier. Such emission currents depend on the chemical potentials µ_{σw} in the QW and µ_{σb} in the n-doped bulk semiconductor. In the linear response regime, |µ_{σw} - µ_{σb}| ≪ k_BT, and we obtain Eq. (9), where A* is the Richardson constant and the barrier height U is indicated in Fig. 2. With this expression, the spin current I_s = (i_{1/2} - i_{-1/2})/2 is a function of the chemical potentials. In the n-doped bulk semiconductor we assume diffusive motion of electrons with a diffusion constant D and a spin relaxation time τ_sb. We should mention that within the framework of linear transport theory the spin current in a bulk material can be driven only by a spin density gradient, and the corresponding characteristic length of the spin density variation has the form L_s = √(Dτ_sb). In n-doped bulk semiconductors τ_sb can be very long [9], and a large L_s will reduce the efficiency of spin injection. This is the same problem encountered in the study of spin injection from ferromagnetic metals into semiconductors [3], and it can be overcome [4] with a proper choice of the barrier height U. If U is chosen such that (L_s/l) exp[(µ_{σb} - eU)/k_BT] ≪ 1, where l is the electron mean free path, the injected spin current is no longer limited by the low spin relaxation rate of the bulk semiconductor. For moderately doped III-V semiconductors, if we take from [8,9] τ_sb = 100 ps and an electron mobility of 5·10^4 cm²/(V·s), then L_s/l ≃ 10 at room temperature. Therefore, the above inequality can easily be satisfied even if the barrier height is rather low. In this situation we obtain Eq. (10), where ∆µ = (µ_{1/2,w} + µ_{-1/2,w} - µ_{1/2,b} - µ_{-1/2,b})/2. Although ∆µ/k_BT is assumed to be small in linear response theory, the ratio τ/τ_sw can be large, and so the second term in the square brackets of (10) cannot be neglected. For example, in bulk III-V semiconductors with carrier densities between 10^17 cm^-3 and 10^18 cm^-3, at room temperature the ratio τ/τ_sw is about ten [8,9], assuming τ in the nanosecond range. A similar ratio was also found in QWs [10]. In our system, as illustrated in Fig. 2, the electron density and the hole density in the asymmetric QW are spatially shifted with respect to each other. The electron-hole recombination becomes spatially indirect, and therefore τ increases. On the other hand, τ_sw increases if the spin relaxation is dominated by the Bir-Pikus mechanism, but remains almost the same if the D'yakonov-Perel' mechanism dominates. Consequently, the actual value of τ/τ_sw varies with the system studied. From (10), we see that the upper limit of the spin polarization of the injected current is I_s/I = Pη. Since the spin polarization in the QW is created in processes that emit circularly polarized light, the spin generation can be detected by measuring the polarization of the emitted photons. Circular polarization of a photon is represented by the imaginary off-diagonal elements of the photon polarization matrix ρ. This Hermitian matrix can be calculated in a way similar to the calculation of the spin polarization. It can be shown that the imaginary elements are related to P as ρ_{xz} ∝ iP_y and ρ_{yz} ∝ -iP_x. Hence, most of the circularly polarized photons are emitted with their wave vectors parallel to P.
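As a numerical cross-check of the estimate L_s/l ≃ 10 quoted above, the following sketch combines the values given in the text (τ_sb = 100 ps, mobility 5·10^4 cm²/(V·s)) with the textbook Einstein relation and a thermal-velocity estimate of the mean free path; the effective mass and these auxiliary relations are our assumptions:

import numpy as np
import scipy.constants as sc

T = 300.0                      # temperature, K
mobility = 5.0                 # m^2/(V s), i.e. 5e4 cm^2/(V s) as in the text
tau_sb = 100e-12               # bulk spin relaxation time, s (from the text)
m_eff = 0.067 * sc.m_e         # GaAs-like effective mass (assumed)

D = mobility * sc.k * T / sc.e            # Einstein relation: diffusion constant, m^2/s
L_s = np.sqrt(D * tau_sb)                 # spin diffusion length, m

tau_p = mobility * m_eff / sc.e           # momentum relaxation time from mobility, s
v_th = np.sqrt(3.0 * sc.k * T / m_eff)    # thermal velocity, m/s
l_mfp = v_th * tau_p                      # mean free path, m

print(f"L_s = {L_s*1e6:.2f} um, l = {l_mfp*1e6:.2f} um, L_s/l = {L_s/l_mfp:.1f}")
# Gives L_s of a few microns and L_s/l of order several, consistent with the
# order-of-magnitude ratio ~10 quoted in the text.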
If we introduce a ferromagnetic layer into the n-doped part of our sample, the optoelectric injection of spins can be investigated by measuring the resistance of the system. The separation between the ferromagnetic layer and the QW must be less than L_s. In this case, the measured resistance depends on the relative spin polarizations in the paramagnetic and the ferromagnetic materials [1,2], and can be varied by changing the direction of the applied electric field E. We close this Letter with one remark. Within the framework of the linear response theory used in our analysis, the ratio vk_T/k_BT must be small. With a stronger electric field increasing the electron drift velocity v, we conjecture an enhancement of the spin generation due to the anisotropic momentum distribution of hot electrons. A. G. M. acknowledges support from the Royal Swedish Academy of Sciences and from the Crafoord Foundation.
Use of a Chimeric Hsp70 to Enhance the Quality of Recombinant Plasmodium falciparum S-Adenosylmethionine Decarboxylase Protein Produced in Escherichia coli

S-adenosylmethionine decarboxylase (PfAdoMetDC) from Plasmodium falciparum is a prospective antimalarial drug target. The production of recombinant PfAdoMetDC for biochemical validation as a drug target is important. The production of PfAdoMetDC in Escherichia coli has been reported to result in unsatisfactory yields and a poor quality product. The co-expression of recombinant proteins with molecular chaperones has been proposed as one way to improve the production of the former in E. coli. The E. coli heat shock proteins DnaK, GroEL-GroES and DnaJ have previously been used to enhance production of some recombinant proteins. However, the outcomes were inconsistent. An Hsp70 chimeric protein, KPf, which is made up of the ATPase domain of E. coli DnaK and the substrate binding domain of P. falciparum Hsp70 (PfHsp70), has previously been shown to exhibit chaperone function when expressed in E. coli cells whose resident Hsp70 (DnaK) function was impaired. We proposed that, because of its domain constitution, KPf would most likely be recognised by E. coli Hsp70 co-chaperones. Furthermore, because it possesses a substrate binding domain of plasmodial origin, KPf would be primed to recognise recombinant PfAdoMetDC expressed in E. coli. First, using site-directed mutagenesis followed by complementation assays, we established that KPf with a mutation in the hydrophobic residue located in its substrate binding cavity was functionally compromised. We further co-expressed PfAdoMetDC with KPf, PfHsp70 and DnaK in E. coli cells, either in the absence or presence of over-expressed GroEL-GroES chaperonin. The folded and functional status of the produced PfAdoMetDC was assessed using limited proteolysis and enzyme assays. PfAdoMetDC co-expressed with KPf and PfHsp70 exhibited improved activity compared to protein co-expressed with over-expressed DnaK. Our findings suggest that the chimeric KPf may be an ideal Hsp70 co-expression partner for the production of recombinant plasmodial proteins in E. coli.

Introduction

E. coli is often the host of choice for the production of recombinant proteins. However, one of the challenges of producing recombinant proteins in E. coli remains that the products are occasionally released from ribosomes as insoluble inclusion bodies. In addition, the use of strong promoters and high inducer concentrations can generate product yields exceeding 50% of the total cellular protein [1]. Under such circumstances, the rate of protein production overwhelms the protein folding machinery, resulting in the generation of poor quality, mis-folded recombinant proteins. Mehlin and co-workers [2] analysed 1000 genes from P. falciparum parasites that were over-expressed in E. coli and reported that only 337 were successfully produced. Of these, only 63 were reported as soluble proteins. It has been proposed that the recombinant expression of plasmodial proteins in E. coli in the presence of molecular chaperones of similar origin could improve both the yield and quality of the product [3,4]. S-adenosylmethionine decarboxylase (PfAdoMetDC) of the malaria parasite P. falciparum is a component of the unique bifunctional PfAdoMetDC-ODC (S-adenosylmethionine decarboxylase-ornithine decarboxylase) controlling the biosynthesis of essential polyamines, making it a potential antimalarial drug target [5,6].
Obtaining a pure and active form of monofunctional PfAdoMetDC in fairly large quantities would facilitate its further characterisation by methods such as crystallisation. Although recombinant PfAdoMetDC has been expressed in E. coli, the protein co-purified with E. coli proteins, among them DnaK [7]. DnaK belongs to the heat shock protein 70 (Hsp70) family of molecular chaperones, whose main function is to bind mis-folded proteins to allow them to fold. It is therefore plausible that PfAdoMetDC is released from ribosomes in a mis-folded state, attracting DnaK. Hsp70/DnaK binds proteins exhibiting extended hydrophobic patches which would normally be buried in a fully folded protein [8,9]. The binding of DnaK to mis-folded proteins facilitates their refolding [8,9]. Heat shock proteins (Hsps) constitute the central molecular machinery of the cell that facilitates protein folding. Hsp70/DnaK is one of the most prominent molecular chaperones. Hsp40 and GrpE co-operate with Hsp70 in chaperone action. The role of Hsp40 (E. coli DnaJ) is to bind substrates and present them to Hsp70, while simultaneously modulating the ATPase activity of Hsp70 [10]. Hsp40s thus regulate the functional specificity of Hsp70. In the ADP-bound state, Hsp70 binds its substrates with high affinity, whilst it releases its substrates in the ATP-bound state [11]. The nucleotide exchange function of DnaK is facilitated by a co-chaperone named GrpE [12]. An Hsp70 from P. falciparum parasites (PfHsp70), which is thought to be important for quality control in the parasite, was previously over-expressed [13] in E. coli dnaK756 cells, whose DnaK is functionally compromised [14]. In light of its capability to exhibit chaperone function in E. coli cells, PfHsp70 was previously co-expressed with P. falciparum GTP cyclohydrolase I (PfGCHI) in E. coli [4]. It was reported that the co-expression of PfGCHI with PfHsp70 led to improved quality of the PfGCHI [4]. A chimeric Hsp70 protein, KPf, has previously been described (Fig 1) [13]. This chimeric protein was constructed by fusing the ATPase domain of E. coli DnaK to the substrate binding domain of PfHsp70 [13]. The over-expression of KPf protected E. coli dnaK756 cells (which express a resident DnaK that is functionally compromised) against heat stress [13]. We surmised that KPf could serve as a more effective molecular chaperone partner for boosting the yield and quality of recombinant plasmodial proteins in E. coli, because it is likely to cooperate with E. coli co-chaperones (Fig 1). Furthermore, because it possesses the PfHsp70 substrate binding domain, it is likely to recognise target plasmodial recombinant proteins, facilitating their folding in E. coli. GroEL, a protein that belongs to the Hsp60 family, is a barrel-shaped chaperonin composed of 14 identical subunits arranged in two heptameric rings [15]. Each GroEL subunit is composed of an ATPase domain, a middle hinge domain and an apical substrate binding domain. GroEL has a preference for substrates that range between 20-50 kDa and are characterised by elaborate α/β or β + β topologies [16]. GroES is a heptameric ring of 10 kDa subunits which binds to the ends of the GroEL barrel, thus serving as the 'lid' of the GroEL barrel [17]. Production of supplemented GroEL-GroES has been associated with improved processing of recombinant proteins produced in E. coli [18]. It has been proposed that although DnaK and trigger factor, another E.
coli chaperone that facilitates the folding of newly synthesised peptides, both improve the folding of nascent chains, they also significantly slow down the folding process [19]. It is possibly for this reason that co-expression of certain recombinant proteins with supplemented E. coli chaperones does not always yield positive outcomes. For instance, de Marco and co-workers [20] reported that only 26 of the 50 target proteins that were co-overproduced with supplemented E. coli chaperones showed enhanced solubility and improved yields. Altogether, this suggests that E. coli chaperones may not be amenable to facilitating the folding of certain recombinant proteins, especially those of eukaryotic origin. For this reason, it has been proposed that rehosting the E. coli protein folding landscape, by matching chaperones and target proteins from the same species for co-expression in E. coli, could improve the yields and quality of recombinant proteins [3]. Since production of malarial proteins in E. coli is problematic, this approach may be of assistance. Indeed, a previous study demonstrated that the co-expression of PfHsp70 with another plasmodial protein, PfGCHI, improved the solubility and functional status of the latter [4]. In the current study we investigated the merit of the chimeric Hsp70, KPf, as a chaperone co-expression partner for improving the quality of recombinant PfAdoMetDC produced in E. coli. Our study further investigated the effect of co-expressing PfAdoMetDC with combinations of supplemented chaperones from both E. coli and P. falciparum. Our findings indicate that supplementation of chaperones of plasmodial origin improves the quality and stability of recombinant malarial proteins produced in E. coli. We discuss our findings with respect to their application in recombinant protein biotechnology and their impact on our understanding of protein folding in E. coli.

[Fig 1 caption, displaced in the source: "...connected by the linker. A single star represents a weak likelihood of cooperation with E. coli co-chaperones, or of malarial protein recognition, by the respective Hsp70 protein, and three stars represent a strong likelihood. This model proposes KPf as an ideal Hsp70 co-expression partner for production of malarial proteins in a bacterial host, as it is likely to cooperate with E. coli co-chaperones as well as recognise substrates of malarial origin."]

Results

Alteration of the hydrophobic pocket residue of KPf abrogates its chaperone function

Hsp70 binds substrates through its substrate binding cavity, which is characterised by three components: the α-helical lid, the arch, defined in prokaryotes by residues M404 and A429, and the hydrophobic central pocket, formed by residue V436 [21]. The substrate binding arch of DnaK is constituted by residues M404 and A429; in Hsp70s of eukaryotic origin, the residues at these positions are A and Y, respectively [22]. Hence, Hsp70s of eukaryotic origin are regarded as possessing an 'inverted' arch compared to that of DnaK [23]. The arch is thought to make direct contact with substrates, allowing access to peptides enriched in acidic and hydrophobic residues [23]. Apart from the arch residues, another important component regulating the interaction of Hsp70 with its substrates is a highly conserved valine residue (V436 in DnaK) located in the substrate binding cavity of Hsp70 [21]. KPf was previously shown to protect E. coli dnaK756 cells against heat stress [13].
Since KPf is a chimeric Hsp70 made by combining the ATPase domain of DnaK with the substrate binding domain of PfHsp70, we surmised that it possesses two key advantages over DnaK and PfHsp70 as a co-expression chaperone in recombinant protein production: (1) it possibly interacts with DnaJ, and (2) its substrate binding domain is primed to bind malarial proteins. We investigated whether changes in the arch and hydrophobic pocket of KPf would influence its function in cytoprotecting E. coli dnaK756 cells against heat stress. We made the following substitutions to investigate whether they would abrogate KPf function: A404Y, Y429A, V436F and A404Y/Y429A. The substitutions introduced in the substrate binding arch of KPf (A404Y, Y429A and A404Y/Y429A) did not influence its function (Fig 2A). Only the V436F substitution in the hydrophobic pocket of KPf led to a functional abrogation (Fig 2A). SDS-PAGE analysis showed that the mutant protein was produced to a level comparable to the original KPf chimera (Fig 2B). This suggests that the V to F substitution led to blockage of the hydrophobic pocket, restricting access of substrates to the substrate binding cavity of KPf.

Co-expression of PfAdoMetDC with supplemented molecular chaperones

Upon induction using AHT, PfAdoMetDC was recombinantly expressed in E. coli BL21 (DE3) Star cells (Fig 3). Additionally, the following chaperone sets could be successfully co-expressed with PfAdoMetDC: DnaK+DnaJ, DnaK+DnaJ+KPf and DnaK+DnaJ+PfHsp70 (Fig 3A-3C). Supplemented GroEL-GroES was also successfully expressed in tandem with one of the following chaperone sets: DnaK+DnaJ, KPf+DnaJ or PfHsp70+DnaJ (Fig 3D-3F). The successful co-expression of PfAdoMetDC with the above-mentioned chaperone sets created a platform for further enquiries regarding their role in influencing the quality of PfAdoMetDC. We noted that PfHsp70 and KPf co-expressed with supplemented GroEL-GroES were resolved on Western blots as full-length protein forms plus products of smaller sizes (Fig 2). It appears that GroEL-GroES overproduction may have compromised the processing of PfHsp70 and KPf as full-length proteins in E. coli BL21 (DE3) Star cells. Nonetheless, a fair amount of the full-length forms of the two chaperones was produced in the presence of supplemented GroEL-GroES. There was no evidence that exogenous expression of DnaJ raised DnaJ levels beyond those of the resident form of the protein (Fig 3A). Hsp40 proteins are generally produced at low levels in vivo, and over-expression of Hsp40 can, in certain cases, lead to toxicity and a decrease in cell viability [24]. E. coli cells circumvent production of toxic levels of DnaK, GrpE and DnaJ by using these proteins as negative regulators of the expression of heat shock genes, through their effect on the stability of σ32 [25]. Therefore, the over-expressed exogenous DnaJ may have suppressed the production of the resident form of the protein. Because GroEL and PfAdoMetDC are nearly the same size (~60 kDa), their expression could not be resolved by SDS-PAGE (Fig 3, upper panels). To validate the expression of PfAdoMetDC, we conducted Western blotting using α-Strep antibodies (Fig 3D-3F, lower panels). The expression of GroEL was confirmed by Western blot analysis (data not shown) conducted using α-Hsp60 antibodies [26].

PfAdoMetDC co-expressed with supplemented plasmodial Hsp70s and GroEL/ES is associated with less contaminating species

As in previous attempts to purify the protein [7], PfAdoMetDC expressed in E.
coli BL21 (DE3) Star cells endowed with resident levels of DnaK co-purified with DnaK. In the current study, we again observed that DnaK co-purified with PfAdoMetDC (Fig 4). Thus, our findings indicate that supplementation of DnaK does not improve the purity of recombinant PfAdoMetDC. This could be because DnaK binds stably to PfAdoMetDC, suggesting that the PfAdoMetDC protein was produced as a mis-folded species. In addition, in the absence of supplemented GroEL/ES, there is no evidence that KPf or PfHsp70 co-expression reduced DnaK contamination in the purified PfAdoMetDC protein (Fig 4). However, the introduction of GroEL/ES combined with either KPf or PfHsp70 led to a reduction in the level of DnaK contamination (Fig 4). Furthermore, PfHsp70 and KPf did not co-purify with PfAdoMetDC expressed either in the absence or presence of supplemented GroEL/ES (S1 Fig). Overall, this suggests that DnaK contamination could not be reduced by supplementing GroEL/ES alone. On the other hand, supplemented GroEL/ES did appear to reduce DnaK contamination in the presence of KPf and PfHsp70.

E. coli ΔdnaK cells are capable of over-expressing PfAdoMetDC

Although the supplementation of plasmodial chaperones (KPf and PfHsp70) along with over-expressed GroEL/ES reduced the level of the persistent association of DnaK with the purified recombinant PfAdoMetDC, we enquired whether E. coli cells deficient in DnaK would over-express PfAdoMetDC. In addition, we speculated that the presence of DnaK during co-expression may confound the folding process of PfAdoMetDC. PfAdoMetDC was successfully expressed and purified from E. coli ΔdnaK cells (Fig 5). As expected, there was no contaminating DnaK in PfAdoMetDC purified from the E. coli ΔdnaK strain, as verified by Western blot analysis using α-DnaK antibody (Fig 5). The successful expression and purification of PfAdoMetDC from the E. coli ΔdnaK strain suggests that not all recombinant malarial proteins require DnaK for their production.

PfAdoMetDC produced in E. coli cells rehosted with various chaperone constituents exhibits distinct structural features

We employed limited proteolysis to gain insight into the conformation of PfAdoMetDC expressed in the presence of the various Hsp70-DnaJ chaperones (Fig 6A). PfAdoMetDC produced in E. coli BL21 (DE3) Star cells that were not supplemented with exogenous chaperones was completely degraded by proteinase K within 5 minutes, generating small fragments (approximately 25 kDa in size) that could not be detected by Western blot analyses (Fig 6A, lane 1). PfAdoMetDC co-expressed with supplemented DnaK-DnaJ and KPf-DnaJ chaperones was fairly resistant to the action of proteinase K for 30 minutes (Fig 6A). Although the co-expression of PfAdoMetDC with PfHsp70-DnaJ improved the stability of the former, the protein was more susceptible to proteolytic action than protein recovered from cells supplemented with DnaK+DnaJ and KPf+DnaJ (Fig 6A). The products generated from PfAdoMetDC proteolysis exhibited unique profiles depending on the supplemented Hsp70 co-expression partner present (Fig 6A). This suggests that PfAdoMetDC produced in each case had a unique conformation. Overall, PfAdoMetDC expressed in E. coli cells supplemented with over-expressed DnaK+DnaJ was the most resistant to proteolytic action (Fig 6A). PfAdoMetDC was further co-expressed with the following supplemented chaperone combinations: DnaK+DnaJ+GroEL-GroES, KPf+DnaJ+GroEL-GroES and PfHsp70+DnaJ+GroEL-GroES (Fig 6B).
Recombinant PfAdoMetDC purified from E. coli BL21 (DE3) cells endowed with resident levels of DnaK and supplemented with GroEL-GroES (Fig 6B, lane 1) was just as susceptible to proteolytic digestion as the protein produced by cells expressing resident DnaK in the absence of supplemented GroEL-GroES (Fig 6A, lane 1). This suggests that GroEL-GroES was not able to improve the proteolytic stability of PfAdoMetDC produced in the presence of resident DnaK levels. However, PfAdoMetDC produced by cells endowed with supplemented DnaK+DnaJ+GroEL-GroES had improved stability against proteolytic action (Fig 6B, lane 2). Nevertheless, the product was not as stable as PfAdoMetDC produced by cells endowed with the KPf+DnaJ+GroEL-GroES and PfHsp70+GroEL-GroES chaperone combinations (Fig 6B, lanes 3 and 4). We further subjected PfAdoMetDC produced in the E. coli ΔdnaK strain to limited proteolysis. The protein was digested into smaller fragments, represented by faint bands on SDS-PAGE, which could not be detected by Western blot analysis (Fig 6C). These findings demonstrate that recombinant PfAdoMetDC was produced in E. coli ΔdnaK cells as a proteolytically susceptible molecule. PfAdoMetDC contains a Strep-tag at its C-terminus, which was recognised by the α-Strep antibody we used [27]. We noted that some fragments generated upon proteolysis of PfAdoMetDC may have lost their Strep-tag, as they were not detected by Western blotting in spite of their evident presence on SDS-PAGE analysis.

PfAdoMetDC produced in E. coli cells rehosted with Hsp70 chaperones exhibits improved enzymatic activity

The activity of PfAdoMetDC purified from E. coli under the various chaperone supplementations was evaluated relative to the un-supplemented scenario (normalised to 100%; Fig 7). PfAdoMetDC co-expressed with supplemented DnaK did not exhibit higher activity than the protein produced in the presence of only resident DnaK levels. On the other hand, the activity of PfAdoMetDC co-expressed with KPf or PfHsp70 was enhanced, resulting in a 3.24- and 2.77-fold increase, respectively (Fig 7). Unexpectedly, PfAdoMetDC produced in the E. coli ΔdnaK strain exhibited the highest activity. It is interesting that although PfAdoMetDC produced in E. coli ΔdnaK cells was the least stable to proteolytic treatment, it exhibited the highest enzymatic activity. On the other hand, PfAdoMetDC produced in cells supplemented with DnaK (in the absence of supplemented GroEL) was fairly resistant to proteolytic cleavage but exhibited only marginally improved activity (Figs 6A and 7). This shows that DnaK did not necessarily improve the activity of PfAdoMetDC. In contrast, PfAdoMetDC co-expressed with either KPf or PfHsp70 exhibited both improved activity and resistance to proteolytic cleavage (Figs 6 and 7). Altogether, the findings suggest that PfAdoMetDC may have been recognised by both PfHsp70 and KPf, resulting in it attaining a proper fold compared to the protein produced in unmodified E. coli BL21 (DE3) Star cells.

Assessment of the activities of the various individual chaperones and their combinations in vitro

Having investigated the effects of co-expressing the individual Hsp70 chaperones, as well as their combinations with GroEL-GroES, on the quality of recombinant PfAdoMetDC, we next set out to investigate the function of the chaperones in vitro. The in vitro function of these chaperones would provide insight into their possible contribution to protein quality control in the cell.
Of particular interest to us was to establish the functional compatibility of the plasmodial Hsp70 chaperones with the E. coli chaperones (DnaJ and GroEL). Malate dehydrogenase (MDH) is susceptible to heat stress, and for this reason it is widely employed to study the role of heat shock proteins and other molecules with chaperone-like features in protein quality control [28,29,30]. Exposure of MDH to heat stress leads to its aggregation, which is detected as an increase in turbidity. Some molecular chaperones are known to be capable of suppressing MDH aggregation in vitro [29,30,31]. We expressed and purified preparations of the molecular chaperones employed in this study as His-tagged species (Fig 8A). As expected, in the absence of molecular chaperones, MDH aggregated upon heat treatment (Fig 8B). The addition of the molecular chaperones (DnaK, PfHsp70, KPf, DnaJ and GroEL) resulted in the suppression of MDH aggregation. DnaJ by itself exhibited the lowest activity of the chaperones tested. It is known that DnaJ possesses limited independent chaperone function, as its main purpose is to bind mis-folded substrates and hand them over to DnaK for subsequent folding [10]. In the absence of DnaK, DnaJ exhibits limited protein aggregation suppression capability [32]. The in vitro chaperone function of PfHsp70 has been demonstrated previously [29]. Although the chaperone activity of KPf has been reported before [13], those findings were based on its ability to protect E. coli dnaK756 cells from heat stress. This is the first study to demonstrate that KPf, the chimeric Hsp70 chaperone made up of the ATPase domain of DnaK coupled to the substrate binding domain of PfHsp70, is capable of suppressing protein aggregation in vitro. Overall, the data suggest that the Hsp70-DnaJ combinations improved the solubility of MDH in vitro. In addition, the introduction of GroEL-GroES to the respective Hsp70-DnaJ combinations did not appear to significantly alter the outcome. Altogether, the functional capabilities exhibited by the chaperones and chaperone combinations investigated here may reflect their function in protein quality control when over-expressed in E. coli. This assay sought to establish the function of the various chaperones in the absence of ATP. Therefore, the assay represents only the protein aggregation suppression function of the chaperones and not their capability to refold mis-folded substrates.

[Fig 7 caption, displaced in the source: "The activities of PfAdoMetDC co-expressed with supplemented molecular chaperones (DnaK, KPf, and PfHsp70) were normalised against PfAdoMetDC produced in E. coli cells endowed with resident levels of molecular chaperones, represented as "Resident DnaK". "Over-expressed DnaK" represents PfAdoMetDC co-produced with supplemented DnaK; PfAdoMetDC purified from an E. coli dnaK-minus strain is represented by "ΔdnaK". Statistical significance was calculated using a Student's t-test; * denotes p < 0.05."]

Discussion

It has been proposed that expression of supplemented molecular chaperones could improve the yield and quality of recombinant malarial proteins produced in E. coli [3,4]. In the current study, we first sought to validate the chaperone role of KPf, the chimeric protein constituted by the ATPase domain of DnaK and the substrate binding domain of PfHsp70. We observed that changes in the arch residues of KPf did not affect its function.
However, the V436F mutation, representing a substitution in the hydrophobic pocket of KPf, abrogated the protein's function based on a complementation assay. Since introduction of this mutation led to death of E. coli dnaK756 cells subjected to heat stress, this indicates that KPf protected the cells through a specific chaperone function, as the V436F mutation is likely to have blocked access of mis-folded protein substrates to the substrate binding cavity of KPf. We therefore surmise that the substrate binding domain of KPf, though of plasmodial origin, is able to recognise mis-folded E. coli proteins. We further hypothesized that KPf would also be capable of binding recombinant plasmodial proteins expressed in E. coli. PfAdoMetDC co-produced in E. coli with PfHsp70 and its derivative, KPf, demonstrated improved stability and activity compared to the protein produced in the presence of supplemented and/or resident E. coli DnaK. In addition, our findings highlight that supplemented E. coli DnaK had adverse effects on the quality of PfAdoMetDC both in terms of purity and activity (Figs 4 and 6). E. coli DnaK binds to its substrates for longer than Hsp70 of eukaryotic origin [19]. For this reason, DnaK may delay the folding process of proteins of eukaryotic origin that are expressed in E. coli. Since KPf possesses the ATPase domain of DnaK, it may have interacted with the GrpE and DnaJ co-chaperones, and this may explain why PfAdoMetDC produced in the presence of KPf possessed a different conformation (Fig 6) and exhibited higher activity (Fig 7) compared to protein co-expressed with PfHsp70. The residues in the ATPase domain of E. coli DnaK that interact with DnaJ (Y145, N147, D148, N170 and T173), and residues G400, D526 and G539 in the peptide binding domain (as reviewed in [8]), are very well conserved in PfHsp70 and, by extension, KPf (S2 Fig). Our findings suggest that the Hsp70s of plasmodial origin (KPf and PfHsp70) are primed to interact transiently with PfAdoMetDC to facilitate its proper fold. This is in contrast with DnaK, which possibly binds more stably to PfAdoMetDC, leading to the production of a less active, and more likely inappropriately folded, form of the latter. It is also possible that PfHsp70 and its derivative, KPf, may have out-competed the resident DnaK in binding PfAdoMetDC, facilitating its improved folding. This is conceivable as both chaperones possess a substrate binding domain suited to recognising peptides of plasmodial origin. Alternatively, KPf and PfHsp70 may have acted by indirectly creating protein folding conditions that promoted PfAdoMetDC folding. The contribution of GroEL-GroES towards improved folding of PfAdoMetDC may be due to the possibility that PfAdoMetDC technically qualifies as a GroEL-GroES substrate in spite of its different species origin. It is known that GroEL-GroES substrates are nearly of the same size as the chaperonin itself; furthermore, GroEL-GroES binds to non-native forms of its substrates but does not bind to their native forms [28,33]. In addition, GroEL-GroES binds to mis-folded proteins that are unlikely to be rescued by other molecular chaperones in the cell [34]. It is likely that KPf directly binds PfAdoMetDC to facilitate folding of the latter. In addition, through its possible interaction with DnaJ, KPf may facilitate refolding of a broad spectrum of E. coli proteins as well. Interaction of Hsp70 with Hsp40 is crucial to their function in protein folding (foldase function).
However, Hsp70 is independently capable of binding misfolded proteins to stabilise them against aggregation (holdase function) [35,36]. Thus, assuming that wild type PfHsp70 failed to interact with the DnaK co-chaperones in E. coli, its role would have been limited to suppressing the aggregation of proteins, among them recombinant PfAdoMetDC. Nonetheless, it is interesting to note that co-expression of either PfHsp70 or KPf with PfAdoMetDC improved the quality of the latter. The previously reported association of purified recombinant PfAdoMetDC with E. coli DnaK [7] suggests that the former is produced in a mis-folded state and thus may exhibit hydrophobic patches which attract DnaK. In addition, the rate of translation in bacteria is much higher (approximately 20 amino acids per second) than in eukaryotes (approximately 4 amino acids per second) [33]. The slower translation rate in eukaryotes is consistent with the production of multi-domain proteins, which require more time for their folding. Therefore the rate of PfAdoMetDC synthesis in E. coli may have been rapid, giving the protein less time to fold. This would have led to the generation of a product that was not fully folded, to which DnaK bound with high affinity. The residence time of DnaK on peptides varies from 30 s to 25 minutes; only proteins that solely depend on DnaK for folding (associated with high frequencies of predicted DnaK binding sites) are released rapidly [37]. On the other hand, proteins that exhibit low cellular abundance, fewer DnaK binding sites, and those that tend to assume dynamic structural intermediates (slow folding proteins which do not easily bury their hydrophobic patches) exhibit longer DnaK residence times [37]. Typically such proteins require DnaK for their sustained folding in the cell [37]. Consequently, the extended binding of DnaK to peptides may slow their folding, leading to detrimental consequences. For this reason, it has been proposed that recombinant proteins be expressed in E. coli ΔdnaK cells to circumvent DnaK contamination [38]. Indeed, our findings suggest that DnaK confounds the folding of PfAdoMetDC. Interestingly, PfAdoMetDC expressed in E. coli ΔdnaK cells displayed enhanced activity. However, the protein was highly susceptible to proteolytic cleavage. This suggests that Hsp70 may not be crucial for PfAdoMetDC production in E. coli; however, it is required for the correct fold and stability of the recombinant protein. In the current study, only PfHsp70 and its derivative, KPf, exhibited a positive effect on the quality (improved activity and resilience to proteolytic action) of the recombinant PfAdoMetDC produced. It was surprising to us that PfAdoMetDC produced in E. coli ΔdnaK cells exhibited higher activity than protein produced in the presence of DnaK and KPf/PfHsp70. E. coli ΔdnaK cells were cultured at 30°C, as this is their ambient growth temperature. This may have resulted in a slower rate of protein synthesis and improved quality of recombinant PfAdoMetDC in spite of the absence of DnaK. Limited proteolysis was used to examine the conformation of PfAdoMetDC expressed both in the absence and presence of supplemented molecular chaperones [39,40]. This technique is based on the premise that the segments of the polypeptide chain most accessible to proteinases are exposed loops within domains or linking segments between domains [40].
It is interesting to note that PfAdoMetDC co-expressed with the various Hsp70 chaperones exhibited the following profile, in decreasing order of resilience to proteolytic action; protein co-expressed with: DnaK+DnaJ > KPf+DnaJ > PfHsp70+DnaJ > no supplemented chaperones > ΔdnaK (Fig 6A and 6C). On the other hand, PfAdoMetDC produced in the presence of supplemented Hsp70-DnaJ-GroEL-GroES combinations exhibited the following proteolytic stability profiles, depending on the chaperone set co-expressed with it: KPf+DnaJ+GroEL-GroES > PfHsp70+DnaJ+GroEL-GroES > DnaK+DnaJ+GroEL-GroES > GroEL-GroES (Fig 7B). Furthermore, the distinct proteolytic stability profiles exhibited by PfAdoMetDC produced under the various chaperone conditions indicate that slight changes to protein folding conditions in E. coli have major implications for the folding fate of recombinant proteins produced in the cell. Overall, the findings suggest that the plasmodial Hsp70s (KPf and its parental protein, PfHsp70) facilitated the folding of PfAdoMetDC both in E. coli cells expressing only resident levels of GroEL-GroES and in cells endowed with supplemented GroEL-GroES. On the other hand, supplementing both DnaK and GroEL-GroES did not improve the resistance of PfAdoMetDC to proteolytic cleavage compared to supplementing with DnaK only. We further demonstrated the chaperone activity of KPf in vitro by showing its capability to suppress heat-induced aggregation of MDH, an aggregation-prone protein. This is the first study showing the independent chaperone capability of this chimeric Hsp70 protein. We surmise that KPf similarly bound the recalcitrant PfAdoMetDC recombinant protein produced in E. coli, suppressing its aggregation. The possible interaction of KPf with E. coli DnaK co-chaperones, such as GrpE and DnaJ, makes it possible for KPf to facilitate refolding of PfAdoMetDC co-expressed with it in E. coli. This is because DnaJ speeds up the otherwise rate-limiting ATP hydrolysis step of the DnaK cycle, while GrpE facilitates nucleotide exchange. In eukaryotes, Hsp70 mediates substrate transfer to Hsp60 (TRiC, for TCP-1 ring complex) by directly interacting with TRiC [41]. However, TRiC does not share the same specificity for substrates as GroEL; substrate sequence and structure vary, and substrates may even reach 100-120 kDa in size [42]. In addition, TRiC is more intimately linked to Hsp70 in eukaryotes, facilitating direct substrate transfer from Hsp70 to TRiC [43]. The TRiC protein folding cycle occurs at a much slower rate compared to that of GroEL-GroES, and thus more time is provided for encapsulation and folding of substrates by the chaperonin [34]. The differences between the cooperation of Hsp70 and TRiC in eukaryotes and the DnaK+GroEL-GroES functional partnership may explain why certain proteins of eukaryotic origin may not fold properly in E. coli [44]. It is possible that KPf cooperates with GroEL-GroES more productively in facilitating the processing of recombinant malarial proteins produced in E. coli than the DnaK-GroEL-GroES partnership does. However, this remains to be directly validated. We hypothesize that, because of their aptitude for recognising PfAdoMetDC, PfHsp70 and KPf took over from resident DnaK the responsibility of facilitating PfAdoMetDC folding. The expression of the plasmodial Hsp70s (KPf and PfHsp70) in the presence of supplemented GroEL-GroES led to the recovery of purer PfAdoMetDC recombinant protein.
Based on size criteria and its inclination to being recalcitrant, PfAdoMetDC is most likely a substrate of both Hsp70 and GroEL-GroES. GroEL-GroES is known to rescue the folding of proteins that other E. coli chaperones do not fold effectively, and most of its substrates are nearly the same size as itself [33,34]. PfAdoMetDC fits both criteria and may therefore require GroEL-GroES to facilitate its full processing. It is plausible that KPf provides an ideal Hsp70 partner for GroEL-GroES in facilitating the folding of PfAdoMetDC. It remains to be studied, however, whether the role of KPf could be extended to facilitate the processing of other recalcitrant malarial recombinant proteins.

E. coli strains and plasmids

The E. coli ΔdnaK strain, BB1553 (MC4100 ΔdnaK52::CmR sidB1), the E. coli dnaK mutant strain, BB2362 (dnaK756 recA::TcR pDMI,1), and plasmids pBB535, expressing the genes for DnaK and DnaJ, and pBB542, expressing DnaK, DnaJ and GroEL-GroES, were a kind donation from Dr Bernd Bukau (Heidelberg University, Germany). The E. coli dnaK756 BB2362 strain is resistant to bacteriophage lambda [45] and is unable to grow above 40°C [45,46]. BB2362 expresses a mutant DnaK with three glycine-to-aspartate substitutions [47]. Plasmids pBB535 and pBB542 are both under the control of the IPTG-regulated promoter PA1/lacO-1 and carry spectinomycin resistance [45]. We have routinely used the construct pQE30/PfHsp70 to express PfHsp70 in E. coli [13,29]. The pASK-IBA3/PfAdoMetDC construct hosting the codon-harmonised PfAdoMetDC gene, encoding the α-subunit of the protein (approximately 60 kDa), has previously been described [7]. PfAdoMetDC was expressed as a C-terminally Strep-tagged molecule. The pASK-IBA3 plasmid is under the control of the tet promoter, which is regulated by anhydrotetracycline (AHT). The tet repressor keeps the promoter in a repressed state until the addition of AHT; expression leakage is thus minimal. A description of all strains and plasmids used in this study is provided in supporting information (S1 Table and S2 Table).

Introduction of arch and hydrophobic pocket substitutions in KPf

To determine the role of the arch and hydrophobic pocket residues of the substrate binding cavity of KPf, mutations were introduced in this subdomain based on the same approach and primers that we previously employed to introduce similar changes in the full-length PfHsp70 protein [29]. In the current study, we made the changes on KPf, a derivative of PfHsp70 that possesses an ATPase domain from DnaK. Plasmid pQE60/KPf was used as the parental DNA to generate modified plasmids encoding KPf with mutations in the substrate binding cavity. The Stratagene QuikChange site-directed mutagenesis kit was used to modify the plasmids, following the instructions of the supplier. The following are the derivatives we sought to generate from the construct pQE60/KPf [13]: pQE60/KPf-A404Y (encoding the KPf-A404Y protein), pQE60/KPf-Y429A (encoding the KPf-Y429A protein), and pQE60/KPf-V436F (encoding the KPf-V436F protein). All the changes were verified by DNA sequencing.

Investigating the role of the arch and hydrophobic residues of KPf using a complementation assay

We previously demonstrated that PfHsp70 and KPf both confer cytoprotection on E. coli dnaK756 against heat stress [13]. In the current study, we introduced changes to the residues in the substrate binding cavity and hydrophobic pocket of KPf. Our aim was to validate whether the cytoprotective function of KPf in E. coli is dependent on the integrity of the residues constituting the arch and hydrophobic pockets located in its C-terminal substrate binding domain.
E. coli dnaK756 cells were transformed with plasmids encoding the proteins KPf and its respective derivatives with mutations in the substrate binding cavities. The pQE60 plasmid vector was used as a negative control. Cells transformed with pQE60/DnaK constituted a positive control. This strain's resident DnaK contains three amino acid substitutions, one of which reduces its affinity for GrpE, whilst the other two elevate the basal ATPase activity of DnaK [48]. The cells were transformed with plasmids encoding the KPf variants before being subjected to heat stress (43.5°C) in order to assess the capability of the respective proteins to reverse the thermo-sensitivity of the cells. Freshly transformed E. coli dnaK756 cells were grown overnight at 30°C in 2 x YT broth (16 g of tryptone powder, 10 g of yeast extract powder and 5 g of sodium chloride in 1000 mL of double distilled water) containing 50 μg/mL kanamycin, 10 μg/mL tetracycline and 100 μg/mL ampicillin. The overnight inoculum was transferred into fresh broth and incubated under the same growth conditions. At mid-log phase of growth, some cells were induced with 1 mM IPTG whilst others were not. The cells were left to grow to A600 = 2.0. The cultures were standardised to a cell density of 0.2 A600 before being spotted onto 2 x YT agar plates containing the necessary antibiotics and 20 μM IPTG, and incubated overnight at 37°C and 43.5°C, respectively.

Production of pBB535 and pBB542 based constructs for the expression of PfHsp70 and its derivative

To facilitate co-expression of PfHsp70 and its derivative, KPf, with PfAdoMetDC in E. coli, plasmid vectors were selected based on compatible origins of replication and independent antibiotic selection. pBB535, originally encoding DnaK and DnaJ, was altered to encode PfHsp70+DnaJ and KPf+DnaJ, respectively. Similarly, pBB542, encoding DnaK, DnaJ and GroEL-GroES, was modified such that DnaK was replaced by PfHsp70 and KPf, respectively. The pBB535 construct was used as template for site-directed mutagenesis to generate the constructs pBB535-PfHsp70 and pBB535-KPf, respectively. A BamHI site was introduced before the start codon of DnaK, followed by a SmaI site introduced after the stop codon of DnaK. The forward primer 5'-GACTCTCTTCCGGGGATCCATGCCATACCGCGAAAGGTTTTGC-3' and reverse primer 5'-GCAAAACCTTTCGCGGTATGGCATGGATCCCCGGAAGAGAGTC-3' were used to introduce the BamHI site. The introduction of the SmaI site was facilitated using the forward primer 5'-CAAAGACAAAAAATAACCCGGGATAAACGGGTAATTATACTGACACGGGC-3' and reverse primer 5'-GCCCGTGTCAGTATAATTACCCGTTTATCCCGGGTTATTTTTTGTCTTTG-3'. E. coli BB1553 (MC4100 ΔdnaK52::CmR sidB1) cells lack the dnaK gene [50]. We investigated the expression of PfAdoMetDC in these cells. Briefly, competent E. coli ΔdnaK cells were transformed with the pASK-IBA3/PfAdoMetDC construct. Following transformation, a single colony was inoculated into 2 x YT broth (16 g of tryptone powder, 10 g of yeast extract powder and 5 g of sodium chloride in 1000 mL of double distilled water) supplemented with 35 μg/mL chloramphenicol and 100 μg/mL ampicillin, and the culture was left to grow overnight at 30°C.
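As a sanity check on mutagenic primer pairs of the kind listed above, the two strands must be exact reverse complements and must carry the intended restriction site. A minimal sketch in Python (plain string handling; the sequences are the BamHI pair quoted above):

    # Check that a QuikChange-style primer pair is reverse-complementary and
    # locate the introduced BamHI recognition site (GGATCC).
    COMP = str.maketrans("ACGT", "TGCA")

    def revcomp(seq: str) -> str:
        return seq.translate(COMP)[::-1]

    fwd = "GACTCTCTTCCGGGGATCCATGCCATACCGCGAAAGGTTTTGC"
    rev = "GCAAAACCTTTCGCGGTATGGCATGGATCCCCGGAAGAGAGTC"

    assert revcomp(fwd) == rev, "primers are not exact reverse complements"
    print("BamHI site (GGATCC) found at position:", fwd.find("GGATCC"))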
The following morning, 5 μL of inoculum from the overnight culture was transferred into 45 mL of fresh 2 x YT broth supplemented with 35 μg/mL chloramphenicol and 100 μg/mL ampicillin. The cells were incubated at 30°C with shaking to an optical density (OD600) of 0.6. PfAdoMetDC production was induced by the addition of 2 ng/mL AHT. The cells were harvested for protein expression and purification studies.

Analysis of PfAdoMetDC using limited proteolysis

We sought to investigate conformational changes of PfAdoMetDC expressed in E. coli BL21 (DE3) Star cells in the absence and presence of supplemented molecular chaperones using limited proteolysis [41]. We also subjected PfAdoMetDC protein that was expressed and purified from E. coli ΔdnaK cells to limited proteolytic analysis. Purified PfAdoMetDC (0.2 mg/mL) was incubated with 0.33 mg/mL proteinase K at 37°C for 30 minutes. Proteolytic digestion of PfAdoMetDC was analysed using SDS-PAGE, and Western analysis was used for verification, employing monoclonal α-Strep-tag II antibodies to detect fragments retaining the C-terminally located Strep-tag II.

Assessment of the enzymatic activity of PfAdoMetDC

The enzymatic activity of PfAdoMetDC preparations expressed under the varied protein folding conditions was determined. The assay constituents included 5 μg enzyme, 100 μM S-adenosyl-L-methionine chloride (Sigma-Aldrich, Germany) and 50 nCi S-[carboxyl-14C] adenosyl-L-methionine (55 mCi/mmol, Amersham Biosciences, England) in assay buffer (50 mM KH2PO4 pH 7.5, 1 mM EDTA, 1 mM DTT), as previously described [7,54]. All the assays were performed in triplicate and the specific enzyme activities were expressed as the amount of CO2 produced in nmol/min/mg.

Assessment of the effectiveness of molecular chaperones to suppress protein aggregation in vitro

The ability of the recombinant Hsp70 proteins (PfHsp70, KPf and DnaK) to suppress heat-induced aggregation of malate dehydrogenase (MDH) was determined spectrophotometrically based on a previously reported assay [29,30]. Furthermore, the heat-induced aggregation of MDH was investigated in the presence of DnaJ in a ratio of 2:1 (DnaJ:Hsp70) and of GroEL-GroES in a ratio of 1:1 (GroEL-GroES:Hsp70). The proteins were suspended in assay buffer (100 mM NaCl, 50 mM Tris, pH 7.4). The aggregation of the protein was determined by reading absorbance at 360 nm using a 96-well plate reader (BioTek ELx808). A non-chaperone, BSA, was used as a control.
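For orientation, percent suppression of aggregation in such turbidity assays is commonly computed from the background-corrected A360 plateaus. A minimal sketch in Python (the readings below are invented placeholders, not data from this study):

    # Quantify suppression of heat-induced MDH aggregation from A360 turbidity
    # traces: MDH alone defines 100% aggregation.
    mdh_alone = [0.02, 0.10, 0.25, 0.38, 0.42, 0.43]      # hypothetical A360 vs time
    mdh_plus_kpf = [0.02, 0.04, 0.07, 0.09, 0.10, 0.10]   # hypothetical, + KPf

    def percent_suppression(control, plus_chaperone):
        delta_control = control[-1] - control[0]
        delta_chaperone = plus_chaperone[-1] - plus_chaperone[0]
        return 100.0 * (1.0 - delta_chaperone / delta_control)

    print(f"suppression by KPf: {percent_suppression(mdh_alone, mdh_plus_kpf):.0f}%")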
On Solar System and Cosmic Rays Nucleosynthesis and Spallation Processes

A brief survey of nuclide abundances in the solar system and in cosmic rays and of the believed mechanisms of their synthesis is given. The role of spallation processes in nucleosynthesis is discussed. A short review of recent measurements, compilations, calculations, and evaluations of spallation cross sections relevant to nuclear astrophysics is given as well. It is shown that some past astrophysical simulations used old experimental nuclear data and theoretical cross sections that are in poor agreement with recent measurements and calculations. New astrophysical simulations using recently measured and reliably calculated nuclear cross sections, further research into obtaining better cross sections, and the production of evaluated spallation cross section libraries for astrophysics are suggested.

Introduction

Considerable success was achieved over the last decades in the determination of abundances of nuclides in the solar system and in cosmic rays, as well as in understanding the mechanisms of their synthesis (see, e.g., Burbidge, Burbidge, Fowler, & Hoyle 1957; Crosas & Weisheit 1996; McWilliam 1997; Wallerstein et al. 1997; Käppeler, Thielemann, & Wiescher 1998; Cameron 1999; Bethe 1999; Busso, Gallino, & Wasserburg 1999; Ginzburg 1999; Hamann & Ferland 1999; Henley & Schiffer 1999; Käppeler 1999; Khlopov 1999; Salpeter 1999; Wolfenstein 1999). Nevertheless, many interesting questions still remain. So, the light-element abundances, especially that of beryllium, and the origin of low-energy cosmic rays and their role in light-element production require a critical reexamination. Another open question on chemical evolution in galaxies is, e.g., the fact that plots of abundances relative to hydrogen, [Be/H] and [B/H] versus [Fe/H], in halo stars both exhibit a slope of +1, rather than the value +2 that is expected for normal supernova recycling of interstellar material (Crosas & Weisheit 1996; Duncan et al. 1992). A further open question is related to the effect of hypothetical sources of non-equilibrium particles on the radiation-dominant (RD) stage of the expanding hot Universe, like the effect of antiproton interaction with 4He on the abundance of light elements (Khokhlov 1999). The abundance of light elements is much more sensitive to possible effects of non-equilibrium particles than to the spectrum of the thermal electromagnetic background, so a more complete analysis of the effects of non-equilibrium particles on the RD stage of the Universe is still to be performed in the future (Khokhlov 1999). Note that in spite of the determinative role of nuclear astrophysics in understanding the mechanisms of nucleosynthesis and the great improvement of nuclear data over the past decades, some of the remaining questions about abundances of both stellar and interstellar elements are related to uncertainties in the nuclear data used in astrophysics. So, one of the biggest remaining uncertainties in nuclear astrophysics today concerns the precise parameters of a pair of resonance levels in 16O, just below the thermonuclear energy range (Salpeter 1999). Also, some of the nuclear reactions important for astrophysics are either not measured yet, or the results of recent measurements and model calculations by nuclear physicists are little known and not yet widely used by astrophysicists.
At the same time, some questions about elemental abundances, especially of the interstellar light elements, are related more to cosmology itself and to elementary particle physics (Khokhlov 1999; Salpeter 1999; Turner & Tyson 1999) than to the "old" nuclear physics. So, as mentioned by Salpeter (1999), to predict today's interstellar abundances quantitatively we need to know how many stars of various masses were born and have already died, since only in old age (e.g., planetary nebulae) and death (supernovae) does the material from a star's interior reach interstellar space. This mass distribution, the "initial mass function," is still somewhat uncertain (Salpeter 1999). The aim of the present paper is to review briefly the mechanisms of nucleosynthesis believed today and the elemental abundances of stellar and interstellar matter, and to highlight places where nuclear spallation processes are important. Nuclear spallation has been our field of research for decades, so we hope to find points where our experience and knowledge may help achieve a somewhat better understanding of some astrophysical questions.

The Solar System and Cosmic Rays Abundances of Elements

Let us begin by briefly reviewing the currently believed scenario of the origin of elements and of their abundances, so that we may discuss later a possible contribution to nucleosynthesis from spallation processes. The abundance of the solar system elements is shown in Fig. 1. Data shown by the thick black curve are taken from Table 38 by Lang (1980) and are based upon measurements of Type I carbonaceous chondrite meteorites (meteorites containing carbon compounds with a minimum of stony or metallic chondrite metals); they are thought to be a better representation than the old Suess and Urey (1956) curve (thin, blue), which was based on measurements of terrestrial, meteoric, and solar abundances.

[Fig. 1 caption: The thin (blue) curve shows data from Table III by Suess and Urey (1956), based on measurements of terrestrial, meteoric, and solar abundances. These data were used by Burbidge, Burbidge, Fowler, and Hoyle (1957) in postulating the basic nucleosynthetic processes in stars in their seminal work, which became widely known as "B2FH," the "bible" of nuclear astrophysics. The thick black curve shows newer data from the compilation published in Table 38 by Lang (1980), based upon measurements of Type I carbonaceous chondrite meteorites and thought to be a better representation than Suess and Urey's curve. The nuclear processes thought to be the main stellar mechanisms of nuclide production are shown as well.]

The abundance of elements in cosmic rays compared with the corresponding abundances of elements in the solar system is shown in Fig. 2. One can see that while the abundances of the majority of elements in cosmic rays are very close to what we have for the solar system, there are groups of nuclides, like the one in the Sc-V-Mn region and, especially, the light LiBeB group, whose abundances in cosmic rays differ by many orders of magnitude from those in the solar system.

[Fig. 2 caption: Abundances (relative to silicon = 10^6) of elements in 70-280 MeV/nucleon cosmic rays (Simpson, 1983) compared to the solar system abundances (open circles, taken from Tab. 38 by Lang, 1980), normalized to Si = 10^6.]

It is natural that the abundances shown in both Figs. 1 and 2 are not definitive. With the development of better measurement methods and techniques and with increasing general understanding of astrophysics, more reliable data will be obtained in the future.
As one can see from Table 1 (adapted from Schramm, 1995), not only has the precision of measurements increased with time, but even the objects of observation of elements and their presumed origins have changed considerably in the course of time. Nevertheless, the two sets of data shown in Fig. 1 suggest that one may expect no sweeping changes in the main features of the already measured abundances of the solar system elements. So, even if not definitive, these abundances can be used confidently to study and to understand the origin of elements. It is believed today that the elements we observe at present have been generated mainly by three different processes (Reeves 1994). The first one is primordial nucleosynthesis, i.e., via thermonuclear reactions in the first few minutes after the Big Bang and prior to the formation of stars (this concerns mainly D, 3H, 3He, 4He, 7Li, and perhaps some of the Be and B observed today; heavier elements could be produced by primordial nucleosynthesis, but were probably burned thereafter in nuclear reactions during the stellar era). The second mechanism, generating most of the observed nuclei, is nucleosynthesis in stars (most elements heavier than Li). A third contribution to nucleosynthesis comes from spallation reactions in the interstellar medium (a part of the observed Li, Be, B, and some heavier nuclides). By convention, nuclear reactions induced by ν can be included in the last group of nuclide production mechanisms as well (see, e.g., Ryan et al. 1999; Khokhlov 1999), although ν-process nucleosynthesis is every so often considered in the literature as a special mechanism (Woosley et al. 1990). Let us discuss briefly below all these processes in turn.

Big-Bang Nucleosynthesis (BBN)

According to modern concepts, at time t ≃ 15 s after the Big Bang the temperature of the Universe would have decreased to T ≃ 3 × 10^9 K, and nucleosynthesis would then begin through the synthesis of deuterium from protons:

p + p → D + e+ + νe . (1)

This would have been the end of the "radiative era," when radiation existed separately from matter as hadrons and leptons, and the beginning of the "nucleosynthesis era". Note that the binding energy of the nucleons in deuterium is very small, only 2.2 MeV, which corresponds to T ∼ 2.5 × 10^10 K. Therefore, at this stage, almost all deuterium produced is rapidly destroyed by high-energy photons, and further synthesis of heavier nuclei by means of the reactions D + D → 3H + p and 3H + D → n + 4He is not possible until the temperature of the Universe decreases to a value of T ∼ 10^9 K. With further decrease in temperature the photodisintegration of deuterons practically ceases and deuterons begin to accumulate. At the same time almost all of the neutrons are utilized in the creation of helium through reaction (4). By this time neutron decay would have shifted the neutron-proton balance to 13% neutrons and 87% protons (see, e.g., Fig. 3.13 by Tsipenyuk 1997). This moment of time corresponds approximately to the third minute after the Big Bang and to a temperature of ∼ 10^9 K. Besides reactions (1)-(4), there are other ways to get 3He and 4He from nucleons during the BBN. So, the following reactions are usually considered along with (1)-(4) to produce helium from hydrogen at the BBN stage: p + n → D + γ, n + 3He → 4He + γ, D + 3He → 4He + p, p + 3H → 4He + γ, D + D → 3He + n.
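As a worked consistency check (a sketch using the 13%/87% neutron-proton balance quoted above, together with the standard assumption that essentially all surviving neutrons end up bound in 4He):

\[
Y_p \simeq \frac{2\,(n/p)}{1+(n/p)} = \frac{2\times 0.149}{1.149} \approx 0.26 ,
\]

i.e., roughly a quarter of the baryonic mass is expected to emerge from the BBN as 4He, in agreement with the observed primordial helium mass fraction.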
Nuclei which are heavier than helium would not have been produced in significant quantities during this time interval, as there are no stable nuclei in nature with mass numbers 5 and 8. Therefore two gaps would have appeared and the synthesis of heavier nuclei would have stopped for some time. The gap at A = 5 is overcome, and the production of 7Li, 7Be, and 6Li together with their subsequent possible destruction proceeds through: 7Li + p → 4He + 4He, 7Li + D → 4He + 4He + n, 3He + 4He → 7Be + γ, 7Be + D → 4He + 4He + p, 6Li + n → 7Li + γ. The gap at A = 8 prevents primeval production of heavier isotopes in any significant quantities. Generally, it should be mentioned that different models assume different numbers of chains considered in BBN calculations. So, while some authors limit themselves to only the 12 most important reactions in their BBN calculations (see, e.g., Smith et al. 1993; Sarkar 1999), other recent works consider up to 22 possible chains (Lopez & Turner 1998), or even much more extended nuclear networks (see, e.g., Thomas et al. 1993), like the one shown in Fig. 3.

[Fig. 3: BBN nuclear network, from Thomas, Schramm, Olive, and Fields (1993), kindly supplied by Keith Olive.]

Usually, one tries to determine the primeval ratios of abundances of nuclei produced before the "star era" began, avoiding in observations regions where the remnant matter from the Big Bang was processed through stars. So, although all stars start on the main sequence and produce light elements in their interiors, it is believed that most of the interstellar helium observed today was already there when the galaxy was formed, i.e., most of it is primordial and not from stars (Salpeter 1999). One reason for this is that there is little mixing from a star's center to its surface (and usually little mixing between stars and interstellar gas); another reason is that much of the interior helium is processed into heavier elements before a star dies. The primordial abundances of 4He, D, 3He, 7Li, and other light elements measured in such a way are used further to fit the main parameters of the BBN. The "standard model" of big bang nucleosynthesis, in which it is assumed that the baryon distribution was uniform and homogeneous during that period, is described by only one parameter, η, the baryon-to-photon ratio, or by the baryon density, ρB, related to η by ρB = 6.88η × 10^-22 g cm^-3. In practice, the baryon density is usually expressed not directly in units of ρB but by a related parameter ΩBh^2, where ΩB is the baryon density in terms of the critical mass density, ρc: ΩB = ρB/ρc, where ρc = 1.88 × 10^-29 h^2 g cm^-3 and h is related to the Hubble constant, H0, by the relation H0 = 100h km s^-1 Mpc^-1 (see, e.g., Lang 1999). As one can see from Fig. 4 (Turner 1999), a reasonable agreement between the abundances of 4He, D, 3He, and 7Li predicted by the BBN and recent measurements may be achieved only in a narrow range of values for the baryon density, namely, ΩBh^2 = 0.019 ± 0.0024. When we go beyond the Standard Model, there is another fundamental parameter which affects the BBN abundances, namely, the number of massless neutrino species in the Universe, Nν, which affects the expansion temperature-time relation and hence the way in which nuclear reactions go out of thermal equilibrium.
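As a worked example of these relations (a sketch using the quoted best-fit value ΩBh^2 = 0.019):

\[
\rho_B = \Omega_B h^2 \times 1.88\times10^{-29}\ \mathrm{g\,cm^{-3}} \approx 3.6\times10^{-31}\ \mathrm{g\,cm^{-3}},
\qquad
\eta = \frac{\rho_B}{6.88\times10^{-22}\ \mathrm{g\,cm^{-3}}} \approx 5.2\times10^{-10},
\]

i.e., about one baryon per two billion photons.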
The presence of additional neutrino flavors (or of any other relativistic species) at the time of nucleosynthesis increases the energy density of the Universe and hence the expansion rate, leading to a larger value of the temperature at the freeze-out of the weak-interaction rates, Tf, to a larger value of the n/p ratio, and ultimately to a higher value of the primeval 4He abundance, Yp = 2(n/p)/[1 + (n/p)]. By means of a likelihood analysis on η and Nν based on 4He and 7Li, it was found that the 95% CL range is 1.7 ≤ Nν ≤ 4.3 (Cassco 1998). As one can see from Fig. 5, adapted from Copi, Schramm, & Turner (1997), a recent analysis of the deuterium abundance in high-redshift hydrogen clouds helps to sharpen this limit to Nν ≤ 3.4 (3.2) for Yp = 0.242, and to Nν ≤ 3.8 (4.0) for Yp = 0.252, which is in good agreement with the Standard Model's value of Nν = 3. This fact can be treated as one more confirmation of the dramatic success of the Big Bang model, which provides agreement with the observed element abundances only if the number of massless neutrino species is three, corresponding exactly to the three species (electron, muon, and tau) we know to exist. Besides the two fundamental parameters mentioned above, the BBN calculations involve also a number of "working" parameters, namely, the cross sections for the processes considered in the BBN nuclear networks. More exactly, astrophysics traditionally uses not nuclear cross sections directly but the so-called "nuclear reaction rates", derived from measured or evaluated cross sections of the relevant reactions convoluted with a thermal (Maxwell-Boltzmann) relative velocity distribution. Useful references on reaction rate works performed before 1993 can be found in Smith, Kawano, & Malaney (1993). The latest and most complete compilation of reaction rates involving light (1 ≤ Z ≤ 14), mostly stable, nuclei, called NACRE (Nuclear Astrophysics Compilation of REaction rates), has been published recently by a large consortium of European nuclear physics and astrophysics laboratories (Angulo et al. 1999), where further detailed references may be found (see the recent work by Vangioni-Flam, Coc, Casse, & Oberto (2000), where NACRE has already been used in an updated BBN model to study primordial abundances of light elements up to 11B). Once we have fixed the two fundamental parameters of the BBN, η and Nν, and have chosen the "working horses", the needed thermonuclear reaction rates, we can perform BBN calculations to study how the abundances of different light elements have changed with time (or temperature) after the big bang, as shown in Fig. 6, adapted from Burles, Nollett, & Turner (1999).

[Fig. 4 caption: Predicted abundances of 4He, D, 3He, and 7Li (number relative to hydrogen) as a function of the baryon density; widths of the curves indicate the "2σ" theoretical uncertainty. The dark band highlights the determination of the baryon density based upon the recent measurement of the primordial abundance of deuterium (Burles & Tytler, 1998a,b), ΩBh^2 = 0.019 ± 0.0024 (95% cl); the baryon density is related to the baryon-to-photon ratio by ρB = 6.88η × 10^-22 g cm^-3. From Turner 1999, with kind permission.]

[Fig. 5 caption: Likelihood limits on Nν, with (D/H)P = (2.5 ± 0.5) × 10^-5 (dashed-dotted line). In each case the 7Li abundance resulting in the least stringent limit to Nν was assumed. The fact that Nν = 3 is well within the 95% credibility interval is indicative of the consistency of big-bang nucleosynthesis with three massless neutrino species. From Copi, Schramm, and Turner, Phys. Rev. C55, 3389 (1997), with kind permission from Michael Turner.]
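To make the reaction-rate convolution mentioned above concrete, here is a minimal numerical sketch (Python with NumPy; a purely illustrative constant cross section stands in for a real evaluated σ(E)):

    # Thermally averaged rate per particle pair:
    #   <sigma v> = sqrt(8/(pi*mu)) * (kT)^(-3/2) * Integral sigma(E)*E*exp(-E/kT) dE
    import numpy as np

    def sigma_v(sigma, mu_kg, T_kelvin):
        k = 1.380649e-23                               # Boltzmann constant, J/K
        kT = k * T_kelvin
        E = np.linspace(kT * 1e-6, 200 * kT, 400001)   # energy grid, J
        dE = E[1] - E[0]
        integral = np.sum(sigma(E) * E * np.exp(-E / kT)) * dE
        return np.sqrt(8.0 / (np.pi * mu_kg)) * kT**-1.5 * integral

    # Toy example: constant sigma = 1 barn, reduced mass ~ m_p/2, T = 1e9 K.
    toy = sigma_v(lambda E: 1e-28 * np.ones_like(E), 0.5 * 1.6726e-27, 1e9)
    print(f"<sigma v> = {toy:.3e} m^3/s")

For a constant cross section the integral reduces to σ(kT)^2, so the result is just σ times the mean relative speed, a convenient check of the numerics.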
[Fig. 6: Evolution of light-element abundances after the big bang, adapted from Burles, Nollett, & Turner (1999), with kind permission from Kenneth Nollett.]

Our knowledge of the observed primordial abundances is still uncertain, allowing and involving different models for primordial nucleosynthesis. Though the standard cosmology tells us that all nuclei heavier than carbon were produced in stellar interiors after the galaxy formation, recent observations of the sub-giant CD -38° 245, one of the oldest first-generation stars, born a few million years after the galaxy formation, as well as recently observed absorption lines in quasi-stellar objects (QSO) with red-shift factor z = 2 (see, e.g., Kajino 1992), suggest that there was some production of medium and heavy elements (up to Ba) before or during the galaxy formation. At present, there are no definite measurements of the primordial abundances for carbon and heavier elements, therefore all such observations should be interpreted only as possible upper limits. At the end of this section, let us note one more point of interest in the context of the aim of the present paper. Even if the standard BBN explains the origin of the light elements D, 3He, 4He, and 7Li and their primordial abundances, it is hopelessly ineffective in generating 6Li, 9Be, 10B, and 11B (see, e.g., Vangioni-Flam, Cassé, & Audouze 1999). Due to their low binding energy, these nuclei are not produced significantly in the BBN or in stellar nuclear burning, and are, in fact, destroyed in stellar interiors. Instead, it is believed today that LiBeB are made mostly by spallation processes due to energetic nuclei and neutrinos (see Fields, Olive, Vangioni-Flam, & Olive 1999 and references therein). We will return to this question again in Section 5. Let us also mention that a new, good, and useful review on BBN nucleosynthesis and primordial abundances will be published shortly in Physica Scripta by Tytler, O'Meara, Suzuki, & Lubin (2000).

Nucleosynthesis in Stars

After the Big Bang, the story of nucleogenesis is concerned mostly with the physics of stellar evolution and nucleosynthesis in stars (see, e.g., McWilliam 1997). In the B2FH paper, the "bible" of nuclear astrophysics, eight separate processes were necessary to describe all features of the abundance curve known as of 1957: 1) Hydrogen Burning; 2) Helium Burning; 3) α Process; 4) e Process; 5) r Process; 6) p Process; 7) s Process; and 8) x Process. Today, 43 years later, nearly the same processes are still considered fundamental for stellar nucleosynthesis (Wallerstein et al. 1997). For completeness' sake, let us briefly list below, in turn, the processes shown in Fig. 1 which are believed today to be of main importance for stellar nucleosynthesis.

Hydrogen Burning

Hydrogen burning starts in stars with the proton-proton and deuteron-proton reactions (1) and (6) discussed in Section 3. Other reactions of the hydrogen burning chain, suggested and discussed half a century ago by many prominent physicists (see detailed references in Lang 1999), are (13) and (15), as well as: 3He + 3He → 4He + p + p, 8Be → 4He + 4He. The energy released in each of these and the other reactions discussed may be found in Lang 1999. For stars more massive than the Sun, hydrogen will be fused into helium by the fast C-N cycle, provided that carbon, nitrogen, or oxygen is present to act as a catalyst, the cycle closing with 15N + p → 12C + 4He.
Additional proton capture reactions may take place to form the complete C-N-O bi-cycle (see references in Lang 1999). It is possible that the CNO cycle produces most of the 14N found in nature. During supernova explosions, a rapid CNO cycle might take place in which (n,p) reactions replace the beta decays in the cycle.

Helium Burning

It is believed now that helium burning results in the production of approximately equal amounts of 12C and 16O in stars of masses from 0.5 to 50 M⊙. The reactions assigned to the triple alpha process, 4He + 4He + 4He → 12C + γ, are discussed in Lang (1999). Once 12C is formed, and with increasing temperature in the stellar core, 16O and other heavier nuclei up to the very stable, doubly magic 40Ca, or even a little further, will be produced by successive α-capture, ending with 36Ar + 4He → 40Ca + γ. As suggested by Cameron about half a century ago (see references in Cameron 1999 and Lang 1999), α-capture reactions on products of the C-N-O cycle might play the role of neutron producers in stars, e.g., 22Ne + 4He → 25Mg + n.

Carbon and Oxygen Burning

At the conditions of helium burning, the predominant nuclei are 12C and 16O (Lang 1999). When temperatures greater than 8 × 10^8 K are reached, carbon will begin to react with itself. At about 2 × 10^9 K, oxygen will also react with itself. The α particles, protons, and neutrons produced via reactions (52)-(61) will interact with the other products of the burning to form many other nuclides with 16 ≤ A ≤ 28. It is now thought that most of the carbon, oxygen, and silicon burning, which accounts for the observed solar system abundances for 20 ≤ A ≤ 64, occurs during fast explosions, and these explosive burning processes are discussed briefly in Section 4.7.

Silicon Burning

At the completion of carbon and oxygen burning, the most abundant nuclei will be 32S and 28Si, with a significant amount of 24Mg (Lang 1999). Because the binding energies for protons, neutrons, and α particles in 32S are smaller than those in 28Si, the nuclide 32S will be the first to photodisintegrate, through reactions such as 29Si + γ → 28Si + n. The resulting reactions will leave little but 28Si. Silicon will then begin to photodisintegrate at temperatures greater than 3 × 10^9 K, through reactions such as 28Si + γ → 24Mg + 4He. As the (γ, 4He) reaction has the lower threshold, it is the dominant reaction at low temperatures, T < 2 × 10^9 K, whereas the (γ, p) reaction has the shorter lifetime at higher temperatures. Further photodisintegrations lead to the build-up of lighter elements. The abundances of most of the nuclei in the range 28 ≤ A ≤ 60 are thought to be determined by equilibrium or quasi-equilibrium processes in which the importance of many individual reaction rates is diminished (see references in Lang 1999). Most nuclear species between 28Si and 59Co, except the neutron-rich species (36S, 40Ar, 43Ca, 46Ca, 48Ca, 51Ti, 54Cr, and 58Fe), are generated by a quasi-equilibrium process in which the only important thermonuclear reaction rates are thought to be those of 44Ca, 45Sc, and 45Ti (Lang 1999). The abundances of the neutron-rich species could be determined by the s or r processes discussed briefly below.
s, r, and p Processes

Because the binding energy per nucleon decreases with increasing A for nuclides beyond the iron peak (A ≥ 60), and because these elements have large Coulomb barriers, they are not likely to be formed by fusion or by alpha and proton capture (Lang 1999). It is thought that most of these elements are formed by neutron capture reactions which start with the iron group nuclei (Cr, Mn, Fe, and Ni). If the flux of neutrons is weak, most chains of neutron capture will include only a few captures before the beta decay of the product nucleus. As the neutron capture lifetime is then slower (s) than the beta decay lifetime, this type of neutron capture is called the s process. This process can continue all the way up to lead and bismuth; beyond bismuth the resulting nuclei alpha decay back to Pb and Tl isotopes (Wallerstein et al. 1997). Good reviews on laboratory measurements, stellar models, and abundance studies of the s-process elements may be found in Secs. X and XI of the recent comprehensive survey by Wallerstein et al. (1997) and in Käppeler (1999). When there is a strong neutron flux, as is believed to occur during a supernova explosion, the neutron-rich elements will be formed by the rapid (r) neutron capture process, in which the sequential neutron captures take place on a time scale much shorter than the beta decay of the resulting nuclei. This process produces the much more neutron-rich progenitors required to account for the second set of abundance peaks observed about 10 mass units below the s-process abundance peaks corresponding to the neutron magic numbers N = 50 and 82. We refer readers interested in more details on both the physics and the astrophysical scenario of rapid neutron capture to Sec. XII of the above-mentioned review by Wallerstein et al. (1997) and to a more recent and useful work by Cowan et al. (1999). The proton-rich medium and heavy elements are much less abundant than the elements thought to be produced by the r and s processes, and are thought to be formed by proton capture (the p process) at temperatures high enough to overcome the Coulomb barrier. Burbidge, Burbidge, Fowler, & Hoyle (1957) described in their "bible" two possible mechanisms by which p-nuclides could be formed: proton radiative captures, (p, γ), in a hot (T ∼ 2-3 × 10^9 K) proton-rich environment, or photon-induced n, p, and α-particle removal reactions, also in a hot environment. A possible occasion for this process is the passage of a supernova shock wave through the hydrogen outer layer of a pre-supernova star. The separate mechanisms believed today to contribute to p-process nucleosynthesis, as well as their strengths and weaknesses, are discussed in detail in Sec. XIV of the review by Wallerstein et al. (1997). It is believed today that some nucleosynthesis of the lighter p-nuclides is provided by the so-called rp process. The rp process is very similar to the r process, except that it proceeds by successive rapid proton captures and β+ decays. At present, it is believed that the rp process can provide contributions to the nucleosynthesis of proton-rich isotopes after the hot C-N-O cycle up through 65As, to as high as 68Se, or even to 96Ru (see details and references in Wallerstein et al. 1997).
Equilibrium Processes

Another type of process of nucleosynthesis in stars, discussed intensively in the literature since the pioneering work by Hoyle (1946) and reviewed in B2FH, are the equilibrium processes, called "e processes" in B2FH. Such processes are possible only if the matter is in equilibrium with the radiation, and if every nucleus is transformable into any other nucleus. Hoyle (1946) showed that matter is in equilibrium with radiation at temperatures T ≈ 10^9 K, and that all known nuclei may be transformed into any other nucleus by nuclear reactions at T ≳ 2 × 10^9 K. Though statistical equilibrium requires that the entropy of a system be at its maximum, which may be too strong a requirement, not fulfilled exactly for real systems (Lang 1999; Wallerstein et al. 1997), this method has proved very successful for the description of the abundances of nuclei in the iron group and around it (28 ≤ A ≤ 60) (see detailed references in Lang 1999). What is more, if one assumes thermodynamic equilibration in a star, then its composition (elemental abundances) may be calculated without determining individual reaction rates, and only the binding energies and partition functions of the various nuclear species need to be specified. Under conditions of statistical equilibrium, the number density, Ni, of particles of the ith kind is given by an expression (Lang 1999) in which V is the volume, μi is the chemical potential of the ith particle, the plus and minus signs refer to Fermi-Dirac and Bose-Einstein statistics, respectively, and the summation is over all energies, εir, which include both internal energy levels and the kinetic energy. If an internal level has spin J, then 2J + 1 states of the same energy must be included in the sum. When the nuclides are non-degenerate and non-relativistic, Maxwellian statistics can be employed (Lang 1999), with p the particle momentum, Mi the particle mass, and the partition function ωi = Σr (2Jr + 1) exp(−εr/kT), where εr here refers to internal states only. For particles pi, pj, ... which react with one another, the chemical potentials are related by a corresponding equilibrium condition. Hoyle (1946) and Burbidge, Burbidge, Fowler, & Hoyle (1957) considered the condition of statistical equilibrium between the nuclei, (A, Z), and free protons, p, and neutrons, n. For a nucleus there are Z protons and (A − Z) neutrons, and the statistical weight of both protons and neutrons is two. It then follows from Eqs. (73) to (76) that for equilibrium between nuclides, protons, and neutrons, the number density, N(A, Z), of the nucleus (A, Z) is given by Eq. (77), in which the partition function of the nucleus is ω(A, Z) = Σr (2Ir + 1) exp(−Er/kT), with Ir and Er, respectively, the spin and energy of the rth excited level; the binding energy of the nucleus is Q(A, Z) = c^2 [Z Mp + (A − Z) Mn − M(A, Z)], where Mn, Mp, and M(A, Z) are, respectively, the masses of the free neutron, the free proton, and the nucleus (A, Z); and a numerical factor enters in which T9 = T/10^9, the atomic mass unit is Mμ, and Nn and Np denote, respectively, the number densities of free neutrons and protons. As one can see, Eq. (77) indeed contains only the binding energy Q(A, Z) and does not require any cross sections or nuclear rates. Further details, more references, and newer and more general notions on equilibrium processes may be found, e.g., in Lang 1999 and Wallerstein et al. 1997.
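The displayed form of Eq. (77) did not survive reproduction here; for orientation, a standard form consistent with the definitions above (as given in texts such as Lang 1999) reads:

\[
N(A,Z) \;=\; \omega(A,Z)\,\frac{A^{3/2}}{2^{A}}\;\theta^{\,1-A}\,N_p^{Z}\,N_n^{A-Z}\,
\exp\!\left[\frac{Q(A,Z)}{kT}\right],
\qquad
\theta \equiv \left(\frac{2\pi M_\mu kT}{h^{2}}\right)^{3/2} \approx 5.94\times10^{33}\,T_9^{3/2}\ \mathrm{cm^{-3}} .
\]

A quick check: for A = 1 the expression correctly reduces to N = Nn or Np, since ω = 2 and θ^0 = 1.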
There is a process allied to equilibrium nucleosynthesis, the so-called "quasi-equilibrium" process, in which the total number of nuclei in different ranges of atomic number or mass number might be slowly varying, and we may see only a quasi-equilibrium between nuclides of some separate groups, but not between different groups. So, Michaud and Fowler (1972) showed that with an initial neutron enhancement of 4 × 10^-3 the natural abundances for nuclei with 28 ≲ A ≲ 59 may be accounted for by quasi-equilibrium burning. In this case, a quasi-equilibrium between elements with 24 ≤ A ≤ 44 and a separate equilibrium for elements with 46 ≤ A ≤ 60 is assumed, and detailed nuclear reactions are given for the "bottleneck" at A = 45 (see further details and references on quasi-equilibrium processes in Lang 1999 and Wallerstein et al. 1997). This quasi-equilibrium silicon burning process must have taken place in a short time, t ≲ 1 sec, and at high temperatures, T ≳ 4.5 × 10^9 K, suggesting the explosive burning processes discussed briefly in the next subsection.

Explosive Burning Processes

As explained by Burbidge, Burbidge, Fowler, and Hoyle (1957), the successive cycles of static nuclear burning and contraction, which successfully account for much of stellar evolution, must end when the available nuclear fuel is exhausted (Lang 1999). B2FH showed that the unopposed action of gravity in a helium-exhausted stellar core leads to violent instabilities and to rapid thermonuclear reactions in the stellar envelope. Later, Arnett (1968) showed that when cooling by neutrino emission in a highly degenerate gas is considered, the 12C + 12C reaction will ignite explosively at a core density of about 2 × 10^9 g cm^-3. The stellar material is instantaneously heated and then expands adiabatically, so that the density, ρ, and temperature, T, are related by a Γ3 = 4/3 adiabat with a time variable, t. The appropriate time is the hydrodynamic time scale, τHD, given by (Lang 1999) τHD ≈ 446 ρ^-1/2 sec. The initial temperature and density must be such that the mean lifetime, τR, for a nucleus undergoing an explosive reaction, R, is close to τHD. For the interaction of nucleus 1 with a nucleus 2, the mean lifetime of nucleus 1 follows from τ1^-1 = ρ (X2/A2) NA<σv>, where ρ is the mass density; X2, A2, and N2 are, respectively, the mass fraction, mass number, and number density of nucleus 2; and NA<σv> is the reaction rate. A mean carbon nucleus lifetime, log10 τ(12C) ≈ 37.4 T9^-1/3 − 25.0 − log10 ρ ≈ log10 τHD, was used to determine the initial condition of explosive carbon burning (a quick numerical check of this condition is sketched below). Knowing the reaction rates, Eqs. (81) and (84) allow us to calculate expected abundances using the corresponding abundance equations discussed briefly below. Abundance ratios which closely approximate those of the solar system were found for 20Ne, 23Na, 24Mg, 25Mg, 26Mg, 27Al, 29Si, and 30Si, when it was assumed that a previous epoch of helium burning produced equal amounts of 12C and 16O, and that Tp = 2 × 10^9 K, ρp = 1 × 10^5 g cm^-3, and η = 0.002 (85). Here Tp and ρp denote, respectively, the peak values of temperature and mass density in the shell under consideration, and the neutron excess is η = (Nn − Np)/(Nn + Np), where Nn and Np denote, respectively, the number densities of neutrons and protons (Lang 1999). Similarly, many works by different authors were dedicated to the study of explosive oxygen and silicon burning.
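As promised, a minimal numerical sketch (Python) of the explosive carbon-ignition condition quoted above, solving log10 τ(12C) ≈ log10 τHD for T9 at a given density:

    # Solve 37.4*T9**(-1/3) - 25.0 - log10(rho) = log10(tau_HD) for T9.
    import math

    rho = 1e5                            # g cm^-3, the peak density quoted above
    tau_HD = 446.0 * rho ** -0.5         # hydrodynamic time scale, s (~1.4 s)
    T9 = (37.4 / (25.0 + math.log10(rho) + math.log10(tau_HD))) ** 3
    print(f"tau_HD = {tau_HD:.2f} s, ignition T9 = {T9:.2f}")  # T9 ~ 1.9

The result, T9 ≈ 1.9, is consistent with the peak temperature Tp = 2 × 10^9 K quoted in the text.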
Useful references and more details on explosive nucleosynthesis may be found in the comprehensive monograph by Lang (1999) and in the reviews by Arnett (1995) and Woosley and Weaver (1995). In the general case, the equation governing the change in the number density, N(A, Z), of the nucleus (A, Z) has the creation-minus-destruction form (Lang 1999)

dN(A,Z)/dt = Σij Ni Nj <σv>ij − N(A,Z) Σk Nk <σv>(A,Z)k ,

where Ni is the number density of the ith species, <σv>ij is the product of cross section and relative velocity for an interaction involving species i and j, NmNn is replaced by Nn^2/2 for identical particles, and the summations run over all reactions which either create or destroy the species of interest. The probabilistic interpretation of this equation is obvious: the number density Ni of the species i at a given time t is built up by all processes resulting in the species of interest, minus the contribution of all processes destroying this species. In practice, for numerical calculations, instead of Ni one usually uses the parameter Yi = Ni/(ρ NA) (Lang 1999), where ρ is the mass density of the gas under consideration and NA is Avogadro's number. Eq. (87) can then be rewritten in terms of the vector flow, fij, which contains nuclei i and j in the entrance channel. The explicit form of this equation in a concrete calculation depends on the processes we wish to take into account. So, in the general case, when we take into account negative and positive β-decays, electron and neutron captures, alpha decay, photodisintegration, as well as all possible reactions between two interacting nuclei, Eq. (89) acquires terms (Lang 1999) in which the symbol λ denotes the decay rate or the inverse mean lifetime; the subscripts β−, β+, K, α, and γ denote, respectively, negative beta decay, positive beta decay, electron capture, alpha decay, and photodisintegration; σT is the cross section for neutron capture in cm^2; Nn is the number density of neutrons; the summation over j denotes all reactions between the nucleus (A, Z) and any other nucleus; the summation over ik denotes all reactions between two nuclei which have (A, Z) as a product; ρ is the gas mass density; and NA<σv> is the reaction rate. Numerical solution of such a complex set of nuclear reaction networks requires a number of approximations and assumptions. Details on the abundance equations for the s, r, equilibrium, and quasi-equilibrium processes, as well as useful references, can be found in Lang (1999).

Li-Be-B Generation and Spallation Processes

The rare light nuclei observed today, lithium, beryllium, and boron, are not products of the BBN or stellar nucleosynthesis, and are, in fact, destroyed in hot stellar interiors. This condition is reflected in the comparatively low abundances of these nuclei (see Figs. 1 and 2). The primordial 6Li, 9Be, 10B, and 11B abundances calculated using the best of the reaction-rate evaluations available today are many orders of magnitude below that of 7Li, so the standard BBN is ineffective in generating 6Li, 9Be, 10B, and 11B, in contradiction with measurements (Vangioni-Flam, Cassé, & Audouze 1999). Until recently, the most plausible formation agents of LiBeB were thought to be Galactic Cosmic Ray (GCR) interactions with the interstellar medium (ISM), mainly with C, N, and O nuclei. (The most abundant and energetic cosmic-ray particles are protons and α-particles.)
Other possible origins have also been identified: primordial and stellar (7Li) and supernova neutrino spallation (for 7Li and 11B), while 6Li, 9Be, and 10B are thought to be pure spallation products (Vangioni-Flam, Cassé, & Audouze 1999). Recent measurements in a few halo stars with the 10-meter KECK telescope and the Hubble Space Telescope indicate a quasi-linear correlation of Be and B vs. Fe, at least at low metallicity, contradictory at first sight to a dominating GCR origin of the light elements, which predicts a quadratic relationship (see the appendix in Vangioni-Flam, Cassé, & Audouze 1999). As a consequence, the theory of the origin and evolution of the LiBeB nuclei has yet to be reassessed. This linearity came as a surprise, since a quadratic relation was expected from the GCR mechanism, whereas a supernova origin would lead naturally to a slope of 1. This was a strong indication that the standard GCRs are not the main producers of Li-Be-B in the early Galaxy (Vangioni-Flam, Cassé, & Audouze 1999). Concerning lithium, as one can see from Fig. 7, the flat portion of the lithium abundance, usually referred to as the Spite plateau (after the original work of Francois and Monique Spite in 1982), extends up to [Fe/H] ∼ −1. It is believed to represent the abundance of Li generated by BBN nucleosynthesis. Beyond this point, Li/H increases strongly up to its solar value of 2 × 10^−9. This increase in the Li/H ratio is believed to be related to nucleosynthesis in a variety of Galactic objects, including Type II supernovae, novae, and giant stars, as well as to production by cosmic rays (Ramaty, Kozlovsky, & Lingenfelter 1998). A stringent constraint on any theory of Li evolution arises from this form of the Li/H curve: it must avoid crossing the Spite plateau below [Fe/H] = −1. Accordingly, the Li/Be production ratio should be less than about 100 (Vangioni-Flam, Cassé, & Audouze 1999). Galactic cosmic rays represent the only sample of matter originating from beyond the Solar System. They consist of bare nuclei stripped of their electrons. Their energy density (about 1 eV cm^−3, similar to that of stellar light and of the galactic magnetic field) indicates that they are an important component in the dynamics of the Galaxy (Vangioni-Flam, Cassé, & Audouze 1999). A key point for us is that, as can be seen from Fig. 2, GCRs are exceptionally LiBeB-rich (LiBeB/CNO ∼ 0.25) compared to Solar System matter (LiBeB/CNO ∼ 10^−6). For detailed calculations of LiBeB production by the GCR mechanism, the formation rate of a light isotope (i.e., Li, Be, or B, denoted here L) from the spallation of a medium isotope (e.g., 12C, 14N, 16O, and 20Ne, denoted M) by a flux of protons with energy spectrum ϕ(E) is given by (Lang 1999)

dN_L/dt = Σ_M N_M ∫ ϕ(E) σ(M, L, E) dE,   (93)

where M denotes any of the "medium" elements and L any of the "light" nuclei, the number densities of the M and L elements are, respectively, N_M and N_L, the time variable is t, the proton energy is E, and the spallation reaction cross section is σ(M, L, E). For the low-energy cosmic (LEC) rays mechanism, where LiBeB are produced by low-energy (less than 100 MeV/A) interactions of "medium" nuclei with interstellar H and 4He, a similar formula applies. The main difference is in the energy dependence of the projectile fluxes, while the values of the cross sections are the same for identical bombarding energies per nucleon.
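A minimal numerical sketch of Eq. (93) for a single "medium" species, with an assumed power-law proton spectrum and a flat cross section above threshold (all numbers are illustrative only, not taken from the cited works):

```python
import numpy as np

E = np.logspace(1, 4, 400)                 # proton energy grid, MeV
phi = 10.0 * E ** -2.7                     # assumed GCR spectrum, cm^-2 s^-1 MeV^-1
sigma = np.where(E > 30.0, 10.0e-27, 0.0)  # assumed 10 mb above a 30 MeV threshold, cm^2
N_M = 1.0e-4                               # assumed "medium" (e.g., 12C) density, cm^-3

# dN_L/dt = N_M * integral of phi(E) * sigma(M, L, E) dE for one target species.
rate = N_M * np.trapz(phi * sigma, E)
print(f"dN_L/dt ~ {rate:.3e} light nuclei cm^-3 s^-1")
```

In a full calculation the integral is summed over all medium species and over proton and α-particle projectiles, with measured or evaluated cross sections in place of the flat step used here.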
For spallative nucleosynthesis calculations, cross sections at energies from ∼1 MeV to ∼100 GeV are required, in contrast with stellar nucleosynthesis, which occurs at low energies, from ∼1 keV to ∼100 keV. Current assumptions about the energy dependences of the GCR and LEC fluxes, as well as further interesting points of the LiBeB story, may be found in Vangioni-Flam, Cassé, & Audouze (1999) and in the proceedings of the recent special conference on LiBeB held in December 1998 in Paris (Ramaty, Vangioni-Flam, Cassé, & Olive 1999). Spallation cross sections are discussed in the next section. According to modern concepts (see, e.g., Woosley et al. 1990; Woosley & Weaver 1995), neutrino spallation (NS) is also a source of 7Li and 11B via the interaction of neutrinos (predominantly νμ and ντ) with nuclei, specifically with 4He and 12C (Vangioni-Flam, Cassé, & Audouze 1999). Recently, ν-process nucleosynthesis was incorporated into a model of galactic chemical evolution (Olive et al. 1994 and Vangioni-Flam et al. 1996, which also included the LEC component) with the primary purpose of augmenting the low value of 11B/10B produced by standard GCR nucleosynthesis. To fit the observed ratio of 4, it was found that the NS yields predicted by Woosley & Weaver (1995) had to be turned down by a factor of about 2 to 5 to avoid the overproduction of 11B. Turning down the NS yields also ensured that the production of 7Li was insignificant, in accordance with the Spite plateau (Vangioni-Flam, Cassé, & Audouze 1999). Note that if the full NS yield were taken, all galactic boron would be produced by ν spallation. This would be a problem, since 9Be would not be coproduced and 7Li would be overproduced. Thus, the NS mechanism acts as a complement to nuclear spallation at a level estimated at most at 20 percent for 11B, if one wants to fulfil the observational constraints on LiBeB discussed by Vangioni-Flam et al. (1999). An example of the contributions from primordial, galactic cosmic ray, and ν-process nucleosynthesis to the total Li abundance, as calculated by Ryan, Beers, Olive, Fields, & Norris (1999), is shown in Fig. 9. Although these results were obtained with fitting parameters (and are therefore not completely definitive), they may help us to understand the relative roles of the different production mechanisms of the light elements. One can see that the primordial contribution to the Li abundance decreases at high metallicity due to astration, while the other components increase with metallicity, as discussed by Ryan, Beers, Olive, Fields, & Norris (1999). The main conclusion from these results, as well as from recent works by other authors (see details and references in Ryan, Beers, Olive, Fields, & Norris 1999), is that LiBeB evolution may be understood only if we take into account a combination of BBN, cosmic-ray, and ν-process nucleosynthesis, though the ν-process scenario seems not to play a major role.

Fig. 9.-Contributions to the total lithium abundance from the different reaction mechanisms shown on the plot, as predicted by the one-zone (closed box) GCE model, compared with available experimental data for low-metallicity and high-metallicity stars (Ryan, Beers, Olive, Fields, & Norris 1999). The solid curve is the sum of all components; 6Li is thought to be produced only by spallation reactions (Fields & Olive 1998). This figure is taken, with the kind permission of the authors, from Ryan, Beers, Olive, Fields, & Norris (1999), where further details may be found.
The ν-process may also contribute to the production of some other of the lowest-abundance p-nuclei, like 11B and 19F (Boyd 1999). Generally, the process is thought to occur in the neutrino wind generated by stellar collapse in supernovae. The nuclides synthesized clearly depend on the shell in which the ν-process occurs. For example, 11B and 19F would be expected to be made in shells in which the dominant constituents are 12C and 20Ne, respectively, both by processes in which a neutrino excites the target nucleus via the neutral-current interaction (Boyd 1999). The ν-process could also make two of the rarest stable nuclides in the periodic table: 138La and 180Ta (Boyd 1999). The latter would be made by the 181Ta(ν, n)180Ta (neutral-current) reaction, which appears to produce an abundance consistent with what is observed. Similarly, the 139La(ν, n)138La (neutral-current) reaction, together with the 138Ba(ν, e)138La (charged-current) reaction, appears capable of synthesizing roughly the observed 138La abundance. Thus, the ν-process seems to provide a natural mechanism for the synthesis of 138La and 180Ta, which had evaded description for several decades, as well as of some other nuclides (Boyd 1999). However, it should be noted that such results are somewhat uncertain due to questions about the neutrino spectrum resulting from a Type II supernova, and many questions on neutrino processes remain to be solved (see, e.g., Woosley, Hartmann, Hoffman, & Haxton 1990, Boyd 1999, Ginzburg 1999, Henley & Schiffer 1999, Lang 1999, Khlopov 1999, Turner & Tyson 1999, Wolfenstein 1999, and references therein).

Spallation Cross Sections

Precise nuclear spallation cross sections are needed in astrophysics not only to calculate the abundances of light elements with formulas of the type (93), but also for many other tasks. For example, it is believed today that low-energy cosmic ray interactions with the ISM are responsible not only for part of the LiBeB production discussed in the previous section, but also for the production of some of the now-extinct radioisotopes that existed at the time of the formation of the solar system and have recently been found in meteorites, such as 26Al, 41Ca, and 53Mn (see Ramaty, Kozlovsky, & Lingenfelter 1996a and references therein). To estimate the abundances of these extinct radioisotopes in the solar system, one uses formulas similar to (93), and one needs reliable cross sections for the interaction of a variety of nuclei from the LEC with H and 4He, the most abundant constituents of the ambient medium (see details in Ramaty, Kozlovsky, & Lingenfelter 1996a). Many more spallation cross sections are needed for meteorite studies beyond those related to extinct radioisotope production. In recent years, a large number of meteorites have been found on Antarctic ice fields and in hot deserts, in particular the Sahara. These meteorite finds have increased interest in the investigation of cosmogenic nuclides. Besides direct measurements of the radionuclide composition performed for some of the found meteorites, such investigations usually involve theoretical calculations of the production rates of cosmogenic nuclides in meteoroids by folding depth- and size-dependent spectra of primary and secondary cosmic-ray particles with the cross sections of the underlying reactions. The quality and reliability of the calculated production rates depend exclusively on the accuracy of the available spallation cross sections.
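Schematically, the folding of depth-dependent particle spectra with reaction cross sections described above amounts to an energy integral at each depth; a toy Python sketch with assumed flux, cross-section, and target-density values (none taken from the cited works) is:

```python
import numpy as np

E = np.logspace(1, 3, 200)                 # particle energy grid, MeV
depth = np.linspace(0.0, 200.0, 50)        # shielding depth, g cm^-2 (assumed)

# Assumed primary + secondary flux: a power law attenuated with depth.
flux = 5.0 * E[None, :] ** -2.0 * np.exp(-depth[:, None] / 120.0)
sigma = np.where(E > 50.0, 20.0e-27, 0.0)  # assumed 20 mb above a 50 MeV threshold, cm^2
n_target = 1.0e22                          # assumed target nuclei per gram of meteoroid

# Production-rate profile: fold the spectrum with the cross section at each depth.
profile = n_target * np.trapz(flux * sigma[None, :], E, axis=1)  # atoms g^-1 s^-1
print(profile[:5])
```

The calculations cited in the text sum such integrals over all relevant target nuclides, reaction channels, and particle species, with realistic depth- and size-dependent spectra.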
Serious progress in the interpretation of cosmogenic nuclide production in meteorites by galactic and solar cosmic rays, and in the understanding of the cosmic radiation itself, was achieved in recent years by the group of Prof. Rolf Michel at Hannover (see, e.g., Michel, Leya, & Borges 1996, Gilabert et al. 1998, Weigel 1999, and references therein). Another problem in cosmogenic nuclide production studies requiring reliable spallation cross sections for interactions of protons and alphas up to about 200 MeV with a variety of target nuclei is the investigation of the solar cosmic ray (SCR) exposure of lunar surface material, as well as of the Earth's atmosphere (see, e.g., Bodemann et al. 1993 and references therein). The survey by Reedy and Marti (1990) may serve as a good review of this subject and a source of further references. As mentioned by Tsao, Barghouty, & Silberberg (1999), it is believed today that the elements Li, Be, B, Cl, K, Sc, Ti, V, Mn and much of the N, Al, and P in cosmic rays (see Fig. 2) are produced by nuclear spallation of the more abundant elements of the cosmic-ray source component, i.e., C, O, Ne, Mg, Si, Ca, and Fe. Studies of the composition, propagation, and origin of galactic cosmic rays are still to a large degree model dependent, and the conclusions drawn from such works depend essentially on the nuclear cross sections used in the calculations; therefore, estimates of the relevant cross sections that are as precise as possible are needed. Let us mention just one more particular problem in astrophysics requiring reliable cross sections. Recently, the gamma-ray line at 0.511 MeV has been observed from a variety of astrophysical sites, including solar flares (see references in Kozlovsky, Lingenfelter, & Ramaty 1987). It is thought that this line is due to the annihilation of positrons on electrons (e+ + e− → γ + γ; for an electron-positron pair at rest, each photon carries an energy of m_e c^2 = 0.511 MeV), the positrons coming from the decay of radioactive nuclei and pions. One possible source of positron emitters in a solar flare is the interaction of particles accelerated in the flare with the ambient solar atmosphere. To estimate the annihilation of positrons from such radioactive nuclei, one needs to know a great variety of proton- and α-induced spallation cross sections for the production of such positron emitters (see details in Kozlovsky, Lingenfelter, & Ramaty 1987). The list of astrophysical tasks requiring reliable cross sections could be continued much further. As mentioned recently by Waddington (1999), it appears that the most serious limitation in deducing the abundances of energetic nuclei in the cosmic radiation arises not from our lack of astrophysical measurements and observations, but precisely from our lack of the appropriate nuclear cross sections. Let us also note that such spallation cross sections are of great importance both for fundamental nuclear physics and for many nuclear applications, e.g., for accelerator transmutation of waste (ATW), accelerator-based conversion (ABC), accelerator-driven energy production (ADEP), accelerator production of tritium (APT), for the optimization of the commercial production of radioisotopes used in medicine, mining, and industry, for solving problems of radiation protection of cosmonauts, aviators, and workers at nuclear facilities, and for modeling radiation damage to computer chips, etc. (see details and references, e.g., in Mashnik, Sierk, Bersillon, & Gabriel 1997).
In the following subsections, we present a short survey of the available experimental, calculated, and evaluated spallation cross sections for astrophysics and other fields, together with our thoughts on how the present status of this problem might be improved.

Experimental Data

Cosmic rays consist of all the elements in the periodic table, up to uranium; therefore, reactions induced by any projectile are of interest for astrophysics. However, since hydrogen is the dominant element, followed by helium, spallation cross sections from reactions induced by protons and alphas are of first priority, though we also note the importance of nucleus-nucleus reactions for many astrophysics problems, as surveyed recently by Tsao, Barghouty, & Silberberg (1999). Thousands of measurements of spallation cross sections relevant to astrophysics (mainly proton-induced) have been performed over the last half-century. A good survey of experimental cross sections for proton-induced spallation reactions measured before 1966 was made by Bernas, Gradsztajn, Reeves, & Schatzman (1967) and included thereafter in chapters by Audouze, Epherre, and co-authors (Chapter 9) and by Gradsztajn (Chapter 8) of the well-known book High-Energy Nuclear Reactions in Astrophysics, edited by B. S. P. Shen and published by W. A. Benjamin, Inc. in 1967 in New York. In a way, this survey was like a "bible" of nuclear cross sections in astrophysics, as it was widely known and used, to our knowledge essentially without question, in almost all astrophysical simulations until very recent years. A short but comprehensive review of the experimental results obtained by 1976 may be found in Hudis (1976). Another paper well known in astrophysics, serving as a survey of both proton- and alpha-induced experimental spallation cross sections, was published 11 years later (only figures, as a by-product) by Kozlovsky, Lingenfelter, & Ramaty (1987). The last published short astrophysical survey of spallation cross-section measurements was, to our knowledge, the work by Tsao, Barghouty, & Silberberg (1999). Meanwhile, many other reliable measurements have been performed that are not covered by these compilations, and, as one can see from Fig. 10, not all of the old cross sections agree well with the new data. Nuclear physicists have also made many efforts to compile experimental spallation cross sections from proton- and heavier-projectile-induced reactions. Very good and comprehensive reviews of experimental excitation functions from proton-, deuteron-, and alpha-induced reactions on a number of light and medium target nuclei from carbon to chlorine, as well as on Cu and Au, were published by Tobailem and co-authors from 1971 to 1983 at CEA, Saclay, France, in the convenient form of reports (in French) with tables and figures (Tobailem et al. 1971, 1972, 1975, 1977, 1981a, 1981b, 1982, and 1983). But to the best of our knowledge, the most complete compilation (ever published in any field of nuclear cross-section data) was performed by Sobolevsky and co-authors at INR, Moscow, Russia, and was published by Springer-Verlag from 1991 to 1996 in eight separate subvolumes (Sobolevsky et al. 1991, 1992, 1993, 1994a, 1994b, 1995, 1996a, and 1996b).
Sobolevsky and co-authors performed a major work, compiling all the data available to them for target elements from helium to the transuranics over the entire energy range from thresholds up to the highest energies measured. For example, for proton-induced reactions, this compilation contains about 37,000 data points published in the first four subvolumes, I/13a-d (the following subvolumes, I/13e-h, concern pion-, antiproton-, deuteron-, triton-, 3He-, and alpha-induced reactions). This rich compilation is also currently available in an electronic version as an IBM PC code named NUCLEX, published only a month ago by Springer-Verlag in a hardcover format accompanied by a CD-ROM with the NUCLEX code, as the ninth subvolume of this series and a supplement to the previous eight subvolumes (Sobolevsky et al. 2000; see a detailed description of NUCLEX in Ivanov, Sobolevsky, & Semenov 1998). Due to the increasing interest in intermediate-energy data for ATW, ABC, ADEP, APT, astrophysics, and other applications, precise and voluminous measurements of proton-induced spallation cross sections have been performed recently, and are presently in progress, by the group of Prof. Michel at Hannover University (see, e.g., Michel et al. 1997, Michel, Leya, & Borges 1996, Gilabert et al. 1998, and the Web page http://sun1.rrzn-user.uni-hannover.de/zsr/survey.htm#url=overview.htm), by Yu. E. Titarenko et al. at ITEP, Moscow (Titarenko 1999a, 1999b, and references therein), Yu. V. Aleksandrov et al. at JINR, Dubna (Aleksandrov et al. 1995), Venikov and co-workers (Venikov, Novikov, & Sebiakin 1993), A. S. Danagulyan et al. at JINR, Dubna (Danagulyan et al. 2000 and references therein), H. Vonach et al. at LANL, Los Alamos (Vonach et al. 1997), S. Sudar and S. M. Qaim at KFA, Jülich (Sudar & Qaim 1994), D. W. Bardayan et al. at LBNL, Berkeley (Bardayan et al. 1997), J. M. Sisterson et al. at TRIUMF and other accelerators (Sisterson et al. 1997), etc. Finally, we note another, "new" type of nuclear reaction intensively studied in recent years, which provides irreplaceable data both for nuclear astrophysics and for nuclear physics itself. These are reactions studied in reverse kinematics, in which relativistic ions interact with hydrogen targets; they often provide the only way to obtain reliable data for the interaction of intermediate-energy protons with separate isotopes of an element with a complex natural isotopic composition. Good data of this type have recently been obtained, e.g., by W. R. Webber et al. at the LBL Bevalac (Webber, Kish, & Schrier 1990, Chen 1997, and references therein) and by L. Tassan-Got et al. at GSI, Darmstadt (Tassan-Got et al. 1998; Wlazlo et al. 2000). Further references on several more such "new" types of measurements, as well as on recent spallation cross sections from nucleus-nucleus interactions, may be found in Silberberg, Tsao, & Barghouty (1998) and Tsao, Barghouty, & Silberberg (1999). These new data, as well as a number of other new and old measurements, have not been covered by NUCLEX. Let us note that for our own needs, we have also compiled, in the T-2 Group at LANL, an experimental data library of spallation cross sections, referred to below as the LANL T-2 Library (Mashnik, Sierk, Van Riper & Wilson 1998). Our library covers only proton-induced reactions and has been completed so far for only 33 target elements: C, N, O, F, Ne, Na, Mg, Al, P, S, Cl, Ar, K, Ca, Fe, Co, Zn, Ga, Ge, As, Y, Zr, Nb, Mo, Sn, Xe, Cs, Ba, La, Ir, Au, Hg, and Bi.
But for the 91 targets (separate isotopes or natural composition) of these elements, our library is the most complete, as far as we know, containing 23,439 data points covering 2,562 reactions, in comparison with NUCLEX, which has only 13,703 data points and 1,594 reactions for the same 33 elements. For these elements, we have also produced a calculated cross-section library for both proton- and neutron-induced reactions up to 5 GeV, as well as an evaluated library, discussed briefly in the next subsection. In developing our experimental LANL T-2 library, we did not confine ourselves solely to NUCLEX as a source of experimental cross sections; instead, we compiled all available data for the targets in which we are interested, searching first the World Wide Web and then any other sources available to us, including the compilation from NUCLEX. We have also begun to store in our library data for intermediate-energy neutron-induced reactions, but so far we have only 95 data points for Bi and C targets, covering 14 reactions induced by fast neutrons. (Extensive neutron-induced experimental and evaluated activation libraries at energies below 150 MeV have been produced, validated, and used by many authors; see, e.g., Muir & Koning (1997), Korovin et al. (1999), Chadwick et al. (1999), Fessler et al. (2000), and references therein.) Our library is still in progress; we update it permanently as new data for our elements become available, and we hope to extend it, depending on our needs, and to make it publicly available through the Web. Note that many data (especially recent ones) on experimental spallation cross sections are already included in the Experimental Nuclear Reaction Data Retrieval (EXFOR) compilation, available to users on the Web through the international nuclear data banks (see, e.g., the Web page of the NEA/OECD, Paris, at http://www.nea.fr/html/dbdata/dbexfor/html). From our point of view, it would be useful for the astrophysical community to merge the NUCLEX data library (Sobolevsky et al. 1991-2000), our LANL T-2 compilation (Mashnik, Sierk, Van Riper, & Wilson 1998), and the data permanently updated in the EXFOR database with the already existing data libraries considered by the Nuclear Astrophysics Data Effort Steering Committee (Smith, Cecil, Firestone, Hale, Larson, & Resler 1996) as Nuclear Data Resources for Nuclear Astrophysics, namely CSIRS (the Cross Section Information Storage and Retrieval System), ECSIL (the LLNL Experimental Cross Section Information Library), and ECSIL2 (a LANL/LLNL extension of ECSIL), and to make this information available through the recent powerful NASA Astrophysical Data System (Kurtz, Eichhorn, Accomazzi, Grant, Murray, & Watson 2000).

Calculated and Evaluated Cross Sections

Experiments to measure all the data necessary for astrophysics and other fields are costly, and there are a limited number of facilities available to make such measurements (Blann et al. 1994; Nagel et al. 1995). In addition, most measurements have been performed on targets with the natural isotopic composition of a given element and, what is more, often only cumulative yields of residual product nuclei are measured. In contrast, for astrophysical simulations and other applications, as well as for studying the physics of nuclear reactions, independent yields obtained for isotopically separated targets are needed.
Furthermore, only some 80-100 cross-section values of residual product nuclei are normally determined by the γ-spectrometry method in experiments with heavy nuclei, whereas, according to calculations, over 1000 residual product nuclei are actually produced. Therefore, it turns out that reliable theoretical calculations are required to provide the necessary cross sections (Blann et al. 1994; Nagel et al. 1995; Koning 1993). In some cases, it is more convenient to have fast-computing semiempirical systematics for various characteristics of nuclear reactions instead of using time-consuming, more sophisticated nuclear models. Therefore, to our knowledge, most astrophysical simulations use predictions of different semiempirical systematics (see, e.g., Barghouty 1998, Tsao, Barghouty, & Silberberg 1999, and references therein). After many years of effort by many investigators, numerous empirical formulae are now available for spallation cross sections and excitation functions. Many current systematics for excitation functions have been reviewed by Koning (1993); most of the old systematics available in 1970 were analyzed in the comprehensive monograph by Barashenkov and Toneev (1972); the majority of the systematics for mass yields, charge dispersions, and energy and angular distributions of fragments produced in pA and AA collisions at relativistic energies available in 1985 are presented in the review by Hüfner (1985); useful systematics for different hadron-nucleus interaction cross sections may be found in our review (Gabriel & Mashnik 1996); improved parametrizations for fragmentation cross sections were recently published by Sümmerer and Blank (2000); and the latest update of the YIELDX code, well known and widely used in astrophysics, together with further references, may be found in Silberberg, Tsao, & Barghouty (1998) and Tsao, Barghouty, & Silberberg (1999). Let us also mention the older systematics by Rudstam (1966), Gupta, Das, & Biswas (1970), Silberberg & Tsao (1973a, 1973b), and Foshina, Martins, & Tavares (1984), widely used in past astrophysical simulations, and direct readers interested in references on other phenomenological systematics to the surveys by Koning (1993), Barashenkov & Toneev (1972), Hüfner (1985), Gabriel & Mashnik (1996), and Tsao, Barghouty, & Silberberg (1999), as well as to the recent work by Michel et al. (1995). Michel and co-authors (1995) performed a special analysis of the predictive power of different semiempirical systematics and concluded that "Semiempirical formulas will be quite successful if binding energies are the crucial parameters dominating the production of the residual nuclides, i.e. for nuclides far from stability. In the valley of stability, the individual properties of the residual nuclei, such as level densities and individual excited states, determine the final phase of the reactions. Thus, the averaging approach of all semiempirical formulas will be inadequate." In this case, one has to perform calculations in the framework of reliable models of nuclear reactions. As was mentioned by Silberberg, Tsao, & Shapiro (1976), there are also additional cases when Monte Carlo calculations should be used: (1) when it is essential to know the distributions in angle and energy of the ejected nucleons, (2) when the nuclear reaction is induced by neutrons, and (3) when the particles have relatively low energies (E ≤ 60 MeV). As an example, Fig. 11 shows a comparison between the new data for isotope production from the interaction of 70-MeV protons with 59Co by Titarenko et al. (1999a) and results obtained with the systematics by Silberberg, Tsao, & Barghouty (1998), denoted YIELDX in the figure, and with the semiempirical formulas by Foshina, Martins, & Tavares (1984), together with calculations using the Monte Carlo codes CEM95 (Mashnik 1995), LAHET (Prael & Lichtenstein 1989), INUCL (Stepanov 1989), and HETC (Armstrong & Chandler 1972).

Fig. 11.-Products in 59Co irradiated with 0.07-GeV protons (Titarenko et al. 1999a). Results labeled YIELDX and "Foshina et al." were obtained with the updated systematics by Silberberg, Tsao, & Barghouty (1998) and with the semiempirical formulas by Foshina, Martins, & Tavares (1984), respectively, both often used in astrophysical simulations. Results labeled CEM95, LAHET, INUCL, and HETC were calculated with the Monte Carlo codes by Mashnik (1995), Prael & Lichtenstein (1989), Stepanov (1989), and Armstrong & Chandler (1972), respectively. One can see discrepancies of more than an order of magnitude for the spallation cross sections of some isotopes.

One can see that for these reactions neither the phenomenological systematics by Silberberg, Tsao, & Barghouty (1998) nor the semiempirical formulas by Foshina, Martins, & Tavares (1984), both widely used in astrophysics, provide a good description of all the data; therefore, we cannot rely exclusively on them in astrophysical and other simulations. In such situations, one has to perform calculations in the framework of reliable Monte Carlo models of nuclear reactions and to use the available experimental data. As was mentioned by Mashnik, Sierk, Van Riper, & Wilson (1998), ideally it would be desirable to have for applications a universal evaluated library that includes data for all nuclides, projectiles, and incident energies. At present, neither the measurements nor any of the current models or phenomenological systematics can be used alone to produce a reliable evaluated activation library covering a large range of target nuclides and incident energies. As one can see from Fig. 11, some of the best Monte Carlo codes also have serious difficulties in describing part of the data. The problem is to determine the predictive power of the different models, codes, and phenomenological systematics, and to identify the regions of projectiles, targets, incident energies, and produced nuclides where each model or systematics works best. Once this is known, a reliable evaluated library can be created, as we did in our medical isotope production study (Van Riper, Mashnik, & Wilson 1998). We think a similar library would be very useful for astrophysical simulations as well; therefore, let us recall here our main concept. We chose to create our evaluated library (Mashnik, Sierk, Van Riper & Wilson 1998) by constructing excitation functions using all available experimental data along with calculations by some of the more reliable codes, employing each of them in the regions of targets and incident energies where they are most applicable. When we had reliable experimental data, they were taken as the highest priority in our approximation, compared to model results, and wherever possible we attempted to construct a smooth transition from one data source to another. The recent International Code Comparisons for Intermediate Energy Nuclear Data organized by the NEA/OECD at Paris (Blann et al.
1994; Michel & Nagel 1997), our own comprehensive benchmarks (Van Riper et al. 1997, Mashnik, Sierk, Van Riper & Wilson 1998, Van Riper, Mashnik, & Wilson 1998), several studies by Titarenko et al. (1999a, 1999b, and references therein), and the recent Ph.D. thesis by Batyaev (1999), specially dedicated to benchmarking the currently available models and codes, have shown that a modified version of the Cascade-Exciton Model (CEM), as realized in the code CEM95 (Mashnik 1995), and the LAHET code system (Prael & Lichtenstein 1989) generally have the best predictive power for spallation reactions at energies above 100 MeV, as compared with other available models. Therefore, we chose CEM95 (Mashnik 1995), the recently improved version of the CEM code, CEM97x, and LAHET (Prael & Lichtenstein 1989) to evaluate the required cross sections above 100 MeV. The same benchmarks have shown that at lower energies, the HMS-ALICE code (Blann & Chadwick 1998) reproduces experimental results most accurately as compared with other models. We therefore use the activation library calculated by Chadwick (M. B. Chadwick 1998, private communication) with the HMS-ALICE code (Blann & Chadwick 1998) for protons below 100 MeV and for neutrons between 20 and 100 MeV. In the overlapping region, between 100 and 150 MeV, we use both HMS-ALICE and CEM95 and/or LAHET results. For neutrons below 20 MeV, we consider the data of the European Activation File EAF-97, Rev. 1 (Muir & Koning 1996; Sublet, Kopecky, Forrest, & Niegro 1997), with some recent improvements by Herman (1996), to be the most accurate results available; therefore, we use them as the first priority in our evaluation. Measured cross-section data from our LANL T-2 compilation described in the previous subsection (Mashnik, Sierk, Van Riper & Wilson 1998), when available, are included together with the theoretical results and are used to evaluate the cross sections under study. We note that when we put all these different theoretical results and experimental data together, they rarely agree perfectly with each other so as to provide smooth continuity of the evaluated excitation functions. Often, the resulting compilations show significant disagreement at energies where the available data pass from one source to another. These sets are thinned to eliminate discrepant data, providing data sets of more-or-less reasonable continuity that define our evaluated cross sections. An example with typical results of evaluated activation cross sections for both proton- and neutron-induced reactions is shown in Fig. 12; 51 similar color figures for proton-induced reactions and 57 figures for neutrons can be found on the Web, in our detailed report (Van Riper, Mashnik, & Wilson 1998). We think that constructing and using similar evaluated libraries in astrophysical calculations (at least for the most important reactions) would significantly improve the reliability of the final results and would help us, for instance, to better understand the origin of some light and medium elements, their abundances, and the role of spallation processes in nucleosynthesis. New reliable measurements, in particular on separate isotopes of (enriched) targets or using reverse kinematics as mentioned above, and further development of nuclear reaction models and phenomenological systematics are necessary to produce a reliable evaluated library of spallation cross sections.
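A highly simplified Python sketch of the prioritized blending described above (experimental points taking precedence, code results filling in where data are absent) might look as follows; the priority weighting and inputs are illustrative, and the real evaluation also involves manual thinning of discrepant sets and smoothing of the transitions.

```python
import numpy as np

def evaluate(e_grid, sources):
    """Blend prioritized (priority, E, sigma) sources into one evaluated curve.

    Lower priority numbers receive more weight (0 = experiment, 1 = best code).
    """
    num = np.zeros_like(e_grid)
    den = np.zeros_like(e_grid)
    for priority, e, xs in sorted(sources, key=lambda s: s[0]):
        w = 1.0 / (1.0 + priority)                    # crude priority weighting
        y = np.interp(e_grid, e, xs, left=np.nan, right=np.nan)
        ok = ~np.isnan(y)                             # only where the source has coverage
        num[ok] += w * y[ok]
        den[ok] += w
    return np.where(den > 0.0, num / np.maximum(den, 1e-30), np.nan)

# Hypothetical inputs: three experimental points plus a smooth model curve (mb).
exp_src = (0, np.array([100.0, 200.0, 400.0]), np.array([55.0, 48.0, 35.0]))
e_mod = np.linspace(50.0, 1000.0, 96)
mod_src = (1, e_mod, 60.0 * np.exp(-e_mod / 800.0))
curve = evaluate(np.linspace(50.0, 1000.0, 200), [exp_src, mod_src])
```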
Excitation functions, i.e., spallation cross sections as functions of the kinetic energy of the projectiles, are a very "difficult" characteristic of nuclear reactions, as they involve together the different and complicated physical processes of spallation, evaporation, fission, and fragmentation of nuclei. Much work remains to be done by theorists and code developers before a reliable complex of codes able to satisfactorily predict arbitrary excitation functions over a wide range of incident energies/projectiles/targets/final nuclides becomes available. At present, we are still very far from the completion of this difficult task (Mashnik, Sierk, Bersillon, & Gabriel 1997). In the meantime, to evaluate the excitation functions needed for astrophysics, nuclear science, and applications, it is necessary to use and analyze together the available experimental data and, for each region of incident energies/projectiles/targets/final nuclides, the predictions of phenomenological systematics and the results of calculations with the most reliable codes, and not to limit ourselves to just one source of data, as was the practice in many past astrophysical simulations.

Fig. 12.-Typical evaluated activation cross sections for both proton- and neutron-induced reactions (Garland, Schenter, Talbert, Mashnik, & Wilson 1999). Experimental data for protons from the LANL T-2 compilation (Mashnik, Sierk, Van Riper, & Wilson 1998) are shown by triangles, and for neutrons, from the European Activation File EAF-97 (Sublet, Kopecky, Forrest, & Niegro 1997), by the magenta line marked with "E". Calculations with the HMS-ALICE code (Blann & Chadwick 1998) are shown by blue lines marked with "A", and with the CEM95 code (Mashnik 1995), by red lines marked with "C". Evaluated cross sections are shown by broad gray lines.

Summary

We have presented a brief review of nuclide abundances in the solar system and in cosmic rays, and of the mechanisms believed today to be responsible for their production. We have shown with a number of examples that nuclear spallation processes play an important role not only in the synthesis of the light nuclei Li-Be-B, but also in the production of other elements in the solar system, in cosmogenic nucleosynthesis, in the production of most of the energetic nuclei in cosmic rays, in the cosmic-ray exposure of lunar and planetary surface material and of meteorites, as a source of positron emitters, etc. To study and understand these processes, reliable spallation cross sections for a variety of reactions are needed. We have presented a brief review of recent measurements, compilations, calculations, and evaluations of spallation cross sections relevant to astrophysics. We have shown with several examples that some past astrophysical simulations used old experimental cross sections that are in poor agreement with recent measurements and with calculations by reliable modern models of nuclear reactions. We suggest not limiting astrophysical calculations to only one source of spallation cross sections, as was done in some previous works, but instead using and analyzing together all available experimental data and, for each region of incident energies/projectiles/targets/final nuclides, the predictions of phenomenological systematics and the results of calculations with the most reliable models and codes. Even better would be to produce a universal evaluated library of spallation cross sections needed for astrophysics, using together the available experimental data and calculations with the most reliable codes, as was done before in the T-2 group at LANL for a number of reactions of interest in our medical isotope production study.
Such an evaluated data library would be very useful not only for astrophysical simulations, but also for fundamental nuclear physics itself and for a number of important applications, such as ATW, ABC, ADEP, APT, medical isotope production, etc. New reliable measurements on separate isotopes of (enriched) targets or using reverse kinematics, the extension and updating of the compilations of spallation cross sections already created by nuclear physicists, such as NUCLEX and the LANL T-2 library, the merging of these data libraries with astrophysical libraries such as CSIRS, ECSIL, and ECSIL2, and, finally, the further development of nuclear reaction models and phenomenological systematics are necessary to successfully complete this goal.
5G NR-V2X: Towards Connected and Cooperative Autonomous Driving

This paper is concerned with the key features and fundamental technology components of 5G New Radio (NR) for the genuine realization of connected and cooperative autonomous driving. We discuss the major functionalities of the physical layer, Sidelink features and resource allocation, architecture flexibility, security and privacy mechanisms, and precise positioning techniques, with an evolution path from existing cellular vehicle-to-everything (V2X) technology towards NR-V2X. Moreover, we envisage and highlight the potential of machine learning for further enhancement of various NR-V2X services. Lastly, we show how 5G NR can be configured to support advanced V2X use cases in autonomous driving.

I. INTRODUCTION

The fifth generation (5G) mobile communication networks, aiming for highly scalable, converged, and ubiquitous connectivity, will be a game changer, opening the door to new opportunities, services, applications, and a wide range of use cases. One of the most promising 5G use cases, expected to shape and revolutionize future transportation, is vehicle-to-everything (V2X) communication, which is seen as a key enabler for connected and autonomous driving. V2X communication, as defined by the 3rd Generation Partnership Project (3GPP), consists of four types of connectivity: vehicle-to-vehicle (V2V), vehicle-to-pedestrian (V2P), vehicle-to-infrastructure (V2I), and vehicle-to-network (V2N). Next-generation vehicles will be equipped with cameras, radar, global navigation satellite system (GNSS), wireless technologies, and various types of sensors to support autonomous driving at different levels. However, the functionality of these embedded sensors and cameras is limited by the need for line-of-sight propagation. This can be circumvented by equipping vehicles with cellular V2X (C-V2X) technology, which complements embedded sensor functions with sensor data exchange between vehicles, thus providing a higher level of driver situational awareness. So far, C-V2X has attracted significant interest from both the academic and industrial communities. A very promising technology to realize V2X communications and autonomous driving is 5G New Radio (NR). The 5G network is expected to provide ultra-high reliability, low latency, high throughput, flexible mobility, and energy efficiency. From a communication point of view, 5G should support the following three broad categories of services: enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (URLLC). Specifically, eMBB, aiming to provide data rates of at least 10 Gbps for uplink and 20 Gbps for downlink channels, plays a pivotal role for in-car video conferencing/gaming, various multimedia services, and high-precision map downloading. mMTC will allow future driverless vehicles to constantly sense and learn environment changes from embedded sensors deployed in cars or within the infrastructure. URLLC targets 1 ms over-the-air round-trip time (RTT) and 99.999% reliability for single transmissions, which are critical for autonomous driving. In fact, 5G NR is designed as a unified framework to address a wide range of service requirements and to enable novel V2X use cases, as illustrated in Fig. 1. The primary objective of this paper is to indicate how 5G NR supports the realization of autonomous driving. We first discuss the 3GPP standardization roadmap, focusing on the key features of NR-V2X communications.
Next, we present the design considerations, technology components, functionalities, and key enhancements of NR-V2X. We provide an insight into the novel and powerful attributes of the NR physical layer (PHY). We discuss NR Sidelink resource allocation and highlight its enhanced functionalities for broadcasting and multicasting. We briefly outline 5G NR architecture deployment options, focusing on flexible mobility and dual connectivity. We explain the security and privacy issues of NR-V2X communications. In addition, we discuss how machine learning can be exploited to improve the performance of V2X communications. Finally, we discuss advanced use cases for tangible applications of NR-V2X.

II. 5G NR: 3GPP ROADMAP

The first C-V2X specifications incorporating long-term evolution (LTE) communication technology into vehicular networks, denoted LTE-V2X, were introduced in 3GPP Rel. 14 [1]. In LTE-V2X, two operation modes are defined: 1) Network-based communication uses conventional LTE infrastructure to enable vehicles to communicate with the network. The LTE-Uu interface refers to the logical interface between a vehicle and the network infrastructure. 2) Direct communication mode is based on device-to-device (D2D) communication, defined in 3GPP Rel. 13 [2]. This mode allows devices to exchange real-time information directly without involving network infrastructure. The PC5 interface, known as LTE Sidelink, is designed to enable direct short-range communications between devices (e.g., V2V, V2P, V2I). 3GPP Rel. 14, completed in June 2017, forms the basis and roadmap for LTE-V2X towards further enhancements and integration into 5G NR. Despite all its capabilities, LTE-V2X does not address the stringent requirements of autonomous driving, specifically for URLLC. 3GPP has defined a new end-to-end network architecture for 5G NR that can meet the needs of autonomous driving. The 3GPP timeline for the 5G standard follows two consecutive phases. Phase 1, Rel. 15, the first step on the 5G NR standardization roadmap, focused on eMBB and initial URLLC [3]. Phase 2 started with 3GPP Rel. 16 and focuses on expanding and optimizing the features developed in Phase 1. NR-V2X in Rel. 16 aims to bring enhanced URLLC and higher throughput while maintaining backward compatibility with Rel. 15 [4]. NR-V2X, in addition to broadcast transmissions, will support both unicast and multicast transmissions. 3GPP Rel. 17 will continue to improve 5G coverage, mobility, deployment, latency, and services. Fig. 2 illustrates the 3GPP roadmap towards NR-V2X.

III. KEY FEATURES OF 5G NEW RADIO

This section summarizes the NR key features that will fulfill the diverse and stringent requirements of autonomous driving from the network, user, and application perspectives.

A. The 5G NR Physical Layer (PHY) Design

The 5G NR PHY design needs to deal with harsh V2X channel conditions and diverse data service requirements, specifically: 1) Highly dynamic mobility, from low-speed vehicles (e.g., less than 60 km/h) to high-speed cars/trains (e.g., 500 km/h or higher). The air interface design for high-mobility communication requires more time-frequency resources to deal with the impairments incurred by Doppler spread and multi-path channels. 2) A wide range of data services (e.g., in-car multimedia entertainment, video conferencing, high-precision map downloading, etc.) with different quality-of-service (QoS) requirements in terms of reliability, latency, and data rates.
Some requirements (e.g., high data throughput versus ultra-reliability) may be conflicting, and hence it may be difficult to support them simultaneously. Against this background, the frame structure of 5G NR [5] allows flexible configurations for enabling the support of a majority of C-V2X use cases. Similar to LTE, 5G NR uses orthogonal frequency-division multiplexing (OFDM), whose performance is sensitive to inter-carrier interference (ICI) incurred by carrier frequency offsets and Doppler spreads/shifts. The maximum channel bandwidth per NR carrier is 400 MHz, compared to 20 MHz in LTE. Identical to LTE, the frame length is fixed at 10 ms, the length of a subframe is 1 ms, the number of subcarriers per resource block (RB) is 12, and each slot comprises 14 OFDM symbols (12 symbols in extended cyclic-prefix mode). Compared to the LTE numerology with a subcarrier spacing of 15 kHz, the NR frame structure supports multiple subcarrier spacings of 15, 30, 60, 120, or 240 kHz. A small subcarrier spacing could be configured for C-V2X use cases requiring high data rates but with low/modest mobility, while a large subcarrier spacing is of particular interest for the suppression of ICI in high-mobility channels (see the numerology sketch at the end of this subsection). Channel coding plays a fundamental role in the C-V2X PHY to accommodate a diverse range of requirements in terms of data throughput, packet length, decoding latency, mobility, rate compatibility, and the capability of supporting efficient hybrid automatic repeat request (HARQ). Unlike LTE, which uses convolutional and Turbo codes, two capacity-approaching channel codes have been adopted in 5G NR [6]: low-density parity-check (LDPC) codes and polar codes. While the former is used to protect user data, the latter is for control channels in eMBB and URLLC, which require ultra-low decoding latency. Excellent quasi-cyclic LDPC (QC-LDPC) codes have been designed for 5G NR. The unique structure of QC-LDPC allows parallel decoding in hardware implementations (i.e., lower decoding latency). C-V2X services in 5G NR are expected to share and compete with other vertical applications for system resources (e.g., spectrum/network bandwidth, storage and computing, etc.) within a common physical infrastructure. A central question is how to design an efficient network that provides guaranteed QoS for V2X while balancing data services to other vertical applications. Network slicing (NS), the paradigm of creating multiple logical networks tailored to different types of data services and business operators [7], offers a mechanism to meet the requirements of all use cases and enables the individual design, deployment, customization, and optimization of different network slices on a common infrastructure. Although initially proposed for the partition of core networks using techniques such as network function virtualization (NFV) and software-defined networking (SDN), the concept of NS has been extended to provide efficient end-to-end data services by slicing PHY resources in radio access networks (RANs). The slicing of PHY resources mainly involves the dynamic allocation of time and frequency resources by providing multiple numerologies, each of which constitutes a set of data frame parameters such as multi-carrier waveforms, subcarrier spacings, sampling rates, and frame and symbol durations.
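The slot timings implied by the scalable numerology quoted above follow directly from the fixed 14-symbol slot and the 1 ms subframe; a small Python sketch (ignoring cyclic-prefix details):

```python
# NR numerology: SCS = 15 * 2**mu kHz; 14 OFDM symbols per slot; 1 ms subframe.
for mu, scs_khz in enumerate([15, 30, 60, 120, 240]):
    slots_per_subframe = 2 ** mu              # slot duration halves as SCS doubles
    slot_ms = 1.0 / slots_per_subframe
    symbol_us = slot_ms * 1000.0 / 14         # nominal symbol length, CP ignored
    print(f"SCS {scs_khz:3d} kHz: {slots_per_subframe:2d} slot(s)/subframe, "
          f"slot = {slot_ms:.4f} ms, symbol ~ {symbol_us:5.1f} us")
```

The shorter symbols at wide subcarrier spacings are what make high subcarrier spacings attractive in high-mobility channels: the subcarrier spacing grows relative to the Doppler spread, suppressing ICI, at the cost of a shorter cyclic prefix.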
B. NR Sidelink Features and Resource Allocation

Through Sidelink protocols, each vehicle can directly exchange its own status information, such as location, speed, trajectory, and intended local route, with other vehicles, pedestrians, and road infrastructure. The basic functionalities of the NR Sidelink are the same as those of the LTE Sidelink. However, the NR Sidelink introduces major enhancements in functionality that enable advanced 5G use cases and could enhance autonomous driving. The key enhancements in the NR Sidelink protocols are as follows: i) a Sidelink feedback channel for higher reliability and lower latency, ii) carrier aggregation with support for up to 16 carriers, iii) a modulation scheme supporting up to 256-QAM for increased throughput per single carrier, and iv) modified resource scheduling for reduced resource selection time. Moreover, NR-V2X, along with traditional broadcast communication, supports unicast and groupcast communications, where one vehicle can transmit different types of messages with different QoS requirements. For instance, a vehicle can transmit periodic messages by broadcasting and aperiodic messages through unicast or groupcast. The reliability of unicast and groupcast communications can be improved via a re-transmission mechanism. It is noted that re-transmission in LTE-V2X is carried out in a blind manner, i.e., when the source vehicle uses re-transmissions, it re-transmits regardless of whether the initial transmission was successful or not. In the case of a successful transmission, however, such blind re-transmission leads to resource wastage. When several transmissions are required, blind re-transmission may be highly inefficient. In NR-V2X, a new feedback channel, called the physical sidelink feedback channel (PSFCH), is introduced to enable feedback-based re-transmission and channel state information acquisition [8]. Detailed operations and procedures of PSFCH feedback transmissions are presented in [9]. In NR-V2X, the available resources for direct communication between vehicles can be either dedicated or shared with cellular users. To manage the resources, two Sidelink modes are defined for NR-V2X, Mode-1 and Mode-2. In Sidelink Mode-1, it is assumed that the vehicles are fully covered by one or more base stations (BSs). The BSs allocate resources to vehicles based on configured and dynamic scheduling. Configured scheduling adopts a pre-defined bitmap-based resource allocation, while dynamic scheduling allocates or reallocates resources every millisecond based on the varying channel conditions. In Sidelink Mode-2, resources need to be allocated in a distributed manner without cellular coverage. There are four sub-modes, 2(a)-2(d), for Mode-2. In 2(a), each vehicle can select its resources autonomously through a sensing-based semi-persistent transmission mechanism. 2(b) is a cooperative distributed scheduling approach, where vehicles can assist each other in determining the most suitable transmission resources. In 2(c), a vehicle selects resources based on preconfigured scheduling. In 2(d), a vehicle schedules the Sidelink transmissions of its neighbouring vehicles.

C. Dual Connectivity and Mobility Robustness

3GPP has defined multiple options for 5G NR deployment, which can be broadly categorized into two modes, non-standalone (NSA) and standalone (SA). In order to accelerate the deployment of 5G networks, the initial phase of NR will be aided by the existing 4G infrastructure and deployed in NSA operation mode.
In contrast, the full version of NR will be implemented and deployed in SA mode. The NSA mode supports interworking between 4G and 5G networks. The NSA architecture is comprised of LTE BSs (eNBs), the LTE evolved packet core (EPC), 5G BSs (gNBs), and the 5G core (5GC) network. NSA has the salient advantage of a shorter implementation time, as it leverages an existing 4G network with only minor modifications. It can support both legacy 4G and 5G devices. Essentially, NSA mode implies multiple radio access technologies (RATs) and dual connectivity for end-users [10]. Among all the NSA deployment options, options 3, 4, and 7 are the most common ones supporting dual connectivity and mobility robustness [11], as illustrated in Fig. 3. The SA mode consists of only one technology generation, LTE or NR. SA operation in NR is envisaged to have an entirely new end-to-end architecture and a 5GC network. In fact, gNBs are directly connected to the 5GC, utilizing 5G cells for both control- and user-plane transfer. The SA mode is designed to enhance URLLC while fulfilling the requirements of eMBB and mMTC. The key advantages of SA mode are easy deployment and improved RAT and architecture performance. That said, it requires the 5G RAT to be rebuilt and a cloud-native 5G core to fully realize all the potential benefits of a 5G network. In addition, the SA mode facilitates a wider range of new use cases and supports advanced NS functions.

D. Security Aspects of 5G NR-V2X

5G inherits the basic security mechanisms of 4G. Accordingly, NR-V2X will utilize functionalities equivalent to those in LTE-V2X. However, due to fundamental changes in the 5G architecture and its end-to-end communication, new mechanisms need to be adopted. Basically, the required functionalities and security enhancements for 5G networks depend largely on the deployment strategy. The scope of security enhancements in NSA is limited, as it is dependent on the underlying 4G deployment and requires the identification of 5G functions which match 4G components. In contrast, security enhancement with SA deployment will allow the network to support more security features to tackle potential security challenges. Next, we briefly discuss the key enhancements of NR-V2X security. From an architectural perspective, NR-V2X should ensure the security of users, vehicles, end-to-end communication entities, functions, and interfaces [12]. This can be achieved with new 5G core entities, new network functions, and stronger authentication and authorization schemes between vehicles, vehicle to RAN, and vehicle to core. The security anchor function (SEAF), defined in Rel. 15, is a new function used to enhance security at the network level and to provide flexible authentication and authorization schemes. SEAF enables more flexible deployment of the access and mobility management function (AMF) and session management function (SMF) entities. With this feature, device access authentication is separated from data session setup and management, which provides secure mobility and authorized access to V2X services for vehicles and users. The general principle of user equipment (UE) authorization is similar to that in LTE systems. The only difference is that authorization in 5G is provided by the policy control function (PCF). Autonomous driving demands a real-time and reliable authentication process while keeping the overhead introduced by security protocols as low as possible.
In terms of privacy protection, the major concern relates to encryption schemes for concealing the subscription permanent identifier (SUPI) to protect against user data leakage through initial messages. In 5G, subscriber/device privacy is provided by the SUPI, which is a major change from LTE with its international mobile subscriber identity (IMSI). While the IMSI is typically transmitted in plain text over the air, the SUPI travels in ciphertext over the radio link to be protected against spoofing and tracking. Moreover, 5G enhances authentication by exploiting the extensible authentication protocol (EAP) and supporting EAP authentication and key agreement (EAP-AKA) in order to separate the authentication and authorization procedures in a flexible manner. The second major issue is related to user data privacy over PC5, as vehicles may need to share private information (e.g., user identity, the vehicle's location). While restrictions are required with regard to the sharing of private data, some of it may need to be accessible to trusted authorities (e.g., police, rescue teams) to detect malicious attackers or to ensure the timely handling of emergencies such as accidents. As far as NS is concerned, services instantiated as a NS may have different security requirements. Access to a NS should be granted only to authenticated subscribers. The security protocol should ensure communication integrity, confidentiality, and authorization. 5G introduces the concepts of slice isolation, robust slice access, and slice security management to ensure that an attack mounted on one slice does not increase the risk of attack on another slice. In 5G NR, every gNB is logically split into a central unit (CU) and a distributed unit (DU). These modules interact via a secure interface. The security provided by this interface can prevent an attacker from breaching the operator's network, even in the case of successful access to the radio module.

E. Precise Positioning

Satellite-based positioning systems are unable to provide the sufficiently accurate positioning needed for autonomous driving. LTE-V2X has been exploiting several radio-signal-based mechanisms to improve positioning accuracy, namely: downlink-based observed time difference of arrival (OTDOA), uplink time difference of arrival (UTDOA), and enhanced cell ID (E-CID). NR-V2X combines the existing positioning technologies with new positioning methods such as multi-cell round-trip time (Multi-RTT), uplink angle of arrival (UL-AoA), downlink angle of departure (DL-AoD), and time of arrival (TOA) triangulation to provide more precise vehicle positioning [13]. Moreover, NR-V2X can also use real-time kinematic (RTK) positioning, an accurate satellite-based relative positioning measurement technique, to provide centimetre-level positioning accuracy in some outdoor scenarios. By using wider bandwidths, flexible massive antenna systems, and beamforming, NR-V2X will provide more precise timing and more accurate measurements than the equivalent signal-based techniques in LTE-V2X. Note that no single approach may be able to reliably provide the positioning accuracy required for autonomous driving in all environmental conditions. Hence, hybrid solutions that optimally combine NR advanced positioning techniques with the multitude of sensors embedded in next-generation vehicles are the most promising approaches to achieve the vehicle positioning accuracy needed for autonomous driving.
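As a toy illustration of the Multi-RTT idea, each measured round-trip time implies a range c·RTT/2 to the corresponding gNB, and the position follows from a least-squares fit over several gNBs. The following Python sketch uses hypothetical gNB positions and a few nanoseconds of simulated timing noise; it is not a standardized procedure, only the underlying geometry.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

gnbs = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])  # m
true_pos = np.array([220.0, 130.0])

# Simulated RTT measurements with ~3 ns of timing error per gNB.
rng = np.random.default_rng(0)
rtt = 2.0 * np.linalg.norm(gnbs - true_pos, axis=1) / C + rng.normal(0.0, 3e-9, 4)

def residuals(p):
    # Each RTT implies a one-way range of c * RTT / 2 to its gNB.
    return np.linalg.norm(gnbs - p, axis=1) - C * rtt / 2.0

est = least_squares(residuals, x0=np.array([250.0, 250.0])).x
print("estimate:", est, "error (m):", np.linalg.norm(est - true_pos))
```

A few nanoseconds of timing error translate into roughly a metre of ranging error, which is why the wide bandwidths (and hence fine timing resolution) of NR are central to its positioning gains.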
IV. APPLICATION OF MACHINE LEARNING FOR NR-V2X

Rapidly varying vehicular environments, caused by vehicle mobility and frequent changes of network topology and wireless channel, together with the stringent requirements of URLLC, increase the system design complexity of an end-to-end V2X network. In such a dynamic environment, machine learning (ML) can be an effective tool for addressing operational challenges, compared with traditional network management approaches that are better suited to relatively low-mobility scenarios. As mentioned above, vehicles are envisioned to be equipped with many advanced on-board sensors which will generate a high volume of data. ML can efficiently analyse large volumes of data, find unique patterns and underlying structures, and make proper decisions by adapting to changes and uncertainties in the environment. In addition, ML can be implemented in a distributed manner to manage network issues with reduced complexity and signalling overhead compared with a centralized approach. Thus, ML is applicable to various operational aspects of vehicular networks, using vehicle kinetics (e.g., speed, direction, acceleration), road conditions, traffic flow and the wireless environment for adaptive, data-driven decisions. From the PHY perspective, synchronization and channel estimation in high-mobility channels are challenging tasks for V2X communication system design. The V2X system may experience frequent loss of synchronization and has to deal with short-lived channel state information (CSI) estimates due to very short channel coherence times. In addition, the use of the mmWave band requires fast and efficient beam tracking and switching to establish and maintain reliable links in rapidly changing environments. Here, ML can be useful for learning, tracking and predicting relevant information (e.g., synchronization points and CSI in highly volatile channels, and beamforming directions) by exploiting historical information (e.g., user location, received power, previous beam settings, and context information covering network status). ML may also help improve multiple-UE grouping, which involves cross-layer operation between the PHY and medium access control (MAC) layers. For example, should V2X use orthogonal multiple access (OMA) or non-orthogonal multiple access (NOMA) in a specific vehicular channel? Although 3GPP has decided to leave NOMA study items to beyond 5G, it is anticipated that NOMA will play an important role in autonomous driving; a major advantage of NOMA is that it serves multiple users on the same time/frequency resources. ML can help switch between OMA and NOMA based on the requirements of each specific V2X use case. High vehicle mobility also makes the design of efficient radio resource management (RRM) mechanisms an extremely challenging problem. The conventional approach to RRM is optimization. However, in a highly dynamic vehicular environment, traditional optimization approaches may not be feasible: a small change in the vehicular environment may require a rerun of the optimization, leading to prohibitively high overhead and inefficiency. In addition, accommodating the different requirements of V2X services makes multi-objective optimization complicated and time-consuming. ML-based approaches may be more efficient in such scenarios for a number of resource allocation problems, including channel and power allocation, user association and handoff.
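As one minimal, self-contained illustration of learning-based link adaptation (a generic epsilon-greedy bandit on synthetic rewards, not a scheme from the NR specification), the following Python sketch learns which of a few candidate beams yields the best average received power:

```python
# Minimal epsilon-greedy bandit for beam selection on synthetic
# rewards (illustrative only; not an NR-specified procedure).
import random

NUM_BEAMS = 4
TRUE_MEAN_POWER = [0.2, 0.5, 0.9, 0.4]   # hidden per-beam quality

def measure_power(beam: int) -> float:
    """Noisy reward, standing in for a received-power measurement."""
    return TRUE_MEAN_POWER[beam] + random.gauss(0.0, 0.1)

def run(trials: int = 2000, eps: float = 0.1) -> int:
    counts = [0] * NUM_BEAMS
    means = [0.0] * NUM_BEAMS
    for _ in range(trials):
        if random.random() < eps:                    # explore
            beam = random.randrange(NUM_BEAMS)
        else:                                        # exploit
            beam = max(range(NUM_BEAMS), key=lambda b: means[b])
        reward = measure_power(beam)
        counts[beam] += 1
        means[beam] += (reward - means[beam]) / counts[beam]  # running mean
    return max(range(NUM_BEAMS), key=lambda b: means[b])

print("selected beam:", run())  # converges to beam 2 in this toy setup
```

The same explore-exploit structure carries over to the OMA/NOMA switching and resource allocation decisions discussed above, with richer state and reward definitions.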
Considering NR sidelink transmission, vehicles are expected to reach a more sophisticated level of coordinated driving through intent sharing. In this regard, ML-based transmission mode selection and resource allocation are of interest in view of the stringent requirements of V2V communication. While resource allocation algorithms for D2D have mostly been developed in a centralized manner, centralized control incurs huge overhead in obtaining global network information, possibly creating a bottleneck. In contrast, a decentralized ML-based approach has the potential to allow every vehicle to learn an optimal resource allocation strategy from its own observations. Thus, a multi-agent learning approach in which each vehicle can learn and cooperate is highly desirable. Vehicle trajectory prediction has been receiving increasing interest as a way to support safe-driving features such as collision avoidance and road hazard warning. A motion model can be learned from previously observed trajectories, and vehicles' future locations can then be predicted via ML from the observed mobility traces and movement patterns. Additionally, factors that are harder to observe but still affect the trajectories, such as drivers' intentions, traffic patterns and road structures, may be implicitly learned from historical data. Such vehicle trajectory prediction can also be helpful for handoff control, link scheduling and routing; for instance, the most promising relay node can be selected for message forwarding, and seamless handoff between V2V and V2I can be achieved in an effective routing scheme that uses trajectory prediction. While ML is expected to play an important role in enabling data-driven intelligence at the edge and in the UE (beyond network-side intelligence), there are also challenges in adopting ML in vehicular networks. Firstly, ML may produce undesired results; since even minor errors could have severe consequences for safety-sensitive services, significant efforts need to be made to improve the robustness and security of ML-based approaches [14]. Additionally, the on-board computational resources in each vehicle may be limited, and due to stringent end-to-end delay constraints, the use of cloud-based computing resources may not be feasible. In such cases, advanced ML techniques are needed for vehicles with limited computing capability: techniques such as model reduction or compression should be considered in the design of ML-based approaches to alleviate the computational limitation without degrading V2X communication performance.
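As a toy illustration of trajectory prediction from mobility traces (an ordinary least-squares motion model on synthetic data, far simpler than the learned models discussed above), the following sketch fits a linear predictor that maps a vehicle's last three observed positions to its next position:

```python
# Toy trajectory prediction: learn a linear map from the last three
# observed 2-D positions to the next one (synthetic data; real
# systems would use far richer learned motion models).
import numpy as np

rng = np.random.default_rng(0)

def make_trace(n=60):
    """Synthetic vehicle trace: gentle curve plus GPS-like noise."""
    t = np.arange(n, dtype=float)
    xy = np.stack([5.0 * t, 0.02 * t**2], axis=1)
    return xy + rng.normal(0.0, 0.3, size=xy.shape)

trace = make_trace()
H = 3  # history length

# Build (features, targets): flatten the last H positions -> next one.
X = np.stack([trace[i:i + H].ravel() for i in range(len(trace) - H)])
Y = trace[H:]

# Ordinary least squares with a bias term.
Xb = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

# One-step-ahead prediction from the most recent history window.
last = np.append(trace[-H:].ravel(), 1.0)
print("predicted next position:", last @ W)
```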
V. 5G NR-V2X USE CASES IN COOPERATIVE AND AUTONOMOUS DRIVING

The success of 5G NR in practice is largely related to how well it can fulfill the requirements of designated services and advanced use cases. One of the main objectives of the NR-V2X standard is to support use cases with stringent requirements of ultra-high reliability, ultra-low latency, very accurate positioning and high throughput, which may not be achievable with LTE-V2X. NR-V2X is not intended to replace LTE-V2X services but to complement them with advanced services: while LTE-V2X targets basic safety services, NR-V2X can be used for advanced safety services as well as cooperative and connected autonomous driving. The following use cases [15] are among the target services that may be supported by NR-V2X:

• Trajectory sharing and coordinated driving: The intention/trajectory of each vehicle is shared to enable fast yet safe maneuvers based on the planned movements of surrounding vehicles. The exchange of intentions and sensor data ensures more predictable, coordinated autonomous driving.

• Vehicle platooning: An application of cooperative driving in which a group of vehicles travels together in the same direction at short inter-vehicle distances. To dynamically form and maintain platoon operations, all the vehicles need to receive periodic data (i.e., direction, speed and intentions) from the leading vehicle.

• Extended sensor sharing: Enables the exchange of raw or processed data gathered through local sensors, or of live video images, among vehicles, roadside units, pedestrians' devices and V2X application servers. Vehicles can thereby extend the perception of their environment beyond what their own sensors can detect and obtain a broader, more holistic view of the local situation.

• Remote driving: A remote driver or a cloud-based V2X application takes control of the vehicle. Examples of remote-driving/teleoperated applications include transport for incapacitated persons, public transportation, remote parking, logistics, and driving in dangerous environments (e.g., mines).

The end-to-end latency and reliability requirements for the aforementioned use cases are presented in Fig. 4, in which three zones are identified. In the LTE-V2X zone, services that require reliability below 90% and latency between 10 and 100 ms can be supported. In the second zone, services that require 99% reliability and latency between 5 and 10 ms may be supported by LTE-V2X and are certainly supported by NR-V2X. Services in the NR-V2X zone, which require latency below 5 ms and reliability above 99%, are supported only by NR-V2X.

Fig. 4. End-to-end latency and reliability requirements for advanced NR-V2X use cases.
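The zone structure of Fig. 4 can be summarized as a simple requirement-to-technology mapping; the sketch below encodes the thresholds described above (the function and its name are ours, purely for illustration):

```python
# Map a service's latency/reliability requirements to the zones of
# Fig. 4 (thresholds as described in the text; illustrative helper).
def v2x_zone(latency_ms: float, reliability: float) -> str:
    if latency_ms < 5 and reliability > 0.99:
        return "NR-V2X only"
    if 5 <= latency_ms <= 10 and reliability <= 0.99:
        return "LTE-V2X possible, NR-V2X certain"
    if 10 < latency_ms <= 100 and reliability < 0.90:
        return "LTE-V2X"
    return "outside the zones of Fig. 4"

print(v2x_zone(3, 0.999))   # remote-driving-style requirement -> NR-V2X only
print(v2x_zone(50, 0.85))   # basic-safety-style requirement -> LTE-V2X
```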
VI. CONCLUSIONS

In this paper, we have presented the design considerations, technology components, functionalities, and key features of NR-V2X towards connected and cooperative autonomous driving. We have discussed how NR-V2X is designed and configured to fulfill a number of stringent QoS requirements associated with autonomous driving in terms of throughput, latency, reliability, security, and positioning. We have also shown that ML can be exploited to significantly improve the performance of V2X communications. It is believed that 5G NR will be a transformative technology for a highly connected and cooperative vehicular world.

Clinical significance of skip lymph-node metastasis in pN1 gastric-cancer patients after curative surgery

Abstract

Background: In addition to the stepwise manner of lymph-node metastasis from the primary tumour, skip lymph-node metastasis (SLNM) has been identified as a low-incidence pattern of metastasis in gastric cancer (GC). So far, neither the mechanism nor the outcome of SLNM has been elucidated completely. The purpose of this study was to analyse the clinical significance and the potential mechanism of SLNM in GC patients with lymph-node metastasis.

Methods: Clinicopathological data and follow-up information of 505 GC patients with lymph-node metastasis were analysed to demonstrate the significance of SLNM in evaluating prognostic outcome. According to the pathological results, all GC patients with lymph-node metastasis were categorized into three groups: patients with perigastric lymph-node metastasis, patients with perigastric and extragastric lymph-node metastasis, and patients with SLNM.

Results: Among the 505 GC patients with lymph-node metastasis, 24 (4.8%) had pathologically identified SLNM. The location of lymph-node metastasis was not significantly associated with the 5-year survival rate or overall survival (OS) (P = 0.194). Stratified survival analysis showed that the status of SLNM was significantly associated with OS in patients with pN1 GC (P = 0.001). The median OS was significantly shorter in the 19 pN1 GC patients with SLNM than in the 100 patients with perigastric lymph-node metastasis (P < 0.001). Case-control matched logistic regression showed that tumour size (P = 0.002) was the only clinicopathological factor that may predict SLNM in pN1 GC patients undergoing curative surgery. Among the 19 pN1 GC patients with SLNM, 17 (89.5%) had metastatic lymph nodes along the common hepatic artery, around the celiac artery or in the hepatoduodenal ligament.

Conclusions: SLNM may be considered a potentially practicable indicator of prognosis among various subgroups of pN1 GC patients.

Introduction

Gastric cancer (GC) is the second leading cause of cancer-related deaths worldwide. Lymph-node metastasis, which reflects cancer-cell biological behaviour, has been identified as one of the most important clinicopathological variables for evaluating the prognosis of GC patients [1,2]. The Union for International Cancer Control (UICC) pathological N (pN) category, based on the number of metastatic lymph nodes, is generally recognized as the optimal lymph-node metastasis category for predicting the overall survival (OS) of patients [3]. Some studies have held that the location of lymph-node metastasis affects OS independently and have shown that extended lymphadenectomy is not significantly associated with an increase in post-operative death rates [4]. Lymph-node metastasis is a sophisticated invasive process throughout the course of GC, covering many kinds of cancer-cell biological behaviour. Referring to the anatomic regions of lymphatic drainage surrounding the stomach, the perigastric lymph nodes should be considered the first-tier lymph nodes that are prone to invasion by cancer cells departing from the primary tumour.
The second-tier lymph nodes surrounding the stomach are called extragastric lymph nodes, which are usually located at the origin of the celiac artery, the anterior aspect of the common hepatic artery, the proximal half of the splenic artery and the lower left portion of the hepatoduodenal ligament [5,6]. In theory, in most GC patients, the spreading cancer cells follow the regular pattern from the first-tier lymph nodes to the second-tier lymph nodes. In practice, however, a few GC patients without perigastric lymph-node metastasis are found on post-operative pathological examination to have extragastric lymph-node involvement, a pattern termed skip lymph-node metastasis (SLNM) [7]. So far, few investigations focusing on SLNM have elucidated its clinical significance or its potential mechanism for the purpose of evaluating prognosis [7,8]. The distribution of SLNM in GC patients has not been fully elucidated, although the occurrence probability of SLNM in GC has been reported to reach 11% [9]. Some authors have reported that patients with SLNM presented with clinicopathological variables and prognosis similar to those of patients with perigastric lymph-node metastasis, and a longer median OS than patients with perigastric + extragastric lymph-node metastasis after surgery [10]. Other researchers found that the prognosis of patients with SLNM was worse than that of patients with perigastric lymph-node metastasis and similar to that of patients with perigastric and extragastric lymph-node metastases [11]. In this study, we aimed to retrospectively analyse the clinicopathological characteristics of 505 GC patients with lymph-node metastasis to explore the clinical significance of SLNM in GC and its potential mechanism.

Patients

A total of 1156 patients were diagnosed with gastric adenocarcinoma and underwent curative gastrectomy plus D2 lymphadenectomy in Tianjin Medical University Cancer Hospital (China) between 2003 and 2011. Eligibility criteria for inclusion in this study were as follows: (i) gastric adenocarcinoma identified by histopathological examination, (ii) histologically confirmed R0 resection, (iii) availability of complete follow-up data, (iv) radical resection and D2 lymphadenectomy performed and (v) no fewer than 16 lymph nodes examined.

Clinicopathological variables

Medical records were reviewed and the following clinicopathological characteristics were analysed: age at the time of surgery (65 years or younger vs older than 65 years), sex (male vs female), location of the primary tumour (the lower, middle or upper third of the stomach vs more than two-thirds of the stomach), size of the primary tumour (4 cm or less vs more than 4 cm), depth of primary tumour invasion (pT1 vs pT2 vs pT3 vs pT4), Lauren classification (intestinal or diffuse vs mixed), number of metastatic lymph nodes (pN0 vs pN1 vs pN2 vs pN3a vs pN3b), type of gastrectomy (subtotal gastrectomy vs total gastrectomy) and number of examined lymph nodes (fewer than 16 vs 16 or more).

Follow-up

After curative surgery, all patients were followed every 6 months for the first 2 years and then once a year until death. B-ultrasonography, computed tomography, chest X-ray and endoscopy were performed at every visit.

Statistical analysis

The median OS was determined using the Kaplan-Meier method. The log-rank test was used to compare survival distributions in the univariate analyses.
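To illustrate this style of survival analysis, the following minimal Python sketch (using the lifelines package on made-up data; the column names and values are hypothetical, not the study's) fits Kaplan-Meier curves for two lymph-node-metastasis groups and compares them with a log-rank test:

```python
# Minimal Kaplan-Meier + log-rank sketch with the lifelines package
# (synthetic data; column names are hypothetical, not the study's).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "os_months": [12, 25, 40, 62, 8, 30, 55, 70, 15, 26],
    "death":     [1,  1,  0,  0,  1, 1,  0,  0,  1,  1],
    "group":     ["SLNM"] * 5 + ["perigastric"] * 5,
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["os_months"], event_observed=sub["death"], label=name)
    print(name, "median OS:", kmf.median_survival_time_)

a = df[df["group"] == "SLNM"]
b = df[df["group"] == "perigastric"]
res = logrank_test(a["os_months"], b["os_months"],
                   event_observed_A=a["death"], event_observed_B=b["death"])
print("log-rank p-value:", res.p_value)
```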
The variables that were deemed to be of potential importance in the univariate analyses (P < 0.05) were included in the multivariate analyses. Multivariate analyses were performed by means of the Cox proportional hazards model, using the forward stepwise procedure for variable selection. Hazard ratios (HRs) and 95% confidence intervals (CIs) were generated. To assess the potential bias in comparing prognostic factors with different clinicopathological characteristics, the Bayesian Information Criterion (BIC) was used; a smaller BIC value indicated a better model for predicting outcome. To overcome the constituent-ratio error among the subpopulations of patients, case-control matched logistic regression was used. The chi-square test was adopted to demonstrate the association between SLNM and the various clinicopathological variables in the logistic regression analyses. The significance level was defined as P < 0.05. All statistical analyses were performed using the statistical package SPSS 22.0 (SPSS Inc., Chicago, IL, USA).

Clinicopathological outcomes

In the present retrospective study, data from 505 consecutive patients (363 males and 142 females) with lymph-node metastasis from primary GC between March 2003 and August 2011 were examined. The median follow-up period was 84 months (range, 6-144 months). The patients' ages ranged from 20 to 87 years, with an average age of 59.1 years. In accordance with the 8th edition of the UICC/American Joint Committee on Cancer (AJCC) pathological TNM classification of GC, 125 (24.8%), 183 (36.2%), 138 (27.3%) and 59 (11.7%) of the 505 patients had pN1, pN2, pN3a and pN3b category GC, respectively (Supplementary Table 1). The type of gastrectomy (total gastrectomy for 178 patients and subtotal gastrectomy for 237 patients) was selected based mainly on the GC treatment guidelines in Japan. Among the 505 patients with lymph-node metastasis, 275 had perigastric lymph-node metastasis, 206 had perigastric + extragastric lymph-node metastasis and 24 had SLNM. The 5-year survival rate of the patients with lymph-node metastasis was 19.0%; 96 patients were alive at the last follow-up, and the median OS of all patients after surgery was 25.0 months.

Univariate survival analysis

The univariate analysis showed that, in GC patients with lymph-node metastasis, age at surgery (P = 0.021), tumour size (P = 0.005), type of gastrectomy (P = 0.001), Lauren classification of the primary tumour (P = 0.050), depth of primary tumour invasion (pT category) (P < 0.001), location of lymph-node metastasis (P < 0.001) and number of metastatic lymph nodes (pN category) (P < 0.001) were significantly associated with the median OS of patients (Supplementary Table 1). We found that (i) the more deeply the primary tumour invaded, the shorter the median OS; (ii) the higher the number of metastatic lymph nodes, the shorter the median OS; and (iii) the median OS of patients with extragastric lymph-node metastasis was shorter than that of patients with perigastric lymph-node metastasis or of patients with SLNM (Figure 1A).

Multivariate survival analysis

All of the variables listed above were included in a multivariate Cox proportional hazards model (forward stepwise procedure) to adjust for the effects of covariates (Supplementary Table 1).
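A minimal sketch of fitting such a multivariate Cox model, again with lifelines on synthetic data (the covariate names are hypothetical stand-ins for the study's variables, and no stepwise selection is shown):

```python
# Minimal Cox proportional hazards sketch with lifelines
# (synthetic data; covariate names are hypothetical stand-ins).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "os_months": rng.exponential(30, n).round(1),
    "death": rng.integers(0, 2, n),
    "age_over_65": rng.integers(0, 2, n),
    "pt_stage": rng.integers(1, 5, n),
    "n_positive_nodes": rng.integers(1, 16, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
print(cph.summary[["coef", "exp(coef)", "p"]])  # hazard ratios and p-values
```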
In that model, age at surgery (P = 0.006), depth of primary tumour invasion (P = 0.002) and number of metastatic lymph nodes (P < 0.001) were significantly associated with the median OS of GC patients with lymph-node metastasis. However, the location of metastatic lymph nodes was not significantly associated with the median OS of GC patients with lymph-node metastasis (P = 0.194). Thus, we analysed the median OS of patients with SLNM at each pN stage. Using stratified survival analysis, we found that SLNM was significantly associated with the median OS in patients with pN1 GC (perigastric lymph-node metastasis vs SLNM, P < 0.001; Supplementary Table 2).

Univariate and multivariate survival analyses of pN1 GC patients

The univariate analyses showed that the type of gastrectomy (P = 0.031) and the location of metastatic lymph nodes (P = 0.002) were significantly associated with the median OS of pN1 GC patients (Table 1). We included these two variables in a multivariate Cox proportional hazards model (forward stepwise procedure) to adjust for the effects of covariates. The results showed that the location of metastatic lymph nodes (HR = 1.675; 95% CI = 1.184-2.370, P = 0.004) and the type of gastrectomy (HR = 1.624; 95% CI = 1.022-2.581, P = 0.040) were significantly associated with the median OS of pN1 GC patients (Table 1). The median OS was longer in GC patients who underwent subtotal gastrectomy than in those who underwent total gastrectomy (69.0 vs 39.0 months, log-rank P = 0.031); the median OS of patients with SLNM was shorter than that of those with perigastric lymph-node metastasis (26.0 vs 62.0 months, P < 0.001); however, there was no statistical difference between the median OS of patients with SLNM and those with perigastric + extragastric lymph-node metastases (26.0 vs 36.0 months, P = 0.642; Figure 1B).

BIC value performance

BIC values were obtained using logistic regression according to the survival status of patients. We found that the BIC value of the location of metastatic lymph nodes was lower than that of the type of gastrectomy (28.683 vs 32.467) in patients with pN1 GC (Table 1).

Associations between SLNM and various clinicopathological variables in pN1 GC patients based on case-control matched logistic regression

We adopted case-control matched logistic regression (using the forward stepwise procedure) to directly analyse the various clinicopathological variables according to the status of SLNM. We matched 125 patients in terms of sex (male vs female), age at surgery, tumour size, tumour location (lower third vs middle third vs upper third vs more than two-thirds of the stomach), depth of primary tumour invasion (pT1 vs pT2 vs pT3 vs pT4), Lauren classification of the primary tumour (intestinal vs diffuse vs mixed), number of lymph nodes and type of gastrectomy (subtotal vs total). The results showed that tumour size (P = 0.002, χ² = 30.476) was the only clinicopathological variable associated with SLNM in pN1 GC patients undergoing curative surgery (Table 2). Among the pN1 GC patients with SLNM, 17 (89.5%) had positive lymph nodes along the common hepatic artery, around the coeliac artery or in the hepatoduodenal ligament (Supplementary Table 3).

Discussion

Lymph-node metastasis from GC basically follows the law of stepwise spread through the anatomical regional lymphatic bed; however, predicting second-tier lymph-node metastasis, including SLNM, is impossible.
Therefore, D2 lymphadenectomy is recommended as the key procedure in curative gastrectomy, even for some early-stage GC patients with suspected lymph-node metastasis [7,12,13]. Some studies have suggested that minimally invasive lymph-node dissection should supplement treatment for early-stage GC patients who undergo endoscopic mucosal resection, wedge resection or laparoscopy-assisted gastrectomy, taking into consideration the potential for lymph-node metastases and SLNM [14,15]. Kim et al. [7] analysed the data of 997 GC patients with lymph-node metastasis and found that patients with SLNM showed a lower frequency of vascular invasion than those with first-tier lymph-node metastasis, and a smaller tumour size and lower incidences of lymphatic, vascular and perineural invasion than those with stepwise second-tier lymph-node metastasis. Moreover, researchers agree that, at present, it is impossible to predict before surgery whether a patient has SLNM; therefore, D2 lymphadenectomy is recommended as the optimal treatment strategy for patients with potential SLNM. Theoretically, SLNM in GC can arise in the following situations: (i) true SLNM, which may be induced by blockage of the afferent lymphatic vessels of some first-tier lymph nodes [16]; and (ii) false SLNM, in which cancer cells have in fact invaded the extragastric lymph nodes gradually through the local lymphatic vessels, but perigastric lymph-node metastasis cannot be detected because of morphological and structural damage to the first-tier lymph nodes caused by cancer-cell proliferation [17], because only micro-metastases or isolated tumour cells are present in the first-tier lymph nodes [18], or because an insufficient number of lymph nodes was examined [19]. In the present study, 24 (4.8%) of the 505 GC patients were pathologically identified as SLNM cases after curative gastrectomy with D2 lymphadenectomy. The median OS of the patients with SLNM was 23.0 months, which is shorter than that of patients with perigastric lymph-node metastasis (32.0 months) and longer than that of patients with perigastric + extragastric lymph-node metastasis (19.0 months). No statistically significant difference in median OS was observed between patients with SLNM, patients with perigastric lymph-node metastasis and patients with perigastric + extragastric lymph-node metastasis in this study. Therefore, we considered that SLNM should be deemed potential perigastric + extragastric lymph-node metastasis in terms of patient prognosis and pathological outcomes. Furthermore, using stratified survival analysis, we found that SLNM is applicable only for distinguishing differences in median OS among subgroups of pN1 stage GC patients (Supplementary Table 2). Upon multivariate survival analysis, the location of metastatic lymph nodes (P = 0.004) and the type of gastrectomy (P = 0.040) were identified as independent predictors of the median OS of pN1 stage patients after curative surgery (Table 1). Among these independent prognostic predictors, the location of metastatic lymph nodes was demonstrated to be the strongest factor for evaluating the prognosis of pN1 stage patients after curative surgery, owing to the low BIC value of SLNM (Table 1). Additionally, the logistic regression analysis between SLNM and other clinicopathological variables showed that tumour size was the only factor related to SLNM in pN1 stage patients in this study.
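A logistic-regression check of this kind could be sketched as follows with statsmodels on synthetic data (variable names are hypothetical stand-ins for the matched study variables; this is not the study's actual matched analysis):

```python
# Minimal logistic regression sketch with statsmodels (synthetic
# data; variable names are hypothetical stand-ins).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 125
df = pd.DataFrame({
    "tumour_gt_4cm": rng.integers(0, 2, n),
    "age_over_65": rng.integers(0, 2, n),
    "total_gastrectomy": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to tumour size, for illustration.
logit_p = -2.0 + 1.2 * df["tumour_gt_4cm"]
df["slnm"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["tumour_gt_4cm", "age_over_65", "total_gastrectomy"]])
model = sm.Logit(df["slnm"], X).fit(disp=0)
print(model.summary2().tables[1][["Coef.", "P>|z|"]])
```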
Compared with previous reports, our study included a higher proportion of advanced GC patients (98.4%) and presented a comparatively shorter OS. We therefore believe that many patients with SLNM in this study might have been false SLNM cases, in whom the perigastric lymph nodes were destroyed by cancer cells or the examined lymph-node counts were insufficient. In conclusion, the causes of SLNM in GC remain unclear in clinical settings. The pN1 stage GC patients with SLNM presented a worse prognosis than those without SLNM. The examined lymph-node count, based on standard lymph-node dissection (D2 lymphadenectomy), should be sufficient to improve the accuracy of SLNM identification in GC patients after surgery. Tumour size, as an important factor for predicting SLNM, suggests that neoadjuvant chemotherapy should be recommended for patients with large gastric tumours in order to eradicate micro-metastases in the lymphatic system [20].

Supplementary data

Supplementary data are available at Gastroenterology Report online.

Funding

This study was supported in part by grants from the Programs of National Natural Science Foundation of China.
Prevalence of social media networking on academic achievement and psychological health of undergraduate students in Federal Universities in Nigeria

Social media are media that allow users to meet online through the internet and communicate in social forums. To investigate the prevalence of social media networking as it relates to the academic achievement and psychological health of undergraduate students in federal universities in Nigeria, the study adopted a correlational survey research design. The study was carried out in federal universities in Nigeria. The population comprised all 28,120 undergraduates in the faculties of Education and Engineering in the 40 federal universities in Nigeria in the 2015/2016 academic session. Simple and stratified random sampling techniques were used to draw a sample of 351 200-level undergraduate students in the faculties of Education and Engineering in 4 federal universities sampled from the 40 federal universities in Nigeria. Six research questions and four null hypotheses guided the study. Strong reliability evidence was found for the social media networking and psychological health scales, which yielded coefficients of 0.78 and 0.74 respectively. The instruments for data collection were a well-structured interview schedule, a self-report questionnaire on a four-point Likert-type format and students' raw scores, which were administered to elicit information on students' social media networking, whilst an achievement test was administered to ascertain their academic achievement. Research questions 1 and 2 were answered using mean and standard deviation, whilst data for research questions 3-6 were analyzed using Pearson's r and R-square. The hypotheses were tested using the analysis of variance (ANOVA) statistic at the 0.05 level of significance. It was found that students' social media networking significantly predicts their academic achievement and psychological health. Based on the findings of this study, the researchers recommended, among others, that parents, peers and teachers should be on guard to ensure that students use social networking for an appropriate period and should help these students become aware of its negative effects.

Introduction

In recent times, there has been a dramatic rise in the use of the internet, a large system of connected computers, phones and notepads around the world, to share academic and other pleasurable information through online messages. Social network sites were created to make friends and to stay in touch with family members who are away. The drastic increase in the popularity of social network sites in the last decade has probably been driven by the fact that college and university students, as well as teens, use them extensively for global access. Networking refers to the connection of two or more computers to communicate with one another, that is, when millions of computers in different locations around the world are connected together to allow users to send and receive messages from one another. Social networking is an online service, platform, or site that focuses on building and reflecting social networks or social relations among people who, for example, share interests and/or activities (Adenubi, Olalekan, Afolabi & Opeoluwa, 2013).
Social media are media that allow users to meet online via the internet and communicate in social forums such as Facebook, Twitter and other chat sites, where users generally socialize by sharing news, photos, ideas and thoughts, or respond to issues and other content with other people (Buhari, Ahmad & HadiAshara, 2014). According to Ibidapo (2014), social media networking (SMN) is the 'new media' that speeds up conversations in a more interactive way, which makes communication more effective and worthwhile. It is an online medium that takes communication beyond the limitations of the traditional media, which most often deliver content but do not permit readers, viewers or listeners to participate in the formation or development of the content. In other words, SMN is a category of online media where people talk, participate, share, network, and bookmark online. Social media is simply a system that disseminates information 'to' others (Hartshorn, 2010). SMNS can be used to describe community-based websites, online discussion forums, chatrooms and other social spaces online (Vangie, 2016). There are quite a number of online SMNS in the world, such as YouTube, Twitter, LinkedIn, Facebook, Pinterest, Google+, Tumblr, Meetup, Xing, Renren, Disqus, Snapchat, Instagram, Vine, WhatsApp, VK.com, Badoo and Medium. Users of social media networking websites interact by adding friends, connecting on profiles, joining groups and having discussions. As observed by Clement (1990), users develop informal collaborative networks in organizations, and studies have shown that one of the most effective channels for gathering information and expertise within an organization is its informal networks of collaborators, colleagues and friends. However, some of these social media networks, such as Twitter, WhatsApp and Facebook, have become a raging craze for most individuals, especially the youth. This dramatic interest and involvement of youth in SMN has been a source of concern to all and sundry, especially parents, who are mostly concerned about how media exposure and content may influence their children. These SMNS were created for fun at leisure time and for advertisement, and they are also good places to study; for instance, there are many educational groups on Facebook and LinkedIn. There are also many other advantages of social network sites; indeed, many people meet their partners on them. According to Blogger (2012), however, the negative effects of these SMNS seem to outweigh the positive ones. Researchers indicate that these sites have caused some potential harm to society, and students become victims of social networks more often than anyone else. This is because, when they are studying or searching for their course material online, they get attracted to these sites in order to kill the boredom of their study time, and this diverts their attention from their study. The attraction makes them forget their major reason for using the internet; it wastes their time and sometimes makes them unable to deliver their work in the specified time frame, which consequently leads to low grades in school work. It can also lead to loss of motivation among students: students' motivational levels are reduced by the use of these SMNS, as they rely on the virtual environment instead of gaining practical knowledge from the real world (Blogger, 2012).
Blogger further explained that other negative effects of social networking websites include the following. Reduced learning and research capabilities: students seem to rely more on the information easily accessible on these SMNS and the web, which reduces their learning and research capabilities; some of them even smuggle their phones into the exam hall to get answers to exam questions, which sometimes proves impossible and leads to exam failure. Multitasking: students who get involved in activities on social media sites while studying have their focus of attention reduced, which results in a lack of concentration and, consequently, poor academic performance. Moreover, the more time students spend on these social media sites, the less time they spend socializing in person with others; this reduces their communication skills and their ability to communicate and socialize effectively in person, yet effective communication skills are key to success in the real world. Reduced command of language usage and creative writing skills: students mostly use slang words or shortened forms of words on social networking sites and start relying on computer grammar- and spell-check features, which reduces their command of the language and their creative writing skills. Many Nigerian students have lost interest in reading because they are addicted to SMNS, while some hardworking students have become lazy as a result of bad company on SMNS; some Nigerian students were even introduced to examination malpractice ('exam runz') on SMNS. This has contributed to the lowering of Nigeria's educational standard in the form of numerous certified illiterates in Nigeria (Penkraft, 2015). According to Penkraft (2015), SMNS were not intended to decrease the academic performance of students, but rather to be used for academic purposes. The enthusiasm of Nigerian students for SMNS is one of the causes of their poor academic performance: most Nigerian students prefer to exhaust all their time chatting online during their lesson periods, and they do not even have time to do their homework or read for examinations. These activities have a tremendously negative influence on their academic achievement. According to Adimora (2016), achievement is accomplishing whatever goals one sets for oneself. Academic achievement is the overall academic performance of a student in school, which can be assessed by the use of tests and examinations; it is the attainment of a standard of academic excellence. Ask (2015) explained academic achievement as a student's success in meeting short- or long-term goals in education. In the big picture, academic achievement means completing high school or earning a college degree; in a given semester, high academic achievement places a student on the honor roll. Teachers and school administrators can measure students' academic achievement through school-wide standardized tests, state-specific achievement tests and classroom assessment. Standardized and state tests enable educational professionals to see how students in a school are achieving in a variety of subjects compared with those at other schools and geographic locations, while classroom assessments enable teachers to see how well students are learning concepts in a specific class (Ask, 2015). An indication of the quality of learning that takes place in the classroom is the performance of students in external examinations, especially certificate examinations.
Such standard examinations can be seen as a common denominator for comparing the academic attainment of all students at the same educational level; one may therefore consider the performance of students at the end of secondary education in this regard. The West African Senior School Certificate Examination (WASSCE) presents an acceptable picture of the standard of learning at the end of secondary school education in Nigeria. Over the years, the performance of students in the WASSCE has not been encouraging: in recent years, the percentage of students who obtained credit passes in five subjects, including the core subjects of English Language and Mathematics, has been below average (Belo-Osagie, 2011). However, the overuse of these sites on a daily basis seems to have many negative effects on the physical and psychological health of students, because it makes them lethargic and unmotivated to make contact with people in person. Excessive use of these sites could be detrimental to these students' psychological health (Blogger, 2012). According to About.com (2006), psychological health is the mental state of someone who is functioning at a satisfactory level of emotional and behavioural adjustment. It may also include an individual's ability to enjoy life and to create a balance between life activities and efforts to achieve psychological resilience. The World Health Organization (WHO) explained mental health as subjective well-being, perceived self-efficacy, autonomy, competence, intergenerational dependence, and self-actualization of one's intellectual and emotional potential (World health report, 2001). WHO further states that the well-being of an individual encompasses the realization of their abilities, coping with the normal stresses of life, productive work and contribution to their community (Mental health, 2014). According to Cornblatt (2009), SMNS such as Facebook and MySpace seem to provide people with a false sense of connection that ultimately increases loneliness in people who feel lonely. Cornblatt further asserted that social networking can foster feelings of sensitivity to disconnection, which can lead to loneliness. Furthermore, if individuals tend to trust people and have a significant number of face-to-face interactions, they are likely to assess their own well-being as relatively high. The researchers found that online social networking plays a positive role in subjective well-being when the networking is used to facilitate physical interactions, but networking activities that do not facilitate face-to-face interactions tend to erode trust, and this erosion can negatively affect subjective well-being (independent of the online social interaction itself). It was also revealed by Cornblatt that the overall effect of networking on individual welfare is significantly negative. Research carried out by Oyewumi, Isaiah and Adigun (2015) revealed that excessive, uncontrolled or compulsive social networking use has negative effects on the psychological well-being of adolescents, such as loneliness; they also noted that incessant use of the internet is associated with various measures of loneliness and stress among adolescents. A study from the University of Michigan collected data on Facebook users and how usage correlated with their moods: the more passionate users were, overall, unhappier than those who used the site less, and over time passionate users also reported lower satisfaction with their lives overall.
Fear of missing out is a phenomenon that occurs when one feels pressure to do what everyone else is doing, attend every event, and share every life experience. It can evoke anxiety and cause social media users to question why everyone is having fun without them. Surveys have found that people feel insecure after using Pinterest because they feel that they are not crafty or creative enough. Pinterest serves as a giant virtual idea and inspiration board; it lets people share pictures, creative thoughts, or (especially) before-and-after pictures of projects that others can pin, save, or duplicate (Milanovic, 2015). Facebook and Twitter can make people feel that they are not successful or smart enough (Tavakoli, 2015). A 33-year-old Nigerian named Collins Obianke, who was at a university in Malaysia, was charged in court by the police in that country for allegedly engaging in an online fraud scam: Collins allegedly defrauded two women of about $2,389 after convincing them that they had received a gift from overseas that required them to deposit $2,389 into a bank account. Undoubtedly, this online transaction caused the young man and the women serious psychological ill health (Danchen, 2016). The relationship between students' gender and their social networking is another area of interest to the researchers that needs to be addressed. Gender, as a psychological construct, has been used to describe maleness and femaleness; as a term, it describes the behaviour and attitudes expected of an individual on the basis of being born male or female (Adimora, 2016). There are inconsistencies in students' academic achievement through technological interactions. Demographic research on online social media network users reveals gender differences. A study reveals that SMNS such as Facebook, Pinterest and Instagram are popular with females, and that overall females subscribe to online social network platforms to a greater extent than men. Nigerian female undergraduate students spend more time on Facebook than males and self-report higher levels of anxiety if they are not able to access the platform. In an exploratory Nigerian collegiate study of gender, academics, and self-efficacy, Nigerian males were more likely to use SMN for academic pursuits, compared with females, who prefer using it for pleasure (Issa, Isaias & Kommers, 2016). Research on gender issues in SMN and students' psychological health is still scarce and calls for urgent research attention. For instance, a study of Swedish SMN users found that women were more likely to have expressions of friendship, specifically in the areas of publishing photos of their friends, naming their best friends, and writing poems to and about their friends. Women were also more likely to have expressions related to family relationships and romantic relationships. One of the key findings of this research is that those men who did have expressions of romantic relationships in their profiles had expressions just as strong as the women's; however, the researcher speculated that this may be partly due to a desire to publicly express heterosexual behaviours and mannerisms rather than merely to express romantic feelings (Sveningsson, 2007). Research suggests that females are more likely to be on the receiving end of cyberbullying than of traditional face-to-face bullying, and that new forms of sexual and gender harassment, such as 'sexting', 'morphing', 'virtual rape', and 'revenge porn', have emerged, of which females seem to be the victims.
Some of the differences between face-to-face bullying and cyberbullying serve to exacerbate the impact of cyberbullying on victims. The longer 'shelf life' of cyberbullying text or images, for example, can place the victim in harm's way for longer periods of time compared with face-to-face bullying. Previous studies of middle and high school students have found higher proportions of girls reporting that they have been victims of cyberbullying. Females are also more likely to be perpetrators of cyberbullying, primarily targeting other females, sometimes within their friendship groups (Faucher, Jackson & Cassidy, 2014). However, it is not yet clear whether students' poor academic achievement and poor psychological health can be attributed to their involvement in social media networking. To the best of the researchers' knowledge, the relationship between social media networking and students' academic achievement has research evidence in some Western countries, but it has not been empirically investigated in Nigeria, especially as it relates to the psychological health and academic achievement of undergraduate students in Nigerian universities. On that note, therefore, the relationship between social media networking and the poor academic achievement and poor psychological health of undergraduate students in Nigeria is still unknown and calls for urgent research attention. Against this background, the researchers investigated social media networking as a predictor of the academic achievement and psychological health of undergraduate students in federal universities in Nigeria. The problem for this study, stated in question form, is therefore: what is the predictive power of social media networking on the academic achievement and psychological health of undergraduate students in federal universities in Nigeria?

Purposes of the Study

The general purpose of this study is to investigate the predictive power of social media networking on the academic achievement and psychological health of undergraduate students in federal universities in Nigeria. Specifically, the study ascertained:
1. The nature of students' online social networking.
2. The reasons for use of social networking sites.
3. Social networking as a predictor of students' academic achievement.
4. Social networking as a predictor of students' psychological health.
5. Male and female students' social networking as a predictor of their academic achievement.
6. Male and female students' social networking as a predictor of their psychological health.

The null hypotheses formulated to guide the study are:
Ho1: Students' social networking has no significant predictive power on their academic achievement.
Ho2: Students' social networking has no significant predictive power on their psychological health.
Ho3: Male and female students' social networking does not significantly predict their academic achievement.
Ho4: Male and female students' social networking does not have a significant predictive power on their psychological health.

Methods

A correlational survey research design was adopted for this study.
A correlational survey, according to Bernstein, Penner, Clarke-Stewart and Roy (2006), examines relationships among variables in order to describe research data fully, to test predictions and to suggest new hypotheses about why people think and act as they do. The population comprised 28,120 undergraduate students in the faculties of Education and Engineering in the 40 federal universities in Nigeria in the 2015/2016 academic session. Using simple random and stratified random sampling techniques, a sample of 351 200-level undergraduate students in the faculties of Education and Engineering in 4 federal universities, sampled from the 40 federal universities in Nigeria, was used for the study. The instruments used for the study were students' annual results and a questionnaire of two clusters, the social media networking scale and the psychological health scale. The questionnaire has responses that run on a four-point scale: Very Often (VO) = 4 points, Often (O) = 3 points, Sometimes (S) = 2 points, Never (N) = 1 point. The instrument was validated by three experts, and strong reliability evidence was found for the social media networking and psychological health scales, which yielded coefficients of 0.78 and 0.74 respectively. Research questions 1 and 2 were answered using mean and standard deviation, whilst data for research questions 3-6 were analyzed using Pearson's r and R-square. The hypotheses were tested with the ANOVA statistic at the 0.05 level of significance. For each respondent, an overall mean score and standard deviation for all the items were computed; an overall mean score of 2.50 and above indicated acceptance of an item, whilst a mean score below 2.50 indicated rejection.

Results

The results of this study are presented in line with the research questions and corresponding hypotheses. The data presented in Table 1 cover 11 items on students' SMN. Among the 11 items relating to students' social networking, the 4 items on LinkedIn, Pinterest, Google+ and Google search, which are associated with academic activities, received mean scores of 1.91, 2.21, 2.10 and 2.41 respectively, and item 8, which does not relate to academic activity, received a mean rating of 2.32 because most students did not have in-depth knowledge of it and were not engaged in it. It is thus evident that students' use of the SMN items that relate to academic activities received mean ratings below 2.50. On the other hand, the SMN items that relate to pleasure, such as YouTube, Twitter, Facebook, WhatsApp, Instagram and Badoo, received mean ratings of 3.18, 3.05, 3.09, 3.47, 3.21 and 2.95, all above 2.50. Using the benchmark of 2.50, these six items in Table 1 reveal that students prefer social networking that is not related to their academics over the sites that would boost their academic achievement. The total mean score of 2.72 suggests that students derive pleasure from engaging in SMN with little academic gain.
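As a minimal sketch of this scoring scheme (synthetic responses and hypothetical item labels, not the study's data), the following computes each item's mean and standard deviation on the four-point scale and applies the 2.50 benchmark:

```python
# Four-point Likert scoring with a 2.50 acceptance benchmark
# (synthetic responses; item labels are hypothetical).
import numpy as np

# Rows = respondents, columns = items; codes: VO=4, O=3, S=2, N=1.
rng = np.random.default_rng(3)
items = ["WhatsApp", "Facebook", "LinkedIn", "Google search"]
responses = rng.integers(1, 5, size=(351, len(items)))

means = responses.mean(axis=0)
stds = responses.std(axis=0, ddof=1)
for item, m, s in zip(items, means, stds):
    verdict = "accepted" if m >= 2.5 else "rejected"
    print(f"{item}: mean={m:.2f}, SD={s:.2f} -> {verdict}")
```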
Table 2 presents data on students' reasons for social media networking. The data cover 12 items; eight of the 12 items concerned non-academic uses of social networking, and seven of these (items 1, 2, 3, 4, 5, 6 and 12) had mean ratings of 3.34, 2.57, 3.48, 2.95, 3.14, 3.68 and 3.21 respectively. According to the benchmark, which indicates acceptance for items with mean ratings above 2.50, these mean scores indicate that the students enjoy SMN that does not relate to academic activities, such as downloading pictures and/or music, interacting and chatting with friends, watching pornographic movies, viewing photos and business propagation. The items that relate to academic activities, items 7, 8, 9 and 10, had mean ratings of 2.21, 2.42, 2.16 and 1.68 respectively; in other words, the benchmark, which indicates rejection for items with mean ratings below 2.50, reveals that these students had poor mean scores on issues relating to academic activities. The total mean score of 2.80 indicates that students engage more in distracting SMN than in SMN that pertains to academic activities.

The prediction of academic achievement from SMN reveals a t-value of 23.681 at 672 degrees of freedom and a mean square of 443.681. This reveals that the relationship between social networking and students' academic achievement is negative and linear. The adjusted R-square of 0.46 means that the predictor variable contributes only 46%, revealing a weak predictive power of social networking on students' academic achievement; the remaining 54% could result from other significant factors. The corresponding hypothesis, which predicted no significant predictive power of students' social networking on their academic achievement, was further subjected to analysis of variance (ANOVA), as shown in Table 4. For the predictor variable (social networking) and the criterion variable (academic achievement), the Pearson's correlation coefficient (r) is 0.030, with a significance value of 0.802, which was above the 0.05 probability level at which the null hypothesis was tested. Therefore, the null hypothesis, which predicted no significant relationship between Nigerian undergraduate students' social networking and their academic achievement, is accepted.
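A minimal sketch of the correlation statistics used here, on synthetic scores (the variables are hypothetical; the adjusted R-square formula is the standard one for a single predictor):

```python
# Pearson's r, R-square and adjusted R-square for one predictor
# (synthetic scores; standard formulas).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 351
networking = rng.normal(2.8, 0.5, n)    # mean item scores
achievement = rng.normal(55, 10, n)     # achievement-test scores

r, p_value = stats.pearsonr(networking, achievement)
r2 = r ** 2
k = 1  # number of predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(f"r={r:.3f}, p={p_value:.3f}, R^2={r2:.3f}, adjusted R^2={adj_r2:.3f}")
```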
The relationship between students' social media networking and their psychological health reveals a t-value of 42.413 at 652 degrees of freedom and a mean square of 372.891, indicating that the predictive power of SMN on students' psychological health is negative and weak. The adjusted R-square, which is the magnitude of the predictive power of students' social networking on their psychological health, is 0.43; that is, social networking contributes 43%, revealing a negative and weak relationship between the predictor and criterion variables. This shows that students' use of social networking negatively and poorly predicts their psychological health. The corresponding hypothesis, which predicted no significant predictive power of students' social networking on their psychological health, was further subjected to the analysis of variance (ANOVA) statistic, as shown in Table 6. For the predictor variable (social networking) and the criterion variable (psychological health), the Pearson's correlation coefficient (r) is 0.320, with a significance value of 0.415, which was above the 0.05 probability level at which the null hypothesis was tested. The null hypothesis, which predicted no significant predictive power of social networking on students' psychological health, is therefore accepted.

Male and female students' social media networking, as it predicts their academic achievement, shows a t-value of 23.015 for males and 25.827 for females; the degrees of freedom for both are 487, and the mean squares are 412.124 for males and 332.109 for females. This indicates that the relationship of both male and female social networking to academic achievement is negative and weak. The adjusted R-square values, the magnitude of the predictive power of students' gender in social networking on academic achievement, are 0.44 for males and 0.47 for females; that is, male and female students' social networking contributes 44% and 47% respectively, revealing low predictive power for both. This indicates that neither male nor female engagement in social networking has a significant predictive power on students' academic achievement. The corresponding hypothesis, which predicted no significant predictive power of male and female students' social networking on their academic achievement, was further subjected to analysis of variance (ANOVA), as shown in Table 8. The Pearson's correlation coefficients (r) are 0.432 for males and 0.473 for females, with significance values of 0.431 and 0.398 respectively; these were above the 0.05 level of significance at which the null hypothesis was tested. The hypothesis, which predicted no significant predictive power of male and female students' social networking on their academic achievement, is therefore accepted.

Male and female students' social networking, as it predicts their psychological health, reveals a t-value of 28.019 for males and 26.732 for females; the degrees of freedom are 487, and the mean squares are 426.923 for males and 435.206 for females. This indicates that the predictive power of both male and female students' social networking on their psychological health is negative and weak. The adjusted R-square values, the magnitude of the predictive power of male and female students' social networking on their psychological health, are 0.48 and 0.46 for males and females respectively; that is, male and female students' social networking contributes 48% and 46% respectively, revealing low predictive power for both. This indicates that neither male nor female social networking has a significant predictive power on psychological health. The corresponding hypothesis, which predicted no significant predictive power of male and female students' social networking on their psychological health, was further subjected to analysis of variance (ANOVA), as shown in Table 10.
The predictive power of male and female students' social networking on their psychological health is negative and weak: for social networking as the predictor variable and psychological health as the criterion variable, the Pearson correlation coefficients (r) for males and females are 0.489 and 0.451, at probability levels of 0.42 and 0.47 respectively. These were above the 0.05 level of significance at which the null hypothesis was tested, so the hypothesis, which predicted "no significant predictive power of male and female students' social networking on their psychological health", is accepted.

Discussion

The researchers set out to investigate the prevalence of social media networking as it relates to the academic achievement and psychological health of undergraduate students in federal universities in Nigeria. The first purpose was to ascertain the nature of students' online social networking. The result indicates that students derive pleasure from engagement in social media networking with little academic gain. In support of this assertion, Penkraft (2015) argued that the enthusiasm of Nigerian students for SMNS is one of the causes of their poor academic performance. Many Nigerian students spend their lesson periods in online chatting, which leaves them with less time for their homework and for reading for examinations. These activities have a tremendously negative influence on their academic achievement. This study also found that students engage more in distracting social networking than in uses that pertain to academic activities. Affirming this, Blogger (2012) pointed out that when students search for course material online, they are drawn to these sites to relieve the boredom of study time; the sites divert their attention from their studies and make them forget their main reason for using the internet. The resulting waste of time sometimes leaves them unable to deliver their work within the specified time frame, which consequently leads to low grades in their school work. In other words, this study revealed a negative and weak relationship between students' social networking and their academic achievement. This is supported by Blogger (2012), who stated that students who engage in activities on SMNS while studying have their focus of attention reduced, which results in a lack of concentration and a reduction in academic performance. The present study found a negative and weak predictive power of social networking on students' psychological health. In affirmation of this assertion, Oyewumi, Isaiah and Adigun (2015) reported that excessive, uncontrolled or compulsive social networking use has negative effects on the psychological well-being of adolescents, such as loneliness and stress. Milanovic (2015) found that Facebook use correlates with users' moods: the most passionate users were overall unhappier than those who used the site less. Similarly, it has been found that people feel insecure after using Pinterest because they feel they are not crafty or creative enough. Tavakoli (2015) asserted that Facebook and Twitter can make people feel that they are not successful or smart enough. This study found that neither students' masculinity nor femininity in the use of social networking predicts their academic achievement.
However, the question of gender with regard to social networking and academic achievement remains inconclusive, as previous studies do not agree. For instance, Issa, Isaias and Kommers (2016) report that SMNS such as Facebook, Pinterest and Instagram are popular with females, that overall females subscribe to online social network platforms to a greater extent than men, and that Nigerian males were more likely to use online social media for academic pursuits, whereas females preferred using it for pleasure. This study found that neither male nor female students' social networking has significant predictive power on their psychological health. Research on gender, social networking and students' psychological health remains scarce and calls for urgent research attention. For instance, a study of Swedish SNS users found that women were more likely to post expressions of friendship, specifically by publishing photos of their friends, naming their best friends, and writing poems about their friends; women were also more likely to post expressions related to family relationships and romantic relationships. One key finding of that research is that the men who did post expressions of romantic relationships in their profiles expressed them just as strongly as the women did, although the researcher speculated that this may partly reflect a desire to publicly assert heterosexual behaviours and mannerisms rather than merely to express romantic feelings (Sveningsson, 2007).

Recommendations

- Parents, peers and teachers should ensure that students use social networking for appropriate periods, and should help students become aware of the negative effects and of what they lose in the real world by sticking to social networking sites.
- There should be a policy guarding against the misuse of uneducative social networks.
- Lecturers in tertiary institutions should be encouraged to use social media in teaching, for example for assignments, term papers and quizzes. This will promote better use of social media by students.

Conclusion

The results of this study align with the findings of previous research: social media networking weakly and negatively predicts the academic achievement and psychological health of undergraduate students in federal universities in Nigeria, and neither male nor female engagement in social media networking has significant predictive power on academic achievement or psychological health.
Bortezomib Amplifies Effect on Intracellular Proteasomes by Changing Proteasome Structure

The proteasome inhibitor Bortezomib is used to treat multiple myeloma (MM). Bortezomib inhibits protein degradation by inactivating proteasomes' active-sites. MM cells are exquisitely sensitive to Bortezomib, exhibiting a low-nanomolar IC50, which suggests that minimal inhibition of degradation suffices to kill MM cells. Instead, we report that a low Bortezomib concentration, contrary to expectation, achieves severe inhibition of proteasome activity in MM cells: the degree of inhibition exceeds what one would expect from the small proportion of active-sites that Bortezomib inhibits. Our data indicate that Bortezomib achieves this severe inhibition by triggering secondary changes in proteasome structure that further inhibit proteasome activity. Comparing MM cells to other, Bortezomib-resistant, cancer cells shows that the degree of proteasome inhibition is greatest in MM cells and only there leads to proteasome stress, providing an explanation for why Bortezomib is effective against MM but not other cancers.

Introduction

The proteasome inhibitor (PI) Bortezomib is used as a first- and second-line treatment of multiple myeloma (MM) (Anderson et al., 2011). Proteasomes' (Finley, 2009; Lander et al., 2012) main function is to degrade ubiquitinated proteins in a controlled manner (Bedford et al., 2011; Finley, 2009; Glickman and Ciechanover, 2002). Proteasomes comprise a cylindrical core particle (CP) capped at each end by a regulatory particle (RP) (Lander et al., 2012; He et al., 2012). The RP captures and denatures ubiquitin-marked protein substrates and translocates their unfolded polypeptide chains towards proteolytic active-sites in the CP's lumen (Finley, 2009). The CP contains three types of active-sites, each of which comprises a peptide-docking area and an exposed catalytic threonine. PIs including Bortezomib prevent protein hydrolysis by forming covalent adducts with the catalytic threonines of the active-sites (Groll et al., 2009; Beck et al., 2012). Some proteasome activity is necessary for any cell to live (Heinemeyer et al., 1997), not just MM cells (Craxton et al., 2012; Suraweera et al., 2012). Although the Bortezomib concentrations at which cells of different cancers die vary widely, from low-nanomolar to high-micromolar IC50 concentrations, cells of the (incurable) B-cell malignancy MM are exquisitely sensitive (Shabaneh et al., 2013), hence Bortezomib's success in the treatment of MM (Anderson et al., 2011). Intriguingly, Bortezomib at its low IC50 concentration causes only a small reduction in proteasomes' ability to degrade proteins (Kisselev et al., 2006; Shabaneh et al., 2013). The reduction is small because Bortezomib preferentially inhibits the chymotrypsin-like (CT-like) active-site but, at IC50, does not inhibit the caspase-like and trypsin-like active-sites (Fig. S2A) (Kisselev et al., 2006); protein substrates, however, can be hydrolysed by any of the three types of active-sites (Kessler et al., 2001; Kisselev et al., 2006). Thus, the question arises why minimal inhibition of proteasome function suffices to induce apoptosis in MM cells but not in Bortezomib-insensitive cells. Several explanations have been proposed, including a high proteasome workload in MM cells (Bianchi et al., 2009; Meister et al., 2007; Shabaneh et al., 2013).
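Before the results, a back-of-the-envelope calculation helps fix the expectation being overturned. If each class of active-site contributes some share of total degradation capacity, then inhibiting only part of one class should leave most capacity intact. The contribution weights below are illustrative assumptions, not measured values; the 40% CT-site inhibition figure corresponds to the lysate observation reported in the Results.

```python
# Back-of-the-envelope sketch: expected residual degradation capacity when
# Bortezomib blocks only part of the CT-like sites.  The per-class
# contribution weights are illustrative assumptions, not measured values.
site_weight = {"CT-like": 0.5, "trypsin-like": 0.3, "caspase-like": 0.2}
fraction_inhibited = {"CT-like": 0.4, "trypsin-like": 0.0, "caspase-like": 0.0}

residual = sum(w * (1 - fraction_inhibited[s]) for s, w in site_weight.items())
print(f"Expected residual capacity: {residual:.0%}")  # ~80% under these assumptions
```

Under any such weighting, most degradation capacity should survive; the near-complete loss of activity observed in living MM cells below is what demands an additional mechanism.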
We report that, surprisingly, a (low) IC50 Bortezomib challenge, which minimally inhibits proteasomes in vitro, severely inhibits proteasomes' hydrolytic activity in living MM cells. Our data suggest that, in living MM cells, a Bortezomib-induced structural change in the proteasome (Pitcher et al., 2014) is responsible for this severe degree of proteasome inhibition.

Research in context: Bortezomib and other proteasome inhibitors can treat multiple myeloma, a blood cancer arising from plasma B-cells, but few other cancers. It has been unclear why Bortezomib kills myeloma cells at concentrations so low that only partial inhibition of proteasomes is expected, and why Bortezomib cannot kill most other cancer cells. We now report that Bortezomib achieves unexpectedly severe inhibition of intracellular proteasomes by triggering structural changes in them which further depress activity. Thus, Bortezomib 'punches above its weight' and achieves unexpectedly severe, sometimes lethal, levels of proteasome inhibition. The greatest inhibition happens exactly in cells from cancers which Bortezomib can treat.

Graphs were produced using GraphPad Prism 6. Data plotted are means of replicates, with error bars showing the standard error of the mean (SEM). Research was supported by Leukaemia Lymphoma Research UK (LLR Grant 10016 to MFK, LLR Gordon Piller Studentship award Grant 11043 to MFK, AK, DP), by an MRC-Imperial Confidence-in-Concept (ICiC) grant to MFK, and by the NIHR Biomedical Research Centre at Imperial College NHS Trust, London.

Results

NCI-H929 MM cells were challenged with a lethal (10 nM) Bortezomib concentration for varying lengths of time. Cells were then lysed and an artificial fluorogenic peptide substrate was used to measure proteasomes' hydrolytic activity (Fig. 1a). Over 0-6/8 h, CT-like activity declined continually to almost nothing. Only when it was nearly absent (<10%), after 6-8 hours of incubation, was an increase in ubiquitin conjugate levels observed; intracellular accumulation of ubiquitinated proteins indicates that proteolytic workload exceeds proteasome capacity, accumulation being a phenotype that integrates other regulatory mechanisms in the cell, including deubiquitinating enzyme activity (Fig. 1a; see also Fig. S1A). However, when cells were first lysed and then challenged with 10 nM Bortezomib, only a 40% reduction in proteasomal (CT-like) activity was observed (Fig. S2A: 61.5% of CT-like activity left, as previously reported (Kisselev et al., 2006)). A much stronger-than-expected in-vivo inhibitory effect was also observed for the other types of proteasomal active-sites (Figs. S2C, S2E). Predictable explanations for Bortezomib's severe inhibition of proteasome activity in MM cells did not apply: (1) severe inhibition was not due to prolonged incubation, because exposing purified proteasomes in the test tube to 10 nM Bortezomib for longer periods of time still produced only modest inhibition, in line with earlier reports (Bianchi et al., 2009; Kisselev et al., 2006; Shabaneh et al., 2013). Furthermore, we identified an MM cell line, RPMI-8226, in which CT-like proteasome activity dropped very rapidly, within 2 h, to <10% (Fig. 1e); this time-frame approaches that of the in-vitro test using cell lysate (Fig. S2A), yet in vitro 61.5% of activity remains.
(2) Although 10 nM Bortezomib induced apoptosis, and apoptosis is known to inhibit proteasomes via caspase activation (Sun et al., 2004; Adrain et al., 2004), caspase activation was not responsible for the observed proteasome inhibition: inhibition preceded caspase activation (Fig. 1c), occurred when caspase inhibitors were present (Fig. 1d), cells were alive until 10 h post-challenge as measured by Annexin-V staining (Fig. S2B), and there was a non-myeloma T-lymphocyte cell line that did not die with 10 nM Bortezomib but in which proteasome activity still dropped well beyond expectation, to ~20% (Figs. 1F, S2D; see also lung carcinoma A549 cells: Fig. S2D). (3) Severe inhibition was not because MM cells actively pumped in and/or retained Bortezomib to establish a 10-100-fold higher (Fig. S2A) intracellular Bortezomib concentration. We used the irreversible biotinylated inhibitor Ada-K(Biotin)-Ahx3-L3-VS (Kessler et al., 2001) to simultaneously measure inhibition of activity and the degree to which active-sites were inhibited, i.e. biotinylated (Kessler et al., 2001). We observed in vivo a discrepancy between active-site inhibition and the reduction in hydrolytic activity of those active-sites (Fig. 2a). We now report a structural change in the proteasome that does correlate with the Bortezomib-triggered early shutdown of proteasomal activity: Bortezomib-triggered early changes in posttranslational modifications on proteasomal subunits. We recently reported that human nuclear proteasomes carry a constitutive, CTAB-PAGE-detectable, posttranslational modification that, in some respects, resembles poly(ADP)ribose (Pitcher et al., 2014). We reported that exposing MM cells to Bortezomib induced changes in these CTAB-PAGE-detectable proteasome modifications, exactly at Bortezomib's IC50 concentration and above (Pitcher et al., 2014). Fig. 2b, a CTAB-PAGE analysis of total cell lysate, shows that changes in these modifications of the proteasomal Rpn12 subunit occur between 1 h (RPMI-8226) and 4 h (H929) of PI challenge. To examine these Bortezomib-triggered changes in finer detail, we combined cell fractionation with affinity-purification of human proteasomes. (Purification is a technical trick that enables modified subunits to become compatible with, and visible on, SDS-PAGE (Pitcher et al., 2014).) We generated retrovirally-transduced OPM2 MM cells, which expressed an Rpn11 subunit tagged N-terminally with a His6-StrepII-StrepII-TEV affinity cassette. Cell fractionation and then affinity-purification (using Streptactin or Ni²⁺-NTA) showed that subunits from specifically nuclear proteasomes are extensively modified, as we reported previously (Pitcher et al., 2014) (Fig. 2c: Rpn11 subunit, detected with anti-streptag antibody). Repeating this fractionation-then-affinity-purification procedure, but this time after OPM2 cells had been treated with 20 nM (lethal) Bortezomib for up to 5-7 h, showed that the patterns of modified subunits changed (Fig. 2d, e). The characteristics of these changes depended on the particular proteasome subunit analysed. Generalising, the changes involved (A) a reduction in complexity for nuclear proteasomes (e.g. Rpt5, Rpn12), and/or (B) the appearance of modified subunit species in the cytosol fraction (e.g. β5i, Rpn12). (A) and (B) raise the possibility that the severe inhibition of proteasome activity in total lysate (Fig. 1) may result from changes in proteasome structure that happen in both the cytosolic and nuclear compartments of the cell after cells are challenged with Bortezomib.
We are currently investigating the exact nature of the proteasome modifications. In order to test directly whether changes in these proteasome modifications affect proteasome function, we first searched for commercial enzyme preparations that can digest the modifications of nuclear proteasomes. We discovered that a combination of venom phosphodiesterase-1 (PDE1) and S1 nuclease was efficient in trimming these modifications and collapsing modified Rpt2 species into the correct subunit-size species (Fig. 3A). We combined this enzyme protocol with an in-vitro proteasome-mediated proteolysis assay, which uses a ubiquitinated model protein as substrate (Matyskiela et al., 2013) (Figs. 3B, S4). We scaled up cell growth and affinity-purified proteasomes from total (i.e. including nuclear) lysate. Proteasomes bound to affinity-resin were mock-treated or treated with PDE1/S1 at a sub-optimal temperature (20°C), after which the resin was washed to remove the enzymes. Next, proteasomes were eluted to generate a control and a PDE1/S1-treated proteasome preparation. Most subunits of PDE1/S1-treated versus untreated proteasomes were very similar, but Rpn12, for example, showed pronounced differences (Fig. S3). We then incubated the proteasome preparations with a ubiquitinated model protein (Matyskiela et al., 2013) (Fig. S4) in order to assess their ability to process a ubiquitinated substrate. At a low proteasome/substrate ratio at 30°C, the PDE1/S1-treated proteasomes were impaired in degrading ubiquitinated substrate (Fig. 3C). However, increasing the ratio overcame this defect, with both preparations degrading substrate equivalently. At an even higher proteasome/substrate ratio, and at 37°C, proteasomes processed the substrate towards deubiquitination rather than degradation, and again both preparations behaved equivalently (Fig. 3B). In addition, we found that changes in the redox state of the reaction conditions uncovered qualitative differences between enzyme-treated and untreated proteasomes (Fig. 3D): in oxidizing conditions without DTT, enzyme-treated proteasomes removed the streptag epitope (i.e. substrate) more efficiently than untreated proteasomes but did not shift down the ubiquitin signal correspondingly, indicating that these enzyme-treated proteasomes only partially digested the substrate protein, starting from the tagged carboxyterminus, before premature release. In contrast, under reducing conditions, untreated proteasomes were more efficient than enzyme-treated proteasomes in degrading substrate (see also Fig. 3C). In sum, our data indicate that, under certain experimental conditions, changes in proteasome modifications affect proteasome function, thereby strengthening the case that Bortezomib-triggered changes in proteasome modifications within cells also affect proteasome function.

Fig. 2. Bortezomib's inhibition of proteasomal active-sites triggers changes in the structure of intracellular proteasomes. (a) Investigating active-site occupation by the proteasome inhibitor Ada-K(Biotin)-Ahx3-L3-VS, and the proteasome activity remaining. NCI-H929 cells were incubated with 10 μM inhibitor for several hours, after which they were harvested and analysed for levels of inhibited/biotinylated active-sites, overall proteasome activity and pro-caspase-3 status (n = 2 per value). While the level of inhibited active-sites did not increase over time (and in fact was lower in the 4- and 6-hour samples), we observed a steady decline in the activity of the CT-like active-sites in these samples. Ada-K(Biotin)-Ahx3-L3-VS-treated cells remained alive, as determined by Annexin-V/7AAD staining, for up to 10 h, and procaspase-3 levels stayed constant over the same time. Data represented as mean ± SEM. (b) NCI-H929 cells were incubated with 10 nM Bortezomib and RPMI-8226 cells with 20 nM epoxomicin; cells were lysed at specific time points and run on CTAB-PAGE (Pitcher et al., 2014). The resulting western was blotted for Rpn12. Note the changes in Rpn12 patterning over time upon lethal PI challenge. (c) OPM2 MM cells were retrovirally transduced to overexpress the Rpn11 proteasome subunit bearing an N-terminal affinity-tag (see schematic). Western blotting (left panel) of whole-cell lysate from transduced/non-transduced OPM2 cells using an α-streptag antibody shows expression of the tagged subunit (expected size Rpn11 + tag = 35.6 kDa + 9 kDa = 44.6 kDa). Using this tagged MM cell line, the Western blot panel on the right shows affinity-purification of proteasomes from both cytosolic (CE) and subsequent nuclear (NE) extract by Ni²⁺-NTA or Streptactin capture. Extraction was done as described (Pitcher et al., 2014), using 0.1% NP40 in CE buffer. Note that the many modified nuclear species of Rpn11 cannot be visualised on SDS-PAGE unless purified, and thus, after purification, appear on gel 'out of nowhere' (compare the 'total lysate' panel on the left with the purified 'CE/NE proteasome' panel on the right, green brackets), as reported previously (Pitcher et al., 2014). (d) Tagged proteasomes were affinity-captured with Ni²⁺-NTA from CE and NE of 7-hour control- (PBS) or Bortezomib-treated (20 nM) Rpn11-tagged OPM2 cells. These proteasomes were run on SDS-PAGE and then blotted for various proteasomal subunits. Extraction was done as described (Pitcher et al., 2014), but the CE buffer contained 0.5% NP40. (e) Same as in (d), but cells were harvested, and proteasomes captured, after only 5 h of incubation with Bortezomib. See also Fig. S3.

Fig. 3. Manipulating proteasome modifications changes proteasome function. (a) Affinity-purified nuclear proteasomes were incubated with the indicated enzymes for 1 h at 37°C, subjected to SDS-PAGE, and analysed by Western blot for proteasome subunit Rpt2. (b) Schematic presentation of the in-vitro proteasome-mediated proteolysis assay, which uses a ubiquitinated model protein as substrate (Matyskiela et al., 2013). In this system, proteasomes degrade the protein substrate while releasing the multiubiquitin chains in intact form (Fig. S4D). (c) Human proteasomes from Rpn11-tagged OPM2 cells were affinity-purified by Ni²⁺-NTA agarose capture of the His6 domain. Half the prep was incubated on the beads in reaction buffer, the other half in reaction buffer with the PDE1 + S1 combination, for 50 min at 20°C before being eluted in 250 mM imidazole-PBS. Varying relative amounts of proteasomes/substrate (1×, 3×) were added to the G3P model substrate and incubated for 1 h at 30°C (in the presence of ATP, MgCl2 and DTT) before being run on SDS-PAGE and blotted against the Strep domain on the model substrate. Panel on the right: incubated at 37°C with 19.5× proteasomes to substrate. (d) As in (c), but comparing enzyme-treated and control-treated proteasomes for their ability to degrade ubiquitinated protein substrate in reducing (+DTT) and oxidizing reaction conditions. Analysis was done using streptag (substrate) and ubiquitin antibodies. (e) We propose the following model of how PIs virtually shut down proteolysis in cells while inhibiting few active-sites throughout: 2 h after exposing myeloma cells to an IC50 Bortezomib challenge (5-10 nM), only a small proportion of active-sites have been inhibited (red star). This proportion does not increase and, as a result, proteolysis is, in this initial period, almost unaffected thanks to the uninhibited active-sites. Over the next several hours, however, the proteasomes' CTAB-PAGE-detectable modifications change (red), causing proteasome activity to decline. When proteolysis falls below the level required for life, caspases are activated and cells undergo programmed cell death. See also Fig. S4.

Discussion

In summary, our data reveal a dramatic inhibition of proteasome activity in MM cells after a (low) IC50 Bortezomib challenge, and suggest that this inhibition is the compound result of, first, inhibition of a subset of active-sites, and, second, structural changes in the proteasome which further impair hydrolytic activity (Fig. 3E). Engagement of PIs with active-sites changes proteasome conformation and stabilizes the (distant) CP-RP (Kleijnen et al., 2007; Park et al., 2008) and RP-hPLIC/ubiquilin (Kleijnen et al., 2000) interactions, providing a possible signalling mechanism into the cell that may enable active-site inhibition to directly trigger activation of the cellular machinery that then changes the posttranslational modifications of the proteasomes. Whereas it has been very difficult to explain why MM cells die from a nanomolar IC50 Bortezomib challenge when assuming that the modest level of proteasome inhibition observed in vitro also holds in vivo (Bianchi et al., 2009; Kisselev et al., 2006; Shabaneh et al., 2013), it is not surprising that a myeloma cell with over 95% CT-inhibition and observable proteasome stress (i.e. accumulation of ubiquitin conjugates, Fig. 1a) will undergo apoptosis. In addition, our data show that Bortezomib, in cancer cells which are Bortezomib-resistant, does not achieve the same degree of proteasome inhibition as in (Bortezomib-sensitive) MM cells (Figs. 1F, S2D), thus providing a molecular mechanism for what differentiates MM from most other cancers, which Bortezomib cannot treat. Please note that our data indicate that a high proteasome workload in MM cells (Bianchi et al., 2009; Meister et al., 2007; Shabaneh et al., 2013) cannot be the primary reason for MM cells' sensitivity to Bortezomib: for this explanation to work, all proteasomes in a cell would need to be fully engaged, with no spare capacity left, in order for a minimal inhibition of proteasomes to produce proteasome stress; instead, we observed that MM cells have much spare proteasome capacity, and that reducing capacity even to 20% still did not yield proteasome stress (Fig. 1a). Understanding the cellular mechanism by which Bortezomib amplifies its effect on proteasome function may enable future interventions to re-sensitize Bortezomib-resistant cells to treatment.

Authorship contributions

DSP designed and performed research, analysed data, and wrote the manuscript. KdM-S and KT designed and performed research and analysed data. HWA and AK designed research and analysed data.
MFK designed and performed research, analysed data, and wrote the manuscript.

Disclosure of conflicts of interest

The authors declare that they have no conflict of interest.
The European structural and investment funds and public investment in the EU countries

Public investment is low and has declined in many EU countries since the global financial crisis. This paper estimates the effects of the various European Structural and Investment Funds (ESIF) on public investment in the EU countries. The analysis is run on annual data from 2000 to 2018 using dynamic panel data specifications. Funding from the Cohesion Fund, the EU's facility for its less developed members, has had an almost one-to-one effect on public investment in the short term, and more in the longer term. Funding from the European Regional Development Fund may have had some effect, but it cannot be estimated precisely. Funding from other ESIF funds does not seem to have been related to public investment in the EU countries.

Introduction

This paper investigates the relationship between funding from the European Structural and Investment Funds (ESIF) and public investment in the countries of the European Union. The choice of this topic is motivated by the observation that public investment in per cent of GDP has been low or declining in many EU countries, especially since the global financial crisis. Total public investment in the 28 countries that were in the EU in 2019 was 3.3 per cent of GDP in 1995, 3.4 per cent in 2007, and 3.0 per cent in 2019 (Ameco 2021, code: UIGG0). The decline in public investment has been particularly pronounced in some of the countries that were most severely affected by the global financial crisis. Public investment in infrastructure, buildings and equipment is commonly seen as being of key importance for economic development. It may nevertheless be a tempting area for budgetary cuts, as the possible negative consequences of such cuts are not noticed immediately (Novelli and Barcia 2021). The European Fiscal Board of the European Union has repeatedly argued that public spending should be directed much more towards investment in order to improve the longer-term growth prospects of the EU countries; see for instance European Fiscal Board (2020). These issues have only gained in importance since the Covid-19 pandemic started. The NGEU recovery fund will make additional resources available to the EU countries, and large parts of this new funding are tied to public investment (Fuest 2021). The ESIF funds are the main fiscal instrument that the European Union uses to achieve economic and social convergence across the union. The history of the funds harks back to the formation of the European Economic Community, the predecessor of the European Union, in 1957, but the number of funds, their purposes and their names have changed recurrently since then. Since 2014 there have been five ESIF funds in total, with a range of focus areas. The European Regional Development Fund, the Cohesion Fund, and the European Social Fund are collectively known as the EU Cohesion Policy funds. The European Agricultural Fund for Rural Development and the European Maritime and Fisheries Fund also aim to ensure economic and social convergence, but these funds have different focus areas, as indicated by their names. The funds are discussed in some detail in Sect. 2. Support from the ESIF funds is intended for a range of measures that seek to boost economic growth, competitiveness, employment, social inclusion, sustainability and rural development.
In the period 2014-2020 more than one-third of the total EU budget went to the ESIF funds, meaning that all other activities, including the income and price support under the Common Agricultural Policy, had to share the remainder. The European Commission argues that support from the ESIF funds is targeted at investment in the recipient countries, though the term may be used fairly loosely by the Commission (European Commission 2022). The discussion above raises the question of how closely the support from the various ESIF funds is associated with higher public investment in the EU countries. This paper estimates fiscal reaction functions for public investment in the EU countries, using the funding received from the various ESIF funds and various economic and political control variables as covariates. The annual panel data estimations cover the 28 countries that were members of the EU in 2019 and the time sample runs from 2000 to 2018. The endpoint is determined by the availability of adequate data on ESIF funding, but it conveniently excludes the period of the Covid-19 pandemic and the extraordinary policy measures taken in the slipstream of the pandemic. The ESIF funds are intended to promote economic and social convergence, though the eligibility and allocation criteria vary from fund to fund. Some funds largely target projects that are counted as public investment in the national accounts, while other funds have broader scopes. It is in any case important to understand the role that the funds play at the macroeconomic level, including their role in public investment. As discussed below, these effects cannot be ascertained from the institutional and legal frameworks used to allocate the funding, but call rather for careful econometric investigation; see also Hagen and Mohl (2011). There are several challenges involved in an analysis that seeks to ascertain the relationship between the provision of economic support and the eventual outcome of the support. Even if the support targets a specific objective, the outcome may be affected by behavioural changes in the recipient that may or may not be warranted by the principal. These behavioural changes are at the core of principal-agent theory and are of great practical importance (Bachtler and Ferry 2013; Aslett and Magistro 2021). The principal may address possible principal-agent problems in various ways, such as requiring disclosure of information, imposing strict monitoring, or demanding co-financing from the agent. There are also principal-agent problems in the funding from the EU (Del Bo and Sirtori 2016; Notermans 2016). Support from the ESIF funds may substitute for or crowd out national funding, so that the eventual effect, which is sometimes known as additionality, is smaller than it would otherwise have been. The regulation of the ESIF funding espouses the principle of additionality, which is that "support from the Funds should not replace public or equivalent structural expenditure by Member States" (European Commission 2015, p. 58). It should also be noted that the ESIF funding typically covers only a part of total spending, and the co-financing requirements on the recipients might amplify the effect of the support. The derived reactions of the recipients mean that the eventual effect of ESIF funding becomes an empirical question.
Given the substantial amounts involved, it is not surprising that numerous evaluations and assessments of the impact of ESIF funding have been carried out; see the surveys by Hagen and Mohl (2011) and Notermans (2016). The studies typically focus on various economic and social objectives such as the income level, unemployment or poverty. It is helpful to distinguish between different levels of study. Studies at the micro level assess how recipients like firms and public organisations are affected and typically ignore the aggregate effects, while studies at the macro level assess how macroeconomic or social policy aggregates are affected at the national or regional level. A key finding is that studies undertaken at the micro level typically find larger and more persistent effects than studies at the macroeconomic level do. This finding is occasionally called the micro-macro paradox, and it is particularly noticeable for studies that examine the effects of EU funding on productivity and income convergence (Bradley 2006; Alegre 2012; Notermans 2016). It is beyond the scope of this paper to account for the micro-macro paradox, but it may be useful to consider one important factor of economic growth at the macroeconomic level, specifically public investment. This paper focuses on ESIF funding and public investment in the EU countries in the short and medium term. The research question is narrow in scope but it is relevant for policy-making at the national and EU levels. The paper complements a very small number of earlier studies that raise the same question. Mohl (2016) revisits the analysis in Hagen and Mohl (2011) and considers the effect of total EU cohesion policy funding on public investment. Various dynamic panel data models are estimated using a range of estimation methods, but the conclusion is consistently that cohesion policy funding has no effect on public investment. The data end in 2006, which means that the funding rounds for 2007-2013 and 2014-2020 are not included in the sample and the data for the new members from Central and Eastern Europe only enter for a few years. Alegre (2012) considers the effects of measures of total ESIF funding on public investment using annual data over the years 1993-2005 for the first 15 EU countries from Western Europe. The key finding is that an increase in cohesion policy funding of one euro is associated with an increase in total public investment of around 0.6 euro in the long term. An analogous exercise using data from Spanish regions instead of the EU countries provides comparable results. The estimated models of public investment in Alegre (2012) contain few control variables for the budgetary position of the country or region. Moreover, the data end in 2005 and the study only uses data from the first 15 EU countries, so the estimation results do not account for developments in the Central and Eastern European EU countries, which typically have much lower per capita incomes than other EU countries. Cantos-Cantos et al. (2020) examine the effect of a specific ESIF fund, the European Regional Development Fund (ERDF), on public investment in the Spanish regions using data from 1994 to 2014. Various panel data methods are used, including some that take possible co-integration into account, but the conclusion is generally that there is no discernible link between ERDF funding and public investment in the regions of Spain.
The differences between the results of the small number of studies that analyse how EU funding affects public investment are striking and suggest that a fresh look using updated data may provide valuable insights. This paper contributes to the literature in at least three ways. First, it considers all of the various ESIF funds at the same time, an investigation that provides deeper insights into the effects of the individual funds on public investment. Second, it uses data for the period 2000-2018, which means that the member countries from Central and Eastern Europe are represented in most of the time sample. Finally, it carries out various sample splits that make the results easier to interpret and help uncover possible heterogeneity within the sample. The rest of the paper is organised as follows. Section 2 provides an overview of the purposes and allocation criteria of the European Structural and Investment Funds. Section 3 documents the data used in the analyses. Section 4 discusses the model specification and estimation methodology. Section 5 presents the results of the baseline estimation. Section 6 shows the results of various robustness analyses and sample splits. Finally, Sect. 7 offers some concluding comments.

The European structural and investment funds

The EU operates with Multiannual Financial Frameworks, or budget periods, that each run for seven years. This means that our sample covers the three budget periods 2000-2006, 2007-2013 and 2014-2020, although data on the payments from the funds for 2019-2020 were not available at the time of writing. Changes between budget periods have occasionally altered how the funds are named and how support is allocated. We have chosen the names used for the funds in the last budget period considered, 2014-2020. Five different ESIF funds were operating in the budget period 2014-2020. The five funds have various objectives, eligibility conditions and allocation criteria, but they complement each other in how they support economic and social cohesion across the EU countries and across the regions of the EU countries. Table 1 presents the key features of the five ESIF funds. The main ESIF instrument is the European Regional Development Fund (ERDF). This fund is intended to strengthen economic and social cohesion in the European Union and to reduce regional disparities across the EU by promoting both public and private investment. To achieve this goal, the main task of the ERDF has, since 2000 and throughout the subsequent budgetary periods, been to contribute to investment in sustainable growth and sustainable job creation. The ERDF focuses on investment in research, innovation and infrastructure, with digitalisation added as a priority from the 2014-2020 budgetary period. Supporting small and medium-sized enterprises (SMEs), for example by simplifying access to funding, holds an important place among the investment priorities. Since 2007, ERDF investments have also prioritised environmental protection and the transition to a low-carbon economy. The investment priorities of the ERDF mean that not only public investment projects but also private projects can receive support from the ERDF. The requirements for allocating ERDF funds between various projects and the share of national own-financing both depend on the relative income level of the given NUTS2 region.
The maximum financing rates from the ERDF range from 50 to 85 per cent of project costs, and since June 2013 up to 95 per cent, depending on the relative income level of the region. The Cohesion Fund was set up in 1994 to contribute funding for projects focused on the environment and the trans-European transport network in the less well-off countries of the EU. The projects that fall under these two thematic aims are in large part public investment projects. While funding from the ERDF is available to all member countries, funding from the Cohesion Fund (CF) is only available to members whose gross national income per capita is less than 90 per cent of the EU average. The territorial funding level for the Cohesion Fund is not the NUTS2 region but the whole country. For the countries that receive CF funding, the amount allocated is determined from data on GDP per capita, GNI per capita, and the unemployment rate. Fifteen EU countries were eligible to receive funding from the CF in the 2014-2020 budgetary period. Financing from the Cohesion Fund can cover up to 85 per cent of total project costs. The European Social Fund (ESF) is the oldest ESIF fund. It was set up in 1957 under the Treaty of Rome to improve employment opportunities and promote the mobility of workers. The ESF also finances initiatives that promote education and life-long learning, equal opportunities for men and women, sustainable development, and economic and social cohesion. This means that ESF funding largely goes to non-investment spending, including spending on human capital. ESF funding is always accompanied by public or private co-financing and so, like the ERDF, the ESF can fund both public and private sector projects. Financing from the ESF covers between 50 and 85 per cent of total project costs, though it can be as much as 95 per cent in exceptional cases, and the actual rate depends on the relative income level of the region. The European Agricultural Fund for Rural Development (EAFRD) is the second pillar of the European Union's Common Agricultural Policy and was created under this name in 2014. Both the EAFRD and, before 2014, the Guidance Section of the European Agricultural Guidance and Guarantee Fund (EAGGF) focus on supporting the competitiveness of agriculture, strengthening the balanced development of rural economies, modernising agricultural facilities and preserving European landscapes. The spending may consequently be for either investment or non-investment purposes, and it can go to the public sector or the private sector. The EU countries execute EAFRD funding through rural development programmes. Financing from the EAFRD usually covers between 53 and 85 per cent of total cost, while the rest must be financed from national budget sources. It is important to note that the EAFRD supports social development and sustainability in the countryside, and is separate from the income and price support given to farmers through the first pillar of the Common Agricultural Policy. The share of project costs financed by the EAFRD depends on the relative income level of the NUTS2 region. Support from the European Maritime and Fisheries Fund (EMFF) is small, and disbursements under this name only started in 2014. The EMFF supports the Common Fisheries Policy, and the level of support from this fund does not depend on the income level of the member country receiving it.
The allocation decisions are based on various sector-specific criteria such as the size and economic relevance of the fisheries and aquaculture sector in each region. At the heart of ESIF policy lies the principle of additionality, i.e. the idea that resources received from the EU funds should not substitute for national public expenditure. The funds covered by the additionality principle and the reference points used to determine additionality have, however, changed between budgetary periods. Until the budgetary period that started in 2014, only the ERDF and ESF funds were covered by the principle of additionality; since 2014, the Cohesion Fund has been covered too (European Commission 2013a). The definition of the reference level used to assess additionality, which is the public or equivalent structural expenditure, has also changed over the years. Before 2014, the public or equivalent structural expenditure was somewhat broader than public investment or gross fixed capital formation (European Commission 2009). Since 2014, however, it has explicitly meant the gross fixed capital formation of the general government (European Commission 2013a). Before the start of a new budgetary period, the member states and the Commission decide and agree on an average public or equivalent structural expenditure that the member state will need to maintain in the new budgetary period. Although the notion of additionality is easy to understand, it is actually quite complicated to apply, and several shortcomings have undermined the verification of the additionality principle over the years. These have included problems with defining the relevant eligible expenditure, difficulties in verifying the reliability of data, and shortcomings in data comparability across programming periods (European Commission 2009). Overall, there were several challenges with the ex-post verification of additionality for the 2007-2013 budgetary period, and this led to a considerable change in the methodology for the 2014-2020 period (European Commission 2017).

Data

We use annual data for 28 EU countries for 2000-2018, as these are the years for which suitable data on ESIF funding to the individual EU member countries are available at the time of writing in November 2021. The sample includes the UK, which was a member of the EU until 2020, and the new EU members from Central and Eastern Europe that joined in 2004, 2007 or 2013 and so were not members of the EU throughout the full sample period. Eight countries, Czechia, Estonia, Latvia, Lithuania, Hungary, Poland, Slovenia and Slovakia, joined the EU in May 2004; Romania and Bulgaria joined in January 2007; and Croatia did so in July 2013.

Data and data sources

We construct a dataset with data for the payments from the European Structural and Investment Funds, a large number of fiscal and macroeconomic variables, and two political variables. We discussed the ESIF funds and their eligibility and allocation criteria in Sect. 2. Data on the payments from each of the ESIF funds to each EU country are publicly available from the European Commission's data catalogue "Historic EU payments - regionalised and modelled" (European Commission 2021). These data are cash-based and record the flows only as the payments are disbursed. The national accounts, however, are accrual-based and record the flows when the underlying transaction occurs, so the ESIF reimbursement data and the national account variables are not directly comparable.
However, the dataset also contains series for the ESIF funds provided under the label "modelled annual expenditure" (European Commission 2018, 2020), and these modelled ESIF series are meant to track the expenditures as they are incurred and so to mimic accrual-based data. The modelled ESIF data series are generally used in this study to ensure consistency with the data from the accrual-based national accounts. The econometric analysis uses the modelled ESIF amounts in per cent of the GDP of the country receiving the support. The data on nominal GDP are sourced from Ameco, the macroeconomic database of the European Commission (Ameco 2021, code: UVGD). For convenience, the abbreviations of the various ESIF funds are also used as names for the modelled funding in per cent of GDP. The variable ERDF is thus the modelled funding from the European Regional Development Fund in per cent of GDP, the variable CF is the modelled funding from the Cohesion Fund in per cent of GDP, the variable ESF is the modelled support from the European Social Fund in per cent of GDP, and the variable RURAL is the sum of the modelled support from the European Agricultural Fund for Rural Development and the European Maritime and Fisheries Fund in per cent of GDP. The fiscal and macroeconomic variables are all sourced from Ameco (2021). The main fiscal variable is IG, which denotes general government gross fixed capital formation or, for simplicity, public investment in per cent of GDP (Ameco, UIGG0). This variable is the dependent variable in all of the estimations in the paper. The variable DEBT is the public debt in per cent of GDP (Ameco, UDGG). The variable BALCYC denotes the cyclically adjusted public balance in per cent of GDP, where the cyclical adjustment follows the rules of the Excessive Deficit Procedure of the European Union (Ameco, UBLGAP). The macroeconomic variables capture the broader macroeconomic situation of the EU28 countries. The variable YPPP denotes GDP per capita in purchasing power parity terms, expressed as an index with the original EU15 = 100 (Ameco, HVGDPR). The variable YGAP is the output gap computed by the European Commission, expressed as the gap between actual GDP and potential GDP in per cent of potential GDP (Ameco, AVGDGP). A number of variables are used for robustness checks. The variable INTG denotes the interest payments on public debt in per cent of GDP (Ameco, UYIG). The variable GSIZE denotes total public spending in per cent of GDP and so is a measure of the size of the public sector (Ameco, UUTG). The variable IP is private investment in per cent of GDP (Ameco, UIGP). We also include two political variables in some robustness checks, as politics could also affect public investment decisions. The variables are sourced from Armingeon et al. (2021). The dummy variable ELECTION takes the value 1 for years in which there was a general election and 0 otherwise. The variable SCHMIDT is the Schmidt Index. Relying on Schmidt (1992), the Schmidt Index captures the political orientation of the cabinet in power, measured by the share of leftist and non-leftist parties in the government. Based on the share of leftist parties in the government, each cabinet is assigned a value from 1 to 5, where the minimum value of 1 depicts a cabinet dominated by right-leaning parties and the maximum of 5 depicts a cabinet dominated by left-leaning parties.
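Before the data description, the construction of the funding variables defined above can be sketched in pandas. The file and column names are hypothetical placeholders for the Commission's payment catalogue and the Ameco GDP series (UVGD); only the arithmetic, modelled payments divided by nominal GDP, follows the text.

```python
# Sketch of the funding-variable construction; file and column names are
# hypothetical stand-ins for the Commission payment data and Ameco UVGD.
import pandas as pd

payments = pd.read_csv("esif_modelled_payments.csv")  # country, year, fund, amount_eur
gdp = pd.read_csv("ameco_uvgd.csv")                   # country, year, gdp_eur

wide = payments.pivot_table(index=["country", "year"],
                            columns="fund", values="amount_eur", aggfunc="sum")
data = wide.join(gdp.set_index(["country", "year"]))

# Each funding variable is the modelled payment in per cent of nominal GDP.
for fund in ["ERDF", "CF", "ESF", "EAFRD", "EMFF"]:
    data[fund + "_pct"] = 100 * data[fund] / data["gdp_eur"]

# RURAL sums the rural-development and maritime/fisheries funds, as in the text.
data["RURAL"] = data["EAFRD_pct"] + data["EMFF_pct"]
data["TOTAL"] = data[["ERDF_pct", "CF_pct", "ESF_pct", "RURAL"]].sum(axis=1)
```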
Data description

This subsection presents the dataset. The full sample of 28 countries is labelled EU28, and this is the sample of countries considered in most of the empirical analysis in Sects. 5 and 6. We will, however, also consider three different groups that divide the countries by their economic history and by whether or not they have received support from the Cohesion Fund. The group WE-CF contains the 11 Western European EU countries that have never received support from the Cohesion Fund. The group WE+CF consists of the Western European EU countries that have received support from the Cohesion Fund during at least one budget period in the sample years 2000-2018, these being Cyprus, Greece, Ireland, Malta, Portugal and Spain. The third group, CEE, contains the 11 post-transition countries from Central and Eastern Europe, all of which have received support from the Cohesion Fund. Table 2 shows summary statistics for all of the variables defined in Subsect. 3.1. The table reports the means for the EU28 group and for the three groups WE-CF, WE+CF and CEE. The amount of ESIF funding varies substantially across the country groups. The aggregate ESIF funding, TOTAL, is around 0.1 per cent of GDP in the WE-CF countries, 1.0 per cent in the WE+CF countries, and 1.9 per cent in the CEE countries. This pattern is consistent with the ESIF funds being a way of facilitating structural change and investment in order to promote economic and social convergence across the EU. We return to the dynamics of the ESIF variables later in this subsection. Average public investment across the 28 EU countries is 3.7 per cent of GDP; the EU countries from Western Europe that have not received support from the Cohesion Fund invest on average 3.3 per cent of GDP, and the other countries from Western Europe invest 3.5 per cent, both less than the 4.4 per cent invested by the EU countries from Central and Eastern Europe. It is instructive to consider the dynamics of public investment over the sample period 2000-2018. Table 2 provides averages of several fiscal variables, and they generally exhibit substantial differences across the three groups of EU countries. One example is the average public debt stock, which is very high in the WE+CF countries, somewhat lower in the WE-CF countries, and much lower in the CEE countries. Not surprisingly, the relative income YPPP also exhibits substantial variation across the groups of EU countries. The averages of the election dummy correspond to a general election almost every four years. There seems to be very little difference in the ideological outlook of the cabinets in the three subgroups of EU countries. We now discuss some of the most important variables in the dataset in more detail. Figure 1 shows IG, public investment in per cent of GDP, in each of the EU28 countries for the full sample 2000-2018. The figure confirms that there is substantial variation across the countries. Many Western European countries, including Germany, Austria and the UK, have relatively low rates of public investment throughout the sample period. The highest investment rates are typically found in the CEE countries, but investment rates have also been high in countries such as Greece and Luxembourg. Another striking observation from Fig. 1 is the considerable variation over time in public investment in per cent of GDP, as changes of more than 1 percentage point from year to year are not uncommon.
The declines after the global financial crisis are particularly striking and generally occurred in the countries that were affected most severely by the crisis. This confirms that post-crisis adjustments have often involved reductions in public investment in per cent of GDP. Figure 2 in "Appendix A" shows the dynamics of the four ESIF variables for each of the 28 EU countries. The figure reveals that the various ESIF funds differ greatly in size over time and across the EU countries. The combined ESIF support is very small for the countries in the WE-CF group, larger for some of the countries in the WE+CF group, and larger still at 2-4 per cent of GDP in the CEE countries. This pattern is not surprising given the relative income levels of the countries. The various ESIF variables generally exhibit substantial time variation. The allocation criteria vary somewhat across the various ESIF funds, but the EU countries with the lowest per capita incomes are the main recipients. This means that the ESIF variables are likely to be correlated. Table 3 shows the Variance Inflation Factor (VIF) and its square root for the four ESIF variables when they are treated as the only covariates. The VIF is above 5 but below 10 for the ERDF variable, just below 5 for the CF and ESF variables, and well below 5 for the RURAL variable. These results suggest that multicollinearity may reduce the precision of the coefficient estimates, especially for ERDF and perhaps also for CF and ESF. We therefore examine the stability of the results when we present the baseline estimation in Sect. 5. We have run a number of panel unit root tests on the variables in Table 2. Most, but not all, of the variables are panel stationary in the sample consisting of 28 EU countries over the sample period 2000-2018 (not shown). The ESIF variables used in most specifications, which have been modelled to mimic accrual accounting, are borderline stationary depending on the specific panel unit root test used. The cash-based ESIF data are, however, consistently panel stationary, which suggests that the modelling creates a degree of persistence in the data series. Given that we always include the lagged dependent variable in the estimations, the borderline stationarity results are unlikely to affect the results unduly. The variables DEBT, YPPP and INTG exhibit trends for many countries, so it is not surprising that a panel unit root cannot be rejected for these variables irrespective of the test used. The variables DEBT and YPPP are included as control variables in the baseline specification, so we run robustness checks to ascertain the possible consequences of these variables exhibiting panel unit roots.

Methodology

Empirical specifications for fiscal reaction functions, such as those for public investment, are typically relatively parsimonious. The dynamic panel data model in this study regresses public investment in per cent of GDP on its lagged value, all or a subset of the four ESIF variables defined in Sect. 3, and various control variables. The control variables always include country fixed effects and typically also three dummy variables taking the value 1 in each of the three budget periods covered by the sample. Other control variables include fiscal variables such as the debt stock and the budget balance, and macroeconomic variables such as the relative income level and the cyclical position.
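A minimal sketch of this baseline specification is given below using the linearmodels package in Python. The paper does not state its estimation software, so this is just one way to run such a fixed-effects regression; the input file is a hypothetical placeholder, and only one of the three budget-period dummies is shown.

```python
# Sketch of the baseline dynamic panel: fixed-effects least squares of
# public investment on its own lag, the ESIF variables and the controls.
# The input file is a hypothetical placeholder.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("esif_panel.csv")  # country, year, IG, ERDF, CF, ESF, RURAL, ...
df["IG_lag"] = df.groupby("country")["IG"].shift(1)
for v in ["DEBT", "BALCYC", "YPPP", "YGAP"]:
    df[v + "_lag"] = df.groupby("country")[v].shift(1)
df["period_0713"] = df["year"].between(2007, 2013).astype(int)  # one budget-period dummy

panel = df.dropna().set_index(["country", "year"])
exog = panel[["IG_lag", "ERDF", "CF", "ESF", "RURAL",
              "DEBT_lag", "BALCYC_lag", "YPPP_lag", "YGAP_lag", "period_0713"]]
model = PanelOLS(panel["IG"], exog, entity_effects=True)  # country fixed effects
print(model.fit(cov_type="clustered", cluster_entity=True))
```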
The specification of the reaction function for public investment, including the choice of control variables, follows other studies of public investment (Mehrotra and Välilä 2006; Heinemann 2006; Picarelli et al. 2019). However, we include the ESIF variables as covariates in line with other studies that consider the effects of ESIF funding; see Alegre (2012), Mohl (2016), and Cantos-Cantos et al. (2020). The estimation of the public investment models is complicated by the inclusion of the lagged dependent variable, as this can give rise to the Nickell bias in the estimated coefficients when the models are estimated with fixed effects least squares (Nickell 1981). Various GMM estimators are available that may address this problem, but these estimators are only consistent as the number of cross sections becomes large. Moreover, it is difficult in many cases to find suitable instruments. The Nickell bias declines as the number of time periods rises. Judson and Owen (1999) and Bun and Kiviet (2001) run numerous Monte Carlo simulations and show that the Nickell bias is modest when the number of time periods is around 20 or more. Given these complications we estimate the public investment reaction functions using fixed effects least squares, but as a robustness check we also estimate the baseline public investment reaction functions using the bias-corrected LSDV dynamic panel estimator. The estimator is described in Bruno (2005), which extended the bias-corrected LSDV estimator derived by Kiviet (1995, 1999) to accommodate unbalanced panels as well. The model of public investment is estimated using fixed effects least squares, which means that the coefficients are determined entirely from the dynamics over time. The allocation of support from each of the ESIF funds is decided for each seven-year budget period in the year before the start of the budget period. The amounts allocated are based on a number of economic and social criteria, and the final decision is made in the Council of Ministers after extensive negotiation. The upshot of this institutional setup is that the total allocation for the seven-year budget period is largely predetermined and independent of any measures taken in the individual EU member countries. The allocations of funding within the budget period and the following three years are, however, influenced by the policies within each EU country, as funding is typically conditional on the specific plans for a project being approved and the project subsequently being realised. This administrative practice means that the ESIF funding variables will in practice never be fully exogenous, and so it is difficult or virtually impossible to establish causal effects. The same limitation is found in other studies of EU funding and public investment; see Mohl (2016) and Alegre (2012).

Baseline results

We start the empirical investigation with the sample of all 28 EU members and the time period 2000-2018. Table 4 shows the results when the dynamic panels of public investment are estimated using fixed effects least squares. Note that the specifications include dummies for the three budget periods, but the coefficients for these dummies are not reported. Column (4.1) provides the findings for the baseline model. The coefficient of the lagged dependent variable is 0.48 and it is precisely estimated.
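Before interpreting this persistence estimate, it is worth gauging the likely scale of the Nickell bias discussed above. The following Monte Carlo sketch is our illustration, not part of the original analysis; rho = 0.5 echoes the estimated persistence, and N = 28 with T up to 19 mimics the panel dimensions:

```python
# Sketch: Monte Carlo illustration of the Nickell bias when a dynamic panel
# is estimated with fixed effects least squares (the within estimator).
# All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def within_rho_hat(N: int, T: int, rho: float, burn: int = 50) -> float:
    """One replication: simulate y_it = a_i + rho * y_i,t-1 + e_it and
    estimate rho with the within (fixed effects) estimator."""
    a = rng.normal(size=N)
    y = np.zeros((N, burn + T + 1))
    for t in range(1, burn + T + 1):
        y[:, t] = a + rho * y[:, t - 1] + rng.normal(size=N)
    y = y[:, burn:]                      # keep T + 1 periods after burn-in
    ylag, ycur = y[:, :-1], y[:, 1:]
    # Within transformation: demeaning per country removes the fixed effect
    # but correlates the demeaned lag with the error -- the Nickell bias.
    x = (ylag - ylag.mean(axis=1, keepdims=True)).ravel()
    z = (ycur - ycur.mean(axis=1, keepdims=True)).ravel()
    return float(x @ z / (x @ x))

for T in (5, 10, 19, 50):
    est = np.mean([within_rho_hat(28, T, 0.5) for _ in range(500)])
    print(f"T = {T:2d}: mean within estimate of rho = {est:.3f} (true 0.5)")
```

For T around 19 the within estimator understates the true persistence by roughly (1 + rho)/T ≈ 0.08, which is consistent with the small differences reported below between the fixed effects and bias-corrected LSDV results.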
Public investment in the panel of EU countries exhibits substantial persistence, a result that has also been found in other studies of public investment in the EU countries (Hagen and Mohl 2011; Mohl 2016; Picarelli et al. 2019). The coefficient of the lagged debt stock DEBT(-1) is −0.015 and it is statistically significant at the 1 per cent level. An increase in the debt stock of 10 percentage points is associated with public investment being 0.15 percentage point lower in the following year and with a somewhat larger decline in the longer term. Comparable results have been found in other studies of public investment in Europe (Heinemann 2006; Bacchiocchi et al. 2011; Picarelli et al. 2019; Kostarakos 2021). The cyclically adjusted budget balance lagged one year, BALCYC(-1), has no discernible effect on public investment. The macroeconomic control variables appear to be of some importance. The coefficient of YPPP(-1) is negative, suggesting that a higher relative income level is associated with lower public investment in per cent of GDP. Finally, the coefficient of YGAP(-1) is positive, so a favourable position of the business cycle appears to be followed by higher public investment in the following year. Novelli and Barcia (2021) similarly find that public investment is pro-cyclical. An interesting pattern emerges when the results for the ESIF variables are considered. The coefficient of the Cohesion Fund variable CF is around 1 and statistically significant at the 5 per cent level, while the coefficients of the other ESIF variables are small in numerical terms and estimated very imprecisely. Taken at face value, these results would suggest that there is close to a one-to-one association in the short term between support from the Cohesion Fund and public investment, and twice that effect in the longer term, given that the estimated coefficient of the lagged dependent variable is around 0.5. Meanwhile, the other three ESIF funds, ERDF, ESF and RURAL, appear not to be associated with public investment in the sample of EU countries. Table 7 in "Appendix B" shows the results when the baseline estimation is altered in various ways. Column (B1.1) repeats for convenience the baseline specification; Columns (B1.2)-(B1.5) show the results when the funding dummies are removed, when year fixed effects are included instead of the funding dummies, when the crisis years 2009-2010 are excluded, and when the two variables with trending dynamics, the public debt stock DEBT and the relative income level YPPP, are excluded. The finding is in all cases that the overall conclusions from the baseline estimation in Column (4.1) do not change. As discussed in Sect. 3.2, the four ESIF variables exhibit some collinearity, which may inflate standard errors and complicate statistical inference. We begin the investigation by replacing the four individual ESIF funds in the baseline estimation with the aggregate ESIF investment TOTAL. Column (4.2) in Table 4 shows the results. The results for the lagged dependent variable and the independent control variables are close to those in Column (4.1). The coefficient of TOTAL, the aggregate ESIF funding, is 0.28. That the coefficient is relatively small is consistent with the findings in the baseline specification when all four ESIF variables are included.
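The short- and long-term magnitudes discussed here follow mechanically from the autoregressive structure of the model: the long-run effect of a permanently higher ESIF inflow is the impact coefficient divided by one minus the persistence parameter. With the baseline estimates, approximately,

$$ \frac{\hat\beta_{CF}}{1-\hat\rho} \approx \frac{1}{1-0.48} \approx 1.9 \qquad\text{and}\qquad \frac{\hat\beta_{TOTAL}}{1-\hat\rho} \approx \frac{0.28}{1-0.48} \approx 0.54, $$

which underlies both the statement that the long-term effect of CF is about twice the short-term effect and the comparison below with the long-term estimate of around 0.6 in Alegre (2012).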
Indeed, the average of the coefficient estimates of the four ESIF variables in Column (4.1) is around 0.25, which is very close to the coefficient estimate of 0.28 of TOTAL in Column (4.2). Table 8 in "Appendix C" shows the results when TOTAL is included in various specifications; the results are in all cases close to the result in Column (4.2). The total effect appears to be driven by the effect of the Cohesion Fund, so it is reasonable to presume that the effects of the three other ESIF variables are very small or non-existent. We may analyse further the consequences of the ESIF variables being correlated. If each of the four variables is included individually in the specifications, the coefficient of each variable attains statistical significance, even though the point estimates of ERDF, ESF and RURAL vary noticeably across the different specifications (not shown). This result is unsurprising given the substantial correlation between the ESIF variables. It is informative to run estimations where CF is included with each of the three other ESIF variables one by one. Columns (4.3)-(4.5) show the results. The coefficient of CF is around 1 in each of the three pairwise comparisons, while the coefficients of the other ESIF variables are small and statistically insignificant in all cases. This provides further evidence that CF is important for public investment, while the other ESIF variables are of limited or no importance. Finally, we consider the WE-CF group alone, this being the group of Western European EU countries that has never received support from the Cohesion Fund; CF = 0 in all years for these 11 countries, so CF is not correlated with the three other ESIF variables in this group. Column (4.6) shows the results when ERDF, ESF and RURAL are included simultaneously and the sample comprises only the countries in the WE-CF group. The coefficient of ERDF is 0.44 but is very imprecisely estimated, the coefficient of ESF is negative and statistically insignificant, and the coefficient of RURAL is close to 0 and statistically insignificant. The upshot from the analysis of the WE-CF group of the richest Western European countries is that it cannot be ruled out that ERDF may have a minor positive effect on public investment, but nor can the effect be pinned down with any precision. We conclude from the estimations in Table 4 that the effects of the ESIF funds on public investment differ markedly across the four funds. Support from the Cohesion Fund has a one-to-one effect on public investment in the short term and up to twice that effect in the longer term. Support from the European Regional Development Fund could have a positive effect on public investment, but the effect is small and statistically insignificant. Support from the European Social Fund and from the rural development and fisheries ESIF funds seems unimportant for public investment. In the light of these findings, we run the estimations in Table 4 with CF as the only ESIF variable included. The estimated coefficient of CF is, as expected, around 1 in all five of the specifications; see Table 8 in "Appendix C". It is useful to compare the results for the ESIF variables in Tables 4, 5 and 6 with those in the literature. When all of the ESIF variables are included simultaneously, the coefficient of CF attains economic and statistical significance, but this is not the case for the coefficient of ERDF. This result is consistent with the study by Cantos-Cantos et al. (2020), who conclude that the effect of ERDF on public investment is virtually nil.
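Collinearity among the ESIF variables runs through all of these comparisons. For readers who want to reproduce the VIF diagnostics of Table 3 on similar data, a minimal sketch follows; the DataFrame layout and column names are our assumption, not a published interface:

```python
# Sketch: VIF diagnostics for the four ESIF covariates (cf. Table 3).
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def esif_vif(df: pd.DataFrame) -> pd.DataFrame:
    """Return the VIF and its square root for each ESIF variable,
    computed with a constant included, as is conventional."""
    X = add_constant(df[["CF", "ERDF", "ESF", "RURAL"]].dropna())
    rows = []
    for i, name in enumerate(X.columns):
        if name == "const":
            continue
        v = variance_inflation_factor(X.values, i)
        rows.append({"variable": name, "VIF": v, "sqrt_VIF": np.sqrt(v)})
    return pd.DataFrame(rows)
```

A common rule of thumb treats VIF values above 5, and certainly above 10, as a warning that the affected coefficient will be estimated imprecisely, which matches the pattern reported for ERDF.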
The results when only total ESIF funding enters the specification are also consistent with earlier findings. The estimated coefficient of TOTAL is 0.28, but the long-term effect is twice as large and close to the long-term estimate of around 0.6 in Alegre (2012). The results in Table 4 do not align closely with those in Mohl (2011), who found that total cohesion policy funding had no effect on public investment. It should be recalled however that the sample in that study ends in 2006, which means that there is relatively little overlap between the sample used in Mohl (2011) and the one in our study. Moreover, the Central and Eastern European countries are hardly represented in the sample in Mohl (2011). In Sect. 6 we run the baseline estimation independently for the three groups of countries in the sample, and the results are generally clearer for the CEE countries than for the WE+CF countries.

Additional estimations

This section discusses a number of additional estimations, partly to assess the robustness of the results and partly to widen the scope of the analysis. Including the lagged dependent variable in the panel means the results may be affected by the Nickell bias, so we have also estimated the models in Table 4 using the bias-corrected LSDV dynamic panel estimator, where the bias correction is initialised using the Arellano-Bond GMM estimator (Arellano and Bond 1991). Bruno (2005) finds that the choice of the initial estimator should have only a marginal impact on the performance of the bias-corrected LSDV estimator. The estimation results are shown in Table 9 in "Appendix D". As expected, the coefficients of the lagged dependent variable are larger than when the models are estimated with fixed effects least squares, but the differences are generally small. Moreover, the estimated coefficients of the four ESIF variables shown in Table 9 are very close to those in Table 4. The small differences between the results obtained with the two estimators may be due to the relatively large number of time periods in the dataset (Judson and Owen 1999; Bun and Kiviet 2001). We next replace the modelled ESIF series, which mimic accrual data, with the original cash-based ESIF data, which follow the time at which the reimbursement of the project spending takes place. Table 10 in "Appendix E" shows the results when the cash-based data are used instead of the modelled data. The cash-based ESIF variables are marked by the postscript _P. The results are qualitatively in line with those for the modelled variables in Table 4. The main difference is that the coefficients of CF and of TOTAL are a little smaller for the cash-based variables than for the modelled variables. This is reasonable given that the cash-based funding variables are by construction less closely linked to accrual-based public investment. The upshot is that whereas using the modelled ESIF variables is consistent with the use of accrual data from the national accounts, the results are qualitatively unchanged if the original cash-based data are used. Table 5 presents the results when additional control variables are added to the baseline specification in Column (4.1) in Table 4. The models are, as before, estimated with fixed effects least squares. Column (5.1) shows the results when two additional fiscal control variables are included in the specification.
The coefficient of public interest payments INTG is negative, so higher interest payments are associated with lower public investment. The magnitude of the effect is substantial, but it should be noted that the coefficient of the lagged public debt variable DEBT(-1) is now lower than in the baseline estimation in Column (4.1). The size of the public sector is positively associated with public investment, but the positive association arises partly by construction, as public investment is part of public spending. Despite the statistical and economic significance of the coefficients of the added control variables, the coefficient of CF is still around 1 and it is the only coefficient of the ESIF variables that is statistically significant. Alegre (2012) argues that private and public investment may be mutually dependent as complements or substitutes. Column (5.2) shows the result when private investment in per cent of GDP is included contemporaneously in the model of public investment. The estimated coefficient is positive, but it is relatively small and not statistically significant. Alegre (2012), by contrast, reports that private investment has a negative and statistically significant effect on public investment. The results for the ESIF variables are unchanged. Column (5.3) shows the results when the two political variables are added to the baseline specification. The variable ELECTION is a dummy variable taking the value 1 in years in which there are general elections, while the variable SCHMIDT is an index that depicts the political stance of the cabinet and takes the highest values for cabinets with a left-leaning orientation. The political variables are of little or no importance, and the coefficients of the ESIF variables are essentially unchanged from those in the baseline specification in Column (4.1). Column (5.4) presents the results when the five additional control variables are included simultaneously. The coefficient of private investment is now statistically significant at the 10 per cent level, but the coefficient of CF is nevertheless still statistically significant and only a little below 1, while the coefficients of the other ESIF variables remain statistically insignificant. The upshot is that the inclusion of a number of additional fiscal, macroeconomic and political variables has very little impact on the results for the ESIF variables. Finally, we examine possible cross-sectional heterogeneity in the panel. To this end, we run the baseline estimation from Column (4.1) for each of the three country groups WE-CF, WE+CF and CEE. The aim is to ascertain possible differences across the three groups, but it should be kept in mind that the sample sizes are relatively small, which may limit the inferences that can be made since the coefficients will generally be estimated less precisely. Columns (6.1) to (6.3) in Table 6 show the results for each of the three country groups. The results for the WE-CF group are identical to those in Column (4.6) in Table 4. The coefficient of the lagged dependent variable varies somewhat across the country groups; public investment exhibits more persistence in the WE-CF countries than in the WE+CF and CEE countries. This finding is unsurprising given the different dynamics of public investment shown in Fig. 1.
Other studies have found corresponding differences in the persistence of various other fiscal variables between the EU countries in Western Europe and those in Central and Eastern Europe (Staehr 2008). Besides the difference in persistence between the country groups, it is noticeable that the coefficients of all of the control variables attain the same signs across the three country groups, except in the case of BALCYC(-1). The coefficients are, however, not statistically significant in all three groups simultaneously, with the exception of the coefficient of the lagged debt stock. As before, the coefficients of ERDF, ESF and RURAL are not statistically significant. The coefficient of CF is positive but small and statistically insignificant for the WE+CF group, while it is around 1 and statistically significant at the 10 per cent level for the CEE group. The CEE countries only joined the EU starting in 2004, so the time samples are somewhat different across the three country groups. We have therefore repeated the estimations for the period 2004-2018 so that possible differences across the three groups cannot easily be related to the time sample. The results are shown in Columns (6.4) to (6.6), where the results in Column (6.6) are by construction identical to those in Column (6.3). The coefficient of CF for the WE+CF group is now 0.61, but it is still very imprecisely estimated. We conclude that the large and statistically significant effect of support from the Cohesion Fund is in large part driven by the CEE countries.

Concluding comments

Public investment has been low and declining in many EU countries since the global financial crisis, and this may harm economic growth and development in the longer term. The European Structural and Investment Funds (ESIF) are the main fiscal instrument of the European Union for the objective of economic and social convergence across the regions of the union. This paper assesses whether there are relationships at the macroeconomic level between support from the ESIF funds and public investment in the EU countries. The analysis uses an annual panel dataset for 28 EU countries from 2000 to 2018. Data for the ESIF variables are amended using statistical modelling in order to mimic accrual-based data. The econometric analysis is carried out using dynamic panel data specifications. Public investment is regressed on the lagged dependent variable, the support from the ESIF funds, and a large number of fiscal and macroeconomic control variables. All the estimations use country fixed effects, so the estimated effects are derived from the within-country time variation. A key challenge is to separate the effects of each of the ESIF funds given that there is substantial correlation between the dynamics of the four variables. The analysis shows that the various ESIF funds affect public investment in the EU countries in very different ways. The Cohesion Fund provides support to the less developed EU countries, which are largely in Southern Europe and Central and Eastern Europe. Support from the Cohesion Fund had a close to one-to-one effect on public investment in the short term and more in the longer term. Funding from the European Regional Development Fund, the largest ESIF fund, might have some effect on public investment, but the effect is probably small and it cannot be established with reasonable precision.
Payments from the European Social Fund and payments from the funds supporting agriculture, fishing and rural development do not appear to be associated with public investment in the EU countries. The results are robust to various specification changes, including the exclusion of the years of the global financial crisis, changes in the estimation methodology and the inclusion of additional control variables. They are also robust to a division of the sample into various country groups, though there are some differences in the persistence of public investment across the groups. The results seem to align reasonably well with the eligibility and allocation criteria for the various ESIF funds discussed in Sect. 2. The Cohesion Fund mainly provides funding for environmental projects and trans-European networks, which may in large part be counted as public investment. The almost one-to-one effect in the short term and the larger effect in the longer term suggest that crowding out of national funding is limited. The European Regional Development Fund provides funding to projects for a broad set of objectives, and the lack of a clear relationship between ERDF funding and public investment may be seen in this context. This paper focuses on public investment or gross capital formation in the public sector, so funding that goes to non-investment projects is not included. If climate-related funds are spent on improving networks, for instance, the spending is likely to be public investment, but if the funds are spent on training courses for public sector employees, it is likely to be public consumption. The European Social Fund, the European Agricultural Fund for Rural Development, and the European Maritime and Fisheries Fund meanwhile support a broad range of projects, like the ERDF, so the finding that they have no effect on public investment may not be inconsistent with their prescribed objectives. The results in the paper may be of some importance for the debate on public investment in the European Union and the perceived need to increase this type of investment in some EU countries; see Sect. 1. Public investment is closely related to funding from the Cohesion Fund, but this is likely not the case for the European Regional Development Fund, the European Social Fund and other ESIF funds. The evidence provides some support for the argument that the Cohesion Fund can contribute to increased public investment, while the evidence is less strong for the ERDF, the ESF and the rural funds. The results may also be viewed through the prism of the principle of additionality. As discussed in Sect. 2, the definition and evaluation of additionality have changed over time, so the results obtained using a sample from 2000 to 2018 cannot be construed as an ex-post evaluation of the principle of additionality. However, if we consider the rules for the principle of additionality that were in place from the start of the funding period 2014-2020, then the analysis might provide some insights. Funding from the CF appears to be associated with increased public investment, but this may not be the case for the other ESIF funds. This suggests that the principle of additionality is not fully satisfied ex post for the ERDF, the ESF and the rural funds. These findings should of course be interpreted in the light of the estimations covering the period from 2000 to 2018, which were carried out using fixed effects as discussed in Sect. 5.
The policies meant to foster regional convergence have been under debate almost from the inception of the forerunners to the European Union, and this also applies to the contents and governance of the European Structural and Investment Funds. The debates have focused on the priorities of funding and on the burdens of administration and evaluation, but the importance of public investment in regional convergence is also among the topics of debate (Bubbico et al. 2016). The results in this paper suggest that the relation between public investment and the current ESIF funds varies markedly across the various funds. This study of the effects of ESIF funding on public investment in the EU countries leaves several issues open for further investigation. One avenue would be to seek a clearer identification of the causal effect of ESIF funding, similar to the effects found in Becker et al. (2010, 2013). This would require substantial exogenous variation across a large number of observations, which will be challenging to obtain using country-level data. Another avenue of investigation could focus on specific categories of public investment, such as investment in buildings and structures, infrastructure, or research and development, in order to establish whether ESIF funding affects different categories of public investment in different ways. Finally, it would be beneficial to identify the institutional and economic conditions under which support from the various ESIF funds has the greatest effect on public investment. We leave these issues for future studies.
wing blister, A New Drosophila Laminin α Chain Required for Cell Adhesion and Migration during Embryonic and Imaginal Development

We report the molecular and functional characterization of a new α chain of laminin in Drosophila. The new laminin chain appears to be the Drosophila counterpart of both vertebrate α2 (also called merosin) and α1 chains, with a slightly higher degree of homology to α2, suggesting that this chain is an ancestral version of both α1 and α2 chains. During embryogenesis, the protein is associated with basement membranes of the digestive system and muscle attachment sites, and during the larval stage it is found in a specific pattern in wing and eye discs. The gene is assigned to a locus called wing blister (wb), which is essential for embryonic viability. Embryonic phenotypes include twisted germbands and fewer pericardial cells, resulting in gaps in the presumptive heart and tracheal trunks, and myotubes detached from their target muscle attachment sites. Most phenotypes are in common with those observed in Drosophila laminin α3, 5 mutant embryos and many are in common with those observed in integrin mutations. Adult phenotypes show blisters in the wings in viable allelic combinations, similar to phenotypes observed in integrin genes. Mutation analysis in the eye demonstrates a function in rhabdomere organization. In summary, this new laminin α chain is essential for embryonic viability and is involved in processes requiring cell migration and cell adhesion.

Laminins are large extracellular matrix (ECM) molecules usually associated with basement membranes (BMs), and represent a family of molecules important for development, adhesion, and cell migration (reviewed by Timpl and Brown, 1996). Laminin was initially isolated from tumor cells as a heterotrimer composed of α1, β1, and γ1 chains (Chung et al., 1979; Timpl et al., 1979; see Fig. 2). All laminin chains are composed of a series of protein modules that occur in other ECM molecules (e.g., EGF repeats or laminin G domains; see Fig. 2 A). The size of laminin chains is usually >200 kD. Vertebrate studies have revealed the presence of at least five α chains, three β chains, and three γ chains that can assemble in a combinatorial manner to form native laminin molecules. All are classified using a recent nomenclature (Burgeson et al., 1994). Data so far show that only α, β, and γ heterotrimers are sufficiently stable to be secreted (Yurchenco et al., 1997), an issue that becomes particularly important when one of the subunits is lacking or mutated due to a genetic defect. Thin but extended sheets of BM require continuous molecular structures which can extend over long distances, e.g., in blood vessels. BMs are usually thought to provide sufficient mechanical stability to resist high shearing forces at the dermal-epidermal junction or to resist hydrostatic pressure in glomerular loops in the kidney. On the other hand, BM needs to be flexible, i.e., to respond to rapid changes in volume in blood capillaries. The major contribution to these properties comes from two networks formed independently from laminins and collagen IV. Laminin undergoes a thermally reversible polymerization, and electron micrographs suggest that peripheral short and long arm interactions are involved in this assembly (Yurchenco and Cheng, 1993).
Additional molecules are known to interact with laminin, i.e., nidogen, which is thought to cross-link the laminin and the collagen IV networks, or perlecan, a proteoglycan (reviewed by Timpl and Brown, 1996). Different laminin isoforms are not always expressed at the same site and time. A careful examination of the occurrence of all α chains in vertebrate embryonic and adult tissues shows that laminin α chains have distinct expression patterns, with α4 and α5 showing the broadest, and α1 the most restricted expression (Miner et al., 1997). Moreover, each BM examined contains at least one α chain, but the composition of α chains within the BMs changed constantly during embryonic development, as assayed in the kidney (Miner et al., 1997). Few data are known about the developmental function of laminins, mainly because few laminin mutations have been identified to date. However, mutations in the α2 chain of human laminin have been linked to congenital muscular dystrophy (Helbling-Leclerc et al., 1995), and the classic dy mutation in mouse could also be linked to defects in the murine α2 chain (Xu et al., 1994). In both species, the lack or partial loss of function of laminin α2 leads to variation in skeletal muscle fibers and muscle fiber necrosis. These findings demonstrate a role for the α2 chain in skeletal muscle function. Mutations in the γ2 subunit of laminin can lead to Herlitz's junctional epidermolysis bullosa (Aberdam et al., 1994; Pulkkinen et al., 1994), characterized by blister formation within the dermal-epidermal BMs. Furthermore, mutations in the α3 and β3 laminin chains, which associate with γ2 to form laminin 5, show similar phenotypes (Kivirikko et al., 1995; Cserhalmi-Friedman et al., 1998). Laminin α2 also plays a role in the molecular pathogenesis of neural tropism, since the bacterium Mycobacterium leprae binds to α2 on Schwann cell axon units (Rambukkana et al., 1997). Intensive studies on the composition of the ECM in invertebrates have shown the existence of a laminin with a proposed subunit composition α3, 5; β1; γ1 (Montell and Goodman, 1988, 1989; Chi and Hui, 1989; Kusche-Gullberg et al., 1992; Henchcliffe et al., 1993). α3, 5 was previously called lamA, and this new name is proposed as a reminder that α3, 5 is the precursor of both vertebrate α3 and α5 chains. Genetic studies have shown that null mutations in the Drosophila α3, 5 chain lead to embryonic lethality, with visible defects in mesodermally derived tissues, i.e., in heart, muscles, or gut, leading to dissociated cell groups in the various organs (Yarnitzky and Volk, 1995). These data suggest that laminin is used to confer structural support and adhesivity. Surprisingly, no obvious pathfinding defects in central nervous system neurons were observed during embryogenesis (Henchcliffe et al., 1993); however, at the neuromuscular junction, the extent of contact between neuronal and muscular surfaces appeared significantly altered in α3, 5 mutants (Prokop et al., 1998). Hypomorphic mutants and heteroallelic mutant combinations of α3, 5 can give rise to viable pupae and some viable adults (Henchcliffe et al., 1993). These adult escapers show abnormalities in the shape of their legs and in the organization of ommatidia in the compound eye (Henchcliffe et al., 1993). A recent report has also shown the requirement of α3, 5 in normal pathfinding by ocellar pioneer axons (Garcia-Alonso et al., 1996).
In spite of the observed pleiotropy of mutations in the α3, 5 gene, the phenotypic effects seen in mutant animals are not dramatic given the wide distribution of the protein. This predicted the existence of a second laminin α chain which can compensate for loss of α3, 5 function. Indeed, during the course of the Drosophila genome sequencing program, we noticed the presence of sequences related to laminin, and subsequent analysis of the genomic region allowed us to define a new member of the invertebrate laminin α chain family, similar to both the vertebrate α1 and α2 chains. We show that mutations from the wing blister (wb) locus are associated with lesions in this new α gene and that this second Drosophila laminin α chain is indispensable for embryonic viability and adhesiveness between cell layers.

Fly Stocks

The wb alleles, wb k05612, wb k00305, wb PZ09437, wb PZ10002, wb SF25, wb HG10, and wb CR4, were used to determine embryonic functions for the Wb protein. P element induced alleles, wb k05612, wb k00305, wb PZ09437, and wb PZ10002, were produced in the laboratories of Istvan Kiss and A. Spradling (Carnegie Institute, Baltimore, MD), and ethylmethane sulfonate (EMS) induced alleles, wb SF25 and wb HG10, were produced in the laboratory of M. Ashburner (University of Cambridge, Cambridge, UK). wb SF25, wb HG10, and wb PZ09437 have been described previously (Karpen and Spradling, 1992; Lindsley and Zimm, 1992). Df(2L)fn 7 (breakpoints 34E3; 35B3-4) and Df(2L)fn 36 (breakpoints 34F3-5; 35B4) were used in this study and are described in Lindsley and Zimm (1992). Revertants of wb PZ09437 were obtained by precise excision of the P element and showed wild-type appearance and fertility. Lethal chromosomes used in this study were kept in stocks balanced over CyO (Lindsley and Zimm, 1992). Somatic clones in the eye were produced by inducing mitotic recombination using the FLP/FRT system as described in Roote and Zusman (1996).

Videomicroscopy

Embryos from mutant lines were placed on petriPERM plates (Heraeus) in a drop of Voltalef 3S oil. All embryos were derived from mothers homozygous for the klarsicht (kls) mutation, which clears out yolk and makes embryonic phenotypes easily visible during filming, yet has no discernible effect on embryonic development (Wieschaus and Nüsslein-Volhard, 1986). Time lapse videomicroscopy was performed on embryos under a Zeiss Axioskop microscope with a Panasonic AG-6730 recorder and a Zeiss ZVS-47N CCD videocamera system. wb embryos were identified by their inability to hatch and the presence of a dorsal hole at the end of embryonic development.

Immunostaining and Preparation of Embryos for Whole Mounts

Embryos were collected on agar/apple juice plates and prepared for immunostaining according to the protocol described in Zusman et al. (1990) with an antibody against a pericardial protein (Mab#3; Yarnitzky and Volk, 1995) or an antibody against a tracheal protein (2A12; Samakovlis et al., 1996). Embryos stained with antibodies were dehydrated and mounted in a 3:1 solution of methyl salicylate and Canada balsam for examination under bright-field illumination. For examination of somatic muscles, wb embryos were prepared as described by Drysdale et al. (1993) and viewed under polarized light. To confirm and examine further the wb somatic muscle phenotype, embryos derived from parents heterozygous for wb were stained with antibodies against muscle myosin (Kiehart and Feghali, 1986) using the procedures described in Young et al. (1991) and Roote and Zusman (1995).
Late stage wb or deficiency-containing embryos were identified by the dorsal hole phenotype and/or their inability to hatch. At earlier stages, wb phenotypes were identified based on 25% of the population exhibiting defects not observed in a wild-type population, and on the similarity of these defects to those observed when a dorsal hole is present. The mutant tracheal phenotype was also observed in developing wb embryos using videomicroscopy.

DNA and RNA Techniques

Southern and Northern blot analyses were performed by standard procedures (Maniatis et al., 1982). RNA was extracted by the guanidinium thiocyanate/phenol/chloroform extraction method of Chomczynski and Sacchi (1987). Poly(A)+ RNA was isolated using a Pharmacia kit (Pharmacia Biotech, Inc.). Equal specific activity of the wb probes and the laminin α3, 5 and γ1 probes was achieved using a standardized labeling protocol, and by using probes of similar lengths and similar GC content. Exposure times for Northern blots were 3 d. Whole mount in situ hybridizations were conducted using digoxigenin labeled wb cDNAs following the protocol of Tautz and Pfeifle (1989).

Verification of the Sequence of Genomic DS Phages

At least three sequence errors were discovered within the published DS 03792 sequence leading to reading frame shifts. Suitable cDNAs were isolated using PCR, subcloned, and used to correct the derived cDNA sequence. Irregularities between the domain structure of vertebrate laminin chains and this new Drosophila laminin α chain were confirmed by additional isolation of suitable cDNAs by PCR and subsequent sequencing, ruling out misleading interpretations of intron-exon boundaries.

Generation of Antibodies and Staining of Embryos

Two independent fragments from either the NH2 or COOH terminus (amino acids 173-376 and amino acids 2,383-2,633, respectively) were cloned into the appropriate pMALc2 expression vectors (BioRad Laboratories). After induction and lysis of cells, fusion proteins were purified over a maltose matrix (BioRad Laboratories). Both antigens were used to generate two independent rabbit polyclonal antisera each. Polyclonal antisera were affinity purified over a corresponding GST fusion protein (Pharmacia Biotech, Inc.) column and eluted with 0.1 M glycine, pH 2.5. The specificity of the antisera was tested on Df(2L)fn 36 embryos. For histochemical staining, the antifusion protein antisera were used at a concentration of 1:500.

Western Blotting

Samples of embryonic extracts and conditioned medium of Schneider S2 cells were separated under nonreducing and reducing conditions on 6% SDS-PAGE. After transfer onto nylon membranes, blots were probed with anti-Wb antibodies and detected with HRP conjugated secondary antibodies, followed by ECL chemiluminescence (Nycomed Amersham, Inc.).

Cloning, Sequence Analysis, and Properties of a New Drosophila Laminin Chain

In our attempt to find laminin-like sequences from Drosophila in the database, we noticed the presence of EGF-like repeats similar to laminin chains on the reverse strand of a subclone derived from the genomic phage DS 03792 (Kimmerly et al., 1996). Subsequent alignment of all subclones derived from this DS phage revealed the presence of a novel laminin chain gene in Drosophila. Analysis of the gene structure showed a genomic region spanning ~70 kb of DNA with ≥16 exons contained within two overlapping DS phages, DS 03792 and DS 01068 (Fig. 1 B).
Most intron-exon boundaries proposed by GENSCAN (Burge and Karlin, 1997) were confirmed by isolating and sequencing suitable cDNA clones spanning the region of interest (data not shown). Conceptual translation of the 10,101-nucleotide open reading frame yields a protein of 3,367 amino acids with a deduced molecular size of ~374 kD (Fig. 2 A). At the NH2 terminus, the predicted initiating methionine is followed by an amino acid sequence containing structural regions characteristic of a secretory signal sequence (Fig. 2 A; von Heijne, 1986). A hydropathy profile of the primary structure revealed no other long hydrophobic regions indicative of a transmembrane spanning segment (Fig. 2 A), suggesting that this laminin chain is a secreted protein. Closer inspection of the domain structure shows that this new chain has all the domains of laminin α chains in the appropriate order (Fig. 2 C). However, the number of different modules varies in some regions. For example, the second EGF-like stretch contains 10 full and 2 half EGF repeats, while in vertebrates there are 8 full and 2 half EGF repeats (Fig. 2 C). In addition, a unique NH2-terminal extension of ~120 amino acids is present (Fig. 2 C). Finally, the array of the second EGF repeat region is symmetrically interrupted by an insertion of 45 amino acids. We performed domain-wise comparisons of identities to existing vertebrate α chains. The LN domain showed an almost equally high degree of identity to the vertebrate α1 and α2 chains, while the LE4 domain showed a slightly higher degree of identity to vertebrate α2 than to α1. However, both L4 domains showed slightly higher scores of identity to α5, immediately followed by equally high scores to α2 and α1. The remaining two EGF-like repeat regions showed that the first was highly homologous to α1 but the second was homologous to α2. Finally, all five G domains showed a slightly higher similarity to α2 than to α1. In summary, the majority of the domains showed most similarity to vertebrate α2 chains, yet many were significantly similar to α1. For this reason, and to illustrate the fact that this chain is a common precursor of the vertebrate α2 and α1 chains, we have tentatively called this chain Drosophila laminin α1, 2 in the remainder of the text. A special feature within the amino acid sequence should be noted: the presence of an RGD motif within the first L4 domain (Fig. 2). [Displaced fragment from the legend to Fig. 2: predicted domains (Bairoch and Apweiler, 1999) appear as boxes surrounding the sequence; a putative signal sequence is underlined.] RGD tripeptides have been shown to mediate cell adhesion in Drosophila using Drosophila PS2 integrins as receptors (Bunch and Brower, 1992). In fact, a recent study based on cell culture assays demonstrated that the laminin α1, 2 subunit showed exclusive binding to one integrin isoform, αPS2m8βPS4A, while the other PS2 integrin isoforms did not show any binding (Graner et al., 1998), suggesting that α1, 2 is a ligand of a splice-specific form of the PS2 integrins.
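The hydropathy screen mentioned above amounts to a sliding-window average over a residue hydrophobicity scale. The sketch below is our illustration of this kind of screen, not the authors' code: the Kyte-Doolittle scale, the 19-residue window, and the cutoff of 1.6 are conventional choices for detecting candidate transmembrane helices, and `sequence` stands in for the 3,367-residue conceptual translation:

```python
# Sketch: sliding-window hydropathy profile (Kyte-Doolittle scale).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy_profile(sequence: str, window: int = 19) -> list[float]:
    """Mean hydropathy of each window of `window` residues along the chain."""
    scores = [KD[aa] for aa in sequence]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

def candidate_tm_segments(profile: list[float], cutoff: float = 1.6) -> list[int]:
    """Window start positions whose mean hydropathy exceeds the cutoff,
    the usual rough criterion for a membrane-spanning helix."""
    return [i for i, v in enumerate(profile) if v > cutoff]
```

For a secreted chain such as the one described here, only the signal-sequence region at the NH2 terminus would be expected to rise above the cutoff.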
Temporal and Spatial Distribution of Laminin α1, 2 Transcripts

Northern analysis was performed on RNA derived from samples spanning the Drosophila life cycle, using α1, 2 cDNAs as probes. An 11-kb transcript was first detected in the early stages of embryogenesis and peaked in 6-12-h embryos (Fig. 3 A). In the last part of embryogenesis (12-18 h), a slightly smaller transcript of 10.5 kb was observed. We cannot exclude the possibility of an alternatively spliced transcript or of usage of an alternative polyadenylation site. Transcription decays in the later stages and is hardly detectable in third-instar larvae, but increases again in pupal stages. To compare the existing Drosophila laminin chains, the same Northern blot used for α1, 2 was also probed with a mixture of α3, 5 and γ1 probes (Fig. 3 C). This showed that the two laminin subunits are present at similar stages during embryogenesis. There is a marked difference, however, as these two subunits are also transcribed very strongly during the late stages of embryogenesis, in contrast to α1, 2, which fades out rapidly during this stage. Assuming that all probes in this analysis had similar specific activities (see Materials and Methods), this suggests that α1, 2 is less abundantly expressed than α3, 5, a feature already noted in vertebrate expression studies (Miner et al., 1997). Using digoxigenin-labeled probes, the spatial expression of the α1, 2 chain was examined. Transcripts were first detected during oogenesis in nurse cells and growing oocytes (Fig. 4 A), suggesting a maternal contribution. During cleavage stage, the message is uniformly distributed in the egg (Fig. 4 B) and becomes slightly enriched in cells of the trunk region at blastoderm stage (Fig. 4 C). During germband extension (Fig. 4 D), low levels of uniform expression are observed. After germband retraction, the visceral mesoderm of the gut starts to accumulate α1, 2 transcripts (Fig. 4, E and F). At that time, cells near the presumptive muscle attachment sites show transcripts (Fig. 4 F). At stage 14, strong expression is also observed in cardiac cells (Fig. 4 G) and, more prominently, in cells near the muscle attachment sites (Fig. 4, H and I). Transcription of laminin α1, 2 is also readily detectable in imaginal discs, as assayed by LacZ staining of imaginal discs derived from the viable P element line H155, which mimics the embryonic transcript pattern faithfully (data not shown). Particularly strong expression was found in wing discs, where certain groups of cells in the presumptive wing dorsal and ventral region show LacZ staining (Fig. 4 J). Strong staining was also observed in the eye-antennal disc immediately behind the morphogenetic furrow (Fig. 4 K), and also in a specific pattern in leg discs (Fig. 4 L).

[Displaced legend to Fig. 3 (reassembled): (A) The stage of the embryonic RNA is denoted in hours after egg laying; L3 is from the third instar larval stage, P from late pupal stages, and A from adult males and females. Two transcripts are detected, which might derive from differentially polyadenylated mRNAs. (B) Trans-splicing in l(2) 09437 mutants. Northern blot analysis of 6-18-h old embryos from l(2) 09437 with a wb cDNA. Two groups of transcripts are detected: two wild-type bands of ~11 kb and a 5.6-kb band which derives from an aberrant splicing event with the last exon of ribosomal protein S12 on the mutant chromosome (Horowitz and Berg, 1995). (C) Comparison of transcriptional activity of different Drosophila laminin chains. The same Northern blot as in B was reprobed with a mixture of laminin α3, 5 and laminin γ1 cDNAs (Kusche-Gullberg et al., 1992). The amount of loaded RNA was estimated by subsequent probing with a Drosophila ribosomal protein S19 probe (Baumgartner et al., 1993). Transcript lengths were determined with a ladder of RNA standards. (D) Western analysis of the Wb protein. Extracts from 0-24-h embryos (lanes 1, 2, 4, and 5) and conditioned medium from Schneider S2 cells (lane 3) were fractionated on 6% SDS-PAGE under nonreducing (lanes 1 and 2) and reducing conditions (lanes 3-5), and assayed using polyclonal anti-Wb antisera (anti-NH2 antibodies, lanes 1 and 4; anti-COOH antibodies, lanes 2, 3, and 5; see Materials and Methods). In conditioned medium, a single 360-kD band was observed (lane 3), while in extracts proteolytic cleavage occurs, resulting in a 240-kD (NH2) and a 110-kD (COOH) band (lanes 4 and 5). The 180-kD band in lane 4 might represent another cleavage or degradation product which is not recognized by the COOH antibody; this band did not appear under nonreducing conditions (lane 1), suggesting that it originated from laminin. Under nonreducing conditions, an 800-kD band was observed (lanes 1 and 2). Mouse EHS laminin (a gift from J. Engel) was used to determine the relative locations of the 800-kD and 400-kD bands.]
Spatial Expression of the α1, 2 Protein

To assess the nature and appearance of the α1, 2 protein, polyclonal antisera against the NH2 and COOH termini (see Materials and Methods) were produced and assayed both by Western analyses and on whole mount embryos. Western blotting of conditioned medium of Schneider S2 cells showed a single 360-kD band (Fig. 3 D, lane 3), while in embryonic extracts proteolytic cleavage was observed, giving rise to a 240-kD band (lanes 4 and 5) and a 110-kD band (lane 5), which are detectable using anti-NH2 and anti-COOH antibodies, respectively. This suggests that proteolytic cleavage also occurs in Drosophila, as was reported for the vertebrate α2 chain (Ehrig et al., 1990). NH2 antibodies also detected a possible further degradation product of ~180 kD (lane 4), which is not detected by COOH antibodies. Both antisera recognize a single 800-kD band under nonreducing conditions (lanes 1 and 2), suggesting that the α1, 2 protein is part of a laminin trimer. Using an immunoprecipitation assay, α1, 2 was found to be associated with the same β and γ chains as is α3, 5 (data not shown). The protein is first detected at stage 10 as a weak diffuse stripe between the ectoderm and the mesoderm (Fig. 5 A). During germband retraction (Fig. 5 B), the protein is localized diffusely around areas that constitute the visceral mesoderm. At stage 14, strong staining is observed in the BMs that surround the digestive system, i.e., the gut (Fig. 5 C), or at muscle attachment sites (Fig. 5, D, G, and H). These patterns are strongly reminiscent of the expression patterns of various Drosophila integrin subunits, particularly the β subunit (Leptin et al., 1989) and the αPS2 subunit (Bogaert et al., 1987). Later stages include localization in dorsal structures along the ventral nerve cord (Fig. 5 E) and in BMs around the digestive system (Fig. 5 F). During imaginal wing disc development, α1, 2 is localized in a specific spot pattern on the presumptive wing dorsal and ventral region (Fig. 5 I).

The wb Gene Encodes Laminin α1, 2

Genomic phage DS 03792 (Fig. 1 B) was mapped to chromosomal region 35A1 (Fig. 1 A). Several P element insertion events could be detected within the genomic area of the laminin gene. Of particular interest were two fly lines conferring embryonic lethality that showed the P element inserted into the middle of the fourth intron (Fig. 1 B). Because insertions of this type have caused lethality on other occasions (Horowitz and Berg, 1995), where an unusual splicing event was shown to be the cause of lethality, we wondered whether the same situation would apply here.
To test whether trans-splicing occurs between the fourth exon of laminin α1, 2 and the last exon of ribosomal protein S12, which resides on the P element construct, we performed Northern analysis on RNA derived from l(2) 09437 embryos or l(2) 10002 embryos (not shown). Two bands were visible: the ~11-kb doublet band already detected in the developmental Northern analysis, generated by the wild-type gene from the balancer chromosome, and a smaller species of 5.6 kb, derived from the mutant chromosome whose RNA showed trans-splicing to S12, yielding a shorter transcript (Fig. 3 B). Rehybridization of the same Northern lane using an S12-specific probe confirmed the same 5.6-kb mRNA species (data not shown). We interpret the fact that the 5.6-kb mutant band is stronger than the wild-type 11-kb band as a composite result of a higher efficiency of completing the shorter transcript, greater stability of the mutant transcript, or less efficient transfer of the larger mRNA. The shortened mRNA codes for a protein truncated within LE8 (Fig. 2 C), and as a result no assembly of the heterotrimeric laminin molecule can occur, as only α, β, and γ heterotrimers are sufficiently stable to be secreted. Consequently, it is likely that no functional laminin of the subunit composition α1, 2; β1; γ1 is secreted in the l(2) 09437 mutant. A survey of the chromosomal area 35A showed that a locus termed wb, whose mutations result in blisters in wings, could account for the loss of laminin function. To test this hypothesis, l(2) 09437 and l(2) 10002 flies were crossed to suitable viable and embryonic lethal wb alleles, and were tested for complementation. None of the strong ethylmethane sulfonate induced wb alleles complemented l(2) 09437 and l(2) 10002 for embryonic lethality (data not shown). This result, in combination with the mapping data, strongly argues that l(2) 09437 and l(2) 10002 are mutations in the wb locus, and that wb is indeed laminin α1, 2.

Defects in wb Embryos

To examine the functions of the Wb protein during embryogenesis, the development of embryos homozygous for embryonic lethal mutations in the wb gene (wb k05612, wb k00305, and wb HG10; Lindsley and Zimm, 1992; Zusman, S., unpublished results) was examined and compared with wild-type embryos and embryos homozygous for a deficiency that uncovers the wb locus (Df[2L]fn 36 and Df[2L]fn 7; Lindsley and Zimm, 1992). Time lapse videomicroscopy of developing embryos revealed that homozygous wb HG10, wb k05612, and Df(2L)fn 36 embryos become abnormal during gastrulation. Rather than extending their germbands dorsally, mutant germbands twist and extend laterally (Fig. 6, A and B). Near the completion of germband extension, wb and Df(2L)fn 36 embryos show a distinct separation between the mesodermal and ectodermal tissue layers of the germband (Fig. 6, C and D). These phenotypes are similar to those described for mys hemizygous embryos, which lack the βPS subunit of integrin, a potential receptor for laminin (reviewed by Hynes, 1992). Another phenotype in common with mys embryos (Wieschaus et al., 1984) is a dorsal hole, which often forms in the cuticle of wb k05612, and occasionally in wb k00305, embryos. Although Wb protein accumulates around the BMs of the developing embryonic gut, no defects were detected in gut morphology or in migration of the midgut primordia.
Previous studies of embryos lacking the Drosophila laminin α3, 5 chain have demonstrated functions for this molecule in the proper morphogenesis of heart, somatic muscle, and trachea (Yarnitzky and Volk, 1995; Stark et al., 1997). In laminin α3, 5 deficient embryos there is a dissociation of the pericardial cells of the heart, gaps in the dorsal trunk of the trachea, and the ventral oblique muscles fail to reach their attachment sites. Similar heart and tracheal defects are found in embryos with mutations affecting αPS3βPS integrin (Stark et al., 1997). To determine if the Wb protein is also involved in these processes, we examined the development of the heart, trachea, and somatic muscles in wb embryos. The heart (dorsal vessel) forms from external pericardial cells and internal cardioblasts that migrate during dorsal closure to meet along the dorsal midline and form the heart tube (Bate, 1993). wb and wb-deficient embryos stained with antibodies that recognize pericardial cells show that homozygous wb k05612, wb k00305 (both occasionally showing dorsal holes), and Df(2L)fn 36 embryos often contain fewer pericardial cells than wild-type embryos, resulting in distinct gaps in the heart tube (Fig. 7, A and B). Furthermore, the pericardial cells appear to dissociate randomly, and the tube often appears to curve off towards the lateral side of the embryo. The dorsal trunk of the trachea is formed by migration of the tracheal pits to form a long tracheal tube which extends the length of the embryo (reviewed in Manning and Krasnow, 1993). Antibodies were used to examine trachea formation in wb and wb-deficient embryos. Embryos homozygous for wb k05612, wb HG10, and Df(2L)fn 36 were observed to have significant gaps in the dorsal trunk of the trachea (Fig. 7, C and D). This was confirmed by examining the development of filmed wb embryos. Due to the strong expression of the Wb protein at muscle attachment sites, we also examined wb and wb-deficient embryos for defects associated with the attachment of myotubes to their ectodermal attachment sites. Careful examination of somatic muscle in homozygous wb k05612, wb HG10, and Df(2L)fn 36 embryos stained with antimyosin antibodies (Fig. 7, E and F), or prepared for examination under polarized light at the end of embryonic development, revealed that their somatic myotubes are often not attached to target epidermal attachment sites. This defect most commonly involves the ventral oblique muscles located in the anteriormost segments of the embryo (Fig. 7 F). Random disorganization of myotubes and areas without myotubes are occasionally observed in these embryos as well. In conclusion, several defects are observed in wb embryos, some in common with those observed in laminin α3, 5 embryos, and many in common with those observed with integrin mutations.

Defects in Adults

As the name implies, mutations in the wb locus can lead to blistering of the wing, in which the dorsal and ventral wing surfaces separate (Woodruff and Ashburner, 1979). As shown in Fig. 8, A and C, the blisters are located centrally within the wing, consistent with the location of laminin expression and localization in wing discs (Fig. 4 J and Fig. 5 I). The blisters vary in size, depending on the allelic combination used (data not shown).
Homozygous viable alleles of wb exist that show no blistering (i.e., wb CR4), and only in combination with an embryonic lethal wb allele (i.e., l(2) 09437) or a deficiency (Df[2L]fn 7 or fn 36) were blisters observed, suggesting that below a certain threshold the lack of functional laminin can lead to blistering. No haplo-insufficiency was observed in l(2) 09437 animals (data not shown). The wb phenotype strongly resembles the phenotypes associated with mutations in integrins (Brower and Jaffe, 1989; Brabant et al., 1993; Brower et al., 1995), and mutations in the Drosophila laminin α3, 5 gene can also lead to blistered wings (Henchcliffe et al., 1993). Because high expression of wb was also found posterior to the morphogenetic furrow in the developing eye (Fig. 4 K), we wished to determine the function of wb during eye development. For this reason, somatic clones were induced in the eye of wb k05612 flies using the FLP technique (Golic, 1991). As evident in Fig. 8 D, the number of photoreceptor cells did not change, but they appear disorganized. Disorganized photoreceptor cells were also detected in mys and mew (αPS1-encoding) mutant clones (Zusman et al., 1990; Brower et al., 1995).

Discussion

We have demonstrated the existence of a second laminin α chain in Drosophila, and sequence analysis shows that it is homologous to the α2 and α1 chains in vertebrates. Most likely, this chain represents one of the ancestral versions of a vertebrate α chain of laminin, as some marked changes are observed in the vertebrate chains in comparison to α1, 2. The protein is slightly larger than vertebrate α1 or α2, mainly due to the addition of an NH2-terminal extension, an insertion in the first EGF-like region, and the acquisition of two additional EGF-like modules (Fig. 2 C). Other discrepancies have been observed in the Caenorhabditis elegans α1, 2, where one G module is deleted (Fig. 2 C). Laminins have also been isolated in lower organisms such as Hydra vulgaris (Sarras et al., 1994), where they are expressed in the subepithelial zone involved in attachment of mesoderm to the ectoderm. Sequence comparisons suggest that the α chain associated with this laminin corresponds to an ancestral version of the α3 and α5 chains (Sarras, M., personal communication). Virtually no exon boundaries match the gene structure observed in human laminin α2 or C. elegans laminin α1, 2, nor is the number of exons similar (16 versus 64 and 10, respectively; Zhang et al., 1996; Fig. 2), suggesting that α chains in higher animals have become more complex by splitting coding sequences through uptake of new noncoding sequences. In addition, no exon boundaries of Drosophila α1, 2 fit those of Drosophila α3, 5 (Fig. 2 D) or even of C. elegans α1, 2 (Fig. 2 C), suggesting that the two α chains diverged much earlier. Based on the sequenced C. elegans genome, in which only two α chains were discovered, it is plausible to assume that invertebrate genomes such as those of Drosophila or C. elegans possess only two α chains, one β and one γ chain, which may limit the number of possible assemblies into functional laminin trimers to two. A comparison between expression patterns of α1, 2 and vertebrate laminins reveals that the expression of vertebrate α2 fits better to Drosophila α1, 2, as α1 shows a highly restricted expression in kidney, compared with α2, whose expression was reported to be widespread in mesenchymal cells (Miner et al., 1997).
In accordance with vertebrate expression studies (Miner et al., 1997), where α5 was shown to be the most widely expressed α chain, Drosophila α3,5 is more widely expressed than α1,2. Interestingly, Wb harbors an RGD sequence located on the L4 domain (Fig. 2 C), which makes it a likely ligand for integrins. Biochemical studies on integrin-mediated adhesion using Drosophila cell lines identified Wb as a distinct ligand for αPS2m8βPS4A integrin (Graner et al., 1998), one of four splice forms of the αPS2βPS integrins (Brown et al., 1989; Zusman et al., 1990). This αPS2 isoform is also the predominant splice form present at developmental stages during which Wb is expressed (Brown et al., 1989). No data have been reported to date on the isoform distribution of βPS integrin. In contrast, other RGD-containing proteins such as tiggrin (Fogerty et al., 1994) or ten-m (Baumgartner et al., 1994) show no absolute requirement for a specific splice isoform of βPS: both proteins need only exon 8 of αPS2 to be present. Using a similar approach, Drosophila laminin containing α3,5 was shown to be a specific ligand for αPS1βPS integrin. This suggests that Drosophila laminins (subunit compositions α1,2;β1;γ1 and α3,5;β1;γ1) can serve as PS2 and PS1 integrin ligands, respectively. Moreover, the model for embryonic muscle and pupal wing attachment proposed by Gotwals et al. (1994) holds true, by juxtaposing another partner to tiggrin facing the PS2 binding site. Interestingly, the region harboring the RGD in L4 of Wb is highly related to the RGD-containing site of vertebrate laminin α5 (Graner et al., 1998), which could indicate that vertebrate α5 has taken up this motif during evolution, in contrast to the existing Drosophila α3,5, which does not harbor an RGD site. Genetic data further support an association of wb with integrins, since weak mys mutations increase the size and frequency of blisters in wb flies (Khare, N., and S. Baumgartner, manuscript in preparation). No conclusive genetic interaction data were reported between α3,5 and mys (Henchcliffe et al., 1993). Several embryonic wb phenotypes (Fig. 6) were shown to be remarkably similar to those of single integrin mutations, i.e., the separation of mesoderm and ectoderm, and the twisted germband common to mys (Fig. 6, B and D; Roote and Zusman, 1995) or to scb (Stark et al., 1997). Notably, separated mesoderm/ectoderm and a twisted germband were not observed in mutations in the α3,5 chain (Yarnitzky and Volk, 1995). The α3,5 chain was only found to be required for later stages of patterning of mesodermally derived cells, suggesting that α1,2 is exclusively used to confer early adhesion between mesoderm and ectoderm. In contrast, common phenotypes between α1,2 and α3,5 were detected in late stages of embryogenesis, where the formation of the ventral oblique muscles is disturbed, particularly in the anterior segments (Fig. 7 F; Yarnitzky and Volk, 1995; Prokop et al., 1998). Finally, the formation of the heart was reported to be disturbed in mutations of both genes (Fig. 7 B; Yarnitzky and Volk, 1995). (Fig. 8 legend fragment: note that the encircled rhabdomeres (wb/wb) show the same overall organization as the neighboring wild-type rhabdomeres but appear disorganized, similar to mys eye clones (Zusman et al., 1990) or mew eye clones (Brower et al., 1995).) No phenotype reminiscent of the muscular dystrophy-like phenotype in vertebrates was observed in our mutants.
Although we did not observe wb expression in muscles, we cannot rule out marginal expression levels below the sensitivity of our detection method. However, certain myotubes do appear disorganized in wb mutant embryos. This cannot be considered a situation analogous to dy/dy mice (Xu et al., 1994), because the defects observed are most likely due to the inability of muscle cells to migrate properly and a failure to attach to muscle attachment sites. Similar phenotypes were also observed in laminin α3,5 mutants (Prokop et al., 1998). Previous studies have shown that integrin-mediated adhesivity between the two epithelial cell layers of the wing is particularly sensitive to mutations involving either integrin ligands (this paper) or upstream factors of integrins, i.e., the blistered (bs) gene encoding a Drosophila serum response factor (SRF; Montagne et al., 1996). bs and integrins interact genetically (Fristrom et al., 1994), and mys expression appears to be greatly reduced in hypomorphic bs mutants (Montagne et al., 1996), suggesting a scenario in which bs might directly control integrin gene expression at the transcriptional level. It is plausible to assume that bs might also directly control wb expression, as the transcript patterns of both show striking coexpression (Fig. 4 J; Montagne et al., 1996), and a corresponding serum response element (SRE) has been located 260 bp upstream of the putative TATA box of the wb gene (data not shown). Specific screens have been performed for mutations affecting adhesion between wing surfaces (Prout et al., 1997; Walsh and Brown, 1998). To our surprise, none of the loci described correspond to wb, suggesting that the formation of blisters in the wing depends on subtle changes in wb activity. This is further suggested by the fact that only suitable wb allelic combinations show blisters. For example, blisters were only detected in transheterozygous allelic combinations of a weak (homozygous viable) allele, wb CR4, and Df(2L)fn 7 or l(2)09437, which behaves as a null allele. In other words, only a range of wb activity slightly below 50% of wild-type activity is capable of forming blisters, while a level of ≥50% does not affect wing blistering, as no haplo-insufficiency is observed in l(2)09437 flies. In parallel to the wing, wb clones induced in the eye cause phenotypes similar to clones induced in integrin mutants, i.e., αPS1 (mew) mutants or βPS (mys) mutants (Zusman et al., 1990; Brower et al., 1995), but not αPS2 (if) mutants (Brower et al., 1995), which result in virtually wild-type eyes. Similar phenotypes were also observed in laminin α3,5 mutant combinations; however, the degree of disorganization is higher than in wb or integrin mutant clones (Henchcliffe et al., 1993).
FLAVIdB: A data mining system for knowledge discovery in flaviviruses with direct applications in immunology and vaccinology.

Background

The flavivirus genus is unusually large, comprising more than 70 species, of which more than half are known human pathogens. It includes a set of clinically relevant infectious agents such as dengue, West Nile, yellow fever, and Japanese encephalitis viruses. Although these pathogens have been studied extensively, safe and efficient vaccines are lacking for the majority of the flaviviruses.

Results

We have assembled a database that combines antigenic data of flaviviruses, specialized analysis tools, and workflows for automated complex analyses focusing on applications in immunology and vaccinology. FLAVIdB contains 12,858 entries of flavivirus antigen sequences, 184 verified T-cell epitopes, 201 verified B-cell epitopes, and 4 representative molecular structures of the dengue virus envelope protein. FLAVIdB was assembled by collection, annotation, and integration of data from GenBank, GenPept, UniProt, IEDB, and PDB. The data were subject to extensive quality control (redundancy elimination, error detection, and vocabulary consolidation). Further annotation of selected functionally relevant features was performed by organizing information extracted from the literature. The database was incorporated into a web-accessible data mining system, combining specialized data analysis tools for integrated analysis of relevant data categories (protein sequences, macromolecular structures, and immune epitopes). The data mining system includes tools for variability and conservation analysis, T-cell epitope prediction, and characterization of neutralizing components of B-cell epitopes. FLAVIdB is accessible at cvc.dfci.harvard.edu/flavi/

Conclusion

FLAVIdB represents a new generation of databases in which data and tools are integrated into a data mining infrastructure specifically designed to aid rational vaccine design by discovery of vaccine targets.

Background

More than 70 known viral species belong to the flavivirus genus, which can be divided into three clusters, fourteen clades, and 70 species [1]. The clusters are based on host-vector association: mosquito-borne, tick-borne, and no-vector viruses. The members of flavivirus clades share >69% pairwise nucleotide sequence identity, while members of individual species share >84% identity [1]. More than half of these single-stranded RNA viruses are known human pathogens [2]. The most important human pathogens among flaviviruses are West Nile virus (WNV), dengue virus (DENV), tick-borne encephalitis virus (TBEV), Japanese encephalitis virus (JEV), and yellow fever virus (YFV). Flaviviruses pose a significant global public health threat, since they are responsible for hundreds of millions of human infections each year. Although the live attenuated YFV vaccine is hailed as one of the most successful vaccines ever developed [3,4], it can induce severe adverse effects [5], thus leaving room for significant improvements. Safe and efficient JEV and TBEV vaccines have also been developed, although these have limited global application due to high production costs [6]. To date, successful vaccines against DENV, WNV, and a range of other emerging flavivirus pathogens have proven elusive.
Because of the high sequence similarity between the flavivirus species, the analysis of antigenic diversity, both intra- and inter-species, offers important insights into potential cross-reactivity, cross-protection following infection or immunization, and clues for understanding the factors that determine disease severity. Cross-reactivity and cross-protection are particularly relevant in the case of DENV, because secondary infection can have severe consequences. While the factors that lead to severe dengue disease are unclear, it has been proposed that it is related to misdirected immune responses, including antibody-dependent enhancement (ADE) [7] as well as exaggerated and partially misdirected T-cell responses [8]. Sequencing efforts in recent years have produced a large body of flavivirus molecular data, enabling advanced data analyses for rational vaccine design. Primary databases such as GenBank [9] contain comprehensive collections of nucleotide sequence data. Protein sequence data are available from the UniProt knowledgebase [10], which offers curated, high-quality protein data. A number of dedicated flavivirus databases are available, such as Flavitrack [11] and the NIAID Virus Pathogen Database and Analysis Resource (ViPR) (www.viprbrc.org). Both Flavitrack and ViPR provide access to curated sequence data, and they include a selection of sequence analysis tools. ViPR in particular offers an abundance of useful analysis tools, such as sequence similarity search, multiple sequence alignment (MSA), single nucleotide polymorphism (SNP) analysis, and construction of phylogenetic trees, neatly organized in its workbench analysis environment. However, data mining for vaccine target discovery requires complex database search requests and often a combination of several different tools (for example, prediction of epitopes is often preceded by extensive conservation and variability analysis) integrated in data mining systems for automated knowledge extraction and knowledge discovery. To aid the analysis of immunological properties and discovery of vaccine targets in the flaviviruses, we constructed FLAVIdB, a database of Flavivirus spp. that contains information on protein sequences, immunological data, and structural data. These data are integrated into a modular, extensible infrastructure that enables detailed analysis of sequences and their antigenic properties through application of data mining techniques [12]. The tools can be applied individually or by using predefined workflows designed for discovery of vaccine targets.

Data collection

The sequence data for FLAVIdB were extracted from the primary sources GenBank, GenPept [9], and UniProt [14]. The raw data were downloaded for species in the Flavivirus genus (NCBI taxonomy ID: 11051) and subsequently transformed into an XML format. The data module of experimentally determined B-cell and HLA class I and II T-cell epitopes was populated with data extracted from IEDB [15], as well as additional epitope data retrieved from the literature. The epitope data were enriched with data from binding assays, neutralization assays, and cross-protective properties. Macromolecular structure data from the Protein Data Bank (PDB) [16] were also extracted for the envelope proteins of DENV. The content of FLAVIdB is searchable using keyword search and is available for download by users. The main purposes of FLAVIdB are data integration, data mining, and knowledge extraction for applications in immunology and vaccinology.
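To make the data-collection step concrete, the following minimal Python sketch shows how raw flavivirus protein records for NCBI taxonomy ID 11051 might be retrieved. It assumes Biopython is installed, uses a placeholder contact address, and does not reproduce FLAVIdB's actual (non-public) XML transformation.

# Sketch of the data-collection step: fetch flavivirus (taxid 11051) protein
# records from NCBI. Assumes Biopython; the email is a placeholder required
# by NCBI usage policy.
from Bio import Entrez, SeqIO

Entrez.email = "user@example.org"  # placeholder contact address

# Find protein records for the Flavivirus genus (NCBI taxonomy ID 11051).
handle = Entrez.esearch(db="protein", term="txid11051[Organism]", retmax=100)
ids = Entrez.read(handle)["IdList"]
handle.close()

# Download the records in GenPept format and parse them.
handle = Entrez.efetch(db="protein", id=",".join(ids), rettype="gp", retmode="text")
records = list(SeqIO.parse(handle, "genbank"))
handle.close()

for rec in records[:5]:
    print(rec.id, rec.description[:60])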
The overall framework of the system is shown in Figure 1.

Figure 1: Summary of the structure of FLAVIdB. Users can access FLAVIdB through the interactive interface for direct access to data or tools, or through the static interface and predefined workflows. Workflows use a predefined process to access data and tools and to produce a report.

Data cleaning

Sequences that had duplicate entries were merged into single entries to minimize data redundancy. In the primary sources, some of the available sequences were well-annotated, some had incomplete annotations, whereas others lacked annotation altogether. The NCBI GenPept protein reference sequences served as templates for annotation of viral protein sequences. For quality control of existing protein annotations and for the addition of missing annotations, query sequences were aligned to the appropriate reference sequence using the MAFFT MSA tool [17]. Sequence fragments shorter than 23 amino acids were not included in the database. The threshold of 23 was chosen because it is the length of the shortest protein naturally occurring in the Flavivirus proteome (the 2K peptide). Existing positional annotations were compared to the MSA and corrected when needed. The missing positional annotations were generated from MSA positions and included in FLAVIdB records.

Data enrichment

Journal publications corresponding to the strain entries were extracted from PubMed (www.ncbi.nlm.nih.gov/pubmed) when available. Semi-automated extraction of articles was performed to retrieve information missing in the GenBank entries, but otherwise available. Manual checking was necessary because of the limited extent of standardized fields, nomenclature, and terms in primary sources.

Basic search tools

Keyword search: Beyond the basic utility of keyword search, FLAVIdB also offers options for filtering data based on species, pathology (disease outcome or fatal/non-fatal), proteins, strain type (wild type, laboratory strain, or vaccine strain), entry type (complete proteomes or partial proteomes), and host. The data retrieval function also serves as a tool for selection of subsets of sequences for comparative analyses.

Sequence similarity search: Sequence similarity search of the FLAVIdB can be performed using the basic local alignment search tool (BLAST) algorithm [18] through an integrated BLAST module within the FLAVIdB. We recommend using the integrated BLAST tool with the default parameters, while advanced users have the option to set different values for parameters such as E value, word size, substitution matrix, and gap cost.

Basic analysis tools

Multiple sequence alignment: The MSA can be performed for three or more sequences in FLAVIdB using the MAFFT tool [17]. The output is color coded for easy visualization of variations, matches, and gap insertions in the alignment. The search interface enables a selection of predefined subsets by virus type or subtype, pathology, individual protein, strain type (wild type, laboratory, or vaccine), size (complete or partial), or host of isolation. Furthermore, MSA can be performed on selected results from a BLAST search.

Sequence conservation and variability metrics: FLAVIdB is equipped with tools for sequence conservation and variability analysis. Variability analysis can be performed on entries grouped by protein and further narrowed down by virus type or subtype, and by host of isolation. The variability analysis at the amino acid level is based on calculation of Shannon entropy [19] at each position in a MSA.
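The redundancy-elimination and length-filtering rules just described can be illustrated with a short Python sketch; the (accession, sequence) record structure and the toy sequences are assumptions for illustration, not FLAVIdB's actual schema.

# Sketch of the data cleaning rules: merge duplicate sequences into one entry
# and drop fragments shorter than 23 residues (the length of the 2K peptide).
MIN_LEN = 23

def clean(records):
    merged = {}  # sequence -> list of accession ids sharing that sequence
    for acc, seq in records:
        if len(seq) < MIN_LEN:
            continue  # too short to align or annotate reliably
        merged.setdefault(seq, []).append(acc)
    # One entry per unique sequence, remembering all source accessions.
    return [(accs[0], seq, accs) for seq, accs in merged.items()]

toy = "MKTLLVLAIAPAYSFNCLGMSNR"  # 23-residue toy sequence
entries = clean([("A1", toy), ("A2", toy), ("A3", toy[:10])])
# -> one merged entry for A1/A2; A3 is dropped as a short fragment
print(entries)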
The entropy is calculated using the formula

H(x) = -∑_{i=1}^{I} p_i log2(p_i),

where H is the entropy, x is the position in the MSA, i indexes the individual amino acids at position x, I is the number of different amino acids at position x, and p_i is the frequency of the given amino acid. Conservation of a position x is defined by the frequency of the consensus amino acid.

Block entropy analysis

To accommodate conservation and variability analysis of T-cell epitopes, we developed a new entropy measurement method, specifically designed for the analysis of short peptides, 8-11 amino acids in length. This approach is based on calculation of entropy for each overlapping window (block) of 8-11 amino acids in a given MSA of homologous proteins. Entropy and conservation are then calculated for the peptides, rather than for individual residues. Since T-cell epitopes are recognized as peptides and not as individual residues, this approach provides a more representative image of the conservation of linear epitopes. For class I T-cell epitopes, the size of the window ranges between 8 and 11 amino acids [20], while class II epitopes are typically 13-20 amino acids long with a binding core of at least 9 amino acids [21,22]. The results of block entropy analysis are displayed alongside traditional entropy analysis to further clarify the peptide diversity and its relationship to individual amino acid diversity.

Species classification

FLAVIdB enables classification of newly acquired sequences that belong to species of flaviviruses. Assignment of species to unassigned strains is performed using the BLAST algorithm [18] in combination with the knowledge of phylogenetic traits of the genus flavivirus reported in [1]. Because its main purpose is the analysis of antigenic properties of viruses, FLAVIdB keeps nucleotide sequence data for each of the protein entries in the background; these data are not searchable by keywords or by browsing. For species classification, we use the similarity rule defined in [1]. The nucleotide sequence similarity search is performed in FLAVIdB using the BLAST algorithm, after which the query is compared to the highest-scoring match. If the pairwise identity of query and match is 84% or greater, the query is considered to be of the same species as its match. Species classification can only be performed on full (or nearly full) viral genome sequences. Since some proteins are far less variable than others, submitting a single gene sequence could give ambiguous or incorrect results.

Advanced analysis tools

Prediction of MHC class I binders: Prediction of peptide binding affinity to MHC class I is performed using the neural network and weight matrix based prediction algorithm NetMHC 3.2 [23]. Epitope prediction in FLAVIdB is available for the following HLA alleles: HLA-A*0201, HLA-A*0301, HLA-A*1101, HLA-A*2402, HLA-B*0702, HLA-B*0801, and HLA-B*1501, since NetMHC 3.2 predictions for these alleles were independently validated and assessed as highly accurate [24].

Characterization of shared neutralizing components of B-cell epitopes (BBscore): In DENV, it is essential that antibody-based vaccines afford broad neutralization across all four serotypes. The tool for B-cell epitope analysis in FLAVIdB is based on comparative analysis of known B-cell epitopes together with comparison of corresponding binding and neutralization assay data. Features shared by neutralizing epitopes against all four serotypes are extracted and presented on 3D models of the envelope protein.
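The per-position Shannon entropy and the block entropy described earlier in this section are straightforward to prototype. The following Python sketch assumes a toy MSA of equal-length strings and makes simplifying assumptions about gap handling that the actual FLAVIdB implementation may treat differently.

# Sketch of positional Shannon entropy and block entropy over a toy MSA.
import math
from collections import Counter

def column_entropy(column):
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def positional_entropy(msa):
    # One entropy value per alignment column.
    return [column_entropy(col) for col in zip(*msa)]

def block_entropy(msa, k=9):
    # Entropy of whole k-mers (blocks), mirroring how T cells recognize
    # peptides rather than individual residues.
    n = len(msa[0])
    out = []
    for start in range(n - k + 1):
        blocks = Counter(seq[start:start + k] for seq in msa)
        total = sum(blocks.values())
        out.append(-sum((c / total) * math.log2(c / total)
                        for c in blocks.values()))
    return out

msa = ["MKTLIVASL", "MKTLIVASL", "MRTLIVQSL"]
print(positional_entropy(msa))
print(block_entropy(msa, k=9))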
Furthermore, users can map these shared features onto any envelope protein sequence in FLAVIdB. At present, structural data for the envelope protein are only publicly available (in the PDB) for dengue, West Nile, Japanese encephalitis, Langat, Omsk hemorrhagic fever, and yellow fever viruses. Useful amounts of experimentally validated epitopes and corresponding biochemical/functional assay data are only available for DENV and WNV. Thus, the BBscore tool is currently limited to the analysis of DENV and WNV. The breadth and depth of the characterization of neutralizing properties of antigenic sites on surface proteins will automatically improve as more epitopes and assay data become available in primary sources such as the IEDB [15].

Database content

As of June 2011, FLAVIdB contains 12,858 entries, with sequences from 87 flavivirus species consisting of 65 classified species and 24 provisional or unclassified species (see Table 1). The first release of the database (June 2011) has 3,120 complete proteome sequences and 9,738 partial sequences. Each proteome entry was annotated with its individual protein (or, in some cases, protein fragment) constituents. Each entry contains protein sequences along with additional annotations describing various strain information (see Table 2). Each entry was given a shorthand nomenclature, sequence name, and source strain identifier. The nomenclature contains information about species, host, country (ISO code), strain name, isolate name, clone name, and year of collection. For example, one entry's nomenclature represents a DENV type 1 isolated from a human host, at geographic location Thailand, in the year 2002, with the specific strain NIID2, isolate 133, clone 02-20. For the repository of experimentally determined B-cell epitopes, each entry (only applicable to DENV in the current release) describes positions in the protein sequence, species, serotype, publication reference, and data from binding assays and neutralization assays. For the repository of experimentally determined T-cell epitopes, each entry is described by location in the protein, epitope sequence, HLA restriction, and publication references.

Data quality

The population of FLAVIdB was subject to rigorous quality control. Approximately 500 sequence errors and artifacts (nonsensical characters, and frameshift mutations rendering the protein sequences in question of no use for conservation and variability analysis) were detected and corrected or removed. More than 1,000 metadata terms used in primary sources were consolidated into approximately 200. To support data term consolidation, a library of all fields from all entries was created. The library was used for semi-automated consolidation of the entry vocabulary by merging redundancies such as "US", "U.S.", "United States of America", "America", etc., into the FLAVIdB convention "USA" and the corresponding ISO code used in FLAVIdB nomenclature, "US". The species classification error analysis led to the identification and correction of 17 strains, and the classification of seven previously unclassified serotypes of DENV. Furthermore, the entries were enriched by definition of additional metadata.
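Because the exact nomenclature string is not reproduced in this excerpt, the following Python sketch assumes a hypothetical underscore-delimited layout with the seven fields listed above; the field order and separator are illustrative assumptions only.

# Sketch of parsing a hypothetical FLAVIdB-style nomenclature string.
FIELDS = ["species", "host", "country", "strain", "isolate", "clone", "year"]

def parse_nomenclature(name):
    parts = name.split("_")
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

# Hypothetical rendering of the DENV-1 example described in the text:
entry = parse_nomenclature("DENV1_Human_TH_NIID2_133_02-20_2002")
print(entry["country"], entry["year"])  # -> TH 2002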
The specific information includes: location of collection, host of collection, time of collection, strain type (whether the entry genome was derived from a wild-type, laboratory, or vaccine strain), pathology (whether infection led to disease and/or fatality), and comments on protein function (some mutant strains encode nonfunctional protein products). The differences between the available annotations in the primary data from external sources and the enriched data in FLAVIdB are shown in Table 3.

Analysis tools

FLAVIdB is equipped with several generic data analysis tools as well as three tools specifically developed for FLAVIdB: block entropy analysis, BBscore, and the viral species classifier (Table 4).

Workflows

To accommodate rapid and extensive analysis without the need for local computation or moving data between individual analysis tools or different web servers, FLAVIdB includes predefined analysis workflows. A workflow is an automated process that takes a request from the user, performs complex analysis by combining data and tools preselected for common questions, and produces a comprehensive report [13]. These workflows demonstrate both the utility and flexibility of the data mining infrastructure in FLAVIdB. For the current release we have developed and implemented two data mining workflows: a summary workflow and a query analyzer workflow.

Summary workflow: The summary workflow can be applied to all sequences in the database or to any defined subset thereof. The purpose of this workflow is to summarize potential vaccine targets common to all entries within the FLAVIdB, or to a subset of entries, such as a summary analysis of one or more species within the FLAVIdB. The results of application of each analysis tool are presented to the user in a single printable output report. The structure and components of the summary workflow are presented in the flowchart in Figure 3. The utility of the summary workflow is particularly important at the very beginning of research projects, or as an incremental analysis of existing projects upon a database update.

Query analyzer workflow: The query analyzer workflow is a useful tool for researchers who need to rapidly analyze newly sequenced strains or previously uncharacterized sequences found in the database. The query analyzer workflow applies the existing data mining modules to the query in a predefined order, and the analysis results are presented in a single printable output report. The first step is sequence selection, either directly from the database or by submission of a nucleotide sequence followed by species classification. The analysis continues with parallel application of T-cell epitope, B-cell epitope, and variability analysis, and a final step of report generation. The steps and tools involved in the query analyzer workflow are shown in Figure 4.

Table 3: The results of the data enrichment. Direct parsing is performed by considering only information available in the dedicated fields in GenBank entries. GenBank does not have dedicated fields for the information marked with an asterisk (*), so some form of automated text mining was applied to extract this information.
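The workflow idea, a fixed pipeline of modules producing one report, can be sketched as follows in Python; the step functions are hypothetical stand-ins for FLAVIdB's actual modules.

# Sketch of a predefined analysis workflow: a fixed sequence of analysis
# steps applied to a query, with all results collected into one report.
def classify_species(seq):        return {"species": "DENV-1"}    # stub
def predict_tcell_epitopes(seq):  return {"tcell": ["PEPTIDE1"]}  # stub
def characterize_bcell(seq):      return {"bcell": ["site-A"]}    # stub
def variability_analysis(seq):    return {"entropy": [0.1, 0.4]}  # stub

QUERY_ANALYZER = [classify_species, predict_tcell_epitopes,
                  characterize_bcell, variability_analysis]

def run_workflow(steps, query):
    report = {}
    for step in steps:
        report.update(step(query))  # each step contributes a report section
    return report

print(run_workflow(QUERY_ANALYZER, "MNNQRKKT"))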
The lack of a dedicated field does not necessarily mean that information about strain type, pathology, or sequence annotation is not present in other fields (such as comments/notes) in the entries, but extraction was only possible to automate to a very limited extent without error and artifact propagation.

Application of the summary workflow to DENV

Application of the summary workflow to all four subtypes of DENV revealed a pool of 333 T-cell epitope candidates, which can potentially be combined in a polyvalent vaccine composed of five synthetic dengue virus proteins. The analysis of B-cell epitopes revealed five conserved positions in the dengue virus envelope protein that are targeted in antibody-based neutralization. These results were generated by submitting all available DENV sequences to the FLAVIdB summary workflow, thus demonstrating the utility of the workflows for comprehensive identification of vaccine targets. Figure 5 shows the input submission screen, and Figure 6 shows the conservation and variability of the DENV E protein and the block entropy part of the output report.

Conclusion

FLAVIdB is a comprehensive database of Flavivirus spp. antigens extracted from multiple external sources (GenPept and UniProt, epitope data from IEDB, and structural data from the PDB). The FLAVIdB data have been manually curated and enriched by the extraction of additional annotations available from the corresponding literature, ensuring high data quality and data completeness. We have integrated the annotated data with a data mining infrastructure consisting of data mining tools for discovery of vaccine targets. This infrastructure provides an automated data mining platform, enabling extraction of higher-level knowledge about T-cell epitopes and neutralizing B-cell epitopes in the flaviviruses. We have also defined and implemented two workflows in which data mining tools have been arranged to rapidly and seamlessly extract knowledge from the data stored in the FLAVIdB. The query analyzer workflow enables analysis of single sequences for vaccine targets, whereas the summary workflow summarizes vaccine targets across a larger data set. The modular structure of FLAVIdB can easily be modified for application to other viral pathogens, as well as for integration of new analysis modules and workflows. The main purpose of FLAVIdB is to enable users to perform knowledge discovery from viral antigen data, with particular emphasis on applications in immunology and vaccinology. Prediction and characterization of immunogenic epitopes is a critical step in identification and assessment of potential vaccine targets. This process is not identical for different viruses. For example, in DENV it is essential that vaccines are designed to elicit cross-protection to all four serotypes, due to complications in secondary infection by different subtypes. To analyze DENV vaccine targets, it is necessary to identify and compile precise and detailed information about antigenic sites which are conserved across all four serotypes, perform a detailed analysis of antigenic diversity, and understand variant representation in both time and geographic spread. In FLAVIdB, the analysis tools are organized in data mining workflows composed of preselected sets of tools applied to user-defined data sets from FLAVIdB.
With new knowledge accumulating, we plan to extend the tools and workflows so that precise and detailed analysis can be automated and brought directly to the virologist's and vaccinologist's workbench. FLAVIdB differs from existing dedicated flavivirus databases in that it offers novel analysis tools (BBscore, block entropy analysis, and flavivirus species classification) specifically developed for the analysis of immunological and vaccinological properties of flaviviruses. These tools, along with generic analysis tools, were implemented in the two workflows providing automated report generation. Furthermore, the sequence data in FLAVIdB are fully annotated with protein cleavage sites and T-cell and B-cell epitopes, which is only partially addressed in some existing dedicated flavivirus sequence data resources. FLAVIdB represents a new breed of bioinformatics databases that tightly integrates the content (data), analysis tools, and workflows to enable the automation of complex queries, data mining, and report generation. These "new generation" immunological databases shift focus from retrieval and simple analyses to complex analyses and extraction of high-level knowledge. We expect this database to serve as a template for the development of similar integrated resources for other pathogens.

Figure 4: Flowchart of the steps in the query analyzer workflow. The query sequence is selected either from the database or submitted directly through the input window. If manual submission is used, the species is first classified using the species classification tool. T-cell epitopes are predicted and B-cell epitopes are characterized. Finally, a variability analysis is performed for sequences related to the query. The combined results are presented in an output report.

Figure 6: A screenshot of the output report from the summary workflow applied to all four serotypes of DENV. The top graph shows conservation (blue line) and entropy (red line) for the DENV envelope protein, with the consensus amino acid sequence on the x-axis. The following table is a summary of binding assay data and neutralization assay data from IEDB. The bottom graph is the summary of the block entropy analysis. On the x-axis are the starting positions of each peptide block analyzed, and on the y-axis is the number of peptides required to achieve an accumulated frequency of 99% within each given block. The lack of data from starting points 148-158 is due to a high fraction of gaps at these positions in the multiple sequence alignment.
From the Quasi-Total Strong Differential to Quasi-Total Italian Domination in Graphs

This paper is devoted to the study of the quasi-total strong differential of a graph, and it is a contribution to the Special Issue "Theoretical computer science and discrete mathematics" of Symmetry. Given a vertex x ∈ V(G) of a graph G, the neighbourhood of x is denoted by N(x). The neighbourhood of a set X ⊆ V(G) is defined to be N(X) = ⋃_{x∈X} N(x), while the external neighbourhood of X is defined to be Ne(X) = N(X) ∖ X. Now, for every set X ⊆ V(G) and every vertex x ∈ X, the external private neighbourhood of x with respect to X is defined as the set Pe(x, X) = {y ∈ V(G) ∖ X : N(y) ∩ X = {x}}. Let Xw = {x ∈ X : Pe(x, X) ≠ ∅}. The strong differential of X is defined to be ∂s(X) = |Ne(X)| − |Xw|, while the quasi-total strong differential of G is defined to be ∂s*(G) = max{∂s(X) : X ⊆ V(G) and Xw ⊆ N(X)}. We show that the quasi-total strong differential is closely related to several graph parameters, including the domination number, the total domination number, the 2-domination number, the vertex cover number, the semitotal domination number, the strong differential, and the quasi-total Italian domination number. As a consequence of the study, we show that the problem of finding the quasi-total strong differential of a graph is NP-hard.

Introduction

Given a graph G = (V(G), E(G)), the open neighbourhood of a vertex x ∈ V(G) is defined to be N(x) = {y ∈ V(G) : xy ∈ E(G)}. The open neighbourhood of a set X ⊆ V(G) is defined by N(X) = ⋃_{x∈X} N(x), while the external neighbourhood of X, or boundary of X, is defined as Ne(X) = N(X) \ X. The differential of a subset X ⊆ V(G) is defined as ∂(X) = |Ne(X)| − |X|, and the differential of a graph G is defined as ∂(G) = max{∂(X) : X ⊆ V(G)}. These concepts were introduced by Hedetniemi about twenty-five years ago in an unpublished paper, and the preliminary results on the topic were developed by Goddard and Henning [1]. The development of the topic was subsequently continued by several authors, including [2-7]. Currently, the study of differentials in graphs and their variants is of great interest because it has been observed that the study of different types of domination can be approached through a variant of the differential which is related to them. Specifically, we are referring to domination parameters that are necessarily defined through the use of functions, such as Roman domination, perfect Roman domination, Italian domination, and unique response Roman domination. In each case, the main result linking the domination parameter to the corresponding differential is a Gallai-type theorem, which allows us to study these domination parameters without the use of functions. For instance, the differential is related to the Roman domination number [3], the perfect differential is related to the perfect Roman domination number [5], the strong differential is related to the Italian domination number [8], and the 2-packing differential is related to the unique response Roman domination number [9]. Next, we will briefly describe the case of the strong differential and then introduce the study of the quasi-total strong differential. We refer the reader to the corresponding papers for details on the other cases. For any x ∈ X, the external private neighbourhood of x with respect to X is defined to be Pe(x, X) = {y ∈ V(G) \ X : N(y) ∩ X = {x}}. We define the set Xw = {x ∈ X : Pe(x, X) ≠ ∅}.
The strong differential of a set X is defined to be ∂s(X) = |Ne(X)| − |Xw|, while the strong differential of G is defined to be ∂s(G) = max{∂s(X) : X ⊆ V(G)}. As shown in [8], the problem of finding the strong differential of a graph is NP-hard, and this parameter is closely related to several graph parameters. In particular, the theory of strong differentials allows us to develop the theory of Italian domination without the use of functions. In this paper, we study the quasi-total strong differential of G, which is defined as ∂s*(G) = max{∂s(X) : X ⊆ V(G) and Xw ⊆ N(X)}. We will show that this novel parameter is perfectly integrated into the theory of domination. In particular, we will show that the quasi-total strong differential is closely related to several graph parameters, including the domination number, the total domination number, the 2-domination number, the vertex cover number, the semitotal domination number, the strong differential, and the quasi-total Italian domination number. As a consequence of the study, we show that the problem of finding the quasi-total strong differential of a graph is NP-hard. The paper is organised as follows. Section 2 is devoted to establishing the main notation, terminology, and tools needed to develop the remaining sections. In Section 3 we obtain several bounds on the quasi-total strong differential of a graph and discuss the tightness of these bounds. In Section 4 we prove a Gallai-type theorem which shows that the theory of quasi-total strong differentials can be applied to develop the theory of Italian domination, provided that the Italian dominating functions fulfil an additional condition. Finally, in Section 5 we show that the problem of finding the quasi-total strong differential of a graph is NP-hard.

Notation, Terminology and Basic Tools

Throughout the paper, we will use the notation G ≅ H if G and H are isomorphic graphs. Given a set X ⊆ V(G), the subgraph of G induced by X will be denoted by G[X], while (for simplicity) the subgraph induced by V(G) \ X will be denoted by G − X. The minimum degree, the maximum degree, and the order of G will be denoted by δ(G), ∆(G), and n(G), respectively. A leaf of G is a vertex of degree one. A support vertex of G is a vertex which is adjacent to a leaf, while a strong support vertex is a vertex which is adjacent to at least two leaves. The sets of leaves, support vertices, and strong support vertices of G will be denoted by L(G), S(G), and Ss(G), respectively. A dominating set of G is a subset D ⊆ V(G) such that N(v) ∩ D ≠ ∅ for every v ∈ V(G) \ D. Let D(G) be the set of dominating sets of G. The domination number of G is defined to be γ(G) = min{|D| : D ∈ D(G)}. The domination number has been extensively studied. For instance, we cite the following books [10-12]. We define a γ(G)-set as a set D ∈ D(G) with |D| = γ(G). The same agreement will be assumed for optimal parameters associated to other characteristic sets of a graph. For instance, a ∂s*(G)-set will be a set X ⊆ V(G) such that Xw ⊆ N(X) and ∂s(X) = ∂s*(G), as described in Figure 1. A total dominating set of G is a subset D ⊆ V(G) such that N(v) ∩ D ≠ ∅ for every v ∈ V(G). Let Dt(G) be the set of total dominating sets of G. The total domination number of G is defined to be γt(G) = min{|D| : D ∈ Dt(G)}. The total domination number has been extensively studied. For instance, we cite the book [13]. A k-dominating set of G is a subset D ⊆ V(G) such that |N(v) ∩ D| ≥ k for every vertex v ∈ V(G) \ D. Let Dk(G) be the set of k-dominating sets of G.
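For small graphs, the quantities ∂s(X) and ∂s*(G) defined above can be checked directly by brute force. The following Python sketch, assuming networkx, enumerates all vertex subsets; it is exponential in n(G) and therefore only suitable for toy examples. It also includes a brute-force k-domination routine used to check the equality ∂s*(G) = n(G) − γ2(G) of Theorem 7 for small paths and cycles (both have maximum degree at most 3).

# Brute-force computation of the strong differential and the quasi-total
# strong differential; exponential, for toy graphs only. Assumes networkx.
from itertools import combinations
import networkx as nx

def strong_differential_parts(G, X):
    X = set(X)
    NX = set().union(*(G[v] for v in X))          # N(X)
    Ne = NX - X                                   # external neighbourhood
    Xw = {x in X and x for x in X
          if any(set(G[y]) & X == {x} for y in set(G) - X)}  # private nbrs
    return len(Ne) - len(Xw), Xw, NX

def quasi_total_strong_differential(G):
    best = 0
    for r in range(1, len(G) + 1):
        for X in combinations(G, r):
            d, Xw, NX = strong_differential_parts(G, X)
            if Xw <= NX:          # quasi-total condition: Xw subset of N(X)
                best = max(best, d)
    return best

def gamma_k(G, k=2):
    nodes = list(G)
    for r in range(1, len(nodes) + 1):
        for D in combinations(nodes, r):
            Ds = set(D)
            if all(len(set(G[v]) & Ds) >= k for v in nodes if v not in Ds):
                return r          # smallest k-dominating set found first
    return len(nodes)

# Theorem 7 check on small paths and cycles (maximum degree <= 3):
for n in range(3, 8):
    for G in (nx.path_graph(n), nx.cycle_graph(n)):
        assert quasi_total_strong_differential(G) == len(G) - gamma_k(G)
print("quasi-total strong differential equals n - gamma_2 on the samples")

One subtlety worth noting in the sketch: the set Xw is built by testing, for each x in X, whether some outside vertex y has N(y) ∩ X = {x}, which is exactly the external private neighbour condition from the definition.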
The k-domination number of G is defined to be γk(G) = min{|D| : D ∈ Dk(G)}. For a comprehensive survey on k-domination in graphs, we cite the book [10], published in 2020. In particular, there is a chapter, Multiple Domination, by Hansberg and Volkmann, where they put into context all relevant research results on multiple domination concerning k-domination that have been found up to 2020. In particular, the following result will be useful in the study of quasi-total strong differentials.

Theorem 1 ([14]). Let r and k be positive integers. For any graph G with δ(G) ≥ ((r+1)/r)k − 1, γk(G) ≤ (r/(r+1)) n(G).

A semitotal dominating set of a graph G with no isolated vertex is a dominating set D of G such that every vertex in D is within distance two of another vertex in D. This concept was introduced in 2014 by Goddard et al. in [15]. Let Dt2(G) be the set of semitotal dominating sets of G. The semitotal domination number of G is defined to be γt2(G) = min{|D| : D ∈ Dt2(G)}. A set C ⊆ V(G) is a vertex cover of G if every edge of G is incident with at least one vertex in C. The vertex cover number of G, denoted by β(G), is the minimum cardinality among all vertex covers of G. Recall that the largest cardinality of a set of vertices of G, no two of which are adjacent, is called the independence number of G, and it is denoted by α(G). The following well-known result, due to Gallai, states the relationship between the independence number and the vertex cover number of a graph.

Theorem 2 (Gallai's theorem, [16]). For any graph G, α(G) + β(G) = n(G).

The concept of a corona product graph was introduced in 1970 by Frucht and Harary [17]. Given two graphs G1 and G2, the corona product graph G1 ⊙ G2 is the graph obtained from G1 and G2 by taking one copy of G1 and n(G1) copies of G2 and joining by an edge every vertex from the i-th copy of G2 with the i-th vertex of G1. Notice that n(G1 ⊙ G2) = n(G1)(n(G2) + 1) and γ(G1 ⊙ G2) = n(G1). The following result will be one of our main tools. For the remainder of the paper, definitions will be introduced whenever a concept is needed. In particular, this is the case for concepts, notation and terminology that are used only once or only in a short section.

General Results

To begin this section we present some bounds on the quasi-total strong differential of a graph, and then we discuss the tightness of the bounds.

Theorem 4. For any graph G, the following statements hold. Proof. To prove the lower bound ∂s*(G) ≥ n(G) − γ2(G), we only need to observe that for any γ2(G)-set D we have Dw = ∅, and hence ∂s(D) = |Ne(D)| = n(G) − γ2(G). Finally, to complete the proof of (ii) we only need to combine the previous bounds with Theorem 3. In order to show some classes of graphs with ∂s*(G) = ∂s(G) and ∂s*(G) = n(G) − γ(G) − |Ss(G)|, we consider the case of corona graphs. It is not difficult to see that if G1 has no isolated vertex and G2 is a nontrivial graph, then ∂s*(G1 ⊙ G2) = ∂s(G1 ⊙ G2). In addition, if G2 is a graph with at least two isolated vertices, then ∂s*(G1 ⊙ G2) = n(G1 ⊙ G2) − γ(G1 ⊙ G2) − |Ss(G1 ⊙ G2)|. Next we discuss some cases where the lower bounds given in Theorem 4 are achieved.

Theorem 5. For any graph G, the following statements are equivalent.

We are now able to characterize the graphs with ∂s*(G) = n(G) − γ(G).

Theorem 6. For any graph G, the following statements are equivalent. However, the converse does not hold. For instance, as we will see in Corollary 2, if G is a path or a cycle, then ∂s*(G) = n(G) − γ2(G). We next consider some cases of graphs satisfying ∂s*(G) = n(G) − γ2(G).

Theorem 7. Let G be a graph. If ∆(G) ≤ 3 or G is a claw-free graph, then ∂s*(G) = n(G) − γ2(G).

Proof.
By Lemma 1, there exists D ∈ D(G) which is a ∂s*(G)-set and |Pe(v, D)| ≥ 2 for every v ∈ Dw. Assume that ∆(G) ≤ 3. We define a set D′ ⊆ V(G) as follows. Notice that N(v) ∩ D′ = ∅ and |N(v) \ D′| = |Pe(v, D)| = 2 for every v ∈ Dw. Hence, D′ ∈ D(G) and D′w = ∅, which implies that D′ is a 2-dominating set of G. Therefore, ∂s*(G) = n(G) − |D′| ≤ n(G) − γ2(G), and we deduce the equality from the lower bound ∂s*(G) ≥ n(G) − γ2(G) given in Theorem 4. Now, assume that G is a claw-free graph. Observe that in this case Pe(v, D) is a clique for every v ∈ Dw, as N(v) ∩ D ≠ ∅. Let X ⊆ V(G) \ D such that |X| = |Dw| and |X ∩ Pe(v, D)| = 1 for every v ∈ Dw. Notice that X′ = D ∪ X is a 2-dominating set of G. Hence, γ2(G) ≤ |X′| = |D| + |Dw|, and so ∂s*(G) = ∂s(D) = n(G) − |D| − |Dw| ≤ n(G) − γ2(G). Therefore, by the lower bound ∂s*(G) ≥ n(G) − γ2(G) given in Theorem 4 we conclude the proof. The following result is a direct consequence of Theorem 7 and the well-known equalities γ2(Cn) = ⌈n/2⌉ and γ2(Pn) = ⌈(n+1)/2⌉ due to Fink and Jacobson [18].

Corollary 2. For any integer n ≥ 3, ∂s*(Cn) = ⌊n/2⌋ and ∂s*(Pn) = ⌊(n−1)/2⌋.

By Theorems 1 and 4 we derive the following result.

Theorem 8. Given a graph G, the following statements hold. For instance, for any cubic graph with γ2(G) = n(G)/2 we have ∂s*(G) = n(G)/2, and for any corona graph of the form G ≅ G1 ⊙ K2 we have ∂s*(G) = ∂s(G) = n(G)/3. We next discuss the relationship between the quasi-total strong differential and the semitotal domination number.

Theorem 9. Given a graph G with no isolated vertex, the following statements hold. Equality holds if and only if one of the following conditions holds.

Next we derive some lower bounds on ∂s*(G).

Theorem 10. For any graph G with every component of order at least three, Proof. Let S be a γ(G)-set such that S(G) ⊆ S, and let S̄ = V(G) \ S. Now, we define S′ ⊆ S̄ as a set of minimum cardinality among all subsets of S̄ that satisfy the following conditions. Therefore, the result follows. The bound above is tight. For instance, it is achieved by the graphs shown in Figure 2.

Theorem 11. For any graph G with no isolated vertex, ∂s*(G) ≥ n(G) − γt(G) − γ(G).

Proof. Let S1 be a γt(G)-set and S2 a γ(G)-set. Let S = S1 ∪ S2. As S1 ∈ Dt(G) and S2 ∈ D(G), we deduce that Sw ⊆ N(S) and Sw ⊆ S1 ∩ S2. Hence, ∂s*(G) ≥ ∂s(S) = n(G) − |S| − |Sw| ≥ n(G) − |S1| − |S2| = n(G) − γt(G) − γ(G), as desired. The bound above is tight. Figure 3 shows a graph G achieving this bound.

Theorem 12. For any graph G with every component of order at least three, ∂s*(G) ≥ n(G) − β(G) − |S(G)| − |Ss(G)|.

Proof. Let S be a β(G)-set such that S(G) ⊆ S. Now, we define S′ ⊆ L(G) such that |S′| = |S(G)| and |N(v) ∩ S′| = 1 for every vertex v ∈ S(G). Hence, S″ = S ∪ S′ is a dominating set, S″w ⊆ Ss(G) and S″w ⊆ N(S′), which implies that ∂s*(G) ≥ ∂s(S″) ≥ n(G) − β(G) − |S(G)| − |Ss(G)|. Therefore, the result follows. The bound above is tight. For instance, Figure 3 shows a graph G attaining this bound. Notice that Theorems 2 and 12 lead to the following bound.

Theorem 13. For any graph G with every component of order at least three, ∂s*(G) ≥ α(G) − |S(G)| − |Ss(G)|.

In particular, for graphs of minimum degree at least two we deduce the following result.

Theorem 14. For any graph G with δ(G) ≥ 2, the following statements hold.

Next we discuss the trivial bounds on ∂s*(G) and we characterize the extreme cases.

Proposition 1. For any graph G of order n(G) ≥ 3, the following statements hold. To prove the remaining statements, we take a ∂s*(G)-set D ∈ D(G), which exists due to Lemma 1. To conclude this section, we discuss the case of join graphs.

Proposition 2. For any two graphs G and H we have the following statements.

A Gallai-Type Theorem

A Gallai-type theorem is a result of the form a(G) + b(G) = n(G), where a(G) and b(G) are parameters defined on G. This terminology comes from Theorem 2, which is a well-known result stated by Gallai in 1959.
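As a warm-up for the Gallai-type results, the classical identity α(G) + β(G) = n(G) from Theorem 2 can be verified by brute force on small graphs; the following Python sketch assumes networkx and is, again, exponential and intended for toy examples only.

# Brute-force check of Gallai's identity alpha(G) + beta(G) = n(G).
from itertools import combinations
import networkx as nx

def independence_number(G):
    for r in range(len(G), 0, -1):
        for S in combinations(G, r):
            if not any(G.has_edge(u, v) for u, v in combinations(S, 2)):
                return r  # largest independent set found first
    return 0

def vertex_cover_number(G):
    for r in range(0, len(G) + 1):
        for C in combinations(G, r):
            Cs = set(C)
            if all(u in Cs or v in Cs for u, v in G.edges()):
                return r  # smallest vertex cover found first
    return len(G)

for G in (nx.path_graph(5), nx.cycle_graph(6), nx.petersen_graph()):
    assert independence_number(G) + vertex_cover_number(G) == len(G)
print("Gallai identity verified on the sample graphs")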
The aim of this section is to identify the parameter a(G) such that a(G) + ∂s*(G) = n(G). We will show that this invariant, which is associated to a version of Italian domination, is perfectly integrated into the theory of domination. Let f : V(G) → {0, 1, 2} be a function and Vi = {v ∈ V(G) : f(v) = i} for i ∈ {0, 1, 2}. We will identify the function f with these subsets of V(G) induced by f, and write f(V0, V1, V2). The weight of f is defined to be ω(f) = ∑_{v∈V(G)} f(v) = |V1| + 2|V2|. The theory of Roman domination was introduced by Cockayne et al. [19]. They defined a Roman dominating function on a graph G to be a function f(V0, V1, V2) satisfying the condition that every vertex in V0 is adjacent to at least one vertex in V2. Recently, Cabrera García et al. [20] defined a quasi-total Roman dominating function as a Roman dominating function f(V0, V1, V2) satisfying, in addition, that every vertex in V2 is adjacent to a vertex in V1 ∪ V2. An Italian dominating function on a graph G is a function f(V0, V1, V2) satisfying that f(N(v)) = ∑_{u∈N(v)} f(u) ≥ 2 for every v ∈ V0; i.e., f(V0, V1, V2) is an Italian dominating function if N(v) ∩ V2 ≠ ∅ or |N(v) ∩ V1| ≥ 2 for every v ∈ V0. Hence, every Roman dominating function is an Italian dominating function. The concept of Italian domination was introduced by Chellali et al. in [21] under the name Roman {2}-domination. The term Italian domination was later introduced by Henning and Klostermeyer [22,23]. The Italian domination number, denoted by γI(G), is the minimum weight among all Italian dominating functions on G. The following Gallai-type theorem for the strong differential and the Italian domination number was stated in [8].

Theorem 15 (Gallai-type theorem, [8]). For any graph G, ∂s(G) + γI(G) = n(G).

We say that an Italian dominating function f(V0, V1, V2) is a quasi-total Italian dominating function if every vertex in V2 is adjacent to a vertex in V1 ∪ V2. Clearly, every quasi-total Roman dominating function is a quasi-total Italian dominating function. The quasi-total Italian domination number, denoted by γI*(G), is the minimum weight among all quasi-total Italian dominating functions on G.

Theorem 16 (Gallai-type theorem). For any graph G, ∂s*(G) + γI*(G) = n(G).

Proof. By Lemma 1, there exists a ∂s*(G)-set D which is a dominating set of G. Hence, the function g(W0, W1, W2), defined from W1 = D \ Dw and W2 = Dw, is a quasi-total Italian dominating function on G, which implies that γI*(G) ≤ ω(g) = |D| + |Dw| = n(G) − ∂s(D) = n(G) − ∂s*(G). We proceed to show that γI*(G) ≥ n(G) − ∂s*(G). Let f(V0, V1, V2) be a γI*(G)-function. It is readily seen that for D = V1 ∪ V2 we have that D \ Dw = V1 and Dw = V2. Thus, ∂s*(G) ≥ ∂s(D) = |V0| − |V2| = n(G) − ω(f) = n(G) − γI*(G). Therefore, the result follows.

Computational Complexity

In this section, we show that the problem of finding the quasi-total strong differential of a graph is NP-hard. To this end, we need to establish the following result.

Theorem 17. For any graph G, ∂s*(G ⊙ K1) = n(G ⊙ K1) − γ2(G ⊙ K1) = n(G) − γ(G).

Proof. Given x ∈ V(G), let x′ be the vertex of the copy of K1 associated to x in G ⊙ K1, and let X = {x′ : x ∈ V(G)}. By Lemma 1, there exists a ∂s*(G ⊙ K1)-set A which is a dominating set and |Pe(v, A)| ≥ 2 for every v ∈ Aw. Hence, Aw ∩ X = ∅. Now, if there exists x ∈ V(G) ∩ Aw, then there exists u ∈ Pe(x, A) ∩ V(G) such that u′ ∈ A and N(u′) ∩ A = ∅, which is a contradiction. Hence, Aw = ∅, which implies that A is a 2-dominating set of G ⊙ K1. Thus, ∂s*(G ⊙ K1) = ∂s(A) = n(G ⊙ K1) − |A| ≤ n(G ⊙ K1) − γ2(G ⊙ K1). A direct consequence of the preceding result is the determination of the computational complexity of finding the quasi-total strong differential. Given a graph G and a positive integer t, the domination problem is to decide whether there exists a dominating set S in G such that |S| is at most t.
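Theorem 16 can also be sanity-checked numerically on small graphs. The following Python sketch computes γI*(G) by brute force over all functions f : V(G) → {0, 1, 2}; the quasi-total Italian condition implemented here (every vertex in V2 has a neighbour in V1 ∪ V2) follows the definition completed above, and networkx is assumed. For P5, the earlier brute-force sketch gives ∂s*(P5) = 2, so the identity predicts γI*(P5) = 3.

# Brute-force quasi-total Italian domination number; exponential, toy use only.
from itertools import product
import networkx as nx

def gamma_I_star(G):
    nodes = list(G)
    best = 2 * len(nodes)
    for vals in product((0, 1, 2), repeat=len(nodes)):
        f = dict(zip(nodes, vals))
        pos = {v for v in nodes if f[v] >= 1}  # V1 union V2
        # Italian condition: every 0-vertex sees total weight >= 2.
        italian = all(sum(f[u] for u in G[v]) >= 2
                      for v in nodes if f[v] == 0)
        # Quasi-total condition on the 2-vertices.
        quasi = all(any(u in pos for u in G[v])
                    for v in nodes if f[v] == 2)
        if italian and quasi:
            best = min(best, sum(vals))
    return best

P5 = nx.path_graph(5)
assert gamma_I_star(P5) == 3  # matches n(P5) - quasi-total strong differential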
It is well known that the domination problem is NP-complete. Hence, the optimization problem of finding γ(G) is NP-hard. Therefore, from Theorem 17, we derive the following result.

Corollary 4. Given a graph G, the problem of finding ∂s*(G) is NP-hard.

Conclusions and Open Problems

This article is a contribution to the theory of differentials in graphs. In particular, we introduce the concept of the quasi-total strong differential of a graph. In our study, we show that the quasi-total strong differential is closely related to several graph parameters, including the domination number, the total domination number, the 2-domination number, the vertex cover number, the semitotal domination number, the strong differential, and the quasi-total Italian domination number. Finally, we proved that the problem of finding the quasi-total strong differential of a graph is NP-hard. Some open problems have emerged from the study carried out. For instance, we highlight the following. (a) It would be interesting to obtain some Nordhaus-Gaddum type relations. (b) We have shown that if ∂s*(G) = α(G), then α(G) = n(G) − γ2(G). Likewise, we have shown that if ∂s*(G) = γ(G), then γ(G) = n(G) − γ2(G). However, the problem of characterizing all graphs such that ∂s*(G) = α(G) and ∂s*(G) = γ(G) is still open. (c) Since the optimization problem of finding ∂s*(G) is NP-hard, it would be interesting to devise polynomial-time algorithms for simple families of graphs, or to develop heuristics that allow this parameter to be estimated as accurately as possible for any graph. (d) It would be interesting to investigate the quasi-total strong differential of product graphs, and to try to express this invariant in terms of different parameters of the graphs involved in the product.
Pulling-force generation by ensembles of polymerizing actin filaments

The process by which actin polymerization generates pulling forces in cellular processes such as endocytosis is less well understood than pushing-force generation. To clarify the basic mechanisms of pulling-force generation, we perform stochastic polymerization simulations for a square array of polymerizing semiflexible actin filaments having different interactions with the membrane. The filaments near the array center have a strong attractive component. Filament bending and actin-network elasticity are treated explicitly. We find that the outer filaments push on the membrane and the inner filaments pull, with a net balance of forces. The total calculated pulling force is maximized when the central filaments have a very deep potential well and the outer filaments have no well. The steady-state force is unaffected by the gel rigidity, but equilibration takes longer for softer gels. The force distributions are flat over the pulling and pushing regions. Actin polymerization is enhanced by softening the gel or reducing the filament binding to the membrane. Filament-membrane detachment can occur for softer gels, even if the total binding energy of the filaments to the membrane is 100 kBT or more. It propagates via a stress-concentration mechanism similar to that of a brittle crack in a solid, and the breaking stress is determined by a criterion similar to that of the 'Griffith' theory of crack propagation.

Introduction

In many cellular processes that require large forces to generate membrane curvature, such as formation of protrusions, endocytosis, and phagocytosis, actin is an essential factor [1]. Bending the membrane requires pushing and pulling forces in balance. Generation of pushing forces by actin polymerization has received substantial quantitative experimental study. For example, polymerization of individual actin filaments [2] and small numbers of actin filaments [3] yields forces on the order of pN. Force measurements on growing branched actin networks in vitro using cantilevers [4,5] found pushing pressures of about 0.2-1.0 × 10^-3 pN nm^-2. Force densities on actin-propelled biomimetic beads [6] reach values of at least 2.5 × 10^-4 pN nm^-2. On the other hand, the processes by which actin polymerization generates pulling forces have received less quantitative study. Experiments on motile fluid vesicles propelled by actin comet tails [7,8] found a force distribution dominated by inward pushing forces on the sides of the vesicle and directional pulling forces at the rear of the vesicle. The maximum pulling pressure in [7] was about 3.5 × 10^-4 pN nm^-2. Measurements of force around podosomes, mechanosensitive adhesion cell structures that exert protrusive forces onto the extracellular environment, show that pushing forces from actin polymerization at the core and pulling forces from lateral acto-myosin contractility in the surrounding adhesion ring are required for a single podosome to deform the substrate [9]. Endocytosis in yeast also requires pulling forces from actin. The required magnitude is large, since overcoming the turgor pressure of 0.2 pN nm^-2 or more [10] requires a comparable pulling-force density from actin polymerization. Actin patches consisting of an Arp2/3-branched network [11] form during this process. This network constitutes a crosslinked gel whose mechanical properties are not well known.
The pulling forces required to initiate invagination are generated only after the arrival of actin [12], suggesting that the network generates them. Furthermore, recent superresolution microscopy studies of the geometry of the process demonstrated accumulation of the membrane-filament binding protein Sla2 within a central dot, surrounded by a ring of the actin nucleator Las17 [13] (a WASP homolog). This suggested a generic mode of pulling-force generation, with enhanced actin polymerization in a ring-shaped region creating pulling forces at the center. Deleting the yeast crosslinking protein Sac6, which should reduce the stiffness of the actin gel [14], stops invagination [15]. Reducing the turgor pressure by providing osmotic support across the plasma membrane reduces the requirement for actin filament cross-linkers [16], presumably because the force requirement is lowered. These observations, in combination, show that a stiff actin gel is required for robust pulling-force generation. In addition, mutating Sla2 by deleting its actin-binding domain stops the invagination process [15], showing that strong actin-membrane attachments are crucial. Further contributions are probably generated by curvature-generating proteins such as clathrin. However, these are not sufficient to drive the process, as shown by the correlated electron-microscopy and light-microscopy studies of [12]. This work showed that no measurable membrane bending occurs without polymerized actin, suggesting that actin polymerization is the dominant factor generating pulling forces. Although numerous theoretical models have described how actin polymerization generates pushing force [17,18], generation of pulling forces has been studied less extensively. Most studies have assumed that the total force exerted by the actin network on the membrane vanishes, corresponding to overall force balance on the actin network. This assumption is justified by the smallness of the viscous and inertial forces acting on the actin network [19]. Force balance implies that there are two types of filaments, pushing and pulling. Simple calculations based on the surface area of the invagination and the turgor pressure indicated that forces of over 1000 pN were required to drive endocytic invagination in yeast [20]. Subsequent analyses performed by fitting to observed membrane shapes, including force terms from membrane tension, membrane curvature, and curvature-generating proteins, gave estimates of ~3000 pN for the minimum required actin pulling force [21,22]. A few models have explicitly treated the process by which actin polymerization generates pulling forces during endocytosis in yeast. They have assumed enhanced actin polymerization in a ring-shaped region. Using an actin polymerization rate increasing continuously outwards from the center of the endocytic patch, the continuum-mechanics calculations of [19] showed that even modest actin polymerization forces spread over a large ring can generate a large pulling-force density at the center by a force-amplification process. These calculations were extended in [22] to evaluate the actin growth profile needed to generate the required pulling forces. Subsequent work [23] treating the actin network as a visco-active gel found that it could exert sufficient pulling forces to drive the process. The validity of a continuous deterministic treatment of the discrete stochastic system of filaments and membrane, however, is not clear.
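As a rough consistency check on the >1000 pN estimates quoted above, one can multiply the turgor pressure by a plausible invagination area; the tube geometry used below (radius 15 nm, length 100 nm) is an assumption chosen for illustration, not a value from the text.

# Back-of-the-envelope estimate: pulling force needed to balance turgor
# pressure over an assumed endocytic invagination tube.
import math

turgor = 0.2                     # pN/nm^2, lower-bound turgor pressure [10]
r, L = 15.0, 100.0               # nm, assumed tube radius and length
area = 2 * math.pi * r * L + math.pi * r**2   # side wall plus end cap
print(f"required pulling force ~ {turgor * area:.0f} pN")
# -> on the order of 2000 pN, consistent with the >1000 pN estimates quoted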
Stochastic simulations of the growth of a rigid 3D actin network [24] during endocytosis in yeast assumed a uniform distribution of the actin nucleator Las17 in a ring-shaped region on the membrane. They also assumed that the filament growth velocity is determined by the average opposing force per filament. However, the distribution of the forces exerted on the membrane was not obtained. Simulations based on realistic dimensions and molecular compositions taken from superresolution experiments found that a 3D branched network of actin filaments can produce forces exceeding 1000 pN, enough to overcome turgor pressure [13]. However, in this model, the actin filament stall forces may have been overestimated [25], which could lead to an overestimate of the pulling force. Although these models have confirmed the ability of an actin-nucleator ring to generate pulling forces, there has been no systematic study of the mechanisms determining the magnitude of the pulling force, and how this magnitude is affected by key physical properties. By force balance, the pulling force must be limited by the total stall force of the pushing filaments. However, it is neither clear what fraction of this limit can be achieved practically, nor how rapidly the pulling force reaches its maximum value. In addition, there have been no detailed studies of the spatial distribution of the pulling force, explicitly treating stochastic polymerization of individual filaments. Finally, the possibility of pulling filaments detaching from the membrane has not been treated in detail. This process is plausible because of the large magnitude of the pulling force per pulling filament. The key features that affect pushing-force generation, such as the free-monomer concentration C_A and the on/off rate constants k⁰_on and k⁰_off, are important for pulling-force generation as well. The single-filament stall force obtained by thermodynamic arguments [26] is

F_stall = (k_BT / (δ cos θ)) ln(k⁰_on C_A / k⁰_off),   (1)

where k⁰_on and k⁰_off are rates for a free filament not interacting with an obstacle, C_A is the actin monomer concentration, δ is the actin step size per added subunit, and θ is the angle of incidence relative to the direction of motion. Thermodynamic analysis shows that the stall force is proportional to the number of filaments and is independent of the geometrical details of the growth process, a result confirmed by simulations [27]. However, it has been suggested that ATP hydrolysis can reduce the stall force [28]. Lateral interactions [29,30] can also affect the stall force. But in branched actin networks such as those at endocytic actin patches, the filament spacings of ∼10 nm are large enough that lateral interactions are probably not important. Bending of individual actin filaments, and the elasticity of the actin network as a whole, are also likely important for pulling-force generation. Previous studies have shown that fluctuations of the tips or bases of actin filaments are crucial for obtaining rapid network growth under moderate forces [31,32]. Here we calculate the pulling forces that a discrete array of cross-linked actin filaments with varying polymerization properties exerts on a rigid obstacle. We evaluate the total magnitude of the pulling force, its spatial distribution, the dynamics of the force buildup, and the conditions that lead to detachment of the pulling actin filaments from the membrane. The scale and parameters of the model are chosen to correspond to endocytosis in yeast.
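To make the scale of equation (1) concrete, the short sketch below evaluates it numerically. The free-filament rate ratio is an illustrative assumption chosen to reproduce the per-filament force scale discussed later, not a value from the paper's table 1:

```python
import math

kBT = 4.1                  # pN nm, thermal energy at room temperature
delta = 2.7                # nm, actin step size per added subunit
theta = math.radians(35)   # angle between filament and obstacle normal
rate_ratio = 44.0          # assumed k0_on * C_A / k0_off (illustrative)

# Equation (1): single-filament thermodynamic stall force.
f_stall = kBT / (delta * math.cos(theta)) * math.log(rate_ratio)
print(f"per-filament stall force ~ {f_stall:.1f} pN")    # ~7 pN
print(f"108 pushers -> limit ~ {108 * f_stall:.0f} pN")  # ~760 pN, see below
```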
We assume that there is a group of filaments at the center of the array whose zero-force growth velocity is low because their growing ends are strongly bound to the obstacle. When the remaining 'pushing' filaments grow, the slower-growing filaments inhibit this growth via their indirect linkage to the pushing filaments mediated by the obstacle, and thus exert a pulling force. We vary several parameters, including the pulling filaments' binding strength and the actin gel stiffness, and evaluate the resulting effects on force generation and actin network deformation. We interpret our numerical results from this complete system of filaments, including 'pushers' and 'pullers', via a mean-force model that treats the pushers and pullers separately. In this model, the steady-state force is determined by the requirement that the pushers and pullers grow at a common velocity under equal and opposite forces. We calculate the effect of transient attachments of the pushing filaments to the membrane on the magnitude of the pulling force. We evaluate the dependence of the force-generation time scale on the gel stiffness. Finally, we investigate how detachment of pullers depends on the gel stiffness. We find that strengthening the filament-obstacle binding by choosing deeper potential wells for the central filaments decreases the growth rate of the actin network in this region. This increases the total pulling force up to a maximum that becomes the sum of the stall forces of the surrounding pushing filaments when the central filaments do not polymerize at all. However, even very slow polymerization of the central filaments can strongly reduce the pulling force. We also find that the mean-force model accurately predicts the results of full-system simulations for the total pulling force. We find that the time required for the maximum pulling force to build up is roughly inversely proportional to the gel stiffness. Transient attachments of the pushing filaments reduce the total pulling force, so maximum force is produced when their potential is purely repulsive. Finally, softening the gel, or weakening the binding of the central filaments, can lead to actin gel detachment from the membrane despite a total binding energy of several hundred k_BT or more between the puller filaments and the membrane.

Model

We model the growing actin network at an endocytic site as a 12 × 12 square array of filaments with spacing a, interacting with a flat moving obstacle (see figure 1). The geometry is motivated by the measured architecture [13] of the endocytic actin patch, but to reduce the computational load the 120 × 120 nm square geometry is taken somewhat smaller than the measured circular geometry of radius 100 nm. We treat a fixed number of uncapped filaments rather than treating the dynamics by which filaments are nucleated by Arp2/3 complex and subsequently capped. Thus we model the force-generation properties of the filaments that are uncapped at a given time. The obstacle, corresponding to the combination of the cell wall and membrane, contains a central 6 × 6 patch of a filament-membrane binding protein such as Sla2 (yellow circles in figure 1), surrounded by a square band of nucleation-promoting factors (NPFs), such as the yeast WASP homolog Las17 (purple circles). As the filaments grow, more rapid growth of the filaments in the outer region causes pulling and pushing forces to act on the gel in the directions indicated by the black arrows, deforming the gel.
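A minimal sketch of this assumed array layout, with the central 6 × 6 block of pullers (the index conventions are our own choices, not specified in the text):

```python
import numpy as np

N, a = 12, 10.0                       # array size, filament spacing (nm)
ix, iy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
xy = np.stack([ix, iy], axis=-1) * a  # filament (x, y) positions, nm

# Central 6 x 6 patch of strongly bound 'pullers' (indices 3..8);
# the remaining outer band consists of 'pushers'.
is_puller = (ix >= 3) & (ix < 9) & (iy >= 3) & (iy < 9)
print(is_puller.sum(), "pullers,", (~is_puller).sum(), "pushers")  # 36, 108
print("puller patch spans", xy[is_puller].min(axis=0), "to",
      xy[is_puller].max(axis=0), "nm")
```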
In order to elucidate the physical mechanisms as clearly as possible, we focus on the steady-state force and the buildup to steady state, rather than treating the feedback loops [24] that cause the polymerized-actin count to drop to zero after reaching a peak. We also leave out possible effects of hydrolysis of actin subunits at filament tips. The possible consequences of this assumption are analyzed in section 2.5. Each filament tip interacts with the obstacle via a smooth potential U(r), where r is the distance from the filament tip to the obstacle. U(r) can be either purely repulsive or have an attractive well. The 6 × 6 square central filament subset is assumed to have a deep well in U(r), corresponding to filament-membrane binding. The outer filaments have a purely repulsive potential or one with a shallow well. We adopt a picture similar to that of [33] where the filament 'free length' L protrudes beyond a gel region, which we treat as linear and elastic. We define the filament bases (black squares) as being a distance L in from the tip. At this point the filaments are crosslinked, either by Arp2/3 complex or another crosslinker, and the gel begins (see figure 1). For simplicity, we ignore variations in L from filament to filament. The initial filament base positions are staggered randomly within a subunit-length interval, so that each filament tip is at a different position relative to the membrane. The filaments polymerize and depolymerize stochastically at rates that depend on r. The filament tips also diffuse rapidly, because of bending fluctuations. We focus on the component of the diffusive motion perpendicular to the obstacle. Diffusive motion of a filament tip parallel to the obstacle does not change the filament-obstacle interaction, because this motion is much smaller in magnitude than the distance (10 nm or more) over which the filament-obstacle interaction changes character. Similarly, diffusive motion of the bases describes elastic deformation of the actin gel induced by the forces from the filaments. In treating this effect, crosslinks between bases of adjacent filaments are modeled as springs that constrain the relative motion of filaments in the direction perpendicular to the obstacle. Both the tip and base motions are described using biased Brownian dynamics driven by the filament-obstacle interaction and a linear restoring force. This force is determined by the filament rigidity for tip fluctuations and the actin gel stiffness for the base motions. The obstacle moves stochastically in response to forces from the filaments, via biased Brownian motion. Figure 2 shows the interaction between a single actin filament and the obstacle. To simplify the calculations, we project the growth of the actin filament onto the z direction (the direction of the obstacle motion), with δ cos(θ) being the projected actin step size. We treat only the z-direction growth explicitly. The default filament direction makes an angle θ = 35° with the normal to the obstacle, consistent with the 70° Arp2/3 branching angle [34]. The azimuthal angle is unspecified because it does not affect the calculation results. This approach ignores lateral motion of the filament tips along the membrane resulting from polymerization. The possible impact of this approximation is discussed in section 2.5.

Filament-obstacle interaction

The distribution of pulling and pushing forces on the obstacle is determined by the differences in polymerization and bending/deformation between different filaments.
To keep track of these differences, we define for each filament a 'height', which determines the filament-obstacle interaction. The height of filament i is determined at a given time step by

h_i = h_0(i) + n δ cos(θ) + z_tip + z_base,

where n is the number of subunits added to the filament, z_tip is the filament tip fluctuation, and z_base is the filament base fluctuation. Staggering of filaments is described by the initial base positions h_0(i) = α_i δ cos(θ), where α_i is a random number between 0 and 1. This means that even if all the filaments have grown by the same number of subunits, their heights will differ because their bases are at different locations. These correspond to the positions of Arp2/3 branch points where the filaments anchor in the gel. Changes in h result from either polymerization, filament bending, or motion of the filament base. The values of h do not correspond to actual filament lengths, which are not calculated explicitly. Only differences in h from filament to filament are important for determining the forces. The time-dependent gap r between the obstacle at position z_obst and a given filament tip is

r = z_obst − h.

We treat the interaction between the obstacle and the filament tip with smooth idealized potential functions U(r), as shown in figure 3, parameterized by the constants A, B, C, D, κ_1, κ_2, κ_3, κ_4, r_1, and r_2 (equations (4) and (5)). Having a non-zero B in equation (4) adds an attractive well to the potential. We refer to this type of potential as a 'simple well'. A 'double-well' potential is obtained by combining equations (4) and (5) with positive C and D. (Figure 3: blue curve, the 'pusher potential' of equation (4), with A = 54.66 pN·nm, B = 0, and κ_1 = 0.9 nm⁻¹; red curve, a 'puller potential' of equation (4) with a well depth of 25 k_BT.) In the double-well potential the broad minimum might represent conformational flexibility of a protein binding the actin filament to the membrane, or the presence of two different binding sites [35,36]. The corresponding forces exerted on the obstacle by the filaments are obtained from the gradient of the potential, F(r) = −dU(r)/dr. The pusher filaments in the outer ring of the array have either only repulsive potential terms or a repulsive potential plus one with a shallow 'simple' well. The puller filaments in the 6 × 6 central region have either a deep 'simple well' or a 'double well' potential.

Stochastic treatment of actin polymerization

The actin on-rate, k_on, has been defined in most previous models as an average over a time long compared to the time scale of filament-tip and obstacle fluctuations. Here, such an average is not appropriate because of the large force fluctuations that occur over short times from the combination of Brownian obstacle and filament-tip motion, and the rapid variation of the force between the filaments and the obstacle. Therefore we treat instantaneous rates that apply to a particular position of the filament tip relative to the obstacle. Thermodynamic analysis [25] shows that the instantaneous position-dependent rates must satisfy the relationship

k_on(r) / k_off(r) = (k⁰_on C_A / k⁰_off) exp[−ΔU(r)/k_BT],

where ΔU(r) = U(r − δ cos θ) − U(r) is the change in the filament-obstacle interaction energy upon adding a subunit, and k⁰_on and k⁰_off are free-filament on- and off-rates respectively. In order to concretely determine k_on and k_off, we assume that their magnitudes never exceed the free-filament on- and off-rate values, so that

k_on(r) = k⁰_on C_A min{1, exp[−ΔU(r)/k_BT]},
k_off(r) = k⁰_off min{1, exp[+ΔU(r)/k_BT]}.
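As a concrete illustration of this capped-rate scheme, the sketch below uses a generic repulsive potential in the spirit of figure 3's pusher potential; the free-filament rate values are assumptions, not the paper's table 1 entries:

```python
import math

kBT = 4.1                                        # pN nm
d_proj = 2.7 * math.cos(math.radians(35))        # projected step size, nm

def rates(r, U, CA_k0on=22.0, k0_off=0.5):
    """Instantaneous (k_on, k_off) in 1/s at tip-obstacle gap r (nm)."""
    dU = U(r - d_proj) - U(r)                    # energy change on addition
    k_on = CA_k0on * min(1.0, math.exp(-dU / kBT))
    k_off = k0_off * min(1.0, math.exp(+dU / kBT))
    return k_on, k_off

# Generic repulsive 'pusher-like' potential (illustrative parameters):
U_push = lambda r: 54.66 * math.exp(-0.9 * r)
print(rates(10.0, U_push))  # far from the obstacle: nearly free-filament rates
print(rates(1.0, U_push))   # near contact: subunit addition strongly suppressed
```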
Stochastic time evolution of obstacle position, filament bending and gel deformation

The obstacle position and filament bending evolve in time according to random thermal forces and deterministic forces from the filament-obstacle interaction potential, as well as linear restoring forces for the filament tips. For conceptual simplicity, we treat the system dynamics using a 'filament-centric' approach, where the actin gel is assumed to be stationary while the obstacle moves. To treat force generation in the presence of a stationary obstacle, the calculated motion can be reversed by a simple coordinate transformation, assigning the actin gel diffusion coefficient to the obstacle. The diffusive motions of the obstacle and the filament bending coordinates are treated by the discrete form of the Langevin equation ([37], chapter 3):

Δz_obst = (D_obst/k_BT)(F_tot + F_load) Δt + √(24 D_obst Δt) α,
Δz_tip,i = (D_tip/k_BT)(−F_i − k_bend z_tip,i) Δt + √(24 D_tip Δt) α′.

Here D_obst is the obstacle diffusion coefficient, D_tip is the filament tip diffusion coefficient, F_tot is the total force from the filaments acting on the obstacle, F_load is the external force applied on the obstacle (used only in the 'mean-force' model described below), Δt is the time step, and k_bend is the tip bending stiffness. Further, α and α′ are random numbers uniformly distributed between −1/2 and 1/2, so that ⟨α²⟩ = ⟨α′²⟩ = 1/12. Displacements in consecutive time steps are uncorrelated. The values of Δz_tip,i are limited to the range defined by a filament being either perpendicular to the obstacle or parallel to it: −δ cos(θ) < Δz_tip,i < δ[1 − cos(θ)]. The use of a uniform distribution for the individual steps is justified, because the central limit theorem guarantees that after many steps the displacement distribution will approach the Gaussian distribution that characterizes Brownian motion. We find that already after five time steps, or 2.5 × 10⁻⁹ s, the rms difference between the calculated displacement distribution and the Gaussian is only 1.3% of the Gaussian peak height. Gel deformation is treated via stochastic motion of the filament bases anchored in the gel. Interactions between different bases include only nearest-neighbor interactions, and are assumed to be proportional to the difference in their z-coordinates. The base displacements follow an equation similar to that of the tips:

Δz_base,i = (D_base/k_BT)(−F_i + k_elas Δz_nn,i − k_elas z_base,i) Δt + √(24 D_base Δt) α″.

Here D_base is the filament base diffusion coefficient, k_elas is the filament-base spring constant embodying the elasticity of the actin gel, and the random number α″ has properties identical to those of α′. The Δz_nn,i term is the difference between the average z_base of a filament's neighbors and the value of z_base for the filament itself. The term proportional to z_base,i itself is also needed, because in its absence the degree of freedom corresponding to uniform motion of the gel surface and the obstacle by the same amount would have no restoring force.
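A minimal sketch of one explicit Euler step of these updates, using the stated uniform-noise convention (all numerical values are placeholders; the time step is inferred from the 'five time steps or 2.5 × 10⁻⁹ s' statement above):

```python
import numpy as np

rng = np.random.default_rng(0)
kBT, dt = 4.1, 5e-10          # pN nm, s
D_obst, D_tip = 1e4, 1e5      # nm^2/s
k_bend = 4.17                 # pN/nm, tip bending stiffness

def step(z_obst, z_tip, F_tot, F_i, F_load=0.0):
    """One Euler step of the discrete Langevin updates for obstacle and tips."""
    a = rng.uniform(-0.5, 0.5)                    # <a^2> = 1/12
    ap = rng.uniform(-0.5, 0.5, size=z_tip.shape)
    # sqrt(24 D dt) * a has variance 2 D dt, as required for 1D diffusion.
    z_obst += D_obst / kBT * (F_tot + F_load) * dt + np.sqrt(24 * D_obst * dt) * a
    z_tip += D_tip / kBT * (-F_i - k_bend * z_tip) * dt + np.sqrt(24 * D_tip * dt) * ap
    return z_obst, z_tip

z, tips = step(0.0, np.zeros(144), F_tot=100.0, F_i=np.full(144, 100.0 / 144))
```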
Choice of parameters

The parameters are given in table 1. As described above, the obstacle diffusion coefficient D_obst corresponds to that of the actin gel, D_gel. Estimating D_obst from the actin gel size of ∼10² nm versus 3 nm-size actin monomers with diffusion coefficient D_mon = 5 × 10⁶ nm² s⁻¹ [38], using the inverse proportionality of diffusion coefficients to size, would suggest D_obst = D_mon/30 = 1.7 × 10⁵ nm² s⁻¹. However, using an obstacle diffusion coefficient this large would be extremely computationally demanding, and for this reason we use a smaller value, D_obst = 10⁴ nm² s⁻¹. We believe that this value is large enough to capture the key physical mechanisms, since the dimensionless ratio of t_diff = δ²/2D_obst (the time that it takes the obstacle to diffuse the length of one actin step size) to t_pol = 1/(C_A k⁰_on) (the time for one free-filament subunit to add), t_diff/t_pol = δ² C_A k⁰_on/2D_obst = 0.008, is still very small. Here C_A is the actin monomer concentration and k⁰_on is the free-filament on-rate constant (see table 1). The diffusion coefficients D_tip and D_base are expected to be much larger than D_obst, because the moving entities are much smaller than the actin gel. Again, for computational practicality we use values of D_tip and D_base that are 10 times larger than D_obst, but probably smaller than the physical values. We have evaluated the sensitivity of the results to these two parameters by halving each one separately, and halving them both at the same time. This changed the total pulling force by only about 1%. We have not tested the effect of increasing the parameter values, because of the large computational effort that would be required. But the smallness of the effect resulting from reducing the values suggests that the effect of increasing them would also be moderate, unless the behavior changes very abruptly as a function of the diffusion coefficients. We obtain a = 10 nm for the filament spacing from the estimated number of actin filaments, 300 [39], at an endocytic patch with radius R = 75 nm. We obtain the bending spring constant as k_bend = 3 k_BT L_p/(L³ sin²θ) = 4.17 pN nm⁻¹ [40]. Here L_p = 17.5 µm is the persistence length of an actin filament [41] and, as discussed above, L is the free length of the filament beyond the gel surface. We take L to have a typical value of 54 nm, corresponding to the 20 subunits of a typical actin filament in an endocytic actin patch. (The estimated number of actin filaments at an endocytic patch is taken to be 300 [39], with a total of 6000 actin monomers [42].) We assume that L remains constant during the polymerization process, as crosslinkers bind to newly grown parts of the actin gel near the membrane. The actin-gel spring constant k_elas is obtained by fitting it to elastic restoring forces calculated for a configuration of filaments where alternating filaments are displaced in opposite directions (see appendix).

Validity of assumptions

Here we discuss the potential impacts of the main simplifying assumptions that we have made.

Assumption of a sharp boundary between pusher and puller filaments. Because the endocytic protein patches assemble stochastically, the boundary between the pusher and puller filaments will be blurred. In the regions where there are roughly equal numbers of pullers and pushers, the forces will oppose each other and the 'smeared' force density will be reduced relative to the force densities in the strongly pushing or pulling regions. This will reduce the magnitude of the total pulling force that can be obtained.

Ignoring the lateral component of polymerization. This could have at least two effects. First, filament tips of pushers could move into the puller region. This would blur the boundary between pushers and pullers, reducing the total pulling force as described above. Second, if the filament tips are anchored strongly enough in the membrane that lateral motion is inhibited, forces could build up that would slow the polymerization of the pusher filaments. Again, this would reduce the total pulling force.

Treatment of the actin network as an elastic gel. Viscous flow of the actin network will inhibit its ability to sustain a distribution of pushing and pulling forces, once again reducing the magnitude of the total pulling force. The viscosity of the actin network at endocytic actin patches is not known, but the magnitude of the effect is estimated in the Discussion section.
Assumption of an infinitely hard obstacle. During endocytosis, the actin gel interacts with the cell membrane, which in turn interacts with the cell wall. We model only the part of the process before the invagination forms. During this time, the membrane is pressed against the cell wall. The force from the membrane is unlikely to deform the cell wall noticeably. This force density is comparable to the turgor pressure, which is about 200 kPa [10]. On the other hand, the Young's modulus of the cell wall is about 110 MPa and its thickness is about 120 nm, suggesting a maximum deformation of about (120 nm) × (200 kPa)/(110 MPa) = 0.22 nm, very small on the scale of the current simulations. In addition, because the membrane is nearly flat, the bending forces that it generates and transmits to the actin gel are very small. Therefore we feel that using an infinitely hard obstacle is a reasonable approximation.

Absence of hydrolysis effects in the model. If subunits at filament tips hydrolyze and release inorganic phosphate before a new subunit is added, the pusher-filament stall force will be reduced [28]. The rates of hydrolysis and phosphate release are not known for filament-tip subunits in endocytic actin patches. However, as the opposing force increases, the rate of subunit addition will slow, increasing the likelihood of release occurring before subunit addition. Therefore the total pulling force, even for very tightly bound pullers, could be significantly below the value predicted from the thermodynamic stall forces of the pusher filaments.

Force generation and gel deformation by a 12 × 12 square array of cross-linked filaments

The simulations for our system of 6 × 6 = 36 pullers and 12 × 12 − 36 = 108 pushers begin with all the filaments having one subunit, and their bases staggered as discussed above. Pushing and pulling forces develop as the pushers and pullers grow at different rates. The total pulling force is defined as the sum of all of the forces on the pulling filaments, and is taken to have a negative sign. We obtain our results from a single run of 10 s, rather than averaging multiple shorter runs. We use this procedure because obtaining reasonable estimates of the steady-state forces requires us, for each simulation run, to go beyond an equilibration time that can be as long as several seconds. For this reason, multiple runs much shorter than ten seconds would not be valuable, because they would be dominated too much by the equilibration time. Multiple runs of ten seconds, for each parameter set, are not viable because of the computer time required. However, a single long run will accurately represent the results of shorter runs if the system is ergodic, so that a time average is equivalent to a configuration average. We are not able to perform enough runs to test the ergodicity for our full 144-filament system. However, we have tested it for a smaller 16-filament system with 4 pulling filaments. We compared the results of 10 shorter runs of 5 s with the result of one long run of 30 s. We obtained a force of 62 pN from the shorter runs, a force of 66 pN from the run of 30 s, and forces of 62 and 67 pN from two 10 s runs. Thus for this smaller system, using a single long run gives average forces accurate to better than 10%. The force fluctuations in the larger 144-filament system are found to be about three times smaller than in the smaller system, so we expect that the average forces in this case are accurate to within a few percent.
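A few of the numerical values quoted in the 'Choice of parameters' and 'Validity of assumptions' sections can be verified with a few lines of arithmetic (the value of C_A k⁰_on below is inferred from the quoted ratio of 0.008, since table 1 is not reproduced in this excerpt):

```python
import math

# Bending spring constant: k_bend = 3 kBT L_p / (L^3 sin^2(theta)).
kBT, L_p, L = 4.1, 17.5e3, 54.0            # pN nm, nm, nm
theta = math.radians(35)
k_bend = 3 * kBT * L_p / (L**3 * math.sin(theta)**2)
print(f"k_bend = {k_bend:.2f} pN/nm")      # ~4.16, vs the quoted 4.17

# Time-scale ratio justifying the reduced obstacle diffusion coefficient.
delta, D_obst = 2.7, 1e4                   # nm, nm^2/s
CA_k0on = 22.0                             # 1/s, inferred from the 0.008 ratio
print(f"t_diff/t_pol = {delta**2 * CA_k0on / (2 * D_obst):.3f}")  # 0.008

# Cell-wall deformation: thickness x stress / Young's modulus.
print(f"wall deformation = {120.0 * 200e3 / 110e6:.2f} nm")       # 0.22
```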
Figure 4 shows the time course of the total pulling force for systems with different puller-obstacle interaction potentials (see figure 3), treating a range of values of the actin gel stiffness. The total pushing force (not shown) almost entirely balances the pulling force. The total pulling force is thus less than the sum of the pusher stall forces, which is ≈760 pN for the 108 pushers. Weak puller-obstacle binding, as in the 5 k_BT potential, generates very small forces. The pulling force approaches the limiting value of 760 pN with increasing well depth, but even for potentials as deep as 50 k_BT, the pulling force is substantially below the limiting value. Modifying the gel stiffness (halving k_elas for 'soft' gels and doubling k_elas for 'stiff' gels) might correspond to changing the concentration of actin filament crosslinkers. This does not change the asymptotic total pulling force, but stiffer gels generate large pulling forces earlier in the process. In addition, soft gels can lead to detachment of the pulling filaments from the obstacle, as occurs in the '25 k_BT soft' curve at about 1.8 s, where the force suddenly drops to zero. This effect is described in detail in section 3.6. The total gel deformation is another measure of the force-generating capability of the system. It is defined as the difference between the average base position of all the pullers and that of the pushers. Figure 5 shows its time evolution for the same parameter sets. The time t_max that it takes the force to build up to near its maximum value, or equivalently the time required to reach near-maximum deformation, can be roughly estimated using dimensional analysis. We take the maximum force F_max to be determined by the total gel deformation Δu_z, the actin gel Young's modulus E, and the radius R (≈60 nm in our model) of the gel. Further assuming that F_max is proportional to Δu_z E, dimensional consistency requires that F_max = (constant) × Δu_z E R. Taking filaments to grow at their zero-force velocity up until the stall point, we have Δu_z = k⁰_on C_A δ cos(35°) t_max. Then

t_max = (constant) × F_max/(E R k⁰_on C_A δ cos(35°)).   (15)

The inverse proportionality of t_max to E is seen in figures 4 and 5, where the force and deformation reach their maximum values faster for stiff gels than for softer gels. The numerical values predicted by equation (15) are not expected to be accurate, but t_max is estimated to be about 0.8 s for the case of the medium gel and a 50 k_BT puller. By comparison, the time required to reach half-maximum deformation in the simulations is about 1 s.

Effects of transient attachment of pushers to obstacle

It is believed that the WASP family of proteins, and their yeast homologue Las17, create weak transient attachments between filament tips and the membrane [45]. We thus calculate how adding a potential well to the pusher potentials affects the magnitude of the pulling force. Figure 6 shows the magnitude of the total pulling force as a function of the pushers' well depth. For each point the mean value is calculated from the last seven seconds of a ten-second run, to minimize contributions from the equilibration period. The error bars are obtained as the standard deviation of the mean of force values from seven consecutive one-second pieces of these runs. Thus they include both random error and some component of systematic error. For both 25 k_BT and 50 k_BT pullers, the force drops as the pushers' binding to the obstacle becomes stronger, and the fractional effect is larger for weaker puller potentials. At a well depth of 10 k_BT, the drop in total pulling force corresponds to about 1 pN per pusher. Thus the total pulling force is significantly stronger when the pushers have purely repulsive interactions with the membrane.
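Returning to the buildup-time estimate: equation (15) can be evaluated numerically if the unknown O(1) constant is set to one, using values quoted or inferred elsewhere in the text. Only the inverse scaling with gel stiffness should be taken seriously:

```python
import math

F_max = 760.0                    # pN, limiting total pulling force
E, R = 0.14, 60.0                # pN/nm^2 (medium gel), nm
v_free = 22.0 * 2.7 * math.cos(math.radians(35))   # zero-force growth, nm/s

for label, factor in [("soft", 0.5), ("medium", 1.0), ("stiff", 2.0)]:
    t_max = F_max / (factor * E * R * v_free)      # equation (15), constant = 1
    print(f"{label} gel: t_max ~ {t_max:.1f} s")
# The medium gel gives ~1.9 s, within a factor ~2 of the ~0.8-1 s quoted
# above, as expected for a dimensional estimate; t_max halves as E doubles.
```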
Force distributions

To explore possible spatial variations of the pulling and pushing forces, we present their spatial distribution using heat-map diagrams. Because the simulations are stochastic, the distributions obtained over a finite time interval (figure 7(a)) display noticeable fluctuations. To show the systematics of the distributions more clearly, we create symmetrized force-distribution heatmaps by averaging the filament forces over symmetrically related subsets (figure 7(b)). For example, the symmetrically averaged force for a filament with coordinates (x, y) includes contributions from filaments at (±x, ±y) and (±y, ±x). We consider the case of 50 k_BT pullers, which gives close to the limiting pulling force. We see a fairly flat distribution in the pushing region (blue) balancing the total pulling force in the center (red). Figure 8 shows a cross section of the time-averaged force distribution along a row in the middle of the array, for four different potentials. As expected, the magnitudes of the individual pulling forces are always less than the average maximum force per pulling filament, ≈760 pN/36 ≈ 21 pN. The pulling and pushing force distributions are relatively flat in all cases except the double-well potential. We believe this holds because in arrays with single-well pullers, all pulling filaments must grow at the same velocity once steady state is reached; since force determines growth velocity, all pullers must also experience the same force, and the force distribution is flat. In contrast, double-well potentials have a velocity that is nearly force-independent after the pulling force reaches a certain magnitude [25]; therefore a range of forces is compatible with a given velocity, so equality of growth between pulling filaments does not necessarily lead to a flat distribution of force. This is seen in the 'DW 25 k_BT' data points in figure 8. The force distribution extracted from EM images of invagination shapes (figure 8(b) of [22]) shows a flat distribution over the pulling region similar to our prediction. However, the extracted force profile shows a ≈30% hump in actin force density at the edge of the pushing region (r ≈ 50-80 nm). We do not see this hump in our simulation results. We believe that the hump results from the hemispherical geometry assumed in [22]. This choice makes it easier to shear the gel near the edge of the hemisphere, because it is thinner there. This should in turn reduce the magnitude of the forces. In the present model, we have taken all filaments to be elastically equivalent, which makes the resistance to shearing the same everywhere. Given that the hemispherical shape itself is an idealization, we do not feel that weakening the shear strength near the boundary to mimic the hemispherical shape would render the model more realistic. Figure 9 shows the time evolution of the force distribution for the case of 50 k_BT pullers. The 'Early' stage shows the forces averaged over the 1 s < t < 2 s interval of a 10 s long simulation. As seen in figure 4(b), this stage is well before steady state, explaining why the distribution has not reached its asymptotic constant value. The filaments at the edge of the pulling region show enhanced forces at this stage. We believe this occurs because at the early stages of the simulation, the forces are not yet strong enough to slow polymerization greatly. Then all the pullers will have added roughly the same number of subunits, and all the pusher filaments will have added a constant number of subunits (larger than the value for the pullers). The corresponding gel deformation will have a constant value in the puller region and a different constant value in the pusher region. Maintaining the difference between these deformations requires a force dipole at the boundary between the pusher and puller regions, which we believe explains the peak in the force seen in the 'Early' results. The 'Late' stage is closer to the asymptotic one, as the network approaches steady state. The 'Middle' stage has force peaks similar to those seen in the 'Early' results, but also has a bump in the middle, for which we do not have a simple physical explanation.
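The symmetrization used for figure 7(b) amounts to averaging each filament's force over its orbit under the square array's symmetry group. A minimal sketch (the grid and index conventions are our own assumptions):

```python
import numpy as np

def symmetrize(force_map):
    """Average a 12 x 12 map of filament forces over the eight
    symmetry images of the square: (+/-x, +/-y) and (+/-y, +/-x)."""
    f = np.asarray(force_map, dtype=float)
    out = np.zeros_like(f)
    # The four axis flips plus their transposes give all eight images.
    for g in (f, f[::-1, :], f[:, ::-1], f[::-1, ::-1]):
        out += g + g.T
    return out / 8.0

rng = np.random.default_rng(1)
sym = symmetrize(rng.normal(size=(12, 12)))
assert np.allclose(sym, sym.T)            # symmetric under x <-> y
assert np.allclose(sym, sym[::-1, ::-1])  # and under 180-degree rotation
```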
Mean-force theory

To clarify the mechanisms determining the magnitude of the pulling force, we study a simplified mean-force model based on the force-velocity relations of separate puller and pusher ensembles. In the full-system simulations, 108 pushers and 36 pullers exert force on the obstacle simultaneously. The forces exerted by the pullers experience fluctuations in time due to the polymerization dynamics of the pushers, and vice versa. However, on average, the magnitude of the force felt by the pushers equals that felt by the pullers. One would then expect the steady-state force to have the value at which the puller and pusher growth velocities are equal. To explore this hypothesis, we developed a mean-force model based on two force-velocity relations, v_push(F) and v_pull(F). Here F (positive) is the magnitude of the time-averaged total force felt by either the pullers or the pushers. We calculate v_push(F) and v_pull(F) by performing pusher-only or puller-only simulations using an external force of magnitude F, with the force pointing in opposite directions in the two cases. The condition determining F is that v_push(F) = v_pull(F). The value of F satisfying this condition is obtained by linear interpolation from a finite set of force calculations. Figure 10 shows the curves of v_push(F) and v_pull(F), as well as the comparison of the force predicted by the mean-force model with that of the full-system simulations. As is clear from figure 10(a), slow puller growth will bring the crossing point between the F-V curves down and to the right, increasing the pulling force. On the other hand, slowing the growth of pushers will bring the crossing point down and to the left, reducing the pulling force. As figure 10(b) shows, the mean-force model closely predicts the results of simulations with a full array of filaments, over a range of puller potentials. This result suggests that the time fluctuations of the forces from pushing filaments may not crucially impact the growth velocity of the pullers, and vice versa. This analysis shows that a large pulling force will occur when the puller filaments slow the pusher filaments' growth by a large factor. The maximum force occurs when the puller filaments' growth stops completely, stalling the pusher filaments. Figure 11 shows the relationship between total pulling force and the average growth velocity of the central filaments. As the plot shows, even a growth rate as low as 10% of the free-filament velocity can reduce the generated force substantially, by >40%. As described in the Discussion, this is important for ascertaining the effect of viscous flow on force generation.
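The mean-force construction reduces to locating the crossing of two sampled force-velocity curves by linear interpolation. A sketch with toy placeholder curves (not the simulated curves of figure 10), assuming the curves cross within the sampled force range:

```python
import numpy as np

F = np.linspace(0.0, 800.0, 81)       # total force magnitude, pN
v_push = 50.0 * (1.0 - F / 760.0)     # pushers slow toward their stall force
v_pull = 2.0 + 0.01 * F               # tightly bound pullers grow slowly

diff = v_push - v_pull
i = np.argmax(diff < 0)               # first sample past the crossing
# Linear interpolation between samples i-1 and i:
w = diff[i - 1] / (diff[i - 1] - diff[i])
F_star = F[i - 1] + w * (F[i] - F[i - 1])
print(f"predicted steady-state force ~ {F_star:.0f} pN")
```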
Effect of mechanical parameters on extent of actin polymerization

We find that softening the gel increases the extent of actin polymerization required to reach a given force. Figure 12 shows the number of subunits added to all of the filaments in the array during the course of the simulation. We compare different gel stiffnesses and potential depths at a given value (270 pN) of the force, about 80% of the maximum force for 25 k_BT pullers. The total amount of actin polymerization is increased by about 160% for both 25 k_BT and 50 k_BT pullers in going from a stiff gel to a soft one. In addition, ≈40% more actin is polymerized for a medium gel at the same force of 270 pN when the pullers' potential depth is halved from 50 k_BT to 25 k_BT. This increase is smaller for soft and stiff gels (≈25%). The black 'No obstacle' bar corresponds to the amount of polymerization obtained after 0.5 s (the average time it takes for the force with 25 k_BT pullers to reach 80% of maximum) if no pullers are attached to the obstacle.

Obstacle-gel detachment

Our finding of obstacle-gel detachment in some parameter ranges is surprising, given the magnitude of the potential and the number of filaments. Consider the case of a completely rigid, solidly anchored gel. Standard reaction-rate theory [46] gives an analytical estimate of the escape time of the obstacle from the potential well:

t_esc ≈ (w²/D_obst) exp(ΔU/k_BT).   (16)

Here w and ΔU are defined in figure 13, where a positive external force of 335 pN (which caused detachment in figure 4 above) pulls on a 6 × 6 array of rigid pullers with 25 k_BT potential wells and staggered initial alignment. In this case, w ≈ 5 nm, D_obst = 10⁴ nm² s⁻¹ (as above), and ΔU ≈ 180 k_BT. Thus, equation (16) gives an extremely long time of ≈4 × 10⁷⁵ s for the obstacle to detach from these pullers, so detachment essentially never happens. Consistent with this prediction, when the actin gel is stiff and the binding energy is 25 k_BT, our simulations find that obstacle-gel detachment never occurs during 10 s simulations. However, softening the gel or weakening the puller-filament binding can lead to detachment before steady state is reached, as was seen in figures 4 and 5 for the 25 k_BT soft-gel case. Thus a soft actin network behaves differently from stiff networks in that it detaches more easily from the obstacle. We evaluated detachment in 40 different runs for the 25 k_BT soft-gel case. We found the distribution of detachment times to be peaked, with an average value of 1.7 s and a standard deviation of 0.3 s. The distribution does not have the exponential form that would result from a stationary Poisson process. We believe that the peaked behavior in this case means that the detachment occurs rather rapidly once the force has reached a value sufficient to drive detachment, which occurs near 1.7 s. To understand the detachment process in more detail, we performed puller-only simulations with an external force acting on the obstacle. In a particular simulation with 25 k_BT pullers in a gel with medium elasticity, with an external pulling force of 335 pN on the obstacle, the obstacle detached from the gel after about 6.5 s. To understand the origin of this effect, we look at the distribution of r, the gap between the filament tips and the obstacle, during a few time steps right before the rupture happens. Figure 14 shows frames of a heatmap plot of the distribution of r for the pullers. Larger r values (greater stretching) have redder pixels.
Frame (a) is at a time well before rupture, to show the baseline appearance of the distribution. Frames (b) through (m) span 12 000 time steps, corresponding to 6 µs. The number of pink pixels increases gradually during this period, indicating the appearance of possible detachment nucleation points around the tips of the actin filaments. The accumulation of these nucleation points eventually spreads from the bottom right corner over the entire pulling region, and the obstacle detaches completely. This is reminiscent of the process of a crack propagating between two dissimilar materials due to stress concentration at the edge of the crack. Some light can be shed on the effect of gel stiffness on the detachment process via the 'Griffith' theory of the critical stress for fracture [47]. The Griffith theory describes how stress concentration at a crack tip aids crack propagation. The stress σ_c required to propagate a crack of length l_c is given by

σ_c = √(2γE/(π l_c)),   (17)

where γ is the energy density required to break the bonds along the crack. In this criterion, σ_c increases with E, consistent with our finding that stiff gels do not detach from the membrane. In applying it to the actin-filament system, we take γ to be the ratio of the puller potential well depth to the area a² per filament. Since we are studying an incipient crack, we take l_c = a = 10 nm. For a puller potential well depth of 25 k_BT, using the value E = 0.140 pN nm⁻², we obtain σ_c = 0.095 pN nm⁻². By comparison, the stress at a force of 335 pN, for which detachment occurs in the simulations, is σ = 0.17 pN nm⁻². Given the small size of the simulation system and the absence of a well-defined preexisting crack, this level of agreement is reasonable.

Discussion

Our calculations show that actin-based pulling-force generation can result from spatially separated ensembles of filaments having different growth velocities. The slower-growing filaments exert pulling forces on the membrane, while faster-growing filaments exert pushing forces. Large pulling forces are generated if the velocity of the pullers at a given force is much smaller than that of the pushers. When the pullers do not grow at all, the total pulling force is maximized and equals the sum of the stall forces of the pushing filaments. The mechanism explored here for generating slower puller-filament growth is a stronger binding to the membrane, but others may be relevant. A mean-force model treating pushers and pullers separately with constant forces reveals in more detail how the pulling force is determined by the difference between the puller and pusher force-velocity relations. It accurately predicts the total pulling force from full-system simulations, suggesting that the time statistics of the force generation are not important for the final results. Our key specific findings are the following.

The pulling force is reduced by growth in the pulling region, viscous deformation of the actin gel, and transient attachment of the pushers to the membrane

When the binding of puller filaments becomes weaker and they grow faster, the pulling force drops rapidly (figure 4). A similar effect would be expected from viscous flow of the actin gel. Figure 11 shows that a growth rate of 13.6 nm s⁻¹ is sufficient to reduce the pulling force by about 50%. Taking our model system to have a radius of about 60 nm gives a shear rate of about 0.2 s⁻¹.
Since the pulling force per filament is on the order of 20 pN (see figure 8(a)) and the filament spacing is about 10 nm, the stress in the puller region is ∼20 pN/100 nm² = 0.2 pN nm⁻². Then substantial force reduction would occur if the actin gel viscosity becomes less than about 0.2 pN nm⁻²/0.2 s⁻¹ = 1 pN s nm⁻² = 10⁶ Pa·s. This value is much higher than any that have been previously measured. However, the viscous properties of actin networks at the very high actin and crosslinker densities present in endocytic actin patches have not been explored quantitatively. Pulling forces are also maximized when the pushers' interaction potential with the obstacle is monotonically repulsive (figure 6), which gives the largest growth velocity [25]. Transient attachments of the pushers to the obstacle slow their growth and thus inhibit pulling-force generation.

Effect of actin gel mechanical properties on pulling-force generation

In the steady-state limit, the magnitude of the pulling force is independent of the actin gel stiffness over the range studied, provided that obstacle-gel detachment does not occur (figure 4). However, large forces are obtained at earlier times for stiff gels. This suggests that invagination should be slowed by mutations reducing the number of crosslinkers. The endocytic event could also be aborted completely, because the actin patch has a finite lifetime, which may be too short to allow the maximum force to build up. This is consistent with the requirement of the yeast fimbrin homolog Sac6 for endocytosis in yeast [15,16]. In [15], more than 70% of the endocytic sites in Sac6∆ cells were found to have a flat membrane profile. The majority of the remaining ones invaginated to a distance of ≈100 nm, but then retracted. The 'retraction' phenotype could result from insufficient force building up during the lifetime of the actin patch. The slower buildup of force for the soft gels could also prevent endocytosis if viscous flow of the gel is important. Such flow during the period of force buildup might prevent the force from ever becoming large enough to overcome the turgor pressure.

Force distributions

The steady-state force distribution profiles (figures 7 and 8) reveal a complete force balance between the pushing and pulling regions, and fairly constant force densities over these regions. They are roughly consistent with the force profiles obtained from measured membrane deformations [22]. The time evolution of the force distributions shows enhanced forces at the edge of the pulling region at early stages, which are part of a dipole of forces surrounding the interface between the pusher and puller filaments. The force distributions could be measured using a combination of superresolution microscopy and suitably designed molecular force sensors. If a force sensor were inserted into Sla2, and the signal from the force sensor measured using superresolution microscopy, a picture of the force distribution in the central Sla2 region could be obtained and compared to these predictions.
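The detachment and viscosity estimates above can be checked numerically using only values quoted in the text:

```python
import math

kBT = 4.1                                   # pN nm

# Equation (16): escape time from a rigid array of 25 kBT wells.
w, D_obst, dU = 5.0, 1e4, 180.0             # nm, nm^2/s, total barrier in kBT
log10_t = math.log10(w**2 / D_obst) + dU / math.log(10)
print(f"t_esc ~ 10^{log10_t:.1f} s")        # ~10^75.6, i.e. ~4e75 s

# Equation (17): Griffith stress for an incipient crack of length l_c = a.
a, E, well = 10.0, 0.140, 25.0              # nm, pN/nm^2, well depth in kBT
sigma_c = math.sqrt(2 * (well * kBT / a**2) * E / (math.pi * a))
print(f"sigma_c = {sigma_c:.3f} pN/nm^2")   # ~0.095, vs the applied 0.17

# Viscosity below which creep would substantially reduce the pulling force.
stress = 20.0 / a**2                        # pN/nm^2: ~20 pN per (10 nm)^2
shear_rate = 0.2                            # 1/s: ~13.6 nm/s over R ~ 60 nm
eta = stress / shear_rate                   # 1 pN s/nm^2 = 1e6 Pa s
print(f"eta threshold ~ {eta:.0f} pN s/nm^2 = {eta * 1e6:.0e} Pa s")
```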
Effect of mechanics and filament-membrane interaction on actin polymerization

We also find that the actin count is increased by either softening the gel or reducing the puller binding energy (see figure 12). We are not aware of data showing the effect of gel softening on the actin count, but an extreme version of reducing the puller binding energy is obtained in Sla2 deletion mutants. In these experiments, extensive actin accumulation has been observed in the form of 'comet tails' [48,49].

Detachment of pullers from the obstacle

This effect can completely disrupt the force-generation machinery. We find that detachment does not occur for completely rigid gels. But for soft gels, it does occur, as seen in figure 14 and described by equation (17). The process begins with an initial 'nucleation' event in which one or more filament tips move out of the potential well binding them to the obstacle. This leads to stress concentration, and detachment then spreads over a microsecond time scale, in a mechanism analogous to crack propagation in a solid. These results suggest that softening the actin gel driving endocytosis in yeast by mutations reducing the number of crosslinkers could cause actin gel detachment from the membrane, aborting the endocytic event. This mechanism could provide an alternate explanation of the requirement for Sac6 in endocytosis [15,16]. The detachment mechanism could be distinguished from the direct effect of gel softening by tracking the motion of patches of the actin proxy Abp1 in Sac6∆ cells. If the detachment mechanism operates, the patches will move into the cell rapidly; if the direct effect of gel softening is more important, the patches will remain at the membrane.

Appendix

The actin-gel spring constant k_elas is obtained from the elastic response of the gel to a configuration in which alternating filaments are displaced in opposite directions, taking Poisson's ratio σ = 1/2 to correspond to an incompressible actin gel. The restoring forces for the given displacements are obtained by expressing the elastic displacement field in Fourier space. The Young's modulus for the actin gel has been roughly estimated as E = 0.14 pN nm⁻² [22]; the resulting expression (equation (A.7)) then gives the baseline value for the actin gel spring constant as k_elas = 0.53 pN nm⁻¹.
Seafood Consumption and Components for Health

In recent years, in developed countries and around the world, lifestyle-related diseases have become a serious problem. Numerous epidemiological studies and clinical trials have demonstrated that diet is one of the major factors influencing susceptibility to lifestyle-related diseases, especially in middle and old age. Studies examining dietary habits have revealed the health benefits of seafood consumption. Seafood contains functional components that are not present in terrestrial organisms. These components include n-3 polyunsaturated fatty acids, such as eicosapentaenoic acid and docosahexaenoic acid, which aid in the prevention of arteriosclerotic and thrombotic disease. In addition, seafood is a superior source of various nutrients, such as protein, amino acids, fiber, vitamins, and minerals. This review focuses on the components derived from seafood and examines the significant role they play in the maintenance and promotion of health.

Introduction

Lifestyle-related diseases, such as obesity, diabetes, hypertension, and hyperlipidemia, are widespread and increasing in developed countries. Metabolic syndrome includes a cluster of symptoms that are related to lifestyle diseases and is associated with an increased risk of type 2 diabetes, some types of cancers (Cerchietti et al., 2007), cardiovascular disease (CVD) (Hwu et al., 2008), and nonalcoholic fatty liver (Byrne 2010). Together with the rapid increase in the number of older people with lifestyle diseases, these have become serious national problems, both medically and financially. Increased dietary sugar and fat promotes obesity and diabetes (Linseisen et al., 2009; Cordain et al., 2005). Soft drink and fast-food consumption is influenced by several factors, including, but not limited to, food availability, preferences, culture, age, and knowledge of nutrition and health. Reshaping the food environment is a promising new approach to lifestyle-related disease problems (Story et al., 2008; Glanz & Yaroch 2004). Seafood is currently accepted as an essential food for humans (FAO 2010). Seafood is highly regarded for its abundance of high-quality proteins, n-3 polyunsaturated fatty acids (PUFAs), and other nutrients, such as minerals, trace elements, and vitamins (FAO 2010). These nutrients are essential for bodily functions and are beneficial to growth, the brain, and the nervous system; they also have anticancer properties (Liao & Chao 2009). Seafood has helped alleviate food crises in many developing countries, providing a valuable supplement to a diverse and nutritious diet. In recent years, seafood consumption has gradually increased throughout the world (FAO 2010). In Japan, the consumption of livestock food products, such as dairy products, meats, and their processed foods, has increased. This may lead to an increased incidence of CVD as a result of lifestyle-related diseases, such as hyperlipidemia, atherosclerosis, diabetes, and hypertension (Toshima 1994). Epidemiological and experimental reports have demonstrated a relationship between diet and the incidence of CVD (Pereira et al., 2004; Osler et al., 2002). Therefore, dietary therapy is considered the first-choice treatment for arteriosclerotic disease and is recognized as being as important as medical treatment. Many researchers have demonstrated that seafood has nutritional characteristics that maintain and promote health (Mozaffarian & Rimm 2006; Hu et al., 2002).
In particular, the health benefits of seafood have principally been associated with high intakes of n-3 PUFAs, such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) (Dyerberg et al., 1978). Fish oil contains abundant EPA and DHA and is sold as a functional food that can promote health. Many other bioactive components of seafood have also attracted attention, as discussed below.

Health Effects of Seafood Consumption

Epidemiological evidence gathered from Greenland Inuit and Japanese fishing villages has demonstrated that the intake of marine animal products is effective in the prevention of CVD (Kagawa et al., 1982; Bang et al., 1980). Many other studies from a variety of countries have also reported that seafood consumption helps protect against lifestyle-related diseases. Numerous epidemiological studies have examined the relationship between dietary marine products and CVD (Guallar et al., 2002; Krauss et al., 2000; von Schacky et al., 1999; Singh et al., 1997). In a three-cohort study, individuals who consumed fatty fish had a 34% reduction in CVD (Oomen et al., 2000), and in another report, fish consumption of 35 g/day resulted in decreased CVD mortality (Daviglus et al., 1997). A meta-analysis revealed that individuals who consumed fish once a week had a 15% lower risk of CVD mortality compared with individuals who consumed no fish (He et al., 2004). The intake of lean and fatty fish in a sample British population was associated with a reduction in diabetes risk in the epidemiological data from the European Prospective Investigation of Cancer (EPIC)-Norfolk cohort study (Patel et al., 2009). One ecological study reported that high-frequency fish and seafood consumption decreased the risk of type 2 diabetes in overweight populations (Nkondjock & Receveur, 2003). Sufficient seafood consumption has been demonstrated to help ensure good fetal neural development and infant and child cognitive and visual development (Ryan et al., 2010; Carlson, 2009); however, whether or not these positive effects continue into adulthood has not been confirmed. The medical benefits of fish consumption have also been examined as they pertain to inflammatory diseases (Gopinath et al., 2011; Rosell et al., 2009), certain cancers (Szymanski et al., 2010; Dewailly et al., 2003; Zhang et al., 1999), dementia (Cederholm & Palmblad, 2010; Robinson et al., 2010), and psychological status (Appleton et al., 2010).

The Health Benefits of Bioactive Components Derived from Seafood

The health benefits of seafood and fish oil consumption indicated by the epidemiological survey of Greenland Inuit by Dyerberg et al. are very interesting (Dyerberg et al., 1978). Although the Inuit have a very high-fat diet, the prevalence of ischemic disease is very low in this population. This report received worldwide attention, and studies related to the health functionality of marine products were widely conducted as a result. Many marine organisms inhabit complex environments that are exposed to extreme conditions and, as a result of adapting to the changing environment, they produce a wide range of secondary (biologically active) metabolites. Marine organisms have many bioactive components, such as n-3 PUFAs, protein, fiber, taurine, sterols, and pigments; they also contain unique components that are not present in terrestrial organisms. Nutrients and other bioactive components derived from fish and marine organisms may become functional food ingredients that have medical characteristics and provide health benefits.
n-3 PUFA

The various beneficial effects of seafood have primarily been attributed to n-3 PUFAs such as EPA and DHA. Marine organisms have been identified as the only foods that contain a naturally high amount of these fatty acids. This arises from the fact that marine phytoplankton has a high content of EPA and DHA, and thus these fatty acids accumulate up the food chain. The total content of EPA and DHA in fish varies depending on the type of fish and their habitat. The proportion of n-3 PUFAs in fish muscle is higher in fatty fish, such as mackerel, herring, and salmon, than in lean fish, such as cod, haddock, and halibut. In addition, shellfish, such as crab, shrimp, and lobster, have low levels of n-3 PUFAs (Shahidi, 2011). The metabolites of EPA are the most well known and include eicosanoids, such as the 3-series prostaglandins, prostacyclins, and thromboxanes, and the 5-series leukotrienes (Calder, 1998). The eicosanoids derived from EPA are less active than the pro-inflammatory and pro-thrombotic eicosanoids derived from arachidonic acid. The n-3 and n-6 fatty acids compete for conversion into these important metabolites. In fact, tissue n-6/n-3 levels are largely determined by dietary intake levels (Lands et al., 1992). Daily intake of n-3 PUFAs such as EPA and DHA reduces the rate of incidence of and death from CVD. For example, in the GISSI-Prevention study (Marchioli et al., 2002), more than 2,800 Italian heart attack survivors consumed 850 mg of purified EPA/DHA in capsule form for 3.5 years. The results revealed that, compared with a similar number of patients who did not consume the EPA/DHA capsules, there was a 20% reduction in the rate of death from any cause and a 45% reduction in the rate of death from CVD. n-3 PUFA supplementation therapy continues to demonstrate considerable promise in the primary and secondary prevention of CVD (Lavie et al., 2009). For the prevention of heart disease, the American Heart Association (AHA) recommends approximately 1 g of EPA/DHA per day for coronary heart disease patients (Kris-Etherton et al., 2002). For healthy people, the AHA recommends consuming fatty fish at least twice a week (30-40 g per day), or approximately 500 mg of EPA/DHA per day. Hypertriglyceridemia patients are advised to ingest 2 to 4 g of EPA/DHA per day (Lichtenstein et al., 2006). Further, the consumption of 3 to 20 g or more of EPA/DHA has been examined in terms of the dietary effects on serum triglycerides (TG), blood pressure, platelet aggregation activity, endothelial function, blood vessel flexibility, and inflammation (Kris-Etherton et al., 2003). A previous study found that treatment with 1.5 g of EPA/DHA per day appeared to improve carotid artery plaque stability (Thies et al., 2003). n-3 PUFA intake has also been associated with beneficial effects related to obesity, insulin sensitivity, and the reduction of inflammatory markers (Ramel et al., 2010; Rudkowska 2010; Ramel et al., 2008; Nettleton & Katz 2005). In a murine model of obesity and insulin resistance, dietary n-3 PUFAs were incorporated into the cell membrane phospholipids; they enhanced membrane fluidity, the expression and affinity of insulin receptors (Das 1999), and glucose transporter-4 protein levels in adipose tissue (Peyron-Caso et al., 2002), thereby improving insulin sensitivity.
In addition, n-3 PUFAs have beneficial effects on adipose tissue in obese individuals through reduced body fat mass and stimulated lipid oxidation (Couet et al., 1997), improvement in body weight and satiety regulation (Abete et al., 2010), and amelioration of the cytokine profile, including leptin and adiponectin (Abete et al., 2010). Beneficial effects have also been reported for inflammation (Das 2005) and for inflammatory conditions such as rheumatoid arthritis (Volker et al., 2000), systemic lupus erythematosus (Walton et al., 1991), Crohn's disease (Belluzzi et al., 1996), ulcerative colitis (Stenson et al., 1992), and immunoglobulin A nephropathy (Donadio et al., 1994). There is also an increasing amount of evidence suggesting that diets containing fish and/or EPA/DHA may protect against the development of Alzheimer's disease (Morris et al., 2003) and prostate cancer (Terry et al., 2001).

Phospholipids

Although the majority of fat in seafood is TG, approximately 10% consists of phospholipids (PLs). Numerous studies using animal models have suggested that dietary PLs may be of benefit to human health. For example, phosphatidylcholine, which is a major component of dietary PLs, can decrease blood total lipids (Mastellone et al., 2000) and improve brain function (Chung et al., 1995). Phosphatidylethanolamine and phosphatidylserine can also decrease blood cholesterol (Imaizumi et al., 1983) and improve brain function (McDaniel et al., 2003). There have been several human studies that have investigated the beneficial effects of supplementation with dietary krill oil, which has PL-containing n-3 PUFAs rather than TG-containing n-3 PUFAs. Results indicate that krill oil supplementation was well tolerated and caused desirable increases in plasma and cell membrane EPA and DHA levels (Wang et al., 2011; Maki et al., 2009). Furthermore, PL-containing n-3 PUFAs are beneficial in that they can help alleviate obesity-related disorders (Shirouchi et al., 2007) and act as antiinflammatory (Ikemoto et al., 2001), antioxidant (Hiratsuka et al., 2008), and antitumor agents (Hosokawa et al., 2001) in animal experiments. Previous studies have suggested that PL-containing n-3 PUFAs derived from squid mantle muscle decreased serum and liver TG and cholesterol levels compared with soybean PL- or TG-containing n-3 PUFAs (Hosomi et al., 2010a). Although research in this field is still at an initial stage, it has been receiving increasing attention given the realization that PL-containing n-3 PUFAs may yield important outcomes and facilitate progress in the design of beneficial clinical therapies for humans.

Protein, Peptide, and Non-Protein Nitrogen Compounds

It is generally accepted that seafood is a high-quality source of protein and that seafood consumption provides health benefits to growing children, adolescents, and the elderly. Normal dietary habits include whole fish as well as fish oil, and whole fish provides many additional nutrients. Dietary n-3 PUFAs decrease serum TG, although they do not lower serum cholesterol (Balk et al., 2006). Therefore, there is a possibility that the health function of fish-based foods is not solely related to EPA and DHA. A great deal of research has focused on the efficacy of EPA and DHA in seafood for human health, whereas almost none has addressed the health effects of proteins. As nutrient components of seafood, the beneficial effects of proteins may have been masked by EPA and DHA in seafood intake intervention studies.
Fish protein, which is a major macronutrient in fish, plays an important role in human nutrition worldwide (FAO 2010) and has been used as a main ingredient in processed seafood, such as kamaboko (Japanese fish paste) and fish sausage. Seafood proteins possess excellent amino acid scores and digestibility characteristics. They constitute approximately 10 to 25% of seafood and can be classified as sarcoplasmic, myofibrillar, and stroma types. In general, the amino acid composition and bioavailability of animal protein are more favorable than those of plant protein, and the quality of most fish proteins may equal that of an ideal protein such as lactalbumin and exceed that of terrestrial meat (Friedman 1996). Another aspect of the role of fish proteins in human health pertains to their possible effects on lipid metabolism. In this context, our group and other investigators have demonstrated that fish proteins affect serum cholesterol levels in experimental animals (Hosomi et al., 2009; Wergedahl et al., 2009; Shukla et al., 2006; Zhang & Beynen 1993). A previous study suggested that dietary fish protein decreased serum cholesterol through the inhibition of cholesterol and bile acid absorption and the enhancement of cholesterol catabolism in the liver (Hosomi et al., 2009). In addition, dietary fish protein also has beneficial effects, such as antihypertensive (Boukortt et al., 2004), fibrinolysis-stimulating (Murata et al., 2004), and antiobesity properties (Oishi & Dohmoto, 2009). In human studies, compared with other animal proteins, dietary cod protein decreased the highly sensitive C-reactive protein concentration in serum (Ouellet et al., 2008) and improved insulin sensitivity in insulin-resistant individuals (Ouellet et al., 2007). Recently, the large Nurses' Health Study, a prospective study following more than 84,000 women aged 30 to 55 years over a 26-year period, suggested that increasing the intake of fish as a major dietary protein source provided a significant reduction in CVD risk (Bernstein et al., 2010). Thus far, the health functions of fish tissues other than muscle have scarcely been examined. Although the testes and ovaries are edible parts, only information related to their high cholesterol and nucleic acid content is available. Protamine, which is abundant in fish testes, has been widely used as a pharmaceutical product, notably as an antidote to heparin; in combined preparations it prolongs the antihyperglycemic action of insulin, and it also serves as a natural food preservative. Protamine strongly inhibited the hydrolysis of trioleoylglycerol emulsion prepared with phosphatidylcholine (Tsujita et al., 1996), suppressed lipid absorption in an oral tolerance test in humans (Hoshino et al., 2008), and also suppressed the increase of body mass through the inhibition of fat absorption in the small intestine (Duarte-Vázquez et al., 2009). Furthermore, dietary protamine resulted in decreased serum and liver cholesterol levels through the suppression of cholesterol and bile acid absorption, and enhanced cholesterol secretion from the liver into bile in rats (Hosomi et al., 2010b). In recent years, many people have become interested in the health-promoting properties of bioactive peptides prepared from seafood protein. In a four-week double-blind, placebo-controlled trial, the group administered the valyl-tyrosine peptide, derived from sardine muscle hydrolysate by alkaline protease, showed reductions in systolic and diastolic blood pressure of 9.3 and 5.2 mm Hg, respectively (Kawasaki et al., 2000).
In addition, the inhibition of lipid peroxidation by a marine bioactive peptide isolated from jumbo squid was determined using a linoleic acid model system, and its activity was much higher than that of α-tocopherol and close to that of butylated hydroxytoluene (Mendis et al., 2005). Marine bioactive peptides also have beneficial effects such as immunomodulating (Duarte et al., 2006), hypocholesterolemic (Wergedahl et al., 2004), and antimicrobial effects (Tincu & Taylor 2004) in animal and in vitro studies. The various health functions of proteins and peptides derived from seafood have been clarified by researchers using animal and human studies. However, few long-term human studies have been undertaken to evaluate the health effects of marine proteins; in the future, the health benefits of marine protein in humans need to be assessed in long-term clinical trials. Non-protein nitrogen (NPN) compounds are also present, to various extents, depending on the species. The dark muscles of fish generally contain a higher amount of NPN compounds than the light muscles. NPN compounds in muscle tissues are composed of free amino acids, amines, nucleotides, guanidine and their breakdown products, urea, and ammonium salts (Shahidi, 1998). The contribution of NPN compounds to the taste of seafood is important.

Taurine

Taurine (2-aminoethanesulfonic acid) is a free amino acid that is present in nearly all tissues and is particularly abundant in the heart, blood, retina, and developing brain (Wójcik et al., 2010). Taurine synthetic activity in humans is weaker than that in guinea pigs and rats, and dietary dependence on taurine is correspondingly high. Hence, taurine is regarded as a conditionally essential amino acid in the human body (Huxtable, 1992). Taurine has many important roles in several essential biological processes, such as calcium modulation, bile acid conjugation, antioxidation, membrane stabilization, and immunity (Schuller-Levis & Park, 2004; Huxtable 2000; Huxtable 1992). Humans consume taurine largely through seafood, which contains high amounts of taurine compared to meat (Tsuji & Yano, 1984). Taurine is particularly abundant in some marine invertebrates: oyster tissue contains more than 1 g of taurine per 100 g, whereas the taurine content of terrestrial plants is low or absent (Kataoka & Onishi, 1986). Taurine has beneficial antihypertensive (Schaffer et al., 2010; Harada et al., 2004), antihypercholesterolemic (Matsushima et al., 2003), and antiinflammatory effects on lifestyle-related diseases (Jerlich et al., 2000). Furthermore, human intervention studies have revealed that the administration of taurine together with n-3 PUFAs has greater hypolipidemic and antiatherogenic effects than n-3 PUFA supplementation alone (Elvevoll et al., 2008). In non-diabetic obese human subjects, 3 g/day taurine supplementation for 7 weeks reduced serum TG, the atherogenic index, and body weight compared to a placebo group (Zhang et al., 2004). These findings suggest that the consumption of a sufficient quantity of taurine may be important in reducing the risk of lifestyle-related diseases. However, further clinical trials are required to confirm the health-promoting mechanisms of taurine.

Fiber

In general, muscle-based seafood contains very little carbohydrate and fiber. However, edible seaweed contains abundant dietary fiber (25-75% of dry weight), of which water-soluble fiber constitutes approximately 50 to 85% (Jimenez-Escrig & Sanchez-Muniz, 2000).
On the basis of their pigmentation, seaweeds are classified into three main groups. Brown seaweeds are predominantly brown due to fucoxanthin and have primary polysaccharides such as fucans, cellulose, alginates, and laminarins (Goni et al., 2002; Haugan & Liaaenjensen, 1994). Green seaweeds are green due to the presence of chlorophyll and contain ulvan as a major polysaccharide component (Robic et al., 2009). Red seaweeds have phycoerythrin and phycocyanin as their principal pigments; they also contain agars and carrageenans as the primary polysaccharides (McHugh 2003). In animal studies, polysaccharides extracted from various edible seaweeds have been found to reduce total cholesterol, low-density lipoprotein (LDL)-cholesterol, and TG in plasma (Amano et al., 2005; Pengzhan et al., 2003). The hypocholesterolemic effect of polysaccharides may be due to interference with micelle formation and lipid absorption in the small intestine or to an increased excretion of neutral sterols and biliary acids in feces. In addition, sulfated polysaccharides, such as fucoidan and carrageenans, are recognized to possess a number of biological activities, including anticoagulant (Matsubara et al., 2000), antiviral (Artan et al., 2008), antioxidant (Heo et al., 2005), and antiinflammatory (Kim et al., 2009) effects that may have relevance in functional foods, cosmetics, and pharmaceutical applications (d'Ayala et al., 2008; Guo et al., 1998). While a substantial number of studies have been conducted to date both in vitro and in vivo, few have involved human subjects. Further study of seaweed fiber should aim to examine its health benefits in human subjects.

Phytosterols

The structure of phytosterols is similar to that of cholesterol, with only minor differences in the relative positions of ethyl and methyl groups. Phytosterols are common constituents of plants, and the principal forms are β-sitosterol, stigmasterol, and campesterol. The forms of phytosterols in marine invertebrates include free sterols, stanols, and sterol esters (Kanazawa 2001). Phytosterols are often used to develop health foods, including low-fat and fat-free yogurt, milk, juices, spreads, cereals, and bread (Demonty et al., 2009). Clinical trials have consistently shown that an intake of 2 to 3 g/day of phytosterols is associated with a significant lowering (between 4.1 and 15%) of blood LDL-cholesterol (Malinowski & Gehret 2010; de Jong et al., 2008; Patch et al., 2005; Thompson & Grundy 2005). The hypocholesterolemic effects associated with the intake of certain edible microalgae have been demonstrated to be caused by phytosterols, and microalgae have been launched as industrial producers of phytosterols (Plaza et al., 2009; Rasmussen et al., 2009). The lipid-lowering mechanism of phytosterols is thought to operate through competition with cholesterol for incorporation into micelles in the intestine (Jones et al., 2000). Their presence in the intestine thus adversely affects the stabilization of cholesterol in micelles, thereby decreasing cholesterol absorption. In addition, phytosterols enhance the expression of the enterocyte ATP-binding cassette (ABC) G5 and ABCG8 transporters, which act to excrete cholesterol into the intestinal lumen (Marangoni & Poli 2010; Patch et al., 2006).
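To give a sense of scale for the LDL-cholesterol reductions cited above (4.1 to 15% at 2 to 3 g/day of phytosterols), the following illustrative calculation applies those percentages to a hypothetical baseline value; the baseline is an assumption, not a figure from the cited trials.

# Illustrative arithmetic for the cited LDL-cholesterol lowering range.
baseline_ldl = 130.0  # mg/dL, hypothetical baseline value
for pct in (4.1, 15.0):
    drop = baseline_ldl * pct / 100
    print(f"{pct:>4}% lowering: -{drop:.1f} mg/dL -> {baseline_ldl - drop:.1f} mg/dL")

At this baseline, the cited range corresponds to an absolute reduction of roughly 5 to 20 mg/dL.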
Phytosterols have also been reported to possess other biochemical properties, including antiinflammatory (Houweling et al., 2009), antioxidant (Mannarino et al., 2009), and anticancer effects (Bouic 2001). A few studies have examined the relationship between high-dose phytosterol intake and reduced levels of fat-soluble vitamins, antioxidants, and carotenoids (Musa-Veloso et al., 2011; Katan et al., 2003). Further research is needed to gain more insight into the safety of phytosterols as foods and functional supplements in the human body.

Carotenoids

Carotenoids are fat-soluble, brilliant yellow-to-orange pigments. In photosynthetic organisms, plankton, and fungi, they act to transform light energy into chemical energy and serve as antioxidants that inactivate harmful reactive oxygen species (Lesser, 2006). One of the most important biological functions of carotenoids such as β-carotene in the human body is their ability to form vitamin A (García-González et al., 2005). However, other carotenoids, such as astaxanthin, lycopene, and fucoxanthin, do not form vitamin A. Recently, the astaxanthin and fucoxanthin derived from seafood have been reported to have a wide range of commercial applications based on their biological properties. Astaxanthin is a xanthophyll carotenoid that is contained in salmonid fish, lobsters, and marine crustaceans. Astaxanthin is considered to have health-promoting effects because astaxanthin oral supplementation in healthy human volunteers caused significant reductions in biomarkers of oxidative stress, inflammation, and hyperlipidemia (Cicero et al., 2007; Karppi et al., 2007; Iwamoto et al., 2000). Non-obese individuals who consumed astaxanthin for 12 weeks had decreased TG and increased high-density lipoprotein (HDL)-cholesterol, which is related to an increase in the adiponectin level (Yoshida et al., 2010). However, only a limited number of clinical studies in humans have tested the safety of astaxanthin consumption. Fucoxanthin is an orange-colored carotenoid found in edible brown seaweeds, such as Undaria pinnatifida, Hijikia fusiformis, Laminaria japonica, and Sargassum fulvellum (Maeda et al., 2007). Fucoxanthin prevents the growth of fat tissue, reduces abdominal fat, and reduces the risk of stroke, inflammation, and various cancers (Maeda et al., 2008; Ikeda et al., 2003). Although the beneficial functions of fucoxanthin are only beginning to be examined, fucoxanthin administration is known to markedly elevate plasma HDL-cholesterol and total cholesterol levels (Woo et al., 2010; Kadekaru et al., 2008). Before fucoxanthin is used as a functional supplement, further study is required to determine its safety.

Risk Associated with Fish Consumption

The health benefits related to the reduction in risk of CVD have triggered the mass consumption of fish (FAO 2010). Fish consumption, however, also carries certain risks associated with exposure to environmental toxicants. For instance, human exposure to methylmercury occurs almost exclusively through edible marine products. Free mercury is readily converted into methylmercury by microorganisms, and methylmercury accumulates in fish at the top of the food chain. The nervous system is highly sensitive to methylmercury, and the developing fetal and infant nervous systems are particularly vulnerable. Methylmercury induces central nervous system damage that depends on the amount ingested (Clarkson et al., 2003; Yoshizawa et al., 2002).
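As an illustration of how such dose dependence is commonly put into context, the sketch below compares a single hypothetical fish serving with the U.S. EPA reference dose (RfD) for methylmercury of 0.1 µg per kg of body weight per day; the body weight, serving size, and contamination level are assumed values chosen for the example, not data from the text.

# Illustrative methylmercury dose check (assumptions flagged below).
RFD_UG_PER_KG_DAY = 0.1   # U.S. EPA reference dose for methylmercury

body_weight_kg = 60.0     # hypothetical adult
serving_g = 100.0         # hypothetical serving size
mehg_ug_per_g = 0.3       # assumed concentration; varies widely by species

weekly_budget_ug = RFD_UG_PER_KG_DAY * body_weight_kg * 7
serving_dose_ug = serving_g * mehg_ug_per_g
print(f"weekly RfD budget: {weekly_budget_ug:.0f} ug; one serving: {serving_dose_ug:.0f} ug")

With these assumed values, a single 100-g serving (30 µg) consumes most of the 42 µg weekly budget, which illustrates why advisories focus on species and serving frequency.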
Fish consumption recommendations for pregnant women and children are accompanied by warnings regarding how much and what kind of fish should be consumed (FDA 2004). Further, the dioxins and polychlorinated biphenyls contained in seafood have raised concerns about the health effects of seafood consumption (Arisawa et al., 2005; Arisawa et al., 2003). Balancing the health benefits and risks of fish intake is therefore an important problem (He, 2009). Some researchers have reported that the consumption of seafood provides benefits that outweigh the risks, except for shark, swordfish, and edible animals and plants from areas with high levels of environmental contaminants (Dewailly et al., 2007; Yaktine & Nesheim 2007; Yoshizawa et al., 2002).

Conclusion

People have come to realize the importance of seafood in our diet. Numerous studies have shown that seafood is among the best sources of health-promoting fats, protein, vitamins, and minerals. It is unfortunate that it took so many years for the health benefits of seafood to be recognized. In the future, an increase in lifestyle-related diseases, the majority of which result from dietary habits, is expected in both developed and developing countries (Daar et al., 2007). There is evidence that increased consumption of seafood and of bioactive components derived from fish, shellfish, and seaweed could have a positive impact on the health of people around the world. Thus, the role of seafood in the maintenance and enhancement of health may grow stronger, given the problem of lifestyle-related disease and the local food environment. In sum, it is of paramount importance to promote the consumption of seafood and a reduction in the currently excessive intake of high-sugar and high-fat foods, including fast food and soft drinks (sugar in particular), saturated fatty acids, and n-6 PUFAs.
Does balneotherapy provide additive effects to physical therapy in patients with subacute supraspinatus tendinopathy? A randomized, controlled, single-blind study

This study assessed the additional contribution of balneotherapy to physical therapy in subacute supraspinatus tendinopathy. Ninety patients with subacute supraspinatus tendinopathy were included and randomized into two equal groups. In group 1 (n = 45), transcutaneous electrical nerve stimulation (TENS), hot pack, ultrasound treatments, and Codman's and range of motion (ROM) exercises were performed. In group 2 (n = 45), balneotherapy was added to the treatment program. In both groups, shoulder active ROM and handgrip strength were measured. Pain was evaluated using a Visual Analogue Scale (VAS) (rest, sleep, movement); functional status and quality of life were measured with the Shortened Disabilities of the Arm, Shoulder and Hand Questionnaire (QuickDASH) and the Short Form-36 health survey (SF-36), respectively. All measurements were repeated before and after 15 treatment sessions. In group 1, there were statistically significant before-after differences in the assessment parameters (all p < 0.05), except for the SF-36 General Health Perceptions and SF-36 Mental Health sub-parameters and handgrip strength. In group 2, there were statistically significant differences in all evaluations before and after treatment (all p < 0.05). When the two groups were compared in terms of delta gains, statistically significant differences were observed in favor of group 2 in all measurements (all p < 0.05) except for the SF-36 Emotional Role Difficulty and SF-36 Mental Health sub-parameters. This study shows that the addition of balneotherapy to physical therapy for subacute supraspinatus tendinopathy can make additional contributions to shoulder ROM, pain, handgrip strength, functional status, and quality of life.

Introduction

The shoulder joint has the greatest range of movement in the body. Among musculoskeletal problems, shoulder pain is the third most common in the general population, after back and neck pain (Roe et al. 2013). Its prevalence varies between 7 and 26%; this wide range of prevalence rates has been explained in the literature by the use of different definitions (Luime et al. 2009). Acute shoulder pain is defined as symptoms lasting up to 6 weeks, subacute as symptoms lasting 6 to 12 weeks, and chronic as symptoms lasting longer than 12 weeks. Studies indicate that the duration of symptoms is the most important factor in terms of prognosis; chronic pain makes treatment difficult and increases treatment costs (Reilingh et al. 2008). Periarticular causes account for up to 90-95% of shoulder pain. Among these, rotator cuff lesions are the most common cause, varying in a broad spectrum from tendinitis to partial and complete tears and calcific tendinopathy. Studies using diagnostic imaging in shoulder pain showed that rotator cuff pathologies were observed most frequently in the supraspinatus tendon (Vecchio et al. 1995; Karel et al. 2017). In treatment, conservative methods such as analgesic and anti-inflammatory drugs, various injections, exercises, and physical therapy are used. Various electrotherapy modalities, hot-cold treatments, deep heating agents, and mobilization and manipulation techniques are used in physical therapy. In cases where conservative treatments are inadequate, surgical methods are used (Filiz and Çakır 2014).
Balneotherapy is frequently used for musculoskeletal diseases, including shoulder diseases, in our country and in some European and Asian countries. There are many studies assessing the effects of balneotherapy in hand and knee osteoarthritis, chronic low back pain, degenerative diseases such as lumbar spondylosis, mechanical neck pain, and fibromyalgia (Nasermoaddeli and Kagamimori 2005; Fioravanti et al. 2014; Roques and Queneau 2016; Branco et al. 2016). The additional contribution of balneotherapy to treatment in shoulder pathologies has been investigated in only a limited number of studies, and these included patients grouped under broad definitions such as subacromial impingement syndrome or chronic shoulder pain (Şen et al. 2010; Chary-Valckenhaere et al. 2012). Only a few studies have indicated the efficacy of balneotherapy in chronic shoulder pain (Şen et al. 2010; Chary-Valckenhaere et al. 2012; Tefner et al. 2015), and only one study showed the beneficial effects of peloid treatment in subacromial impingement syndrome, which is one of the causes of shoulder pain (Şen et al. 2010). The effects of thermal water baths have not been studied adequately in shoulder pathologies. In light of these data, we aimed to investigate the additive effects of balneotherapy on physical therapy in patients with subacute supraspinatus tendinopathy (6-12 weeks). We evaluated health-related quality of life, emotional mood, sleep, pain scores, functional status of the shoulder, handgrip strength, and active range of motion (ROM).

Material and methods

This single-blind, randomized controlled trial was conducted in the Physical Medicine and Rehabilitation Department Outpatient Clinic of the Ahi Evran University Medical Faculty. Declaration of Helsinki protocols were followed, and local ethics committee approval for the study was obtained (process no: 2018-06/62). The study was performed between March 29, 2019, and April 30, 2019 (ACTRN12619000045112), and conforms to the CONSORT guidelines, reporting the required information accordingly. The patients were evaluated by a single researcher (CK) both before and after the treatment periods; the researcher was blinded as to which treatment protocol the patients had been assigned. The G-power (v.3.1.9.2) program was used to determine the sample size, and it was concluded that a minimum of 45 people in each group was required to achieve an effect size of approximately d = 0.5 (medium effect size) at 80% power and a 5% significance level (Cohen 2013). Patients between the ages of 20-65 years with 6-12 weeks of unilateral shoulder pain were examined. The Neer, Hawkins, and painful arc tests are provocative tests for subacromial impingement; all three were performed on all patients, and those who were positive in at least one of them were evaluated further. For the diagnosis of subacute supraspinatus tendinopathy (Burbank et al. 2008), moderate or severe pain (VAS 4 and above) and full passive range of motion were assigned as inclusion criteria. Other shoulder evaluation tests (Cools et al. 2008), physical examination, laboratory tests, and diagnostic imaging were performed. Patients with other diagnoses explaining their shoulder pain were excluded, and patients whose affected-shoulder MRI was consistent with subacute supraspinatus tendinopathy were included in the study. In the Neer test, one hand stabilizes the patient's scapula while the other hand raises the arm into full flexion; a positive test is indicated by pain.
The Hawkins test involves flexing the shoulder to 90° and then forcibly internally rotating it, though gentle internal rotation has also been recommended; pain in the shoulder area indicates a positive test. In the painful arc test, the patient is asked to actively lift the arm in the scapular plane and then slowly reverse the movement; the test is noted as positive if the patient has pain between 60 and 120 degrees of elevation (Çaliş et al. 2000). A detailed history was taken from all patients. Musculoskeletal and neurologic examinations were performed, and radiologic (shoulder anteroposterior (AP)/lateral, cervical AP/lateral), serologic (acute phase reactants, erythrocyte sedimentation rate, C-reactive protein (CRP), rheumatoid factor (RF)), and biochemical analyses (liver function tests, fasting blood glucose (FBG), urea, uric acid, creatinine) and hemograms were obtained. Magnetic resonance (MR) imaging was performed on the affected shoulder in all cases. The exclusion criteria were specified as follows: shoulder instability; previous shoulder surgery; positive drop arm test; diagnosed adhesive capsulitis; rotator cuff tear; osteonecrosis; cuff arthropathy or arthritis; a history of shoulder injection in the past year; acromioclavicular joint pathology; physical therapy and/or therapeutic balneotherapy in the past year; a history of fracture or dislocation in the shoulder area; calcific tendinitis on radiography; neurologic deficit; regional diseases (cervical radiculopathy, brachial neuritis, complex regional pain syndrome, peripheral neuropathy); rheumatologic, oncologic, or infectious disease; coagulopathy; severe cardiovascular or pulmonary disease; visceral-induced shoulder pain; a history of severe psychiatric illness; and breastfeeding or pregnancy. According to the inclusion and exclusion criteria, 98 patients diagnosed as having subacute supraspinatus tendinopathy were included in the study; eight patients dropped out for various reasons (Fig. 1). The study was completed with 90 patients (53 women and 37 men). The participants were given detailed information about the study, and their written approval was obtained. Patients were randomly divided into two equal groups using the covariate adaptive randomization method (variables: age, sex, education level) with a computer program (Kang et al. 2008); a sketch of this kind of allocation scheme is given at the end of this paragraph. Group 1 received transcutaneous electrical nerve stimulation (TENS), hot pack, ultrasound (US), and exercise treatment. Group 2 received balneotherapy in addition to the treatments given to group 1. All treatments were performed in a total of 15 sessions, five days per week. TENS treatment was applied by crossing the electrodes over the supraspinatus muscle and the aching area, at a frequency of 60-80 Hz with 100-msec pulse intervals and a current intensity of 1 to 100 mA, adjusted so that the patient felt a slight tingling without muscle contraction. During treatment, the patient's arm was supported with a pillow in a resting position while the patient was sitting. Each session lasted 20 min. Hot pack treatment was performed by placing a silica gel hot pack, heated in water at 72-75°C in a boiler and wrapped in two layers of towels, on the aching shoulder once a day for 20 min.
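The allocation program itself is not described in the text; as a minimal sketch of the general technique, the following Pocock-Simon-style minimization assigns each new patient to the group with the smaller covariate imbalance over the three stated variables (age group, sex, education level), using hypothetical category labels. It illustrates the approach, not the authors' actual software.

import random

counts = {g: {"age": {}, "sex": {}, "edu": {}} for g in ("group1", "group2")}

def imbalance(group, patient):
    # Number of already-allocated patients in `group` sharing each covariate level.
    return sum(counts[group][var].get(level, 0) for var, level in patient.items())

def allocate(patient, p_best=0.8):
    # Prefer the group that minimizes covariate imbalance, with a biased coin
    # to keep the assignment unpredictable; ties are broken at random.
    g1, g2 = imbalance("group1", patient), imbalance("group2", patient)
    best = "group1" if g1 < g2 else "group2" if g2 < g1 else random.choice(["group1", "group2"])
    other = "group2" if best == "group1" else "group1"
    chosen = best if random.random() < p_best else other
    for var, level in patient.items():
        counts[chosen][var][level] = counts[chosen][var].get(level, 0) + 1
    return chosen

print(allocate({"age": "40-49", "sex": "F", "edu": "primary"}))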
The US treatment was performed by moving the probe with continuous contact in circular movements over the aching shoulder, at a dose of 1.5 W/cm² in continuous mode, for 6 min per day. For exercises, Codman's pendulum exercises were given to both groups of patients; the exercises were actively performed by the patient under the supervision and direction of the researcher for 15 min. In group 2, balneotherapy was given at the Kırşehir Terme Spas, which operate under the Department of Physical Medicine and Rehabilitation of Ahi Evran University. The hot mineral water at 42 ± 1°C contains 98.3 mg/L sulfur, 556 mg/L bicarbonate, 186.7 mg/L sodium, 34.5 mg/L magnesium, 226 mg/L calcium, 232 mg/L chloride, 2.6 mg/L fluoride, and 58.43 mg/L silicate acid. Spa treatment was given to the patients as a whole-body bath lasting 20 min. No analgesic or anti-inflammatory drugs were allowed during the study, and none of the patients were using pregabalin and/or gabapentin. Demographic characteristics and those of the affected shoulders were recorded. The active ROM (flexion, extension, abduction, internal and external rotation) of the affected shoulder was measured using a goniometer. Grip strength evaluations were performed using a Jamar hand dynamometer. The patients were asked to grade their pain during sleep, rest, and movement using a Visual Analogue Scale (VAS). The Shortened Disabilities of the Arm, Shoulder and Hand Questionnaire (QuickDASH) and the Short Form-36 health survey (SF-36) quality of life scale were administered before and after the study. The VAS is a scale used for the evaluation of pain severity: a 10-cm line with the left-most part indicating no pain and the right-most part indicating maximum pain. All patients were asked to mark the point on the line most appropriate to their pain (Hong 2011). The SF-36 is a quality of life measure consisting of 36 items that evaluate physical functioning, physical role functioning, emotional role functioning, social role functioning, general health, mental health, bodily pain, and vitality. Scores for the eight domains are calculated by summing the item scores; each domain is scored from 0 to 100, with 0 indicating the worst and 100 the best health status. Validity and reliability studies of the scale have been performed in the Turkish population (Kocyigit et al. 1999). QuickDASH is an 11-item questionnaire that measures physical function and symptoms in patients with upper limb musculoskeletal disorders; its items cover daily activities, house/garden work, shopping, recreation, self-care, eating, sleep, friends, work, pain, and tingling/numbness. Validity and reliability studies of the scale have been performed in the Turkish population (Düger et al. 2006). A statistical package program (IBM Corp.) was used for all analyses. The normality of the measured data distributions was evaluated using the Shapiro-Wilk test. Continuous data are shown as mean ± standard deviation (SD), and categorical data are presented as percentages (%). Student's t test was used for normally distributed data, and the Mann-Whitney U test was used otherwise. Qualitative comparisons of the groups were performed using the Chi-square test. Additionally, paired t tests were used to compare repeated measures within each group when the data were normally distributed, and the Wilcoxon test was used otherwise.
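The test-selection logic just described can be summarized in a short sketch using SciPy; the data here are synthetic and the function names are illustrative, so this stands in for, rather than reproduces, the authors' analysis.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group1 = rng.normal(6.0, 1.5, 45)  # e.g., synthetic baseline VAS movement scores
group2 = rng.normal(6.2, 1.5, 45)

def compare_groups(a, b, alpha=0.05):
    # Shapiro-Wilk normality check, then Student's t test or Mann-Whitney U.
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    return stats.ttest_ind(a, b) if normal else stats.mannwhitneyu(a, b)

def compare_repeated(before, after, alpha=0.05):
    # Paired t test if the within-patient differences look normal, else Wilcoxon.
    diffs = np.asarray(after) - np.asarray(before)
    normal = stats.shapiro(diffs).pvalue > alpha
    return stats.ttest_rel(before, after) if normal else stats.wilcoxon(before, after)

print(compare_groups(group1, group2))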
The threshold for statistical significance was set at p < 0.05.

Results

There was no statistically significant difference between the treatment groups regarding demographic characteristics such as age, sex, body mass index, and education (p > 0.05). There was also no significant difference in baseline pain duration between the groups (p > 0.05) (Table 1). Positivity-negativity ratios of the diagnostic tests (Neer, Hawkins, and the painful arc test) were similar in both treatment groups (all p > 0.05) (Table 2). In groups 1 and 2, the pre- and post-treatment ROM measurements (flexion, extension, internal rotation, and external rotation) were evaluated, and statistically significant improvements were detected in both groups after treatment (p < 0.05) (Table 3). In group 1, the post-treatment Jamar hand dynamometer measurements did not differ significantly from the pre-treatment measurements (p > 0.05), whereas in group 2 the post-treatment measurements were significantly higher than the pre-treatment measurements (p < 0.05) (Table 4). In group 1, there were no significant differences between pre- and post-treatment measurements for the SF-36 General Health and SF-36 Mental Health values (all p > 0.05), while statistically significant improvements were found in all other sub-parameters (p < 0.05). In group 2, there were significant improvements in the post-treatment values of all SF-36 parameters compared with pre-treatment (p < 0.05) (Table 5). The difference between the treatment efficacy of the two groups was evaluated using delta gains. There was no significant difference between the two groups regarding the SF-36 Emotional Role Difficulty and SF-36 Mental Health gains (p > 0.05). For the delta gains of all other variables, a statistically significant difference was found in favor of group 2 (p < 0.05) (Table 6).

Discussion

In our study, a significant improvement was observed in active ROM measurements, QuickDASH scores, and VAS scores (during rest, sleep, and movement) in both groups (p < 0.05). However, the gain in the group receiving additional balneotherapy was significantly greater than in the other group (p < 0.05). Similar to our results, in the study of Şen et al. (2010), peloid treatment, which is a method of balneotherapy, provided an increase in shoulder ROM measurements and shoulder function and a significant improvement in VAS scores. In a multicenter study in which the effectiveness of balneotherapy for shoulder pain associated with chronic cuff tendinopathy was evaluated, a significant improvement was observed in the DASH scores of the group receiving spa treatment (Chary-Valckenhaere et al. 2012). Although similar results were obtained in the study of Tefner et al. (2015), in which thermal water and balneotherapy were used in patients with chronic shoulder pain, in contrast to our study, no significant difference was found between the groups' ROM measurements; this result was considered to be caused by the capsular tension and adhesions associated with the chronic pathologies of the patients included in that study. There are many studies on the use of balneotherapy in various musculoskeletal diseases in the literature (Odabaşı et al. 2002; Şen et al. 2007; Herisson et al. 2014).
Although balneotherapy is still not among the recommended treatment methods in some international treatment guidelines and meta-analyses, it is one of the recommendations of the Turkish League Against Rheumatism (TLAR) for the treatment of knee osteoarthritis and ankylosing spondylitis (Bodur et al. 2011; Tuncer et al. 2012). Also, among the non-pharmacologic treatment recommendations for ankylosing spondylitis of the Assessment of Spondyloarthritis International Society (ASAS)/European League Against Rheumatism (EULAR) prepared by van den Berg (2012), balneotherapy is recommended in combination with other non-pharmacologic treatments or alone in addition to pharmacologic treatment. The thermal, chemical, and anti-inflammatory effects of balneotherapy have been described in numerous studies in the literature (Gálvez et al. 2018; Morer et al. 2017; Cozzi et al. 2018), and its benefits on pain and joint stiffness at the cellular-molecular level have also been shown (Kurt et al. 2016; Koczy et al. 2019; Fioravanti et al. 2011). Balneotherapy provides analgesic effects by preventing the stimulation of nociceptive receptors, reducing pain transmission through the gate control mechanism by stimulating thick nerve fibers, removing oxygen radicals, and, in particular, increasing beta-endorphin levels (Yurtkuran et al. 1993; Koczy et al. 2019; Tishler et al. 2004; Bender et al. 2005; Hizmetli and Hayta 2011). It has also been shown that balneotherapy decreases inflammation and, ultimately, pain by increasing antiinflammatory cytokines (Shehata et al. 2006). We think that the greater improvement in active joint ROM, pain, and shoulder function in the group receiving balneotherapy in our study may be related to the pathophysiological mechanisms demonstrated in the studies mentioned above. However, real-life data and studies on patients with shoulder pain are scarcer than for other pain syndrome groups (Karagulle et al. 2017). There was a statistically significant increase in handgrip strength measured with the Jamar hand dynamometer after treatment in group 2 compared with pre-treatment (p < 0.05), and in the analysis of delta gains, the gain in the group receiving balneotherapy was also significantly higher. The pain- and inflammation-reducing and thermal effect mechanisms of balneotherapy have been investigated in many studies in the literature: in response to heat, the elasticity of collagen-containing tissues increases, muscle spasm decreases (possibly reducing pain), and joint function improves (Tishler et al. 2004; Bender et al. 2005; Shehata et al. 2006; Fioravanti et al. 2011). Handgrip strength is a clinical measurement that would be expected to improve as pain and spasm decrease. In group 1, there were significant improvements in the sub-parameters of the SF-36, except for general health perception and mental health (p < 0.05), whereas in group 2 there was a significant improvement in all parameters (p < 0.05). When the post-treatment changes of the two groups were compared, well-being was higher in the group receiving balneotherapy, except for the role limitation due to emotional problems and mental health sub-parameters. Balneotherapy has been shown to increase physical and mental quality of life, reduce anxiety and depression, and reduce pain and improve function (Evcik et al. 2002; Fioravanti et al. 2012; Tefner et al. 2015). These effects are thought to be due to adaptive modifications, particularly autonomic and behavioral changes, in regulatory systems (Bender et al. 2005).
For these reasons, balneotherapy is widely used today for therapeutic purposes. In the study of Çağlar (2015), in which the additional contributions of balneotherapy to physical therapy in various musculoskeletal diseases were investigated, a higher rate of improvement was found in favor of the group that received balneotherapy in all sub-parameters of the quality of life scales. Similar results concerning quality of life have been revealed in balneotherapy studies of regional diseases such as knee osteoarthritis, hand osteoarthritis, chronic low back pain, and hip osteoarthritis (Guillemin et al. 1994; Horvath et al. 2012; Kesiktaş et al. 2012; Kovacs et al. 2012; Onat et al. 2014; Kovacs et al. 2016). However, different results were reported for the SF-36 sub-parameters in a two-center study examining balneotherapy in chronic shoulder pain, in which the control group was given TENS and exercise (Tefner et al. 2015): there were improvements in both groups in the role limitation related to physical problems, vitality, and pain sub-parameters, but the group receiving balneotherapy showed no superiority, and the role limitation due to emotional problems sub-parameter did not improve in either group. In the abovementioned studies and in our study, most of the SF-36 sub-parameters improved with spa treatment in general, but different results were obtained in some sub-parameters. In our study, patients undergoing balneotherapy received daily outpatient treatment because of their clinical conditions and daily schedules. As a result, these patients could not benefit from the recreational factors that increase quality of life, such as environmental change, stress relief, lifestyle change, and rest, which are thought to contribute to the effectiveness of balneotherapy. The differences in sub-parameters of the quality of life scale (SF-36) may be related to this fact. In some of the studies in which the additional contribution of balneotherapy to treatment was evaluated with clear diagnostic framing, follow-up that could provide data on long-term effects was conducted. In our study, however, the data reflect only short-term effects, because many of the patients were not available for long-term follow-up. These outcomes nevertheless reveal the additional contributions of balneotherapy in the early period. In two studies, although patients with newly diagnosed shoulder pain received primary therapy, it was reported that 40-50% of patients continued to have pain even after 6-12 months (Croft et al. 1996; Winters et al. 1999). Kuijpers et al. also indicated that, for shoulder pain, 80% of expenditures are accounted for by patients who do not obtain good results despite conservative or surgical treatment (Kuijpers et al. 2006). Balneotherapy is a cost-effective treatment and helps to reduce both the loss of labor force and treatment costs (Van Tubergen et al. 2002). When used together with routine physical therapy methods, balneotherapy can contribute to the treatment of musculoskeletal diseases, especially in the early stages, helping to prevent symptoms from becoming chronic. As in our study, the efficacy of balneotherapy should be investigated in specific pathologies, with larger series and longer follow-up. Our study is important in being the first to investigate the effectiveness of balneotherapy in a specific shoulder pathology. We believe that this study may be a guide for further research.
Characterization of the transcriptional regulator YY1. The bipartite transactivation domain is independent of interaction with the TATA box-binding protein, transcription factor IIB, TAFII55, or cAMP-responsive element-binding protein (CREB)-binding protein.

YY1 is a multifunctional transcription factor implicated in both positive and negative regulation of gene expression as well as in initiation of transcription. We show that YY1 is ubiquitously expressed in growing, differentiated, and growth-arrested cells. The protein is phosphorylated and has a half-life of 3.5 h. To define functional domains, we have generated a large panel of YY1 mutant proteins. These were used to define precisely the DNA-binding domain, the region responsible for nuclear localization, and the transactivation domain. The two acidic domains at the N terminus each provide about half of the transcriptional activating activity. Furthermore, the spacer region between the Gly/Ala-rich and zinc finger domains has accessory function in transactivation. YY1 has been shown previously to bind to TAFII55, TATA box-binding protein, transcription factor IIB, and p300. In addition, we identified the CREB-binding protein (CBP) as a YY1 binding partner. Surprisingly, these proteins did not bind to the domains involved in transactivation, but rather to the zinc finger and Gly/Ala-rich domains of YY1. Thus, these proteins do not explain the transcriptional activating activity of YY1, but rather may be involved in repression or in initiation.

Different mechanisms have been implicated in the regulation of gene transcription by YY1. Depending on the context, YY1 was shown to either stimulate or repress gene expression (for review, see Refs. 1 and 2). The mechanistic basis of these two different activities has not been characterized. However, recent evidence indicates that the interaction of YY1 with the coactivator p300 may be relevant in determining whether YY1 functions as an activator or repressor (3). Furthermore, YY1 has been described as an initiator-binding protein (4).
This has been supported by the finding that YY1 can stimulate basal transcription in vitro in combination with TFIIB and RNA polymerase II, notably in the absence of the TATA box-binding protein (TBP) (5). In addition, YY1 has recently been identified as a component of a large RNA polymerase II complex that contains YY1 in stoichiometric amounts with RNA polymerase II and several general transcription factors as well as DNA repair proteins (6). Yet another aspect of YY1 function has been uncovered by demonstrating its identity to the nuclear matrix protein NMP-1 (7). These data imply that YY1 may also be involved in aspects of chromatin organization, possibly by tethering DNA to the nuclear matrix. Together, these findings suggest that YY1 participates in a number of different processes associated with the regulation of gene transcription. Interestingly, YY1 function and regulation have been linked to the adenovirus protein E1A and the proto-oncoprotein c-Myc (3, 4, 8-10). Originally, it was found that E1A-mediated activation of the adeno-associated virus (AAV) P5 promoter results from relief of YY1 repression (4). This seems not to be due to a direct interaction of E1A with YY1, but rather to the binding of E1A to the coactivator p300 in a p300-YY1 complex (3). Thus, in this complex, p300 appears to acquire a new quality as a mediator of repression, whereas it supports activation of all other studied transcriptional regulators, including CREB and c-Myb (11, 12). In contrast to E1A, c-Myc directly interacts with and alters the function of YY1 (10). In addition, YY1 can also transactivate the mouse c-myc promoter (9). Since both E1A and c-Myc are potent cell growth regulators (for review, see Refs. 13 and 14), their interaction with YY1 suggests a role for this protein in cell growth control. YY1 is a zinc finger-containing transcriptional regulator with homology to the GLI-Krüppel family of proteins (4, 15-17). The analysis of YY1 deletion mutants, mainly in the context of Gal4 fusion proteins, has indicated that the zinc finger region is responsible for DNA binding and that the N-terminal region contains a transactivation domain (8, 16, 18-20). The repression function of YY1 has been mapped to the very C terminus, a region also essential for DNA binding (4, 8, 19). Here we report that YY1 is a rather stable phosphorylated protein expressed at comparable levels in both growing and differentiating cells. In addition, using a panel of YY1 mutant proteins, we show that all four zinc fingers are required for specific DNA binding. We have mapped a region, including fingers 2 and 3, essential for efficient nuclear targeting. Furthermore, the transactivation domain is bipartite, with each of the two acidic domains at the N terminus contributing about half of the transactivating potential, whereas the spacer region between the Gly/Ala-rich and zinc finger domains has an accessory function in transactivation. In addition to binding to p300 (3), we demonstrate that YY1 can also interact with the CREB-binding protein (CBP). However, binding to CBP as well as to the previously described interaction partners TFIIB, TBP, and TAFII55 (5, 21) does not require the transactivation domains, but instead the Gly/Ala-rich and zinc finger domains. These findings connect the binding of YY1 to CBP, TFIIB, TBP, and/or TAFII55 to repression or initiation rather than to transactivation. Transient transfections were performed using a standard calcium phosphate transfection protocol as described previously (22).
Briefly, cells were plated at a density of 1.5 × 10^5 cells/plate. Each 6-cm plate received 2 µg of reporter plasmid, 2 µg of pRSVlacZ as internal control, and the amounts of effector plasmids indicated. All transfections were done in duplicate or triplicate, and all experiments were performed at least four times. Cells were harvested after 36-48 h, and luciferase and β-galactosidase activities were determined. Plasmids-The pCB6+-based YY1 expression vector (pCMVYY1) was a gift of M. Atchison (17). A BglII-ClaI fragment from this construct was inserted into pBluescript KS+ (Stratagene), and the resulting pBS-YY1 plasmid was used for mutagenesis. Deletions were made either by exploiting existing restriction sites or by introducing new sites by polymerase chain reaction. All junctions and all polymerase chain reaction-derived sequences were verified by sequencing. None of the deletion mutants contains additional amino acids at the junctions. The YY1 deletion mutants were then cloned into the EcoRI site of pCB6+. pCMVHAYY1, pCMVHAYY1Δ399-414, and pCMVHAYY1Δ334-414 were generated by insertion of a short DNA fragment encoding a start codon followed by a hemagglutinin (HA) epitope between the BglII site in the pCB6+ polylinker and the NcoI site overlapping the ATG codon of the YY1 coding sequence. min-tk-luc consists of nucleotides −32 to +51 of the herpes simplex virus thymidine kinase promoter inserted into XP-2 (23) and has been described previously (22). P5+1tk-luc was constructed by insertion of an oligonucleotide containing the P5+1 sequence from the AAV P5 promoter (4) into the SalI site of min-tk-luc. pRSVlacZ was obtained from I. Bredemeier. GST-TFIIB was a gift of F. Holstege and M. Timmers. GST-TBP was constructed by insertion of the cDNA for human TBP (gift of M. Timmers) into pGEX2T. pGEX-hTAFII55 consists of a fragment from pF:55-11d (obtained from R. Roeder) encoding amino acids 1-257 preceded by a Flag tag in pGEX2T (21). The GST-CBP fusion proteins were a gift of R. Janknecht (24). Antibodies, Western Blotting, and Immunofluorescence-The polyclonal antiserum 263 was generated by immunization of a rabbit with bacterially expressed and purified His-tagged YY1. pDS56HisYY1 was a gift of T. Shenk (4). Affinity purification of the antibodies was performed on a matrix containing purified His-tagged YY1 covalently coupled to CNBr-activated Sepharose 4B. Anti-YY1 C20 was purchased from Santa Cruz Biotechnology. The 12CA5 monoclonal anti-HA antibody was a gift of R. Janknecht. For immunofluorescence, RK13 cells were seeded onto coverslips that were placed in 6-cm tissue culture plates and transfected with 3 µg of the indicated expression constructs. The cells were fixed 24 h later in 3% paraformaldehyde, permeabilized with phosphate-buffered saline containing 0.1% Triton X-100, and blocked in phosphate-buffered saline supplemented with 20% horse serum (blocking buffer) for 30 min. Cells were then incubated with antibodies diluted in blocking buffer (anti-YY1 C20, 1:2000; affinity-purified 263 anti-YY1, 1:2000; control antibodies, 1:2000; and monoclonal anti-HA, 1:20). After extensive washing with phosphate-buffered saline, secondary antibodies (anti-rabbit Cy3 or anti-mouse fluorescein isothiocyanate) were applied in blocking buffer. Nuclei were stained with Hoechst 33258, and coverslips were mounted in Mowiol containing isopropyl gallate. Photographs were taken using a Zeiss Axiophot photomicroscope and Kodak color slide film.
Scanned images were arranged and labeled with Adobe Photoshop. Metabolic Labeling and Immunoprecipitations-For metabolic labeling, cells were washed three times with phosphate-buffered saline and then incubated for 15 min in methionine-free medium containing 10% dialyzed fetal calf serum and 100 µCi/ml [35S]methionine. Cells were either immediately lysed in antibody buffer or chased in medium containing an excess of unlabeled methionine for the indicated times prior to lysis. After sonication and removal of insoluble material by centrifugation, immunoprecipitations were performed as described (26). Immunoprecipitated proteins were separated by SDS-PAGE. Quantification of individual bands was performed on a Fuji phosphorimager. GST Fusion Proteins, in Vitro Transcription/Translation, and GST Pull-downs-For expression of GST-TFIIB, GST-TBP, and GST-TAFII55, the corresponding plasmids were transformed into Escherichia coli strain BL21(DE3)pLysS. 200-ml cultures were grown to a density of A600 = 0.8, induced with 2 mM isopropyl-1-thio-β-D-galactopyranoside, and incubated for an additional 3 h. Cells were harvested by centrifugation, resuspended in 15 ml of buffer A (20 mM Tris-HCl, pH 8.0, 0.5% Nonidet P-40, 10 mM dithiothreitol, 1% aprotinin, and 0.1 mM phenylmethylsulfonyl fluoride), and sonicated, and insoluble material was removed by centrifugation. Supernatants were applied to glutathione-agarose, and bound proteins were eluted in buffer A containing 5 mM glutathione. Eluted fusion proteins were dialyzed against 20 mM Tris-HCl, pH 8.0, 100 mM NaCl, and 10% glycerol. Protein concentrations were estimated in comparison with bovine serum albumin after SDS-PAGE and staining with Coomassie Blue. YY1 deletion mutants were transcribed/translated in vitro using the TNT-coupled T7/reticulocyte lysate system (Promega) in the presence of [35S]methionine. The products were separated by SDS-PAGE and quantitated using a phosphorimager. For GST pull-down assays, 10 µg of each fusion protein was bound to 15 µl of glutathione-agarose and incubated with equal numbers of counts of each mutant in binding buffer (20 mM Hepes, pH 7.5, 100 mM NaCl, 2.5 mM MgCl2, 0.1 mM EDTA, and 0.05% Triton X-100) (12) at 4°C for 90 min. The beads were then washed three times with binding buffer, and bound proteins were analyzed by SDS-PAGE. YY1 Is a Widely Expressed, Stable Protein-To study the YY1 protein, we developed an antiserum against full-length bacterially expressed YY1. This serum (263) reacted specifically with a protein of 68 kDa in all cell lines analyzed as well as with bacterially expressed His-tagged YY1 (Fig. 1 and data not shown). The specificity of the serum was established by performing immunoprecipitation/Western blotting and blocking experiments in combination with a commercially available antiserum (Fig. 1a). YY1 was detected in fibroblasts (NIH3T3, CV1, and RK13), in primary rat embryo fibroblasts, in cells of hematopoietic origin (Jurkat, 70Z/3, Manca, Ramos, and U937), in PC12 pheochromocytoma cells, in HeLa epithelium-like cells, and in the F9 embryonal carcinoma cell line by metabolic labeling with [35S]methionine and immunoprecipitation as well as by immunoblotting (Fig. 1 (b-d) and data not shown). Comparable levels of YY1 were expressed in all cell lines analyzed. To determine the stability of YY1, we performed pulse-chase experiments. Jurkat or F9 cells were pulse-labeled for 15 min with [35S]methionine and chased in excess unlabeled methionine for the times indicated (Fig. 1d).
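For illustration, a half-life of the kind reported below can be estimated from phosphorimager band intensities by fitting a single-exponential decay, I(t) = I0 * exp(-k*t) with t1/2 = ln 2 / k. The intensity values in this sketch are hypothetical, chosen to yield a half-life of roughly 3.5 h; it shows the calculation, not the actual data.

import numpy as np

chase_h = np.array([0.0, 1.0, 2.0, 4.0, 6.0])          # chase times (h)
intensity = np.array([100.0, 82.0, 67.0, 45.0, 30.0])  # hypothetical band signals

# Log-linear least-squares fit: ln I = ln I0 - k*t
slope, intercept = np.polyfit(chase_h, np.log(intensity), 1)
k = -slope
print(f"estimated half-life: {np.log(2) / k:.1f} h")   # ~3.5 h with these values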
To further evaluate YY1 protein expression, we analyzed YY1 levels during differentiation. Similar amounts of protein were detected during retinoic acid/dibutyryl cAMP-induced F9 cell differentiation; during lipopolysaccharide-induced 70Z/3 B cell differentiation; and during 12-O-tetradecanoylphorbol-13-acetate-, retinoic acid-, or vitamin D3-induced U937 differentiation (Fig. 1c and data not shown). In addition, during F9 differentiation, no significant change in the stability of YY1 was observed (Fig. 1d). These findings identify YY1 as a uniformly expressed protein in both growing and differentiating cells.

FIG. 1. YY1 is a constitutively expressed protein. a, to establish the specificity of our YY1 antiserum (263), generated against bacterially expressed and purified His-tagged YY1, we performed immunoprecipitation/Western blot analysis. HeLa whole cell lysates were immunoprecipitated using affinity-purified YY1 antibodies (serum 263-7, α-YY1), unrelated affinity-purified antibodies (control AB), 263 preimmune serum (α-YY1 PI), or purified YY1 antibodies preincubated with GST-YY1 (α-YY1 block). The immunoprecipitates were separated by SDS-PAGE, blotted onto nitrocellulose, and stained with anti-YY1 C20. The positions of the Ig heavy chains (IgH), YY1, and GST-YY1 are indicated. b, whole cell lysates of the different cell types indicated were prepared in antibody buffer, and equal amounts of protein (≈10% of a subconfluent 10-cm tissue culture plate) were separated by SDS-PAGE and blotted onto nitrocellulose. The Western blot was developed with purified YY1 antibodies (serum 263-7). c, F9 embryonal teratocarcinoma cells were differentiated in the presence of retinoic acid and dibutyryl cAMP for the times indicated. Whole cell lysates of equal numbers of cells were analyzed by Western blotting using purified YY1 antibodies (serum 263-7). For comparison, three different amounts of control lysate were loaded. d, Jurkat or undifferentiated or differentiated F9 embryonal carcinoma (EC) cells were labeled for 15 min with [35S]methionine. The labeled cells were then chased in the presence of excess unlabeled methionine. The cells were harvested at the times indicated, lysed in antibody buffer, and immunoprecipitated using purified YY1 antibodies (serum 263-7). For blocking, the antibodies were preincubated with GST-YY1 prior to the addition of lysate (0/bl). The immunoprecipitates were separated by SDS-PAGE, and the proteins were detected by fluorography. The radioactivity of the different bands was quantified with a phosphorimager.

Functional Domains of YY1-To define functional domains in YY1, a series of deletion mutants was generated (Fig. 2). All these proteins were expressed efficiently in COS-7 and RK13 cells (Fig. 3 and data not shown). The DNA binding capacity of YY1 and YY1 mutant proteins overexpressed in COS-7 cells was analyzed in electrophoretic mobility shift assay experiments. As probe, the P5+1 sequence from the AAV P5 promoter (4) was used, which was bound by endogenous YY1 in COS-7 and F9 cells as well as by bacterially expressed His-tagged YY1 (Fig. 4). The specificity of the complex was demonstrated by the ability of purified YY1 antibodies to inhibit binding, whereas unrelated antibodies had no effect (Fig. 4). Furthermore, binding to P5+1 was competed by specific (but not by nonspecific) oligonucleotides (data not shown). All the mutant YY1 proteins with deletions in the zinc finger region were unable to bind to the P5+1 oligonucleotide (Fig. 4). These findings show that all four zinc fingers are essential for the specific binding of YY1 to DNA.

Immunofluorescent staining of control and transiently transfected RK13 cells was used to determine the subcellular localization of YY1 and YY1 mutant proteins. Endogenous YY1 was detected exclusively in the cell nucleus using affinity-purified 263-7 antibodies (Fig. 5a). Exogenously expressed YY1 was stained with a commercially available anti-peptide serum recognizing the C terminus of YY1, since it recognizes a defined epitope; its reactivity was too low to stain the endogenous protein under the conditions employed. Mutants with deletions of the C terminus (YY1Δ399-414 and YY1Δ334-414) were tagged with an HA epitope and detected with a monoclonal antibody against HA. All the mutant proteins either with deletions of N-terminal regions or with deletions affecting either the first or fourth zinc finger showed nuclear localization (Fig. 5 (b and c) and data not shown). A deletion of the first zinc finger and the two Cys residues involved in coordinating Zn2+ of the second zinc finger (YY1Δ296-331) distributed mainly to the nucleus (Fig. 5b). Deletion of the entire C terminus, including part of the second and the entire third and fourth zinc fingers, resulted in a protein (YY1Δ334-414) with predominant cytoplasmic staining (Fig. 5c). These data suggest that the nuclear localization signal of YY1 is contained within the region encoding the second and third zinc fingers, as summarized in Fig. 2.

YY1 Shows a Bipartite Transactivation Domain-YY1 has been implicated in both positive and negative regulation of gene transcription. To analyze the domains in YY1 responsible for these functions, the gene regulatory activities of the YY1 mutants were tested. Reporter constructs were made containing a minimal thymidine kinase promoter and the luciferase gene, with or without the P5+1 YY1-binding site (4). First, the role of the P5+1 binding site was determined in three different cell lines. Whereas in CV1 and RK13 cells the presence of a P5+1 site led to an increase in reporter activity, a slight decrease was observed in NIH3T3 cells (Fig. 6a). Expression of exogenous YY1 resulted in a binding site-dependent activation of the reporter construct in all three cell lines. In addition, the activation was dose-dependent in the range of 1 ng to 1 μg of pCMVYY1 (Fig. 6b and data not shown). Under these conditions, we have not observed any repression. However, reduced activation was seen when pCMVYY1 concentrations of 2 μg or higher were used, most likely due to squelching as a result of highly overexpressed YY1.

Next, we determined the transactivating potential of the different YY1 mutant proteins. These analyses revealed that the YY1 mutant proteins can be divided into three classes. Deletion of the His cluster (YY1Δ69-85) or the Gly/Ala-rich region (YY1Δ154-199) did not affect the transactivating activity of the resulting mutant proteins compared with wild-type YY1 (Fig. 7). Proteins with deletions of either of the two acidic regions (YY1Δ2-62 and YY1Δ92-153) or of the spacer region between the Gly/Ala-rich and DNA-binding domains (YY1Δ199-273) showed transcriptional activity that was reduced by 50% compared with YY1 (Fig. 7). Deletions including both acidic domains (YY1Δ2-150, YY1Δ2-197, and YY1Δ2-273) were inactive in stimulating transcription of the P5+1-tk-luc reporter construct (Fig. 7 and data not shown). Similarly, all the mutant proteins with deletions in the C terminus inhibiting DNA binding (YY1Δ262-299, YY1Δ296-331, YY1Δ334-414, and YY1Δ399-414) were unable to stimulate the expression of the reporter (Fig. 7 and data not shown). None of the YY1 mutant proteins displayed an increased transactivating activity as compared with wild-type YY1, suggesting that no single domain, as deleted in our panel of mutants, was important for repression, in addition to the previously identified C terminus. These differences in transactivation were the result of deleting functional domains and were not due to differences in protein expression (see Fig. 3). Similar expression levels were found for all the mutants in comparison with wild-type YY1, with the exception of YY1Δ2-273, which showed consistently reduced steady-state levels. These findings suggest that YY1 contains a bipartite transactivation domain composed of the two acidic regions at the N terminus. In addition, the spacer region in the middle of the protein appears to have some modulatory activity (for summary, see Fig. 2).

FIG. 3. Expression of the YY1 mutant proteins. RK13 cells were cotransfected with constructs expressing wild-type YY1 (wt), the indicated mutants, or control vector, together with a construct expressing β-galactosidase and P5+1-tk-luc, exactly as for the reporter gene assays. Whole cell lysates were prepared in antibody buffer, and the expression of the different proteins was analyzed by Western blotting. The blot on the left was developed using the YY1 C20 antibodies, and the one on the right using purified YY1 antibodies (serum 263-7). Wild-type YY1 is indicated, as well as a nonspecific band cross-reacting with YY1 C20 antibodies (*).

YY1 Interacts with Different Components of the Basal Transcriptional Machinery-A number of proteins have been identified that interact with YY1, several of which are intimately involved in polymerase II transcription, including TBP, TFIIB, TAFII55, and p300 (3, 5, 21). We were interested to test whether these interactions are mediated by the identified transcriptional activation domains. In GST pull-down assays, we observed that bacterially expressed YY1 was able to interact efficiently with GST-TFIIB and GST-TBP and to a lower degree with GST-TAFII55, whereas no binding to GST alone was observed (Fig. 8a). Since YY1 interaction with p300 has been shown, we tested whether YY1 can also bind to CBP. Binding was detected to GST-CBP-(451-721), the CREB-binding domain, and to GST-CBP-(1891-2175) (Fig. 8b). The interaction with CBP-(451-721) was weaker than with GST-TFIIB or GST-TBP (Fig. 8a). To define interaction domains, YY1 and YY1 mutant proteins were synthesized in vitro (Fig. 8c, INPUT), and their binding to GST-TBP, GST-TFIIB, GST-TAFII55, and GST-CBP-(451-721) was determined (Fig. 8c, BOUND). Whereas wild-type YY1 and several of the mutant proteins bound to all four GST fusion proteins, but not to GST alone, deletion of part of the zinc finger domain (YY1Δ296-331 and YY1Δ334-414) reduced or abolished binding, respectively. In addition, YY1Δ154-199, in which the Gly/Ala-rich domain is removed, bound consistently less well to all four fusion proteins. These findings indicate that the DNA-binding and Gly/Ala-rich domains are important for the four different protein-protein interactions analyzed. TBP, TFIIB, TAFII55, and CBP did not require the two acidic transactivation domains for interaction.
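Statements such as "bound consistently less well" rest on comparing phosphorimager counts of bound versus input protein, since the inputs were equalized by [35S] counts. A hypothetical sketch of that normalization (the mutant names follow Fig. 2; all counts are invented, not data from this study):

```python
# Hypothetical sketch of turning GST pull-down band intensities into
# relative-binding values: bound counts are divided by input counts,
# then each mutant is expressed relative to wild-type YY1. Invented data.

def relative_binding(bound, input_counts, wt_bound, wt_input):
    return (bound / input_counts) / (wt_bound / wt_input)

wt = (5200, 20000)  # (bound, input) counts for wild-type YY1, hypothetical
mutants = {
    "YY1d154-199": (2100, 20000),  # Gly/Ala-rich deletion: reduced binding
    "YY1d296-331": (900, 20000),   # partial zinc finger deletion: strongly reduced
    "YY1d334-414": (150, 20000),   # C-terminal deletion: essentially abolished
}

for name, (bound, inp) in mutants.items():
    rb = relative_binding(bound, inp, *wt)
    print(f"{name}: {rb:.2f} of wild-type binding")
```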
DISCUSSION

Several lines of evidence suggest that YY1 is a multifunctional transcriptional regulator, activating or repressing transcription depending on both the promoter and the cellular context. YY1 has been detected in a number of different tissues and cell types. Our analyses further support the concept that YY1 is a ubiquitously expressed protein. In all cell lines tested, comparable levels of YY1 were detected as determined by Western blotting (Fig. 1 and data not shown). In addition, no changes in the level of expression were observed in differentiating F9, 70Z/3, or U937 cells (Fig. 1 and data not shown). The finding in F9 cells is in agreement with previously published data showing constitutive YY1 mRNA expression during retinoic acid-induced F9 differentiation (15). Although YY1 is expressed constitutively during F9 differentiation, indirect regulation of YY1 activity has been suggested to occur through CpG methylation of YY1-binding sites (20, 27). The accessibility of YY1 to its cognate binding site also appears to be regulated in the context of the 3′ enhancer (28). Early in B cell development, until the activated B cell stage, the YY1-binding site in the 3′ enhancer is covered by a nucleosome. However, the YY1 site becomes accessible in plasma cells, paralleling increased transcription from the locus. Interestingly, the time of appearance of a YY1 footprint in the 3′ enhancer suggests, in contrast to an earlier study (17), a positive role for YY1 in μ-chain expression (28). While in differentiating F9 cells little difference in the DNA binding capacity of YY1 was seen,³ a decrease in YY1 binding activity was observed during differentiation of chicken embryonic myoblasts (29). Presently, it is unclear whether this reflects a down-regulation of the protein, modulation of the DNA binding activity, or altered association with the nuclear matrix that may result in differential extractability. Further work will be required to determine whether YY1-DNA binding is regulated in other differentiation systems. In addition to the data described above, we could not observe any difference in YY1 protein expression in quiescent fibroblasts compared with serum-stimulated cells or exponentially growing cells (data not shown). This is in contrast to a recent study showing reduced YY1 mRNA expression in quiescent NIH3T3 cells as compared with growing cells (30). Since in this latter study protein expression was not analyzed, direct comparison with our findings is currently not possible.

FIG. 4. DNA binding of YY1 and YY1 mutant proteins. Whole cell extracts were prepared in F-buffer from undifferentiated F9 embryonal carcinoma (EC) cells (left panel) or from COS-7 cells transfected with plasmids expressing the indicated proteins (right panel). Electrophoretic mobility shift assays were performed using a labeled P5+1 oligonucleotide and endogenous YY1, exogenous YY1 and YY1 mutant proteins, and bacterially expressed His-tagged YY1 (H6-YY1). The specificity of the DNA-protein complex was analyzed using specific antibodies. F9 cell extracts were incubated with affinity-purified YY1 antibodies (serum 263-7) or with affinity-purified c-Myb antibodies (α-YY1 and α-Myb, respectively). wt, wild-type YY1.
In summary, constitutive expression of YY1 was observed under most cellular conditions. Therefore, one could consider YY1 a permanently present "basal" transcription factor whose activity may be controlled exclusively by secondary events such as competition with other transcription factors (31, 32), effects on the binding site (20, 27, 28), or binding by cell cycle- or differentiation-regulated factors such as p300 or CBP (Ref. 3 and this study). Using lysates of [32P]orthophosphate-labeled cells and specific immunoprecipitation, we found YY1 to be phosphorylated (data not shown), as are many other transcription factors (33). Since altered phosphorylation is frequently associated with functional changes in the activities of transcription factors, we analyzed YY1 phosphorylation under different cellular conditions. At present, we have not found any differences in the phosphorylation pattern of YY1 during growth or differentiation by peptide mapping.²

To transport proteins into the cell nucleus, at least two potential mechanisms can be envisaged (34). First, the protein contains a nuclear localization signal and thereby interacts directly with the nuclear import machinery. Second, the protein is cotransported with a nuclear localization signal-containing protein. Both possibilities appear conceivable for YY1. Whereas no obvious nuclear localization signal is present within the region of the second and third zinc fingers, which are important for nuclear localization (Fig. 5), a number of basic residues have been noted that may function not only in DNA binding but also in nuclear targeting. Alternatively, this region may interact with B23, which has been identified as a YY1-interacting protein in a yeast two-hybrid screen (35). Since B23 is a protein shuttling between the nuclear and cytoplasmic compartments, possibly transporting proteins across the nuclear envelope (36), it may be involved in the accumulation of YY1 in the nucleus.

FIG. 5. YY1 is nuclear-localized. To determine the subcellular localization of YY1 and YY1 mutant proteins, untreated or transiently transfected RK13 cells were fixed in paraformaldehyde, permeabilized with Triton X-100, and stained as outlined below. a, RK13 cells were stained with affinity-purified YY1 antibodies (serum 263-7) or with control antibodies (Ab) as indicated (left panels). The DNA was stained using Hoechst 33258 (right panels). b, RK13 cells were transfected with plasmids expressing the indicated YY1 or YY1 mutant proteins and stained with YY1 C20 antibodies (left panels). The DNA was stained using Hoechst 33258 (right panels). c, RK13 cells were transfected with plasmids expressing the indicated YY1 or YY1 mutant proteins and stained with YY1 C20 antibodies (left panel) or with anti-HA antibodies (right panels). The DNA was stained using Hoechst 33258 (middle panels). wt, wild-type YY1.

In a previous study, placement of the YY1-binding site from the initiation site of the AAV P5 promoter (P5+1; see Ref. 4) in front of a minimal promoter resulted in a repression of transcription. Using a similar construct, we also observed a small repressive effect in NIH3T3 cells (Fig. 6). However, in CV1 and RK13 cells, the addition of the P5+1 site resulted in a significant activation of the minimal thymidine kinase promoter, although equal amounts of endogenous YY1 are present in all three cell lines (Fig. 1).
Cotransfection of YY1 expression plasmids in the range of 1 ng to 1 μg of DNA activated the P5+1-tk-luc reporter gene in all three cell lines, indicating that YY1 by itself is an activator of transcription. This is supported by findings from other investigators who have observed an activating effect of YY1 overexpression in a variety of systems (9, 18-20, 37, 38). The repressive effect of large amounts of YY1 expression vector observed previously (18, 20) is probably due to squelching, an effect also caused by other activators when overexpressed in large amounts. The moderate repressive effect of the P5+1 site in NIH3T3 cells could then be caused by a protein different from YY1, although YY1 is the predominant protein observed in in vitro band shift reactions.

To characterize the protein further, we constructed an extensive panel of YY1 deletion mutants (Fig. 2). Previous studies involving large deletions have shown that zinc fingers 2, 3, and 4 are required for DNA binding (16). We extend this observation by showing that a mutation that disrupts zinc finger 1 also abolishes binding to DNA in a band shift assay (Fig. 4), demonstrating a requirement for all four zinc fingers for specific DNA binding. Our data define three regions of YY1, in addition to the zinc finger domain, that are important in the regulation of specific transactivation (Fig. 7). Whereas the two acidic domains (YY1Δ2-62 and YY1Δ92-153) each contribute about half of the transactivating potential, the spacer region is also important for full activity but does not have transactivating activity on its own. The notion that the N-terminal region of YY1 is involved in transactivation has been suggested by the analysis of YY1 deletion mutants on the c-myc promoter (19). These findings were further confirmed by the analysis of Gal4-YY1 fusion proteins, implicating the N-terminal region of YY1 in transactivation (8, 18-20). Detailed analysis of such Gal4-YY1 fusion proteins revealed an important role for the first acidic domain in transcriptional activity but showed little significance of the second acidic domain (18). This is in contrast to our findings, which demonstrated equal importance of both acidic domains. In addition, no specific function for the spacer region could be determined using Gal4 fusion proteins. This region of YY1 may be required for correct folding and presentation of the two transactivation domains. Together, the mutants analyzed here allow us to delineate a more detailed map of the functional domains of YY1 in a context not relying on fusion proteins.

A number of proteins involved in gene transcription that are frequently targeted by transactivation domains have been shown previously to interact with YY1, namely TFIIB, TBP, TAFII55, and the coactivator p300 (3, 5, 21). Therefore, we asked whether one or more of these factors could bind directly to the domains in YY1 that we have identified as important for transactivation.

FIG. 6. The P5+1 YY1-binding site mediates cell type-specific transactivation. a, to assess the influence of the P5+1 YY1-binding site on reporter gene transcription, CV1, NIH3T3, and RK13 cells were transiently transfected with min-tk-luc (2 μg) or with P5+1-tk-luc (2 μg) in the absence of pCMVYY1. The fold induction of P5+1-tk-luc relative to min-tk-luc is displayed. b, NIH3T3 and RK13 cells were transiently transfected with P5+1-tk-luc (2 μg) in the presence of increasing amounts of pCMVYY1 as indicated.

FIG. 7. Domains in YY1 responsible for transactivation. To determine the transactivating activity of the YY1 mutant proteins relative to wild-type YY1 (wt), RK13 cells were transiently transfected with P5+1-tk-luc (2 μg) in the presence of increasing amounts of the indicated plasmids expressing wild-type YY1 or YY1 mutant proteins.

First, we confirmed the direct binding of YY1 to TFIIB and TAFII55 and demonstrated an interaction with TBP (Fig. 8), as has been suggested previously (4). Second, we showed that a C-terminal domain of CBP, a p300-related protein (39, 40), also interacted with YY1. However, we observed an even stronger interaction of YY1 with the CREB-binding domain of CBP (Fig. 8). The corresponding domain in p300 may have been disrupted in the GST-p300 fusion proteins used previously, possibly explaining the lack of binding to this region (3). Surprisingly, none of these interaction partners bound to a domain involved in transactivation (Fig. 8). Instead, all displayed similar patterns of binding, requiring the core of the YY1 DNA-binding domain and the Gly/Ala-rich domain. It is possible that the interactions with TFIIB, TBP, TAFII55, or CBP/p300 may be relevant for repression rather than activation by YY1. In addition, the interaction with TFIIB may be important for the function of YY1 as an initiator-binding protein (5). Thus, it remains open which protein(s) the transactivation domains of YY1 contact.

The picture that is emerging of YY1 in transcriptional regulation is quite complex. It can bind to enhancer and initiator sequences, can contact several different components involved in RNA polymerase II transcription, possesses two transactivation domains of unknown specificity, and can be part of a large RNA polymerase II complex. Recent evidence suggests that transcriptional regulators may recruit the RNA polymerase II holoenzyme, which has been estimated to consist of at least 50 polypeptides (41). Since a single contact of a transcriptional activator with a component of the holoenzyme appears to be sufficient for activation of gene expression (42), multiple possibilities exist for interaction, and it will now be important to define the contact(s) of YY1 relevant for activation. Also, the contribution of this protein to the other proposed functions and the roles of the identified interaction partners await further detailed analysis.
Using Case-Based Science Scenarios to Analyze Preservice Teachers' Analytical Thinking Skills

Any science teacher must first acquire analytical thinking skills in order to give their students the ability to think analytically. Therefore, the candidacy period is important for teachers to develop this skill and transform it into professional knowledge. Based on this idea, the current research aims to determine the ability of third-grade preservice science teachers to use analytical thinking skills. An Analytical Thinking Test was used in this research, which was conducted with the survey method. The test consists of twenty case-based science scenarios drawn from four different learning fields. These scenarios were designed according to the analytical thinking skill dimensions of Marzano's Taxonomy. Preservice science teachers (N = 158) from two public universities participated in the study. It was determined that the majority of preservice science teachers used their analytical thinking skills weakly. It was revealed that, while solving the scenarios, preservice science teachers had difficulties in classification, specification, error analysis, generalization, and comparison, in that order from most to least, according to Marzano's Taxonomy. On the basis of these results, it is recommended that science educators develop designs to improve candidates' analytical thinking skills in the courses they conduct. In addition, science educators should pay attention to development in the dimensions of classification and specification by considering the alternative conceptions of the preservice science teachers.

The type of people that societies need varies over time. Therefore, the definition of a qualified person varies according to the period. In particular, since the second half of the 19th century, it has been understood in business circles that skill is more important than knowledge (Inkeles, 1969). It is a necessity to raise individuals who can adapt to the various jobs of this age and who have high-level thinking skills in this century, in which we are experiencing the Industry 4.0 revolution (Ichsan et al., 2021). All the qualities sought in the current era are defined in the skills of the 21st century. Therefore, it is known that all developed countries, including those of Europe and the USA, have revised their curricula in order to enable their students to gain 21st century skills for qualified work and qualified earnings (Green, 1986). According to a research report on the necessity of 21st century skills, carried out with the participation of many institutions in the USA, good education increases productivity in the workplace by 15-20 percent on average, while it increases the earnings of individuals by about 77 percent (Stuart, 1999). In general, these skills include collaboration, communication, digital literacy, citizenship, problem-solving, critical-analytical thinking, creativity, and productivity (Voogt & Roblin, 2012). Having analytical thinking skills, one of the 21st century skills, is among the general competencies individuals should have (Prawita et al., 2019). Since individuals with this skill do not have difficulty solving the problems they encounter in both their daily life and their business life (Eckman & Frey, 2005), it is necessary to develop the analytical thinking skills of individuals who are about to start their profession (Ratnaningsih, 2013).
There is an important relationship between students' analytical thinking skills and their academic success (Bozkurt, 2022); therefore, analytical thinking affects students' success in many areas (Hyerle, 2008; Sebetci & Aksu, 2014). For example, analytical thinking skills are directly proportional to the development of scientific process skills (Irwanto et al., 2017) and creative thinking skills (Lestari et al., 2018; Lubart et al., 2013). Due to the importance of analytical thinking in school, daily, and business life, analytical thinking skills have been among the skills expected to be acquired by secondary school students in the Science Curriculum in Turkey since 2013 (Ministry of National Education [MoNE], 2018). However, according to research, the level of analytical thinking skills of students at many levels, from secondary school students (Bozkurt, 2022; Mete, 2021) to university students (Akkuş-Çakır & Senemoğlu, 2016), is medium or low. Teachers have the most significant role in helping students acquire analytical thinking skills (Ennis, 1985). The fact that teachers do not give enough importance to such thinking skills in their classrooms contributes to students' low skill levels (Tanujaya, 2016). Teachers need to develop instructional designs more compatible with problem-solving teaching methods to impart this skill (Chinedu & Olabiyi, 2015; Ramdiah et al., 2018) and to use such designs over the long term (Siribunnam & Tayraukham, 2009). However, it is a well-known fact that a teacher who wants to teach any skill must first have that skill. It has been determined that preservice teachers (Kala & Kirman-Bilgin, 2020) and even in-service teachers (Anılan & Gezer, 2020) do not have the professional competencies to teach their students analytical thinking skills. Knowing how much teachers use this skill during candidacy, before starting the profession, is essential, because preservice teachers gain most of their professional knowledge and skills during their candidacy. A candidate who does not gain analytical thinking skills during the candidacy period may have difficulty instilling this skill in his/her students during his/her career. Therefore, researching the analytical thinking skills of preservice science teachers is important for contributing to the relevant literature and structuring preservice teacher education programs. To examine preservice teachers' analytical thinking skills in depth, the characteristics of this thinking skill should first be well known.

Theoretical Background

Analytical thinking is a high-level thinking skill (Ichsan et al., 2021; Toledo & Dubas, 2016) and is in critical interaction with other thinking skills; it is associated with synthetic, systematic, and creative thinking (Amer, 2005). In the literature, analytical thinking is mostly treated within the framework of the concept of analysis. Amer (2005) defines analytical thinking as dismantling a situation, thinking about an idea in a distinctive way, analyzing data to solve problems, and remembering and using information. Dewey (2007), on the other hand, holds that analytical thinking means first examining separately the parts that make up an object and then reasoning about how the parts interact with each other to make the system work.
According to Sternberg (2002, 2006), analytical thinking is (a) to break down a problem into parts and make sense of these parts, (b) to explain the operation of a system, the reasons why something happens, or the steps for solving a problem, (c) to compare two or more situations, and (d) to evaluate and criticize the properties of something. Although the general features of analytical thinking are visible in these definitions, it is of foremost importance to know the systematic cognitive processes (indicators) of analytical thinking so that teachers can recognize this skill and integrate it into instructional designs. Chronologically, one of the sources of the cognitive processes of analytical thinking is the analysis phase of Bloom's Taxonomy. According to Bloom et al. (1956), analytical thinking takes place in three interrelated cognitive processes: item analysis, relationship analysis, and analysis of organizational principles. Behn and Vaupel (1976) have stated that analytical thinking takes place in five stages: thinking, subdividing, simplifying, specifying, and rethinking. An individual who implements these five stages in the thinking process has acquired the ability to think analytically. Anderson et al. (2001), who revised Bloom's Taxonomy, state that in the process of analytical thinking the individual differentiates the important parts of a message, organizes the ways in which the parts of the message are arranged, and characterizes the underlying purpose of the message. Therefore, the authors say that analytical thinking occurs in three cognitive processes: differentiating, organizing, and attributing. Marzano mentions five cognitive processes for analytical thinking: comparison, classification, error analysis, generalization, and specification (Marzano, 2001; Marzano & Kendall, 2007). Individuals who can systematically perform these five processes in their working memory are accepted as thinking analytically. Unlike in other taxonomies, analysis in Marzano's Taxonomy means more than just the illumination of structure: in this taxonomy, the individual can think analytically and produce new information that he does not already have (Marzano & Kendall, 2007). In the taxonomy proposed by Ichsan et al. (2021), analysis is placed within problem-solving, hierarchically just below creation. As seen in these thinking taxonomies, analytical thinking is one of the high-level thinking skills. One of the courses in which analytical thinking skills can be acquired most easily is the science course (Tsalapatas, 2015). Since science is a course intertwined with life, students' analytical thinking skills can be developed very easily while solving the problems in this course. However, teachers must first have analytical thinking skills themselves if students are to overcome both science and daily life problems. When the studies are examined, it is seen that students' analytical thinking skills are low, despite the significant importance of analytical thinking in business and daily life (Gunawardena & Wilson, 2021; Husain et al., 2012; Irwanto et al., 2017; Thaneerananon et al., 2016). Although determining the level of a thinking skill is particularly important, determining which element of the thinking process is problematic is more important for developing instructional designs to eliminate existing problems. As mentioned above, some scientific studies examine the subcognitive processes of analytical thinking.
In this study, data collection tools were developed based on Marzano's analytical thinking categories, because the analysis category in Marzano's Taxonomy includes elements from at least three levels of Bloom's Taxonomy, namely "analysis, synthesis and evaluation" (Marzano & Kendall, 2007). In this respect, Marzano's analytical thinking categories are thought to be more suitable for solving complex daily life problems. There are limited studies in the literature analyzing analytical thinking based on Marzano's Taxonomy (Fakhrurrazi et al., 2019; Yulina et al., 2019), and multiple-choice tests were used in those studies. In multiple-choice tests, since the student merely marks one of the options provided, only limited information about the individual's thinking processes can be reached. Case-based science scenarios were used in this research. A limited number of studies have been found in the literature in which case-based scenarios are used to improve students' inquiry skills (Cresswell & Loughlin, 2017) or to measure only their analytical thinking skills (Akkuş-Çakır & Senemoğlu, 2016; Olça, 2015). Case-based science scenarios were preferred in this study both because the student produces the knowledge directly and because this format eliminates the chance factor of multiple-choice questions. In addition, since these scenarios are remarkably similar to the cases an individual may encounter in daily or business life, it is thought that more reliable results will be obtained on whether he/she can solve a complex case in real life by thinking analytically. In this context, the aim of the study is first to determine the proficiency of third-grade preservice science teachers in analytical thinking skills and then to analyze their analytical thinking. Accordingly, it will also be possible to determine which cognitive process is decisive in preservice science teachers' analytical thinking, or why they cannot think analytically.

Method

This research was carried out with the survey method. Survey studies are a type of research carried out to determine the current situation; here, the ability of preservice science teachers to use analytical thinking skills is investigated. The survey method prepares the necessary infrastructure for special case studies and provides the environment for the creation of the problem that will be investigated (Ruel et al., 2015). It is thought that the results of this research will form the basis of many studies to come.

Participants

Third-year preservice science teachers (N = 158) studying at two state universities in Turkey participated in this research. The reason for working with these participants is that the same teacher training program is implemented in all education faculties in our country: preservice science teachers go through the same education process, except for a few elective courses, even if they are at different universities. This research is a product of an ongoing project, and the researchers involved in the project work at two different universities; therefore, they preferred convenience sampling while determining the participants. Sixty-nine preservice teachers from one of these universities and eighty-nine from the other participated in the research.
Since the research does not aim to compare the competencies of the universities in providing preservice teachers with analytical thinking skills, the data obtained from the participants are not presented separately. It was decided to conduct the research with third-year preservice science teachers, since they have taken all the field courses offered in the first three years of the Science Teacher Training Program at universities and therefore have the field knowledge necessary to analyze a scenario. Participants had not taken any analytical thinking training courses before the research.

Data Collection Tools

The researchers of this study developed the Analytical Thinking Test (ATT) as part of the research to reveal the analytical thinking skills of preservice science teachers. Since solving the scenarios in the ATT takes time and cannot be done in a single sitting, the test was divided into four worksheets. The scenarios in these worksheets were developed by considering the achievements in four different learning fields in the secondary school Science Curriculum (SC). Therefore, the scenarios were designed to cover the four learning fields (Living Beings and Life, Physical Events, Matter and Change, Earth and Universe) and Marzano's five analytical thinking dimensions (comparison, classification, error analysis, specification, and generalization). Each worksheet contains five scenarios, one from each of Marzano's analytical thinking categories. Therefore, preservice science teachers solved twenty case-based analytical scenarios within the scope of this research. The features of the developed scenarios are shown in Table 1. Examples of the developed case-based science scenarios, and the considerations taken into account while developing them, are shown in Table 2. The scenarios developed within the scope of Table 2 were designed by the researchers and subjected to validity studies by two science educators. Following this feedback, the revised questions were piloted with twenty preservice science teachers. The reliability studies were completed within the framework of the data obtained from senior preservice science teachers, and the ATT was finalized. Since the ATT consists of open-ended questions, a reliability coefficient was not calculated. However, the researchers of this study examined the answers given by the preservice science teachers and checked how many of the expected answers were given.

Data Collection Process

The worksheets were administered at separate times: the questions from two of the learning fields were administered on one day, the questions from the other two learning fields the next day, and the data were thus collected. There was no time limitation for the preservice science teachers while solving the scenarios in the worksheets. The earliest submission was completed in 45 minutes, and the latest in 61 minutes.

Data Analysis

The data obtained from the ATT were analyzed on the basis of the criteria in Table 3, obtained by adapting the classification used by Marek (1986).
Table 3. The Analysis of the Data Obtained from the ATT (category; content; score)

- Complete Analytical Thinking (score 3): An answer that includes scientifically correct analytical thinking at the particle level: detecting the data related to the given problem, dividing the data into elements, and processing and solving the problem by applying the dimensions of analytical thinking to those elements.
- Partial Analytical Thinking (score 2): An answer that indicates some of the ways of thinking analytically at a macroscopic level, or that is partly correct.
- Analytical Thinking with Alternative Concepts (score 1): Analytical thinking with alternative concepts that are not consistent with scientific knowledge.
- Inability to Think Analytically (score 0): Answers like "I don't know" and meaningless answers.
- No answer (score 0): No response.

The categories in Table 3 are scored in order to calculate the participants' average scores according to the dimensions of analytical thinking skills and the learning fields, and to interpret more deeply how much the candidates can use their analytical thinking skills. When the data obtained from the ATT are scored within the scope of Table 3, a candidate receives a maximum of 60 points and a minimum of 0 points from the test. To interpret how much the candidates use this skill in general, the evaluation approach proposed by Kala (2019) was adopted; this form of evaluation is shown in Table 4. According to Kala (2019), out of 1 question an individual's analytical thinking level is coded A (analytical thinking skills are at a level that needs to be improved) between 0-0.99 points, B (analytical thinking skills are weakly acceptable) between 1-1.99 points, C (analytical thinking skills are moderately acceptable) between 2-2.59 points, and D (analytical thinking skills are well acceptable) between 2.6-3 points. The ATT has twenty scenarios; when the coefficients proposed by Kala (2019) are multiplied by twenty, the score intervals in Table 4 and the analytical thinking levels that correspond to these intervals appear.

Table 4. The Classification Used in the Analysis of the Data Obtained from the ATT
- A (needs to be improved): 0-19.8 points
- B (weakly acceptable): 20-39.8 points
- C (moderately acceptable): 40-51.8 points
- D (well acceptable): 52-60 points

Ethical Procedures

Ethical approval and written permission were obtained from the Kafkas University Social and Human Sciences Ethics Committee with the decision dated 06.09.2017 and numbered 05/01. The research was carried out following ethical rules at every stage. Participation of the candidates in the research took place on a voluntary basis.

Results

The findings obtained from the ATT, used to reveal the preservice science teachers' use of analytical thinking skills, are demonstrated in Figure 1. When Figure 1 is examined, it is noteworthy that the ability of ten preservice science teachers to use the relevant skill within the scope of the ATT is at a level that needs to be improved. There are 144 preservice science teachers who can use analytical thinking skills at a weakly acceptable level and four who can use them at a moderately acceptable level. It is noteworthy that no preservice science teacher can use them at a well acceptable level. The general test averages of the candidates according to the dimensions of analytical thinking are as in Figure 2.
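Before turning to Figure 2, note that the scoring scheme in Tables 3 and 4 reduces to simple arithmetic, which the following Python sketch makes explicit. The scenario scores in the example are hypothetical; the cut-offs are Kala's (2019) per-question coefficients multiplied by twenty, as described above.

```python
# Sketch of ATT scoring: each of the 20 scenarios is scored 0-3 per
# Table 3, giving a 0-60 total, which Table 4 maps to a level A-D.

CUTOFFS = [                       # (minimum total, level), descending
    (20 * 2.6, "D: well acceptable"),
    (20 * 2.0, "C: moderately acceptable"),
    (20 * 1.0, "B: weakly acceptable"),
    (0.0,      "A: needs to be improved"),
]

def att_level(scenario_scores):
    assert len(scenario_scores) == 20 and all(0 <= s <= 3 for s in scenario_scores)
    total = sum(scenario_scores)
    for threshold, label in CUTOFFS:
        if total >= threshold:
            return total, label

# hypothetical candidate: mostly partial/alternative-concept answers
scores = [1, 2, 1, 0, 1, 2, 1, 1, 0, 1, 2, 1, 1, 1, 0, 2, 1, 1, 1, 2]
print(att_level(scores))  # -> (22, 'B: weakly acceptable')
```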
Figure 2. The Findings of General Test Averages According to the Dimensions of Analytical Thinking

When Figure 2 is examined, it is seen that the test average (1.66) obtained from the questions in the comparison dimension of the ATT, across all preservice science teachers participating in the research, is higher than that of the other dimensions. It is noteworthy that the test averages obtained from the classification (1.04) and specification (1.08) dimensions are lower than those of the other dimensions. The overall test averages obtained according to the learning fields of the candidates are shown in Figure 3.

Figure 3. The Findings of General Test Averages According to Learning Fields

When Figure 3 is examined, it is seen that the preservice science teachers' test average in the Earth and Universe learning field (1.22) is lower than in the other fields, while the test average obtained from the Living Beings and Life learning field (1.48) is higher than in the other fields. The alternative concepts that emerged within the framework of the ATT are stated in Table 5.

Table 5. Alternative Concepts Identified in the ATT (question; alternative concept; frequency f)

Question 1:
- "Since the density of the water is high, the upper surface freezes; since the density of the olive oil is low, it freezes from the bottom." (f = 33)
- "The water is frozen because it is pure." (f = 2)
- "Dense substances freeze with more difficulty." (f = 2)
- "The surface of the water freezes because of the specific heat difference." (f = 1)

Question 4:
- "The water droplets in the laundry freeze and separate from the laundry, so the laundry dries."
- "Horses may have died of exhaustion because they have too much muscle." (f = 4)
- "The horses may have died because of carbon dioxide in their bodies." (f = 3)
- "Horses die of exhaustion because they do not convert lactic acid into glucose." (f = 1)

Question 10:
- "The deep cut may have devastated the veins." (f = 4)
- "In the first case, it reduces pain as the blood clots." (f = 1)

Question 12:
- "The distance the laser beam travels in space is too great." (f = 2)
- "The laser beam travels at the speed of light; the lantern light is slower." (f = 1)

Question 16:
- "If there were no axial tilt, the seasons would be reversed." (f = 1)

When Table 5 is examined, it is seen that preservice science teachers mostly have alternative concepts in the Matter and Change and the Living Beings and Life learning fields. Within the scope of question 1, which belongs to the specification dimension, more alternative concepts were identified in the Matter and Change learning field than for the other questions.

Discussion and Conclusion

The extent to which secondary school students can use or acquire analytical thinking skills in science lessons is related to how much science teachers include activities that will enable students to think analytically in their learning environments (Ichsan et al., 2021; Tanujaya, 2016). A science teacher is expected to have analytical thinking skills in order to design such learning environments (Ennis, 1985). Teachers need to gain this skill in the process of preservice training, combine it with professional teaching knowledge and skills, and improve themselves with supportive training while performing their profession. Based on this idea, the current research aims to reveal how well preservice science teachers use analytical thinking skills. The data obtained from the ATT used within the research scope show that most preservice science teachers can only weakly use their analytical thinking skills (Figure 1). This may be because the preservice science teachers had not been trained to develop these skills at any point in their education prior to the research.
Indeed, analytical thinking skills have been included in our country's curriculum, and only at the secondary school level, since 2013. Since students do not receive an education aimed at gaining analytical thinking skills, they have problems interpreting a non-routine problem or a socioscientific situation. Many studies conducted in our country have determined that students have more difficulty solving conceptual problems based on interpretation than operational problems (Bekdemir et al., 2010; Kaya & Keşan, 2012). The preservice science teachers here both struggled to think analytically and had difficulties interpreting conceptual questions; as a result, their test averages were low. To sum up, the preservice science teachers participating in the research have not gone through a training process focused on improving their analytical thinking skills. The fact that they have not taken a vocational course for this skill during the candidacy process can also be seen as one of the reasons for these results. However, higher education institutions are required to produce graduates with analytical thinking skills (Kwok, 2018). Another result obtained from the ATT is that the test averages of preservice science teachers are low across the dimensions of analytical thinking skills: the preservice science teachers had more difficulty in classifying and specifying the data compared to the other dimensions, and less difficulty in making comparisons. Yulina et al. (2019), in their study with fifteen preservice chemistry teachers, found that the candidates were able to think analytically only at a low level, and that they struggled, from most to least, in the dimensions of error analysis, generalization, specification, comparison, and classification. Fakhrurrazi et al. (2019) state that, in biology subjects, students had difficulties, from most to least, in matching, generalizing, classifying, analyzing errors, and specifying categories. In the current study, preservice science teachers had difficulties, from most to least, in classification, specification, error analysis, generalization, and comparison. As can be seen, the results of these three studies are quite different from each other, which may be due to differences in the contents of the questions used. This study also revealed that preservice science teachers have more difficulty using analytical thinking skills to solve problems related to the Earth and Universe and Matter and Change learning fields, and less difficulty solving problems related to the Physical Events and Living Beings and Life learning fields. A likely reason is that conceptual learning in the relevant knowledge field is necessary for gaining analytical thinking skills (Hyerle, 2008). It was determined that preservice science teachers had the greatest number of alternative concepts in the field of Matter and Change and the least in the field of Earth and Universe within the scope of the ATT.
The alternative concepts that emerged concerned density, dietary patterns, fermentation, the digestive system, the nervous system, light, and the Earth; the subject in which the preservice science teachers had the greatest number of alternative concepts was density. The emergence of alternative concepts around density may be due to the candidates' insufficient conceptual knowledge of the particulate structure of matter (Barker & Millar, 1999; Kirman-Bilgin & Yiğit, 2017). From a general perspective, both the high level of misconceptions in, and the low understanding of, the Matter and Change and Earth and Universe fields can be explained by the fact that preservice science teachers have so far received less education in these fields. For example, when the numbers of achievements in the four fields of the Science Curriculum are listed, a ranking similar to Figure 3 appears (MoNE, 2018). It can be said that the courses related to Matter and Change and Earth and Universe in high school and in Science Teacher Training programs in our country are fewer than the courses in the other two fields. Furthermore, considering that the overall test averages of the problems related to these two fields are low (Figure 3), it can be said that the preservice science teachers' lack of conceptual knowledge negatively affects their analytical thinking processes. This is because analyzing a science-based scenario or case requires theoretical knowledge about that case as well as analytical thinking skills. Bozkurt (2022) determined that content knowledge has a profound effect on the solution of a science-based scenario. When the findings obtained from the research are evaluated in general, the following main conclusions are reached. The majority of preservice science teachers use analytical thinking skills at a weakly acceptable level, and while solving problems the candidates have difficulties, from most to least, in classification, specification, error analysis, generalization, and comparison. The research also found that, while solving problems, preservice science teachers have difficulties in the learning fields of, from most to least, Earth and Universe, Matter and Change, Physical Events, and Living Beings and Life. Finally, it was determined that the preservice science teachers had alternative conceptions in each learning field, but mostly about density.

Implications

This research revealed the status of preservice science teachers' analytical thinking skills according to the learning fields. The results give science educators the opportunity to design their learning environments according to the needs of preservice science teachers. On the basis of the results of this research, science educators may be advised to conduct the critical and analytical thinking course, which is among the vocational elective courses in the undergraduate content of science teaching. Moreover, science educators may be advised to exercise this skill in their courses with case-based science scenarios based on problem-solving. A learning environment to improve the analytical thinking skills of preservice science teachers can be designed using the current research results, and its effectiveness can be investigated.
Science educators should pay attention to development in the dimensions of classification and specification by considering the alternative conceptions of the preservice science teachers. In addition, science educators should strive to further develop analytical thinking skills in the Matter and Change and Earth and Universe learning fields.
Safety and efficacy of hydroxychloroquine for treatment of non-severe COVID-19 among adults in Uganda: a randomized open label phase II clinical trial

Background: Several repurposed drugs such as hydroxychloroquine (HCQ) have been investigated for treatment of COVID-19, but none was confirmed to be efficacious. While in vitro studies have demonstrated antiviral properties of HCQ, data from clinical trials were conflicting regarding its benefit for COVID-19 treatment. Drugs that limit viral replication may be beneficial in the earlier course of the disease, thus slowing progression to severe and critical illness.

Design: We conducted a randomized open label phase II clinical trial from October to December 2020.

Methods: Patients diagnosed with COVID-19 using RT-PCR were included in the study if they were 18 years and above and had a diagnosis of COVID-19 made in the last 3 days. Patients were randomized in blocks to receive either HCQ 400 mg twice a day for the first day followed by 200 mg twice daily for the next 4 days plus standard of care (SOC) treatment, or SOC treatment alone. SARS-CoV-2 viral load (Ct values) from RT-PCR testing of samples collected using nasal/oropharyngeal swabs was determined at baseline and on days 2, 4, 6, 8 and 10. The primary outcome was median time from randomization to SARS-CoV-2 viral clearance by day 6.

Results: Of the 105 participants enrolled, 55 were assigned to the intervention group (HCQ plus SOC) and 50 to the control group (SOC only). Baseline characteristics were similar across treatment arms. Viral clearance did not differ by treatment arm: 20 and 19 participants, respectively, had SARS-CoV-2 viral load clearance by day 6, with no significant difference; the median (IQR) number of days to viral load clearance in the two groups was 4 (3-4) vs 4 (2-4), p = 0.457. There were no significant differences in secondary outcomes (symptom resolution and adverse events) between the intervention group and the control group. There were also no significant differences in specific adverse events, such as elevated alkaline phosphatase and prolonged QTc interval on ECG, between patients in the intervention group and the control group.

Conclusion: Our results show that HCQ 400 mg twice a day for the first day followed by 200 mg twice daily for the next 4 days was safe but not associated with reduction in viral clearance or symptom resolution among adults with COVID-19 in Uganda.

Trial registration: NCT04860284.

Background

The novel coronavirus, SARS-CoV-2, which causes coronavirus disease 2019 (COVID-19), is the seventh human coronavirus described to date. By 3 July 2020, more than 11 million COVID-19 infections had been reported worldwide, resulting in more than 450,000 deaths [1]. In just over a year, there have been nearly 130 million cases with more than two million deaths globally [2]. The COVID-19 pandemic has stretched the health care capacity of all systems across the globe, particularly the low-income countries with the weakest health care systems. Focus has been put on reducing the burden of infection and hospitalization as the primary goal [3]. According to Uganda Ministry of Health data, the country has had over 41,000 cases and 340 deaths since the first case was reported on 21st March 2020 [4]. Several repurposed drugs have been investigated for treatment of COVID-19; however, none have been confirmed to be efficacious.
These drugs include antimalarials like hydroxychloroquine (HCQ), antivirals such as remdesivir and favipiravir, and antiretroviral combination therapies such as lopinavir/ritonavir. Animal and human studies are not conclusive about the effect of HCQ on COVID-19: one animal study showed no effect, while other in-vitro studies and a small observational study demonstrated antiviral properties of HCQ [5][6][7]. In Uganda, an observational study among mild COVID-19 patients revealed a shorter time to recovery among those that had received HCQ [8]. Contrary to this, a retrospective study showed slower viral clearance among patients on HCQ compared to standard care [9]. Another randomized open-label trial in mild-to-moderate COVID-19 patients showed no difference in clinical status in the HCQ group as compared to standard of care [8]. However, this trial did not assess viral clearance and included patients up to 14 days after onset of symptoms. The authors asserted that it was conceivable that drugs that limit viral replication would be more beneficial in the earlier course of the disease, thus slowing progression to severe and critical illness [10]. The benefit of HCQ remained a matter of debate in Uganda due to conflicting data from higher-resource settings. Despite being ubiquitously used for the treatment of malaria, several studies have highlighted potential harm in the use of HCQ in COVID-19 patients. Cardiac arrhythmias from prolonged QT interval, such as irregular ventricular rhythms, ventricular tachycardia and fibrillation, were noted, especially with the relatively high doses administered in some trials to suppress viral replication [11]. However, the populations in these studies were older and burdened with more comorbidities compared to Uganda's COVID-19 population. We therefore performed a randomized, open-label clinical trial to determine the safety and efficacy, measured as viral clearance, of HCQ compared to standard of care (SOC) for treatment of non-severe COVID-19 in adults in Uganda. Study site We conducted a randomized open label Phase II clinical trial entitled Hydroxychloroquine for Treatment of Non-Severe COVID-19 (HONEST trial) from October–December 2020. The study was conducted at the Namboole nontraditional isolation facility, where patients with asymptomatic or mild COVID-19 and no comorbidities were isolated and managed. Namboole stadium, a multipurpose stadium located 10 km east of the central business district of Kampala city, was remodeled into a COVID-19 isolation and treatment facility for patients with asymptomatic and mild COVID-19 due to escalating case numbers in the country. Study design and population Diagnosis of COVID-19 was performed using RT-PCR at government approved laboratories. Patients diagnosed with COVID-19 were included in the study if they were 18 years or older and had a diagnosis of COVID-19 made in the last 3 days. Patients were excluded if they had known allergies to HCQ or chloroquine, were on medications with clinically significant interactions with HCQ, had a positive rapid test for malaria, were diagnosed with severe/critically ill COVID-19 (WHO Ordinal Scale ≥ 5), had QTc prolongation of > 450 ms for males or > 470 ms for females, were pregnant or breastfeeding, or were on chronic HCQ use. Participants found to have hypo- or hyperkalemia at baseline were withdrawn from the study.
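The inclusion and exclusion criteria above are essentially a screening checklist. As a minimal illustration, the Python sketch below encodes them as a function returning the reasons a patient fails screening; the `Patient` record and its field names are illustrative assumptions, not the trial's actual data-capture system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Patient:
    age: int
    days_since_diagnosis: int       # days since the positive RT-PCR result
    who_ordinal_scale: int          # WHO clinical progression scale
    qtc_ms: float                   # baseline QTc interval from ECG, in ms
    sex: str                        # "M" or "F"
    malaria_rdt_positive: bool
    hcq_or_cq_allergy: bool
    interacting_medication: bool    # clinically significant HCQ interaction
    pregnant_or_breastfeeding: bool
    chronic_hcq_use: bool

def exclusion_reasons(p: Patient) -> List[str]:
    """Return the exclusion criteria a patient meets; an empty list means eligible."""
    reasons = []
    if p.age < 18:
        reasons.append("younger than 18 years")
    if p.days_since_diagnosis > 3:
        reasons.append("COVID-19 diagnosis made more than 3 days ago")
    if p.who_ordinal_scale >= 5:
        reasons.append("severe/critical COVID-19 (WHO ordinal scale >= 5)")
    qtc_limit = 450.0 if p.sex == "M" else 470.0   # sex-specific threshold
    if p.qtc_ms > qtc_limit:
        reasons.append(f"QTc prolongation (> {qtc_limit:.0f} ms)")
    if p.malaria_rdt_positive:
        reasons.append("positive rapid test for malaria")
    if p.hcq_or_cq_allergy:
        reasons.append("known allergy to HCQ or chloroquine")
    if p.interacting_medication:
        reasons.append("medication with significant HCQ interaction")
    if p.pregnant_or_breastfeeding:
        reasons.append("pregnant or breastfeeding")
    if p.chronic_hcq_use:
        reasons.append("chronic HCQ use")
    return reasons
```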
Randomization and masking Randomization was performed by an independent statistician using a computer generated randomization code with block randomization of varied block sizes. Patients were allocated in a 1:1 ratio to receive either HCQ 400 mg twice a day for the first day followed by 200 mg twice daily for the next 4 days plus SOC treatment, or SOC treatment alone. The SOC treatment at the time included vitamin C and zinc supplementation. Symptomatic patients also received azithromycin and analgesics if necessary. Computer-generated randomization codes were enclosed in sequentially numbered opaque sealed envelopes containing the treatment allocation. After a participant met study eligibility criteria, the study nurse assigned the next envelope to the participant, opened the envelope and assigned the treatment allocation. Treatment was immediately initiated. Participants and the trial team were not blinded. Participants who progressed to WHO ordinal scale ≥ 5 (severe or critical disease) during the study were managed according to the national clinical guidelines for COVID-19, which include intravenous antibiotics and anticoagulation with low molecular weight heparin. Clinical assessments Participants were evaluated daily for new clinical symptoms, worsening or improvement of existing symptoms, and adverse events during admission. An ECG was obtained at baseline and on days 2 and 4. Where the QTc interval on ECG exceeded 500 ms or increased by > 60 ms above baseline, the ECG was repeated. If the repeat QTc interval remained above these values, HCQ was discontinued. Serum ALT and visual tests using Snellen and Ishihara charts were assessed at baseline and day 4, while serum potassium was measured only at baseline. Participants who developed grade 3 or 4 clinical or laboratory-based adverse events were discontinued from medication. Participants were discharged after 2 consecutive negative SARS-CoV-2 PCR tests, generally after 10–14 days. Interim analyses An independent data and safety monitoring board (DSMB) reviewed the study protocol and oversaw the progress of the trial. Progressive data review for safety and efficacy was planned after 25% of the participants had completed 10 days of follow-up, with further reviews as deemed necessary by the DSMB. Stopping guidelines were provided to the DSMB with the use of a Lan-DeMets spending function for the primary outcome. The first interim analysis was performed and presented to the DSMB committee on February 17, 2021, for their recommendation on whether to stop the trial for safety concerns, futility or any other reason given by the committee. The trial was stopped because of the national roll-out of HCQ as standard of care by the Uganda Ministry of Health.
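The allocation scheme described under "Randomization and masking" above can be illustrated with a short Python sketch of 1:1 block randomization with varied block sizes; the specific block sizes and the seed below are assumptions for illustration, not the trial statistician's actual code.

```python
import random

def block_randomization(n_participants: int, block_sizes=(4, 6, 8), seed: int = 2020):
    """Generate a 1:1 allocation sequence (HCQ+SOC vs SOC) using randomly
    varied block sizes, so the two arms stay balanced within each block."""
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        size = rng.choice(block_sizes)                        # varied block size
        block = ["HCQ+SOC"] * (size // 2) + ["SOC"] * (size // 2)
        rng.shuffle(block)                                    # random order within block
        allocations.extend(block)
    return allocations[:n_participants]

# Example: sequence used to fill sequentially numbered opaque sealed envelopes
print(block_randomization(10))
```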
Data analysis Primary and secondary outcomes were analyzed on the intention-to-treat population, and other outcomes and safety data were analyzed on complete cases. The primary outcome was median time from randomization to SARS-CoV-2 viral clearance by day 6. Viral load clearance was defined as a negative SARS-CoV-2 PCR test with no subsequent positives. Analysis of time to viral clearance was performed using Kaplan-Meier methods and compared across the two treatment arms using the log-rank test. We used a Cox regression model to compare the secondary outcome of rates of viral load clearance in the two arms. The proportional hazards assumption was checked using Schoenfeld residuals. For other outcomes: the proportions with PCR negative conversion by day 6 and day 10, and the proportion of participants with a 25% reduction of SARS-CoV-2 viral load (CT values) from baseline at day 6, were compared between treatment arms using the Chi-square test; the change in SARS-CoV-2 viral load (CT values) over time in the two arms was compared using Student's t-test; time to symptom clearance by day 10 was summarized using the median and inter-quartile range and compared using the Wilcoxon rank-sum test. Safety outcomes, such as incident elevated ALT (> 40 IU), were compared between treatment arms. Sample size: we assumed HCQ would lower the median time to viral clearance from 7 days (as per standard of care) to 4 days; with a power of 80% and a 5% two-sided significance level, we calculated a sample size of 284 patients (142 per group) after accounting for 25% loss to follow-up or missing data.
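In outline, the time-to-clearance analysis described above can be carried out with standard survival-analysis tooling. The sketch below uses the Python lifelines library for the Kaplan-Meier estimate, the log-rank test, the Cox model and a Schoenfeld-residual check of the proportional hazards assumption; the toy data and column names are illustrative assumptions, not the trial dataset.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test, proportional_hazard_test

# Illustrative data: days to first negative PCR (or censoring) per participant.
df = pd.DataFrame({
    "time":  [4, 3, 4, 6, 2, 4, 5, 4, 3, 6],
    "event": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],   # 1 = viral clearance observed
    "hcq":   [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],   # 1 = HCQ + SOC, 0 = SOC only
})
hcq, soc = df[df.hcq == 1], df[df.hcq == 0]

# Kaplan-Meier estimate of time to clearance in one arm
km = KaplanMeierFitter().fit(hcq.time, hcq.event, label="HCQ+SOC")
print(km.median_survival_time_)

# Log-rank comparison of the two arms
res = logrank_test(hcq.time, soc.time,
                   event_observed_A=hcq.event, event_observed_B=soc.event)
print(res.p_value)

# Cox model for the clearance rate; Schoenfeld residuals test the PH assumption
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)
print(proportional_hazard_test(cph, df, time_transform="rank").summary)
```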
Results Of the 105 participants enrolled, 55 were assigned to the intervention group (HCQ plus SOC) and 50 to the control group (SOC only). The proportion of the target sample size (284) enrolled was 37%. Figure 1 shows the disposition of the study participants. Table 1 shows the baseline characteristics of the participants. The median (IQR) age was 32 (27–43) years and the majority, 76 (72.4%), were male. Regarding COVID-19 symptoms at baseline, cough was the most common, in 24 (43.6%) participants in the intervention group and 21 (42.9%) in the control group, followed by headache in 14 (25.5%) and 11 (22.4%) in the intervention and control groups respectively. Details of the baseline COVID-19 symptoms are shown in Table 2. The proportions of participants' clinical and laboratory examination features at baseline did not differ between groups (Table 3). Of 55 participants in the intervention group and 50 in the control group, 20 and 19 participants respectively had SARS-CoV-2 viral load clearance by day 6, with no significant difference; the median (IQR) time to viral load clearance in the two groups was 4 (3–4) vs 4 (2–4) days; p = 0.457, as shown in Table 4. Figure 2 shows the Kaplan-Meier plot of time to first SARS-CoV-2 viral load clearance by treatment group. The rate of viral load clearance per 100 person-days (95% CI) did not differ between the intervention and control groups, unadjusted hazard ratio 0.89 (95% CI 0.47–1.66); p = 0.703 (Table 4). There were no significant differences in secondary outcomes between the intervention group and the control group, as shown in Table 4: SARS-CoV-2 PCR negative conversion by day 6 was found in 20 (35.1%) participants in the intervention group vs 19 (38.0%) participants in the control group, p = 0.755. Of 55 participants in the intervention group and 50 in the control group, SARS-CoV-2 CT value data were available for 15 participants in each group. There was no significant difference in the change in CT values from baseline (mean, SD): 5.8 (5.3) in the intervention group vs 4.1 (7.1) in the control group, p = 0.471. The proportion with a 50% reduction of SARS-CoV-2 viral load (CT values) from baseline by day 6 did not differ significantly: 5 (33.3%) in the intervention group vs 6 (40.0%) in the control group, p = 0.464. Regarding COVID-19 symptoms, data were available for 36 participants in the intervention group and 29 in the control group. There was no significant difference in time to symptom clearance by day 10 between the two groups (median (IQR) 3 (2–5) vs 3 (2–5) days; p = 0.909), and this finding was similar in the analysis of individual symptoms (Table 5). Safety of HCQ There were no significant differences in adverse events, such as elevated alkaline phosphatase and prolonged QTc interval, among patients in the intervention and control groups. Details of clinically significant laboratory abnormalities are shown in Table 6. [Table 1. Baseline socio-demographics of participants. HCQ = hydroxychloroquine; SOC = standard of care; SD = standard deviation; IQR = inter-quartile range. (a) Baseline SARS-CoV-2 CT values were defined as the CT values measured at the patient's enrolment; some participants had missing CT values at enrolment because the majority presented 4 days after their first positive PCR test and the repeat PCR test at enrolment was negative; only positive PCR tests yield CT values. (b) Missing values: high blood pressure (n = 2), heart disease (n = 1), diabetes (n = 1), cigarette smoking (n = 2), alcohol dependency (n = 2), HIV status (n = 1), history of allergies (n = 1), medication before admission (n = 1).] Discussion In this randomized, open-label clinical trial to determine the safety and efficacy of HCQ for treatment of non-severe, SARS-CoV-2 PCR-positive adults in Uganda, we found no difference in the proportion of participants who had PCR negative conversion, a 50% reduction in SARS-CoV-2 viral load (based on CT values) after 6 days of treatment, or resolution of symptoms by day 10 of treatment when we compared participants randomized to receive HCQ with participants receiving SOC. Since March 2020, various therapies have been evaluated in clinical trials, with adoption of some into clinical guidelines. One of these therapies, HCQ, was first used in 1955 and is considered to have a superior safety profile over chloroquine [12]. In vitro studies suggest that HCQ prevents SARS-CoV-2 binding to gangliosides, subsequently preventing binding with the Angiotensin-converting enzyme 2 receptor (ACE-2) required for viral entry into cells [13]. By incorporating into endosomes and lysosomes, the drug increases the pH of intracellular compartments, resulting in defective protein degradation, endocytosis, and exocytosis required for viral infection, replication, and propagation [14]. HCQ was shown to inhibit a broad range of viruses, including coronaviruses (SARS-CoV-1 and Middle East respiratory syndrome-CoV), in cell culture [15,16]; however, evidence from hamster models suggested that HCQ did not demonstrate an effect on reducing SARS-CoV-2 virus levels [17]. By 13 April 2021, 62 trials of HCQ for the treatment of COVID-19 had been completed [18]. The efficacy of HCQ has been explored in both mild-moderate and severe COVID-19 disease. Similar to our study, Chen and colleagues did not find statistically significant differences in PCR conversion rate by day 7, and no difference was observed in clinical outcomes [19]. The SOLIDARITY trial conducted in multiple countries did not demonstrate mortality benefit among hospitalized patients who were treated with HCQ [20]. Omrani et al. also found HCQ to be safe with no severe adverse events when used with or without azithromycin; however, it had no effect on virological outcomes at day 14 [21].
In a trial evaluating the efficacy of HCQ and standard of care vs standard of care alone, Tang and colleagues showed that the addition of HCQ did not result in a significantly higher probability of negative PCR conversion by 28 days [22]. In outpatient settings, HCQ has also shown mixed efficacy when used as post-exposure prophylaxis, with one study in India showing a relative reduction in the incidence of COVID-19 [23] while two other trials in the United States and Canada and one recent meta-analysis did not demonstrate any benefit in prevention of COVID-19 [24][25][26]. Further, the use of once or twice weekly or daily (over 8 weeks) HCQ as pre-exposure prophylaxis among health care workers did not significantly reduce the incidence of laboratory confirmed SARS-CoV-2 infection [27,28]. Two meta-analyses, one of which was conducted more recently, showed that there was insufficient evidence to demonstrate the efficacy of HCQ in reducing short term mortality or risk of hospitalization among outpatients with SARS-CoV-2 infection [29,30]. One study combining HCQ with azithromycin demonstrated significantly reduced viral titers at day 6, resulting in shortened time to clinical recovery and cough remission [6]; however, the sample size was small and the severity of disease was not reported. In the United States, HCQ received an emergency use authorization for treatment of COVID-19 [31]; this authorization was later revoked on June 15, 2020 due to growing evidence of cardiac adverse events along with evidence suggesting that the drug was unlikely to be effective in treating COVID-19 [32]. Additionally, in March 2021 a WHO expert panel review of studies testing HCQ for preventing COVID-19 found high certainty evidence indicating that HCQ has no significant impact on mortality risk or hospitalization, and moderate certainty evidence that the drug does not significantly impact the risk of developing COVID-19. The most common side effects of HCQ include nausea, vomiting, and diarrhea [33]; however, prolongation of the QTc interval has been observed with HCQ use and can result in ventricular arrhythmias [12]. In a Spanish trial involving asymptomatic contacts of patients with polymerase-chain-reaction (PCR)-confirmed COVID-19, the incidence of adverse events was higher in the HCQ group than in the usual-care group (56.1% vs. 5.9%), but no treatment-related serious adverse events were reported [34]. We found no excess occurrence of these adverse events in the HCQ arm compared to the SOC arm during our trial. In December 2020, the Uganda Ministry of Health adopted the use of HCQ for treatment of mild to moderate COVID-19 disease; subsequently, on 8 February 2021, the Uganda National Council of Science & Technology issued a directive halting this trial after enrollment of 37% (105) of the estimated sample size of 284 participants. Thus, the trial did not reach the planned sample size. Recruitment slowed when the management of patients with asymptomatic disease was changed to home-based self-isolation and treatment, from the previous recommendation of hospitalizing all those with a positive SARS-CoV-2 test. Despite this limitation, our study provides locally generated evidence on the study question. In conclusion, our results show that HCQ 400 mg twice a day for the first day followed by 200 mg twice daily for the next 4 days was safe but not associated with reduction in viral clearance or symptom resolution among adults with COVID-19 in Uganda.
These findings do not support the use of HCQ in the management of non-severe COVID-19 disease, and we recommend the exclusion of HCQ from Ugandan COVID-19 treatment guidelines.
Ras regulates kinesin 13 family members to control cell migration pathways in transformed human bronchial epithelial cells We show that expression of the microtubule depolymerizing kinesin KIF2C is induced by transformation of immortalized human bronchial epithelial cells by expression of K-RasG12V and knockdown of p53. Further investigation demonstrates that this is due to the K-Ras/ERK1/2 MAPK pathway, as loss of p53 had little effect on KIF2C expression. In addition to KIF2C, we also found that the related kinesin KIF2A is modestly upregulated in this model system; both proteins are expressed more highly in many lung cancer cell lines compared to normal tissue. As a consequence of their depolymerizing activity, these kinesins increase dynamic instability of microtubules. Depletion of either of these kinesins impairs the ability of cells transformed with mutant K-Ras to migrate and invade Matrigel. However, depletion of these kinesins does not reverse the epithelial-mesenchymal transition caused by mutant K-Ras. Our studies indicate that increased expression of microtubule destabilizing factors can occur during oncogenesis to support enhanced migration and invasion of tumor cells. Ras proteins carrying activating mutations such as V12 are resistant to inactivation by GTPase activating proteins (GAPs) and, as a result, remain constitutively in the active state, causing persistent activation of Ras-dependent, downstream effector pathways. Activating mutations in Ras proteins are present in about 20% of human cancers, with mutations in K-Ras accounting for nearly 85% of the total 1 . In non-small cell lung cancers (NSCLC), K-Ras is mutated in 15-20% of cases, with the highest mutation frequency in lung adenocarcinoma (20%-30%) 2 . Epithelial cells expressing mutant K-Ras undergo dramatic morphological changes; they often lose typical epithelial morphology and contact inhibition and become irregularly shaped, consistent with epithelial to mesenchymal transition (EMT) 3,4 . These morphological changes are accompanied by loss of epithelial proteins involved in cell-cell junctions and cell-matrix contacts such as E-cadherin. Conversion to a more migratory phenotype is related to expression of N-cadherin, often used as a marker of cells that have undergone EMT. Supporting the idea that K-Ras induces morphological changes, in certain cell lines morphology can be reverted by blocking pathways downstream of Ras, for example, with farnesyltransferase inhibitors, anthrax lethal factor, or combinations of kinase inhibitors [5][6][7][8] , flattening cells and restoring contact inhibition. KIF2A is a kinesin-13 family member which is important for formation of bipolar spindles during cell division as well as for suppression of collateral branch extension in neurons; both functions are mediated through microtubule depolymerization catalyzed by KIF2A 9,10 . The closely related kinesin, KIF2C, commonly known as the mitotic centromere-associated kinesin (MCAK), also depolymerizes microtubules in an ATP-dependent manner [11][12][13] . The depolymerase activity of these KIFs has been demonstrated in a number of ways, including in vitro assays with purified proteins, single molecule microscopy, and analysis of the phenotypes of knockout mice 11,12,9 . KIF2C has multiple roles in mitosis, from spindle assembly at the centrosome to microtubule turnover at kinetochores 14 . Because of their depolymerizing activity, these kinesins increase dynamic instability of microtubules. Few roles have been ascribed to either protein outside of mitosis.
Although KIF2C is thought to be degraded after cell division, it has been implicated in microtubule dynamics during interphase and associates with plus end tips of microtubules 12,15 . KIF2A has also been implicated in organelle localization 16 . In this study, we find that oncogenic K-Ras-induced transformation of human bronchial epithelial cells (HBEC) lacking p53 is accompanied by changes in morphology affecting both microtubule and actin cytoskeletons. Therefore, we hypothesized that regulators of the cytoskeleton may in some way be altered in transformed cells. We find that the kinesin family proteins KIF2A and KIF2C, both microtubule destabilizing, are upregulated in cells that have been transformed with K-Ras G12V and in a fraction of human cancer cell lines. Knocking down either KIF2A or KIF2C reduces the ability of K-Ras G12V-expressing, transformed bronchial epithelial cells to migrate, suggesting that aberrant expression of these proteins during transformation can contribute to the migratory potential of cancer cells. Expression of oncogenic K-Ras G12V increases expression of the microtubule depolymerases KIF2C and KIF2A We found that the microtubule depolymerizing kinesins KIF2C and, to a lesser extent, KIF2A were upregulated in a number of lung and breast cancer cell lines compared to immortalized human bronchial epithelial cells (HBEC) or human mammary epithelial cells (HMEC50) representative of normal tissue (Fig. 1A,F,G, S1A). The lung cell lines A549, Calu-6, H358, HCC515 and H1155 express K-Ras mutations and can be found in the COSMIC database (http://www.sanger.ac.uk/cosmic). We considered the possibility that mutant K-Ras might alter expression of these proteins. Because larger changes in KIF2C expression were noted, we first determined whether increased expression of KIF2C can be caused by mutated K-Ras. To do this we used an immortalized HBEC system that has been previously described 17,18 . HBEC from different patients (distinguished by a number) were immortalized by expression of human telomerase reverse transcriptase (hTERT) and CDK4, yielding HBEC3KT, HBEC30KT, etc. These immortalized cells were further altered by stable knockdown of p53 (HBEC3KT53). p53 is commonly mutated or lost in cancers, and loss of wild type p53 is required for HBEC to bypass Ras-induced senescence 2,19,20 . K-Ras G12V was stably expressed in p53 knockdown cells, yielding HBEC3KTRL53, which were used in many of the following studies (Fig. 1B). Expression of KIF2C, almost undetectable in HBEC3KT, was greatly increased in HBEC3KTRL53 grown in serum-containing medium (Fig. 1C, lanes 1 and 6, S1C) 18 . An increase in KIF2A protein was also observed in the K-Ras G12V-transformed cells, but of smaller magnitude. We next tested the effect of growth factor and nutrient withdrawal by placing the cells in Earle's balanced salt solution (EBSS). Removing nutrients and growth factors to slow protein synthesis and energy utilization had little effect on the amount of KIF2C, even after 4 hr of starvation. The increase in KIF2C protein was paralleled by greatly increased KIF2C mRNA in HBEC3KTRL53 compared to HBEC3KT (Fig. 1D), suggesting that transcription is enhanced by expression of mutant K-Ras. An increase in KIF2A mRNA expression observed in cells harboring oncogenic K-Ras reached only a low level of statistical significance (Fig. 1E).
To determine if these results with a laboratory-generated model system of mutant K-Ras transformation might be representative of patient samples, we asked if KIF2C and KIF2A were upregulated in a large group of patient-derived cancer cell lines. Information was obtained from microarray studies that had been performed on 147 lung cancer lines and 59 normal lung cell lines (Table S1). Statistically significant upregulation of KIF2C was noted in cancer lines compared to normal control cells (Fig. 1F); a less significant increase was also noted for KIF2A (Fig. 1G). No increase, or even a small decrease, in expression of KIF2B, a related kinesin 13 family member, was observed in the cancer cell lines (Fig. S1B), consistent with the idea that these KIFs have different cellular functions 21 . To determine if loss of K-Ras G12V from HBEC3KTRL53 was sufficient to decrease expression of KIF2C, K-Ras was depleted by siRNA. We found that KIF2C expression decreased to a level similar to that in HBEC3KT, indicating that continued expression of oncogenic K-Ras, even after morphological transformation has occurred, maintains elevated KIF2C expression (Fig. 2A). Expression of KIF2A was not detectably decreased by K-Ras knockdown, perhaps to be expected given the smaller increase caused by K-Ras overexpression. To evaluate the possibility that p53 also affected expression of KIF2C or KIF2A, p53 was transiently knocked down or overexpressed in the HBEC model cell lines. We observed a relatively small effect on expression of either KIF2 protein (Fig. 2B-D). Expression of oncogenic K-Ras G12V causes morphological changes that alter microtubule and actin cytoskeletons Cells transformed with oncogenic K-Ras in the context of loss or mutation of p53 and grown in serum are morphologically altered compared with isogenic cells lacking mutant K-Ras. To examine the effect of oncogenic K-Ras transformation on the microtubule and actin cytoskeleton of HBECs, we compared the morphology of HBEC3KT, HBEC30KT, HBEC3KT53 and HBEC3KTRL53 by immunofluorescence staining for actin and α-tubulin (Fig. 3A,B, S2). HBEC3KT, HBEC30KT, and HBEC3KT53 appear larger and flatter than HBEC3KTRL53. Ras-induced transformation caused inhibition of stress fiber formation 22 ; actin appeared less organized, consistent with the irregular cell morphology. HBEC3KTRL53 had fewer microtubule polymers compared to HBEC3KT53, suggesting that microtubules may be more dynamic in cells expressing mutant K-Ras, as would be expected with elevated expression of microtubule depolymerizing kinesins. These changes are also consistent with EMT (see also Fig. 7D,E). Knocking down K-Ras in HBEC3KTRL53 was sufficient to reverse much of the change in morphology of the microtubule and actin cytoskeletons, restoring microtubule polymerization and organized actin stress fibers (Fig. 3C). Signaling pathways downstream of Ras regulate morphological changes Because ERK1/2 are known to regulate microtubule and actin dynamics and are activated downstream of Ras 23-25 , we hypothesized that inhibition of the ERK1/2 pathway could also revert these phenotypes in HBEC3KTRL53. Indeed, treatment of HBEC3KTRL53 for 48 hr with 100 nM PD0325901, a MEK1/2-specific inhibitor, resulted in longer microtubules and reappearance of stress fibers (Fig. 4A), similar to the effects of K-Ras knockdown. The effect of depletion of either KIF2A or KIF2C from HBEC3KTRL53 was also similar to the effects on microtubules of K-Ras knockdown (Fig. S3).
To demonstrate that prolonged exposure to this MEK inhibitor is not toxic to the cells, cells treated with PD0325901 for 48 hr were placed in fresh serum-containing medium for 30 min, which activated ERK1/2 (Fig. 4B). Furthermore, no change in viability was noted following prolonged PD0325901 treatment (Fig. S4A). Phosphatidylinositol 3-kinase (PI3K) is another important Ras effector that promotes cytoskeletal transformation changes in part through the small Rho family GTPase Rac 26 . To determine if this Ras-activated, growth-promoting pathway is also required for these cytoskeletal changes, we inhibited the PI3K pathway with 10 µM LY294002. This PI3K inhibitor did not cause microtubule spreading or flattening of cells, but did elicit some shape changes, e.g., further elongation (Fig. 4A). These data suggest the relative importance of the ERK1/2 pathway in influencing actin and microtubule cytoskeleton organization in this oncogenic Ras-transformed system. Because the cytoskeletal changes in the transformed system seemed to be mediated by ERK1/2, we evaluated the importance of this pathway for elevated KIF2C expression. Inhibition of ERK1/2 activation by exposure of cells to PD0325901 for 48 hr, but not comparable treatment with the PI3K inhibitor LY294002, decreased KIF2C protein in HBEC3KTRL53 (Fig. 4C) and, to a lesser extent, in HBEC3KT53 cells (Fig. S4B,C). Similar to inhibiting PI3K, inhibiting mTOR, another growth-promoting pathway, with the small molecule rapamycin did not affect KIF2C expression to the same extent as inhibition of ERK1/2 activation (Fig. 4D). Inhibition of ERK1/2 activation for 72 hr also reduced KIF2A protein by approximately 30% (Fig. 4E,F). Inhibition of MEK1/2 with PD0325901 significantly reduced KIF2C mRNA, not only in HBEC3KTRL53 but also in cells that do not express mutant K-Ras, HBEC3KT and HBEC3KT53 (Fig. 5A-C). Expression of constitutively active MEK1 caused a small increase in KIF2C protein (Fig. S5). As was the case for KIF2A protein, KIF2A mRNA also decreased with reduced ERK1/2 activation (Fig. 5D-F); however, the reduction upon treatment with PD0325901 did not reach statistical significance in some cell lines as it did in HBEC3KTRL53. Interestingly, inhibition of PI3K in HBEC3KT53 results in a significant upregulation of KIF2C mRNA (Fig. 5G-I). In contrast, a closely related kinesin 13 family protein, KIF2B, was upregulated by MEK inhibition under the same circumstances, suggesting that functions of specific kinesin molecules are differentially sensitive to regulation through this pathway (Fig. S4D-F). KIF2B is also not upregulated in cancer lines (Fig. S1B). Changes in expression of KIF2C or KIF2A do not alter cell cycle progression in HBEC3KTRL53 KIF2A and KIF2C have well defined but distinct roles in mitosis 21 . Because KIF2C and KIF2A were upregulated in cancer, we wondered if their functions in regulating the cell cycle were disturbed. Therefore, we examined the cell cycle profiles following individual knockdown of KIF2A or KIF2C (Fig. 6A,B). DNA content was measured following depletion of kinesins for 96 hr to analyze cell cycle profiles. There appeared to be no difference in the number of cells in G2/M phase when cells were depleted of KIF2A or KIF2C. Unlike an earlier study suggesting a difference in mitotic index in U2OS cells 10 , we did not observe mitotic accumulation of HBEC3KTRL53 following knockdown of KIF2A or KIF2C.
As a positive control, exposure of HBEC3KTRL53 to 200 nM taxol overnight caused marked accumulation of cells in mitosis (Fig. S6). Thus, under these conditions, residual KIF2A and KIF2C or other compensating proteins are apparently sufficient to prevent abnormalities. KIF2A and KIF2C facilitate migration of transformed cells Microtubule dynamics have long been implicated in the migratory ability of cells 27,28 . Therefore, we hypothesized that knockdown of KIF2A or KIF2C could have an effect on cell migration by disturbing microtubule dynamics. As anticipated, HBEC3KTRL53 cells migrated significantly more than HBEC3KT (Fig. 7A). We knocked down KIF2A, KIF2C or both using siRNA in HBEC3KT and HBEC3KTRL53 and found that depletion of either or both KIF2A and KIF2C reduced migration of HBEC3KT and HBEC3KTRL53 through membranes compared to cells treated with a nontargeting siRNA (Fig. 7A,C). Knockdown of these kinesins also reduced the ability of HBEC3KTRL53 to invade through Matrigel (Fig. 7B,C). HBEC3KT showed little or no ability to invade. HBEC3KT53 express E-cadherin exclusively. On the other hand, HBEC3KTRL53 have lost E-cadherin and instead express N-cadherin. Depletion of either KIF2C or KIF2A does not alter this pattern of cadherin expression. Thus, KIF2A or 2C knockdown did reverse the changes in microtubule organization but did not reverse EMT caused by mutant K-Ras, as assessed by expression of E-cadherin (Fig. 7D,E, S3). Importantly, loss of either or both KIF2A and KIF2C reduced the migration and invasiveness of HBEC3KT transformed with oncogenic K-Ras. Discussion We find that KIF2C and, to a lesser extent, KIF2A, two related microtubule-depolymerizing kinesins, are upregulated in transformed lung epithelial cells and contribute to the ability of cells transformed with mutant K-Ras to migrate. A number of kinesins including KIF2A have been implicated in cancers either from cell-based studies or through cancer genome analysis, most in the context of their mitotic roles [29][30][31][32][33] . Nearly four dozen human kinesins are known 34 , making the delineation of their individual functions difficult at best. Some transport cargo on microtubules in a plus-end direction, and others transport cargo in a minus-end direction, while a smaller number, including KIF2C, KIF2A, and KIF2B, depolymerize microtubules, increasing dynamic instability 11,12,35 . Why two of these related kinesins are utilized by K-Ras, and the extent to which the actions of these depolymerases in K-Ras-transformed cells reflect their physiological actions in normal cells, are questions that remain to be answered. Reduced migration of HBEC and Ras-transformed HBEC upon KIF2A and 2C knockdown suggests that some of their actions are common to normal and cancer cells. The full transforming potential and tumor formation caused by mutant K-Ras depend on its activation of interacting downstream effector pathways. Activated K-Ras contributes to many properties of successful cancers; these include enhanced proliferation under suboptimal conditions, evasion of cell death, escape from the primary tumor, and invasion and metastasis to distant sites. A variety of observations have suggested that Ras transformation increases microtubule dynamics 36 . Changes in microtubule dynamics may have multiple consequences, but among them is modulating the migratory capacity of cells.
For example, H-Ras transformed MCF10a cells exhibit fewer acetylated microtubules; a reduction in this post-translational modification is thought to indicate a decrease in microtubule stability 23 . Mechanistic studies have focused largely on microtubule stabilizing factors, such as discs large 1 (Dlg1), RASSF1A and adenomatous polyposis coli (APC), which control cell motility and are frequently lost in human cancers [37][38][39] . Our studies indicate that increased expression of microtubule destabilizing factors can also occur during oncogenesis to support enhanced cell migration and invasion. The Raf/ERK1/2 and PI3K pathways are major effectors of Ras transformation and have powerful actions on the cytoskeleton. ERK1/2 were first described as microtubule-associated protein kinases and regulate aspects of both microtubule and actin lattices 24,25,[40][41][42][43] . In Ras-transformed Swiss-3T3 fibroblasts, sustained ERK-MAPK signaling prevents actin stress fiber formation and promotes migration by downregulating Rho-kinase 22,44 . In agreement with this finding, inhibition of ERK1/2 activation by PD0325901 restored actin stress fibers in our system and restored microtubule polymers as well. Ras induction of KIF2C was suppressed through inhibition of ERK1/2 activation, resulting in expression of KIF2C comparable to its amount in normal immortalized HBEC. A less dramatic change was also noted in KIF2A. We conclude that the ERK1/2 pathway is a major Ras effector controlling expression of these kinesins. These findings suggest that ERK1/2 may be more important in controlling morphology in transformed cells than previously recognized. Mutation of K-Ras induces changes in gene expression and morphology of cancer cells that have been evaluated in a number of ways, leading to the conclusion that changes in many genes downstream of Ras contribute to cancer phenotypes 2, 45-49 . Among prominent examples, Zeb1 participates in the induction of EMT by suppressing expression of E-cadherin 48,50 . Knockdown of K-Ras in cancer cells has revealed genes whose expression is reversible upon loss of mutant K-Ras and those that have become Ras independent, such as Zeb1 4,48 . In the transformed model we studied, knockdown of K-Ras or inhibition of ERK1/2 activation did reduce KIF2C and KIF2A expression, suggesting that these genes retain Ras dependence for expression. PI3K influences the actin cytoskeleton to confer a migratory advantage on cancer cells. The mechanism includes the stimulation of Rac through regulation of T-lymphoma invasion and metastasis gene 1 (Tiam), which functions as a specific guanine nucleotide exchange factor (GEF) for Rac1 51 . Additionally, the regulatory subunit of PI3K, p85α, can activate Cdc42 and subsequently regulate actin dynamics and migration 52 . Though we hypothesized that some of the changes in the actin cytoskeleton resulting from K-Ras G12V transformation would be reversed by inhibition of the PI3K pathway, we found that in our system this pathway had a smaller than expected effect on the actin phenotype. Prolonged inhibition of this pathway also had a minor effect on microtubule organization and only a small effect on KIF2C expression. Thus, in this model of non-small cell lung cancer, ERK1/2 are dominant regulators of morphology and of kinesins involved in microtubule dynamics.
Finally, we determined that KIF2C and KIF2A are not only upregulated in our laboratory-generated system of K-Ras oncogenic transformation but also in many cancer lines isolated from lung cancer patients. Knockdown of KIF2C and KIF2A decreased the ability of HBEC3KTRL53 to migrate and invade, suggesting the utility of targeting Ras-mediated pathways that promote different aspects of cancer biology for therapeutic advantage. Because of its critical functions in regulating the cytoskeleton, KIF2C has been proposed as a potential new cancer drug target for breast, colorectal and pancreatic cancers 53 . Further studies are necessary to investigate the role of KIF2C and KIF2A in cancer invasion and metastasis. Cell Culture Immortalized HBEC3KT, HBEC30KT and HBEC3KT53 cells were cultured in keratinocyte serum-free medium (KSFM) (Invitrogen) supplemented with 5 ng/mL epidermal growth factor and 50 µg/mL bovine pituitary extract according to the manufacturer's recommendations. HBEC3KTRL53 were cultured in RPMI-1640 medium supplemented with 5% heat-inactivated fetal bovine serum (vol/vol) and 2 mM L-glutamine. Cells were grown at 37 °C in a humidified atmosphere of 5% CO2. MDA-MB-231, HCC38, HCC1143, Htb126, and MCF7 cells were provided by M.A. White (Dept Cell Biology) and T47D and HCC1428 cells were provided by G.W. Pearson (Simmons Comprehensive Cancer Center, Dept Pharmacology). SUM149PT and SUM190PT cells were purchased from Asterand (Detroit, MI) and grown according to the provider's culture conditions. HCC1937 cells were a gift of A. Gazdar (Hamon Cancer Center, Dept Pathology). HME50 cells were derived from the non-cancerous breast tissue of a female diagnosed with Li-Fraumeni syndrome as previously described 54 . The missense p53 mutation (M133T) in HME50 was sequence verified. Breast cancer cell lines were cultured in Dulbecco's modified Eagle's medium or RPMI-1640 with 10% fetal calf serum. HME50 was cultured in serum-free conditions as described elsewhere 55 . Microarray Our cell line panel consists of 118 NSCLCs, 29 SCLCs, 30 HBECs (immortalized human bronchial epithelial cells), and 29 HSAECs (immortalized human small airway epithelial cells). All cell lines were DNA-fingerprinted and mycoplasma-tested. Total RNA (500 ng) was prepared using the RNeasy Midi kit (Qiagen, Valencia, CA) and checked for quality and concentration using the Bio-Rad Experion Bioanalyzer (Hercules, CA). cRNA labeling was done with the Ambion Illumina TotalPrep RNA Amplification kit (IL1791). Amplified and labeled cRNA probes (1.5 µg) were hybridized to Illumina HumanWG-6 V3 Expression BeadChips (BD-101-0203) overnight at 58 °C, then washed, blocked and detected with streptavidin-Cy3 per the manufacturer's protocol. After drying, the chips were scanned with the Illumina iScan system. Bead-level data were obtained and pre-processed using the R package mbcb for background correction and probe summarization 56 . Pre-processed data were then quantile-normalized and log-transformed. All array data are deposited in GEO (GSE32036).
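The bead-level preprocessing above was performed in R with the mbcb package; purely as an illustration of what the last two steps do, here is a minimal Python sketch of quantile normalization followed by a log transform on a toy probe-by-array matrix (tie handling is simplified).

```python
import numpy as np

def quantile_normalize(expr: np.ndarray) -> np.ndarray:
    """Force every array (column) to share the same intensity distribution:
    rank each column, then replace each value with the mean, across arrays,
    of the values occupying that rank."""
    ranks = np.argsort(np.argsort(expr, axis=0), axis=0)   # per-column ranks
    sorted_cols = np.sort(expr, axis=0)
    mean_by_rank = sorted_cols.mean(axis=1)                # reference distribution
    return mean_by_rank[ranks]

# Toy matrix: rows = probes, columns = arrays (samples)
expr = np.array([[5.0, 4.0, 3.0],
                 [2.0, 1.0, 4.0],
                 [3.0, 4.0, 6.0],
                 [4.0, 2.0, 8.0]])

normalized = quantile_normalize(expr)
log_expr = np.log2(normalized + 1.0)   # log-transform after normalization
print(log_expr)
```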
siRNA Cells were transfected for 48 to 96 hr, as indicated, with dsRNA oligonucleotides using Lipofectamine RNAiMax according to the manufacturer's protocol (Invitrogen). The following target sequences for KIF2A were used: GAAAACGACCACUCAAUAA (Thermo Scientific) and GACCCTCCTTCAAGAGATA (Thermo Scientific); KIF2C: GCAAGCAACAGGUGCAAGU (Thermo Scientific) and GGCAUAAGCUCCUGUGAAU (Thermo Scientific); Ras: GGAGGGCUUUCUUUGUGUA (Thermo Scientific); p53: GGAGAAUAUUUCACCCUUC (Thermo Scientific). In several experiments in which duplicate lanes are shown, duplicate wells of cells were treated with the same oligonucleotides. Immunofluorescence Cells were washed with Tris-buffered saline (TBS); fixed with 4% paraformaldehyde (vol/vol) in TBS for 10 min; and washed twice for 5 min with TBS. Cells were permeabilized with 0.1% Triton X-100 for 5 minutes and washed twice with TBS. After incubation with 10% normal goat serum (vol/vol) at room temperature for one hr, cells were incubated with the indicated antibodies at 4 °C overnight. Cells were washed with TBS, incubated with Alexa Fluor-conjugated secondary antibody at room temperature for one hr, washed with TBS, and imaged. Fluorescent Z-stacks (0.2 µm steps) were acquired and deconvolved using the DeltaVision RT deconvolution microscope. α-tubulin antibodies were from Sigma (T6199). Cell cycle analysis HBEC3KTRL53 were transfected with siRNA for 96 hr and collected for flow cytometry. Cells were trypsinized and collected by sedimentation at 1,000×g. Cells were washed in 1X phosphate buffered saline (PBS) and fixed in 70% cold EtOH at −20 °C overnight. Cells were washed in 1X PBS and stained with 0.4 ml of propidium iodide/RNase A solution. DNA content was measured by flow cytometry with a FACSCalibur (BD Biosciences), and data were analyzed using FlowJo. Cells harvested for immunoblot were washed with 1X PBS and harvested with Laemmli sample buffer. Samples were boiled for 2 min at 95 °C, sonicated, and boiled again for 2 min at 95 °C. Protein concentration was measured with the BCA Protein Assay (Thermo Scientific Prod # 23235). Cell lysate protein (20 µg) was resolved on gels and processed for immunoblotting as above. Migration Assays The HBEC3KTRL53 and HBEC3KT cell lines were used for all migration assays. For the transwell migration assays, 1 × 10^4 cells were seeded 72 hr post-knockdown of the indicated proteins, using Transwell permeable supports (Corning #3422). Cells were seeded in the top chamber in 5% FBS and allowed to migrate along a concentration gradient through a polycarbonate membrane with 8 µm pores to the bottom chamber containing medium with 25% FBS. After 24 hr cells were fixed, stained (with 1% methylene blue, 1% borax solution), and counted. For invasion assays, 1.5 × 10^5 cells were embedded in Growth Factor Reduced Matrigel in transwell permeable supports 72 hr post-knockdown of the indicated proteins. Cells were allowed to migrate for 48 hr across membranes with a gradient of 25% serum in the bottom chamber. Cells were fixed, stained (with 1% methylene blue, 1% borax solution), and counted. Fig. 3. Cells transformed with K-Ras G12V display changes in microtubule and actin cytoskeletons. Actin was detected using phalloidin (green) and α-tubulin (red) was detected by immunostaining in (A) HBEC3KT53, (B) HBEC3KTRL53, and (C) HBEC3KTRL53. In (C), staining was performed 72 hr after transfection with control (top) or K-Ras siRNA (bottom).
Confining strings, axions and glueballs in the planar limit We present recent results on the spectrum of a confining flux tube that is closed around a spatial torus, as a function of its length, as well as the spectrum of glueballs. The extraction of the spectra has been realized by simulating four dimensional $SU(N)$ gauge theories and performing measurements using lattice techniques. Regarding flux-tubes, we have performed calculations for $N=3,5,6$ and for various values of spin, parity and longitudinal momentum. Long flux-tubes can be thought of as infinitesimally thin strings; hence their spectrum is expected to be described by an effective string theory. Furthermore, the flux-tube's internal structure makes possible the existence of massive states in addition to string modes. Our calculations demonstrate that although most states exhibit a spectrum which can be approximated adequately by Nambu-Goto, there is strong evidence for the existence of a massive axion on the world-sheet of the QCD flux-tube as well as a bound state of two such axions. Regarding glueballs, we extracted spectra from $N=2$ to $N=12$, which enables us to extrapolate to $N= \infty$. Our main aim was to calculate the lightest glueball masses for all different configurations of the quantum numbers of spin, parity and charge conjugation. This provides a major update on the spectrum of glueballs in the planar limit. Introduction During the last three decades lattice gauge theory simulations have provided useful information on the physics of 't Hooft's large-$N$ limit of gauge theories as well as QCD. In parallel, the "second superstring revolution" of Maldacena's AdS/CFT correspondence bloomed, leading to gauge-gravity dualities. Such dualities between weakly coupled string theories and strongly coupled gauge theories at large $N$ have led to a common interest in the physics of large-$N$ gauge theories. Understanding the large-$N$ limit of gauge theories requires the investigation of the masses of associated states. The simplest such states one can consider are glueballs and flux-tubes, with both states reflecting hadronic dynamics. The calculation of the spectrum of these excitations has been a matter of investigation by both lattice gauge theory and string theory, including the AdS/CFT duality and effective bosonic string theory. In addition, a more straightforward relation between these two fields has been established: the lattice provides data extracted from first principles for comparison with strings. In QCD the quarks are confined in bound states by forming open flux-tubes. Long flux-tubes behave similarly to thin strings: if you pull the string apart, at some point it breaks; thus the term confining strings. However, to observe such a phenomenon in a lattice QCD calculation requires the introduction of dynamical quarks (sea quarks) in the Markov-chain simulation used in the production of configurations. We consider pure gauge theories, where such effects do not appear. By placing the confining flux-tube at a given position in space we expect $D-2$ massless modes to propagate along the string, arising from the spontaneously broken translation invariance in the $D-2$ directions transverse to the flux. We thus expect that there should be a low energy effective string theory describing such oscillating modes. Although a flux-tube can be considered effectively as a thin string, it also has an intrinsic width. This suggests that massive states related to the intrinsic structure of the tube may exist in the spectrum.
One can investigate whether such states exist by extracting the flux-tube spectrum, comparing it with an effective string theory model and identifying states which exhibit significant deviations from the theoretical description. A naive expectation would be that a massive mode has the characteristics of a resonance with an energy gap of the order of the mass gap of the theory (the scalar glueball mass). For reasons of simplicity, we investigated the spectrum of the closed flux-tube which winds around the spatial lattice torus, hence the name "torelon". This set-up avoids the consideration of the effect of static quarks on the spectrum and focuses on the dynamics of the flux-tube. In the past it has been demonstrated [1] that the confining string in $D = 2+1$ $SU(N)$ gauge theories can be adequately approximated by the Nambu-Goto free string in flat space-time, from short to long flux-tubes, without any massive excitations showing up. Furthermore, we demonstrated [2] that the spectrum of the closed flux-tube in $D = 3+1$ consists mostly of string-like states, but, in contrast to the $D = 2+1$ case, a number of excitations with quantum numbers $0^{--}$ appeared to be in accordance with the characteristics of a massive excitation. In 2013, Dubovsky et al. [3] demonstrated that this state arises naturally if one includes a Polyakov topological piece in the string theoretical action. Our old results were limited: the spectrum had been extracted for only a few string lengths and with low statistics. Recently, we proceeded towards a major improvement of the previous investigation in $D = 3+1$ by extracting the spectrum of the flux-tube for all the irreducible representations spanned by the quantum numbers (QNs) $\{|J|, P_\perp, q\}$, using three values of the number of colours, namely $N = 3, 5, 6$, as well as by probing a large set of flux-tube lengths. In addition to flux-tubes we have also improved the older glueball spectrum calculations in the large-$N$ limit. Our main aim in this work is to provide a calculation of the low-lying glueball mass spectrum for all quantum numbers and all values of $N$. This means calculating the lowest states in all the irreducible representations $R$ of the rotation group of a cubic lattice, and for both values of parity $P$ and charge conjugation symmetry $C$. We do so by performing calculations in the corresponding lattice gauge theories over a sufficient range of lattice spacings, and with enough precision that we can obtain plausible continuum extrapolations. We also put effort into extrapolating to the $N = \infty$ limit and comparing this to the physically interesting $SU(3)$ theory. To do so we have performed our calculations for $SU(N)$ gauge theories with $N = 2, 3, 4, 5, 6, 8, 10, 12$. The structure of these proceedings is the following. First, in Section 2 we provide a short chapter on the large-$N$ limit to remind ourselves of the basic properties of physics in the planar limit. Then in Section 3 we present the effective string theoretical descriptions suitable for approximating the spectrum of the confining string. Subsequently, in Section 4, we provide a brief description of the lattice setup, explaining how one can extract the masses of colour singlets on the lattice as well as the quantum numbers relevant for the extraction of the flux-tube and glueball spectra. Then, in Section 5, we move to the presentation of the results, starting from the spectra of confining strings, demonstrating the appearance of the world-sheet axion, and proceeding to the spectrum of glueballs. Finally, in Section 6, we conclude.
Large-$N$ limit Yang-Mills gauge theory has a dimensionless running coupling $g^2$ and we might, thus, expect to be able to use the coupling as a general parameter for the theory. However, due to the fact that scale invariance is anomalous, if we set $g^2$ to some particular value $g_0^2$ we can only hope to use it as a useful expansion parameter for physics close to the scale $\mu_0$ at which the running coupling takes that value, in other words where $g^2(\mu = \mu_0) \simeq g_0^2$. An alternative but more general expansion might be provided by $1/N$, as 't Hooft suggested back in 1974. One can think of expanding $SU(N)$ gauge theories in powers of $1/N^2$ around $SU(\infty)$: according to 't Hooft's double-line representation for the gluon propagators and the associated vertices, ignoring for simplicity the difference between $SU(N)$ and $U(N)$, the expansion parameter can be expressed as $1/N^2$. As a result, a smooth large-$N$ limit can be achieved if one keeps the parameter $g^2 N$ fixed. This can be seen by considering a gluon loop insertion in the gluon propagator using the double-line notation, as represented pictorially in the left panel of Figure 1. The two vertices give a factor of $g^2$ and the sum over the colour index in the closed loop gives a factor of $N$. Hence, such an insertion will produce a factor of $g^2 N$ in the amplitude. To ensure smooth physics as we increase $N \to \infty$ we require that the number of such insertions in the diagrams dominating the physics of interest stays roughly fixed as we alter $N$. The above requires that we keep $\lambda = g^2 N$ fixed. Such diagrams can be mapped onto a plane and can thus be called planar. On the other hand, diagrams in which gluon propagators cross cannot be mapped onto a plane but can be viewed as planar diagrams with handles; an example of such a diagram is presented in the right panel of Figure 1. This non-planar Feynman diagram has six vertices and just one circulating colour loop, which means that the expression will be proportional to $g^6 N = \lambda^3/N^2$, i.e. suppressed by a factor of $1/N^2$ relative to the leading planar contributions. This is a naive way to demonstrate that non-planar Feynman diagrams vanish in the large-$N$ limit. It is also straightforward to show that a Feynman diagram that contains virtual quark loops is suppressed in the large-$N$ limit. Therefore, at the 't Hooft limit only planar Feynman diagrams without quark loops survive. So far we are making the assumption that there is a confining phase in the large-$N$ limit. This is based on numerical evidence. For instance, flux-tubes and glueballs exist and their masses extrapolate well in the large-$N$ limit. We draw this conclusion by performing calculations for $SU(N)$ gauge theory at a sequence of finite values of $N$. Of course it would be nice to show that there is in fact a large-$N$ confining phase, and that a smooth physical limit does in fact exist. At this point it should be made clear that there is no expectation that all the physics of $SU(3)$ is close to that of $SU(\infty)$. We can only establish that an observable in $SU(3)$ is close to that of $SU(\infty)$ once we perform the calculation. It is possible that other large-$N$ limits are more appropriate for the physics under investigation. For instance, in QCD, where we have 2 or 3 light flavours, $N_f/N \sim 1$; hence it may be that the limit $N \to \infty$ keeping $N_f/N$ fixed is more appropriate for some physical quantities [4]. A nice review where several such limits are discussed is provided in Ref. [5].
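As a toy check of the counting just described, the following Python sketch (an illustrative aid, not part of the original analysis) returns the power of $N$ carried by a gluon diagram at fixed 't Hooft coupling $\lambda = g^2 N$, given its numbers of vertices and closed colour loops.

```python
from fractions import Fraction

def n_power(vertices: int, colour_loops: int) -> Fraction:
    """Power of N carried by a gluon diagram at fixed lambda = g^2 N.
    Each pair of vertices contributes g^2 = lambda / N and each closed
    colour loop contributes a factor of N, so the diagram scales as
    lambda**(vertices/2) * N**(colour_loops - vertices/2)."""
    assert vertices % 2 == 0, "vertices pair up into powers of g^2"
    return Fraction(colour_loops) - Fraction(vertices, 2)

# Planar loop insertion in the gluon propagator: 2 vertices, 1 colour loop
print(n_power(2, 1))   # 0  -> O(N^0): survives at N = infinity
# Non-planar example of Figure 1: 6 vertices, 1 colour loop
print(n_power(6, 1))   # -2 -> O(1/N^2): suppressed in the planar limit
```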
Low energy Effective String Theories Let us imagine a flux-tube as a confining string with length $l = aL$ winding around the spatial torus, where $a$ is the lattice spacing. Imposing a fixed spatial position for the string spontaneously breaks translation symmetry. Therefore, we expect $D-2$ massless Nambu-Goldstone bosons to appear at low energies. Such bosons reflect the transverse fluctuations of the flux-tube around its classical configuration. We would thus expect a low energy Effective String Theory (EST) to describe the flux-tube spectrum for long enough strings. Of course a flux-tube is not an infinitesimally thin string; it presumably has an intrinsic width $\propto 1/\sqrt{\sigma}$. We would therefore expect that the spectrum of the flux-tube consists not only of string-like states but also of massive excitations. Below, we describe the current theoretical predictions for the excitation spectrum of the Nambu-Goldstone bosons as well as an approach to explain the existence of massive resonances on the world-sheet of the confining string. The Goddard-Goldstone-Rebbi-Thorn string In this subsection we describe the spectrum of the Goddard-Goldstone-Rebbi-Thorn (GGRT) [6] string, or in other words the Nambu-Goto (NG) [7] closed string. The NG string describes non-critical relativistic bosonic strings. One can extract the GGRT spectrum by performing light-cone quantization of the closed string using the NG action or, equivalently, the Polyakov action. This model is Lorentz invariant only in $D = 26$ dimensions. Nevertheless, for reasons that we now understand better [3], NG can also describe to a good extent the spectrum of strings in $D = 3$ and $4$ dimensions. The expression of the GGRT spectrum is given by: $E_{N_L,N_R}(q,l) = \sqrt{(\sigma l)^2 + 8\pi\sigma\left(\frac{N_L+N_R}{2} - \frac{D-2}{24}\right) + \left(\frac{2\pi q}{l}\right)^2}$, with the level-matching constraint $N_L - N_R = q$. (2) Lorentz invariant string approaches Systematic ways to study a Lorentz invariant EST which can describe the confining string were pioneered by Lüscher, Symanzik, and Weisz in [8] (static gauge) as well as by Polchinski and Strominger in [9] (conformal gauge). Such approaches produce predictions for the energy of states as an expansion in $1/(\sqrt{\sigma}l)$. Terms in this expansion of $O(1/l^n)$ are generated by $(n+1)$-derivative terms in the EST action whose coefficients are a priori arbitrary Low Energy Coefficients (LECs). Interestingly, these LECs were shown to obey strong constraints that reflect a non-linear realization of Lorentz symmetry [10][11][12], and so to give parameter-free predictions for certain terms in the $1/l$ expansion. The EST approaches can be characterised by the way one performs the gauge fixing of the embedding coordinates on the world-sheet. This can be either the static gauge [8,10,12] or the conformal gauge [9,13,14], with both routes leading to the same results. The starting point of building the EST is the leading area term, which gives rise to the linearly rising potential for long strings, i.e. $E \simeq \sigma l$. Subsequently comes the Gaussian action, which is responsible for the $\propto 1/l$ Lüscher term with a universal coefficient depending only on the dimension $D$. At the next step one adds the 4-derivative terms, which yield a correction to the energy spectrum proportional to $1/l^3$ with a universal coefficient that also depends on the dimension $D$. One can include the 6-derivative terms and show that for $D = 3$ they yield a fourth universal term proportional to $1/l^5$ in the energy spectrum, while for general states in $D = 4$ the coefficient of the $O(1/l^5)$ term is not universal. Nonetheless, the energy just for the ground state in the $D = 4$ case is universal at this order. Summarizing the above information, the spectrum is given by $E_n(l) = \sigma l + \frac{4\pi}{l}\left(n - \frac{D-2}{24}\right) - \frac{8\pi^2}{\sigma l^3}\left(n - \frac{D-2}{24}\right)^2 + O(1/l^5)$. (3) Since we think of the GGRT model as an EST, which may be justified only for long strings [15], one can expand the associated energy for $\sqrt{\sigma} l \gg 1$. The result of the expansion is the same as the expression in Equation 3, where for simplicity we have set $q = 0$ and $n = (N_L + N_R)/2$.
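To see how quickly the long-string expansion approaches the full square root, the following Python sketch (purely illustrative, with the string tension set to one) evaluates Equation 2 at $q = 0$ and the truncation of Equation 3 for a few string lengths.

```python
import math

SIGMA = 1.0   # string tension, sets the units
D = 4         # space-time dimension

def e_ggrt(n: float, l: float, q: int = 0, sigma: float = SIGMA, d: int = D) -> float:
    """Full GGRT (Nambu-Goto) closed-string energy, Equation 2,
    with n = (N_L + N_R)/2 and longitudinal momentum 2*pi*q/l."""
    arg = (sigma * l) ** 2 + 8 * math.pi * sigma * (n - (d - 2) / 24.0) \
          + (2 * math.pi * q / l) ** 2
    return math.sqrt(arg)

def e_expanded(n: float, l: float, sigma: float = SIGMA, d: int = D) -> float:
    """Long-string expansion, Equation 3, truncated at O(1/l^3)."""
    c = n - (d - 2) / 24.0
    return sigma * l + 4 * math.pi * c / l - 8 * math.pi ** 2 * c ** 2 / (sigma * l ** 3)

# The truncated expansion approaches the full result as the string gets longer
for l in (2.0, 4.0, 8.0, 16.0):
    print(l, e_ggrt(0, l), e_expanded(0, l))
```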
The result of the expansion is the same as the expression in Equation 3, where for simplicity we have set q = 0 and n = (N_L + N_R)/2.

The topological term action

In 2013, Dubovsky et al. worked out an approach for extracting the spectrum of the confining string for short as well as for long lengths. The idea was based on the fact that the GGRT string provides the best first approximation to the flux-tube spectrum, and that Equation 2 can be re-expressed as

E(q, l) = √[ (σl)² + 2σl Σᵢ |pᵢ| + (2πq/l)² − (D − 2)πσ/3 ] ,

where pᵢ are the momenta of the individual phonons, quantized in units of 2π/l, that comprise the state. The naive expansion in 1/(√σ l) is the combination of two different expansions: the first is an expansion in the softness of the individual quanta compared to the string scale, i.e. in p/√σ, and the second is a large-volume expansion, i.e. an expansion in 1/(√σ l). To disentangle the two expansions the following procedure is adopted. First, one calculates the infinite-volume S-matrix for the phonon collisions. This is done perturbatively, given that the center-of-mass energy of the colliding phonons is small in string units; this is called the momentum expansion. Subsequently, the authors extracted the finite-volume energies from this S-matrix by using approximate integrability and the Thermodynamic Bethe Ansatz (TBA). This allows one to extract the winding corrections to the energy both from virtual quanta travelling around the circle and from phonon interactions. The authors argued that when a state has only left-moving phonons the GGRT winding corrections in the string spectrum are small and, therefore, one expects the spectrum to be close to that of the free theory. On the contrary, for states containing both left- and right-moving phonons, the energy corrections are larger. The above is in good agreement with most of the states in D = 4, but fails to provide an explanation for the anomalous behaviour of the pseudoscalar level 0⁻⁻, first demonstrated in [2,16], suggesting that an additional action term is required in order to describe such excitations. The most straightforward way to do this is the introduction of a massive pseudoscalar particle on the world-sheet. The leading interaction compatible with non-linearly realized Lorentz invariance for such a state is a coupling to the topological invariant known as the self-intersection number of the string,

S_int = (α/8π) ∫ d²σ εᵃᵇ ε_ij Kⁱ_ac Kʲ_b^c ,

with K the extrinsic curvature of the world-sheet, α the associated coupling and σ_a, a = 1, 2, the world-sheet coordinates. Adapting the above interaction term to our old results for SU(3) at β = 6.0625 yields a mass of m/√σ ≈ 1.85 (+0.02/−0.03) and a coupling of α = 9.6 ± 0.1.

The lattice gauge theory

We define the SU(N) gauge theory on a D = 4 Euclidean space-time lattice which has been compactified along all directions, with volume L × L_⊥ × L_⊥ × L_T in lattice units. The length of the flux-tube is equal to l = aL, while L_⊥ and L_T were chosen to be large enough to avoid finite-volume effects. For the calculation of the confining-string spectra we choose the two transverse lattice extents to be equal, so as to ensure rotational symmetry around the string axis, while for glueballs we choose all spatial directions to be equal, i.e. L = L_⊥, for similar reasons. We perform Monte-Carlo simulations using the standard Wilson plaquette action

S = β Σ_p [ 1 − (1/N) Re Tr U_p ] ,

where the sum runs over all plaquettes p, U_p is the basic square Wilson loop one can construct with sides of one lattice spacing, and β = 2N/g²(a) is the inverse coupling.
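As a concrete illustration of the plaquette sum above, here is a minimal, self-contained sketch (not the production code used in this study): it builds a random SU(N) link field on a tiny lattice and evaluates the Wilson action. The lattice size, seed and all names are ours, chosen purely for illustration.

```python
import itertools
import numpy as np

def random_sun(n, rng):
    # QR-decompose a complex Gaussian matrix to get a unitary matrix,
    # then divide by det^(1/n) to project U(n) -> SU(n).
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    q = q * (d / np.abs(d)).conj()   # fix column phases for a uniform distribution
    return q / np.linalg.det(q) ** (1.0 / n)

def shift(x, mu, L):
    # Site x displaced by one lattice unit in direction mu (periodic boundaries).
    y = list(x)
    y[mu] = (y[mu] + 1) % L
    return tuple(y)

N, L, D = 3, 4, 4            # colours, lattice extent, space-time dimensions
beta = 6.0625                # inverse coupling beta = 2N/g^2
rng = np.random.default_rng(1)

# One SU(N) matrix per site and direction ("hot" start).
sites = list(itertools.product(range(L), repeat=D))
U = {(x, mu): random_sun(N, rng) for x in sites for mu in range(D)}

# Wilson action S = beta * sum_p [1 - (1/N) Re Tr U_p], with U_p the ordered
# product of links around an elementary square in the (mu, nu) plane.
S, n_plaq = 0.0, 0
for x in sites:
    for mu in range(D):
        for nu in range(mu + 1, D):
            Up = (U[x, mu] @ U[shift(x, mu, L), nu]
                  @ U[shift(x, nu, L), mu].conj().T @ U[x, nu].conj().T)
            S += 1.0 - np.trace(Up).real / N
            n_plaq += 1
S *= beta

print(f"<(1/N) Re Tr U_p> = {1.0 - S / (beta * n_plaq):.4f}")
print(f"Wilson action S = {S:.2f} over {n_plaq} plaquettes")
```

In the actual simulations the links are of course updated with heat-bath and over-relaxation sweeps rather than drawn at random; the snippet only illustrates the observable being summed.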
In order to keep the value of the lattice spacing a approximately fixed for different values of N, we keep the 't Hooft coupling λ(a) = g²(a)N approximately fixed, so that β ∝ N². From a technical point of view, the simulation algorithm employed for such investigations combines standard heat-bath and over-relaxation steps in the ratio 1:4; these are implemented by updating SU(2) subgroups using the Cabibbo-Marinari algorithm [17].

Mass Extraction and Quantum Numbers

Flux-tubes and glueballs are colour-singlet states. Masses of colour-singlet states can be calculated using the standard decomposition of the Euclidean correlator of some operator φ(t) with high enough overlap onto the physical states, in terms of the energy eigenstates of the Hamiltonian H of the system:

C(t) = ⟨φ†(t) φ(0)⟩ = Σ_n |c_n|² e^(−E_n t) ,   (5)

where the energy levels are ordered, E_{n+1} ≥ E_n, with E₀ that of the ground state. The only states that contribute to the above summation are those that have non-zero overlaps, i.e. c_n = ⟨vac|φ†|n⟩ ≠ 0. We therefore need to match the quantum numbers of the operator to those of the state we are interested in. In this work we are interested in glueballs and closed flux-tubes; thus we need to encode the right quantum properties within the operator, which will enable us to project onto the aforementioned states. The extraction of the ground state relies on how good the overlap onto this state is and how fast in t we obtain the exponential decay according to Eq. (5). The overlap can be maximized by building operator(s) which "capture" the right properties of the state, in other words by projecting onto the right quantum numbers as well as onto the physical length scales of the relevant state. In order to achieve a decay behaviour that sets in at low values of t, one has to minimize the contributions from excited states. To this purpose we employ a variational calculation, or GEVP (Generalized Eigenvalue Problem) [18,19], applied to a basis of operators built from several lattice paths at different blocking levels [20–22]. This reduces the contamination of the ground state by excited states and maximizes the overlap of the operators onto the physical length scales.

Quantum numbers of the confining string

The energy states of the closed flux-tube in D = 3 + 1 are characterised by the irreducible representations of the two-dimensional lattice rotation symmetry around the principal axis, denoted C₄ [23]. The above group is the subgroup of SO(2) corresponding to rotations by integer multiples of π/2 around the flux-tube propagation axis. This splits the Hilbert space into orthogonal sectors, namely: J mod 4 = 0, J mod 4 = ±1 and J mod 4 = 2, with J the angular momentum about the string axis. Furthermore, the parity P_⊥, which is associated with reflections about an axis perpendicular to the flux-tube axis, can be used to characterise the states. Applying P_⊥ transformations flips the sign of J. Therefore, one can choose a basis in which states are characterised by their value of J (with P_⊥ = ±), or by their value of |J| and P_⊥; we adopt the latter. In the continuum, states with J ≠ 0 are parity degenerate; on the lattice, however, this holds only for odd values of J. In practice, we describe our states by the following five irreducible representations A₁, A₂, E, B₁ and B₂ of the C₄ᵥ group, whose |J mod 4| and P_⊥ assignments are: A₁: |J mod 4| = 0, P_⊥ = +; A₂: |J mod 4| = 0, P_⊥ = −; E: |J mod 4| = 1, P_⊥ = ±; B₁: |J mod 4| = 2, P_⊥ = +; B₂: |J mod 4| = 2, P_⊥ = −. Furthermore, there is the longitudinal momentum p carried by the confining string along its axis (quantized as p = 2πq/l, q ∈ ℤ) and the parity P_∥ with respect to reflections across the string midpoint. Since p and P_∥ do not commute, we can use both to simultaneously characterise a state only when q = 0.
The energy does not depend on the sign of the momentum and we thus focus on q ≥ 0. Thus |J mod 4| = 0 belongs to A₁ and A₂, |J mod 4| = 1 to E and, finally, |J mod 4| = 2 to B₁ as well as B₂.

The Quantum Numbers of glueballs

Glueballs, like the flux-tubes, are colour singlets, and thus an operator projecting onto a glueball state is obtained by taking the ordered product of SU(N) link matrices, now around a contractible loop, and then taking the trace. To retain the exact positivity of the correlators we use loops that contain only spatial links. The real part of the trace projects onto C = + and the imaginary part onto C = −. We sum over all spatial translations of the loop so as to obtain an operator with momentum p = 0. We take all rotations of the loop and construct the linear combinations that transform according to the irreducible representations, R, of the rotational symmetry group of our cubic spatial lattice. We always choose a cubic spatial lattice volume (all three spatial extents equal) that respects these symmetries. For each loop we also construct its parity inverse, so that by taking linear combinations we can construct operators of both parities, P = ±. The correlators of such operators will project onto glueballs with p = 0 and the quantum numbers of the operators concerned. All 12 paths used for the construction of the glueball operators are shown in Figure 4. The irreducible representations of our subgroup of the full rotation group are usually labelled A₁, A₂, E, T₁ and T₂. Note that these representations are different from those of the C₄ᵥ group of the confining string. The A₁ is a singlet and rotationally symmetric, so it will contain the J = 0 state in the continuum limit. The A₂ is also a singlet, while the E is a doublet and the T₁ and T₂ are both triplets. Since, for example, the three states transforming as the triplet of T₂ are degenerate on the lattice, we average their values and treat them as one state in our estimates of glueball masses; we do the same with the T₁ triplets and the E doublets. Once more, the glueball energies are extracted by making use of correlation matrices C_ij(t) = ⟨φᵢ†(t) φⱼ(0)⟩, with i, j = 1 ... N_op, in combination with the GEVP, where N_op is the number of operators. The scalar channel A₁⁺⁺ has a non-zero projection onto the vacuum. In this case it is convenient to use the vacuum-subtracted operator φ − ⟨φ⟩, which removes the contribution of the vacuum to Equation 5, so that the lightest non-trivial state appearing in the aforementioned sum is the leading term in the expansion over states. The above representations of the rotational symmetry reflect our cubic lattice formulation. As we approach the continuum limit these states will approach the continuum glueball states, which belong to representations of the continuum rotational symmetry; in other words, they fall into degenerate multiplets of 2J + 1 states. In determining the continuum limit of the low-lying glueball spectrum, it is clearly more useful to be able to assign the states to a given spin J, rather than to the representations of the cubic subgroup, which have a much coarser 'resolution', since all of J = 1, 2, 3, ..., ∞ are mapped onto just 5 cubic representations. The way the 2J + 1 states for a given J are distributed amongst the representations of the cubic symmetry subgroup is given, for the relevant low values of J, in Table 1. For instance, the seven states corresponding to a J = 3 glueball will be distributed over a singlet A₂, a degenerate triplet T₁ and a degenerate triplet T₂, so seven states in total.
These A₂, T₁ and T₂ states will be split by O(a²) lattice-spacing corrections. So once the lattice spacing is small enough these states will be nearly degenerate, and one can use this near-degeneracy to identify the continuum spin.

The spectrum of the confining string and the world-sheet axion

In this section of the manuscript we present results for the spectrum of the confining string extracted from calculations with five different combinations of gauge group and lattice spacing. Namely, we investigated N = 3 at β = 6.0625 (a ≈ 0.09 fm) and β = 6.338 (a ≈ 0.06 fm), N = 5 at β = 17.630 (a ≈ 0.09 fm) and β = 18.375 (a ≈ 0.06 fm), as well as N = 6 at β = 25.550 (a ≈ 0.09 fm). Critical slowing down [24,25], as one moves towards the continuum (a → 0) and large-N (N → ∞) limits, prohibits the investigation of gauge groups with N ≥ 6 at a < 0.09 fm. Nevertheless, the above set of measurements is enough to determine whether significant lattice artifacts as well as 1/N² corrections affect our statistically more accurate N = 3 calculations. As a matter of fact, our investigation demonstrates that such effects are of minor importance and do not play a significant role in the interpretation of the spectrum. The energy spectrum we extract is compared to the predictions of the GGRT string. Namely, we fit the absolute ground state (|J mod 4|^(P_⊥ P_∥) = 0⁺⁺) for all calculations using Equation 2 as a function of the length l for l√σ > 2.5 and extract the string tension √σ. Once the string tension has been extracted, Equation 2 can be used as a parameter-free prediction for the higher string excitations with N_L + N_R > 0.

The energy spectrum for q = 0 and the world-sheet axion

We begin by presenting our results for the q = 0 longitudinal-momentum sector in Figures 5, 6, 7, 8 and 9. In Figure 5, the lowest energy level corresponds to the absolute ground state |J mod 4|^(P_⊥ P_∥) = 0⁺⁺, which is used to set the scale of the NG string; hence the nearly perfect agreement with the GGRT string. Furthermore, in Figure 5 we plot the first excited state of 0⁺⁺ as well as the ground states of 2⁺⁺, 2⁻⁺ and 0⁻⁻ for SU(3) at β = 6.0625. We compare these data with the GGRT prediction for N_L = N_R = 1. This string level is expected to be four-fold degenerate, with levels with continuum quantum numbers 0⁺⁺, 0⁻⁻, 2⁺⁺ and 2⁻⁺. While the 0⁺⁺, 2⁺⁺ and 2⁻⁺ flux-tube excitations appear to exhibit small deviations for short l√σ and become consistent with GGRT for longer strings, the 0⁻⁻ ground state demonstrates significant deviations from the GGRT string. In Figure 6 we present the ground state and, in addition, the first excited state with quantum numbers 0⁻⁻ for all gauge groups considered in this work. It appears that both states are only mildly affected by lattice artifacts and 1/N² corrections. The 0⁻⁻ ground state appears to have the characteristics of a resonance, i.e. a constant mass term coupled to the absolute ground state. This is more obvious once the absolute 0⁺⁺ ground state is subtracted, whereupon this excitation exhibits a plateau; this is presented in Figure 7.

A state of two axions

In Figure 8 we present the second excited state with quantum numbers 0⁺⁺. Above this energy level we find a plethora of states which reflect the multi-fold degeneracy of the GGRT string for N_L = N_R = 2. Strikingly, this state appears to exhibit the same resonance behaviour as the 0⁻⁻ ground state: it appears as a constant term coupled to the absolute ground state. This is more obvious if we subtract from this energy level the contribution of the absolute ground state, as shown in Figure 9.
Namely, we observe that it is in agreement with a resonance of mass twice that of the axion. This raises the question of whether such a relation is accidental or has some deeper interpretation. A reasonable expectation would be that this state is a bound state of two axions with a very low binding energy; this scenario is in agreement with the quantum numbers of the state.

The q ≠ 0 sector and the appearance of the world-sheet axion

In this section we present our results for the q = 1 and q = 2 momentum sectors. In the left panel of Figure 10 we demonstrate the spectrum for q = 1, SU(3) and β = 6.338. Since the string ground state with q = 1 (N_L = 1, N_R = 0) can only be created by a single phonon, it has J = 1. The flux-tube ground state with quantum numbers 1±, q = 1 appears to be in good agreement with the prediction of the GGRT string. This is in accordance with the results of Ref. [3]. The next string excitation level, corresponding to N_L = 2 and N_R = 1, should be seven-fold degenerate. It should consist of one 0⁺, one 0⁻, three 1±, one 2⁺ and one 2⁻ state. In the left panel of Figure 10 we show the flux-tube ground state with quantum numbers 2⁺, the ground state with 2⁻, the ground state for 0⁺, as well as the first and second excited states with 1±. All five of the above states appear to cluster around the GGRT prediction. Furthermore, we demonstrate the ground state for 0⁻, which exhibits large deviations from the GGRT string. Since this state has the same quantum numbers as the pseudoscalar massive excitation, the first assumption one could make is that it reflects the axion. A naive comparison of this state with a relativistic sum of the absolute ground state plus an axion with momentum 2π/l is provided in the same figure (figure caption: SU(3), β = 6.0625; the horizontal purple band corresponds to the mass of the axion as extracted in [3]), demonstrating approximate agreement with our data for long flux-tubes. This strengthens the scenario of this state being the world-sheet axion. In the right panel of Figure 10 we show results for q = 2, SU(3) and β = 6.338. The string ground state with q = 2 (N_L = 2, N_R = 0) is expected to be four-fold degenerate; namely, it is expected to be occupied by states with quantum numbers 0⁺, 1±, 2⁺ and 2⁻. We thus extract the flux-tube ground states with the above quantum numbers and observe that they all cluster around the GGRT prediction. The next string excitation level is multi-fold degenerate and should also include a 0⁻ state, which encodes the quantum numbers of the axion. We extract the flux-tube ground state with quantum numbers 0⁻ and observe very similar behaviour to the q = 1 case; namely, it diverges greatly from the GGRT prediction.

The spectrum of glueballs in the planar limit

In this section of the manuscript we present results for the spectrum of glueballs in SU(N) gauge theories in the continuum limit a → 0, as well as their extrapolations to the N = ∞ limit. These spectra have been extracted from calculations over a range of gauge groups and lattice spacings; the required double extrapolation introduces systematic errors which should be addressed carefully. Since this refers to technical aspects of the calculation and is thus beyond the scope of this presentation, we refer the reader to the actual publication [24]. For the gauge groups presented above, we calculated glueball masses from the correlators of suitable operators, projected onto zero momentum p = 0 by imposing translation invariance. These operators are chosen to have the quantum numbers presented in Section 4.1.
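Before describing the operator basis in detail, here is a minimal sketch of the variational (GEVP) step referred to above, applied to synthetic data; the two-state model, overlaps and energies below are invented purely for illustration, but solving C(t)v = λC(t₀)v and forming effective energies from the eigenvalues is the actual technique:

```python
import numpy as np
from scipy.linalg import eigh

# Synthetic two-operator, two-state model: C_ij(t) = sum_n c[i,n] c[j,n] exp(-E[n] t).
E = np.array([0.5, 1.1])            # "true" energies in lattice units (invented)
c = np.array([[0.9, 0.4],
              [0.3, 0.8]])          # c[i, n]: overlap of operator i onto state n (invented)

def C(t):
    return (c * np.exp(-E * t)) @ c.T

t0 = 1
for t in range(2, 6):
    # Generalized eigenvalue problem C(t) v = lam C(t0) v;
    # each eigenvalue behaves as lam_n ~ exp(-E_n (t - t0)) at large t.
    lam = eigh(C(t), C(t0), eigvals_only=True)[::-1]   # descending eigenvalues
    E_eff = -np.log(lam) / (t - t0)
    print(f"t = {t}: effective energies = {np.round(E_eff, 4)}")
```

With as many operators as states and no noise, the effective energies are exact at every t; in real data one instead looks for plateaux in t, exactly as done for the spectra shown here.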
Twelve different closed loops on the lattice have been used to build the basis for the Generalized Eigenvalue Problem; all the loops are presented in Figure 4. For each loop, all 24 rotations have been taken, and the linear combinations of the traces that transform irreducibly under the cubic group have been constructed. By taking the real and imaginary parts of the traces separately, we build operators with C = + and C = − respectively. We also calculate the parity inverses of each of these 12 closed loops, and of their rotations, and by adding and subtracting appropriate operators from these two sets we form operators for each configuration of the quantum numbers, with P = ±. Once more, states in the cubic representations A₁ and A₂ are one-dimensional, meaning that for each energy level we have only one such state; states in the E representation are doubly degenerate (two-dimensional), and those in the T₁ and T₂ are triply degenerate (three-dimensional). Lattice simulations are computationally intensive, and for reasons of computational economy we wish to perform calculations on lattice volumes that are small but, at the same time, large enough that finite-volume effects do not interfere with the physics under investigation. The computational cost of simulating and calculating in SU(N) gauge theories increases approximately as ∝ N³, owing to the multiplication of two N × N matrices. Since finite-volume corrections are expected to decrease as powers of 1/N, we reduce the size of our lattices in physical units as we increase N. Due to the technical nature of this topic, we refer the reader to our longer manuscript, Ref. [24]. Special attention has been given to ensuring that finite-volume effects do not affect the spectrum of the glueballs. There are two types of such finite-volume corrections. The first arises when the propagating glueball emits a virtual glueball which propagates around the spatial torus. The shift caused by the virtual glueballs in the mass of the propagating glueball decreases exponentially with mL, where L is the length of the spatial torus. For the glueball calculation we choose L so that mL is large enough, and we can thus expect this correction to be small; details on the choice of L can be found in Ref. [24]. A similar source of finite-volume effects is also present in the confining-string spectrum, where, in order to ensure that such effects are under control, we have chosen the transverse directions of the lattice to be adequately large. The second type of finite-volume effect in the glueball spectrum involves states composed of several flux-tubes winding around a spatial torus in a singlet state. The lightest of these states is composed of one winding flux-tube together with a conjugate winding flux-tube, and we therefore refer to it as a 'ditorelon'. These states have a non-zero overlap onto the loops (Figure 4) we use as our glueball operators, and can therefore appear as states in our extracted glueball spectrum. Neglecting interactions between the flux-tubes, the lightest ditorelon consists of each flux-tube in its ground state with zero momentum and has an energy E_dt that is twice that of the flux-tube absolute ground state E_gs, i.e. E_dt = 2E_gs. In principle we expect interactions to shift this energy, but the shift should be small on the volumes we have chosen. Hence we use 2E_gs as a rough estimate when searching for these states. The ditorelon ground state contributes only to the A₁⁺⁺ and E⁺⁺ representations.
If we allow one or both of the component flux-tubes to be excited and/or to have non-zero, equal and opposite transverse momenta, we can populate other representations and produce towers of states. However, these excited ditorelon states will be considerably heavier on the lattice volumes we use. Ditorelon contributions in the A₁⁺⁺ and E⁺⁺ channels have been investigated in detail in the longer write-up. Namely, operators constructed in such a way as to maximise the overlap onto ditorelon states have been used. This enabled us to identify the ditorelon states which appeared in the calculated glueball spectra and to ensure that the quoted glueball spectrum consists solely of glueball states.

Continuum masses

For each value of N in SU(N) we have extracted the low-lying glueball spectra for a range of values of β(N). All the masses are expressed in lattice units, am, and to transform them to physical units we can take the ratio to the string tension, a√σ, which we have calculated simultaneously. We can then extrapolate this ratio to the continuum limit using the standard Symanzik effective-action analysis, which tells us that for our lattice action the leading correction at tree level is O(a²):

m(a)/√σ(a) = m(0)/√σ(0) + c · a²σ(a) .   (7)

In the above expression we have used the calculated string tension, a²σ(a), as the O(a²) correction. Clearly, we could use any other calculated energy, and this would differ at O(a⁴) in Equation (7). We choose to use a²σ(a) since we can extract it with small errors. In the left panel of Figure 11 we demonstrate our extrapolations of the lightest two A₁⁺⁺, E⁺⁺ and T₂⁺⁺ states for SU(4). These states are of particular importance because, as explained in Section 4.2.2, they correspond to the lightest two J = 0⁺⁺ and 2⁺⁺ states. As can be seen, all the fits appear to be linear, confirming the expression provided in Equation 7. In the middle panel of Figure 11 we show the corresponding plot for C = −, which corresponds to the lightest two J = 0⁻⁺ and 2⁻⁺ states. The lightest states have very plausible continuum extrapolations, although the excited states, which are heavier than those for C = +, begin to show a large scatter, indicating a poor fit. In the right panel of Figure 11 we present the continuum extrapolations of various T₁ states, which correspond to J = 1, and again we observe that the fits appear to be convincing for the lighter states and quite plausible for the heavier ones. (In the figure, the lines are linear extrapolations to the continuum limit; in that limit the T₁⁺⁻ states become the lightest two J^PC = 1⁺⁻ glueballs, while the other two become the 1⁻⁺ and 1⁻⁻ ground-state glueballs. All three plots are for SU(4).) Clearly, we can safely state that most of the states exhibit small lattice artifacts, since the slopes of the continuum extrapolations appear to be small. Finally, in the left column of Figure 13 we provide the extrapolated results for the phenomenologically most interesting case of SU(3), for all the different irreducible representations, organised by the representations A₁, A₂, E, T₁ and T₂ as well as by P = ± and C = ±.

Large-N extrapolations

Undoubtedly, from a phenomenological point of view, the most interesting glueball spectrum is that of SU(3), presented in the left panel of Figure 13; indeed, a whole paper [25] has been devoted to that case. However, from a theoretical point of view, the most interesting glueball spectra are those of the SU(N → ∞) theory, since the theoretical simplifications in that limit make it the most likely case to be accessible to an analytic solution, whether complete or partial.
To extract the N = ∞ spectrum from the data obtained for the sequence of values of N, one can use the fact that in the pure gauge theory, as explained in Section 2, the leading correction is O(1/N²). So we can extrapolate the continuum mass ratios using the formula

m/√σ (N) = m/√σ (∞) + c/N² .   (8)

The results of the extrapolation to N → ∞ are presented in the right column of Figure 13 as well as in Table 2. Most of the fits are for N ≥ 2 or for N ≥ 3, but some fits are over a more restricted range of N, mainly for technical reasons: for instance, a ⁺⁺ ground state has been fitted for N ≥ 4, a ⁻⁺ second excited state for N ≥ 4, a ⁻⁻ ground state for N ≥ 4 and, finally, a ⁺⁻ ground state for N ≤ 8. From a practical point of view, the most important extrapolations are those for the states to which we are able to assign a continuum spin. The extrapolations of these states are presented in the three panels of Figure 12 for states with J = 0, 2 and 1 respectively. Furthermore, the corresponding extrapolated glueball masses are given in Table 3. Judging by the behaviour of the extrapolations, the mass ratios appear to be described to an adequate level by Equation 8, with slopes that are relatively small. This suggests that the glueball spectrum of SU(3) can be approximated to a good extent by the spectrum of SU(∞).

Conclusions

In this work we have extensively improved the extraction of, and thus our knowledge of, the spectrum of the closed confining string. The majority of the states appearing in the spectrum are string-like, in the sense that they can be adequately approximated by a low-energy effective string theory. In addition, a small sector of the excitation spectrum consists of massive resonances which can be interpreted as an axion on the world-sheet of the theory. We reached this conclusion from the resonance character of the 0⁻⁻, q = 0 ground state, which appears to be an axion coupled to the string's absolute ground state; from the 0⁺⁺ second excited state, which can be interpreted as a bound state of two axions with a very low binding energy coupled to the absolute ground state; and from the 0⁻, q = 1, 2 ground states, which also have an axionic character. Furthermore, states with axionic character can also be identified in other irreducible representations, such as |J mod 4|^(P_⊥) = 1±. Turning to the glueballs, we have extracted the low-lying spectrum and extrapolated it to SU(∞). Namely, the J^PC = 0⁺⁺ scalar ground state has a mass ranging from m(0⁺⁺) ≈ 3.41√σ for SU(3) to m(0⁺⁺) ≈ 3.07√σ for SU(∞); the next heavier glueballs are the tensor, with a mass of m(2⁺⁺) ≈ 1.5 m(0⁺⁺), and the pseudoscalar 0⁻⁺, which appears to be nearly degenerate with the tensor. Moving higher in energy, we encounter the 1⁺⁻ with m(1⁺⁻) ≈ 1.85 m(0⁺⁺); this is the only relatively light C = − state. At approximately the same mass comes the first excited 0⁺⁺ state, and then the lightest pseudotensor, with m(2⁻⁺) ≈ 1.95 m(0⁺⁺), follows. All other states are heavier than twice the lightest scalar, with most of the C = − ground states being very much heavier.
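As a final illustration of the two-step fitting procedure in Equations (7) and (8), here is a minimal sketch with invented numbers (they are not the measured data of this work): a linear fit of m/√σ in a²σ for each N, followed by a linear fit of the continuum ratios in 1/N².

```python
import numpy as np

# Invented illustration data: N -> (a^2 sigma values, m/sqrt(sigma) "measurements").
data = {
    3: (np.array([0.048, 0.022]), np.array([3.48, 3.44])),
    5: (np.array([0.050, 0.024]), np.array([3.25, 3.22])),
    6: (np.array([0.049]),        np.array([3.18])),
}

# Step 1: continuum limit at fixed N, Eq. (7): m/sqrt(sigma) = r0 + c * a^2 sigma.
continuum = {}
for N, (a2sigma, ratio) in data.items():
    if len(a2sigma) > 1:
        c, r0 = np.polyfit(a2sigma, ratio, 1)   # slope c, intercept r0 = continuum value
    else:
        r0 = ratio[0]                           # single spacing: no extrapolation possible
    continuum[N] = r0
    print(f"SU({N}): continuum m/sqrt(sigma) ~ {r0:.3f}")

# Step 2: large-N limit, Eq. (8): r0(N) = r0(inf) + c' / N^2.
Ns = np.array(sorted(continuum))
r0s = np.array([continuum[N] for N in Ns])
cprime, r_inf = np.polyfit(1.0 / Ns**2, r0s, 1)
print(f"SU(inf): m/sqrt(sigma) ~ {r_inf:.3f}  (1/N^2 slope {cprime:.2f})")
```

In practice the fits are, of course, correlated weighted fits with proper error estimates; the snippet only shows the functional forms being fitted.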
“Barriers and opportunities for hi-tech innovative small and medium enterprises development in the 4th industrial revolution era”

High-tech innovative SMEs' development plays a crucial role in the economic growth of every country. It creates new workplaces and infrastructure, and motivates people to create new ideas. At the same time, SMEs still face a large number of problems in their business performance. The purposes of the research are: to define the main barriers to high-tech SMEs' development in the 4th industrial revolution (4IR) era; and to work out recommendations for policy-makers aimed at intensifying SMEs' potential. For this purpose, the authors reviewed studies devoted to SMEs' innovative development and revealed that the main barriers to SMEs' development are related to inefficient government support in this sphere. In order to work out recommendations for Ukrainian policy-makers in the area of SMEs' development, the authors conducted a survey of local high-tech SMEs and, on the basis of a SWOT analysis, distinguished the main directions for their further improvement. Finally, a set of recommendations for improving the SME environment in Ukraine, taking into consideration the challenges of the 4th IR, was developed.

INTRODUCTION

Nowadays, people live in the early days of the evolving digital economy. Despite this fact, it has already had huge impacts on the development of business and economies in recent years. Moreover, digital transformations and rapid fundamental changes in business are among the dominant features of the Fourth Industrial Revolution (4IR) of the 21st century. The digital capability model was developed as a new way of industrial automation aimed at transforming technological processes and changing humans' roles in them. One of the most significant modern peculiarities is that robots function as human substitutes in a wide range of industries. The evolving digital business concept is flexible and full of innovations and experiments. The digital concept will create new innovative business opportunities and boost productivity. Nevertheless, some SMEs move beyond existing trends due to their diversified and unpredictable activities, and as a result the whole area is changing. In particular, new forms of doing business, including cloud systems and services, and cooperation between high-tech SMEs and different stakeholders, are developing. There are new types of services and products, including the Internet of things, and new ways of using and commercializing existing products and services (online television, digital advertising, social networking, financial technology, e-commerce, online education, automated monitoring and control systems, gamified processes and services, technologies for an environmentally benign future and energy-saving life, and the optimization of life and business processes). However, only a few business structures are ready to adapt to the ongoing transformations. SMEs are among the latter, being one of the most flexible and innovation-capable segments of business. The new wave of digital technologies brings both challenges and opportunities for SMEs. There is a need to study the nature and peculiarities of the 4IR changes. It will help decision makers in high-tech innovative SMEs to manage their businesses under the conditions of current global economic competition.

LITERATURE REVIEW

Recent studies show the far-reaching integration of information and operational technologies.
Surveys conducted by Schwab (2015, 2017) examine the nature and key technologies of the 4IR. An overview of SMEs' impact on economic growth is provided by the Edinburgh Group's research (2011). Kushnir (2010) provides an overview of data at the micro, small, and medium enterprise levels in different economies. The study by Chien (2012) deals with startups' ability to monetize their patents in the USA. The Kauffman Index of Startup Activity indicates the importance of startup initiatives for economic health (Fairlie & Reedy, 2016). The level of value added generated by EU28 SMEs is studied by Muller (2016).

METHODOLOGY

To determine the obstacles and opportunities of high-tech innovative SMEs' development in the era of the 4IR we used different scientific methods and proceeded in the following four steps. Firstly, we chose statistical reports of authoritative institutions such as private companies, consulting and rating firms (e.g. Deloitte, Edge, Moody's, Price Waterhouse), international organizations (UNIDO, OECD, the World Bank, the European Commission), NGOs and others. Using the induction method, we identified the main rising trends of high-tech SMEs and disclosed the main factors that affect their evolution and development. To prove the importance of SMEs, we considered the number of enterprises, employment and value added by SMEs and large enterprises in the EU-28, using the comparative method. Secondly, we analyzed public policies in countries with well-developed SME sectors and, using the system method, classified the types of government support for SMEs. Thirdly, by means of the method of statistical observation, we conducted a survey of high-tech innovative SMEs to define their main problems in doing business. Fourthly, based on the results of a SWOT analysis of government SME policy in Ukraine, we determined possible types of support that could be implemented in Ukraine; we then formulated recommendations for Ukrainian policy-makers in the field of SMEs' development that could help to overcome barriers and seize opportunities on the way toward the era of the 4IR.

RESULTS

While the First Industrial Revolution was accompanied by industrial growth, the Second was connected with the invention of electricity and the development of mass production (the latter mainly associated with large enterprises, with their capability to attract additional resources and produce more items) (Bloem et al., 2014; Moavenzadeh, 2015). The Third Industrial Revolution, or the Digital Revolution, which began in the second half of the 20th century, is still ongoing and includes the development of the personal computer, the Internet and other ICT. In recent years, there has been increasing interest in high-tech innovative SMEs as participants in the industrial process which are capable of innovation and able to create new jobs. A dramatic increase in digital industry development has been observed over the last century. On the one hand, the Fourth Industrial Revolution is the logical continuation of the Third IR, because the introduction and development of digital technologies are among its main features. On the other hand, this era creates unique opportunities to disrupt the current industry structures and to improve the existing systems of human communication and conflict resolution. The main influences of the 4IR will be multiplied by global technology breakthroughs in areas such as artificial intelligence, robotics, the Internet of things, autonomous vehicles, 3-D printing, nano- and biotechnologies, materials science, energy storage, and quantum computing (Schwab, 2015; Schwab, 2017; Richards, 2016).
These inventions are targeted at transforming the production-consumption process from a step-by-step into an integrated, interdependent process. Instead of the previously functioning linear production-consumption model, a new hyper-connected platform is being created. The latter means a deeply integrated system of planning, R&D, production, marketing, sales and services, which drives all components of productivity improvement in the value chain, producing a shorter product life cycle (KIET, 2017). As already mentioned above, among the important features of high-tech SMEs' activity are flexibility and innovative capability, because they need fewer resources, are less structured and are more capable of improvements and experiments than large corporations. The EU Commission surveys indicate that more than 90% of SMEs perceive that they are lagging behind in the field of digital innovations (Dittrich, 2016). The global production of ICT goods and services amounted to an estimated 6.5% of GDP in 2017. Sales of robots are at their highest level ever, and worldwide shipments of three-dimensional printers more than doubled in 2016 (UNCTAD, 2017). Therefore, SMEs offer a huge potential for technologies in the digital age, but in developing countries the conditions for their creation are inadequate. Based on the Shift Index research conducted by the Deloitte Center for the Edge, we summarize the fundamental factors inherent in the information era that influence the modern progress of high-tech SMEs:

• significant increase in productivity over the last 45 years;
• cheapening of digital technology;
• decrease in the cost of computing power;
• fall in the cost of storing information;
• fall in the cost of data transfer;
• significant rise in the Internet penetration rate (more than 50 times over the last 20 years).

The global audience of Internet users (3.5 billion people) and billions of smartphones, tablets and laptops have already created the necessary infrastructure for breakthrough business development in the information era. The world's advanced information infrastructure, in combination with the factors of SMEs' development listed above, is causing three main global trends in the high-tech field:

• rapid development of artificial intelligence (AI): most activities in the high-tech sector are provided with AI, cyber security and search systems, and the interconnection of users through automated control and recognition;
• spreading of cloud services, which are used by most Internet users; moreover, cloud data storage is becoming an integral part of most businesses;
• development of nanotechnologies, which primarily improve medicine, pharmacology and biotechnology. These industries are technologically saturated; they transform the production and usage of most materials in the modern world.

Nowadays, in the world of intellectual capital, innovations are the major driver of growth for all types of organizations and economies. Since innovation implies a high degree of risk, most innovative ideas are commercialized in the form of startups or on a small scale. According to the Global Entrepreneurship Monitor (GEM) research, a high-tech business creates on average three times more jobs in the long run than an SME operating within already existing traditional industries.
From the mentioned GEM study, we can distinguish five main reasons why high-tech SMEs have become significantly important in the global economy:

• an innovative component, since startups are the type of business most adaptable to implementing innovative breakthrough ideas. They significantly contribute to the rapid technological development of the world. In addition, under global competition, countries invest in the development of innovation centers to attract more startups, which has a positive impact on the development of innovations in the country;
• growth of national economies' competitiveness. Since startups are the most dynamic and promising companies, they contribute to the export potential, economic security and innovativeness of the country in which they operate. The creation and development of new technologies and services can provide a leading position in new market niches;
• development of innovation and research ecosystems inside the country. Like any high-tech business, startups are closely linked to the scientific activities of educational and research institutions. The development of SMEs stimulates research and the development of technoparks, universities, academies, and research and scientific centers;
• a positive contribution to the social development of society. The development of innovative entrepreneurship changes the values of society, educating it on the basis of knowledge, entrepreneurship and creativity. The attraction of foreign investors and entrepreneurs helps to increase the qualifications of local specialists through the exchange of experience and the establishment of new business contacts;
• creating jobs and stimulating economic growth (Figure 1).

We should mention that the crisis of 2008 and the subsequent recession of most economies in the world had a negative impact on startup activity. It caused a lack of capital in all markets. In addition, the overall decline in the economies of the countries significantly reduced the purchasing power of the population, which greatly reduced demand for products and services that were new and untested. According to the Kauffman Foundation's research (Figure 2), startup activity fell sharply even in the USA and began to recover only in 2014. The development of SME business is closely related to the economic, political and demographic circumstances of each country. More than 95% of all enterprises in the world are SMEs (the exact figure depends on the criteria applied in each country). They employ approximately 60% of all private-sector workers. The role of SME businesses in the European Union (EU-27) in 2015 shows their structural importance (Table 1). They cover 99.8% of all enterprises, provide 66.8% of all private-sector jobs and generate 57.4% of the total value added (calculated as the value of their proceeds minus the cost of their intermediate consumption) created by the private sector. But the contribution of SMEs to economic development varies from country to country, starting from about 16% of GDP on average for countries with low GDP (where the SME sector is usually large, but mainly operates in the shadow market, so official data are often lower than the real figures). According to World Bank and International Finance Corporation research, the largest number of SME businesses is concentrated in Eastern Asia, Oceania and high-income OECD countries. It is important to note that high-income OECD countries show a higher average density and a higher share of high-tech SME business compared with the other groups of countries highlighted in the study.
Therefore, these countries invest in the most innovative businesses at the early stages of development. The Edinburgh Group has tried to capture the contribution of SME businesses to the GDP of countries based on the World Bank database. They considered the official market and a rough estimate of the shadow market (which is extremely large in low-income countries). In comparison to a large business, the contribution of a small manufacturing business to GDP per enterprise will be smaller. It is more labor-intensive and usually has lower productivity than the large corporate sector, which is why SMEs need more labor resources. Thus, the SME sector compensates for the shortage of significant capital investment through labor, which is beneficial for low-income countries where capital is lacking but the unemployed are numerous. Under such conditions, SMEs can be a long-term development engine for developing countries. Evidence for this comes from the conclusions of research by the World Bank, which covered 47,745 businesses from 99 countries, defining SMEs as businesses with from 5 to 250 employees: it found that the total growth in the number of employed provided by SMEs during 2008–2014 averaged 85%. The modern peculiarities of the SME market and technology are primarily determined by the progress of information technology in the world. To estimate the volumes and growth rates of high-tech SME businesses we can compare the growth index of Silicon Valley innovation businesses with that of other businesses on the basis of data from the international rating agency Moody's. Based on this comparison, we can observe high growth of SME high-tech businesses. An innovative business often creates a new market or niche which is unsaturated at the moment, but such businesses hardly ever enter a market where no significant potential for growth can be observed. The role of factors that affect a potential entrepreneur's decision to start their own business has been explored by the international company Amway within the framework of the 2015 Amway Global Entrepreneurship Report. The results of the study show that the main determinants are as follows:

• potential debt burden – 41% of respondents;
• the threat of an economic crisis – 29%;
• probable inability to find a job in case of failure – 16%;
• probability of legal problems and contradictions with the law – 16%;
• potential personal disappointment in the business initiative – 15%;
• the necessity of making decisions – 13%;
• probability of there being no second chance in case of failure of the first attempt – 8%.

As part of a study conducted by the Institute of Economic Studies, Faculty of Social Sciences, Charles University in Prague (2016), a review of the SME business environment was provided. The research was conducted on the basis of the World Bank "Enterprise survey dataset" and determined the main factors that influence the activity of SMEs in developing countries (Figure 3). Most problems that businesses face are similar for businesses of different sizes, but they differ in how critical they are. For example, the issue of financing is sharpest for a rapidly developing SME, which at the same time offers insufficient guarantees to creditors and investors. As a business grows, its debt/equity ratio improves, meaning that the business has already won a part of the market and has enough assets to meet its obligations. On the other hand, political instability is more often identified as a critical factor by large businesses than by SMEs.
SME activity is often less dependent on the general macroeconomic policy pursued by the political authorities because of its local specifics. This applies to government programs with private-sector participation, public procurement tenders, public infrastructure projects, etc. Political instability violates the healthy functioning of the SME business ecosystem. It brings with it inflation, devaluation of the national currency, general stagnation of the economy, adverse changes in the legislation directly affecting SMEs, abolition of tax privileges, increases in the tax burden, bureaucratization, etc. External factors such as the shadow economy and the tax system are, in terms of their level of influence, secondary to the problems of financing. But they are almost equally important for enterprises of all sizes, although for SME businesses these questions are sharper. SMEs are very sensitive to phenomena such as the shadow economy. Illegal import of products and raw materials allows unfair market participants to illegally increase their business margins by saving on customs duties, to sell unlicensed or uncertified goods, to avoid taxation, to understate the tax base, to conduct informal cash transactions, etc. SMEs that operate in a legal and transparent market have no resources to fight unfair competition through the judicial system, through promotional campaigns, or through significant investments aimed at increasing operational efficiency. Therefore, unfair competition leads to a decline in the sales and profitability of legally operating businesses. It also reduces tax and customs revenues, negatively affects the fiscal policy of the country and reduces its economic efficiency. The last but not least factor identified by the World Bank is corruption, which adversely affects business: in countries with a high probability of officials taking bribes there is a small number of SMEs. Experts from UNIDO and the OECD (2016) emphasize that many SMEs carry out their activities outside the official market. Therefore, one of the main goals of governments in different countries is to create an institutional, organizational and regulatory environment that would encourage SMEs toward legitimate economic activity. Governments cannot form an "entrepreneurial culture" by themselves, but through their activity they can either slow down or stimulate its improvement. The main difficulty in regulating SMEs is the remarkable differentiation of this sector of entrepreneurial activity. Hence, it is important not only to have an impact on individual business groups but also to create a socially responsible environment with a well-developed entrepreneurial culture, where development and improvement are initiated and driven by a bottom-up approach. While forming public policy it is important to realize that SMEs will function effectively if:

• a culture of entrepreneurship will be brought up in society, which will encourage both individual and collective initiatives and innovations.
Policy makers must understand that financial education is fundamental to building such a culture;
• an economic, political and social climate will be ensured which will provide a high rate of startup creation and help existing businesses to survive;
• high-quality, socially responsible and innovative business will prevail in the overall mass of SMEs;
• there will be a friendly economic and social climate that will stimulate the growth and development of existing SME businesses;
• all SME stakeholders will be sympathetic to business and think like entrepreneurs.

The last point is especially important for SMEs, because they rely on a high level of sympathy, support and protection from government, the educational system, regulators, financial institutions, individual professionals, large corporations, etc. A favorable environment for SMEs' activities can be created only if all stakeholders share a common entrepreneurial spirit. In this research, we argue for the particular importance of one component of the SME environment: patenting. Patenting is probably more relevant to high-tech and innovative startups than other environmental components due to its "double nature": on the one hand, a startup creates a fundamentally new product, service or technology, but, on the other hand, it tries to replace or modify existing ones. Therefore, there are two important issues, namely: to protect one's own innovation and to avoid infringing the restrictions of competitors' already existing patents. The problem is that young businesses do not have sufficient knowledge and resources to patent their products and services in the right way. The solution to this problem depends on external factors, in particular: patent law in the country/region, the activity of consulting and legal companies in stimulating small business development, the general business culture and fairness of competition in the market, and the availability of funds necessary for participation in litigation, dispute resolution and obtaining the necessary documents. A business arises within an innovation ecosystem that largely determines its further development, the potential risk levels and the chances of its realization. Moreover, a startup turns into a booming business only under the influence of a highly developed infrastructure and all-round business support, which allows negative influences to be reduced or risks to be avoided altogether. The major factors shaping SMEs' development are: lack of financing, the extent of the shadow economy, corruption, the tax regime, the country's macroeconomic and political stability, the maturity of patent law and the quality of other stakeholders' involvement in its implementation. As mentioned, SMEs play a crucial role in the global economy. This is especially true of small businesses, which have a significant impact on the high-technology sector. Therefore, this area of small business activity has a pivotal role in the context of the exponential development of ICT. Moreover, the format of SMEs and startups is particularly competitive in an era of constant change and transformation.
A small business's success and competitiveness depend on critical features that are the subject of our investigation, namely:

• mobility;
• low capital intensity;
• minimal potential losses from a failed initial launch;
• minimal requirements for well-developed infrastructure or powerful corporate add-ons, which are not always essential for initiating a startup in underdeveloped countries;
• a lack of the procedural, corporate and bureaucratic restrictions typical of large companies;
• a lack of intense internal competition between high-tech SME businesses. Cooperation between startups, young entrepreneurs and consulting corporations makes their ideas competitive against those of large companies. SMEs are ready to give up part of their future revenues for the sake of strategic partnership and the implementation of their ideas. In modern entrepreneurial culture, the mere commercialization of ideas is less important for startups and breakthrough project managers than solving social problems, creating additional value and improving people's lives;
• the fact that potential breakthrough ideas do not depend on place of employment, country of origin, age or gender. Therefore, everyone is able to start a small business. The formation of a new entrepreneurial culture involves everyone who seriously considers the possibility of starting their own business.

SMEs' original ideas are highly diversified worldwide due to cultural differences. In particular, different countries have different problems to solve: the needs to be satisfied depend on the segment of the population, while differences in education and culture cause a variety of values and development trends, etc. In addition, the main features of the modern post-industrial information society are able to eliminate some of the SME weaknesses. For example, SMEs are no longer merely local, due to the yearly increase in Internet penetration, including the worldwide availability of 3G and 4G Internet, the institutional spread of Wi-Fi, and cloud services that have become the main means of exchanging significant volumes of data and developing collaborative environments. Moreover, SMEs have fewer problems with the lack of funds for promotion and advertising: a large number of global social networks have greatly simplified and boosted the distribution of information about an innovative, competitive and unique product. In addition, SMEs have new sources of funding. For instance, over the past decades, sources of funding such as crowdfunding have become more widespread (international online platforms Kickstarter and Indiegogo). The number of venture funds, business angels, business incubators and accelerators that invest in promising high-tech startups has also increased. Thus, the high-tech SME is developing together with the entire sector, and the sector is transforming with the emergence of new breakthrough ideas and products. All SMEs' peculiarities and changes can be considered features of the entire sector, because SME business is an integral part of it. According to OECD data, about 30–60% of all small manufacturing businesses in the OECD countries have introduced new or improved products that can be characterized as innovative. SME business tends to drive increases in R&D in most OECD countries, and its share of R&D is about 17%. This proves the positive impact of SMEs and high-tech startups on a country's scientific and technological development. SMEs' R&D stimulates the emergence of new forms of cooperation among the government, social and private sectors and SMEs.
In particular, separate state agencies are established in the areas of cooperation with high-tech SMEs. Industrial parks, business accelerators, incubators based at educational institutions, and separate departments in large corporations which attract SMEs to their R&D centers are also established. So, the impact of SMEs on the world economy is increasing year by year, changing the entrepreneurial culture, the functioning of markets and the climate in the labor market, and creating completely new markets and technologies. At the same time, trends in the global information and technology markets have a significant impact on SMEs in the high-tech sector. The SME sector's growth highly depends on government support. Governments play different roles in supporting innovations in different countries (Figure 4); their crucial contribution to SMEs' development is to provide a favorable climate (the political, legislative and economic factors that affect the development of innovations). Figure 4 shows that there are many instruments among the economic factors that are used in different countries according to their general policy of innovation development. Considering business grant support, the example of Ireland is especially noteworthy. In Ireland, the government offers grants for scientific research to big IT companies such as Dell and Intel, and interest-free financing of fixed investment for SMEs. The government also creates market demand by signing contracts with SMEs that request them to design innovations. Special funding programs also exist in France, Germany and Switzerland, where funding is directed at supporting innovations in specific sectors of the economy. The French government provides cash grants, loans, tax credits, reduced tax rates, accelerated depreciation on R&D assets and patent-related incentives. Portugal and Spain have a large set of fiscal instruments for all legal entities to support innovation. Exclusive privileges for SMEs are used in Great Britain, including cash grants, tax credits, patent-related incentives and accelerated depreciation on R&D assets. Another type of support is related to tax holidays. For example, the Chinese government supports SMEs' innovations by offering full tax exemption during the first profitable year and a 50% reduction in tax payments over the next 3–5 years. The situation is similar in India, where there is full tax exemption for up to 5 years and a 50% reduction for SMEs from their 6th to 15th years of operation. In Israel, SMEs enjoy a government tax holiday of up to 7 years. In Belgium, SMEs frequently use tax credits, cash grants, income tax withholding incentives, patent-related incentives, tax exemptions, reduced tax rates, tax deductions (including super deductions), loans, VAT reimbursements and reduced SSC rates. Hungary provides cash grants, reduced tax rates, tax credits, patent-related incentives, reduced SSC rates and tax deductions (including super deductions) to strengthen SMEs' capabilities. The government policy of SME development in the Netherlands allows the use of reduced tax rates, tax credits, patent-related incentives, reduced SSC rates, tax deductions (including super deductions), accelerated depreciation on R&D assets and cash grants. Compensation for losses of profits is provided in Singapore. In the era of the 4IR, the competitiveness of the Ukrainian economy in the international market depends on the development of innovative technological business. The share of SMEs that introduce innovative products or processes increased from 7.4% in 2015 to 7.9% in 2016.
In the era of the Fourth Industrial Revolution (4IR), the competitiveness of the Ukrainian economy in the international market depends on the development of innovative technological business. The share of SMEs that introduce innovative products or processes has increased from 7.4% in 2015 to 7.9% in 2016. However, such growth is not enough to enable small business to drive economic growth in Ukraine. This reflects the need to implement an innovative SME development policy and to understand the nature of small business. Therefore, our research includes a specialized survey of Ukrainian technological startups. A list of questions on directions of business activity, funding and taxation, and the most urgent needs or desired changes was prepared within this research. More than 20 representatives of active high-tech SMEs were interviewed. The largest share of respondents (30%) work in the computer technology business (Figure 5).
Ukrainian high-tech SMEs' representatives point out the key obstacles for their businesses, namely: an imperfect regulatory framework (70% of respondents), problems with the promotion of a product or service (50% of respondents) and lack of investment (45% of respondents). At the same time, they identify the activities of foreign individuals and organizations (55% of respondents), public and business organizations (45% of respondents), and IT clusters, business incubators and accelerators (40% of respondents) as the factors contributing most to the development of their businesses. It is important to underline that none of the respondents benefits from the activities of government authorities, and only 10% of respondents benefit from the activities of individual government officials. This indicates weak government involvement in solving the specific problems of high-tech SMEs and an inefficient mechanism of cooperation between government and business. Accordingly, 65% of the respondents confirmed the necessity of state programs for SME support, implemented through grants, investments and cheap loans, to improve the business environment for innovative enterprises. On the other hand, none of the respondents sees the state as a business partner or co-owner. The largest share of respondents (30%) intend to implement their business initiatives entirely on their own, which indicates a high level of distrust among entrepreneurs toward other market participants. Only 25% and 20% of respondents are ready to cooperate with venture or direct investment funds and with large business, respectively. However, business angels and other private investors are the main source of financing, used by 45% of respondents; other businesses were funded by business incubators and accelerators (25%) and by non-bank loans (20%).
At the same time, Ukraine has great advantages compared with other countries, including high-quality human capital; competitive high-tech developments; a space and aviation industry; outstanding math and biotech schools; and a leading position in the global IT industry. Ukraine ranks third in the world, after the USA and India, by the number of certified IT specialists. This shows that Ukraine has the chance to be an important player in the innovation market. The policy of SME development has to be an integral part of national structural reforms in Ukraine, as confirmed by a benchmarking assessment of the implementation parameters of development policies in Eastern Europe. This policy will facilitate the diversification of the sources of economic growth in the country, and SMEs could play an important role among such sources in both the medium term and the long term.
The analysis of the weak and strong points allows us to define the prospects for Ukraine and the friction points to work on (Table 2). Every EU country has its own roadmap of development strategies to maximize its opportunities (Figure 6), because these countries understand the importance of SMEs for improving economic competitiveness, restoring sustainable growth, developing an enabling environment for business, and attracting new investment to the country. The potential for economic growth depends on effective government policy in the area of SMEs. Ukraine has its Strategy for Sustainable Development "Ukraine 2020", which includes top-10 priorities such as: reform of the national security and defense system; renewal of authorities and anti-corruption reform; judicial and law enforcement reform; and decentralization and public administration reform. Therefore, due to the advantages mentioned above, we propose reconsidering the priorities of the Development Strategy of Ukraine. Among them should be space and aviation, ICT in industry, data analysis and information management, smart agriculture, biotechnologies in food production, etc. Ukrainian government policy should be directed at supporting these branches and the corresponding ambitious startup ideas. The government should also encourage informal investors, including business angels and crowdfunding networks, which play a key role in financing innovative SMEs in developed countries. This will spur the development of innovative SMEs and make them more competitive in the global market.
The Institutional Dynamics Perspective of ICT for Health Initiatives in India
Given the considerable investment in ICT for development (ICT4D) initiatives, policymakers, practitioners and academics are calling for a more comprehensive and meaningful assessment of the impact of such initiatives. While the impact assessment of ICT4D can be carried out from multiple perspectives, the institutional lens is opportune in examining the softer aspects of the impact, such as the behavioural, cultural and social dimensions. ICT4D interventions juxtapose two institutional logics, that of the designers and that of the users, which may or may not align with each other. The impact of the initiative depends on how the interplay between the logics unfolds. We exemplify the importance of institutional context in the impact assessment of ICT4D initiatives by examining the interplay of the institutional logics in the healthcare system. We conceptualise the healthcare system in terms of the logic of choice, perpetuated by the ICT for health initiative, and the logic of care, which is embedded in the core of the health system. The interaction between the two logics, in turn, determines how the intervention evolves. We arrive at a framework outlining the tensions arising from the interplay of the logic of choice and the logic of care in the healthcare system when an ICT4D intervention is introduced.
Introduction
Information and communication technologies for development (ICT4D) projects are not implemented in a vacuum. Rather, ICT4D interventions can potentially influence the existing sociocultural-technical systems, and in turn the systems themselves can influence the evolution and adoption of the technology. Scholars have emphasised that human agency and technology have a bidirectional relationship and that the evolution of a technological intervention depends upon the interaction between human agency and technology (Orlikowski 1992). This interaction is determined by institutional forces. "Institutions are the rules of the game in a society or, more formally, are the humanly devised constraints that shape human interaction" (North 1990: p. 3). The institutional perspective is opportune in highlighting the sociocultural-technical aspects of ICT4D interventions. The introduction of an ICT4D initiative juxtaposes two different institutional systems, that of the project designers and implementers and that of the project users. The outcome of the project is then determined by the evolution of this institutional dualism (Heeks and Santos 2007). Previous research has highlighted the importance of sociocultural factors as critical determinants of the realisation of benefits from ICT usage (Chib et al. 2008; Miscione 2007; Walsham et al. 2007). A holistic assessment of the impact of ICT4D initiatives, however, needs deliberation on how the ICT4D initiatives shape the existing institutional systems and how the institutional systems shape technology usage and adaptation. Scholars have called for research on the interaction of technology itself with specific aspects of social, economic, and cultural contexts (Walsham et al. 2007: p. 323).
While most of the assessment frameworks used to measure the impact of ICT4D projects highlight the economic aspects of "impact" (such as in Arul Chib's introductory chapter and Kathleen Diga and Julian May's chapter later in this book), the institutional perspective is opportune in assessing the softer aspects of the impact of ICT4D initiatives, such as the behavioural, cultural and social dimensions (Heeks and Molla 2009). This perspective highlights the complex interaction between human agency, technology, and the institutional environment, both formal and informal, in shaping the impact of ICT4D interventions. In this chapter, we focus on understanding how the implementation of an innovation in the form of an ICT4D initiative creates tensions in the institutional dynamics and how these tensions affect the adaptation of the intervention. To highlight the institutional dualism (Heeks and Santos 2007), we refer to the "institutional logics" perspective. Institutional logics are the sociocultural norms, beliefs, and rules that shape actors' cognition and behaviour; that is, how they make sense of issues and how they act (Friedland and Alford 1991; Thornton 2004; Lounsbury 2007). Institutional logics provide a "stream of discourse that promulgates, however unwittingly, a set of assumptions" (Barley and Kunda 1992: p. 363). But institutional logics are seldom unidimensional and coherent. Rather, institutions, especially those which involve multiple and diverse stakeholders, such as healthcare systems, are characterised by the coexistence of multiple, and sometimes conflicting, logics (Dunn and Jones 2010). We exemplify the interplay of institutional dynamics by situating our discussion in the context of healthcare systems. The system of healthcare delivery can be conceptualised as an institution governed by logics that determine the behaviour of the stakeholders of health systems (Dunn and Jones 2010). Healthcare systems should be seen as socio-technical institutions with multiple stakeholders interacting with each other, such as public and private healthcare organisations, political bodies, the local community, regulatory bodies, financial institutions and so on (Arora 2010). Stakeholder behaviour and the socio-technical system are determined by the institutional context of healthcare. It is argued that ICT interventions can enable innovations in healthcare service delivery to extend the provision of affordable and quality healthcare for all (Sosa-Iudicissa et al. 1995). There has been substantial interest and investment in ICT interventions to enhance the efficiency and effectiveness of healthcare delivery in developing countries. Facilitated by the increasing penetration of ICT, governments have made heavy investments in e-health initiatives (Blaya et al. 2010). E-health refers to the "use of information and communications technologies (ICT) in support of health and health-related fields, including healthcare services, health surveillance, health literature, and health education, knowledge and research" (World Health Organization 2005). The broad area of e-health includes several types of ICT for health initiatives, such as m-health, which involves the use of mobile technologies (e.g. cell phones, SMS, etc.) for strengthening healthcare delivery (Kahn et al. 2010); telemedicine, which refers to the use of telecommunication for connecting patients and doctors across geographies (Zolfo et al.
2011); and Electronic Medical Records (EMR), which refers to the creation and storage of health-related information of individuals in an electronic form that can be used for clinical and analytical purposes (Fraser et al. 2005). ICT interventions in the form of e-health initiatives can potentially influence the institutional system of healthcare delivery, and in turn the system can influence the evolution and adoption of the technology itself (Nicolini 2006). The posited interplay between the ICT for health initiative and the existing institutional forces that shape healthcare delivery is represented in Fig. 2 (interplay between ICT for health interventions and the institutional context). The literature on ICT for development, especially the dominant discourse on ICT for health initiatives, examines the economic dimension of interventions, which highlights the efficiency-related aspects of the intervention (Blaya et al. 2010). There is a need to understand the adoption and evolution of ICT4D interventions from a sociocultural perspective and to explore how the system as an institution undergoes change, if any, as a result of the intervention. The sociocultural and institutional aspects of ICT interventions are more relevant for the effectiveness and sustainability of the intervention (Heeks and Molla 2009). In this chapter, we attempt to address this gap by examining the evolution of ICT for health initiatives from the institutional logics perspective. ICT for health initiatives can be regarded as innovations that provide citizens with an alternative to their usual health-seeking avenues and that can potentially alter the balance between the conflicting logics prevalent in the healthcare institutions where the interventions are attempted. The dominant rational view that ICT can act as a conduit for information transfer and hence knowledge transfer, making it possible to extend access to medical knowledge for marginalised populations and geographies, should be critically examined by investigating its effect on the basic assumptions and values that characterise the system (Arora 2010; Miscione 2007). In other words, innovations in healthcare can be understood against the backdrop of the logics that govern the healthcare system. Hence, we situate our discussion in the broad domain of logics in healthcare service institutions, specifically highlighting the trade-off between the logic of choice and the logic of care (Mol 2008; van Schie and Seedhouse 1997). The rest of the chapter is structured as follows. In the next section, we dwell on the concepts of the logic of choice and the logic of care in healthcare. Next, based on the extant literature, we highlight the emerging themes that arise from the interplay of the two logics in the healthcare domain. In the discussion and conclusion section, we arrive at a theoretical model explicating the important dilemmas and tensions occurring due to institutional dynamics when an ICT initiative is introduced in the healthcare system. The chapter concludes by outlining agendas for future research.
The Logic of Choice and Logic of Care
The logic of choice in healthcare represents the libertarian conception of healthcare systems, emphasising that market-driven, competition-enhancing policy measures can enhance the efficiency and effectiveness of the services provided to the patient (Fotaki 2010; van Schie and Seedhouse 1997).
This assumption, however, does not take into account the complex relationships between the diverse stakeholders and the socio-economic and cultural norms which determine the aspect of "care" in the healthcare context. The logic of care refers to practices such as support, advice, encouragement and consolation, thus including both medical and social dimensions. The logic of care broadens the scope of healthcare by regarding patients as individuals embedded in a social milieu rather than just diseased bodies, and entails collaborative attempts to understand and attune diseased bodies and complex lives (Mol 2008). The logic of care takes into account the practices, "what they do", while the logic of choice refers to the possibilities presented to the stakeholders, "what are the choices available and what they choose to do" (Fotaki 2010; van Schie and Seedhouse 1997). The "they" in the above discussion could represent any stakeholder in the healthcare system; however, most of the studies, and rightly so, conceptualise "they" as the patients. For example, when a telemedicine programme is implemented in rural areas, the patient has a choice to consult a remote specialist via telemedicine or to continue seeking medical services from local practitioners, who usually practise complementary and alternative medicine (CAM). Thus, the technology drives the logic of choice. However, studies have shown that patients continue to consult their local health service providers and come for telemedicine only if there is no relief from their primary recourse (see Miscione 2007). One of the important reasons for this behaviour is that the existing network of the health system provides the environment of care (Miscione 2007). Health-seeking behaviour, and choice, is determined by the prevalent norms, which in turn are determined by the logic of care. Franckel and Lalou (2009) studied health-seeking behaviour for childhood malaria in rural Senegal. In the community, child care-taking was a collective process involving mother, father, friends and relatives, and the treatment decision was a collective one. This collective management favoured home care and resulted in delayed recourse to health facilities. The study highlights that the logic of care is embedded in a relational, sociocultural and economic web. Indeed, it is argued that "care" is an integral and central part of healthcare systems and that "choice" operates in the milieu of a broader "care". It should be noted that policy interventions and innovations in the healthcare domain ultimately aim at improving distal outcome variables such as decreased morbidity and mortality and enhanced quality of life. For example, the Millennium Development Goals (MDG), adopted by the UN General Assembly in 2000, target poverty alleviation and improvement in health by 2015 as their ultimate distal outcomes through international development programmes. The three MDGs directly relating to health aim at more proximal and measurable outcomes: reducing child mortality (MDG 4), improving maternal health (MDG 5), and controlling HIV, malaria, tuberculosis and other diseases (MDG 6). The inherent pluralism of logics in the healthcare domain highlights an important dilemma faced by studies examining the impact of ICT for health interventions: whether to identify a proximal variable that is relatively easy to assess (e.g.
enhanced patient choice or incidence of malaria) or to examine a distal variable that is more difficult to measure, although more desirable (e.g. enhanced quality of life for inhabitants and the quality of care facilities). However, the above distinction is not strictly compartmentalised, as it can be argued that the intermediate or proximal variables can be regarded as ends in themselves, for example, enhancing patient autonomy or patient choice. Further, there may be conflicts between variables that can be categorised in a single domain; for example, increasing the lifespan of elderly patients (albeit with associated morbidities) may not align with the goal of enhancing the quality of life for the elderly. The policymakers' dilemma of emphasising proximal versus distal variables is discussed in the next section. Here, we posit that in the context of ICT for health interventions, the proximal variables, such as patient choice, patient autonomy, patient centredness and adoption of technology, generally represent the logic of choice, while the distal variables, such as patient satisfaction, equity of care and quality of life, derive more from the care perspective. The extant literature highlights the following aspects of the interaction between the logic of choice and the logic of care: (1) the complex relationships between the proximal and the distal variables, (2) the contextual nuances that affect how the interaction between the logics unfolds, (3) the overemphasis on the "expert patient" in the logic of choice, and (4) the issues arising from the coexistence of a formal system of choice and a predominantly informal system emphasising the logic of care.
Policymaker's Dilemma: Proximal or Distal Variables
The assumption that intermediate variables such as patient autonomy relate to distal variables such as improved health outcomes should be understood in relation to contextual and individual-level variables. For example, if the patient is unable to make a choice, or does not wish to make a choice, or if the choice involves gathering and assimilating loads of information that is not readily available, an emphasis on patient autonomy and choice can be detrimental to long-term quality of life. Lee and Lin (2010), in their study on diabetic patients, highlight that patient autonomy is not directly related to favourable outcomes in the form of glycemic control; the relation is contingent upon various factors such as high decisional and high informational preferences. Further, there may be a conflict between choice and autonomy. Aune and Möller (2010), for example, demonstrated that women welcomed the option of getting an early ultrasound for detecting chromosomal anomalies in the fetus but did not want to take the decision on getting the ultrasound themselves; rather, they preferred that their doctor prescribe the investigation. Thus, they welcomed choice but not autonomy. Pilnick and Dingwall (2011) problematise patient centredness, which treats patient autonomy as a universal good. They highlight that asymmetry is engrained in the institution of medicine, and hence in the doctor-patient relationship, and that implementing autonomy may be beneficial only for those patients who prefer to make decisions on their own. Dixon-Woods et al. (2006: p.
2742) similarly highlight how women provided "informed consent" for surgery even though they were ambiguous about the decision, as the decision-making process was "enmeshed in the hospital structure of tacit, socially imposed rules of conduct". "Informed decision-making", yet another concept emphasising patient autonomy and choice, thus reinforced passivity rather than autonomy. Arguably, the complexities of the relationship between proximal and distal variables would be more pronounced in the context of healthcare for poor populations in developing countries, which is characterised by a high level of health illiteracy. Isolated emphasis by policymakers on some intermediate outcomes, such as patient autonomy and choice, may negatively impact the universal values that a healthcare system envisages, for example, equity of care. The individualistic paradigm that forms the fundamental basis of patient choice and autonomy is diametrically opposite to the collectivistic and welfare paradigm that emphasises solidarity and equity of care (Fotaki 2010). Scholars call for de-emphasising the implicit incorporation of independence in autonomy, arguing for a relational understanding of autonomy that recognises its embeddedness in a web of relationships, and emphasise incorporating the logic of care in doctor-patient relationships (Entwistle et al. 2010). To summarise, the logic of choice, while having merit, emphasises the proximal variables and may not resonate with the long-term distal outcomes. Thus, arguably, ICT for health interventions that solely emphasise choice, without taking into consideration the distal variables determined by care, are more likely to face resistance in their adoption.
Contextual Aspects Can Affect the Logics
The logic of care emphasises that contextual nuances should be taken into account in designing and modifying an intervention to suit the context. It is the context that determines the environmental factors involved in the delivery of care. Hardon et al. (2011) problematise the mono-dimensional view emphasising a patient's autonomy without taking contextual factors into consideration. They found that, contrary to the choice logic, "provider-initiated tests" for HIV were more acceptable than voluntary testing in HIV centres in Uganda and Kenya. An in-depth analysis revealed that the sociocultural aspects of the society made voluntary testing, based on the principle of patient autonomy and choice, less attractive. Patients going for voluntary testing were considered to have a loose character, having "slept around", and hence patients were more comfortable when the tests were initiated by the healthcare provider. Further, the design of interventions should pay attention to the existing health-seeking behaviour of the patients: the adoption of any intervention that undermines the existing channels or patterns of health-seeking behaviour is less likely. Chandler et al. (2011), for example, examined the introduction of a diagnostic test for malaria through drugstores in Uganda. The intervention, however, did not result in the expected increase in the use of the test before taking treatment for malaria. The drug shops, which were an important source of healthcare services for the community and an important stakeholder in the established network of care and health-seeking, considered the diagnosis and treatment of malaria as synonymous. Thus, the rational "choice" of having a diagnosis before treatment was not deemed necessary in the existing network of care (i.e. the drug shops).
In both the examples above, the logic of care contrasted with the logic of choice. Contextual factors shape the evolution of an intervention, determining whether it will be adopted fully, partially or not at all, as well as its possible intended and unintended consequences (Orlikowski 1993). The dominant logics of healthcare systems may thus shape the introduction of interventions emphasising the logic of choice or the logic of care. Robertson et al. (2011) explicate how the phenomenon of "shared decision-making" highlighted the role of the general practitioner as an expert rather than as a partner in decision-making: "shared decision-making" was used to minimise resistance to treatment solutions rather than to involve patients in their treatment decisions. Thus, a characteristic of the context (i.e. power distance in doctor-patient interaction in healthcare) shaped the adoption and use of the intervention (i.e. the emphasis on shared decision-making) and, in fact, ossified the existing power distance between the provider and the receiver of the service (Greenfield et al. 2012; Robertson et al. 2011). Kaufman et al. (2011), on the other hand, examine how a technology with indefinite or indeterminate effects found universal acceptance among the stakeholders. Their study demonstrates how the "technology imperative" drove physicians, patients, relatives and other stakeholders, such as manufacturers of the instruments and the insurance companies, to adopt novel technologies (e.g. the implantable cardioverter defibrillator for elderly patients) that have ambiguous results in terms of choice as well as care (e.g. postponing death but prolonging morbidity). The existing health-seeking behaviour and the sociocultural and economic milieu in which that behaviour is embedded form an integral part of the logic of care. Any intervention that supports the logic of choice should be designed in a manner that is supported by the contextual factors that determine the logic of care. Stoopendaal and Bal (2012) explicate how a "sociomaterial" setup was used in organisations providing care to the elderly to enhance the quality of care by facilitating the inhabitants' choice of food. The attempts at quality improvement recognised the situatedness of the phenomenon, thus providing alignment between the logic of care and the logic of choice. The above discussion highlights the importance of contextual dimensions, such as the sociocultural, economic and political aspects, which determine the logic of care. While ICT for health initiatives such as telemedicine or m-health can be assumed to promote the logic of choice by making modern medical knowledge accessible to remote and rural populations, how the technology is adopted and how it evolves will be determined by the existing logic of care in which the intervention is embedded.
Assumption of an "Expert Patient"
Changing lifestyles and demographics have resulted in epidemics of chronic lifestyle-related illnesses, such as diabetes and hypertension, across the developed and the developing world. In the case of such illnesses, where lifestyle modification forms an important aspect of treatment, it is often assumed that effective management involves converting a patient into an "expert patient" (Greenhalgh 2009).
Driven by the logic of choice, the concept of the expert patient is based on the assumption that "teaching and training" the patient in self-management will equip the patient with adequate knowledge and motivation to adhere to the prescribed treatment protocols (Mol 2008). The logic of choice emphasises that the patient is a rational individual who, once acquainted with the benefits of self-management, will take actions that maximise his or her wellbeing as an individual, that is, adherence to the treatment protocols (Gomersall et al. 2012). However, studies emphasise that equipping patients for self-management may deprive them of the care environment and put the "blame" for any mismanagement onto the patients themselves (Mol 2008). Indeed, some patients consider self-management of diabetes at home to be demanding work (Hinder and Greenhalgh 2012). The success of self-management depends not only upon individual factors such as knowledge and motivation but also upon family support and socio-economic contexts (Hinder and Greenhalgh 2012), which form part of the care environment. Henwood et al. (2011) describe how a citizen patient, who is "nudged to adapt" the choice of healthy living habits in everyday life, negotiates between this logic of choice and the alternative logic of care in adopting health-promoting practices in daily life; the sense-making that occurs in this process of negotiation is determined by the logic of care. Patient expertise has three aspects: managing illness, managing everyday tasks with illness, and enhancing a valuable sense of self. While the first aspect relates to the logic of choice, the third relates to the logic of care: feeling secure and connected, and developing a sense of meaning and coherence (Aujoulat et al. 2012). Thus, the proper care of patients with chronic illnesses requires looking beyond the patient as an individual and beyond an isolated emphasis on making the patient an "expert" in managing his or her illness (Greaney et al. 2012). The logic of care emphasises building more holistic models of care, with the patient embedded in the family, society and political contexts (Gomersall et al. 2012; Greenhalgh 2009). Potentially, ICT for health interventions such as telemedicine and m-health can have a differential impact on patient empowerment and autonomy. An "expert" patient, who is thoroughly conversant with the use of the technology and has knowledge about his or her illness and its management, may feel empowered by the use of ICT for health, as it "frees up" the abilities of the "expert" patient. However, the naïve patient, who has limited knowledge of his or her illness and its management, or who is unable to utilise that knowledge effectively, may feel abandoned as the logic of care gets de-emphasised. A large majority of the poor population, which is the focus of most ICT for health interventions, has limited health literacy (Bhattacharyya et al. 2010) and thus cannot fully utilise the capabilities emphasised by telemedicine. Further, ICT for health introduces a new dimension to the "expertise": technological expertise, which refers to being conversant and comfortable with technology, thus complicating the issues arising from the assumption of an "expert" patient.
Formal Versus Informal Systems
An important aspect related to the logic of choice and the logic of care is the interrelationship between the formal and informal systems that coexist within the healthcare context.
ICT for health initiatives, largely driven by governments or funding agencies, emphasise changes within the formal healthcare delivery system to enhance the efficiency or effectiveness of the delivery process. For example, telemedicine initiatives in developing countries, which seek to make "expert" specialist knowledge available to remote rural populations through the use of ICT, are usually implemented in the existing public health infrastructure in remote areas. However, the informal systems in healthcare, such as the sociocultural aspects, play a crucial role in determining the delivery and perception of "care". ICT for health initiatives, which provide an efficient alternative "choice" of healthcare delivery to patients, generally highlight the formal aspects of the health system. In other words, ICT for health initiatives driven by the logic of choice largely emphasise the formal healthcare system. The logic of care, on the other hand, concerns the informal system of healthcare delivery. Empirical studies have shown that patients frequently resort to informal systems that are driven predominantly by the logic of care. For example, Stenner et al. (2011) find that patients preferred a nurse practitioner over a specialist when accessing primary care for chronic illnesses: they valued the "non-hurried" approach adopted by the nurses, the nurses' involvement in providing care and showing concern for the patients, their greater approachability, the length of the interactions, and their better interpersonal skills, all of which raised patients' satisfaction with their interactions with nurse practitioners. These aspects highlight the role of embeddedness and care in increasing patient satisfaction. Similarly, studies have highlighted the preference for informal over formal channels in the case of doctors. Birk and Henriksen (2012) explicate that general practitioners, when referring a patient to a particular hospital, rely on informal channels for gathering information about the quality of care and services offered by hospitals, for example, feedback from previously referred patients and recommendations from friends, rather than the official data and figures available in hospital databases. Chib et al. (2013a) highlight that rural doctors in China utilised both informal and formal networks to address their need for medical information, with informal guanxi networks compensating for the limitations of the formal healthcare information system. Yet another important dimension highlighting the informal-versus-formal dilemma is the coexistence of traditional and biomedical systems of medicine within a healthcare system. Studies have shown that traditional systems of medicine form an integral part of the health-seeking behaviour of patients, especially in developing countries, and that these systems are central aspects of the logic of care (Miscione 2007; Sujatha 2007). Indeed, many educated patients and intelligent therapists resort to alternative medicine in spite of the limited scientific and statistical support for the effectiveness of such therapies (Beyerstein 2001). Nissen and Manderson (2013) map the changing attitudes of healthcare systems in various countries across the world towards CAM (complementary and alternative medicine). CAM is heterogeneous, with several societies considering specific CAM as legitimate, for example, Ayurveda in India and chiropractic in Australia (Nissen and Manderson 2013).
The coexistence of these systems affects healthcare delivery processes. Sachs and Tomson (1992), for example, in their study on drug utilisation among doctors and patients in Sri Lanka, illustrate how sociocultural norms about Ayurveda influenced the doctor-patient interaction and drug usage. Policymakers and healthcare systems in several countries have started recognising the contribution of these systems of medicine in emphasising the logic of care, though proponents of the biomedical system raise issues about the "lack of evidence base" in some of these systems. Telemedicine can complicate the already complex relationship between the traditional and biomedical systems of medicine. CAM forms the usual recourse adopted by patients, especially in the case of primary care in developing country contexts (Sujatha 2007; Shaikh et al. 2006). Telemedicine interventions, which are largely restricted to the field of modern medicine, emphasise a biomedical conceptualisation of health and perpetuate the logic of choice by providing people with an alternative to their usual course of health-seeking in primary care, thus accentuating the conflict between the logic of care and the logic of choice (cf. Merrell and Doarn 2012). Further, the technology itself can be perceived differently by doctors and patients. For example, while doctors are more likely to perceive telemedicine from a formal perspective (e.g. technology as a conduit of knowledge), patients may perceive it as an informal channel for communication (such as appearing on a television screen). The doctor can also invoke formality or informality in a telemedicine encounter through verbal and non-verbal cues. The differential perspectives potentially invoked by doctors and patients can complicate the interplay between the choice and care logics.
Presence of Multiple Personnel in the Doctor-Patient Interaction
In this discussion we focus on a specific type of ICT for health intervention, namely telemedicine. In a doctor-patient interaction, the patient shares personal information related to health, illness and disease, and about his or her personal life, with the doctor to enable the doctor to reach a particular diagnosis. Maintaining the confidentiality of such personal information and concerns about privacy have been voiced and debated extensively in the healthcare literature (Chalmers and Muir 2003). The discussion on privacy and confidentiality in medical practice involves complex philosophical and conceptual issues (Rothstein 2010) and is beyond the scope of this review. Here we highlight an important aspect of telemedicine which can potentially affect the patient's perception of confidentiality and privacy, namely, the presence of additional personnel in the telemedicine interaction and the concern about sharing information with a person via electronic media rather than face-to-face (Miller 2001; Nicolini 2006). The traditional doctor-patient interaction in a personal visit usually occurs on a one-to-one basis between the doctor and the patient. In tele-visits (telemedicine interactions), however, there are multiple personnel, such as technicians, coordinators and IT assistants, who are listening to the interaction though not directly involved in the medical aspects of the consultation. The involvement of multiple personnel jeopardises the perception of confidentiality and privacy in such consultations (Stanberry 2001).
Further, as Labov (1972) highlighted, the phenomenon of the observer's paradox, that is, the deviation of behaviour from usual norms as a result of the perception of being observed, can alter the dynamics of the doctor-patient interaction. Previous researchers have highlighted other issues arising from one-to-many medical consultations, such as the loss of patient centredness in the consultation process (Bristowe and Patrick 2012) and a perception of disempowerment and loss of self-autonomy among patients (Rees et al. 2007; Maseide 2006). Pilnick et al. (2009) map the studies on conversational analysis of doctor-patient interactions, highlighting the need to look beyond dyadic interactions between doctor and patient to include other health professionals and multiparty interactions as well (see also Rothstein 2010). Thus, it can be conceptualised that the traditional personal visit supports the patient's perception of privacy and confidentiality, thereby emphasising the logic of care. Telemedicine consultations, on the other hand, are characterised by the patient's apprehension about sharing his or her personal information in a "remote" consultation over an electronic medium and in the presence of multiple personnel, thus jeopardising the logic of care. Scholars have called for further research exploring the dynamics of privacy issues in telemedicine consultations (Fleming et al. 2009). Confidentiality issues have also been highlighted in other ICT for health interventions, such as m-health. Chib et al. (2013b) examine the Ugandan HIV/AIDS SMS campaign and highlight the complex gender issues involved in the implementation of the programme. They found that because the primary user of the mobile phone was male, the freedom and confidence of female patients to share private information over SMS was jeopardised.
Discussion and Conclusion
Most of the 2.6 billion people living under USD 2 a day, largely in low- and middle-income countries, have limited access to health services due to limited economic resources, residence in remote or rural areas, and lack of health literacy (Bhattacharyya et al. 2010). This results in significant lacunae in healthcare delivery for a population that, in fact, requires affordable, accessible and quality healthcare services. In India, for example, 75% of healthcare facilities (i.e. infrastructure and manpower) are concentrated in urban areas, which account for only 27% of the nation's population. The lack of manpower is mainly at the specialist level, with about half of the posts for surgeons, gynaecologists, paediatricians and physicians lying vacant in rural areas (Bhandari and Dutta 2007). ICT interventions, such as m-health and telemedicine, acting as conduits for information, offer the promise of bridging the knowledge gap between the "haves" and the "have-nots" in urban and rural areas. However, an isolated emphasis on the economic aspects of ICT interventions assumes a linear relationship between knowledge-information transmission and development, which usually underlies such endeavours. ICT interventions considerably alter the institutional dynamics of the existing healthcare system, which is embedded in the sociocultural-political environment. Impact assessment of such interventions would be incomplete without considering the softer dimensions of the "impact", such as changes in the behavioural and cultural aspects of the community and the effect on the existing institutional systems.
Indeed, the longevity of the change process depends upon softer dimensions such as depth (deep and consequential change in the processes), sustainability (the programme should result in policy implications so that the change is sustained over time), spread (the diffusion of the underlying beliefs, norms and values that form the bases of the programme) and a shift in reform ownership (the shift of knowledge, ownership and decision-making from the external sources who initiated the project to the internal people who are part of the process) (Coburn 2003: p. 4). Scholars have called for incorporating the cultural and institutional dimensions of the context into frameworks for assessing the impact of ICT4D interventions (Heeks and Molla 2009). We began with the broad theoretical framework (Fig. 2), emphasising that ICT for development initiatives can potentially affect, and are affected by, the institutional context. We exemplify the importance of institutional context in the impact assessment of ICT4D initiatives by examining the interplay of the institutional logic of choice and logic of care in the healthcare system. When an innovative intervention in the form of an ICT for health initiative, such as telemedicine or m-health, is introduced into the system, it juxtaposes the two institutional logics. The evolution of the interaction between the two logics, in turn, determines how the intervention unfolds. Tensions between the key conceptual aspects arise when the logic of choice and the logic of care are juxtaposed as a result of an ICT4D intervention. Figure 3 (the tension between the logic of choice and the logic of care determining the adoption and evolution of ICT for health initiatives) details the tensions arising from the interplay of the logic of choice and the logic of care in the healthcare system when an ICT4D intervention is introduced. We identified four different aspects of the tension between the logic of choice and the logic of care, namely, between (1) proximal and distal variables, (2) an "expert patient" and a "naïve patient", (3) formal and informal systems, and (4) the presence of multiple personnel in the doctor-patient interaction versus a one-to-one interaction. These "tensions" form the two ends of a continuum, and it is the interplay between these factors that determines the adoption and evolution of an ICT for health initiative in the community. The above analysis reiterates the complex sociocultural context of the healthcare system that determines the care environment. An ICT4D intervention, such as m-health or telemedicine, enables the patient to access the healthcare system through ICT or through conventional face-to-face encounters. The intervention thus enhances the patient's "choice" in seeking access to healthcare. However, innovative interventions driven by the logic of choice and patient centredness are embedded in the context of the logic of care, and hence any intervention results in an interaction between the two logics. The adoption of the intervention will be determined by the interplay between the two logics. As "care" forms the core of a healthcare system, we posit that the intervention will be adopted, often in a modified form, so as to facilitate an overlap between the two logics, that of care and that of choice. In other words, the adoption of the "choice" and the entailing improvements in the system are determined by the alignment of the programme with the broader environment of "care".
The above analysis has implications for scholars engaged in assessing the "impact" of ICT4D initiatives as well as for the designers and implementers of such initiatives. The programme designers of one of the m-health initiatives in India (http://e-mamta.gujarat.gov.in/), for example, recognised that in rural areas community health workers form an important part of primary care-seeking. The programme aimed at the early detection and treatment of high-risk pregnancies in rural parts of the state of Gujarat, India. The programme design involved collecting and reporting health information from patients through the use of SMS in the local vernacular language (Gujarati). The health workers collected simple information, such as vital signs and vaccination status, from the patient and sent the information to the State Rural Health Mission, which then set alerts for mothers and infants for regular medications and vaccinations. This example highlights that successful implementation of an ICT for health initiative emphasises the overlap between the logic of choice (m-health) and the logic of care (the involvement of community health workers). The framework highlighting the institutional dualism between the logic of choice and the logic of care in healthcare reiterates the importance of developing an in-depth understanding of the context and the existing institutional systems. An analysis based on the framework would not only enable a context-sensitive design of the innovation but also outline a roadmap for policymakers to assess an ICT for health intervention. While the above framework pertains to the context of the healthcare system, we posit that the core concepts of the framework, that is, the tensions arising from diverse institutional systems being juxtaposed, are generalisable to the broader domain of ICT4D interventions. Scholars have highlighted the issue of "institutional dualism" that ensues when an ICT4D intervention is introduced into an existing system (Heeks and Santos 2007).
Directions for Future Research
The above analysis reveals several potential areas of future research for examining how the logic of care and the logic of choice interplay with each other and the role of ICT in the evolving dynamics. The context of ICT interventions in healthcare, such as telemedicine, Web 2.0, HIS systems, electronic medical records, m-health and so on, provides fertile ground for investigating the changing logics of the institution of healthcare. For example, studies have shown that the Internet can not only enhance clinical and material care, enabling diseases to be managed more effectively, but can also act as a space where people care for themselves and others (Atkinson and Ayers 2010). As the use of social media becomes ubiquitous and the Internet alters the semantics of "friendship" and "relationships", future research is required to determine the role of the Internet in providing care. Further, as patients and doctors increasingly use Web 2.0 and mobile technologies to gather and share information (see, e.g. Chib 2010), several issues need to be investigated, such as the patterns of knowledge sharing and the evolution of relationships among healthcare professionals and between healthcare professionals and patients. Eriksson and Salzmann-Erikson (2013), for example, highlight how "cyber nurses" project their expertise on the Web in medical discussion forums. Researchers also need to investigate gender issues and other sociocultural aspects of the use of ICT in healthcare.
Another interesting area of research would be to examine how doctors and patients make sense of a virtual doctor-patient interaction. For example, tacit cues, such as non-verbal communication and body language, play an important role in determining the effectiveness of the doctor-patient interaction (Henry et al. 2011). To enhance the effectiveness of virtual tele-consultation, it is essential to examine how doctors and patients perceive non-verbal and tacit communication in a virtual interaction and how this perception affects the quality of the interaction. Further, while we have examined ICT for health initiatives from the institutional logics perspective, further research from diverse perspectives, such as the behavioural sciences and communication studies, will provide a more holistic understanding of the phenomenon.
Suicide by hanging: Results from a national survey in Switzerland and its implications for suicide prevention
Background: Hanging is a frequent suicide method, but developing measures to prevent suicide by this method is particularly challenging. The aim of this study is to gain new knowledge that would enable the design of effective measures to help prevent suicide by hanging.
Methods: A total of 6,497 suicides registered across the eight Swiss Forensic Institutes (IRM) were analysed. Of these, 1,282 (19.7%) persons hanged themselves. T-tests and chi-square tests were used to analyse group differences regarding sociodemographic variables and triggers.
Findings: Men and women who hanged themselves showed no significant differences in sociodemographic variables. However, women were significantly more likely to have a history of psychiatric illness, whereas men were more likely to have somatic diagnoses. In controlled environments, people used shelves, plumbing and windows more often than beams, pipes, bars and hooks to hang themselves. Compared with other suicide methods, hanging was more likely to have been triggered by partner and financial problems.
Conclusions: Suicide by hanging can best be prevented in institutions (e.g. psychiatric hospitals, somatic hospitals, prisons). These institutions should be structurally evaluated and modified, with a primary focus on sanitary areas, windows and shelves. Otherwise, it is important to use general suicide prevention measures, such as awareness raising and staff training in medical settings, low-threshold treatment options and regular suicide risk assessment for people at risk.
Introduction
Worldwide, hanging is one of the most used suicide methods [1]. In Switzerland it is the most frequent suicide method [2]. The lethality of hanging is between 69% and 84%, and thus only slightly lower than the lethality of shooting [3]. In addition, there has been a general increase over time in hanging as a method of suicide [4,5]. Various studies show that less-educated [6,7] and unemployed people are more likely to hang themselves [8]. Some studies have shown a connection between hanging and marriage, relationship and financial problems [9,10]. However, the literature shows conflicting results relating to civil status: isolated studies have shown that married people hang themselves more often [9], whereas at least one study showed that people in this group hang themselves less frequently [6]. Concerning age, the literature yields similarly heterogeneous results. Whereas a few publications have shown that younger persons tend to hang themselves more often [9], others have found that the middle-aged are more prone to choosing this suicide method [4,11], and some papers found that predominantly older people hang themselves [6]. Regarding gender, approximately three-quarters of all people who hang themselves are male [4,11]. Several publications report significant differences in hanging between men and women. For example, Kanamüller, Riipinenn, Riala, Paloneva and Hakko [12] showed that women who had hanged themselves had been psychiatrically hospitalised more often than men who had hanged themselves. Accordingly, Kanchan and Menezes [13] argued that statistical analyses in epidemiological studies of hanging should address males and females separately. People who choose hanging as the method of suicide differ in a few ways from those who choose other methods.
For instance, De Leo, Evans and Neulinger [14] showed that men who hang themselves are more likely to suffer from a psychiatric illness and less often from a somatic disease than men who choose suicide by shooting. Overall, however, only a few comparative studies exist. In summary, findings regarding sociodemographic and medical risk factors are rare and heterogeneous. Further studies in diverse countries are called for to better understand these conflicting findings; our study aims to contribute here. A number of researchers have suggested that fewer starting points exist for the prevention of suicide by hanging than for other methods of suicide [8,15]. This is partly because strangulation tools, such as ropes, are easy to obtain [1,4,16,17,18], and because many different and readily available low-hanging suspension points can be used [17,19,20,21,22]. Furthermore, most suicides by hanging take place in a private setting, which reduces the probability of a life-saving intervention [23,24]. For these reasons, the prevention of suicide by hanging poses special challenges [1]. Preventing suicide by hanging, though, is particularly important because of its frequency and lethality [25]. In addition to improvements in overall suicide prevention [12,23] and influencing the social acceptability of the method [6,18,26], suicide prevention mainly focuses on the prevention of hanging in controlled environments, such as psychiatric hospitals, police custody, prisons, dormitories and other places where people are closely supervised by third parties [27]. Approximately 10% of all suicides by hanging take place in these high-risk environments [1,17,24], and hanging is the most frequently used method of suicide in psychiatric hospitals [21]. The first goal of this study is to improve our understanding of the medical and sociodemographic variables within the group of people who hang themselves. Further, we search for distinctive features of people who hang themselves in controlled and uncontrolled environments and study the exact details of how the hangings were carried out. Based on our findings, we make suggestions to improve suicide prevention measures.
Method
Our study is part of the Swiss National Science Foundation project "Suicides in Switzerland: a detailed national survey of the years 2000 to 2010" (NF 32003b_133070 / 1) [28]. This project includes all suicides examined by the Swiss Forensic Institutes (IRM), and the dataset allows us to analyse all suicides by hanging between 2000 and 2010. To do this we applied a standardised data entry form developed by the research group [24]. No other data source was used or linked. Detailed information on the data collection can be found in Thoeni, Reisch, Hemmer and Bartsch [29], Ruff, Bartsch, Glasow and Reisch [30] and Gauthier, Reisch and Bartsch [31]. The dataset contains all recorded suicide cases from the IRM of Zurich, Berne, Basel, Chur, St. Gallen, Lausanne, Geneva and Locarno from the beginning of 2000 to the end of 2010. The data collection, carried out by master's students in medicine, took place between spring 2011 and winter 2013. Sociodemographic variables (e.g. age, sex, place of death, partnership, citizenship), various medical variables (e.g. method of suicide, previous suicide attempts, psychiatric history, reference to psychiatric diagnosis), details of the suicide (farewell letters, exact location) and the applied suicide method (e.g. place of hanging, suspension point) were recorded.
The completed data entry forms were anonymised, scanned and automatically imported into an SPSS file. After this semi-automatic process, the data were manually controlled for scanning or data processing errors. For the present study, we extracted the data regarding suicide by hanging. Total sample The total sample consisted of 6,497 suicides from the years 2000 to 2010. Of these, 4,480 (69%) were male and 2,016 (31%) female. The mean age in the total sample is 50.3 years (SD = 18.6 years). After shooting, hanging was the second most common suicide method. Of the total sample of 6,497 suicides, 1,282 people (19.7%) died from suicide by hanging (see Table 1). Analysis We performed standard statistical analyses (t-tests, chi-square tests and descriptive statistics) using SPSS (version 22) and executed all tests for each gender separately because many studies have shown marked differences between male and female subjects. Medical officers of the IRM investigated each suicide, mainly to exclude the possibility of the influence of third persons in the deaths of suicide victims. Because of this, some of the variables relevant to our research questions were incomplete, and the included variables show marked differences in the amount of missing data. In many files, the psychosocial background was insufficiently documented (e.g. in the farewell letters); as a result, data on relationship problems as well as financial problems could often not be included in the analysis. When comparing hanging to other suicide methods, we carried out two different types of analyses. First, we compared hanging to all other methods of suicide. Secondly, we compared hanging individually to suicide by shooting, jumping from heights, jumping in front of a train and self-intoxication. Sociodemographic data Of the 1,282 suicides by hanging, 1,008 (78.6%) were male and 274 (21.4%) female. The average age is 49.2 years (SD = 18.3 years), and there is no statistically significant age difference between men and women (women 47.6 years, SD = 18.3 years; men 49.7 years, SD = 18.3 years). IRM files contain data regarding education for 584 people, with 12.7% holding a university or college degree. Approximately two-thirds lived in a partnership, one-eighth were unemployed and almost three-quarters were Swiss citizens. We found no statistically significant differences between men and women with respect to these variables (for details see Table 2). Medical data Almost half of the people who died from suicide by hanging had a previous suicide attempt noted in their biography. This rate was larger for women than for men. Numerically, men hanged themselves more often during a psychiatric hospitalisation, but the relative proportion of all suicides by hanging is significantly higher in women. Women also had a higher rate of previous psychiatric in- and outpatient treatment than men. More than one-fifth of those who hanged themselves had a known history of a somatic illness. We found relationship crises as triggers, as well as farewell letters, similarly often for both sexes. Overall, financial issues as a background to suicide were rare, but they occurred statistically significantly more often in males than in females. Psychiatric problems were indicated in 647 data sets (50.5%), and occurred proportionately less frequently in men than in women (for details see Table 2). Details of hanging procedure In 1,052 cases (82.1% of all records) the strangulation tool is specified in the reported IRM data.
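A minimal sketch of the gender-stratified tests described in the Analysis paragraph above (run here with SciPy rather than the SPSS the study actually used, and with hypothetical column names) could look like this:

```python
# Sketch of gender-stratified group comparisons: a t-test for a continuous
# variable and a chi-square test for a categorical one (hypothetical columns).
import pandas as pd
from scipy.stats import ttest_ind, chi2_contingency

def compare_sexes(df: pd.DataFrame):
    men = df[df["sex"] == "male"]
    women = df[df["sex"] == "female"]

    # t-test, e.g. for age at death
    t, p = ttest_ind(men["age"], women["age"], nan_policy="omit")
    print(f"age: t = {t:.2f}, p = {p:.3f}")

    # chi-square test, e.g. for a history of psychiatric treatment
    table = pd.crosstab(df["sex"], df["psychiatric_history"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"psychiatric history: chi2 = {chi2:.2f}, p = {p:.3f}")
```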
Women were significantly more likely to use clothes or accessories (including belts), whereas men numerically more often used ropes. Women more often hanged themselves in their own home or in a psychiatric hospital, whereas men more often chose public places (e.g. stairwells), nature or their workplace, or hanged themselves while in police custody/prison (for details see Table 3). The suspension point of the strangulation tool was at a height of 250.4 cm on average (SD = 131.5 cm). On average, men chose higher suspension points (men: 259.9 cm, SD = 137.6 cm; women: 206.8 cm, SD = 87.8 cm; t = 2.34; p = 0.02). In 60.2% of cases (534 of 887 datasets), the feet touched the ground, i.e. the hanging was incomplete (information on complete versus incomplete hanging was available in 887 of the 1,282 datasets, or 69.2%). We found no significant difference between men and women regarding complete versus incomplete hanging (incomplete hanging: males 59.8%, females 61.7%). The exact suspension point was described in 627 datasets (48.9%). Inside homes, people chose a variety of suspension points, with beams, rods/tubes and shelves being the most common. If a door was used, in almost half of the cases (18/40; 45%) the strangulation tool was attached to the door handle. Outside buildings, trees were used almost exclusively. In protected environments (e.g. hospitals, prison, police custody, residential homes), people often hanged themselves on furniture, windows and sanitary installations (shower rods, etc.). Almost half of the people outside protected environments used some kind of installation, such as pipes, bars, hooks, curtain rails and beams, as suspension points. In protected environments, people used trees less often (for details see Table 4). Comparison of sociodemographic and medical profiles (hanging vs. other methods of suicide) In comparison with men who committed suicide by all other methods recorded in the dataset, men who hanged themselves were less likely to be married (χ² = 18.06; p < .05, Bonferroni corrected) and to live in a partnership (χ² = 8.54; p < .05, Bonferroni corrected), were less likely to be Swiss (χ² = 38.45; p < .05, Bonferroni corrected) and were less likely to have a history of somatic illness (χ² = 25.27; p < .05, Bonferroni corrected). In comparison to all other methods of suicide, women who hanged themselves less often had a somatic anamnesis (χ² = 26.78; p < .05, Bonferroni corrected). Comparison hanging vs. shooting Men who hanged themselves were less often Swiss nationals than men who shot themselves. They were more likely to have attempted suicide in the past, had received inpatient psychiatric treatment more often and less often showed a somatic anamnesis. They also less often wrote a farewell letter. Women who hanged themselves, compared with women who shot themselves, did not show any group differences. Comparison hanging vs. suicide by train Compared with men who died by rail suicide, men who died by hanging were more often married and less often hospitalised for psychiatric treatment. We found indications of financial problems and partnership problems as a trigger more often in men who hanged themselves. They also more often wrote farewell letters. Women who hanged themselves did not differ from women who died by rail suicide. Comparison hanging vs.
jumping from heights Men who hanged themselves were more often married, less often possessed a university degree, less often had a somatic anamnesis and more often left a farewell letter, and we found evidence of relationship problems as well as financial problems as a trigger more often than for men who died by jumping from heights. Women who hanged themselves had somatic problems less often than women who died by jumping from heights. Comparison hanging vs. intoxication Men who hanged themselves were more likely to be married than men who intoxicated themselves, more likely to live in a partnership, less likely to have been in psychiatric inpatient treatment, were in outpatient psychiatric treatment less often and showed a somatic anamnesis less frequently. Women who hanged themselves showed similar differences in their profile to those evident for men. In addition, the proportion of women who were currently (but not lifetime) in psychiatric inpatient treatment at the time of death was higher than for those who intoxicated themselves (for details see Tables 5 and 6). Discussion Our results regarding suicide method are broadly consistent with other studies. As is generally the case in Switzerland, the sample examined in our study shows that suicide by hanging is the second most often used suicide method. As with Kurtulus, Nilufer Yonguc, Boz and Acar [11] and Russo, Verzeletti, Piras and De Ferrari [4], approximately three-quarters of those who hanged themselves were male. Slightly over 10% of individuals in the dataset hanged themselves in controlled environments, which corresponds to the range found by other researchers [1,17,23,24]. In the following, we discuss the individual results in detail. As described in the literature [1,4,16,17], people who hang themselves use readily available tools of strangulation. In the present study, women tended to use clothes, whereas men used ropes. More than half of the people who hanged themselves touched the ground with their feet. This finding is in line with several other studies [1,17,19,20,22] and underlines that low-level suspension points are used as well [1]. Although most persons used a suspension point at or above the height of a door handle, a few cases were found below this height, especially in institutions with high rates of suicide, such as psychiatric hospitals or prisons. Securing these points should be especially considered in suicide prevention in such controlled environments. Hanging requires preparation time, and it seems clear that this co-determines the choice of the place of execution. As also found in other studies [24], most of the individuals in the present study hanged themselves in their own homes. Outside of their own premises, a significant proportion of suicides by hanging took place in nature. Individuals, as described by Gunnell, Bennewith, Hawton, Simkin and Kapur [1], often used trees as points of suspension. According to Russo, Verzeletti, Piras and De Ferrari [4], trouble-free preparation and execution of a suicide is easily possible at such locations, and thus method-specific suicide prevention is difficult. Further analysis of the execution locations shows that in controlled environments, such as prisons or closed psychiatric wards, individuals use other kinds of locations. It is not surprising that in such environments individuals use trees less, as access to them is restricted for suicidal individuals and, accordingly, a preparation in the garden or courtyard of such an institution is usually not possible.
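The Bonferroni-corrected comparisons reported in the preceding results section can be sketched as follows (again with hypothetical column names; the study used SPSS, not Python):

```python
# Sketch of comparing hanging against each other suicide method with a
# Bonferroni-corrected chi-square test for one categorical variable.
import pandas as pd
from scipy.stats import chi2_contingency

OTHER_METHODS = ["shooting", "train", "jumping", "intoxication"]
ALPHA = 0.05 / len(OTHER_METHODS)  # Bonferroni-corrected significance level

def hanging_vs_others(df: pd.DataFrame, variable: str):
    for other in OTHER_METHODS:
        subset = df[df["method"].isin(["hanging", other])]
        table = pd.crosstab(subset["method"], subset[variable])
        chi2, p, _, _ = chi2_contingency(table)
        verdict = "significant" if p < ALPHA else "n.s."
        print(f"hanging vs {other} ({variable}): chi2 = {chi2:.2f}, "
              f"p = {p:.4f} ({verdict})")
```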
Surprisingly, unlike in uncontrolled environments, individuals use typical suspension points, such as pipes and rods, less frequently in institutions [17]. One possible explanation is that these have already been made inaccessible through structural suicide prevention measures in possible retreat areas. [Table 6. Do men who hang themselves differ from men who used other suicide methods? Comparison of sociodemographic and medical variables.] Similar to Glasow [32] and Ruff, Bartsch, Glasow and Reisch [30], we found that individuals used sanitary facilities within controlled environments more often than other areas. Sanitary facilities provide typical retreat areas in controlled environments. Our data show that, within the framework of institutional suicide prevention, these retreat areas need to be secured with special care. Indeed, suicide-preventive alternatives are often possible for shower hoses, shower rods, etc. without greater restriction of function and aesthetics. Within controlled environments, more than one-quarter of cases used furniture (e.g. shelves) as points of suspension. According to our research, this is a result not yet described in the literature. Although it does not directly emerge from this study, we assume that most of the furniture used was in individuals' own rooms and cells and not in the public areas of these institutions. Technically, it is possible to build shelves, for example, that make hanging significantly more difficult, or even impossible; it is also possible to avoid using shelves completely in these areas. Securing furniture, including shelves in closets, should be an inherent part of structural suicide prevention in the above-mentioned institutions. The same goes for windows, which are also commonly used suspension points in protected environments. Windows should therefore also be examined from a critical suicide prevention perspective and modified accordingly, if necessary. We found no relevant differences between men and women regarding sociodemographic variables for the method of suicide by hanging. The medical variables, in contrast, show distinct differences. Women had more previous suicide attempts noted, and they were hospitalised for psychiatric treatment more often at the time of suicide or had been at an earlier point. Our finding matches the results of Kanamüller et al. [12], who also found that women exhibited more hospitalisations in their anamnesis. Women are thus at least partly (e.g. in the context of a suicide attempt or psychiatric hospitalisation) accessible to general suicide prevention. Considering that suicide risk assessment is a standard procedure in any form of out- or inpatient psychiatric treatment, no concrete improvements for suicide prevention can be derived from these results. Women and men who hanged themselves were (numerically) less often Swiss nationals in comparison with persons who died by other suicide methods. In Switzerland, non-Swiss nationals have very limited access to firearms, whereas suicide by firearms is one of the main methods used by Swiss men [29]. In most Western countries, suicide by hanging is more common than in Switzerland. Therefore, on average, suicide by hanging can be considered a rather familiar method in the migrant population group.
This difference regarding nationality is therefore primarily explained by the reduced physical availability of firearms, and probably also by the greater psychological availability of the hanging method, among non-Swiss citizens. Corresponding with De Leo, Evans and Neulinger [14], we found differences between individuals who died by suicide by hanging and those who used a different method. In both sexes, current relationship problems were (numerically) more prevalent in suicide by hanging than in other methods (except firearms). In addition, men who hanged themselves were (numerically) more often married or lived in a partnership. This result suggests that hanging, which is carried out at home at a greater rate than most other methods, could often have a relationship component, as described by Bastia and Kar [10]. Easier access to family or couple counselling, as well as increasing the degree of familiarity with such services, could contribute to suicide prevention. Limitations Our paper has several limitations. The most important limitation of the study relates to the included data. The IRMs do not systematically examine all suicides in Switzerland. The main reason for this is the fact that not all cantons in Switzerland have an IRM, and other medical officers examine suicides in some of the Swiss cantons. More importantly, a selection bias could have occurred: the IRMs investigate all cases in which third-person influence must be excluded. This is rarely the case in hanging but may be of some significance for other suicide methods (e.g. intoxication). The IRM files are based on data from the police and doctors and on the findings of the IRM investigation. The quality and quantity of the data are significantly lower for variables that are of less importance to the IRM's examination; for these variables, missing data are significantly more frequent and the quality of the results significantly lower. Implications for suicide prevention Concerning the method of suicide by hanging, prevention is a challenge. However, it can be improved selectively. If the suicide takes place outside of controlled environments, it is often carried out in the person's own home or in seclusion, with the aids for hanging being ubiquitously available. Suicide by hanging can thus be executed relatively quickly, impulsively and out of sight of third parties, even within the person's own home. To reduce the number of suicides by hanging, general suicide prevention, for instance regular suicide risk assessments in ambulatory therapy or a high level of awareness of 24-hour crisis intervention services, must be continued consistently and even expanded. General programmes that increase awareness of suicide [33], for example staff training programmes focusing on the early recognition of, and dealing with, suicidal patients, can help to reduce suicides. More suicide prevention options exist within controlled environments. However, we also find a large number of possible strangulation tools and suspension points in these places, which makes structural and method-restrictive interventions difficult. Our results show that sanitary facilities must be secured with greater care. For furniture, and shelves in particular, products must be used that inhibit, or at least impede, hanging. In addition, windows should be secured thoroughly. Since doorknobs are frequently used as suspension points, investments should be made in the development of suicide-proof doorknobs.
Due to the complexity and peculiarity of controlled environments, we recommend suicide prevention assessments by external experts in order to effectively design in-house structural suicide prevention. Supporting information S1 Appendix. SPSS data set. The file (S1_File) contains the data that support the presented analyses. For data safety reasons the file is anonymised and the variable "age" was reduced to 5-year age groups. (SAV)
Spherically Symmetric Accretion onto a Compact Object through a Standing Shock: The Effects of General Relativity in the Schwarzschild Geometry A core-collapse supernova is generated by the passage of a shockwave through the envelope of a massive star, where the shock wave is initially launched from the ``bounce'' of the neutron star formed during the collapse of the stellar core. Instead of successfully exploding the star, however, numerical investigations of core-collapse supernovae find that this shock tends to ``stall'' at small radii ($\lesssim$ 10 neutron star radii), with stellar material accreting onto the central object through the standing shock. Here, we present time-steady, adiabatic solutions for the density, pressure, and velocity of the shocked fluid that accretes onto the compact object through the stalled shock, and we include the effects of general relativity in the Schwarzschild metric. Similar to previous works that were carried out in the Newtonian limit, we find that the gas ``settles'' interior to the stalled shock; in the relativistic regime analyzed here, the velocity asymptotically approaches zero near the Schwarzschild radius. These solutions can represent accretion onto a material surface if the radius of the compact object is outside of its event horizon, such as a neutron star; we also discuss the possibility that these solutions can approximately represent the accretion of gas onto a newly formed black hole following a core-collapse event. Our findings and solutions are particularly relevant in weak and failed supernovae, where the shock is pushed to small radii and relativistic effects are large. INTRODUCTION At the end of the life of a massive star, when it has exhausted its supply of nuclear fuel, the core collapses dynamically under its own self-gravity. The process of inverse $\beta$-decay during the collapse removes electron pressure, further destabilizing the core and producing an abundance of neutrons; this de-leptonized, collapsing core is the proto-neutron star (PNS). The PNS "bounces" due to the nucleon-nucleon interaction potential (that is, the equation of state of the nuclear material stiffens) and launches an outward-propagating shock wave. The shock dissociates heavy nuclei as it propagates outward, loses energy, and eventually stalls under the ram pressure of the infalling envelope (Colgate & White 1966; Bethe et al. 1979; Woosley & Weaver 1986; Bethe & Wilson 1985; Herant et al. 1992, 1994). Some mechanism revives the shock and yields the powerful and luminous explosion that is the supernova. This "supernova problem," being the stall and eventual revival (in successful explosions) of the shock wave launched by the bounce of the PNS, has eluded a conclusive theoretical explanation for decades. Two likely mechanisms for accelerating the stalled shock are the neutrino mechanism (Bethe & Wilson 1985; Bethe 1990; Murphy & Burrows 2008; Ugliano et al. 2012; Nakamura et al. 2015; Chan et al. 2018) and the standing accretion shock instability (Blondin et al. 2003; Blondin & Shaw 2007; Blondin & Mezzacappa 2007; Ohnishi et al. 2008; Burrows et al. 2012).
The neutrino mechanism, first laid out in the 1960s (Colgate & White 1966; Arnett 1966) and revived in the mid-1980s by works such as Bethe & Wilson (1985) and Bruenn (1985), proposes that some fraction of the neutrino flux radiated from the nascent neutron star is absorbed by the abundant free neutrons and protons in the post-shock layer; the energy and momentum deposited by the absorbed neutrinos then revive the stalled shock. The standing accretion shock instability arises from the fact that some of the large-angle (i.e., small spherical harmonic number $\ell$), oscillatory modes of the accretion shock can be dynamically unstable; the instability drives the shock outward, leading to an asymmetric explosion (Blondin et al. 2003). Subsequent work (e.g., Marek & Janka 2009) has shown that hydrodynamic instabilities can aid the neutrino mechanism even if they are not independently capable of driving the explosion. Independent of the mechanism responsible for reviving the shock, the shock has been found to stall in many numerical calculations. Newtonian solutions describing the velocity, density, and pressure of the post-shock gas were found by Lidov (1957), Chevalier (1989) and Houck & Chevalier (1992). In these solutions, the gas "settles" and the velocity of the fluid approaches zero asymptotically close to the origin for sufficiently small values of the adiabatic index (see the left panel of Figure 4 below). Blondin et al. (2003) obtained these solutions in the adiabatic limit (Houck & Chevalier 1992 included the effects of cooling) and analyzed their response to angular perturbations. These solutions and the numerical work of Blondin et al. (2003) were in the Newtonian limit, and the gravitational field around the central compact object (PNS) was described as a Newtonian point mass. However, for a neutron star with mass $M \sim 1.4\,M_{\odot}$ and radius $R \sim 10$ km, the ratio of the gravitational radius to the stellar radius is $GM/(Rc^{2}) \simeq 0.2$, and the free-fall speed at a shock radius of $r_{\rm sh} \simeq 100$ km is $\sqrt{2GM/r_{\rm sh}} \simeq 0.2c$. Relativistic effects therefore introduce order-unity corrections to the behavior of the gas within the shock and will non-trivially modify the Newtonian settling solutions. Michel (1972) expanded the classic work of Bondi (1952) on spherically symmetric accretion by incorporating the effects of general relativity in the Schwarzschild metric (see also Blumenthal & Mathews 1976, who extended Michel's 1972 work to non-adiabatic equations of state). Bondi accretion (and its relativistic generalization) does not account for the existence of a shock. When the freely falling fluid passes through an existing strong shock, a substantial fraction of its kinetic energy is converted into internal energy, an effect that cannot be considered a small perturbation on top of a pure freefall solution. Subsequent work (e.g., Fukue 1987; Chakrabarti 1989; Chakrabarti & Molteni 1993; Gu & Foglizzo 2003) has also explored similar topics. However, the generalization of the Lidov (1957) adiabatic settling solution through an existing standing shock to the Schwarzschild metric, incorporating the appropriate jump conditions (for which there are exact solutions, as we show below), does not appear to have been detailed in the literature. Here, we present and analyze the relativistic generalization of the adiabatic settling solutions for the post-standing-accretion-shock flow that were studied numerically in Blondin et al. (2003). In Section 2.2 we describe the model and write down the fluid equations.
The ambient fluid, assumed to be freely falling and effectively pressureless, is analyzed in Section 2.3, and we give the relativistic jump conditions in Section 2.4. Stationary solutions to the fluid equations that satisfy the relativistic jump conditions and are adiabatic are presented in Section 3, where we also discuss the variation of the solutions with the shock radius (effectively the ambient velocity at the location of the shock), the variation with the adiabatic index, and the behavior of the flow near the horizon. We present the physical interpretation of these solutions in Section 4, discuss the implications of our findings, and identify directions for future work in Section 5. Metric We assume that there is a compact object (without spin) that dominates the gravitational field of the infalling fluid. With these assumptions, the metric describing the spacetime is given by the Schwarzschild metric: $ds^{2} = -\left(1 - 2M/r\right)dt^{2} + \left(1 - 2M/r\right)^{-1}dr^{2} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right)$ (1), where $M$ is the mass of the compact object. We have adopted the Einstein summation convention (as we will throughout the remainder of the paper), so that repeated upper and lower indices imply summation, and we have let $G = c = 1$. Fluid Equations We let the accreting gas, which has passed through the standing shock, be a relativistic perfect fluid with total energy density $e'$, pressure $p'$ and rest-mass density $\rho'$. For simplicity, we assume that the gas is adiabatic, with internal energy $p'/(\Gamma - 1)$ and $\Gamma$ the adiabatic index. With $u^{\mu}$ the four-velocity of the fluid, the energy-momentum tensor is (Anile 1990) $T^{\mu\nu} = \left(e' + p'\right)u^{\mu}u^{\nu} + p'\,g^{\mu\nu}$ (2). Energy-momentum conservation is expressed as $\nabla_{\mu}T^{\mu\nu} = 0$ (3), where $\nabla_{\mu}$ is the covariant derivative. Conservation of mass (or particle number) is $\nabla_{\mu}\left(\rho' u^{\mu}\right) = 0$ (4), and from Equation (1) we have the conservation of the norm of the four-velocity: $g_{\mu\nu}u^{\mu}u^{\nu} = -1$ (5). It is useful to work with the time- and space-like projections of Equation (3) (Anile 1990; Coughlin 2019). Taking the contraction of Equation (3) with $u_{\nu}$ yields the energy equation, $\nabla_{\mu}\left(e' u^{\mu}\right) + p'\,\nabla_{\mu}u^{\mu} = 0$ (6). We now introduce the projection tensor $\Pi^{\mu\nu} = g^{\mu\nu} + u^{\mu}u^{\nu}$, which projects the components of Equation (3) onto the 3-space orthogonal to $u^{\mu}$; contracting Equation (3) with the projection tensor then gives the momentum equations, $\left(e' + p'\right)u^{\mu}\nabla_{\mu}u^{\nu} + \Pi^{\nu\mu}\nabla_{\mu}p' = 0$ (7). One can also re-express the energy equation in the following convenient form using the continuity Equation (4): $u^{\mu}\nabla_{\mu}\sigma' = 0$ (8), where $\sigma' = \ln\left(p'/\rho'^{\,\Gamma}\right)$. This equation demonstrates that $\sigma'$, which we interpret as the entropy of the gas, is a conserved Lagrangian quantity in adiabatic regions of the flow. Since we are assuming that the flow is spherically symmetric and irrotational, there are only two non-zero components of the four-velocity, and they are related by Equation (5); we will refer to the radial component of the four-velocity as $u$. Ambient Fluid Pressure support is lost from the core, and a rarefaction wave travels through the overlying stellar envelope, which causes shells of material at successively larger radii to fall inward. If the density and pressure of the ambient medium fall off as power-laws with distance from the core, one can show that there exists a self-similar solution to the fluid equations that describes the propagation of the rarefaction wave and the fluid interior to the wave (Coughlin et al. 2018); the wave travels at the local sound speed, and the gas pressure of the fluid interior to the wave is much lower than the ram pressure. Therefore, the infalling gas can be treated effectively as pressureless.
[Figure 1. The ratio of the post-shock fluid velocity to the ambient fluid velocity ($u_{\rm s}/u_{\rm a}$) at the location of the shock as a function of the ambient fluid velocity, $u_{\rm a}$, in both the Newtonian and general relativistic settings for $\Gamma = 4/3$. Note that $u_{\rm a}$ is the freefall speed at the shock normalized by the speed of light, and hence the relativistic solution approaches the nonrelativistic one when $u_{\rm a} \ll 1$; in the Newtonian limit, the entire problem is self-similar and $u_{\rm a}$ can be set to one without loss of generality.] We denote the four-velocity of the ambient fluid by $u_{\rm a}^{\mu} = (u_{\rm a}^{t}, u_{\rm a}, 0, 0)$. The radial momentum equation, Equation (7), gives $u_{\rm a}\,\partial u_{\rm a}/\partial r = -M/r^{2}$ (9). Integrating the above equation and assuming that the gas is weakly bound, so that the binding energy is $\simeq 0$, then gives $u_{\rm a} = -\sqrt{2M/r}$ (10). The time-steady solution to the continuity equation (4) is $\rho'_{\rm a} = \bar{\rho}_{\rm a}\left(r/r_{\rm sh}\right)^{-3/2}$ (11), where $r_{\rm sh}$ is the shock radius and $\bar{\rho}_{\rm a}$ is the density of the ambient gas at $r = r_{\rm sh}$. Jump Conditions In the adiabatic limit the energy, mass, and momentum fluxes must be conserved across the shock, which gives the strong-shock jump conditions, assuming the ambient gas pressure is negligible, in the lab frame (which equals the rest frame of the shock by assumption); these are Equations (12)-(14). The subscript "s" indicates that these are the properties of the post-shock fluid at the shock radius. Equations (12), (13) and (14) can be combined into a cubic, Equation (15), to be solved for $u_{\rm s}$ (this equation reduces to the special relativistic jump conditions obtained in Coughlin 2019, Equation 23 therein, with the doubly primed velocities there identified with $u_{\rm s}$ and $u_{\rm a}$). Figure 1 shows the ratio $u_{\rm s}/u_{\rm a}$ resulting from Equation (15) as a function of the ambient four-velocity. We see that the ratio is nearly equal to its Newtonian value ($\simeq 0.14$) for $u_{\rm a} \lesssim 0.2$, but as the ambient velocity becomes more relativistic, the ratio deviates significantly from the Newtonian value. Bernoulli, mass, and entropy conservation equations Combining Equations (4) and (6) yields the conservation of the radial energy flux (Equation 16). We use the continuity Equation (4) to express the density of the post-shock fluid in terms of the four-velocity as $\rho' = \rho_{\rm s}u_{\rm s}r_{\rm sh}^{2}/\left(u r^{2}\right)$ (17), while entropy conservation gives $p' = p_{\rm s}\left(\rho'/\rho_{\rm s}\right)^{\Gamma}$ (18). With these results, we can write Equation (16) as $\left(1 + \frac{\Gamma}{\Gamma - 1}\frac{p'}{\rho'}\right)\sqrt{1 - \frac{2M}{r} + u^{2}} = \mathrm{const}$ (19). Equation (19) is the relativistic generalization of the Bernoulli equation, to which it manifestly reduces in the limit that the velocity is small (see Equation 4 in Blondin et al. 2003). Impact of varying the ambient fluid velocity Here, we discuss the effects of varying the velocity of the ambient fluid, $u_{\rm a}$, while keeping $\Gamma$ fixed at 4/3. Given the fluid velocity $u_{\rm a}$, we can calculate the entropy $K = p'/\left(\rho'\right)^{4/3}$, the radial energy flux and the radial mass flux by using the jump conditions, and we can then solve Equation (19) numerically for the post-shock fluid velocity $u(r)$. We can then calculate the fluid three-velocity as seen by an observer who is stationary with respect to the compact object and who employs locally flat coordinates (this coordinate frame will be represented with 'hats') as $\hat{v} = u/\sqrt{1 - 2M/r + u^{2}}$ (20), where in the last equality we used Equation (5). For a neutron star of mass $3\,M_{\odot}$ and radius 10 km (we use 10 km for the neutron star radius for concreteness and simplicity, though Lattimer & Prakash 2016 and Haensel et al. 1999 find that causality arguments require that the neutron star radius satisfy $R \gtrsim 2.823\,GM/c^{2}$, which is closer to 12.5 km for $M = 3\,M_{\odot}$; if we used 12.5 km, the maximum speed able to be achieved by the infalling fluid, obtained when the shock radius is comparable to the neutron star radius, would be $\sim 84\%$ of $c$ instead of $\sim 90\%$ of $c$, as shown in Figure 2), the solutions for the post-shock
fluid four-velocity, three-velocity, density and pressure are presented respectively in the top-left, top-right, bottom-left and bottom-right panels of Figure 2; the ambient four-velocity, which implicitly establishes the physical location of the shock since the mass of the neutron star is set, is shown for each curve in the legend. In Figure 2, the radial coordinate is in km, while in Figure 3 it is normalized by the shock radius. In Figure 2, the location of the neutron star surface is shown with a black dashed line, and the location of the shock, appropriate to the specific ambient velocity, is shown by the respective colored dashed line. In Figure 3 we show the three-velocity as a function of $r/r_{\rm sh}$, in which case the shock is always at $r/r_{\rm sh} = 1$, represented by a black dashed line, while the neutron star surface now takes different $r/r_{\rm sh}$ values and is shown with colored dashed lines. Figure 3 demonstrates that the relativistic solutions are not self-similar: each curve displays qualitatively different behavior as a function of the ambient velocity, whereas the Newtonian limit (shown by the black curve) is independent of this quantity once it is scaled out of the solution. Both of these figures show that the relativistic solutions approximately equal the Newtonian solution in the limit that the ambient speed is non-relativistic, which is not surprising. However, substantial deviations arise when the ambient speed reaches substantial fractions of the speed of light, and this is especially true deep in the interior of the flow where relativistic gravity is yet more important. Impact of varying the adiabatic index In addition to the magnitude of the infall velocity (relative to the speed of light), the solutions for the post-shock fluid variables depend on the adiabatic index of the gas. The $\Gamma = 4/3$ adiabatic fluid is likely a reasonable approximation of the radiation-pressure-dominated post-shock fluid accreting onto a neutron star (Chevalier 1989; Houck & Chevalier 1992) over the expected temperature and density ranges (Schinder et al. 1987). Although in our model we neglected non-ideal effects (e.g., neutrino cooling), these can be roughly captured by using a softer equation of state, i.e., reducing the value of the adiabatic index. While it is difficult to see how they are physically relevant, we also analyze the $\Gamma = 5/3$ monatomic ideal gas and one or two cases with even higher adiabatic indices, primarily to compare with Blondin et al. (2003). The left panel of Figure 4 illustrates the absolute value of the velocity as a function of distance behind the shock in the Newtonian limit for the adiabatic indices in the legend. The solid curves show the solution to the Bernoulli equation in the non-relativistic limit (i.e., Equation 19 when the rest-mass energy far outweighs the internal energy and $GM/(rc^{2}) \ll 1$; see also Equation 4 in Blondin et al. 2003), while the dashed curves give the asymptotic scaling near the origin (Equation 21). This scaling can be derived from Equation (19) in the Newtonian limit by assuming that the internal energy far outweighs the kinetic energy near the origin, which is a valid assumption when $\Gamma < 5/3$. (We have arbitrarily scaled the analytic, asymptotic solutions by a factor of 0.9 so that they can be distinguished from the exact solutions in this figure.)
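The display for Equation (21) did not survive extraction. A hedged reconstruction of the Newtonian asymptotic scaling, derived under exactly the assumptions just stated (enthalpy dominating the kinetic energy and balancing gravity near the origin, together with entropy and mass-flux conservation), is:

```latex
% A plausible reconstruction (not verbatim from the paper) of Eq. (21):
\begin{align*}
\frac{\Gamma}{\Gamma-1}\frac{p}{\rho} \simeq \frac{GM}{r}
  \;&\Rightarrow\; \frac{p}{\rho} \propto r^{-1}, \\
p \propto \rho^{\Gamma}
  \;&\Rightarrow\; \rho \propto r^{-1/(\Gamma-1)}, \\
\rho\,|v|\,r^{2} = \mathrm{const}
  \;&\Rightarrow\; |v| \propto r^{(3-2\Gamma)/(\Gamma-1)}.
\end{align*}
```

This form is consistent with the checks quoted in the surrounding text: the exponent is positive (the gas decelerates toward the origin) for $\Gamma < 3/2$, vanishes at $\Gamma = 3/2$, and equals $-1/2$ (the freefall scaling) at $\Gamma = 5/3$.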
Equation (21) also shows, and the left panel of Figure 4 verifies, that the velocity declines in absolute magnitude as one approaches the origin for $\Gamma < 1.5$, while the gas accelerates (in terms of the magnitude of the velocity) for $\Gamma > 1.5$; we believe that the value of $\Gamma_{\rm c} = 1.522$ quoted in Blondin et al. (2003) that differentiates between these two limits is in error; it is only for $\Gamma \equiv 1.5$ that the velocity approaches a finite value (i.e., neither zero nor infinite) near the origin. For $\Gamma = 5/3$ the velocity satisfies $v/v_{\rm a} = 0.25\left(r/r_{\rm sh}\right)^{-1/2}$ and scales exactly with the freefall speed, as also found in Blondin et al. (2003). For $\Gamma > 5/3$, Equation (21) predicts that the fluid speed increases in a manner that exceeds the freefall scaling as the gas approaches the origin. However, this is not consistent with the assumption that the kinetic energy remains sub-dominant to the thermal energy, upon which Equation (21) is based. Instead, the blue curve in the left panel of Figure 4 demonstrates that this super-freefall acceleration is only maintained for a finite distance beneath the shock, and that the solutions terminate at a sonic point (i.e., the derivative of the velocity diverges at a radius of $r/r_{\rm sh} \simeq 0.14$ for $\Gamma = 1.75$, and solutions do not exist for radii smaller than this value). While this behavior is interesting from an academic standpoint, it is difficult to see how such stiff equations of state could be realized in nature, and hence we do not consider these solutions further here. In the right panel of Figure 4, we present the solutions to the relativistic Bernoulli equation (Equation 19) by solid curves, where the adiabatic index appropriate to each curve is given in the legend. We set the ambient fluid velocity to 0.2 for all solutions as a fiducial value. The dashed curves represent the corresponding Newtonian solutions. We see that the fluid "settles" at the event horizon, instead of conforming (approximately) to power-laws near the origin, as is the case in the Newtonian approximation. The Newtonian solutions display qualitatively different behavior above and below $\Gamma = 1.5$, as illustrated in the left panel of Figure 4, with the gas decelerating (accelerating) for $\Gamma < 1.5$ ($\Gamma > 1.5$); on the other hand, the relativistic solutions all decelerate and settle as we approach the horizon, even though they closely match the Newtonian solutions near the shock. The asymptotic scaling of these solutions near the horizon is presented in the next subsection. Asymptotic behavior of the general relativistic solutions near the horizon The neutron star surface, where the fluid must physically stop, is always outside of the horizon, but it is instructive to discuss the extreme limit in which the neutron star surface approaches the horizon. The fluid variables as measured by an observer in the comoving and locally flat frame either approach zero (velocity) or diverge (density and pressure) near the Schwarzschild radius, and it is straightforward to show from Equations (17), (18), and (19) that the rates at which they do so are given by Equations (22)-(24). Thus, in the general relativistic solutions presented here, the fluid velocity "settles" as in the Newtonian regime. However, instead of settling near the origin, the fluid velocity approaches zero near the event horizon. Similarly, the density and pressure diverge at small radii, but instead of following power-laws in $r$ as they do in the non-relativistic regime, they diverge as simple (and smaller) powers of $1 - 2M/r$.
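The displays for Equations (22)-(24) are likewise missing. A hedged reconstruction of the near-horizon rates, obtained by combining the forms of Equations (17)-(19) given above (taking $u^{2} \ll 1 - 2M/r$ near the horizon, which is self-consistent for $\Gamma < 2$), is:

```latex
% Hedged reconstruction (not verbatim from the paper) of Eqs. (22)-(24):
% Eq. (19) with u -> 0 gives p'/\rho' \propto (1 - 2M/r)^{-1/2}; with
% p' \propto \rho'^{\Gamma} (Eq. 18) and \rho'\,u\,r^{2} = const (Eq. 17):
\begin{align*}
\rho' &\propto \left(1 - \frac{2M}{r}\right)^{-\frac{1}{2(\Gamma-1)}}, &
p' &\propto \left(1 - \frac{2M}{r}\right)^{-\frac{\Gamma}{2(\Gamma-1)}}, &
u &\propto \left(1 - \frac{2M}{r}\right)^{\frac{1}{2(\Gamma-1)}}.
\end{align*}
```

For $\Gamma = 4/3$ these give $u \propto (1 - 2M/r)^{3/2}$, $\rho' \propto (1 - 2M/r)^{-3/2}$ and $p' \propto (1 - 2M/r)^{-2}$, i.e. the velocity settles while the density and pressure diverge, as the text describes.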
Due to the presence of the strong shock, the fluid loses most of its kinetic energy (and gains the equivalent thermal energy) and hence inevitably comes to rest interior to the shock. As it approaches the horizon it experiences an unbounded gravitational field and can therefore only decelerate by developing an infinite pressure gradient as the event horizon is approached. We turn to the implications of these findings, and our corresponding physical interpretations, in the next section. PHYSICAL INTERPRETATION The settling solutions obtained by Lidov (1957), Chevalier (1989) and Blondin et al. (2003) in the Newtonian limit possess the feature that the velocity settles, or approaches zero, near the origin, which coincides with the location of the compact object in the Newtonian point-mass limit. This feature seems to imply that the Newtonian settling solutions describe the accretion of material onto the surface of an object, and that these solutions apply to an accreting neutron star during the initial, stalled phase of the bounce shock, or at later times when weakly bound material falls back after the expulsion of the envelope in a successful explosion (e.g., Chevalier 1989). The pressure and density of the gas also rise dramatically near the origin in the Newtonian settling solutions, which suggests that the pressure of the gas as it is brought to a halt at the surface of the neutron star provides the force that decelerates the flow. However, while the velocity of the Newtonian solution approaches zero near the origin, the mass flux remains finite, as it must by virtue of the time-steady nature of the accretion through the shock. Similarly, the energy flux is conserved throughout the domain, and is given by the kinetic energy flux across the shock. Both of these quantities must effectively vanish at the surface of the neutron star, where the velocity is (again, effectively) zero and the density and pressure retain finite values. Thus, if the solutions here are to describe accretion onto a neutron star (or any object with a "surface" at which the equation of state stiffens substantially), then the mass flux should not contribute substantially to the mass of the compact object, and there should be a mechanism for removing the incident energy flux. In core-collapse supernovae, both of these conditions are approximately upheld over the freefall time from the shock (i.e., conditions are roughly time-steady): the mass accumulated near the surface is small compared to the mass of the star, and neutrino losses from a very thin layer near the surface negate the incoming energy flux. Thus, the relativistic solutions here should apply to accretion onto a neutron star and become particularly relevant for scenarios in which the shock is pushed to small radii (see the additional discussion in Section 5 below). [Figure 3. The normalized three-velocity as seen by a static and locally flat observer for the ambient speeds (at the location of the shock) shown in the legend, as a function of the radial coordinate normalized by the shock radius ($r/r_{\rm sh}$); the black dashed line at $r/r_{\rm sh} = 1$ indicates the fixed location of the shock in this coordinate, and the colored dashed lines indicate the location of the neutron star surface, assumed to be at 10 km. The Newtonian solution, which is self-similar (i.e., it does not depend on $u_{\rm a}$), is shown by the solid black curve.]
In failed supernovae, the continued accretion onto the neutron star eventually pushes it over the TOV limit, and the star collapses dynamically to a black hole (Sumiyoshi et al. 2007, 2008; O'Connor & Ott 2011; Ugliano et al. 2012; Sukhbold et al. 2016; Kuroda et al. 2022). When the star collapses, an inward velocity must develop near the surface, and the pressure gradient must fall below the value necessary to retain hydrostatic balance. Thus, the solutions analyzed here cannot describe the entirety of the flow structure near the horizon when the black hole forms, which is also not expected because of the highly time-dependent nature of the interior of the flow as the core starts to collapse (i.e., the assumption of steady flow is clearly violated at this point). It is then possible that the shock is not supported by the reduced pressure in the interior and is correspondingly swallowed by the black hole. On the other hand, it seems possible that the adiabatic increase in the pressure of the gas as it nears the horizon (as evidenced by Figure 2 above) is sufficient to support the shock for at least a finite time, even though the divergence of the pressure is not physical (the true location of the horizon must increase to accommodate the large increase in the density predicted by our solutions, which would result in non-divergent values of the fluid variables at the horizon). One could analyze the effects of imposing a more negative velocity at an inner radius (mimicking the effects of a true horizon) on the flow by using a linear perturbation analysis, similar to what was done in Blondin et al. (2003) (though they used a smaller velocity, which caused the outward motion of the shock). The fading mass supply from the overlying envelope as less-dense regions of the progenitor are swallowed also implies that the ram pressure should be less capable of stifling the shock, which could allow the shock to remain quasi-stationary even in the absence of the strong pressure gradient in the interior that is predicted by the solutions here. For these solutions to arise physically (i.e., in a setting in which the assumptions made here are relaxed), a steady state must be reached by the flow. In general, this will take on the order of a sound-crossing time from the shock, which for a shock radius of $r_{\rm sh} = 150$ km and a neutron star mass of $M = 2\,M_{\odot}$ is $t \sim r_{\rm sh}^{3/2}/\sqrt{GM} \sim 3.5$ ms. If the neutron star collapses to a black hole on a timescale that is shorter than this, then these solutions will not be realized and the flow will be better approximated by pure freefall (e.g., Figure 7 of Kuroda et al. 2022).
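A quick numeric check of the order-of-magnitude estimates used here, i.e. the $\sim 3.5$ ms steady-state timescale above and the freefall speeds quoted in the concluding section that follows (a sketch using standard constants; nothing here comes from the paper's own code):

```python
# Verify the quoted order-of-magnitude numbers: t ~ r_sh^{3/2}/sqrt(G M) and
# the freefall speeds v_ff = sqrt(2 G M / r) relative to the speed of light.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m s^-1
M_SUN = 1.989e30   # solar mass, kg

def settle_time_ms(r_sh_km: float, mass_msun: float) -> float:
    r = r_sh_km * 1e3
    return 1e3 * r**1.5 / math.sqrt(G * mass_msun * M_SUN)

def freefall_speed(r_km: float, mass_msun: float) -> float:
    r = r_km * 1e3
    return math.sqrt(2 * G * mass_msun * M_SUN / r) / C

print(f"t ~ {settle_time_ms(150, 2.0):.1f} ms")     # ~3.6 ms (quoted ~3.5 ms)
print(f"v_ff/c ~ {freefall_speed(100, 1.4):.2f}")   # ~0.20 (1.4 Msun, 100 km)
print(f"v_ff/c ~ {freefall_speed(20, 3.0):.2f}")    # ~0.67 (3 Msun, 20 km)
```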
For a neutron star mass of $1.4\,M_{\odot}$ and a shock radius of 100 km, the freefall speed at the shock satisfies $v_{\rm ff}/c \sim 0.2$, and the differences between the relativistic and Newtonian solutions are at the level of $\sim 10\%$ (see the red curves in Figure 2). However, for failed supernovae (in which the shock is not revived), the shock can be pushed to smaller radii and the mass of the neutron star increases to near the TOV limit (the maximum mass limit for neutron stars, $\lesssim 3\,M_{\odot}$ for most equations of state). For example, O'Connor & Ott (2011) find that the shock can stall at radii as small as 20 km; with a neutron star mass of $3\,M_{\odot}$, the freefall speed at this radius satisfies $v_{\rm ff}/c \simeq 0.66$, and relativistic effects become much more important (see the green and orange curves in Figure 2). Once the star exceeds the TOV limit, the mass increases to even larger values, while the shock can be compressed to even smaller radii, which necessitates the usage of our solution over the Newtonian one. Relativistic effects will change the stability criteria and the corresponding growth of the standing accretion shock instability (SASI) (e.g., Blondin et al. 2003; Foglizzo et al. 2007; Fernández & Thompson 2009a,b), and these effects can be important because the growth rate of the SASI has been found to be small (i.e., the e-folding time of the instability is many freefall times). For example, Blondin & Mezzacappa (2007) find from their one- and two-dimensional simulations that the growth rate of the SASI is between $\sim 0.1$ and $\sim 0.22$ in units of the inverse freefall time (i.e., the instability grows as $\sim e^{(0.1-0.22)\,\tau}$, where $\tau$ is time in units of the freefall time; see their Figure 2). Their results agree well with those obtained analytically by Houck & Chevalier (1992). Foglizzo et al. (2007) and Fernández et al. (2014) also find similar, small growth rates. Small changes in the background state can therefore change the stability of the system, and the solutions here present one such instance in which small changes arise from physical (relativistic) considerations. Our solutions also have implications for gravitational wave signals from core-collapse supernovae. For example, Morozova et al. (2018) discuss the possibility that material that falls through the shock can impact the nascent neutron star and generate detectable gravitational wave signatures. Relativistic effects will modify the properties of the fluid as it rains down onto the neutron star surface, which could lead to pronounced differences in the gravitational wave signal owing to the fact that the neutron star radius is only marginally greater than the Schwarzschild radius. In this work we assumed spherically symmetric, irrotational flow. If the progenitor does not contain a large reservoir of bulk angular momentum (which is likely, because stellar winds will carry away a substantial amount of angular momentum over the lifetime of the star), then angular momentum only becomes important during the late stages of infall; this is because it is only in the outer envelope of the progenitor that random, convective motions can yield a specific angular momentum exceeding the innermost stable circular orbit (ISCO) value (Quataert et al. 2019). If the progenitor does have significant net angular momentum, the gas can circularize outside of the ISCO and rotation can be dynamically important.
The disc-like structure that forms in this case is likely optically and geometrically thick owing to the extremely high accretion rates, and we can integrate the equations over the scale height of the disc to obtain height-averaged Euler equations. The resulting disc solutions would be analogous to the advection-dominated accretion flow solutions of Narayan & Yi (1994) and the adiabatic inflow-outflow solutions of Blandford & Begelman (2004), but with the added constraint of satisfying the boundary conditions at a shock. We defer further investigation of this possibility to future work. We assumed that the post-shock fluid is adiabatic and that the time-steady nature of the flow then ensures that the gas is isentropic. The gas is expected to be nearly isentropic because the fluid in the gain region is convectively unstable (Bruenn 1985; Burrows 1987; Burrows et al. 1995). Chevalier (1989) justifies the adiabatic assumption by noting that neutrino losses become important only in a very thin layer near the surface of the accreting body, and thus the post-shock flow is effectively adiabatic over most of its volume. Müller (2020) notes that neutrino cooling is not likely to result in a significant entropy gradient across the flow in the post-shock region. Furthermore, in his 3D simulations, he finds that mixing at the onset of convection and the SASI largely reduces any entropy gradient. Nevertheless, some simulations in lower dimensions (2D) do find a more substantial entropy gradient, e.g., Müller et al. (2010). Interestingly, however, the numerical solutions of Müller et al. (2010) appear to agree fairly well with the analytic solutions presented here. For example, in Figure 8 of Müller et al. (2010), the density is shown to increase by $\sim 6$ orders of magnitude going from the location of the shock at $\sim 200$ km to the origin, which agrees with what we find analytically (see the purple curve in the bottom-left panel of Figure 2). The neutrino cooling in the thin layer acts effectively as a global sink on the energy, which balances the influx of energy from the material falling through the shock and allows the system to reach a steady state (i.e., the shock stalls). Chevalier (1989) also shows that the ratio of thermal (gas) pressure to radiation pressure is $p_{\rm th}/p_{\rm rad} \approx 0.02$ (Schinder et al. 1987; Chevalier 1989), further reinforcing the adiabatic assumption. The time-steady nature of the solutions presented here requires specific ambient density and velocity profiles. More realistically, the mass infall rate will decline with time, and the density profile of the accreting gas will change non-trivially as a consequence of the shells of nuclear ash in the progenitor star. The reduction in the mass supply rate and the ram pressure of the envelope then likely results in the outward motion of the stalled shock. We will analyze the consequences of a time-varying infall rate through a perturbative approach in future work. DATA AVAILABILITY STATEMENT Code to reproduce the results in this paper is available upon reasonable request to the corresponding author.
Comparison of demographic and clinical characteristics influencing health-related quality of life in patients with diabetic foot ulcers and those without foot ulcers Background A number of studies have demonstrated that health-related quality of life (HRQoL) is negatively affected by diabetic foot ulcers. The aim of this study was to compare HRQoL in diabetic patients with and without foot ulcers and to determine demographic and clinical factors influencing HRQoL. Methods Demographic and clinical variables were recorded and HRQoL was evaluated using the Short Form 36 (SF-36) survey for all participants. The summary physical component score (PCS) and mental component score (MCS) and the eight domains of HRQoL were compared between the two groups. Linear regression analysis was also used to investigate sociodemographic and clinical characteristics as predictors of quality of life as measured by the SF-36. Results The overall score, PCS, and MCS were significantly higher in patients without diabetic foot ulcers. Except for gender, none of the variables affected HRQoL in diabetic patients without foot ulcers: in this group, men had higher scores than women in all domains of quality of life. Living alone, a low educational level, and having at least one complication were all associated with a lower HRQoL score in patients with foot ulcers. High-grade ulcers determined by Wagner's classification and poor glycemic control as measured by HbA1C predicted HRQoL impairment in patients with diabetic foot ulcers. Conclusion Because Wagner's grade was one of the strongest variables associated with HRQoL, this scale is recommended for the monitoring of patients with diabetic foot ulcers in order to prevent continuing deterioration of HRQoL through treatment of foot ulcers at an earlier stage. Introduction Research has shown that people with diabetes have a worse health-related quality of life (HRQoL) compared with people without chronic disease. Diabetic patients report lower HRQoL, especially with regard to physical functioning. Furthermore, it has been shown that individuals with more symptomatic and disabling conditions have the lowest Short Form 36 survey (SF-36) physical component scores (PCS). 1 Foot problems often persist for a long period of time and may result in amputation. 2 The presence of diabetic foot ulcers may have a major impact on patient HRQoL. 3 International epidemiologic studies suggest that 2.5% of diabetic patients develop foot ulcers each year, and 15% of all diabetic patients develop foot ulcers during their lifetime. 3 Currently, the prevalence of diabetes is 7.7%, which is equivalent to 3 million cases when extrapolated to the Iranian population aged 25-64 years. The prevalence of foot ulcers is estimated to be 3% in diabetic patients in our region. 4 This figure is expected to rise considerably by 2025. 4 Several factors influence the impact of diabetes on HRQoL, including sociodemographic and clinical characteristics such as age, level of education, comorbid conditions, and complications. 1 Older age, presence of type 2 diabetes mellitus, increased severity of Wagner grade, longer duration of foot ulcer, and the presence of more ulcers were also found to be significant predictors of lower HRQoL in another report. 5
HRQoL is often described in patients with diabetic foot ulcers, but comparisons have rarely been made with HRQoL in diabetic patients without foot ulcers. Such a comparison would give us a broader picture of HRQoL in our region by considering the way in which clinical and demographic characteristics affect HRQoL in diabetic patients with and without foot ulcers. The aim of this study was to compare HRQoL between diabetic patients with and without foot ulcers and to examine the differences between the two groups according to sociodemographic and clinical characteristics. The results of this study may provide a useful guide for the interpretation of HRQoL scores, and may assist in identifying patient problems when setting treatment goals. Materials and methods Two groups of adult patients were recruited for this study. Subjects were allocated to Group 1 if they had suffered from diabetes without current or previous foot ulcers and without any other complications of diabetes. Subjects were allocated to Group 2 if they had diabetes mellitus with at least one foot ulcer, defined according to Wagner's classification, and were admitted to hospital. Some individuals in this group also had other diabetic complications. The aim of the study was explained to all subjects with and without diabetic foot ulcers, and all participants signed a formal consent form. All responses were anonymous. Permission to conduct this research was approved by the ethics committee of Urmia University of Medical Sciences. Subjects without diabetic foot ulcers This cross-sectional prospective study was conducted from September 2009 to December 2010 in the urban area of Urmia city using two-stage cluster random sampling to obtain data from diabetic patients. Based on a power analysis using a moderate effect size (0.5), a 0.05 significance level, 90% power, and the design effect of cluster sampling, a sample size of 160 was estimated. First, eight of 30 health care centers were selected as clusters. Twenty diabetic patients who met the inclusion criteria were then chosen from each center. Currently diagnosed type 2 diabetic patients aged older than 30 years were enrolled into the study. Patients with complications or conditions that would potentially affect quality of life were excluded. Subjects with diabetic foot ulcers All subjects with diabetic foot ulcers admitted to one of two medical training hospitals (Taleghani or Emam-Khomaini) from September 2009 to December 2010 were enrolled in the study. A total of 90 subjects diagnosed with type 2 diabetes, having diabetic foot ulcers, and aged more than 30 years completed the study. Sociodemographic and behavioral variables Demographic data were collected on age, gender, educational level, and cohabitation. Age was categorized into two groups, ie, ≥50 years and <50 years, and a low education level was defined as illiterate/primary school. The sociodemographic variable "cohabitation" was categorized as living with others or living alone. All sociodemographic variables were self-reported. Behavioral factors, including current smoking (daily and occasional smokers) and body mass index, were also obtained. Body mass index was divided into two categories, ie, normal (body mass index <25) or overweight (body mass index ≥25).
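The dichotomisations described above map directly onto code. The following is a purely illustrative sketch (the study used SPSS; the pandas column names here are hypothetical, and the thresholds are those given in the text):

```python
# Sketch of the variable categorisations described in the text.
import pandas as pd

def categorise(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["age_50_plus"] = out["age"] >= 50
    out["low_education"] = out["education"].isin(["illiterate", "primary"])
    out["lives_alone"] = out["cohabitation"] == "alone"
    out["current_smoker"] = out["smoking"].isin(["daily", "occasional"])
    out["overweight"] = out["bmi"] >= 25  # kg/m^2
    return out
```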
Clinical characteristics A questionnaire was used to collect data about general clinical status, duration of diabetes, treatment intensity (classified as insulin therapy, or other therapy such as oral agents and diet), and baseline laboratory data, including glycosylated hemoglobin (HbA1C) and blood sugar. HbA1C > 8.5 was considered to indicate poor glycemic control. 6 Additional information on diabetic subjects with foot ulcers included diabetes complications according to medical records or drug history (at least one complication), grade of foot ulcer, and amputation as an adverse outcome during hospitalization. Wounds were classified into Wagner grade ≤ 2 (low-grade) or grade ≥ 3 (high-grade) foot ulcers. Using Wagner's classification, diabetic foot ulcers are classified as Grade 0, high-risk foot; Grade 1, superficial ulcer; Grade 2, deep ulcer penetrating to tendon, bone, or joint; Grade 3, deep ulcer with abscess or osteomyelitis; Grade 4, localized gangrene; or Grade 5, extensive gangrene requiring a major amputation. Amputation was defined as complete loss in the transverse anatomical plane of any part of the lower limb. Health-related quality of life HRQoL was measured using the SF-36 health survey, a generic instrument that allows results to be compared across studies and between populations. 1 The SF-36 consists of 36 questions, and measures eight conceptual domains, ie, physical functioning, role limitations due to physical health, bodily pain, general health perceptions, vitality, social functioning, role limitations due to emotional problems, and mental health. The scores in each domain are transformed into measurements on scales of 0 to 100, and a high score indicates good HRQoL. The SF-36 has satisfactory reliability and validity, and is the most thoroughly tested and accepted measure for assessing psychometric properties in many countries. 3 The validity and reliability of the Persian translation of the SF-36 are also acceptable for assessing health perceptions in the population. 7 The SF-36 has also been developed into a two-factor model with PCS and MCS scales. 1 Because most participants were not able to complete the questionnaire themselves, two medical students were trained to administer the SF-36 and gather the demographic and clinical data. Although earlier research has shown that in-person interviews tend to elicit more socially desirable responses than do self-administered questionnaires, t-tests indicated no significant differences in HRQoL between the two administration methods. 3 Patients in the two study groups were therefore interviewed to complete the SF-36 questionnaire. Statistical analysis All analyses were conducted using SPSS version 17 (SPSS Inc, Chicago, IL). Descriptive analyses were used to present the demographic and clinical characteristics of the two groups. Chi-square and t-test analyses were used to evaluate differences in the distribution of sociodemographic and clinical characteristics between the diabetic patient groups with and without foot ulcers for categorical and continuous variables. The relationships between sociodemographic and clinical variables and HRQoL data were analyzed using Spearman's correlation coefficients. Linear regression analysis was used to investigate the sociodemographic and clinical characteristics as predictors of HRQoL measured by the SF-36.
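The 0-100 transformation of SF-36 domain scores mentioned above amounts to a simple linear rescaling; below is a minimal sketch, with placeholder raw-score ranges, since the exact ranges come from the SF-36 scoring manual.

```python
# Sketch of the standard SF-36 rescaling: each domain's raw score is
# mapped onto 0-100, where higher values indicate better HRQoL.
# The (lowest, highest) raw ranges below are placeholders.
def rescale(raw: float, lowest: float, highest: float) -> float:
    """Transform a raw domain score to the 0-100 SF-36 scale."""
    return (raw - lowest) / (highest - lowest) * 100.0

# Example: a physical-functioning raw sum of 25 on an assumed 10-30 range.
print(rescale(25, lowest=10, highest=30))  # -> 75.0
```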
Results With regard to demographic and clinical characteristics and HRQoL in the presence and absence of diabetic foot ulcers, of 250 diabetic subjects, 90 had foot ulcers and 160 had no foot ulcers. The mean age of patients with and without diabetic foot ulcers was 60.73 ± 11.3 years and 50.36 ± 7.1 years, respectively (P < 0.001). There were significantly more patients older than 50 years in the group with diabetic foot ulcers than in the group without foot ulcers (84.4% versus 56.2%). There was also a significant difference in the gender distribution between the two study groups: most patients with diabetic foot ulcers were male (63.3%, versus 24.4% in the group without foot ulcers). More than half (57.4%) of the participants with diabetic foot ulcers had a low education level (elementary school and/or illiterate), whereas 86% of diabetic patients without foot ulcers were in the low education category (P < 0.001). The mean body mass index in patients with diabetic foot ulcers was significantly lower than in those without foot ulcers: 63.3% of patients with diabetic foot ulcers were overweight (body mass index ≥ 25 kg/m²) versus 88.1% of those without foot ulcers (P < 0.001). Smoking was significantly more frequent in patients with diabetic foot ulcers than in the other group (53.3% versus 8.8%, P < 0.001). Regarding treatment intensity in the group with diabetic foot ulcers, 86.2% of participants were managed with oral agents/diet and only 13.8% were on insulin alone or in combination; 10.9% of diabetic patients with no foot ulcers were treated with insulin. There was no significant difference in duration of diabetes between the two groups. A higher proportion of participants with diabetic foot ulcers were living alone than of those without foot ulcers (32.7% versus 28.4%); however, this difference was not statistically significant. Eighty-four percent of subjects with diabetic foot ulcers reported having at least one complication of diabetes, eg, cardiovascular disease, nephropathy, or retinopathy. Almost three quarters (71.3%) of participants had a previous history of diabetic foot ulcers. Eighty-three percent of wounds were classified as high-grade ulcers (Grade ≥ 3). Thirty-four percent of diabetic foot ulcers met clinical criteria for amputation during hospitalization. Baseline laboratory data, including HbA1C and blood sugar at the time of admission, were significantly higher among respondents with diabetic foot ulcers. The frequency of poor diabetes control was 31.2% in the group without diabetic foot ulcers versus 57.8% in the group with diabetic foot ulcers (P < 0.001). Table 1 shows demographic and clinical characteristics in diabetic subjects with and without foot ulcers. A comparison of the eight domains of HRQoL in the two groups showed higher scores in four domains in patients without foot ulcers. There was no significant difference in the bodily pain, general health perceptions, mental health, and vitality domains. Patients with diabetic foot ulcers had significantly poorer HRQoL, as indicated by lower mean scores in four domains, including physical functioning, role limitations due to physical health, role limitations due to emotional problems, and social functioning, than did the other group. The largest difference between the groups was found for the social functioning domain.
Similarly, the physical and mental summary scores on the SF-36 showed poorer HRQoL in diabetic patients with foot ulcers than in those without foot ulcers (P < 0.001), with differences of around seven points for the physical and eight points for the mental summary scores. All these differences remained significant after adjustment for variables including age, gender, and duration of diabetes. Table 2 shows HRQoL in the two study groups. Variables associated with HRQoL in subjects with and without diabetic foot ulcers The total HRQoL score was 53.03 ± 13. Total, PCS, and MCS scores did not differ by gender or age in diabetic patients with foot ulcers. Differences in total, PCS, and MCS scores were, however, found according to cohabitation and level of education as demographic variables: HRQoL was significantly poorer in patients with diabetic foot ulcers who had a lower level of education, were obese, or lived alone. A high-risk wound, as defined by Grade ≥ 3 in Wagner's classification, having complications, and poor glycemic control as measured by HbA1C were all clinical variables associated with HRQoL impairment. The risk of amputation was also strongly associated with a lower HRQoL score. Table 3 shows the demographic and clinical variables associated with HRQoL. In regression analysis, after adjusting for demographic and behavioral variables, poor diabetic control (HbA1C > 8.5) and a high-grade ulcer were the significant variables in the final model, predicting lower total, PCS, and MCS scores of HRQoL. In diabetic subjects without foot ulcers, female gender was the only factor associated with poor HRQoL. Discussion Foot ulcers are a common, serious, and costly complication of diabetes, preceding 84% of lower extremity amputations in diabetic patients and increasing the risk of death 2-4-fold compared with diabetic patients without ulcers. HRQoL is worse in individuals with diabetes than in those without diabetes, and complications of diabetes, including diabetic foot ulcers, have a major negative effect on HRQoL. Qualitative research has confirmed the clinical observation that diabetic foot ulcers have a huge negative psychologic and social effect. 8 Armstrong et al suggested that patients with diabetic foot ulcers have severely impaired physical and mental functioning, comparable with that of patients with other serious medical conditions. 9 Nabuurs-Franssen et al revealed that the HRQoL of patients with chronic neuropathic and neuroischemic foot ulcers, without critical limb ischemia, is poor and comparable with, for instance, the HRQoL of patients with relapsed breast cancer. 10 This cross-sectional prospective study demonstrated that HRQoL was severely impaired by diabetic foot ulcers and described an important correlation between HRQoL scores and severity of foot ulcers. The most important sociodemographic characteristics that differ between patients with and without diabetic foot ulcers are male gender, living alone, and obesity. One study demonstrated that most diabetic foot patients were men and nearly twice as many of those with foot ulcers were living alone. 1 This finding indicates that men living alone are an especially vulnerable group among the diabetic population. 1
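For readers reproducing the regression step reported above, a hedged sketch follows; the data file and column names (sf36_total, hba1c_high, wagner_high, and so on) are hypothetical stand-ins for the study's variables.

```python
# Hedged sketch of the regression analysis: predict SF-36 scores from
# demographic and clinical variables. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("foot_ulcer_patients.csv")  # hypothetical data file
model = smf.ols(
    "sf36_total ~ age + C(gender) + C(low_education) + C(living_alone)"
    " + C(hba1c_high) + C(wagner_high) + C(has_complication)",
    data=df,
).fit()
print(model.summary())  # HbA1C > 8.5 and Wagner grade >= 3 were the
                        # significant predictors in the paper's final model
```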
Interestingly, Hjelm et al found that different beliefs about health and illness between male and female foot subjects may affect self-care. They found that women are usually more active in self-care and preventive care, whereas men show a more passive attitude. 11 Our findings showed that HRQoL in four areas (the bodily pain, general health perceptions, mental health, and vitality domains) was lower, although not significantly so, in diabetic patients with foot ulcers compared with those without foot ulcers. This may be due to differences in sociodemographic and clinical characteristics in the two groups, eg, patients with foot ulcers were older, overweight, and more often smokers. However, differences in total, PCS, and MCS HRQoL scores between the two groups remained significant after adjusting for confounders. Similar findings have been reported by several other studies, which found that HRQoL scores were significantly lower for patients with diabetic foot ulcers. 5,12-15 Tennvall and Apelqvist compared health status among diabetic patients with current foot ulcers and those who had undergone minor or major lower extremity amputations. The results of their study showed that subjects with current ulcers had lower health status than did patients who had healed primarily without any amputation and those who had undergone a minor amputation. Patients who had undergone a major amputation had poorer health status than patients who healed primarily and those who had undergone a minor amputation. 14 Jelsness-Jorgensen et al reported that a diabetic foot had a major negative impact on 7/8 subscales of the SF-36 compared with a diabetic outpatient group. 16 Another study revealed that the most striking differences were in role limitations due to physical health and physical functioning. 1 In our study, the lower physical functioning scores in patients with foot ulcers are in accordance with other studies, in which the physical functioning scale changed the most among those with diabetes complications. 1,17 In diabetic patients with no foot ulcers, HRQoL scores in men were significantly higher than those in women; however, scores in patients with diabetic foot ulcers were similar in men and women. Age had no significant impact on HRQoL in either group. A low educational level and living alone were other variables which decreased total, PCS, and MCS scores in the diabetic foot ulcer group in our study. In accordance with our findings, Ribu et al showed that women reported poorer health than did men. They found no significant association between self-assessed health and age in patients with diabetic foot ulcers. The reason may be that ulcers cause poor physical functioning regardless of age. 3 Another study indicated that female gender and macrovascular complications are related to worse physical and psychologic well-being as detected by the SF-36 questionnaire. Increasing age showed a strong correlation with decreased physical functioning but a positive association with the MCS of the SF-36. 6 Quah et al reported that higher quality of life in diabetic patients is associated with younger age, male gender, and a higher educational level. 18 Another study in Turkey reported that quality of life was higher in diabetic patients who were less than 40 years of age, male, married, had less than 8 years of education, lived with their family, and had no complications or prior hospitalization. 19 Obesity was a much more common problem in patients with diabetic foot ulcers in our study than in those without foot ulcers, indicating a sedentary lifestyle, as reported in some studies, 1,17 although it was not associated with HRQoL in patients without foot ulcers.
Obesity had a negative effect on HRQoL scores in our study. In agreement with our finding, Redekop et al suggested that obesity was correlated with lower HRQoL independent of gender and age. 20 In contrast, another study showed that patients with a body mass index <25 kg/m² scored lower on general health perceptions, vitality, and mental health, most notably on general health perceptions. 3 As expected, there was a significant relationship between the presence of complications and lower total, PCS, and MCS HRQoL scores, as demonstrated by several studies. 3,19,20 Quah et al reported that lower quality of life is associated with comorbidities and diabetic complications. 18 In contrast, factors linked to the development of late complications, such as cardiovascular comorbidity and neuropathy, were not detected in the study by Jelsness-Jorgensen et al. 16 Another study showed that neuropathy also proved to be a variable that reduced HRQoL; paradoxically, peripheral vascular disease did not prove to have a negative impact on quality of life. 15 Short-term glycemic control as measured by HbA1C was a significant variable in regression models among patients with diabetic foot ulcers in our study; however, an association between poor glycemic control and lower HRQoL was not identified in the group without foot ulcers. One study reported that higher fasting blood glucose and HbA1C levels were negatively associated with HRQoL, but these factors were not significant after adjustment for other factors using multivariate analysis. 14 Quah et al indicated that HbA1C did not correlate with quality of life. They suggested that the diabetic patient might not immediately appreciate the impact of good diabetic control on his or her HRQoL. More effort should be invested in patient education concerning the importance of glycemic control to prevent these long-term complications. 18 Another study revealed that diabetic patients with poor metabolic control reported more retinopathy and vascular and nervous problems than did patients with acceptable metabolic control. Furthermore, patients with poor metabolic control also had a lower level of education. 21 A high-grade ulcer, as determined by Wagner's classification, was another variable found to be a significant and independent predictor of HRQoL impairment in patients with diabetic foot ulcers in our study. We also found that the risk of amputation was significantly higher in patients with lower HRQoL. One study showed that individuals with diabetic foot ulcers experienced profound compromise of physical quality of life, which was worse in those with unhealed ulcers. 22 Ragnarson Tennvall et al reported that patients with current foot ulcers rated their HRQoL significantly lower than patients who had healed primarily without amputation. 14 Severity of foot ulcer as an independent predictor of HRQoL impairment was also demonstrated in a study by Valensi et al. 5 In conclusion, these findings have implications for clinical and policy decisions, as well as for the design of future studies with larger sample sizes.
In particular, our findings underscore the importance of HRQoL in the management of diabetic patients with or at risk of foot disease. Wagner's grade was one of the strongest variables associated with HRQoL, which may suggest a role for this scale in the monitoring of patients with diabetic foot ulcers in order to prevent continuing deterioration of their HRQoL by treatment of foot ulcers at an earlier stage.
Downregulation of the tumor suppressor HSPB7, involved in the p53 pathway, in renal cell carcinoma by hypermethylation In order to identify genes involved in renal carcinogenesis, we analyzed the expression profile of renal cell carcinomas (RCCs) using microarrays consisting of 27,648 cDNAs or ESTs, and found a small heat shock protein, HSPB7, to be significantly and commonly downregulated in RCC. Subsequent quantitative PCR (qPCR) and immunohistochemical (IHC) analyses confirmed the downregulation of HSPB7 in RCC tissues and cancer cell lines at both the transcriptional and protein levels. Bisulfite sequencing of a genomic region of HSPB7 detected DNA hypermethylation of some segments of HSPB7 in RCC cells and, concordantly, 5-aza-2′-deoxycytidine (5-Aza-dC) treatment of cancer cells restored HSPB7 expression significantly. Ectopic introduction of HSPB7 in five RCC cell lines remarkably suppressed cancer cell growth. Interestingly, we found that HSPB7 expression could be induced by p53 in a dose-dependent manner, indicating that this gene functions in the p53 pathway. Our results imply that HSPB7 is likely to be a tumor suppressor gene regulated by p53, and its downregulation by hypermethylation may play a critical role in renal carcinogenesis. Introduction Renal cell carcinoma (RCC) accounts for approximately 2% of all cancers worldwide (1) and its incidence has increased by 2-3% in the last decade, with an even higher rate in developed countries (2)(3)(4)(5)(6). The underlying mechanisms, such as some environmental and genetic risk factors including smoking, obesity, acquired cystic kidney disease and inherited susceptibility (von Hippel-Lindau disease) (3,7,8), have been indicated, but the etiological and pathological mechanisms of this disease are still far from fully understood. Although local renal tumors can be surgically removed (9)(10)(11), distant metastasis is often observed even if the primary tumor is relatively small (12,13). Patients with metastatic RCC generally have extremely poor outcomes, with an overall median survival of around 13 months and a 5-year survival rate of <10% (13). For advanced-stage patients, systemic therapy including immunotherapy (e.g. IL-2, IFN-α) and/or molecular-targeted drugs (e.g. sunitinib, bevacizumab, sorafenib, temsirolimus and everolimus) is recommended (14), but the response rates are not satisfactory. To better understand the molecular mechanisms of renal carcinogenesis and apply the information to the development of effective treatment and early diagnosis, we performed genome-wide gene expression profile analysis and identified a small heat shock protein, HSPB7, whose function in cancer is unknown, to be downregulated in a great majority of human RCC samples. In this study, we attempted to address two key questions: i) whether HSPB7 has a growth-suppressive function and ii) how HSPB7 is downregulated in RCCs. We here report for the first time that HSPB7 is likely to be a tumor suppressor which is frequently downregulated by DNA methylation in RCCs and is involved in the p53 pathway. cDNA microarray and selection of candidate genes. We prepared a genome-wide cDNA microarray with a total of 27,648 cDNAs/ESTs selected from the UniGene database of the National Center for Biotechnology Information (NCBI). This microarray system was constructed as previously described (15,16).
We analyzed 15 clear cell renal cell carcinomas (RCCs) and selected candidate genes according to the following criteria: i) genes for which we were able to obtain expression data in more than 50% of the cancers examined; ii) genes whose expression ratio was <0.2 in more than 50% of informative cases; and iii) genes whose function was still unknown. Through these criteria, several candidates including HSPB7 were further validated. Gene expression data were deposited in the Gene Expression Omnibus database (accession no. GSE39364). Quantitative real-time PCR (qPCR). We extracted total RNA from the microdissected RCC clinical samples, microdissected normal renal cortex, 25 different normal organs (17) and cultured cells using RNeasy mini kits (Qiagen, Valencia, CA, USA). RNAs from cell lines were reverse transcribed using the oligo(dT)21 primer and SuperScript III reverse transcriptase (Invitrogen). RNAs from tissue samples were treated with DNase I and subjected to two rounds of RNA amplification using T7-based in vitro transcription (Epicentre Technologies, Madison, WI, USA); the amplified RNAs were then reverse transcribed to single-stranded cDNAs using random primers with SuperScript II reverse transcriptase (Invitrogen) according to the manufacturer's instructions. qPCR was conducted using the SYBR-Green I Master (Roche) on a LightCycler 480 (Roche). The standard curve method was used for quantification analysis, and β2 microglobulin (B2M) served as a control gene. The qPCR primers for HSPB7 in cell lines were: 5'-ACTTCTCACCTGAAGACATCATTG-3' (forward) and 5'-CATGACAGTGCCGTCAGC-3' (reverse). The qPCR primers for HSPB7 in tissues were: 5'-GACCTTCCATCAGCCTTAACC-3' (forward) and 5'-ATGTGGGAGACGAAACCAAG-3' (reverse). The qPCR process was started at 95˚C for 5 min, followed by 45 cycles at 95˚C for 10 sec, 55˚C for 10 sec and 72˚C for 10 sec. Data analysis, including standard curve generation and copy number calculation, was performed automatically. Each reaction was performed in duplicate and negative controls were included in each experiment. Immunohistochemistry (IHC). A kidney tissue array (BioChain Institute, Inc., USA) was used to analyze the protein expression of HSPB7 by IHC staining. This tissue array included 11 cases of RCC with corresponding normal tissues from the same patients as controls. Tissue sections were deparaffinized, rehydrated, and processed under high pressure (125˚C, 30 sec) in antigen-retrieval solution of pH 9.0 (S2367, Dako, Carpinteria, CA, USA). Sections were blocked with Protein Block Serum Free (Dako) for 1 h at room temperature, followed by incubation with primary antibody (HSPB7, 1:100, Proteintech, Chicago, IL, USA) overnight at 4˚C. On day 2, endogenous peroxidase activity was blocked by incubation in 3% hydrogen peroxide for 30 min at room temperature. Sections were incubated with a secondary antibody (Dako Envision + system-HRP labeled polymer anti-rabbit K4003) for 30 min at room temperature, followed by DAB staining (K3468, Dako), counterstained with hematoxylin QS (H-3404, Vector Laboratories, Burlingame, CA, USA), dehydrated and mounted. Three independent investigators semi-quantitatively assessed the HSPB7 positivity without prior knowledge of the clinicopathological data. According to the intensity of HSPB7 staining, samples were evaluated as: negative (-), weakly positive (+), moderately positive (++), or strongly positive (+++). HSPB7-negative or weakly positive samples (-/+) were considered to show low expression, and moderately or strongly positive samples (++/+++) high expression.
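The standard curve method mentioned above can be sketched as follows: fit Ct against log10 copy number for a dilution series, invert the fit for unknown samples, and normalize the target gene to B2M. All numbers in the sketch are illustrative.

```python
# Sketch of standard-curve quantification for qPCR, with B2M normalization.
# The Ct values and copy numbers below are illustrative only.
import numpy as np

def fit_standard_curve(copies, ct):
    """Fit Ct = slope*log10(copies) + intercept; return (slope, intercept)."""
    return np.polyfit(np.log10(copies), ct, deg=1)

def quantify(ct, slope, intercept):
    """Invert the standard curve to estimate copy number from a Ct value."""
    return 10 ** ((ct - intercept) / slope)

copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
ct_std = np.array([30.1, 26.8, 23.4, 20.0, 16.7])  # illustrative dilution series
slope, intercept = fit_standard_curve(copies, ct_std)

hspb7 = quantify(24.9, slope, intercept)  # target gene in one sample
b2m = quantify(18.2, slope, intercept)    # reference gene in the same sample
print(f"normalized HSPB7 expression: {hspb7 / b2m:.4f}")
```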
5-Aza-dC treatment. For a negative control, 5 RCC cell lines were treated with DMSO alone for 4 days. For the 5-Aza-dC group, cells were treated with DMSO for 1 day, followed by 5-Aza-dC treatment (1, 3, or 10 µM) for 3 days. On the fifth day, total RNAs of all cells were isolated using RNeasy mini kits (Qiagen, Valencia, CA, USA) according to the manufacturer's directions. qPCR was subsequently performed to detect the expression of HSPB7. To detect the protein level of HSPB7 in the 5 RCC cell lines after the same treatment (1 µM 5-Aza-dC was used in the 5-Aza-dC group), western blot and immunocytochemical (ICC) analyses were performed. Construction of HSPB7 expression vector. To construct an HSPB7 expression vector, the entire coding sequence of HSPB7 cDNA (RefSeq accession NM_014424.4) was amplified by PCR using KOD-Plus DNA polymerase (Toyobo, Osaka, Japan). The primers used for the PCR reaction were 5'-AAAGAATTCCGTCCGTGGATGAGCCACAG-3' (forward) and 5'-TTTCTCGAGGATTTTGATCTCCGTCCGGA-3' (reverse). The PCR product was inserted into the EcoRI (Takara) and XhoI (Takara) sites of the pCAGGSnHC expression vector containing the HA tag. The sequence and protein expression of pCAGGSnHC-HSPB7-HA were confirmed by DNA sequencing, western blot and ICC analyses. Immunocytochemistry (ICC). Five RCC cell lines were seeded on the Lab-Tek II chamber slide system (Nalge Nunc International). On day 5 after the 5-Aza-dC treatment, the cells were fixed with 4% paraformaldehyde in PBS for 10 min and permeabilized with 0.2% Triton X-100 in PBS for 5 min at room temperature. Cells were covered with blocking solution (3% BSA in PBS containing 0.2% Triton X-100) for 60 min at room temperature. The cells were then incubated with rabbit anti-human HSPB7 polyclonal antibody (Proteintech, diluted 1:250) overnight at 4˚C, followed by an Alexa Fluor 488 goat anti-rabbit IgG antibody (Molecular Probes, Eugene, OR, USA, diluted 1:1,000) for 1 h at room temperature. PBS or 0.2% Triton X-100 in PBS was used for washing after each step. Cells were then stained with DAPI (Vector) and viewed with a laser scanning spectral confocal microscope (Leica TCS SP2). Colony formation assay. Cells were plated in a 6-well plate and transfected with pCAGGSnHC-HSPB7-HA or empty vector using FuGENE 6 (ACHN and Caki-1) or Lipofectamine LTX (Caki-2, A498 and 786-O) transfection reagent (Roche). After 48 h of transfection, cells were selected with G418 (Gibco) for 14-21 days. Colonies (>1 mm diameter) were counted using ImageJ software after being fixed with methanol and stained with 0.1% crystal violet. The experiment was carried out twice in duplicate wells. DNA-damaging treatments. When cells reached 60-70% confluence in the culture dish, HCT116 (p53-/-) and HCT116 (p53+/+) cells were incubated with adriamycin for 2 h at the indicated concentration. The cells were harvested at different time points after DNA-damaging treatment as indicated in the figure legends. Replication-deficient recombinant adenovirus encoding p53 (Ad-p53) or LacZ (Ad-LacZ) was generated and purified as previously described (18,19). NCI-H1299 lung cancer cells were infected with viral solutions at an indicated multiplicity of infection (MOI) and incubated at 37˚C until harvest. Statistical analysis. All statistical analyses, including t-test and Fisher's exact test, were carried out using SPSS software (version 17). Data are shown as mean ± SD. All tests were 2-sided and a p-value of <0.05 was considered to indicate a statistically significant difference.
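The ImageJ colony count described above can be approximated in Python; the sketch below uses scikit-image, and the file name, pixel calibration, and Otsu thresholding are all assumptions rather than the authors' actual workflow.

```python
# Rough Python analogue of the ImageJ colony count (>1 mm diameter).
# File name, pixel calibration, and Otsu thresholding are assumptions.
import math
from skimage import io, filters, measure

MM_PER_PIXEL = 0.05  # assumed calibration of the plate scan

img = io.imread("crystal_violet_plate.png", as_gray=True)
mask = img < filters.threshold_otsu(img)  # stained colonies are dark
labels = measure.label(mask)

min_area_px = math.pi * (0.5 / MM_PER_PIXEL) ** 2  # area of a 1 mm circle
count = sum(1 for r in measure.regionprops(labels) if r.area >= min_area_px)
print(f"colonies >1 mm: {count}")
```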
Downregulation of HSPB7 in RCC. Based on the analysis of microarray data from 15 clear cell renal cell carcinomas, we found HSPB7 to be significantly and commonly downregulated in RCC. qPCR experiments confirmed its downregulation in 11 (85%) of 13 RCC tissues and in all five RCC cell lines (Fig. 1A and B), compared with their corresponding normal controls. IHC analysis of a tissue array consisting of 11 pairs of human RCC samples revealed that the expression of HSPB7 was significantly higher in normal kidney tissues than in RCC tissues (Fig. 1C and Table I). We also detected HSPB7 expression mainly in the cytoplasm of normal renal tubular epithelial cells. To explore the expression patterns of HSPB7 in other normal tissues, we performed qPCR analysis using mRNAs isolated from 25 normal tissues. HSPB7 expression was detected ubiquitously in human tissues (Fig. 2).

Figure 1. Downregulation of HSPB7 in RCC. (A) qPCR analysis shows that HSPB7 mRNA expression was significantly downregulated in 11 (85%) of 13 RCC tissues compared with the normal renal tissue, and (B) in all five RCC cell lines compared with normal HEK293 and RPTEC cells. T and N represent RCC tissue samples and normal renal tissue, respectively. B2M (β2 microglobulin) was used for normalization of expression levels. Values are expressed as the mean ± SD. (C) IHC analysis of a tissue array consisting of 11 pairs of human RCC samples reveals that the expression of HSPB7 was significantly higher in normal kidney tissues than in RCC tissues. According to the intensity of HSPB7 staining, samples were evaluated as negative (-), weakly positive (+), moderately positive (++), or strongly positive (+++); -/+ were considered low expression and ++/+++ high expression. A summary of the IHC results is shown in Table I.

5-Aza-dC treatment restores HSPB7 expression in RCC cell lines. To investigate whether the methylation status of the HSPB7 gene could affect HSPB7 expression in RCCs, 5 RCC cell lines, Caki-1, Caki-2, ACHN, 786-O and A498, were treated with the demethylating agent 5-Aza-dC, and the expression levels of HSPB7 were then analyzed by qPCR, western blot and ICC analyses. We found that HSPB7 mRNA expression was restored in all 5 RCC cell lines by the treatment with 5-Aza-dC (Fig. 3A), and HSPB7 protein expression could also be detected in the two cell lines, 786-O and A498, in which mRNA expression was most highly induced (Fig. 3B), indicating that suppression of HSPB7 in RCC was probably caused by DNA hypermethylation. We performed exon sequencing of HSPB7 in these five RCC cell lines, but no mutation or deletion/insertion was detected (data not shown). Hypermethylation of HSPB7 in RCC. To confirm the methylation status of the HSPB7 gene, bisulfite sequencing was performed for the 5 RCC cell lines Caki-1, Caki-2, ACHN, 786-O and A498 as well as 2 normal renal cell lines, RPTEC and HEK293. We first screened two CpG islands, regions 1 and 2 shown in Fig. 3C, but no significant difference in methylation status was found in these two regions between normal and cancer cell lines. Then, we performed a second screening for regions 3 and 4 (Fig. 3C) (we also screened the other regions in normal cells, but data are not shown). In region 4, we found significantly higher levels of methylation in the 5 RCC cell lines than in the 2 normal renal cell lines.
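CpG-island screening of the kind used to pick regions 1 and 2 is commonly based on the Gardiner-Garden criteria (length ≥ 200 bp, GC content > 50%, observed/expected CpG ratio > 0.6); the sketch below checks a sequence window against those standard criteria, independently of whichever tool the authors used.

```python
# Sketch of CpG-island screening by the Gardiner-Garden criteria:
# GC content > 0.5 and observed/expected CpG ratio > 0.6 over >= 200 bp.
def cpg_stats(seq: str):
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")
    gc_content = (c + g) / n
    # expected CpG under base independence: (C count * G count) / length
    obs_exp = cpg * n / (c * g) if c and g else 0.0
    return gc_content, obs_exp

def is_cpg_island(seq: str) -> bool:
    if len(seq) < 200:
        return False
    gc, oe = cpg_stats(seq)
    return gc > 0.5 and oe > 0.6

print(is_cpg_island("CG" * 150))  # a trivially CpG-rich 300 bp example -> True
```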
Ectopic HSPB7 expression suppresses RCC cell clonogenicity. To study the effect of HSPB7 expression on tumor growth, Caki-1 and ACHN cells were transfected with the HSPB7 expression vector pCAGGSnHC-HSPB7-HA. Introduction of HSPB7 into these two cancer cell lines caused a significant decrease in the number of colonies, compared with the corresponding mock-transfected controls (Fig. 4A). We also performed colony formation assays in 3 other RCC cell lines (Caki-2, A498 and 786-O) using the same vectors, and confirmed similar growth-suppressive effects (Fig. 4B), implying that HSPB7 may function as a tumor suppressor gene. HSPB7 is regulated by p53. To further elucidate its biological significance, we first investigated the possible involvement of HSPB7 in the p53 pathway, because αB-crystallin, one of the small heat shock protein family members, was reported to be induced by p53 (22,23). We applied qPCR analysis to evaluate the expression of HSPB7 in the NCI-H1299 (p53-null) cell line with or without introduction of p53 using the adenovirus system. After infection with Ad-p53, we observed induction of HSPB7 in a dose- and time-dependent manner (Fig. 5A and B), while no induction was observed in the control cells. After 48-hour treatment with 40 MOI of Ad-p53, the expression level of HSPB7 became nearly 5 times higher than in the control cells (Fig. 5A). Induction of HSPB7 was also confirmed under treatment with a relatively lower dose of Ad-p53 (8 MOI) at different time points. Concordantly, DNA damage by adriamycin treatment induced HSPB7 expression in HCT116 cells with wild-type p53, but not in HCT116 cells without wild-type p53 (Fig. 5C and D), indicating that HSPB7 expression is regulated by wild-type p53. To further investigate whether HSPB7 is directly regulated by p53, we screened two possible p53-binding sites indicated by the p53-binding site search software developed by us, but neither of these two candidate sites was confirmed to be a direct p53-binding site (data not shown). Although there might be other sites to which p53 binds, and we are unable to conclude whether HSPB7 is directly or indirectly regulated by p53, it is certain that HSPB7 expression is inducible by wild-type p53. Discussion Little is known about the biological function of HSPB7, a member of the small heat shock protein family that is characterized by possessing a conserved α-crystallin domain. HSPB7 has been shown to interact with the cytoskeletal protein α-filamin (24) as well as other small heat shock proteins (25). HSPB7 is a non-canonical HSPB protein that prevents the aggregation of polyQ proteins in an active autophagy machinery, although overexpression of HSPB7 alone did not affect the autophagy event (26). Several genome-wide association studies found that SNPs in the HSPB7 gene were strongly associated with idiopathic cardiomyopathies and heart failure (27)(28)(29)(30)(31). Recently, HSPB7 was suggested to regulate early developmental steps in cardiac morphogenesis (32). However, the involvement of HSPB7 in carcinogenesis has not been described. Through the genome-wide expression analysis in RCCs, we identified HSPB7 as a candidate tumor suppressor gene because of its common and significant downregulation in RCCs.
Subsequent functional analysis revealed that HSPB7 is downregulated in cancer cells by hypermethylation. Bisulfite sequencing of genomic regions of HSPB7 confirmed hypermethylation in RCC cell lines. Although region 4 (Fig. 3C) contained no CpG island, we observed significantly higher levels of methylation in RCC cell lines than in normal cell lines. Consistently, restoration of HSPB7 expression was observed upon treatment of cancer cells with 5-Aza-dC. In addition, since no somatic changes in the coding regions of the HSPB7 gene were found in our sequence analysis of RCC cell lines or in the COSMIC database, HSPB7 in RCC is considered to be downregulated mostly by hypermethylation. The second key finding in this study is that HSPB7 showed a growth-suppressive effect in cancer cells. Ectopic expression of HSPB7 significantly impaired the colony-forming ability of 5 RCC cell lines, indicating that HSPB7 may function as a tumor suppressor gene. Similarly, αB-crystallin, one of the small heat shock protein family members, was also indicated to function as a tumor suppressor in nasopharyngeal carcinoma cells (33). Furthermore, the region on chromosome 1p36.23-p34.3, where HSPB7 is located, showed frequent loss of heterozygosity in many types of solid tumors (34). However, further studies are needed to clarify the detailed tumor suppressor function of HSPB7 in RCC. The third important finding in this study is that HSPB7 is likely to be involved in the p53 pathway. The expression of HSPB7 was significantly induced in a p53-dependent manner, as clearly demonstrated by two experiments: i) introduction of adeno-p53 into p53-negative cancer cells showed strong induction of HSPB7, and ii) DNA-damage-dependent induction of HSPB7 was observed in HCT116 cells with wild-type p53, but not in those lacking p53. Although we failed to identify the p53-binding site in or near the HSPB7 gene, these two pieces of evidence strongly imply a critical role of HSPB7 as a direct or indirect p53-signal transducer, and its downregulation may be involved in the development of various types of cancer including RCC. In conclusion, we carried out a genome-wide gene expression analysis and identified HSPB7 as a candidate tumor suppressor gene in RCC. We confirmed the downregulation of this gene caused by DNA hypermethylation, its growth-suppressive effect in RCC cell lines and its p53-dependent expression, indicating important roles of HSPB7 in renal carcinogenesis. Our findings could contribute to a better understanding of the novel function of HSPB7 in cancer.
Quantifying Multipartite Quantum Entanglement in a Semi-Device-Independent Manner We propose two semi-device-independent approaches that are able to quantify unknown multipartite quantum entanglement experimentally, where the only information that has to be known beforehand is the quantum dimension, and the concept that plays a key role is nondegenerate Bell inequalities. Specifically, using the nondegeneracy of multipartite Bell inequalities, we obtain useful information on the purity of the target quantum state. Combined with an estimate of the maximal overlap between the target state and pure product states and a continuous property of the geometric measure of entanglement that we shall prove, the information on purity allows us to give a lower bound for this entanglement measure. In addition, we show that a different combination of the above results also converts to a lower bound for the relative entropy of entanglement. As a demonstration, we apply our approach to 5-partite qubit systems with the MABK inequality, and show that useful lower bounds for the geometric measure of entanglement can be obtained if the Bell value is larger than 3.60, and those for the relative entropy of entanglement can be given if the Bell value is larger than 3.80, where the Tsirelson bound is 4. I. INTRODUCTION Quantum entanglement plays a fundamental role in quantum physics and quantum information, where it often serves as the key factor in physical effects or the key resource in information processing tasks [1,2]. Therefore, how to certify the existence of quantum entanglement and even quantify it in physical experiments are two important problems. However, due to the imperfection of quantum operations and inevitable quantum noise, fulfilling these two tasks reliably is extremely challenging. As a result, though some methods like entanglement witnesses have been applied widely in quantum laboratories [3], they usually depend heavily on accurate knowledge of the involved quantum systems, and possibly give incorrect results when it is not fully available [4]. Meanwhile, some other methods, like quantum tomography, consume too many resources, making it hard to apply them to large systems [5,6]. To overcome these difficulties, a promising idea is to design protocols for these tasks in such a way that the assumptions needed beforehand on the involved quantum systems, particularly on the precision of quantum devices or quantum operations, are as weak as possible, which allows us to draw reliable conclusions on the aspects of quantum entanglement in which we are interested. Following this idea, various device-independent approaches have been proposed to tackle the problem of characterizing unknown quantum entanglement [7][8][9][10][11]. The key idea of these approaches is that the judgements are based only on the quantum nonlocality that we can observe reliably in quantum laboratories, where one has to build nontrivial relations between quantum nonlocality and the aspects of quantum entanglement that we want to know. Indeed, a lot of interesting results of this kind have been reported or even demonstrated to certify the existence of genuine multipartite entanglement [12][13][14][15][16][17][18]. If we focus only on the issue of quantifying unknown quantum entanglement experimentally, a lot of results have also been reported under the idea of device-independence [19][20][21][22][23]. For example, inspired by the Navascues-Pironio-Acin (NPA) method [24], in Ref.
[21] a device-independent approach to quantify the negativity, a measure of entanglement [25], was provided. In Ref. [20], based on the idea of semiquantum nonlocal games [26], an approach that quantifies negative-partial-transposition entanglement was reported, where one does not have to put any trust in the measurement devices. In Ref. [22], a new method with excellent performance was proposed to characterize the quantitative relation between entanglement measures and Clauser-Horne-Shimony-Holt inequality violations. Particularly, in Ref. [23] another general approach that is able to provide analytic results on entanglement measures, like the entanglement of distillation and the entanglement of formation, was proposed. Basically, this is a semi-device-independent approach, where the only assumption that we have to make beforehand is the quantum dimension, and the key idea of this approach is the concept of nondegenerate Bell inequalities, which plays a crucial role in providing nontrivial information on the purities of target quantum states. As a result, the purity information allows us to quantify the target entanglement by lower bounding the coherent information, which is known to be a lower bound for the entanglement measures that we are interested in [27]. However, an apparent drawback of the approach in Ref. [23] is that it only works for bipartite entanglement. For multipartite quantum entanglement, it has been known that its mathematical characterization, especially quantification, is a notoriously hard problem. However, it turns out that the geometric measure of entanglement (GME) and the relative entropy of entanglement (REE) are two quite successful measures for multipartite entanglement [28][29][30][31]. In this paper, we propose two theoretical approaches to quantify these two measures for unknown multipartite quantum states in a semi-device-independent manner. The concept of nondegenerate Bell inequalities is essential to these approaches. Indeed, combined with the purity information provided by applying a nondegenerate Bell inequality to experimental statistics, we manage to lower bound the GME by proving a continuous property of this entanglement measure. Furthermore, with the help of the purity information, we show that the REE can also be quantified by estimating the maximal overlap between the target state and pure product states. To achieve these tasks, we need to certify the nondegeneracy of multipartite Bell inequalities. As a demonstration of our approaches, we show that the Mermin-Ardehali-Belinskii-Klyshko (MABK) inequality [32][33][34] is nondegenerate for qubit systems, and then we demonstrate that nontrivial lower bounds on the GME and the REE of multipartite quantum entanglement can be obtained when the violation of the MABK inequality is sufficient, where it can be seen that the approaches have decent performance. II. NONDEGENERATE BELL INEQUALITIES Bell inequalities are crucial tools in the current paper, and in history they played a key role in the development of quantum mechanics [39]. In a so-called n-partite Bell setting, n space-separated parties share a physical system. Each party, say i ∈ [n] ≡ {1, 2, ..., n}, has a set of measurement devices labelled by a finite set X_i, and the corresponding set of possible measurement outcomes is labelled by a finite set A_i.
Without communication, all parties choose random measurement devices from their own X_i to measure their subsystems respectively, and record the outcomes. By repeating the whole process sufficiently many times, they find out the joint probability distribution of outcomes for any given choice of measurement devices, denoted p(a_1 a_2 ... a_n | x_1 x_2 ... x_n). For simplicity, we call the above joint probability distribution a quantum correlation, and write it as p(a|x) (or just p if the context is clear), where a = (a_1 a_2 ... a_n) and x = (x_1 x_2 ... x_n). Then a (linear) Bell inequality is a relation that p(a|x) must obey if the system is classical, and it can be expressed as

I(p) = Σ_x Σ_a c_x^a p(a|x) ≤ C_l,

where for any x and a, c_x^a is a real number. However, a remarkable fact about quantum mechanics is that, if the shared physical system is quantum, Bell inequalities can be violated. Suppose the shared quantum state is ρ; then according to quantum mechanics p(a|x) can be written as

p(a|x) = Tr[(M_{x_1}^{a_1} ⊗ M_{x_2}^{a_2} ⊗ ... ⊗ M_{x_n}^{a_n}) ρ],

where for any i and x_i, M_{x_i}^{a_i} is the measurement operator with outcome a_i for the measurement with label x_i performed by the i-th party. For convenience of later discussions, we let I(ρ, M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}) be the Bell value achieved by ρ and the M_{x_i}^{a_i}. Then, as mentioned above, if we let

C_q = max I(ρ, M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}),

where the maximum is taken over all possibilities of ρ and M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}, then it is possible that C_q > C_l, indicating that quantum systems are able to produce stronger correlations than classical ones. In the joint quantum system, suppose the dimensions of the subsystems are d_1, d_2, ..., d_n respectively; then we call the vector d ≡ (d_1 d_2 ... d_n) the dimension vector of the joint system. In this paper, we are interested in the maximal value of I(ρ, M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}) for a fixed dimension vector d. Similarly to C_q, we denote it by C_q(d). The concept of nondegenerate Bell inequalities was proposed when studying bipartite quantum systems [23]. As we will see later, it can also be applied in the multipartite case, and it plays a key role in entanglement measure quantification. Suppose I ≤ C_l is a Bell inequality for an n-partite system with dimension vector d. We say that I is nondegenerate with parameters ε_1 and ε_2 (0 ≤ ε_1 < ε_2) if for any two quantum states of this system, |α⟩ and |β⟩ with ⟨α|β⟩ = 0, and any quantum measurement sets M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}, the relation I(|α⟩⟨α|, M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}) ≥ C_q(d) − ε_1 implies I(|β⟩⟨β|, M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}) ≤ C_q(d) − ε_2. Roughly speaking, if I is a nondegenerate Bell inequality on dimension vector d, then for any two orthogonal quantum states, at most one of them is able to violate I remarkably using the same measurements. We define the Bell operator

M = Σ_x Σ_a c_x^a (M_{x_1}^{a_1} ⊗ M_{x_2}^{a_2} ⊗ ... ⊗ M_{x_n}^{a_n});

then it can be seen that M is a Hermitian operator, and for any ρ with dimension vector d it holds that I(ρ, M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}) = Tr(Mρ). Let C(I, d, k) denote the maximum of the sum of the k largest eigenvalues of M, where the maximum is taken over all possible local quantum measurements. Then we immediately have that C_q(d) = C(I, d, 1). Furthermore, an important fact that allows us to certify the nondegeneracy of Bell inequalities is that, for any multipartite Bell inequality I and any dimension vector d, I is nondegenerate if and only if C(I, d, 2) < 2C(I, d, 1), and when I is nondegenerate, the parameters can be chosen according to the relations ε_1 < C(I, d, 1) − C(I, d, 2)/2 and ε_1 + ε_2 = 2C(I, d, 1) − C(I, d, 2) [23].
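To make the certification criterion C(I, d, 2) < 2C(I, d, 1) concrete, the following sketch (our illustration, not the authors' code) builds the three-qubit Mermin (MABK) Bell operator for one fixed choice of σ_x/σ_y measurements and inspects its two largest eigenvalues; the true C(I, d, k) additionally maximizes over all local measurements, and the normalization used here (classical bound 2, quantum bound 4) is one common convention.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Three-qubit Mermin operator in one common convention:
# M = XXX - XYY - YXY - YYX (classical bound 2, quantum bound 4).
M = (kron3(sx, sx, sx) - kron3(sx, sy, sy)
     - kron3(sy, sx, sy) - kron3(sy, sy, sx))

eigs = np.sort(np.linalg.eigvalsh(M))[::-1]
print(f"lambda_1 = {eigs[0]:.3f}")                       # 4.000, a GHZ state
print(f"lambda_1 + lambda_2 = {eigs[0] + eigs[1]:.3f}")  # 4.000 here
# For these fixed measurements the two-eigenvalue sum is strictly below
# 2 * lambda_1, the pattern the nondegeneracy criterion asks for; the true
# C(I, d, k) would maximize these quantities over all local measurements.
```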
III. QUANTIFYING THE GEOMETRIC MEASURE OF ENTANGLEMENT The geometric measure of entanglement is a well-known measure for multipartite quantum entanglement [28,29]. Suppose |ψ⟩ is a pure state of a joint system composed of n subsystems. Define G(|ψ⟩) to be the maximal overlap between |ψ⟩ and a product pure state, that is to say,

G(|ψ⟩) = max_{|φ⟩ ∈ sep_n} |⟨φ|ψ⟩|,

where sep_n is the set of n-partite product pure states. Then for |ψ⟩, its geometric measure of entanglement is defined to be

E_G(|ψ⟩) = 1 − G(|ψ⟩)².

For a mixed state ρ of this joint system, the geometric measure can be defined by the convex roof construction, which is

E_G(ρ) = min Σ_j p_j E_G(|ψ_j⟩),

where the minimum is taken over all pure-state ensembles {p_j, |ψ_j⟩} of ρ. The GME has many nontrivial applications in quantum physics and quantum information, for example quantifying the difficulty of multipartite state discrimination under local operations and classical communication (LOCC) [35], constructing entanglement witnesses [29,36], characterizing ground states of condensed matter systems and detecting phase transitions [37,38], and so on. Therefore, it will be very nice if we can quantify the GME reliably in quantum laboratories. We now show how the concept of nondegenerate Bell inequalities allows us to achieve this; the approach is composed of the three steps below. Step 1. Suppose ρ is the global state that produces the quantum correlation p(a|x). Let the underlying measurements be M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}; that is, p(a|x) = Tr[(M_{x_1}^{a_1} ⊗ ... ⊗ M_{x_n}^{a_n}) ρ]. Now, since a crucial component in the definition of GME is the maximal overlap with product pure states, we wish to quantify the related fidelity max_{|φ⟩ ∈ sep_n} F(|φ⟩⟨φ|, ρ) = max_{|φ⟩ ∈ sep_n} ⟨φ|ρ|φ⟩ in a fully device-independent manner, where F is the fidelity and sep_n is the set of product pure states. Suppose |φ⟩ ∈ sep_n is the state that maximizes F(|φ⟩⟨φ|, ρ). Let q*(a|x) be the correlation produced by |φ⟩ upon the measurements M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}. Since |φ⟩ is a product pure state, the correlation q* is a product correlation; that is, there exist probability distributions q_1, ..., q_n such that q*(a|x) = Π_{i=1}^n q_i(a_i|x_i). When ρ and |φ⟩ are measured, the fidelity between them cannot decrease [2]; that is, for any x the resulting probability distributions p_x ≡ p(·|x) and q*_x ≡ q*(·|x) satisfy F(q*_x, p_x) ≥ F(|φ⟩⟨φ|, ρ); hence it holds that min_x F(q*_x, p_x) ≥ F(|φ⟩⟨φ|, ρ). Since q* is a product correlation, we have

min_x F(q*_x, p_x) ≤ max_q min_x F(q_x, p_x),

where the outermost maximization is over product correlations q and q_x ≡ q(·|x). By the max-min inequality, it holds that

max_q min_x F(q_x, p_x) ≤ min_x max_q F(q_x, p_x).

Then, by numerical calculations on the correlation data, we can get an upper bound on the fidelity between the target state and a pure product state, denoted F̂. For example, once x is fixed, the inner maximization can be computed using symmetric embedding [41] and the shifted higher-order power method (SHOPM) algorithm [42], yielding a correct answer up to numerical precision with very high probability (see also Ref. [43]).
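As an illustration of how F̂ can be computed from data, the sketch below evaluates min_x max_q F(q_x, p_x) by finding the best rank-1 (product) approximation of the tensor √p_x with plain alternating power iterations; this is a simplified stand-in for the symmetric-embedding/SHOPM route cited above and may in principle return a local optimum. Here F(q, p) = (Σ_a √(q(a)p(a)))², matching the squared-overlap convention used in the text.

```python
import numpy as np

def max_product_fidelity(p_x, iters=200):
    """Best product-distribution fidelity with p_x (shape (d1, ..., dn)):
    max_q F(q, p_x) with F(q, p) = (sum_a sqrt(q_a * p_a))**2, computed as
    a rank-1 approximation of sqrt(p_x) by alternating power iterations."""
    t = np.sqrt(np.asarray(p_x, dtype=float))
    us = [np.full(d, 1 / np.sqrt(d)) for d in t.shape]  # unit vectors sqrt(q_i)
    for _ in range(iters):
        for i in range(t.ndim):
            cur = t
            for j in reversed(range(t.ndim)):  # contract every axis except i
                if j != i:
                    cur = np.tensordot(cur, us[j], axes=([j], [0]))
            us[i] = cur / np.linalg.norm(cur)
    overlap = t
    for j in reversed(range(t.ndim)):
        overlap = np.tensordot(overlap, us[j], axes=([j], [0]))
    return float(overlap) ** 2

def f_hat_upper_bound(correlations):
    """correlations: dict mapping each setting x to its outcome tensor p_x.
    Returns min_x max_q F(q_x, p_x), an upper bound on <phi|rho|phi>."""
    return min(max_product_fidelity(p) for p in correlations.values())
```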
Step 2. Since computing the GME of a mixed state requires a complicated optimization over ensembles, it would be ideal for the quantification of GME if ρ were a pure state. Therefore, we wish to bound the purity of ρ, defined as Tr(ρ²), from below, which is accomplished by the nondegeneracy property of Bell inequalities [23]. Let ρ = Σ_i a_i |ψ_i⟩⟨ψ_i| be the spectral decomposition of ρ. Suppose I is a nondegenerate Bell expression with parameters ε_1 and ε_2 satisfying 0 ≤ ε_1 < ε_2. If

I(ρ, M_{x_1}^{a_1}, ..., M_{x_n}^{a_n}) ≥ C_q(d) − ε_1,

then, since the Bell value of ρ is the a_i-weighted average of the Bell values of its eigenstates, some eigenstate |ψ_i⟩ achieves a Bell value of at least C_q(d) − ε_1, and by nondegeneracy every other eigenstate achieves at most C_q(d) − ε_2. This implies that a_i ≥ 1 − ε_1/ε_2. Since the order of eigenstates in the spectral decomposition is arbitrary, for convenience we now relabel the index i found above to 1; then it holds that a_1 ≥ 1 − ε_1/ε_2. This allows us to lower bound the purity. Step 3. In the previous two steps, we obtained a lower bound for a_1 in the spectral decomposition of ρ and an upper bound F̂ for F(|φ⟩⟨φ|, ρ) = ⟨φ|ρ|φ⟩ among all product pure states |φ⟩. The following theorem shows that, if F̂ ≤ a_1, then we can derive a lower bound for E_G(ρ) by proving a continuous property of GME. The proof of this theorem can be seen in the appendix. Theorem. Suppose F̂ ≤ a_1. Then it holds that

E_G(ρ) ≥ (a_1 − c²)/(1 − c²) · [1 − (cF̂/√a_1 + √(1 − c²) · √(1 − F̂²/a_1))²] for any c ∈ [F̂/√a_1, √a_1].

In particular, if ρ is a pure state, then it holds that a_1 = 1. In that case, taking c → 1, the lower bound above reads

E_G(ρ) ≥ 1 − F̂²,

which agrees with the definition of the GME on pure states, indicating that our lower bound is tight in this case. Therefore, combining all three steps above, we obtain a semi-device-independent approach to quantify the GME of unknown multipartite entanglement. We now demonstrate that this approach indeed works well by quantifying the GME of n-partite qubit systems with the MABK inequality (n = 3, 5); recall that this inequality is nondegenerate for qubit systems. At the same time, we would like to stress that in principle the approach can be applied to any multipartite quantum system with known dimensions. There exist many configurations that achieve the maximal violation of the MABK inequality, and it turns out that they are essentially equivalent [44]. For example, one can let the state be the n-qubit GHZ state |Φ⟩ = (|0⟩^⊗n + |1⟩^⊗n)/√2 (up to local unitaries), and then measure the observables σ_x and σ_y on each qubit. That is, for each site, we select measurements in the bases {|+⟩, |−⟩} and {|+i⟩, |−i⟩}, where |±⟩ = (|0⟩ ± |1⟩)/√2 and |±i⟩ = (|0⟩ ± i|1⟩)/√2. To obtain physical statistical data for the Bell experiments, we perturb the state |Φ⟩ and the above optimal measurements, which produces a series of legitimate quantum correlations. We then apply our approach to each correlation. The result is shown in Fig. 1.

FIG. 1: Lower bounds for the GME and the REE, where n = 3, 5. Note that the maximal Bell value is 2 (n = 3) and 4 (n = 5), respectively, and we focus on the gaps to the maximal Bell values.

It turns out that when n = 3, if the Bell expression value is more than 1.80, our approach is able to provide a nontrivial result on the GME. As a comparison, the Tsirelson bound for this case is 2. Furthermore, when the violation approaches the maximum, our approach gives a tight result of 0.5, considering that the maximal violation is achieved by |Φ⟩. Similarly, Fig. 1 also illustrates the result of our approach on 5-partite qubit systems, where the same patterns as in the case n = 3 can be observed. Here, nontrivial GME lower bounds can be obtained when the Bell expression value is more than 3.60, where the Tsirelson bound is 4.
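A small numerical sketch of the theorem's bound, as reconstructed above: grid-search over admissible c, given a_1 (for example from a_1 ≥ 1 − ε_1/ε_2 in Step 2) and F̂ from Step 1. The example inputs are illustrative.

```python
import numpy as np

def gme_lower_bound(f_hat: float, a1: float, grid: int = 10_000) -> float:
    """Grid-search the bound E_G(rho) >= mu(c) * (1 - overlap(c)**2)
    over c in [f_hat/sqrt(a1), sqrt(a1)), as in the theorem above."""
    assert 0.0 < f_hat <= a1 <= 1.0
    cs = np.linspace(f_hat / np.sqrt(a1), np.sqrt(a1), grid, endpoint=False)
    mu = (a1 - cs**2) / (1 - cs**2)  # weight of the J1 components
    overlap = cs * f_hat / np.sqrt(a1) + np.sqrt((1 - cs**2) * (1 - f_hat**2 / a1))
    return float(np.max(mu * (1 - overlap**2)))

print(gme_lower_bound(f_hat=0.72, a1=0.95))    # illustrative inputs
print(gme_lower_bound(f_hat=2**-0.5, a1=1.0))  # ~0.5, the GHZ value
```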
IV. QUANTIFYING THE RELATIVE ENTROPY OF ENTANGLEMENT Interestingly, Step 1 and Step 2 introduced above are already sufficient for us to lower bound the relative entropy of entanglement (REE) in a semi-device-independent manner. The REE of ρ is defined to be the minimal relative entropy of ρ to the set of separable states, that is,

E_R(ρ) = min_{σ ∈ D} Tr(ρ log₂ ρ − ρ log₂ σ), (8)

where D is the set of all separable states [30,31]. It turns out that the REE has many profound applications in quantum information theory. For example, for bipartite quantum states, the REE can lower bound the entanglement of formation and upper bound the entanglement of distillation [31]. Therefore, quantifying the REE reliably in experiments is naturally very important and rewarding. We now show that E_R(ρ) has a close relation with the quantity F̂ introduced above. In fact, it has been known that [45]

E_R(ρ) ≥ −log₂ Λ²(ρ) − S(ρ),

where S(ρ) is the von Neumann entropy and Λ²(ρ) ≡ max_{|φ⟩ ∈ sep_n} ⟨φ|ρ|φ⟩. Since max_{|φ⟩ ∈ sep_n} ⟨φ|ρ|φ⟩ ≤ F̂², it holds that

E_R(ρ) ≥ −2 log₂ F̂ − S(ρ). (11)

In the meantime, in Step 2 we obtained a lower bound on the purity of ρ (in terms of a_1). Combining this fact and the approach introduced in Ref. [46], we can derive an upper bound for S(ρ) (see [23] for a complete demonstration). According to Eq. (11), this implies that we are able to lower bound the REE and any other multipartite entanglement measure that is lower bounded by the REE (for example, the generalized robustness of entanglement [45,47,48]). Still using the MABK inequality and the samples of quantum correlations generated above, we test the performance of the second approach; the result can also be seen in Fig. 1. In particular, when n = 3, our approach can give a positive lower bound for the REE when the Bell value is larger than 1.88; when n = 5, it can provide nontrivial results when the Bell value is larger than 3.80. V. CONCLUSION Based on the concept of nondegenerate Bell inequalities, we have shown that multipartite quantum entanglement can be quantified experimentally in a semi-device-independent way. The key information provided by this concept concerns the purity of the target quantum systems. Based on this, by studying the mathematical properties of the geometric measure of entanglement and the relative entropy of entanglement, we can provide nontrivial lower bounds for these two well-known entanglement measures. Our approaches do not need any trust in the precision of the involved quantum devices except for their dimensions, and they have decent performance. We hope that these approaches will prove to be valuable for characterizing unknown multipartite states in future quantum experiments. Theorem. Suppose F̂ and a_1 are defined as in the text, and F̂ ≤ a_1. Then it holds that

E_G(ρ) ≥ (a_1 − c²)/(1 − c²) · [1 − (cF̂/√a_1 + √(1 − c²) · √(1 − F̂²/a_1))²] for any c ∈ [F̂/√a_1, √a_1].

Proof. Suppose ρ = Σ_j ã_j |ψ_j⟩⟨ψ_j| is an ensemble of ρ that attains the GME of ρ, and let |ψ_1⟩ be the eigenstate of ρ with eigenvalue a_1 from the spectral decomposition in Step 2. Let c be a real number in the interval [F̂/√a_1, √a_1]. Consider the sets of indices J_1 = {j : |⟨ψ_1|ψ_j⟩| ≥ c} and J_2 = {j : |⟨ψ_1|ψ_j⟩| < c}, which form a partition of the set of all indices j. Intuitively, the set J_1 consists of components with high fidelity with |ψ_1⟩. Let μ = Σ_{j ∈ J_1} ã_j. Since a_1 = ⟨ψ_1|ρ|ψ_1⟩ = Σ_j ã_j |⟨ψ_1|ψ_j⟩|² ≤ μ + (1 − μ)c², we have

μ ≥ (a_1 − c²)/(1 − c²),

which is a lower bound for the sum of the weights of components whose indices belong to J_1. Note that μ → 1 when a_1 → 1 if c < √a_1, and μ = 1 if a_1 = c = 1. By the definition of F̂, for any product pure state |φ⟩, we have

F̂² ≥ ⟨φ|ρ|φ⟩ ≥ a_1 |⟨φ|ψ_1⟩|²,

thus |⟨φ|ψ_1⟩| ≤ F̂/√a_1. As F̂/√a_1 ≤ c, the inequality above implies that for any j ∈ J_1 and any product pure state |φ⟩,

|⟨φ|ψ_j⟩| ≤ cos(arccos(F̂/√a_1) − arccos(c)) = cF̂/√a_1 + √(1 − c²) · √(1 − F̂²/a_1).

For j ∈ J_2, we upper-bound the overlap via |⟨φ_j|ψ_j⟩| ≤ 1, thereby obtaining a lower bound for the GME of ρ as

E_G(ρ) = Σ_j ã_j E_G(|ψ_j⟩) ≥ μ · [1 − (cF̂/√a_1 + √(1 − c²) · √(1 − F̂²/a_1))²] ≥ (a_1 − c²)/(1 − c²) · [1 − (cF̂/√a_1 + √(1 − c²) · √(1 − F̂²/a_1))²].

Note that the above relation holds for any c ∈ [F̂/√a_1, √a_1], which concludes the proof.
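Returning to the REE bound of Eq. (11), the sketch below combines it with a simple entropy bound: for a fixed largest eigenvalue a_1, S(ρ) is maximized by spreading the remaining weight uniformly over the other d − 1 levels. This crude bound is our stand-in for the tighter approach of Refs. [46, 23]; the numbers are illustrative.

```python
# Sketch: lower-bound the REE via E_R >= -2*log2(F_hat) - S(rho), with S(rho)
# upper-bounded from the largest eigenvalue a1 alone: entropy is maximized by
# spreading the remaining weight 1 - a1 uniformly over the other d - 1 levels.
# This is a simple stand-in for the tighter bound of Refs. [46, 23].
import numpy as np

def max_entropy_given_a1(a1: float, dim: int) -> float:
    """Largest possible von Neumann entropy (bits) given top eigenvalue a1."""
    if a1 >= 1.0:
        return 0.0
    rest = (1 - a1) / (dim - 1)
    return float(-a1 * np.log2(a1) - (dim - 1) * rest * np.log2(rest))

def ree_lower_bound(f_hat: float, a1: float, dim: int) -> float:
    return -2 * np.log2(f_hat) - max_entropy_given_a1(a1, dim)

# Illustrative numbers for a 5-qubit system (dim = 2**5), with a1 obtained
# from the Bell violation via a1 >= 1 - eps1/eps2:
print(ree_lower_bound(f_hat=0.72, a1=0.98, dim=32))
```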
Reduced nicotinamide adenine dinucleotide phosphate-sulfite reductase of enterobacteria. V. Studies with the Escherichia coli hemoflavoprotein depleted of flavin mononucleotide: distinct roles for the flavin adenine dinucleotide and flavin mononucleotide prosthetic groups in catalysis.

Abstract

Escherichia coli NADPH-sulfite reductase (EC 1.8.1.2), molecular weight 670,000, contains 4 FAD, 4 FMN, approximately 16 atoms of non-heme iron and acid-labile sulfide, and 3 to 4 molecules of siroheme per enzyme molecule. NADPH interacts with the flavins, while sulfite interacts with the siroheme component. The sulfite reductase-FMN complex exhibits a dissociation constant (Kdiss) of 10 nM at 25°. All four FMN groups appear to be equivalent. The FAD prosthetic group is bound much more tightly than the FMN. The fluorescence of free FMN is largely quenched when the flavin is enzyme-bound. Sulfite reductase freed of ≥95% of its FMN while retaining ≥85% of its FAD was prepared by irradiation of enzyme solutions, in phosphate buffer 30% saturated in ammonium sulfate, with a strong fluorescent light. Such FMN-depleted enzyme preparations can bind FMN with a Kdiss identical with that of the native enzyme. By comparing the properties of FMN-depleted enzyme, with and without added FMN, with those of native enzyme, the following conclusions were reached: 1. The FAD prosthetic group of sulfite reductase serves as the sole "entry port" for electrons of NADPH. Thus, the ability of the enzyme to bind pyridine nucleotide (4.0 [14C]NADP+ per enzyme molecule; Kdiss = 0.1 mM) and to catalyze electron exchange between pyridine nucleotides (NADPH and 3-acetylpyridine adenine dinucleotide phosphate) is unaffected by the presence or absence of the FMN prosthetic group. The FAD moiety of FMN-depleted enzyme is reducible by NADPH. The FAD is reduced as rapidly by NADPH (k = 190 s^-1) as is the most rapidly reducible flavin component of the native (FMN-containing) enzyme. 2. The FMN prosthetic group is required for transfer of electrons from NADPH, via the FAD, either to enzyme-bound siroheme, and thence to sulfite, or to the exogenous electron acceptor cytochrome c. The majority of the electrons which are transferred from NADPH to the "diaphorase"-type acceptors 2,6-dichlorophenolindophenol, ferricyanide, or menadione by sulfite reductase pass through an FMN-dependent pathway. The Michaelis constant for FMN-dependent reactivation of the NADPH-cytochrome c reductase activity of enzyme from which FMN has been removed, either by irradiation or by dilution of the enzyme, is 8 to 11 nM, a value in good agreement with the Kdiss of the enzyme-FMN complex. Other flavins, including FAD, can substitute for FMN, although FMN exhibited the smallest Km of all flavins tested. The 5'-phosphate group of FMN appears to be quite important for interaction of the flavin with the enzyme as measured by reactivation of enzymatic activity.

(SIEGEL, L. M., KAMIN, H., AND ROSENTHAL, D. (1973) J. Biol. Chem. 248, 2801). The enzyme catalyzes the stoichiometric conversion of sulfite to sulfide at the expense of 3 NADPH. The Km values for sulfite and NADPH are both 4 to 5 µM. Reduced methyl or benzyl viologens can serve as electron donors for sulfite reduction, but NADH cannot.
In addition to sulfite reduction, the enzyme catalyzes the NADPH-dependent reduction of a variety of "diaphorase" acceptors (cytochrome c, ferricyanide, 2,6-dichlorophenolindophenol, menadione, FMN, FAD), as well as NADPH oxidase, NADPH-3-acetylpyridine adenine dinucleotide phosphate transhydrogenase, NADPH-nitrite and -hydroxylamine reductase, and reduced methyl viologen-NADP+ reductase activities. All NADPH-dependent activities examined were competitively inhibited by NADP+. Agents which react with the heme prosthetic group, i.e. CO, cyanide, and arsenite, inhibit the reductions of sulfite, nitrite, and hydroxylamine (with either NADPH or reduced methyl viologen as electron donor), while all other activities are unaffected. Cyanide and CO binding to, and CO dissociation from, the enzyme (determined spectrophotometrically) parallel the respective development and relief of inhibition of NADPH-sulfite reductase activity. Development of inhibition requires the presence of reductant (NADPH) as well as inhibitor, in accord with the observation that CO, cyanide, or arsenite can react with reduced, but not oxidized, enzyme. Treatment of enzyme with 1 µM p-chloromercuriphenylsulfonate causes the dissociation of virtually all of the FMN while permitting retention of FAD and heme. This treatment inhibits all pyridine nucleotide-dependent reactions of the enzyme except the transhydrogenase and FMN reductase. The methyl viologen-sulfite reductase is unaffected. The development of fluorescence due to FMN release parallels the development of the observed inhibitions. The FAD of the FMN-free enzyme is reducible by NADPH, but the heme is not. If exogenous FMN is added, the heme becomes reducible and all NADPH-dependent activities are restored. We have concluded that electron flow from NADPH to sulfite follows the minimum linear sequence: NADPH → FAD → FMN → heme → sulfite. In this scheme, FAD serves as the "entry port" for electrons from NADPH. It can transfer electrons directly to FMN (internal or external) or to pyridine nucleotides and their analogues. The heme is required for electron transfer to sulfite (and nitrite and hydroxylamine). The FMN is required for electron transfer from the reduced FAD to the heme (and hence to acceptors dependent on the heme) or, more directly, to diaphorase-type acceptors and O2. Reduced methyl viologen can donate electrons to both the FMN and the heme. The patterns of inhibition by a variety of salts of the NADPH-cytochrome c and reduced methyl viologen-sulfite reductase reactions are consistent with the hypothesis that these two reactions involve independent portions of the enzyme molecule.

* These studies were supported in part by Research Grants AM-13460 and AM-040663 from the National Institutes of Health, GB-7905 from the National Science Foundation, and Veterans Administration Project No. 7875-01.

He also noted that an NADPH-hydroxylamine reductase activity copurified with the sulfite reductase. Lazzarini and Atkinson (4) further purified this enzyme as an NADPH-nitrite reductase, and Kemp et al. (5) in Atkinson's laboratory subsequently showed that the NADPH-sulfite, nitrite, and hydroxylamine reductions were catalyzed by a single enzyme.
These workers showed that all three activities were inhibitable by arsenite and the mercurial p-chloromercuribenzoate, as well as by cyanide; furthermore, the enzyme preparation contained an NADPH-cytochrome c reductase activity which copurified with the other activities cited, and which was repressed in cysteine-grown organisms. We have purified to homogeneity the NADPH-sulfite reductase of E. coli (6) and shown it to be a complex hemoflavoprotein of molecular weight 670,000. The enzyme contains, per mole, the following prosthetic groups: 4 FAD, 4 FMN, 20 to 21 atoms of iron, 14 to 15 labile sulfides, and 3 to 4 moles of a novel type of heme. This heme has been identified (7) as an octacarboxylic iron-tetrahydroporphyrin of the isobacteriochlorin type (adjacent pyrrole rings reduced), and has now been observed to serve as prosthetic group of several sulfite reductases, both assimilatory and respiratory (8, 9). It has been termed "siroheme" (8). It is our object to describe the catalytic mechanism whereby a 6-electron reduction is accomplished by this hemoflavoprotein, one of the most complex arrays of electron-transport prosthetic groups yet observed in a single enzyme. To this end, we have investigated the interaction of sulfite reductase with a variety of electron donors, acceptors, and inhibitors, and have studied the effect of these agents both upon catalysis and upon the optical properties of the enzyme. The results reported in this paper, some of which have been presented previously in preliminary form (10), support the following conclusions: (a) the site of entry of pyridine nucleotide electrons is probably FAD; (b) the site of interaction of sulfite with enzyme appears to be the heme; (c) the FMN prosthetic group is required for electron transfer between the reduced FAD and the heme. These studies have not as yet assigned a specific role to the non-heme iron-labile sulfide groupings.

NaHSO3, KNO2, NH2OH·HCl, and K3Fe(CN)6 were Baker "Analyzed" reagents. CO, H2, and N2 were purchased from Matheson; the latter two gases were freed of residual oxygen before use by passage through a column of hot copper. PAPS was prepared by the method of Kredich (11). E. coli sulfite reductase was purified by the procedure of Siegel et al. (6); all enzyme samples used in this study had a specific activity of at least 2.8 units per mg.1 Na2 35SO3, specific activity 15 Ci per mole, was purchased from New England Nuclear.

1 The abbreviations used are: p-CMPS, p-chloromercuriphenylsulfonate, monosodium salt; AcPyADP+, 3-acetylpyridine adenine dinucleotide phosphate; AcPyADPH, reduced 3-acetylpyridine adenine dinucleotide phosphate; DCIP, 2,6-dichloroindophenol; MVH, reduced methyl viologen; PAPS, adenylyl sulfate-3-phosphate.

Enzyme Assays-NADPH-dependent reduction reactions were measured in 1.0-ml reaction volumes containing 0.1 M potassium phosphate buffer (pH 7.7), 0.2 mM NADPH, acceptor, and an appropriate amount of enzyme. Acceptors were present at the following concentrations: sulfite, 0.5 mM; nitrite or hydroxylamine, 10 mM; oxygen, 0.25 mM; ferricyanide, menadione, FMN, FAD, DCIP, or cytochrome c, 0.1 mM; AcPyADP+, 0.2 mM.
Rates were measured spectrophotometrically using a Cary model 14 spectrophotometer, with a control solution which for most reactions contained buffer in place of electron acceptor in the reference cuvette; for the NADPH-cytochrome c, DCIP, and AcPyADP+ reductase reactions, in which reduction of acceptor rather than oxidation of NADPH was measured, the control solution contained buffer in place of enzyme. NADPH-ferricyanide reductase and NADPH oxidase activities were also corrected for the nonenzymatic reaction. Absorbance changes were followed at 340 nm for all acceptors except the following: cytochrome c, 550 nm; DCIP, 600 nm; AcPyADP+, 363 nm. MVH-dependent reduction reactions were measured under anaerobic conditions in Thunberg cuvettes fitted with serum caps. Reaction mixtures contained, in 2.5 ml total volume, 0.1 M potassium phosphate buffer (pH 7.7), 0.1 mM MVH, acceptor (0.2 mM sulfite or NADP+), and an appropriate amount of enzyme. Buffer and acceptor, in a 2.3-ml volume, were added to the main compartment of the Thunberg cuvette, and 0.1 ml of enzyme was added to the side arm. The system was bubbled with O2-free N2 for 15 min. The enzyme was then tipped in, and 0.1 ml of MVH (reduced with H2/Pt asbestos) was added with a gas-tight Hamilton syringe to start the reaction. Control mixtures contained buffer in place of electron acceptor. Rates of MVH oxidation were measured spectrophotometrically at 604 nm using a Cary model 14 spectrophotometer.

Other Assays-The concentration of sulfite reductase was determined spectrophotometrically, using an extinction coefficient for the enzyme of 3.1 × 10^5 M^-1 cm^-1 at 386 nm (6). Protein was measured by the Zamenhof (17) adaptation of the microbiuret method described previously (6). Sulfide and sulfite were measured by the methods of Siegel (18) and Grant (19), respectively; concentrations of standard solutions were determined by iodometric titration. FMN and FAD were measured fluorometrically by the procedure of Faeder and Siegel (20); concentrations of standard solutions were determined spectrophotometrically by means of their absorbances at 450 nm, utilizing reported extinction coefficients (1.22 × 10^4 M^-1 cm^-1 for FMN and 1.13 × 10^4 M^-1 cm^-1 for FAD (21, 22)).

Spectroscopic Measurements-Absorption spectra were measured, versus appropriate solvent blanks, with a Cary model 14 spectrophotometer equipped with 0 to 0.1 A and 0 to 1.0 A slide wires. Fluorescence spectra were measured in a Turner model 210 spectrophotofluorometer equipped with a constant energy attachment. For determination of flavin concentrations, an excitation wavelength of 450 nm (band width 10 nm) and an emission wavelength of 535 nm (band width 25 nm) were utilized. Fluorescence polarization measurements were made with a Farrand Mark II spectrophotofluorometer. All spectroscopic measurements were performed at 23-25°, utilizing 1-cm light paths unless otherwise indicated.

Radioactivity Measurements-Radioactivity of 35S-containing solutions was determined on appropriately diluted aliquots (4 ml of aqueous sample plus 16 ml of the xylene-Triton X-114 mixture of Greene (23), with the naphthalene omitted) with a Packard model 3375 Tri-Carb liquid scintillation spectrometer. For all samples of standards and unknowns, measurement of radioactivity was continued until the statistical counting error was less than 1%.
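Since several of the assays above reduce to converting absorbance readings into concentrations and turnover numbers, a short Beer-Lambert sketch may help. The extinction coefficient for the enzyme is the one quoted in the text; the NADPH coefficient at 340 nm (6.22 × 10^3 M^-1 cm^-1) is a standard literature value rather than one given here, and all absorbance readings below are invented for illustration.

# Beer-Lambert arithmetic for the spectrophotometric assays described above.
# A = eps * c * l, so c = A / (eps * l). Readings are illustrative only.

EPS_ENZYME_386 = 3.1e5   # M^-1 cm^-1 at 386 nm (value quoted in the text)
EPS_NADPH_340 = 6.22e3   # M^-1 cm^-1 at 340 nm (standard value; an assumption here)

def conc_from_absorbance(a: float, eps: float, path_cm: float = 1.0) -> float:
    """Concentration (M) from absorbance via the Beer-Lambert law."""
    return a / (eps * path_cm)

# Enzyme stock: hypothetical reading A386 = 0.31 in a 1-cm cuvette
enzyme_m = conc_from_absorbance(0.31, EPS_ENZYME_386)          # about 1.0 uM

# Assay: hypothetical initial rate dA340/dt = 0.0062 per min with 1 nM enzyme
rate_m_per_min = conc_from_absorbance(0.0062, EPS_NADPH_340)   # M NADPH / min
turnover = rate_m_per_min / 1e-9                               # per enzyme per min

print(f"enzyme stock: {enzyme_m * 1e6:.2f} uM")
print(f"turnover: {turnover:.0f} NADPH per enzyme per min")

With these hypothetical readings the sketch returns a turnover of roughly 1000 NADPH per enzyme per min, the same order of magnitude as the sulfite-reduction rates reported below.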
Concentration and Gel Filtration-Ultrafiltration of enzyme solutions was performed with an Amicon concentrator equipped with a Diaflo PM-30 membrane. For removal of low molecular weight solutes from enzyme in ligand-binding experiments, 1.0-ml samples were applied to a column (1.5 × 15 cm) of Sephadex G-25 (coarse) and 1.2-ml fractions were collected. Following either concentration or gel filtration, enzyme content was determined in appropriate fractions by measurement of protein concentration. In all such experiments, recovery of enzyme protein was at least 85%.

Sulfite Reduction Stoichiometry-As shown in Table I, E. coli sulfite reductase catalyzes the stoichiometric reduction of sulfite to sulfide at the expense of 3 NADPH, this stoichiometry being maintained throughout the course of the reaction. For these measurements, a reaction mixture containing NADPH, sulfite, and enzyme was incubated for varying periods during which the amount of NADPH oxidized was followed by the absorbance change at 340 nm. The reaction was stopped by addition of the color-forming reagents for determination of either sulfite or sulfide. Anaerobiosis was maintained to prevent oxygen-dependent consumption of NADPH (due to the NADPH oxidase activity of the enzyme, vide infra) and thereby avoid a spuriously high NADPH:sulfide stoichiometry. The results demonstrate that purified E. coli sulfite reductase can catalyze the complete 6-electron reduction of sulfite to sulfide without the accumulation of significant quantities of sulfur-containing compounds of intermediate oxidation states. This behavior is in marked contrast to that reported for the dissimilatory sulfite reductases of Desulfovibrio (24, 25) and Desulfotomaculum (26), which appear to catalyze an incomplete reduction of sulfite to sulfide, with an observed stoichiometry of 10 to 12 electrons consumed per sulfide produced. With the Desulfovibrio enzyme, sulfur-containing intermediates such as trithionate and thiosulfate have been reported to accumulate in the reaction mixture during the course of sulfite reduction (24-30).

Kinetic Parameters-A series of Lineweaver-Burk plots of the initial velocities of NADPH oxidation at varying sulfite and NADPH concentrations is shown in Fig. 1. From these data, the Vmax of sulfite reduction, at "infinite" concentrations of both reactants, is 1850 NADPH per enzyme per min in 0.1 M potassium phosphate buffer, pH 7.7, at 23°. The Km for sulfite, at infinite concentration of NADPH, is 4.3 µM. The Km for NADPH, at infinite concentration of sulfite, is 4.5 µM (Table II). These values are somewhat lower than those reported previously (sulfite Km = 7 to 9 µM, NADPH Km = 18 to 60 µM (5, 31)), but are considered more reliable, since the present measurements were obtained with 5- and 10-cm light paths and a spectrophotometer with a 0 to 0.1 A slide wire, where necessary, to facilitate measurement with substrate concentrations in the 1 to 10 µM range; all previous data were obtained using 1-cm light paths.

TABLE I. Stoichiometry of NADPH-dependent sulfite reduction. Reaction mixtures contained in a 3.0-ml total volume: 0.1 M potassium phosphate, pH 7.7; 200 µM NADPH; 80 µM NaHSO3; 25 nM sulfite reductase; 10 units per ml of glucose oxidase; and 10 mM glucose. The mixtures were present in anaerobic cuvettes (stoppered with tight-fitting serum caps) of 1-cm light path, and reactions were initiated by injecting N2-bubbled solutions containing all substrates with 5 µl each of glucose oxidase and sulfite reductase, in succession, with a period of 60 s between injections. NADPH oxidation was followed by the decrease in absorbance at 340 nm with a Cary model 14 spectrophotometer. Absorbance readings were initiated approximately 10 s after injection of sulfite reductase, and the ΔA340 was extrapolated back to the time of injection.
At each of the indicated times, the reaction was stopped by addition of the color-forming reagents used in the determination of sulfide (Experiment 1) or sulfite (Experiment 2). The amount of sulfite in each reaction mixture was subtracted from the amount present in a control reaction mixture from which sulfite reductase was omitted. The amount of sulfide in each reaction mixture was determined with reference to a control in which sulfite reductase was omitted. There was negligible nonenzymatic disappearance of sulfite or production of sulfide during the time period of the assay. NADPH oxidation was also negligible in a control sample from which sulfite had been omitted.

The fact that the reciprocal plots yield parallel lines is compatible with (but does not require) a catalytic mechanism in which the first reactant, presumably NADPH, converts the enzyme to a reduced form which subsequently interacts with an oxidizing substrate to yield the original enzyme and final product (32). This is compatible with the previously noted (6, 10) reduction of enzyme by NADPH in the absence of acceptor, as deduced from optical and EPR spectroscopy.

pH Optimum-When the velocity of NADPH oxidation was studied as a function of pH, using the standard assay concentrations of NADPH and sulfite, the optimum pH was 7.9 (Fig. 2). Activities were identical in 0.1 M potassium phosphate and Tris-HCl buffers. Since the pKa for HSO3- is 7.2 (33), the predominant sulfite species in solution at the optimal pH is SO3^2-.

FIG. 1. Lineweaver-Burk plot of NADPH-sulfite reductase activity as a function of sulfite concentration. Reaction mixtures contained 0.1 M potassium phosphate (pH 7.7), 0.3 to 1.2 nM enzyme, and the indicated concentrations of NADPH and sulfite. Absorbance change was followed at 340 nm in a Cary model 14 spectrophotometer equipped with 0 to 0.1 A and 0 to 1.0 A slide wires, at 23° in cells of either 5- or 10-cm path length. The reference cuvette contained buffer in place of sulfite. Initial velocities (v) are expressed as moles of NADPH oxidized per mole of enzyme per min. The points at infinite concentration of NADPH were obtained from the intercepts of a 1/v versus 1/(NADPH) plot, at several sulfite concentrations, of the same data plotted in the figure. Such plots also yielded a series of parallel lines.

Electron Donors-... viologen at the same concentration of substrates. NADH, FMNH2, GSH, and reduced cytochrome c, all at 0.2 mM concentration, did not promote conversion of sulfite to sulfide with enzyme sufficient to allow detection of a reduction rate 1% of that found with NADPH as electron donor.

Other Reactions Catalyzed

In addition to sulfite reduction, E. coli sulfite reductase is capable of catalyzing a number of other pyridine nucleotide-dependent reduction reactions.

NADPH-Nitrite and Hydroxylamine Reductase-As reported previously (4), E. coli sulfite reductase catalyzes the NADPH-dependent reduction of hydroxylamine and nitrite to ammonia. MVH can also serve as electron donor for the reduction of these substrates, but we have not studied this reaction quantitatively. Lineweaver-Burk plots of the initial velocities of NADPH oxidation at varying nitrite and NADPH concentrations yield a series of parallel lines, as was observed with sulfite as acceptor.
With hydroxylamine as electron acceptor, on the other hand, a series of converging lines is obtained; we have no ready explanation for this observation. Kinetic parameters for the NADPH-nitrite and -hydroxylamine reduction reactions are presented in Table II. The Vmax values for both nitrite (3100 NADPH per enzyme per min) and hydroxylamine (13,700 NADPH per enzyme per min) reduction are greater than that observed with sulfite, but the Km values for these substrates (0.8 mM for nitrite and 10 mM for hydroxylamine) are much higher than for sulfite (4.5 µM). The pH optima for nitrite and hydroxylamine reduction, 8.6 and 9.5, respectively, are more alkaline than that for sulfite reduction (7.9) (Fig. 2). As shown by Kemp et al. (5), it is unlikely that NADPH-sulfite reductase functions physiologically as a nitrite or hydroxylamine reductase.

NADPH-Diaphorase and MVH-NADP+ Reductase Activities-Sulfite reductase catalyzes the transfer of electrons from NADPH to a wide variety of acceptors, including cytochrome c, ferricyanide, DCIP, menadione, and FMN. As shown in Table III, the rates of these diaphorase-type reactions, under standard assay conditions (0.2 mM NADPH and 0.1 mM acceptor), varied from 10,000 to 28,000 NADPH per enzyme per min. FAD (0.1 mM) also served as an acceptor for the electrons of NADPH, with a velocity of 5,600 NADPH per min per enzyme.

TABLE III. Reactions catalyzed by Escherichia coli sulfite reductase: effect of inhibitors. Reactions were measured as described under "Materials and Methods." Rates are expressed as 2-electron equivalents transferred per enzyme per min. With NADP+, p-CMPS, and fluoride as inhibitors, enzyme was incubated in 0.1 M potassium phosphate buffer (pH 7.7) containing the inhibitor for the period of time indicated below, and the reaction was initiated by addition of electron acceptor and NADPH. With cyanide, CO, and arsenite, an anaerobic solution of enzyme plus NADPH was incubated with the inhibitor and the reaction initiated by addition of electron acceptor. Activities are expressed relative to a control treated in parallel in which buffer replaced the inhibitor solutions. Incubation times: NADP+, cyanide, arsenite, fluoride, and 0.2 mM p-CMPS, 5 min; 1 µM p-CMPS, 60 min; CO, 30 min.

a Incubation of enzyme with MVH for 30 min causes 90% inhibition of MVH-NADP+ reductase activity (but not MVH-sulfite reductase activity). Therefore, inhibition of this activity by CO, which requires prolonged incubation with CO in the presence of reductant, was not examined.

b Not examined because of the high rate of MVH-NADP+ reductase activity catalyzed by sulfite reductase.

Kinetic studies of four of these diaphorase-type reactions are summarized in Table II. Each of the reactions studied, i.e. the NADPH-dependent reductions of cytochrome c, ferricyanide, DCIP, and menadione, yielded a series of parallel lines in Lineweaver-Burk plots. Although Km values for NADPH and acceptor varied with the reaction studied, the Vmax values, at infinite concentrations of both NADPH and acceptor, were identical within experimental error for each of the four reactions (38,000 ± 2,000 NADPH per enzyme per min). These turnover numbers represent the highest observed for any of the reactions catalyzed by sulfite reductase, and are over 20 times as great as the Vmax for sulfite reduction with NADPH as electron donor. The results suggest the presence of a common rate-limiting step in the reductions of cytochrome c, ferricyanide, DCIP, and menadione.
An additional rate-limiting step, considerably slower than that for the reduction of diaphorase-type acceptors, must become operative in the reduction of sulfite. The enzyme also catalyzes another diaphorase-type reaction, the reduction of methyl viologen by NADPH. Since the potential of MVH is considerably more negative than that of NADPH, we have followed the reverse reaction, i.e. the reduction of NADP+ by MVH. The observed velocity of the latter reaction, 36,000 NADPH per enzyme per min (Table III), is comparable to that of the other diaphorase activities of sulfite reductase.

NADPH Oxidase-Sulfite reductase can catalyze the oxygen-dependent oxidation of NADPH (Table III). At 0.2 mM NADPH and 0.25 mM O2, this reaction proceeds with a velocity of 75 NADPH per min per enzyme, i.e. 4% of the rate of the NADPH-sulfite reductase activity in the standard assay. No detailed studies of the NADPH oxidase activity have been performed.

At 0.2 mM concentration of each nucleotide, the transhydrogenase reaction velocity is 9500 NADPH per min per enzyme. When velocity is plotted versus the concentration of either substrate, at a fixed concentration of the other, the curve exhibits a maximum, indicating inhibition by excess substrate. Detailed kinetic analyses of the transhydrogenase reaction (as well as competitive inhibition by NADP+ (vide infra)) are compatible with a common binding site for both oxidized and reduced pyridine nucleotides.

Inhibitors

By studying the effect of inhibitors on the various reactions catalyzed by E. coli sulfite reductase, we hoped to define more clearly those segments of the enzyme molecule with which different electron donors and acceptors can interact, and thereby tentatively deduce a sequence of electron flow within the sulfite reductase hemoflavoprotein molecule.

CO and Cyanide-We first examined the catalytic effects of inhibitors which can be expected to react with the heme moiety, i.e. CO and KCN. These compounds have been demonstrated (6) to react with both free and enzyme-bound sulfite reductase heme to form spectrally distinct complexes. As reported (6), CO can complex only with reduced heme, while cyanide can be a ligand to either reduced or oxidized heme. However, since only the reduced enzyme is "accessible" to cyanide, the oxidized enzyme-cyanide complex can only be observed by first preparing the reduced enzyme-cyanide complex and then permitting it to oxidize. The enzyme-cyanide complex forms rapidly but apparently irreversibly (6); the enzyme-CO complex forms reversibly, but its rates of formation and dissociation are slow. The subsequent section will describe the correlation between the spectrophotometrically observed processes of enzyme heme-ligand complex formation and the catalytic events which are presumed to involve the heme. When sulfite reductase was incubated with CO or cyanide in the presence of reductant, as described in Table III, none of the diaphorase-type or transhydrogenase reactions was significantly inhibited by either CO or cyanide.
FIG. 3. The NADPH-sulfite reductase reaction was initiated by addition of 0.1 ml of 5 mM NaHSO3 (when NADPH was present in the preincubation mixture) or 0.1 ml of a solution containing 5 mM NaHSO3 plus 2 mM NADPH (when NADPH was not present in the preincubation mixture). All solutions were in 0.1 M potassium phosphate (pH 7.7).

FIG. 4. A solution containing 20 nM enzyme, 0.2 mM NADPH, 0.5 mM CO, and 0.1 M potassium phosphate (pH 7.7), in a total volume of 0.9 ml, was incubated in a cuvette of 1-cm path length under anaerobic conditions at 23° for the time indicated. The NADPH-sulfite reductase reaction was then initiated by addition of 0.1 ml of 5 mM NaHSO3 to the cuvette containing the enzyme-NADPH-CO solution. Absorbance changes were followed at 340 nm with a Cary model 14 spectrophotometer at 23°. Inset, dependence of the pseudo-first-order rate constant for inhibition of NADPH-sulfite reductase activity upon CO concentration. The kinetics of development of inhibition of sulfite reductase activity was measured as described above at each of the CO concentrations indicated.

The relationship between spectrophotometrically observable CO and cyanide binding to the heme prosthetic group and the inhibition of sulfite reductase activity was examined in detail as described below. The following correlations were obtained.

1. CO and cyanide bind only to reduced heme, and inhibition of activity by these agents occurs only if the enzyme is reduced prior to the addition of sulfite. As shown in Fig. 3, when CO or cyanide was incubated with enzyme (either with or without sulfite) in the absence of a reducing agent, and the remaining reactant(s) were added to start the sulfite reductase reaction, no inhibition of sulfite reductase activity, as compared to controls, was detected. However, when enzyme was incubated with NADPH plus either CO or cyanide, a progressive inhibition of sulfite reductase activity was observed.

2. The rate of CO and cyanide binding to the heme equals the rate of development of inhibition of sulfite reductase activity. The rate of CO binding to reduced enzyme was followed by recording at successive time intervals the absorption spectra of the enzyme plus NADPH plus CO solution. As described previously (6), the ΔA600-560 of the enzyme solution is a good measure of the amount of enzyme-CO complex formed, since the ΔA between these two wavelengths is negligible in both oxidized and reduced enzyme, while the formation of the CO complex is accompanied by greatly increased absorbance at 600 nm with little change at 560 nm. Aliquots of the reaction mixture were measured periodically for NADPH-sulfite reductase activity, while ΔA600-560 was monitored. As shown in Fig. 4, the rates of formation of the enzyme-CO complex, determined spectrophotometrically, and of loss of sulfite reductase activity exhibited identical pseudo-first-order kinetic patterns, with identical rate constants of 1.56 × 10^-3 s^-1 at 0.5 mM CO. This corresponds to a second-order rate constant of 3.1 M^-1 s^-1 for the reaction E + CO → E-CO, in agreement with that reported previously (6) for the formation of the E-CO complex. The rate of inhibition of sulfite reductase activity by CO was followed as a function of CO concentration. As shown in the inset to Fig. 4, the pseudo-first-order rate constants for this process were proportional to CO concentration, and a second-order rate constant of 3.2 M^-1 s^-1 could be obtained from the slope of this line. This value is in excellent agreement with that measured for formation of the enzyme-CO complex.

The rate of cyanide binding to reduced enzyme has not been studied previously. A difference spectrum between reduced enzyme plus cyanide and reduced enzyme is shown in Fig. 5. A prominent maximum is observed at 411 nm. The time dependence for the development of this absorbance change was compared to that for the development of cyanide inhibition. These data are shown in Fig. 6. The ΔA411 between the two solutions increased according to pseudo-first-order kinetics.
If one assumes that the rate of the reaction E + cyanide → E-cyanide, determined spectrophotometrically at the cyanide concentration indicated in Fig. 6, is proportional to (E)·(cyanide), then a second-order rate constant of 210 M^-1 s^-1 can be calculated.

FIG. 6. Absorbance changes were followed at 340 nm with a Cary model 14 spectrophotometer at 23°. Inset, dependence of the pseudo-first-order rate constant for inhibition of NADPH-sulfite reductase activity upon KCN concentration. The kinetics of development of inhibition of sulfite reductase activity was measured as described above at each of the KCN concentrations indicated.

The rate of development of inhibition of sulfite reductase activity by cyanide in the presence of NADPH was then compared to the rate of formation of the E-cyanide complex. At each of the cyanide concentrations tested, the loss in activity followed pseudo-first-order kinetics (Fig. 6). As shown in the inset to Fig. 6, the pseudo-first-order rate constants for cyanide inhibition of sulfite reductase activity were proportional to cyanide concentration, yielding a value for the second-order rate constant for the cyanide inhibition of sulfite reductase activity of 201 M^-1 s^-1, in good agreement with the value obtained from spectrophotometric measurements for formation of the E-cyanide complex.

3. The rate of dissociation of the enzyme-CO complex equals the rate of reappearance of sulfite reductase activity. A solution of enzyme-CO complex was prepared as described in Fig. 7 and maintained at 4°. Aliquots were examined at intervals over a 5-day period for content of enzyme-CO complex (ΔA600-560) and for sulfite reductase activity. The results are shown in Fig. 7. As CO dissociated from the complex to yield free, oxidized enzyme (6), sulfite reductase activity reappeared. Both reactions followed first-order kinetics with the same rate constant, about 1 × 10^-5 s^-1 at 4°.

FIG. 7. The enzyme-CO complex was formed by incubating 15 µM sulfite reductase anaerobically with 0.2 mM NADPH and 0.5 mM CO at 23° for 1 hour. The 1-ml solution was then passed through a column of Sephadex G-25 as described under "Materials and Methods," and the resulting enzyme, 5.2 µM by protein determination, was incubated at 4°. At the times indicated, absorption spectra of the solution were recorded (at 23°) and aliquots assayed for NADPH-sulfite reductase activity. A control sample of enzyme was treated with 0.2 mM NADPH anaerobically, passed through the Sephadex G-25 column, and incubated at 4° in parallel with the enzyme-CO complex. Aliquots of the latter solution were assayed for sulfite reductase activity each time the enzyme-CO solution was so assayed. The activity of the control enzyme solution decayed by only 10% during the entire period of incubation at 4°. The activity after 105 hours of incubation at 4° (70% of control enzyme) was taken as the t = ∞ value.

The total sulfite reductase activity recovered was 70% of that observed with a parallel solution of enzyme treated with NADPH alone, passed through an identical column of Sephadex G-25, and incubated along with the enzyme-CO complex at 4°. When a similar experiment was attempted with the enzyme-cyanide complex, which reoxidizes when NADPH is removed (6), there was no detectable dissociation of cyanide, as determined spectrophotometrically, nor return of sulfite reductase activity after 1 week of incubation at 4°.
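The rate analysis used above for CO and cyanide, a pseudo-first-order loss of activity at fixed ligand concentration followed by a linear fit of k_obs against concentration to extract the second-order rate constant, can be sketched as follows. The data are synthetic, generated at the 3.2 M^-1 s^-1 value reported for CO; only the fitting procedure, not the numbers, should be read as meaningful.

import numpy as np

# Pseudo-first-order analysis of the CO-inhibition kinetics described above:
# activity(t) = A0 * exp(-k_obs * t), with k_obs = k2 * [CO].
# Synthetic data generated with k2 = 3.2 M^-1 s^-1, for illustration only.

k2_true = 3.2                                            # M^-1 s^-1
co_conc = np.array([0.1e-3, 0.25e-3, 0.5e-3, 1.0e-3])    # M
t = np.linspace(0, 1800, 25)                             # s

k_obs = []
for c in co_conc:
    activity = np.exp(-k2_true * c * t)                  # simulated fractional activity
    # slope of ln(activity) versus t gives -k_obs at each CO concentration
    slope = np.polyfit(t, np.log(activity), 1)[0]
    k_obs.append(-slope)

# k_obs is proportional to [CO]; the slope of k_obs versus [CO] is k2
k2_fit = np.polyfit(co_conc, np.array(k_obs), 1)[0]
print(f"fitted second-order rate constant: {k2_fit:.2f} M^-1 s^-1")

With real data, the agreement between the k2 obtained this way from activity loss and the value obtained spectrophotometrically from complex formation is exactly the correlation the text uses to tie inhibition to heme-ligand binding.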
The results with CO and cyanide as inhibitors, then, strongly indicate involvement of the heme prosthetic group of sulfite reductase in the passage of electrons from NADPH or MVH to sulfite, nitrite, and hydroxylamine. Since (Table III) CO and cyanide have little effect on the reactions of the enzyme with pyridine nucleotides, diaphorase acceptors, or oxygen, it may be concluded that the heme is probably not involved in the latter processes.

Arsenite-Arsenite has previously been reported to inhibit the NADPH-nitrite reductase activity of E. coli sulfite reductase (4). We have found that arsenite, like CO and cyanide, forms a spectrally detectable complex with the heme of sulfite reductase (Fig. 8). When the reduced complex was mixed with air and allowed to reoxidize, the spectrum returned to that of the native, oxidized enzyme. Similarly, if the reduced enzyme-arsenite complex was passed through a column of Sephadex G-25 (aerobically) to remove NADP(H) and excess arsenite, the recovered enzyme was spectrally indistinguishable from free oxidized enzyme. This result, together with data on inhibition of enzyme activity to be presented below, suggests that arsenite can form a stable complex only with reduced enzyme; the complex dissociates rapidly when the components required for its formation are removed. At the concentrations of arsenite tested (1 and 10 mM), complex formation with NADPH-reduced enzyme, as measured spectrophotometrically, was complete within 10 s, the minimum time required to initiate measurement in the Cary model 14 spectrophotometer. In the experiments to be described, preincubations of enzyme with arsenite or other components, or both, were routinely conducted at 23° for 1 min, but identical results were obtained when preincubations with arsenite were as short as 10 s or as long as 20 min. When sulfite was added to the preformed reduced enzyme-arsenite complex (enzyme preincubated with arsenite plus NADPH), the initial rate of sulfite reduction was strongly inhibited (Fig. 9, Curve A). However, the reaction rate progressively increased until a steady state constant rate of about 40% of the uninhibited rate was achieved.

FIG. 8 (left). Spectra of sulfite reductase in the presence of arsenite. The following additions were made to an anaerobic solution of 2.7 µM enzyme in 0.1 M potassium phosphate buffer (pH 7.7), and absorption spectra were recorded as soon as possible after addition of all components with a Cary model 14 spectrophotometer at 23° in cells of 1-cm path length: A, no addition; B, 10 mM NaAsO2 (superimposable upon A); C, 0.5 mM NADPH; D, 10 mM NaAsO2 plus 0.5 mM NADPH.

FIG. 9 (center). Effect of order of addition of components upon arsenite inhibition of NADPH-sulfite reductase activity. Abscissa, time after addition of last component (sec). Reaction mixtures contained, in 1.0 ml total volume, 8 nM enzyme, 0.2 mM NADPH, 0.5 mM NaHSO3, and 5 mM NaAsO2 where indicated. The indicated components were preincubated for 1 min in a volume of 0.9 ml. The final component(s) was then added in a volume of 0.1 ml and the absorbance change at 340 nm followed in a Cary model 14 spectrophotometer with respect to a reference solution which contained all components except sulfite. All solutions were in 0.1 M potassium phosphate (pH 7.7), cells were 1 cm in path length, and all operations were performed at 23°. Each curve represents the average of three independent measurements. There was no significant difference in the control curves for A through E.
FIG. 10. Reaction mixtures contained 0.1 M potassium phosphate (pH 7.7), 9 nM enzyme, 0.2 mM NADPH, and the indicated concentrations of NaAsO2 and sulfite. Reactions were initiated by the addition of sulfite as the final component. Absorbance change was followed at 340 nm in a Cary model 14 spectrophotometer at 23° in cells of 1-cm path length. Rates (v) were determined from the linear portion of the progress curve, following a short (<1 min in all cases) initial lag period (see Fig. 9A). Data are plotted as 1/v (min per ΔA340) versus 1/(sulfite). The Ki determined from the data is 0.17 mM.

In contrast, if the arsenite complex was not preformed (i.e. the following combinations: (a) arsenite plus enzyme preincubated, followed by addition of NADPH plus sulfite; (b) arsenite plus sulfite plus enzyme preincubated, followed by addition of NADPH; (c) NADPH plus sulfite plus enzyme preincubated, followed by addition of arsenite; or (d) arsenite plus sulfite plus NADPH preincubated, followed by addition of enzyme), then the initial velocity was not inhibited. However, over a period of approximately 1 min, the reaction velocities progressively decreased, reaching a steady state constant rate of, again, about 40 to 45% of the uninhibited control rate. Thus, the steady state level of arsenite inhibition is independent of the order of addition of reagents, even though the initial velocities are strongly dependent upon the order of operations. When the steady state rate of sulfite reduction was examined as a function of arsenite and sulfite concentration (Fig. 10), arsenite was found to behave as a competitive inhibitor (with respect to sulfite) of the NADPH-sulfite reductase reaction. The Ki for arsenite is 0.17 mM. This result indicates that the enzyme-arsenite complex responsible for inhibition of enzyme activity must be a reversible one. In keeping with this conclusion, when the arsenite-NADPH-enzyme solution, the spectrum of which is shown in Fig. 8, was passed through a Sephadex G-25 column to remove pyridine nucleotide and excess arsenite, the resulting enzyme was 95% as active as untreated enzyme. The results of the order of addition experiments (Fig. 9) indicate that the "reduced state" of the enzyme is required for formation of an inhibitory complex with arsenite. This correlates with the requirement for NADPH for formation of a spectrophotometrically detectable enzyme-arsenite complex (Fig. 8). The progressive relief of inhibition observed under catalytic conditions (Fig. 9, Curve A) is consistent with the previously noted reversibility of the spectrophotometrically observable complex. The steady state level of arsenite inhibition during catalysis, as modified by arsenite and sulfite concentration (Fig. 10), could be expected to represent a complex function of: (a) the relative rates of binding of arsenite and sulfite to reduced enzyme; (b) the relative rates of "release" of enzyme from its complexes via dissociation (arsenite and sulfite) or turnover (sulfite), or both; and (c) the steady state oxidation-reduction level of the enzyme heme. The pattern of arsenite inhibition of the various reactions catalyzed by sulfite reductase is shown in Table III. The pattern is identical to that shown by CO and cyanide as inhibitors, in that sulfite, nitrite, and hydroxylamine reduction are inhibited, while the other NADPH- and MVH-dependent reduction reactions are not.
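The competitive pattern diagnosed in Fig. 10 corresponds to the rate law v = Vmax[S]/(Km(1 + [I]/Ki) + [S]), in which the inhibitor raises the apparent Km by the factor (1 + [I]/Ki) while leaving Vmax unchanged. A minimal sketch using the constants quoted above (Km = 4.3 µM for sulfite, Ki = 0.17 mM for arsenite); the chosen concentrations are illustrative.

# Competitive inhibition as diagnosed from Fig. 10: the inhibitor raises the
# apparent Km by (1 + [I]/Ki) and leaves Vmax unchanged. Constants are the
# values quoted in the text; the concentrations below are illustrative.

VMAX = 1850.0          # NADPH per enzyme per min (text)
KM_SULFITE = 4.3e-6    # M (text)
KI_ARSENITE = 1.7e-4   # M (text)

def km_apparent(i: float) -> float:
    """Apparent Km (M) with competitive inhibitor at concentration i (M)."""
    return KM_SULFITE * (1.0 + i / KI_ARSENITE)

def rate(s: float, i: float = 0.0) -> float:
    """v = Vmax*[S] / (Km_app + [S]) for substrate concentration s (M)."""
    return VMAX * s / (km_apparent(i) + s)

# At [I] = Ki the apparent Km doubles; far above Ki it grows linearly with [I].
for i in (0.0, KI_ARSENITE, 10 * KI_ARSENITE):
    print(f"[AsO2-] = {i * 1e3:.2f} mM: Km_app = {km_apparent(i) * 1e6:.1f} uM, "
          f"v at 10 uM sulfite = {rate(10e-6, i):.0f} per min")

Note that at the saturating sulfite concentration of the standard assay (0.5 mM), this simple rate law would predict far weaker inhibition than the roughly 60% observed at steady state; as the text emphasizes, that steady-state level is a more complex function of binding, release, and turnover rates, so the sketch captures only the competitive Km shift evident in Fig. 10.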
The data again strongly suggest reaction of arsenite with the heme of sulfite reductase, and strengthen the conclusion that the heme prosthetic group is required for electron transfer to sulfite, nitrite, and hydroxylamine.

Mercurial: p-CMPS-Treatment of the enzyme with 1 µM p-CMPS causes release of the FMN prosthetic group, while the heme and FAD moieties remain enzyme-bound and apparently functional. This conclusion is based upon the following.

1. p-CMPS treatment causes the enzyme solution to become markedly fluorescent. This fluorescence is unpolarized and is due to flavin, since it exhibits activation maxima at 268, 376, and 448 nm, and a single emission maximum at 532 nm. The intensity of this fluorescence was 90% of that observed upon boiling an equivalent amount of enzyme, a procedure which releases both the FMN and FAD prosthetic groups (6). Since free FMN has an intrinsic fluorescence approximately 10 times that of FAD under the experimental conditions used (20), it is apparent that mercurial treatment must have caused the release of at least 90% of the enzymic FMN. The appearance of flavin fluorescence with 1 µM p-CMPS is first order, with a half-time at 23° of 2 to 3 min (Fig. 11). Titration of 20 nM enzyme with p-CMPS (overnight incubation at 4°) showed that appearance of maximal flavin fluorescence was achieved with 0.4 µM p-CMPS, i.e. 20 moles of p-CMPS per mole of enzyme (Fig. 12).

FIG. 11. Fluorescence measurements were made with a Turner model 210 spectrophotofluorometer, with an excitation wavelength of 450 nm (band width 10 nm) and an emission wavelength of 535 nm (band width 25 nm), using a chart recorder. Enzyme assays were performed as described under "Materials and Methods."

2. Four hundred milliliters of 20 nM enzyme were treated with 1 µM p-CMPS, and then concentrated 100-fold by ultrafiltration. Filtrate and concentrate were analyzed for FMN and FAD by the procedure of Faeder and Siegel (20). The filtrate contained 78 nM FMN (3.9 moles of FMN per mole of original enzyme) and 4 nM FAD (0.2 mole of FAD per mole of enzyme). The concentrated enzyme, 1.5 µM on the basis of protein content, contained 0.3 µM FMN (0.2 FMN per enzyme) and 5.7 µM FAD (3.8 FAD per enzyme). Thus, mercurial treatment results in release of at least 95% of the enzyme FMN, while permitting retention of about 95% of the enzyme FAD.

3. The absorption spectrum of the p-CMPS-treated enzyme after ultrafiltration, when compared to that of native enzyme (Fig. 13), has diminished absorbance in the region 340 to 540 nm, as would be expected from loss of half of its flavin: the ΔA450 between native and p-CMPS-treated enzyme corresponds to 3.5 FMN per mole, assuming the ε450 of 12.2 × 10^3 M^-1 cm^-1 (21) of free FMN. There is no change in the spectrum in the 540 to 750 nm region, indicating no effect of mercurial treatment on the heme prosthetic group (Fig. 13).

4. The FAD bound to the p-CMPS-treated, FMN-free enzyme remains reducible by NADPH, as shown in the data of Fig. 13 (assuming the Δε450 of 10.3 × 10^3 M^-1 cm^-1 for oxidized minus reduced FAD (21)). However, NADPH, despite its ability to reduce the FAD, can no longer reduce the heme of mercurial-treated enzyme (Figs. 13 and 14) under conditions which permit reduction of a substantial portion of the native enzyme's heme (6). The heme remains reducible by dithionite (Fig. 13). NADPH-reducibility of the heme of mercurial-treated enzyme can be restored by addition of 10 µM FMN (5 FMN per enzyme in the experiment of Fig. 14).
Thus, we can conclude that FMN is required for internal electron flow, since mercurial treatment of sulfite reductase, which removes the FMN prosthetic groups, leaves both the FAD and heme moieties functionally intact, but interrupts electron flow between them. The catalytic consequences of this p-CMPS-induced interruption of electron flow between FAD and heme are shown in Table III. The following reactions are relatively unaffected by p-CMPS: (a) reduction of sulfite by MVH, a result consistent with the lack of apparent mercurial effect on the heme prosthetic group; (b) transfer of electrons from NADPH to AcPyADP+, suggesting that the primary site of pyridine nucleotide interaction with the enzyme is unaffected (and is therefore probably FAD); and (c) reduction of FMN by NADPH, a result consistent with the ability of FMN to interact with mercurial-treated enzyme and reverse the p-CMPS effect.3

3 It should be noted that p-CMPS at much higher concentration (0.2 mM) does inhibit the NADPH-AcPyADP+ transhydrogenase and NADPH-FMN reductase reactions. These inhibitions are not reversible by added FMN. However, even at this high concentration of mercurial, the MVH-sulfite reductase activity remains unaffected (Table III); cf. the report of Asada et al. (39) that the mercurial p-chloromercuribenzoate at such concentrations can inhibit the MVH-sulfite reductase of spinach.

FIG. 13. Absorption spectra of p-CMPS-treated enzyme. To 200 ml of 40 nM sulfite reductase were added 200 ml of 2 µM p-CMPS. All solutions were in 0.1 M potassium phosphate (pH 7.7). The mixture was incubated for 30 min at 23°, then concentrated at 4° to a final volume of 4 ml with an Amicon ultrafiltration apparatus. Flavin analysis of the filtrate showed 78 nM FMN (3.9 FMN per enzyme) and 4 nM FAD (0.2 FAD per enzyme) released from the enzyme. The concentrated enzyme solution was centrifuged for 60 min at 40,000 × g. A 0.5-ml aliquot was assayed for protein concentration. Flavin analysis of the concentrated enzyme, 1.5 µM on the basis of the protein determination, revealed 5.7 µM FAD (3.8 FAD per enzyme) and 0.3 µM FMN (0.2 FMN per enzyme). A 1.0-ml aliquot of the concentrated enzyme solution was placed in a Thunberg cuvette (with 10 µl of 50 mM NADPH in the sidearm) and the solution rendered anaerobic by repeated evacuation and flushing with N2. An absorption spectrum of the enzyme following anaerobiosis was recorded (B). NADPH (final concentration, 0.5 mM) was then tipped in, and the spectrum of the reduced p-CMPS-treated enzyme recorded (C). Following this, the solution was opened to air, a few crystals of sodium dithionite were added, and the spectrum quickly recorded (D). An absorption spectrum of native sulfite reductase, at the same protein concentration, is shown for comparison (A).

In contrast, the following reactions were strongly inhibited by p-CMPS: (a) NADPH-diaphorase-type reactions, including cytochrome c reductase; (b) NADPH oxidase; (c) the NADPH- (but not MVH-) dependent reductions of sulfite, nitrite, and hydroxylamine; and (d) the reduction of NADP+ by MVH. As shown in Figs. 11 and 12, the inhibition of these activities by p-CMPS shows a dependence on both time and mercurial concentration which parallels the development of flavin fluorescence with these parameters. The mercurial-induced inhibition of these pyridine nucleotide-dependent reactions can be reversed by addition of 50 µM FMN, as shown in Table IV.4 FAD at this concentration has but little effect. The concentration dependence of the FMN effect in reversing mercurial inhibition of two NADPH-dependent reactions is shown in Fig. 15. FMN, 3 to 5 µM, gave 50% of maximal stimulation in both cases.
This number, contrasted to the dissociation constant of 0.01 µM observed by Siegel et al. (34) for the dissociation of FMN from native sulfite reductase, indicates that the action of 1 µM p-CMPS may be considered as effectively decreasing the affinity of the enzyme for its FMN prosthetic group by two to three orders of magnitude.

4 The inhibition of certain NADPH-dependent reduction reactions observed at high concentrations of FMN (Fig. 15 and Table IV) could be accounted for by competition between acceptor and FMN for the electrons of NADPH. Thus, the absolute rate of absorbance change at 340 nm is unaffected or increased by FMN, but the apparent rate of reduction of acceptor (measured as the difference in the ΔA340 per min between a complete reaction mixture and one from which acceptor, but not FMN, has been omitted) is inhibited. It should be noted that the Km for FMN in the NADPH-FMN reductase reaction (at 0.2 mM NADPH) is 20 µM.

FIG. 14. Reducibility of p-CMPS-treated enzyme by NADPH: effect of exogenous FMN. p-CMPS-treated concentrated enzyme was prepared by a procedure similar to that described in Fig. 13. The resulting concentrated enzyme, in this experiment, was 2.1 µM and contained 3.9 FAD and 0.3 FMN per enzyme by fluorimetric analysis. One-milliliter aliquots of the p-CMPS-treated enzyme were placed in two modified Thunberg cuvettes fitted with rubber serum caps, and the solutions were made anaerobic by repeated evacuation and flushing with N2. Ten microliters of 50 mM NADPH were then added to the cuvette in the reference compartment of the Cary model 14 spectrophotometer, and 10 µl of anaerobic 0.1 M potassium phosphate (pH 7.7) buffer to the sample cuvette. The difference spectrum between the two cuvettes was recorded, using the 0 to 0.1 and 0.1 to 0.2 A slide wires of the Cary spectrophotometer. Following this, 10 µl of an anaerobic solution of 1 mM FMN were added to both cuvettes and the difference spectrum again recorded.

We may conclude, then, that the FMN moiety of sulfite reductase is required for electron transfer from NADPH and reduced FAD to the heme prosthetic group (and thence to those electron acceptors dependent upon the heme for reduction, i.e. sulfite, nitrite, and hydroxylamine), as well as to diaphorase-type acceptors and O2. The FMN moiety is not required for interaction of either pyridine nucleotides or sulfite with the enzyme.

NADP+-As seen in Table III, NADP+ inhibits all NADPH-dependent reactions catalyzed by sulfite reductase. (Since the enzyme catalyzes a very rapid MVH-NADP+ reductase reaction, the effect of NADP+ on the MVH-sulfite reductase reaction could not be tested.) When the steady state kinetics of the NADPH-sulfite and cytochrome c reductase reactions were studied as a function of NADP+ and NADPH concentration, NADP+ was found to be a competitive inhibitor with respect to NADPH in both reactions, with a Ki for NADP+ equal to the Km for NADPH in each reaction (Table II). Inhibition by NADP+, on the other hand, was noncompetitive with respect to sulfite or cytochrome c. Similar kinetic studies have been performed with NADP+ as an inhibitor of the NADPH-AcPyADP+ transhydrogenase reaction. In this case, NADP+ inhibition was competitive with respect to both substrates.
TABLE IV. Sulfite reductase, 20 nM, was incubated with 1 µM p-CMPS in 0.1 M potassium phosphate (pH 7.7) for 20 hours at 4° ("p-CMPS-treated enzyme"). A parallel sample of enzyme was incubated with buffer alone ("untreated enzyme"). Aliquots of each enzyme solution were then assayed for the activities indicated below in reaction mixtures containing either no added flavin, 50 µM FMN, or 50 µM FAD. NADPH-sulfite, nitrite, hydroxylamine, ferricyanide, menadione, and MVH-NADP+ reductase activities were corrected for a blank containing enzyme, NADPH or MVH, and the indicated flavin, but no other electron acceptor. NADPH-ferricyanide reductase activity was also corrected for nonenzymatic oxidation of NADPH. NADPH-AcPyADP+, cytochrome c, and DCIP reductase activities were corrected for a blank containing all components except enzyme. Details of the assay procedures are given under "Materials and Methods." Since both FAD and FMN, at 50 µM, can serve as electron acceptors for the oxidation of NADPH catalyzed by the enzyme, these flavins, by their competition for electrons, can function as apparent "inhibitors" of the reduction of other acceptors by NADPH. With untreated enzyme, the following percentage activities for enzyme plus 50 µM FMN relative to enzyme without added flavin were noted: NADPH-sulfite, 44%; -nitrite, 55%; -hydroxylamine, 75%; -ferricyanide, 88%; -menadione, 86%; -AcPyADP+, 94%; -DCIP, 106%; -cytochrome c, 112%; MVH-NADP+, 98%. Similar effects were noted when 50 µM FAD was substituted for FMN. In the data below, all activities are presented relative to that of the untreated enzyme assayed in the presence of the indicated flavin.

Salts-In addition to the relatively specific inhibitors cited above, a number of anions (as sodium salts) were found to inhibit the NADPH-sulfite reductase activity (Table V). In all cases, inhibition occurred rapidly, without requiring prereduction of the enzyme, since it made no difference in the final activity observed whether enzyme was: (a) preincubated with NADPH or sulfite, together with salt, for 1 min prior to addition of the second reactant; or (b) preincubated with salt for 1 min prior to addition of an NADPH-sulfite mixture; or (c) added to a salt plus NADPH plus sulfite mixture with no preincubation. The kinetics of inhibition of the NADPH-sulfite reductase activity were noncompetitive with respect to sulfite for each of the salts. The order of effectiveness of the anions can be arranged oppositely in the two reactions. The inhibition of NADPH-cytochrome c reductase activity by the same salts follows the series: SCN- > I- > NO3- > Br- > Cl- > SO4^2- > F-. The markedly different effect of salts on the two types of reactions emphasizes again the fundamental difference between the diaphorase class and the "sulfite reductase" class of reactions catalyzed by this enzyme.

Complex with Sulfite

The experiments previously described combine to suggest strongly that the heme is involved in electron transfer to sulfite. This suggestion is supported by the spectral data of Fig. 16. This figure demonstrates the appearance of a new species, with a visible wavelength maximum at 585 nm and a broad Soret band in the 395 to 410 nm region, when excess sulfite is added to NADPH-reduced enzyme.5 This species persists after all reductant has been consumed and after the enzyme flavins have become reoxidized. However, as in the case of cyanide, CO, and arsenite, sulfite has no effect on the spectrum of oxidized sulfite reductase.
We have no evidence which designates the oxidation state of the added sulfite when it is in the complex shown in Fig. 16, but, for convenience, we refer to this bound form as the enzyme-sulfite complex. This complex is quite stable in the absence of enzymatic turnover. No alteration in the spectrum of the sulfite complex is observed following chromatography of the reoxidized enzyme-NADP(H)-sulfite solution of Fig. 16 on a column of Sephadex G-25. The g = 6 EPR signal of the heme of native oxidized enzyme (6, 10) disappears following NADPH-sulfite treatment, and is not regenerated by Sephadex chromatography. The EPR spectrum of oxidized enzyme (like the absorption spectrum) is unaffected by sulfite unless NADPH is added. To ascertain whether the spectrophotometrically observed enzyme-sulfite complex is associated with actual physical binding of sulfite sulfur to the enzyme, several preliminary experiments with [35S]sulfite have been performed. Enzyme (1 µM) was incubated for 5 min at room temperature with 0.1 mM NADPH and 0.1 mM Na2 35SO3. The incubation mixture was then passed over an aerobic column of Sephadex G-25, and the eluted fractions were assayed for protein and radioactivity. The radioactivity associated with the chromatographed protein amounted to 1.2 and 1.5 35S per enzyme molecule in replicate experiments. When the NADPH concentration was increased to 0.5 mM, or decreased to 2 µM, the 35S bound to the enzyme decreased 3- to 4-fold. When NADPH was totally omitted in an analogous experiment, less than 0.1 mole of 35S was retained per mole of enzyme. We do not consider that the amount of sulfite bound in these experiments necessarily represents saturation of the sulfite binding sites, and experiments designed to establish the number of sites are currently in progress.

DISCUSSION

The results reported in this paper suggest the minimum linear scheme of electron flow within the sulfite reductase molecule shown in Scheme 1 (NADP+, p-CMPS, CO, CN-, and AsO2- act as inhibitors). The dotted arrow between FMN and heme indicates that the mechanism of electron flow from flavin to heme is not clear, since no role for the non-heme iron prosthetic groups of the enzyme has as yet been identified. In this scheme, the FAD and FMN prosthetic groups serve distinctly different roles in the process of electron transfer. Thus, the FAD serves as the entry port for electrons of NADPH, while the FMN serves as a transmitter of electrons between the reduced FAD and either the heme prosthetic group or artificial electron carriers such as the diaphorase-type acceptors (in this case, cytochrome c, ferricyanide, DCIP, menadione, or externally added flavins), O2, or methyl viologen. The sequence of the FAD and FMN prosthetic groups in the electron transfer process is suggested by the results with p-CMPS-treated enzyme. Such treatment apparently leads to an effective decrease in the affinity of the enzyme for FMN by at least two orders of magnitude (34), and therefore results in dissociation of this prosthetic group when enzyme is maintained in the 10^-8 M concentration range. The catalytic effects of the mercurial treatment can be reversed by addition of external FMN in the 10^-6 M concentration range. With p-CMPS-treated enzyme, freed of its FMN by ultrafiltration, the FAD moiety is reducible by NADPH and the transfer of electrons between pyridine nucleotides is unaffected.
However, addition of FMN is required for transfer of electrons from NADPH to the diaphorase acceptors and O2, and for transfer, via the heme, to sulfite, nitrite, and hydroxylamine (a residual activity of 10 to 20% of that of native enzyme is maintained for the NADPH-dependent reductions of DCIP, ferricyanide, and menadione in the p-CMPS-treated enzyme). The immediate physiological acceptor for the electrons of reduced FMN has not been established. We have not demonstrated that electron flow from reduced FMN to heme is direct, and the previously mentioned possibility that the non-heme iron-labile sulfide groups may be involved must be considered; it may thus become possible in the future to define more clearly the role of these groups in the catalytic mechanism. With MVH as donor, electron transfer to NADP+ is inhibited in the p-CMPS-treated enzyme and can be restored by addition of FMN. However, reduction of sulfite by MVH is unaffected. Thus, while FMN is required for "reverse" electron flow from MVH to pyridine nucleotides, it is clearly not required when MVH serves as electron source for reduction of sulfite-type acceptors. The ability of mercurial-treated enzyme to catalyze MVH-sulfite (but not MVH-NADP+) reduction suggests that MVH can interact with the enzyme at a site "after" FMN. However, the inability of heme-binding agents to inhibit the MVH-NADP+ reductase activity demonstrates that MVH can interact with a group other than the heme. The data are compatible with an ability of MVH to reduce both the heme and FMN groups (as indicated in Scheme 1). They are also compatible with a carrier between FMN and heme (perhaps non-heme iron?) as a site of MVH reduction. When CO, cyanide, or arsenite was added to the enzyme, a striking parallelism was observed between spectrophotometric evidence of heme complex formation with these ligands, and catalytic evidence of inhibition of reduction of sulfite-type acceptors. This parallelism, both in extent and in rate of complex formation and inhibition, compels the conclusion that the heme prosthetic group is required for electron transfer from either MVH or NADPH to sulfite, nitrite, and hydroxylamine. Furthermore, sulfite perturbs the spectrum of the reduced enzyme in a manner characteristic of formation of a complex with the heme; under such conditions, binding of [³⁵S]sulfite to the enzyme, not reversible by dialysis, occurs. With CO, cyanide, and arsenite, binding to enzyme requires the presence of a reductant. Since the cyanide ligand (and possibly the sulfite-derived sulfur as well) remains bound to the enzyme following its reoxidation, and indeed cyanide binds to the oxidized free heme (6), the requirement for reductant may reflect conditions needed for accessibility of enzyme heme to ligands, rather than a necessarily higher affinity of ferro- versus ferriheme for these agents. The ability of sulfite to form a nondissociable complex with the reduced enzyme is consistent with the observed ability of sulfite to halt rapidly the development of inhibition by agents such as cyanide and CO, although sulfite cannot reverse such inhibition once an enzyme-cyanide or enzyme-CO complex has formed. The nature of the enzyme-sulfite complex is of particular interest. Since we have detected formation of the complex only in the presence of reductant and excess sulfite, the oxidation-reduction state of the heme iron and the sulfite cannot be precisely defined.
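For orientation, the ">3 NADPH per sulfite" criterion invoked below reflects the overall six-electron stoichiometry of sulfite reduction. Written out (a standard textbook balance, not an equation from this paper):

```latex
% Overall six-electron reduction of sulfite to sulfide: three NADPH
% molecules supply the six electrons (2 e- each), hence reductant is in
% stoichiometric excess once more than 3 NADPH are provided per sulfite.
\begin{equation*}
  \mathrm{SO_3^{2-}} + 3\,\mathrm{NADPH} + 5\,\mathrm{H^+}
  \;\longrightarrow\;
  \mathrm{H_2S} + 3\,\mathrm{NADP^+} + 3\,\mathrm{H_2O}
\end{equation*}
```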
The complex may contain sulfite itself or a reduced sulfur moiety "trapped," by exhaustion of reducing power, at an oxidation state intermediate between sulfite and sulfide. Barring unexpected complexities, one might anticipate that the amount of enzyme-bound ³⁵S diminishes when a stoichiometric excess (>3 NADPH per sulfite) of reductant is added, since sulfide would then be released. Preliminary experiments suggest that this is indeed the case, and this may provide a clue as to why the enzyme is not usually isolated as the sulfite complex.

The sequence of electron flow indicated in Scheme 1 is supported not only by the evidence cited herein, but by additional bodies of data. Thus, the scheme is similar to that proposed for the yeast sulfite reductase, an enzyme which, like the E. coli enzyme, is a high molecular weight hemoflavoprotein. The suggested role of the heme chromophore in the process of sulfite reduction per se is further strengthened by its presence in all sulfite reductases examined to date, including many which contain no flavin, such as those from higher plants (38, 39), yeast and Salmonella mutants (36, 40), and the sulfate-reducing bacteria (24-26). Genetic studies with Salmonella typhimurium mutants have led to the conclusion that a single gene product (that of the cys J gene in Salmonella), which can be isolated as an iron-free flavoprotein, is responsible for all of the NADPH-diaphorase reactions of enterobacterial sulfite reductase, while the products of two different genes (termed cys G and I) are required for MVH-sulfite reductase activity. Recombination of appropriate mutant extracts (e.g. cys J extract with either cys G or cys I extract) leads to reconstitution of NADPH-sulfite reductase activity in vitro. The reduction of sulfite to sulfide in E. coli also requires the products of three genes (termed cys G, P, and Q) which map in positions analogous to those of the cys G, I, and J genes on the Salmonella chromosome (45). Thus, we may conclude that NADPH and sulfite interact with the sulfite reductase hemoflavoprotein at entirely different sites on the enzyme molecule, and in fact probably at sites located on different polypeptide chains. The physical distinction (Scheme 1) between the processes of NADPH-diaphorase activity and MVH-sulfite reductase activity is supported by the widely different effects of salts on the two types of catalytic activity. It is of interest to note that the effect of the salts tested on the over-all enzyme activity, NADPH-sulfite reduction, can be explained (as a first approximation) as the resultant of the combined effects (Table V) of each salt on the NADPH-diaphorase and MVH-sulfite reductase activities. Thus, these two processes can be considered to occur more or less independently of one another on the intact enzyme.

Why is E. coli sulfite reductase so complex in structure? A priori, one might expect that this complexity may reflect the catalytic requirements of a complex reaction, the 6-electron reduction of sulfite to sulfide. The electron transport chain could thus serve as a storage device for electrons. And yet, the actual reduction of sulfite to sulfide, in other enzymes, can be accomplished with smaller hemoprotein molecules (36, 38-40), using external electron sources, either artificial (MVH) or natural (as yet incompletely described, but quite possibly involving ferredoxins (46)). Certainly, the minimum catalytic requirements for sulfite reduction in E. coli include two: the highly specific sulfite reductase heme, siroheme; and, for the pyridine nucleotide-mediated reduction (as distinct from MVH), a device for "stepping down" a 2-electron donor such as NADPH to a presumed 1-electron acceptor, the heme. It is clear that much of the observed complexity of the E. coli enzyme structure is due to the latter requirement rather than the former.

Oxidation-reduction enzymes containing flavin prosthetic groups are widespread in nature. The step-down catalytic function of such flavoproteins can be achieved in a variety of ways: (a) the flavin may act independently (e.g. NADPH-cytochrome b5 reductase (47), NADP+-ferredoxin oxidoreductase (48)); (b) flavins may act in concert and be functionally indistinguishable (as in the mechanism proposed for microsomal NADPH-cytochrome c reductase by Kamin et al. (49)); or (c) flavins may serve functionally distinct roles but act cooperatively. Enzymes which contain both FMN and FAD could logically be expected to fall into the latter class, but the class need not by any means be restricted to these (e.g. the nonequivalence of the two FAD moieties of xanthine oxidase suggested by the results of Kanda and Rajagopalan (50)). Existing studies with FAD-FMN enzymes do indeed suggest the possibility of different roles. Iyanagi and Mason (51) have isolated a form of liver NADPH-cytochrome c reductase which appears to contain both FAD and FMN rather than just FAD, as described in other laboratories (52-57).⁸ These workers suggest that one of these groups (but not yet identifiable as FAD or FMN) may serve uniquely as the initial electron acceptor. Rajagopalan and his colleagues have informed us⁹ that the FMN of dihydroorotic dehydrogenase appears to be required for electron transfers involving the pyrimidine, while the FAD is required for NAD+-dependent reactions. Heterogeneity in the flavin functions in dihydroorotic dehydrogenase has previously been suggested by the EPR studies of Aleman et al. (58).

The sulfite reductase described in this study has properties which make it unusually suitable for elucidation of the specific roles of individual flavin species in an enzyme which contains multiple flavins. This is possible because its FMN prosthetic group dissociates more readily than the FAD prosthetic group, and because the FMN can be specifically removed by treatment of the enzyme with the mercurial p-CMPS. Thus, FMN-free enzyme, containing a full array of its other prosthetic groups (including FAD), and apparently fully competent catalytically upon readdition of FMN, can be prepared. The data in this paper indicate that FAD serves specifically as the entry port for NADPH electrons in this enzyme and the FMN serves to transmit these further along the electron transport chain. Thus, the flavins may operate "in series." Additional studies by Siegel et al. (34, 43) have led to a proposed mechanism in which the oxidation-reduction cycles of the FAD and FMN cooperate in such a fashion as to convert "input" electron pairs from NADPH into "output" single electrons at constant potential.

⁸ B. S. S. Masters and H. Kamin, using the technique of Faeder and Siegel (20), have recently re-examined the flavin content of both pig and rat liver microsomal NADPH-cytochrome c reductase. Their results confirm those of Iyanagi and Mason (51) and show that these enzymes contain approximately equimolar quantities of FAD and FMN.

⁹ M. Kanda and K. V. Rajagopalan, Department of Biochemistry, Duke University, Durham, N.C., personal communication.

Acknowledgments--The authors are indebted to Drs. E. Phares and G. D. Novelli of Oak Ridge National Laboratory for kindly providing the E. coli B cells used for purification of sulfite reductase.
Majorana-Weyl cones in ferroelectric superconductors

Topological superconductors are predicted to exhibit outstanding phenomena, including non-abelian anyon excitations, heat-carrying edge states, and topological nodes in the Bogoliubov spectra. Nonetheless, and despite major experimental efforts, we are still lacking unambiguous signatures of such exotic phenomena. In this context, the recent discovery of coexisting superconductivity and ferroelectricity in lightly doped and ultra-clean SrTiO$_3$ opens new opportunities. Indeed, a promising route to engineer topological superconductivity is the combination of strong spin-orbit coupling and inversion-symmetry breaking. Here we study a three-dimensional parabolic band minimum with Rashba spin-orbit coupling, whose axis is aligned by the direction of a ferroelectric moment. We show that all of the aforementioned phenomena naturally emerge in this model when a magnetic field is applied. Above a critical Zeeman field, Majorana-Weyl cones emerge regardless of the electronic density. These cones manifest themselves as Majorana arc states appearing on surfaces and tetragonal domain walls. Rotating the magnetic field with respect to the direction of the ferroelectric moment tilts the Majorana-Weyl cones, eventually driving them into the type-II state with Bogoliubov Fermi surfaces. We then consider the consequences of the orbital magnetic field. First, the single vortex is found to be surrounded by a topological halo, and is characterized by two Majorana zero modes: one localized in the vortex core and the other on the boundary of the topological halo. For a finite density of vortices forming close enough to the upper critical field, these halos overlap and eventually percolate through the system, causing a bulk topological transition that always precedes the normal state. Finally, we propose concrete experiments to test our predictions.

I. INTRODUCTION

Finding robust experimental realizations of topological superconductivity is an important goal, both for fundamental research of topological matter and for possible applications to quantum technology [1][2][3]. However, materials which naturally host such exotic ground states are scarce. Moreover, measuring unequivocal signatures of topological superconductivity is an outstanding experimental challenge [4][5][6], because such signatures are often obscured by imperfections in the sample or probe. Most candidate materials also realize low-dimensional topological superconducting states. Thus, new candidate bulk superconductors might help overcome such challenges. The coexistence of superconductivity and ferroelectricity in low-density systems [26][27][28][29] opens new opportunities in this context. A ferroelectric crystal breaks inversion symmetry spontaneously and therefore can be easily manipulated. Moreover, such systems are often close to their ferroelectric transition, where the dielectric constant is huge [30,31]. As a consequence, the influence of disorder is dramatically suppressed [32,33]. These properties make low-density superconductors close to a ferroelectric quantum critical point prime candidates for engineering unconventional superconducting states. Motivated by the physics of ferroelectric STO, we revisit the problem of a Rashba spin-orbit coupled superconductor subject to a magnetic field, where we focus on the case of three spatial dimensions.
Rashba spin-orbit coupling originates from the combination of inversion breaking by the ferroelectric moment and atomic spin-orbit coupling [45,46]. Therefore, the axis of the Rashba spin-orbit coupling can vary in space and may also be externally manipulated. In the absence of superconductivity and magnetic fields, the Fermi surfaces are spin split everywhere in momentum except for two pinching points, which lie along the axis of the polar vector (Fig. 1a). Consequently, in the superconducting state, pair breaking is strongest in the vicinity of these points. When the magnetic field exceeds a critical threshold, the gap closes along this polar axis, causing four Majorana-Weyl points to emerge, accompanied by surface Majorana arcs. We then show that the Majorana-Weyl cones can be tilted by tuning the angle between the polar moment and the field, such that the superconductor becomes type-II Weyl with Bogoliubov Fermi surfaces [47,48] above a certain critical angle. We also study the Fermi arcs forming on domain walls between different polarization directions. We find that chiral surface states do not appear for all angles of the magnetic field. Finally, we turn to the more realistic scenario, where the field is non-homogeneous and penetrates the sample through line vortices. We first study the single vortex problem, where we show that the magnetic field always exceeds the critical threshold close enough to the center, forming a topological halo surrounding the vortex. We show that each such vortex has a single zero mode in its core with a counterpart at the boundary of the halo, yielding corresponding signatures in the tunneling density of states. Then, as the magnetic field is increased towards H_c2, the density of vortices increases and the halos begin to overlap, forming larger topological regions. As a consequence, we predict that the trivial-superconducting and normal states are always separated by a topological phase in any polar superconductor. The topological phase is characterized by percolation of the halos, akin to a transition between integer quantum Hall states.

The rest of this paper is structured as follows. In Section II, we describe the model and show, in the mean-field picture neglecting the orbital effects of the magnetic field, that Majorana-Weyl superconductivity develops when the magnetic field exceeds a certain threshold. In Section III, we discuss the Fermi arcs on surfaces and interfaces between the ferroelectric domains. In Section IV, we consider a more realistic model taking into account orbital effects of the magnetic field. We show that in addition to a Majorana string located in the core, an isolated vortex is surrounded by a chiral Majorana mode with the wavefunction peaked at a finite distance from the core. We also show that as the magnetic field increases towards H_c2, there is always a percolation-type phase transition to bulk Majorana-Weyl superconductivity, at which the chiral modes going around each vortex overlap. In this section we also calculate the contribution of the Majorana modes to the tunneling density of states. Finally, in Section V we give our conclusions, with emphasis on experimental consequences of the physics considered. Throughout the paper we work in units in which ℏ = k_B = 1.

II. MAJORANA-WEYL SUPERCONDUCTIVITY IN THE PRESENCE OF A ZEEMAN FIELD

We now describe the microscopic model.
We start with the coupling between the optical phonon displacement (the ferroelectric order parameter P) and the conduction electrons [49][50][51][52][53],

Ĥ_c = λ̃ Σ_k ψ†_k (P × k) · σ ψ_k,   (1)

where ψ_k is an annihilation operator for the electron with momentum k and λ̃ is a coupling constant. This term has its microscopic origin in the combined effect of spin-orbit coupling and interorbital hybridization allowed by inversion breaking [45]. In the ferroelectric phase [54], where the displacement field P develops a non-zero expectation value, this coupling leads to the celebrated Rashba spin-orbit coupling Ĥ_SOC = λ Σ_k ψ†_k (n̂ × k) · σ ψ_k, where n̂ is a unit vector parallel to the ferroelectric order parameter P and λ = |P| λ̃. Additionally, we consider a Zeeman coupling to an external magnetic field B, and neglect its orbital effects for the time being [55]. Without loss of generality we align the z-axis with the local polarization P (hence n̂ = ẑ), and obtain the dispersion Hamiltonian

h(k) = ξ_k + λ(k_y σ_x − k_x σ_y) − B · σ,   (2)

where σ = (σ_x, σ_y, σ_z) is a vector of Pauli matrices in spin space and we have assumed the dispersion ξ_k = k²/2m − µ is spherically symmetric. In the following, we work in units in which gµ_B/2 = 1. We next add an attractive interaction between electrons, which causes a Cooper instability at low temperature. For simplicity we restrict ourselves to s-wave superconductivity [56], which is also reported in the experiments on paraelectric STO [33]. Finally, writing the Hamiltonian in BdG form we obtain

Ĥ = (1/2) Σ_k Ψ†_k H_BdG(k) Ψ_k,   H_BdG(k) = ( h(k)  ∆̂ ; ∆̂†  −h*(−k) ),   (3)

where Ψ_k = (ψ_k↑, ψ_k↓, ψ†_−k↑, ψ†_−k↓)^T is the Nambu spinor, ∆̂ = iσ_y ∆ in the s-wave BCS channel, and we choose a gauge in which ∆ is real. The BdG Hamiltonian above enjoys a particle-hole symmetry, implemented by P = τ_x C, where τ_j, j = x, y, z, are Pauli matrices in the particle-hole space and C is the complex conjugation operator. Namely, the Hamiltonian obeys P H_BdG(k) P† = −H_BdG(−k). Additionally, when B ∥ P the Hamiltonian has a rotational symmetry about the axis parallel to the polarization, where the rotation includes both spatial and spin rotation. In the presence of higher order terms the continuous rotational symmetry is reduced to discrete four-fold rotations about the polarization axis. The energy dispersion is determined from the solutions of a quartic equation [see Eq. (A1)], which for a magnetic field parallel to the polarization yields

E²_±(k) = ξ_k² + λ²k_⊥² + ∆² + B² ± 2√(ξ_k²(λ²k_⊥² + B²) + ∆²B²),   (4)

where k_⊥ = (k_x, k_y) denotes the projection of the momentum onto the xy-plane.

FIG. 1. (a) The Fermi surface of the free Rashba gas (in the ferroelectric phase). Blue and green arrows denote the spin texture of the outer and inner sheets of the FS, respectively. (b) In the presence of a non-zero magnetic field parallel to the polarization, the Fermi surface develops a gap at the points of overlap of the two sheets of the Fermi surface of the Rashba gas. Blue and green arrows denote the direction of the spins corresponding to the momenta on the FS lying on the k_z-axis. (c) The depairing effect of the Zeeman field is maximal along k_z, and a sufficiently strong magnetic field destroys superconductivity locally in momentum space, making the spectrum gapless at specific momenta along k_z: the Weyl points denoted by the red and blue spheres, whose colors signify positive and negative chiralities, respectively. (d) Bogoliubov Fermi surfaces appear when the magnetic field is perpendicular to the polarization direction.

The 3D Fermi surfaces of the free Rashba gas described by Eq. (2) have the shape obtained by rotating two displaced circles around the axis connecting their crossing points (see Fig. 1a).
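To make Eqs. (2)-(4) concrete, here is a minimal numerical sketch (ours; all parameter values are illustrative, in the paper's units ℏ = gµ_B/2 = 1) that builds H_BdG(k) and checks that the quasiparticle gap closes on the polar axis once B² = ∆² + ξ_k²:

```python
import numpy as np

# Pauli matrices in spin space
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def h_normal(kx, ky, kz, m=1.0, mu=1.0, lam=0.5, B=(0.0, 0.0, 0.3)):
    """Normal-state Hamiltonian, Eq. (2): parabolic band + Rashba (n || z) + Zeeman."""
    xi = (kx**2 + ky**2 + kz**2) / (2*m) - mu
    return xi*s0 + lam*(ky*sx - kx*sy) - (B[0]*sx + B[1]*sy + B[2]*sz)

def h_bdg(kx, ky, kz, Delta=0.2, **kw):
    """BdG Hamiltonian, Eq. (3), with s-wave pairing Delta * (i sigma_y)."""
    d = 1j * Delta * sy
    hp = h_normal(kx, ky, kz, **kw)
    hh = -h_normal(-kx, -ky, -kz, **kw).conj()
    return np.block([[hp, d], [d.conj().T, hh]])

# Weyl node on the k_z axis: the gap should close where xi_k = sqrt(B^2 - Delta^2).
m, mu, Delta, Bz = 1.0, 1.0, 0.2, 0.3
xi_star = np.sqrt(Bz**2 - Delta**2)
pz = np.sqrt(2*m*(mu + xi_star))            # one of the four nodes
E = np.linalg.eigvalsh(h_bdg(0.0, 0.0, pz, Delta=Delta, m=m, mu=mu, B=(0, 0, Bz)))
print(np.round(E, 6))                       # smallest |E| is ~0: a Weyl node
```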
Consequently, the crossings form pinching points along the k_z axis, where two Fermi sheets with opposite helicities touch. Upon turning on a magnetic field in the z-direction, the two sheets separate, and the spins at these points become collinear with the field direction. Thus, the depairing effect of the magnetic field in the superconducting phase is expected to be strongest at these pinching points. Indeed, a sufficiently strong magnetic field closes the gap at the pinching points on the k_z axis. From Eq. (4), we see that the gap closes for B² > ∆² at momenta p = (0, 0, p_z), where

ξ_{p_z}² = B² − ∆²,  i.e.  p_z = ±√(2m(µ ± √(B² − ∆²))).   (5)

This equation is satisfied at four points p_j, j = 1, . . . , 4, labeled in descending order along the k_z-axis (see Fig. 1c). The closing of the gap at these momenta can be viewed as a topological phase transition in the two-dimensional Hamiltonian H_BdG(p_x, p_y, p_z), where p_z is a tuning parameter. Indeed, for p_2 < p_z < p_1 and p_4 < p_z < p_3 the two-dimensional Bloch bands have non-zero Chern numbers ±1 (of equal sign), signaling that the Weyl nodes are monopoles of Berry charge. It is worth noting that in the low-density limit there are only two Weyl nodes, p_1 and p_4, in accord with the findings of previous literature [18][19][20][21].

Rotation of B with respect to the ferroelectric moment P profoundly changes the quasiparticle spectrum. Due to the rotational symmetry, the dispersion is symmetric under both k → −k and E → −E separately when B ∥ P. However, in the presence of a perpendicular component, the spectrum is invariant only under the combined action of these two operations. This means that when the angle is large enough, the Weyl cones overtilt and become type II [57], which is accompanied by the development of a Fermi surface of zero-energy Bogoliubov quasiparticles [47,48] (see Fig. 1d). This mechanism is analogous to the one described in Ref. [58] for the surface of a 3D topological insulator and for 2DEG Rashba spin-orbit gases with proximity-induced superconductivity and an applied in-plane magnetic field. For more details see Appendix A. To make these observations more concrete, we derive the low-energy effective Hamiltonian in the vicinity of the Weyl nodes by projecting onto the low-energy subspace. This yields a 2 × 2 Hamiltonian of the form

H_j(q) = t_j · q σ_0 + Σ_{ab} q_a A_{ab} σ_b,   (7)

where q is the momentum measured from the node p_j, the non-zero components of the tilt vector t_j and of the matrix A are given in Eq. (8), and all other components of the matrix A are equal to zero. The chiralities of the Weyl nodes are determined by the sign of det A and are controlled by B_z, which is the projection of the magnetic field B on the polarization vector P. The σ_0-term in Eq. (7), which is proportional to the components of B that are perpendicular to P, is responsible for tilting the Weyl cones when the magnetic field and polarization are not collinear. This can be seen from the energy spectrum of the Hamiltonian Eq. (7) [Eq. (9)]. As mentioned above, the system can even be driven into a type-II phase, where the tilt of the cones is so strong that they dip below the Fermi energy and form Bogoliubov Fermi surfaces [47]. The condition for Bogoliubov Fermi surfaces to develop is the existence of non-zero k at which the quasiparticle energy of Eq. (9) vanishes, ε(k) = 0 [Eq. (10)]. Using the expressions in Eq. (8), we find that this criterion is satisfied when B_⊥² > ∆². Close to the cone, the Bogoliubov Fermi surface sheet defined by ε(k) = 0 from Eq. (10) is a cone with the opening angle in the k_x k_y-plane φ = π − 2 arcsin(∆/B_⊥). However, inspecting the full Hamiltonian Eq. (3) (see Appendix A), we find that, in fact, the Bogoliubov Fermi surfaces form the shape of two bananas touching at the Weyl points (see Fig. 1d).
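Because Eqs. (7)-(10) are only partially reproduced above, the following schematic form (written in our own notation, standard for tilted Weyl cones, and not a verbatim transcription of the paper's matrix A) summarizes the tilting mechanism:

```latex
% Generic tilted Weyl cone near a node (q measured from the node).
% The tilt t enters through the identity matrix, the cone through sigma.
\begin{align*}
  H_j(\mathbf{q}) &= \mathbf{t}\cdot\mathbf{q}\,\sigma_0
                   + \sum_{a,b} q_a A_{ab}\,\sigma_b,\\
  E_\pm(\mathbf{q}) &= \mathbf{t}\cdot\mathbf{q}
                   \pm \sqrt{\sum_b \Big(\sum_a q_a A_{ab}\Big)^{2}} .
\end{align*}
% Type-II criterion: the cone overtilts along some direction \hat{q} when
%   |\mathbf{t}\cdot\hat{q}| > \big[\sum_b (\sum_a \hat{q}_a A_{ab})^2\big]^{1/2},
% so that E_+ < 0 (or E_- > 0) on rays through the node and zero-energy
% Bogoliubov quasiparticles form a Fermi surface; in the present model
% this happens once B_perp^2 > Delta^2.
```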
Before proceeding to the physical consequences of the Weyl nodes, we comment that in our model they appear exactly at zero energy. This is, however, not fixed by symmetry, but is an artifact of the gap function we chose, which is purely the A_1g representation (s-wave). The inversion-symmetry breaking renders this representation indistinguishable from A_2u (p_z, which is triplet). Therefore the gap is in general a mixture of the two, which is characterized by nodes shifted from zero energy, where the sign of the shift for each node depends on the sign of the momentum along z. Such a shift will inflate the nodes, leading to small Bogoliubov Fermi surfaces (see Appendix B). We finally note that the angle between P and B can be spatially manipulated, for example across a domain wall separating different ferroelectric domains. This opens a path to control the Weyl nodes, as we discuss in the following section.

III. FERMI ARCS ON SURFACES AND DOMAIN WALLS

In this section we discuss the Majorana Fermi arcs, which appear on surfaces and domain walls. We first review the well-known case of an interface between a single domain and vacuum. We then turn to the case of internal tetragonal domain walls.

A. Majorana arcs on the surface of a single domain

We first show that Majorana arc states appear on the boundary between a single domain and the vacuum. Assuming that the ferroelectric moment P is tilted at an angle θ to the interface, we pick a coordinate system such that the yz-plane is the plane of the interface, the z-axis aligns with the projection of P onto the interface, and the x-axis points into the domain. The Hamiltonian for the domain is given by Eq. (3) with k_x → −i∂_x, where k_∥ = (k_y, k_z) is the momentum in the plane of the interface. We then seek zero-energy eigenstates satisfying open boundary conditions:

H_BdG(−i∂_x, k_∥) Ψ_{k_∥}(x) = 0,   (11a)
Ψ_{k_∥}(0) = 0.   (11b)

The Bogoliubov quasiparticle operators are defined from these solutions in the usual way; the reality (Majorana) condition γ†_{k_∥} = γ_{−k_∥} then constrains the solutions [Eq. (13)]. We now look for the solution of Eq. (11a) in the form Ψ_{k_∥}(x) = Ψ_{0,k_∥} e^{−αx}, where Re(α) > 0 implies decaying solutions as x → ∞. Plugging this into Eq. (11a), we obtain the characteristic equation for α [see Appendix C, Eq. (C1)], whose solutions for the parallel momentum k_∥, denoted α_{k_∥}, obey α_{k_∥} = α*_{−k_∥}, in accord with the reality condition Eq. (13). Analysis shows (see Appendix C) that for |B_y| < ∆ there are four roots with positive real part. In this case, a general decaying solution of Eq. (11a) is a linear combination of these four solutions. Plugging this into the boundary condition Eq. (11b) and requiring the determinant of the resulting set of linear equations for the coefficients C_i to vanish, one obtains the k_∥ for which a non-trivial solution, corresponding to the Majorana-Fermi arc, exists. For |B_y| > ∆, when the Weyl cones overtilt in the x-direction, we do not find Fermi arcs on the x = 0 surface. It is easy to find an analytical solution for B_y = 0. In this case, it is expected that the Majorana-Fermi arcs are formed at k_y = 0. Indeed, in this case Eq. (C1) for α splits into two simpler equations [Eq. (14)], where η = ±1, and we find v_{k_z,s} = η u_{k_z,s}, as required by particle-hole symmetry. Thus, the problem separates into two sectors corresponding to η = ±1. The number of roots in the right half-plane depends on the sign of the quantity Ξ [Eq. (15)]. Defining a new (primed) coordinate system, rotated such that its k_z-axis aligns with the ferroelectric moment, k′_z = k_z cos θ, we see that Ξ < 0 is just the condition for the momentum k_z to lie between the two Weyl nodes p_1 and p_2, or p_3 and p_4.
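Before counting the roots precisely, the boundary-value problem of Eqs. (11a)-(11b) can be checked numerically. The sketch below (ours, with the same illustrative parameters as before) replaces the analytic e^{−αx} construction by a straightforward finite-difference discretization of the half-space; since H_BdG is exactly quadratic in k_x, its polynomial structure is extracted numerically and no analytic characteristic equation is needed:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def h_bdg(kx, ky, kz, m=1.0, mu=1.0, lam=0.5, Delta=0.2, Bz=0.3):
    def h(kx_, ky_, kz_):
        xi = (kx_**2 + ky_**2 + kz_**2) / (2*m) - mu
        return xi*s0 + lam*(ky_*sx - kx_*sy) - Bz*sz
    d = 1j * Delta * sy
    return np.block([[h(kx, ky, kz), d],
                     [d.conj().T, -h(-kx, -ky, -kz).conj()]])

def surface_spectrum(ky, kz, N=200, a=0.15):
    """Open-boundary (Dirichlet) discretization of x >= 0.  Write
    H(kx) = H0 + H1*kx + H2*kx**2 and map kx -> -i d/dx by central
    differences; surface arcs show up as near-zero eigenvalues."""
    q = 0.1
    H0 = h_bdg(0.0, ky, kz)
    H1 = (h_bdg(q, ky, kz) - h_bdg(-q, ky, kz)) / (2*q)
    H2 = (h_bdg(q, ky, kz) + h_bdg(-q, ky, kz) - 2*H0) / (2*q**2)
    onsite = H0 + 2*H2/a**2
    hop = -H2/a**2 - 1j*H1/(2*a)
    H = (np.kron(np.eye(N), onsite)
         + np.kron(np.eye(N, k=1), hop)
         + np.kron(np.eye(N, k=-1), hop.conj().T))
    return np.linalg.eigvalsh(H)

# At ky = 0 the arc should appear for kz between the projections of two
# nearby Weyl nodes (roughly 1.25 < kz < 1.56 for Bz = 0.3, Delta = 0.2).
for kz in [1.1, 1.3, 1.4, 1.5, 1.7]:
    E = surface_spectrum(0.0, kz)
    print(f"kz = {kz:.2f}   min|E| = {min(abs(E)):.4f}")
```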
Precisely, for Ξ < 0 (Ξ > 0) there are three (two) roots in the right half-plane for η = −1, and one (two) roots for η = 1. The Dirichlet boundary condition Eq. (11b) and the normalization of the wavefunction define three conditions to be satisfied. Thus, for η = −1 and momenta on the k_z-axis lying between the projections of two nearby Weyl nodes, a non-trivial solution corresponding to a Majorana-Fermi arc exists; see the dashed lines in Fig. 3a. In Appendix C, we show that for non-zero B_x and |B_y| < ∆, the Majorana-Fermi arcs remain straight lines connecting the projections of the Weyl nodes.

B. Majorana arcs on domain walls

In the previous subsection we showed that Majorana zero modes (MZMs) connecting into Fermi arcs appear on the boundary with vacuum. We now turn to discuss another situation relevant to experiments in STO: domain walls between different tetragonal domains. To understand the nature of such domain walls, we recall that low-temperature STO spontaneously breaks its cubic symmetry into a tetragonal structure. In this phase each oxygen octahedron rotates about one of the three cubic axes, clockwise or anticlockwise, alternating from unit cell to unit cell [33,59], which is known as antiferrodistortive (AFD) order. The rotation axis fixes the polarization direction when tuning into the ferroelectric phase. For example, in calcium-doped STO, the polarization develops in the [1, 1, 0] or [1, 1̄, 0] directions [60,61] if we assume the axis of the AFD rotation is [0, 0, 1]. Without loss of generality we consider this specific case hereafter. The AFD phase is notoriously known to break up into domains [62,63], which appear in two types: one endows the system with a reflection symmetry about the wall, and the other with a reflection about the wall combined with a glide [64]. The AFD order parameters in neighbouring domains make an angle of ±π/2 with each other. In turn, the ferroelectric polarizations in neighbouring domains also differ in direction, with a relative angle of π/3 or −2π/3; see Fig. 2. We fix the polarization vector in the first domain to be A_4 (Fig. 2). When the polarization vector in the second domain is B_2 or B_4, the Weyl nodes coincide when projected onto the momentum plane parallel to the wall (see Fig. 3a). In contrast, if the polarization vector in the second domain is B_1 or B_3, the projections of the Weyl nodes from the two domains are at different points (Figs. 3b and 3c). Below we present a qualitative description of the resulting Fermi arcs for these scenarios.

FIG. 2. Interface between two AFD domains, D1 and D2, with ferroelectric orders. AFD1,2: direction of the AFD distortion in the left (red) and right (blue) domains, respectively; A_i, B_i: possible directions of the ferroelectric moments in the left (red) and right (blue) domains, respectively.

In both scenarios, the effective low-energy Hamiltonian is given by [65,66]

H(k_∥) = ( ε_1(k_∥)  a(k_∥) ; a*(k_∥)  ε_2(k_∥) ),   (16)

where ε_{1,2}(k_∥) are the dispersions of the low-energy chiral modes of the two domains, D_1 and D_2, and the off-diagonal matrix element a(k_∥) is their coupling. The eigenvalues x of Eq. (16) satisfy x² − (ε_1 + ε_2)x + ε_1ε_2 − |a|² = 0, and therefore the zero-energy Fermi-arc states obey the equation

ε_1(k_∥) ε_2(k_∥) = |a(k_∥)|².   (17)

(i) The scenario in which the projections of the Weyl points of both domains onto the interface coincide: this happens when the polarization vector in D_1 is A_4, and the polarization vector in D_2 is B_2 or B_4. We assume the magnetic field B lies in the xz-plane for simplicity.
We then identify two cases. Case I: the chiralities of the Weyl nodes with coinciding projections are the same. In this case ε_2 = −ε_1, and the condition becomes −ε_1² = |a|², which can be satisfied only if |a| = 0 at the k_∥ for which ε_1 = 0. However, there is no symmetry that fixes a(k_∥) = 0 on that line. Therefore, the arcs are gapped out in the general case. In Appendix C, we discuss such unprotected zero-energy solutions. Case II: the chiralities of the Weyl nodes with coinciding projections are opposite. Here ε_1 = ε_2. Consequently, the arcs are robust and are found on the lines for which ε_1(k_∥) = ±|a(k_∥)| (see Fig. 3a). An important consequence of the scenario of coinciding projected Weyl points is that a rotation of the magnetic field B about the y-axis allows one to tune continuously between case I and case II. We then expect arc states to disappear and reappear as a function of angle.

(ii) The scenario where the projections of the Weyl nodes do not coincide: this happens when the polarization vector in D_1 is A_4, and the polarization vector in D_2 is B_1 or B_3. For ∆² < B² < ∆² + µ², the Majorana-Weyl arcs "repel" and "attract" each other, as schematically illustrated in Fig. 3b. For B² > ∆² + µ², a more significant reconstruction of the Majorana Fermi arcs happens. Close to the crossing point we can linearize the dispersions in Eq. (17), which then defines a hyperbola in the vicinity of the crossing point, now connecting the projections of the Weyl nodes of the same chirality (see Fig. 3c).

IV. WEYL SUPERCONDUCTIVITY IN THE PRESENCE OF VORTICES

Up to this point we have only considered the Zeeman coupling to the magnetic field. We now turn to consider the consequences of the orbital coupling. In a type-II superconductor, the field can induce vortices when it exceeds the value H_c1. We distinguish two limits of interest. In the small magnetic field limit, H_c1 < B ≪ H_c2, the distance between vortices is much greater than the coherence length and each vortex can be treated independently. In the opposite limit, B ≲ H_c2, the vortices become densely packed, overlap, and significantly reduce the global average value of the order parameter. In what follows, we focus on these two limits. We start with the single vortex problem. Using the results of Section II, we show that individual vortices in ferroelectric superconductors can contain non-trivial Majorana bound states, even when the bulk superconducting state is trivial. Then, in the next step, we find that there is always a critical magnetic field B* < H_c2 marking a percolation transition to a topologically non-trivial state with Majorana-Weyl nodes in the bulk.

A. The single vortex problem: non-trivial bound states

In the solution of the Ginzburg-Landau equations for a single vortex, the superconducting order parameter ∆(r) and magnetic field B(r) both depend on the radial distance from the vortex core. Starting from the core and moving outwards, the order parameter is initially zero, and adjusts back to its bulk value at a distance of the order of the coherence length ξ. The magnetic field, on the other hand, is maximal at the core and gradually decays to zero at a distance given by the penetration depth λ_L (we assume that λ_L ≫ ξ). The dependence of these two fields is schematically plotted in Fig. 4.
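The geometry of Fig. 4 can be reproduced with model profiles. In the sketch below (ours), we take the tanh gap profile used later in the text and, as an additional assumption, a simple exponentially screened field standing in for the true vortex solution; the halo radius r_h is the outermost crossing B(r) = ∆(r):

```python
import numpy as np
from scipy.optimize import brentq

# Model profiles (illustrative, in units of xi): the tanh gap profile used
# later in the text, and an assumed exponentially screened vortex field.
xi, lam_L = 1.0, 10.0            # coherence length and penetration depth, xi << lam_L
Delta0, B0 = 1.0, 0.6            # bulk gap and peak field at the core (B0 < Delta0)

Delta = lambda r: Delta0 * np.tanh(r / xi)
B     = lambda r: B0 * np.exp(-r / lam_L)

# Near the core Delta -> 0 while B stays finite, so B > Delta is always
# satisfied close enough to the center; r_h is where the two curves cross.
r_h = brentq(lambda r: B(r) - Delta(r), 1e-9, 5.0 * lam_L)
print(f"halo radius r_h = {r_h:.3f} xi")   # ~0.6 xi for these parameters
```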
In light of the discussion in Section II, this implies that somewhere between the vortex core and r → ∞ there is a "halo" radius r_h, where the critical threshold for creating Majorana-Weyl nodes, B(r_h) = ∆(r_h), is satisfied (see Fig. 4). Majorana-arc states then appear on a cylinder of radius r_h and at the core of the vortex. Clearly, such states can only be observed if their localization length l_M is significantly smaller than r_h. To obtain these states we solve the BdG equation explicitly (see Appendix D). We consider two models. First, we consider a toy model, which we solve analytically. In this model B is taken to be constant and we mimic the spatial dependence of the gap near the vortex core by breaking it into two steps (see dashed lines in Fig. 4). Namely, the core region is defined to be the region r < r_1, where the gap is zero. The second region is the topological "halo" defined by r_1 < r < r_2 (where r_2 is the halo radius r_h in this model). In this region the gap takes a non-zero value ∆_1, which is smaller than the field, such that the topological criterion ∆_1 < B is satisfied and there are Weyl nodes. The third region is r > r_2, where we assume ∆(r > r_2) = ∆_2 such that ∆_2 > B, and therefore the superconducting state is trivial and fully gapped. The explicit solution shows that there are two exponentially localized Majorana bound states, which are slightly split in energy due to the finite spatial separation between the boundaries at r_1 and r_2. The key result we obtain from the toy model is an estimate of the localization length of these states,

l_M/ξ ∼ 2π (λ/v_F) ∆²/(B² − ∆²),   (19)

evaluated at the momentum k_z located midway between the Weyl nodes (i.e. at the point where the localization length is minimal). As can be seen, the length scale Eq. (19) appears in units of ξ and is proportional to the parameter λ/v_F. Close to H_c2 the halo size becomes of the order of the coherence length. Therefore the Majorana arc states on the edge of the halo can be resolved from the core Majorana states in the limit λ ≪ v_F. Recent theoretical results estimate the electron coupling to the transverse optical phonon mode in STO [53]. Using the average displacement in the ferroelectric phase [37,43], this coupling constant can be estimated to be λ = 254 meV·Å. Using this value of λ, we find that the concentration at which v_F becomes greater than λ (which happens when the Fermi surface crosses the Dirac point) is n ≈ 2.3·10¹⁹ cm⁻³. For higher densities, the ratio λ/v_F diminishes. For reference, this parameter diminishes to λ/v_F = 1/5 at n ≈ 1.3·10²¹ cm⁻³. It is worth noting, however, that other estimates of λ are smaller [50]. To confirm the results of the toy model, we also solve the BdG problem numerically using a more realistic profile of the gap, ∆(r) = ∆_0 tanh(r/ξ), where ∆_0 = exp(iφ)|∆_0| and |∆_0| > B. We solve the BdG problem in the interior of a cylinder of radius R. As before, the topological criterion ∆(r) < B is only satisfied within a finite halo radius r_h surrounding the core. The resulting amplitude of one of the two BdG wave functions with nearly zero energy is shown in Fig. 5. We observe two peaks, corresponding to the locations of the core and of the critical radius r_h. An interesting aspect of the halo is that it realizes a local pseudo magnetic field [67]. The continuous variation of |B(r) − ∆(r)|, which controls the distance between the Weyl nodes, therefore acts as a pseudo gauge field A_z(r) in the z direction.
The resulting pseudo magnetic field looks like a vortex circulating around the core of the halo. An important physical consequence of this field is the emergence of a whole spectrum of Landau levels, which in this case are labeled by angular momentum. These states are plotted in Fig. 10 in Appendix D. For more details regarding the analytic and numeric solutions of the BdG problem we refer the reader to Appendix D.

B. Tunneling density of states

Using our results from the previous subsection, we now compute the local tunneling density of states in the vicinity of a vortex. The resolution of a typical scanning tunneling microscope is much smaller than the size of the vortex, and therefore it may be capable of distinguishing the core and edge states described above. To compute the tunneling density of states we use the standard BTK formalism [68,69]. In Fig. 6a, we plot the contribution to dI/dV of eigenmodes corresponding to a particular p_z = √(2mµ), as a function of voltage bias and distance r from the vortex core. One can clearly see peaks at zero bias for r = 0 and r ≈ r_h. The integral over all p_z values broadens these peaks.

C. The many-vortex problem: percolation of the topological phase

The picture presented above, where each isolated vortex is surrounded by a topological halo, suggests the possibility of a percolation transition, where the halos overlap and the topological phase percolates through the system. At large field, B ∼ H_c2, the ground state of the system is expected to be a vortex lattice. Therefore, let us assume the magnetic field is large enough such that the lattice state is formed, yet the halos are still separated and each vortex is encircled by chiral Majorana zero modes. Upon increasing the field even further, the halos grow and eventually touch, creating a connected sea of the topological phase. Below we develop a crude estimate for this percolation threshold B* and find that it is always smaller than H_c2. We use Abrikosov's theory [70], applicable for magnetic fields close to the upper critical field H_c2. The harmonic-approximation solution of the first Ginzburg-Landau equation can be written in the form

∆(r) = ∆_0 f(r),   f(r) = Σ_n D_n e^{inqy} exp[−(x − x_n)²/2ξ²],   (21)

where ∆_0 is the gap function at zero magnetic field, ξ is the coherence length, x_n is the position of the nth vortex core on the x-axis, 2π/q is the periodicity in the y-direction, and D_n are dimensionless coefficients.

FIG. 7. (a) For small fields, the topologically non-trivial "puddles" surrounding the vortices are well separated. (b) At the critical value B*, previously bounded topologically non-trivial "puddles" touch the neighbouring "puddles" at one point. (c) For large values of the magnetic field, the topologically non-trivial "puddles" around each vortex overlap, creating a topologically non-trivial "sea". (d) In the more realistic disordered network of vortices, the percolation has a disordered character as well. Red lines are Majorana "halos" surrounding the topological phase.
For simplicity, in the following we consider the case of the square lattice, for which D n are constants denoted by D, and x n = nqξ 2 . Percolation of the topological phase will occur when the magnetic field at the half distance between the neighboring vortices' cores, d/2, reaches the critical value for the topological phase transition (see Fig. 7 where f 0 is given by Eq. (22) in which all D n set to one. Combining this equation with Eq. (26), we find the value of B * needed to be applied to reach the percolation point where The field B * at which the topological phase percolates is therefore controlled by two phenomenological parameters. The first is K = ρ s /2∆ 2 0 N (0)ξ 2 , where ρ s is the superfluid stiffness and N (0) is the density of states of the underlying metal. Assuming a full volume fraction, a parabolic band dispersion ρ s = 2 n/4m [71] and ∆ 0 ≈ 1.76T c [72], we have K ≈ 0.1(µ/T c ) 2 /(k F ξ) 2 , which can be estimated directly from experiment (at n = 10 18 cm −3 µ = 2 meV, T c = 200 mK [73]) to be between 1 and 5 depending on the value of ξ between 100 and 50 nm, respectively. It is interesting to compare this result with the prediction of BCS theory K = 0.5 [74]. The second parameter controlling B * is the ratio ∆ 0 /H c2 . Comparing with the experimental data of Ref. [75] we find that this parameter can be of the order of (and even larger than) 1. We plot the resulting phase diagrams in the space of magnetic field B and temperature T in Fig. 8 for different values of ∆ 0 /H c2 and K. The value of δ in Eq. (28) does not depend much on κ for κ > 3, and we fix it to equal to 10. As can be seen, for all values of K there is a topological phase separating between the trivial superconducting and normal state. This result is much more generic than our particular model. We predict that any noncentrosymmetric superconductor where inversion is broken by a vector [49] will develop such Majorana-Weyl cones above a critical Zeeman field. Consequently, all such superconductors will undergo a percolation transition to a bulk topological phase before giving way to the normal state. V. CONCLUSIONS AND DISCUSSION We studied Majorana-Weyl superconductivity emerging in systems with intertwined superconducting and ferroelectric orders due to the application of a magnetic field. First, we considered the effect of a uniform Zeeman field. We confirmed that above the Clogston-Chandrasekhar threshold gµ B B > 2∆, Weyl cones appear in the Bogoliubov quasiparticle spectrum along the axis of the polarization moment, regardless of the charge density. We also showed that rotating the magnetic field with respect to the polarization tilts the Weyl cones and eventually causes Bogoliubov Fermi surfaces shaped as bananas to appear. However, the magnetic field is not expected to be uniform in the superconducting state. Instead it threads through the sample in the form of vortices. Due to the vanishing of the gap at the core of each vortex, the critical threshold gµ B B > 2∆ is always fulfilled in some area surrounding it, which we dub the "halo". Such halos are characterized by Majorana strings at their core and chiral Majorana arc states going around them. When the magnetic field is increased towards H c2 the vortices become denser, the halos merge and the system undergoes a percolation type phase transition to a bulk Majorana-Weyl superconductivty. This transition always precedes H c2 . Our predictions have a number of sharp experimental consequences. 
The first is the emergence of topological halos surrounding vortices. These can be observed in the local tunneling density of states using an STM. However, we expect a clear separation of scales between the size of the halo and the arc states' localization length only close to H_c2. This is because the magnetic field at the center of an isolated vortex is of order H_c1, which is much smaller than the critical threshold. Therefore, the halo radius is very small when the magnetic field is far from H_c2. In addition to the zero modes, the nodes also modify the tunneling density of states away from zero energy. Namely, due to the bulk nodes there will be a quadratic dependence on bias. The arc and nodal states can also be observed in the heat conductivity. For example, we anticipate that close to H_c2, in the topological phase, the system will become heat conducting, albeit still superconducting. Furthermore, when tilting the magnetic field to be perpendicular to the polarization direction, we expect Bogoliubov Fermi surfaces to emerge. Close to H_c2 these surfaces will contribute a T-linear term to the specific heat and a constant tunneling density of states. Finally, it is also possible that the existence of Majorana zero modes surrounding vortices will contribute a constant term to the specific heat close to H_c2, which will manifest itself as a Schottky anomaly at low temperatures. The size of the anomaly should diminish by a factor of 1/2 when crossing to the topological phase. It is interesting to consider the nature of the quantum percolation transition in the presence of disorder. Naively, we may anticipate that thin films will resemble the transitions between integer quantum Hall states [76]. Furthermore, it is also interesting to consider the transition between the topological state considered here and the FFLO state, which is also a relevant ground state when the magnetic field is perpendicular to the polarization [12][13][14]. To that end, one needs to solve self-consistently for the lowest-energy ground state. We postpone the study of such questions to future work.

Appendix A: Zero-energy solutions of the BdG Hamiltonian

The energy dispersion of Eq. (3) is determined from a quartic equation in E [Eq. (A1)]. For B_x = B_y = 0, its solutions are easily found and are given in Eq. (4). Here we analyze its zero-energy solutions for the case when the magnetic field is not parallel to the polarization. We first show that the condition for gap closure is essentially the same as in the parallel-field case. For a generic quartic equation, the product of the roots equals the free term: Π_{i=1}^4 x_i = e. Considering Eq. (A1) for k_⊥ = 0, e = 0 is satisfied at B² = ∆² + ξ_k², signifying that there is a zero root. In addition, this root is double, as can readily be seen from Eq. (A1): the free term is zero, and the linear term vanishes at k_⊥ = 0 as well. Thus, the gap closes at k_⊥ = 0 for B² > ∆², at k_z determined from the equation B² = ∆² + ξ_k². Other, non-degenerate zero solutions may be determined from the equation e = 0, where e is the free term in Eq. (A1). Without loss of generality, choosing the direction of the magnetic field such that B_y = 0, we recast this equation in a form which determines the dependence of k_y on k_⊥ and k_z for the momenta satisfying the condition e = 0 [Eq. (A3)]. It is indeed a solution if |k_y| ≤ k_⊥, which may happen only if B_x² > ∆². Also, note that these roots are non-degenerate, and there are roots of different sign amongst the four corresponding to the solutions of Eq. (4).
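The free-term criterion above lends itself to a direct numerical check. The sketch below (ours; illustrative parameters, with the field tilted so that B_x > ∆) scans a momentum-space plane for zero-energy Bogoliubov quasiparticles. Since particle-hole symmetry forces det H_BdG ≥ 0 at E = 0, we flag points where the smallest eigenvalue modulus dips below a tolerance rather than looking for sign changes:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)

def h_bdg(kx, ky, kz, B, m=1.0, mu=1.0, lam=0.5, Delta=0.2):
    def h(kx_, ky_, kz_):
        xi = (kx_**2 + ky_**2 + kz_**2) / (2*m) - mu
        return (xi*s0 + lam*(ky_*sx - kx_*sy)
                - (B[0]*sx + B[1]*sy + B[2]*sz))
    d = 1j * Delta * sy
    return np.block([[h(kx, ky, kz), d],
                     [d.conj().T, -h(-kx, -ky, -kz).conj()]])

# Tilted field with B_x > Delta: zero-energy Bogoliubov quasiparticles
# should appear on 2D surfaces near the polar axis.  Scan the kx = 0 plane.
B = (0.3, 0.0, 0.15)
hits = []
for ky in np.linspace(-0.8, 0.8, 81):
    for kz in np.linspace(0.8, 1.8, 101):
        E = np.linalg.eigvalsh(h_bdg(0.0, ky, kz, B))
        if np.min(np.abs(E)) < 2e-2:
            hits.append((round(ky, 3), round(kz, 3)))
print(f"{len(hits)} near-zero momenta found in the kx = 0 slice")
```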
It is easy to see that a solution with k_⊥ = 0 and |∆|² + ξ_k² − B² ≠ 0 (i.e. away from the Weyl nodes) is impossible. Thus, for B_x² > ∆², we infer that the momenta at which E = 0 form closed surface(s) defining a 3D Bogoliubov Fermi surface. Numerical investigation shows that these surfaces connect the Weyl nodes at p_{1,2} and p_{3,4}, respectively (see Fig. 1d). In particular, for B_x² = ∆² and B_z = 0, a zero solution can exist only for k_x = 0, and from Eq. (A3) we obtain Eq. (A4), which defines two intersecting circles. We emphasize that this result is obtained under the assumption that the superconducting order parameter remains purely s-wave.

Appendix B: Triplet component and inflated nodes

Here we illustrate that the presence of a (k_z-dependent) triplet component leads to the inflation of the Weyl nodes into Bogoliubov Fermi surfaces. We consider Eq. (3) with ∆̂ = iσ_y (∆ + ∆_1 k_z σ_z), treating ∆ and ∆_1 as parameters. In Fig. 9, we plot zero-energy surfaces in k-space for a magnetic field parallel (a) and non-parallel (and overtilted) (b) to P, for the case B > ∆ ≫ ∆_1. We note that overtilting of the magnetic field produces large Bogoliubov Fermi surfaces.

Appendix C: Additions to the "Fermi arcs on surfaces and domain walls" section of the main text

The characteristic equation for α in the ansatz solution follows from Eq. (11a) [Eq. (C1)]. One may view this equation as an equation with real coefficients with respect to iα. Thus, its roots are symmetric with respect to the imaginary axis, i.e., if α is a root, then −α* is a root as well. Therefore, Eq. (C1) can have four roots with positive real part. In the main text, we showed the existence of the Majorana-Fermi arcs for the case B_y = 0. Here, we present a solution for arbitrary B. To find the locus of Majorana zero modes in the k_y k_z-plane by substituting the general solution of Eq. (11a) into the boundary conditions Eq. (11b) without any assumptions is quite difficult. Instead, we check whether k_y = 0, k_z ∈ (p_z2, p_z1) ∪ (p_z4, p_z3) is the locus of zero-energy solutions. For k_y = 0 at arbitrary B, as in the case B_y = 0, the characteristic equation for α, Eq. (C1), splits into two simpler equations [Eq. (C2)], where η = ±1. Again, the problem separates into two sectors corresponding to η = ±1, and the further analysis proceeds in analogy with that presented in the main text.

In the main text, based on the low-energy theory, we pointed out that non-protected Fermi arcs may still exist in case I of scenario (i), where the Weyl nodes of the same chiralities in the two domains project onto the same points on the interface. Here we show that such a solution exists in our continuum model. We choose the coordinate system as described in the main text for the case of a boundary between a single domain and vacuum, with the x-axis pointing into the first domain, D_1. The boundary problem to be solved is given by Eqs. (C4a)-(C4c), where H_{1(2)} and Ψ_{1(2),k_∥}(x) are the Hamiltonian and the wavefunction for the first (second) domain. Again, we look for the solutions for k_y = 0 in the form Ψ_{1(2),k_∥}(x) = Ψ_{0,1(2),k_∥} e^{−α_{1(2)} x}, where the α_{1(2)} are determined from Eq. (C2). We consider λ > 0 in the first domain; the flip of the Weyl node chiralities in the second domain corresponds to a flip of the sign of λ, i.e., λ < 0 in D_2. Decaying solutions in D_{1(2)} imply Re α_{1(2)} > 0 (< 0). For k_z ∈ (p_z2, p_z1) ∪ (p_z4, p_z3), there are three α_1 with Re(α_1) > 0 and three α_2 with Re(α_2) < 0 in the case η = 1. For η = −1, there is one α_1 with Re(α_1) > 0 and one α_2 with Re(α_2) < 0.
For k_z ∉ (p_z2, p_z1) ∪ (p_z4, p_z3), there are two α_{1(2)} with Re(α_{1(2)}) > 0 (< 0). Again, the boundary condition Eq. (C4c) implies that we can stitch together solutions in D_1 and D_2 corresponding to the same η = ±1 only, and that the problem separates into two sectors, η = ±1. As a result, the boundary conditions effectively give four constraints, and together with the normalization condition there are five constraints. For k_z ∈ (p_z2, p_z1) ∪ (p_z4, p_z3), the general solutions of Eqs. (C4a) and (C4b) are linear combinations of three functions each, which gives six unknown coefficients to be found. This is one more than the number of constraints we have, which implies that we get a family of solutions parametrized by one parameter, which might be thought of as an angle in a two-dimensional vector space. Thus, this set of solutions can be thought of as a linear combination of two orthogonal solutions.

Appendix D: Majorana zero modes in the isolated vortex

To obtain MZMs in the presence of vortices in the low-field regime, we solve the BdG problem in the vicinity of a single vortex. The corresponding BdG Hamiltonian in cylindrical coordinates assumes the form of Eq. (D1), where the z-component of the momentum remains a good quantum number. Here ξ_{p_z} = p_z²/2m − µ, and we have neglected the coupling to the vector potential [77]. The phase of the order parameter winds by 2π around the vortex origin, ∆(r) = ∆_0(r) e^{iφ}. The cylindrical form of Eq. (D1) suggests looking for the energy eigenstates in the form of Eq. (D3). Following Ref. [8], in searching for the Majorana modes we focus on the l = 0 channel, which is also justified by our numerical calculations. Substitution of Eq. (D3) into Eq. (D1) leads to a system of ordinary differential equations (ODEs) with real coefficients, and thus the functions Ψ_{iEl}(r), i = 1, ..., 4, in Eq. (D3) are real. For l = 0, the particle-hole symmetry implies σ_x ⊗ σ_0 Ψ_{p_z E 0}(r)* = η Ψ_{−p_z,−E,0}(r), where η is a phase factor. Given that the Hamiltonian Eq. (D1) is even in p_z, we obtain σ_x ⊗ σ_0 Ψ_{p_z E 0}(r)* = η Ψ_{p_z,−E,0}(r). Combining this with the statement about the reality of the Ψ_i(r), we conclude that η = ±1 and Ψ_{3(4),E,0} = η Ψ_{1(2),−E,0}. Although we anticipate a splitting in energy due to the overlap of the Majorana states at r = 0 and r = r_h, we start by seeking the zero-energy solution. In the following, we drop the subindices for the putative E = 0, l = 0 state and use Ψ_i ≡ Ψ_{i00}, so that Ψ_{3(4)} = η Ψ_{1(2)}. Then the zero-energy eigenstate equation for the BdG Hamiltonian Eq. (D1) reduces to a system of two ODEs [Eq. (D4)], where Ψ(r) = (Ψ_1(r), Ψ_2(r))^T. In what follows, we first present an analytic analysis of Eq. (D4) for a simplified piecewise-constant model. In the next step we present a numerical analysis for more realistic profiles of the gap and magnetic field, which vary continuously in space. The gap structure in the simplified model consists of three regions [Eq. (D5)]. We also assume the magnetic field is uniform (justified by the type-II condition λ_L ≫ ξ). We then focus on the limit ∆_1 < B_z, in which case the intermediate region is "topological". In the region 0 < r < r_1, where ∆(r) ≡ 0, we look for the solution in the form of a Bessel-function ansatz [8], where J_n(z) are Bessel functions [Eq. (D6)]. Substituting this into Eq. (D4), we find a characteristic equation for α which has four solutions: ±α_1 and ±α_2.
Thus, the general solution in this region is given by Eq. (D8). For the regions r_1 < r < r_2 and r > r_2, where ∆(r) ≠ 0, we look for the solution in the form of Eq. (D9) [8] and get a set of algebraic equations for the coefficients (a_n, b_n). For n = 0, we obtain Eq. (D10), which gives an equation [Eq. (D11)] for q̃ = −iq. The roots q̃_i of this equation satisfy a condition on their product. For the region r > r_2, the decaying solutions correspond to such q̃ that Re(q̃) > 0. For Π_{i=1}^4 q̃_i > 0, there are two such roots for either η; for Π_{i=1}^4 q̃_i < 0, there are three such roots for η = 1, and one such root for η = −1. Two boundaries (at r_1 and r_2) with smooth continuity conditions for a two-component vector, plus one normalization condition, give nine conditions in total. Now we count the number of yet-unknown coefficients in the constructed solution to be determined from these conditions, focusing on the case Π_{i=1}^4 q̃_i > 0 for all p_z (which corresponds to ∆_2 > B_z), where the middle region (the halo), r_1 < r < r_2, is in the topological phase, while the outer and inner regions are trivial. In the region 0 ≤ r < r_1, there are two coefficients; in the region r_1 ≤ r < r_2, there are four coefficients; and in the region r > r_2, there are two coefficients. This gives eight coefficients in total, which is not enough to satisfy nine conditions. In fact, this is anticipated and corresponds to the overlap of the two Majorana states at r_1 and r_2. Specifically, removing the "domain wall" at r_2 (or moving it to infinity), we have to satisfy only five conditions at the boundary r = r_1. In this case, for r > r_1 we have to single out only the solutions decaying at infinity, which gives three coefficients in this region. In total we then have five coefficients to satisfy five conditions. Analogously, moving r_2 to infinity and requiring that physical solutions decay far away from r_2 in the topological phase, we look for such q̃ in Eq. (D11) that Re(q̃) < 0. There is one such root in the η = 1 sector, and three such roots for η = −1. Then, in the η = −1 sector, we again have an equal number of constraints and coefficients. As we move r_2 from large distance closer to r_1, the overlap of the two Majorana modes leads to a splitting in energy of the states constructed out of linear combinations of these Majoranas. Because the halo has finite size, it is essential to estimate the splitting of the zero modes due to their overlap. To this end, we assume the separation between the two boundaries, r_2 − r_1 ∼ ξ, is on the order of the coherence length ξ = v_F/π∆ [78]. This length should be compared with the localization length of the zero modes, l_M, which can be estimated from the low-energy effective Hamiltonian Eq. (7) (for B = Bẑ),

H_eff = v k_x σ_x + v k_y σ_y + E_g σ_z,   (D12)

where E_g is the gap at a given k_z away from the Weyl point. We then find that l_M ∼ v/E_g. Comparing with Eq. (7), we find that v = λ∆/B, while E_g follows from the σ_z-term of Eq. (7). We then evaluate l_M for k_z located at the middle point between the two Weyl points, under the assumption µ ≫ √(B² − ∆²). We find that

l_M/ξ ∼ 2π (λ/v_F) ∆²/(B² − ∆²).

Thus, the ratio of the length scales is controlled by the small parameter λ/v_F and is therefore expected to be very small except very close to the nodes or close to the transition point. To confirm our analytical considerations, we perform numerical calculations. For simplicity, we consider a cylinder of radius R with a single vortex located at the axis of the cylinder and impose zero boundary conditions at r = R.
To confirm our analytical considerations, we perform numerical calculations. For simplicity, we consider a cylinder of radius R with a single vortex located on the axis of the cylinder and impose zero boundary conditions at r = R. For r < R, we assume a radial dependence of the order parameter given by ∆(r) = ∆₀ tanh(r/ξ), which reflects the typical behavior near a vortex core. We also assume the magnetic field to be uniform and smaller than the bulk threshold, B_z < ∆₀. We represent the radial part of the spinor Ψ_{p_z,E,l}(r) as a Bessel-Fourier series over the functions J_l(µ_i^l r/R), where {µ_i^l} is the set of roots of J_l, i.e. J_l(µ_i^l) = 0, which guarantees that the boundary conditions are satisfied. Substituting this representation into Eq. (D1) and projecting onto J_ν(µ_i r/R), ν = l − 1, l, l + 1, we obtain an infinite system of algebraic equations, which we solve approximately by truncation. In the calculations used to produce the plots in this article, we truncate the system of algebraic equations at size 600 × 600. We plot the eigenenergies corresponding to the wavefunctions Ψ_{El}(r, θ) in Fig. 10. Under PHS, l → −l and E → −E. Thus, for l = 0 there are in fact two near-zero-energy solutions (for the parameters considered, these energies are of order 10⁻⁴) that are indistinguishable in the plot and correspond to states that are linear combinations of the Majorana zero modes.
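The truncation scheme is easy to prototype. The sketch below illustrates the same Bessel-Fourier construction for a single angular channel with a scalar toy gap; it is a structural illustration of the method (basis of Bessel zeros, matrix elements by radial quadrature, truncated diagonalization), not the full four-component vortex Hamiltonian of Eq. (D1), and all parameter values are assumptions.

```python
import numpy as np
from scipy.special import jn_zeros, jv

# Structural sketch of the Bessel-Fourier truncation: a single angular
# channel (l = 0, the channel analyzed in the text) and a scalar toy gap.
# All parameter values are assumptions.
R, xi, Delta0, N = 50.0, 4.0, 1.0, 200    # cylinder radius, coherence length, gap, basis size
m_eff, mu_c = 0.5, 1.0                    # toy effective mass and chemical potential

l = 0
mu = jn_zeros(l, N)                       # roots mu_i of J_l, so J_l(mu_i r/R) = 0 at r = R

r = np.linspace(1e-6, R, 4000)            # radial quadrature grid
w = np.gradient(r) * r                    # weight r dr of the radial measure

phi = jv(l, np.outer(mu, r) / R)          # basis functions J_l(mu_i r / R)
phi /= np.sqrt((phi**2 * w).sum(axis=1))[:, None]   # normalize in the radial measure

Delta = Delta0 * np.tanh(r / xi)          # vortex-core gap profile used in the text

# The kinetic term is diagonal in this basis, (mu_i/R)^2 / (2 m_eff);
# the gap is projected onto the basis by radial quadrature.
K = np.diag((mu / R) ** 2 / (2 * m_eff)) - mu_c * np.eye(N)
D = (phi * (Delta * w)) @ phi.T

H = np.block([[K, D], [D, -K]])           # truncated 2N x 2N BdG-like matrix
E = np.linalg.eigvalsh(H)
print("eigenvalues closest to zero:", E[np.argsort(np.abs(E))[:4]])
```

In the actual calculation the projection onto J_ν with ν = l − 1, l, l + 1 couples neighboring angular components through the vortex phase winding, which is omitted from this toy example.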
One-step replica symmetry breaking solution of the quadrupolar glass model

We consider the quadrupolar glass model with infinite-range random interaction. Introducing a simple one-step replica symmetry breaking ansatz, we investigate the continuous (discontinuous) para-glass transition, which occurs below (above) a critical value m* of the quadrupole dimension. Using a mean-field approximation, we study the stability of the one-step replica symmetry breaking solution and show that for m > m* there are two transitions. The thermodynamic transition is discontinuous, but there is no latent heat. At a higher temperature we find the dynamical or glass transition temperature and the corresponding discontinuous jump of the order parameter.

I. INTRODUCTION

In recent decades quadrupolar glasses have attracted widespread experimental and theoretical interest [1]. Disordered quadrupolar glasses are produced by random dilution of (quadrupolar) molecular crystals with atoms that have no quadrupole moment; well-known examples of such systems are K(CN)_x Br_{1−x} or Na(CN)_x Cl_{1−x}, or N₂-Ar, CuCN, or solid hydrogen (see Ref. [2] for a review). The success of the Sherrington-Kirkpatrick (SK) model [3] in providing a good theory of systems of interacting magnetic or electric dipole moments suggests extending the same kind of analysis to quadrupolar glasses. However, there are differences between standard spin glass systems and quadrupolar glasses; the latter do not have the global inversion symmetry S_i → −S_i for all spins. For several systems without reflection symmetry, and close to the transition temperature, only one step in the Parisi replica symmetry breaking scheme is sufficient to describe the para-glass transition above a lower critical dimension [4]. Indeed, the one-step replica symmetry breaking (1RSB) scheme has proven to provide stable solutions for the Potts glass model [5,6,7], the spherical p-spin model [8] and the p-spin Ising spin glass model [9]. It is the purpose of the present paper to show that the 1RSB scheme can also be applied to the quadrupolar glass model, and that it indeed provides a stable solution in certain regimes. In the present paper we consider a perturbative evaluation of the free energy by means of a Taylor expansion up to fourth order in the order parameter; the perturbative approach is, of course, most reliable near the transition temperature. We shall show that the transition from the replica symmetric (RS) state to the 1RSB state occurs either discontinuously or continuously, depending on the value of the quadrupole dimension m. A similar dependence of the RS to 1RSB transition, though on the value of an external field, is exhibited by the spherical p-spin model [8] and the Ising p-spin model [9]. For any p > 2, the transition is discontinuous (continuous) for fields weaker (stronger) than a critical value h_c of the external field, which depends on p. The plan of the paper is as follows: in Sec. II we use a rather pedagogical approach, mainly to review the results of the mean-field analysis of the quadrupolar glass model in the framework of the replica symmetric ansatz [10]. Section III is devoted to the study of the 1RSB solutions of the saddle-point equations, assuming all the transitions to be continuous or at worst weakly discontinuous. In Sec. IV we perform the de Almeida-Thouless (AT) stability analysis, while the dynamical transition is discussed in Sec. V. Concluding remarks are given in Sec. VI.
II. UNIAXIAL QUADRUPOLAR GLASS

The infinite-range quadrupolar glass model was first introduced by Goldbart and Sherrington (GS) [10]. The model assumes the quadrupole-quadrupole interaction to be dominant over the interactions between dipoles. This appears to be the case in several experimental situations, where the quadrupolar species occupy the sites of a regular lattice but share this lattice with a dilutant without quadrupole moment: argon in the case of interacting N₂, parahydrogen in the case of orthohydrogen, KBr in the case of KCN, etc. [2]. To construct the mean-field theory of a set of uniaxial quadrupoles interacting through randomly quenched and frustrated isotropic exchange, one may adopt the Hamiltonian of Eq. (1), in which the spin vector S_i enters via the components f^i_µν = S^i_µ S^i_ν − δ_µν/m of the electric quadrupole moment tensor [10]. The summation over (i, j) runs over all distinct pairs. Each S_i has m components S^i_µ (µ = 1, …, m) and, for convenience, is assumed to be a vector of fixed length |S_i| = m. Of course, a general m does not describe the experimental quadrupolar glasses; rather, it is a natural model to consider theoretically for classification. By analogy with SK, the spins are taken to interact via independent random interactions J_ij with a Gaussian distribution. The mean J₀ and the variance J of the distribution depend on the total number of quadrupoles N so as to ensure a meaningful thermodynamic limit (N → ∞) with an extensive energy: J₀ = J̄₀/N and J = J̄/N^{1/2}. The Hamiltonian (1) can be seen as that of an infinite-range model of N classical vector spins S_i; zero external field is assumed. In Ref. [10] it was shown that, in terms of the order parameters Q^{ab}_{µνλρ} and M^a_{µν}, the free energy per spin f is obtained by means of the replica trick, with n the number of replicas and β = 1/k_B T. The elements of the order parameters Q^{ab}_{µνλρ} and M^a_{µν} are not independent quantities, and they can be parameterized in terms of five sets of independent parameters A^a, B^{ab}, C^{ab}, D^{ab}, and E^{ab}. Non-zero extremal values of these sets describe possible glass orderings [10]. Upon decreasing the temperature, when one of the parameters becomes different from zero, the high-temperature disordered phase becomes unstable. Assuming a continuous transition in the replica symmetric ansatz, and provided that the average interaction is not too positive, there is a transition to an isotropic glass state at the temperature T_RS = 2mJ/[k_B(m + 2)]. This highest-temperature phase transition is associated with the order parameter B^{ab} acquiring a non-zero value, signaling the presence of isotropic quadrupolar order [10]. As GS showed, below T_RS the replica-symmetric isotropic glass solution is unstable with respect to fluctuations that break replica symmetry among the isotropic glass order parameters, the instability being stronger than in conventional systems. It is worth noting that Eq. (6) is the equivalent of the condition J₀/J < 1 in the SK model, which ensures a transition from the paramagnetic state to the spin-glass state. Furthermore, when the numerator of the right-hand side of Eq. (6) becomes negative, it is necessary to introduce a negative value of J₀ in order to obtain a glass phase at low temperatures; this happens for m > 3.37…. Making J₀ more negative lowers the onset temperature of ferromagnetism, although the onset cannot be suppressed entirely.
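For concreteness, the scaling of the couplings and the resulting RS transition temperature can be sketched as follows; the numerical values are assumptions for illustration only.

```python
import numpy as np

# Illustration of the coupling conventions and of T_RS = 2 m J / [k_B (m + 2)].
# All numerical values are assumptions; units are J/k_B.
rng = np.random.default_rng(0)
N, J0_bar, J_bar = 1000, 0.0, 1.0

# Couplings with mean J0_bar/N and width J_bar/N**0.5, as in the text,
# so that the energy stays extensive in the thermodynamic limit.
J = rng.normal(J0_bar / N, J_bar / np.sqrt(N), size=(N, N))
J = np.triu(J, 1); J = J + J.T            # symmetric couplings, zero diagonal
print("sample width of couplings:", J[np.triu_indices(N, 1)].std(),
      "~ target", J_bar / np.sqrt(N))

for m in (2.5, 3.0, 3.37, 4.0):
    print(f"m = {m}: T_RS = {2 * m * J_bar / (m + 2):.3f}")
```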
III. ONE-STEP REPLICA SYMMETRY BREAKING THEORY

The further analysis of GS leads to the conclusion that a replica symmetric ansatz cannot give a stable, and hence physical, solution of the quadrupolar glass model. Thus, one has to resort to a replica symmetry breaking ansatz. Close to the isotropic quadrupolar glass transition, i.e. confining attention to regions of parameter space (J₀, J) in which the highest transition temperature does correspond to a phase transition in the order parameter B^{ab}, it is sufficient to retain only this parameter in the free energy (see Ref. [10]). Thus, one requires J₀ to satisfy the inequality (6) for T ≲ T_RS, i.e. in the neighbourhood of the transition temperature. The free energy is then given by Eq. (7), where the primed sum Σ′_{ab} excludes terms with equal indices, i.e. a ≠ b. The paramagnetic contribution βf_PM = −(βJ)² m(m − 3)/2 has been subtracted, as it does not depend on the order parameter. One then looks for a replica symmetry breaking ansatz for B^{ab}; here we consider the first level of the standard Parisi procedure. Following the standard replica symmetry breaking method, one groups the n replicas into n/x blocks of x replicas each, where x is a parameter (between 1 and n) to be fixed by the saddle-point equations: B^{ab} takes the value q when a and b belong to the same block and vanishes otherwise. The index a belongs to block k if I(a/x) = k, where I(y) is an integer-valued function whose value is the smallest integer greater than or equal to y. Upon substituting this into Eq. (7), one obtains Eq. (9), with the n/x blocks labelled by k = 1, …, n/x. We now compute the free energy (9) perturbatively, Taylor-expanding it up to fourth order in q. This approximation implies that the transitions are assumed to be continuous or at worst weakly discontinuous. After some lengthy but straightforward algebra, one obtains the expansion (10) for the free energy, where t is the reduced temperature, t = 1 − (T/T_C)². For m less than a critical value m*, discussed below, the transition is continuous, with the same T_C as predicted by RS theory [10]. In this case one may use the saddle-point method to evaluate the extremal values of the parameters q and x. According to the value of the quadrupole dimension m, the transition from the higher-temperature RS phase can occur either continuously or discontinuously. The transition is continuous in q for m < m* ≃ 3.37, i.e. when the coefficient ᾱ₃ becomes larger than α₃. The parameters q and x satisfy the equations expressing the extremum of the free-energy functional (10). Close to the transition temperature T_C, i.e. for t ≳ 0, the solution (14b) is valid only for 2 < m ≤ m* ≃ 3.37 within the 1RSB subspace. At m* the cubic term in the free-energy functional (10) changes sign; this coincides with x = 1. Thus, above m*, the transition is no longer continuous. The parameters q and x are plotted for m = 3 within this approximation in Fig. 1. Substituting Eq. (14b) into Eq. (10), one finds the free energy of the glass phase close to the transition. Below the transition this free energy is larger than that of the paramagnetic phase. On the other hand, when m > m* there is still a glass solution of the saddle-point equations, but with a discontinuous onset of q from the higher-temperature RS phase.
The transition may be found with the additional requirement that the free energy of the paramagnetic phase equal that of the glass phase with break point x equal to 1. Denoting this transition temperature by T_D and working to quartic order in q in the free energy, one finds a discontinuous jump of the order parameter at T_D, from zero to a finite value q_D = q(T = T_D⁻). In the neighbourhood of the transition temperature T_D one finds that (x − 1) ∝ (t − t_D), together with the corresponding free energy of the quadrupolar glass phase. Even though the transition is discontinuous in the order parameter q, there is no discontinuity in any thermodynamic quantity; moreover, there is no latent heat at the transition. This behaviour is qualitatively common to a whole class of mean-field models of spin glasses, e.g. the p > 2 spin model below a critical field [9] and the Potts glass model above the critical Potts dimension p = 4 [5,6,12]. Since the order parameter jumps discontinuously at the transition temperature when m > m*, the perturbative approach should no longer be valid. However, one may control the approximation by setting m = m* + ε, with ε ≪ 1. One then obtains a quadrupolar glass phase with broken replica symmetry appearing below t_D ∝ ε², with q_D ∝ ε and x(T → T_D⁻) → 1. Explicitly, to leading order, the prefactors are controlled by g(m*) = (m* + 6)/[3(m* + 4)(m*⁴ − 5m*³ − 35m*² + 34m* + 168)] < 0. In the next section we investigate the stability of the 1RSB solution found here against small further RSB fluctuations.
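As a quick arithmetic check, the expression for g(m) quoted above is easily evaluated near m* ≃ 3.37; the sketch below only verifies its sign.

```python
# Numerical check that g(m), with the polynomial quoted in the text, is
# negative around m* ~ 3.37, so that t_D ~ eps^2 comes with the stated sign.
def g(m):
    poly = m**4 - 5 * m**3 - 35 * m**2 + 34 * m + 168
    return (m + 6) / (3 * (m + 4) * poly)

for m in (3.3, 3.37, 3.5):
    print(f"g({m}) = {g(m):.5f}")   # all negative in this range
```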
IV. STABILITY ANALYSIS

In order to study the stability of the 1RSB ansatz, one introduces 1RSB-breaking fluctuations η^{ab} and expands the free energy to second order in them [13]. The group Kronecker delta δ_{G_a G_b} is unity if a and b belong to the same group and zero otherwise [6]. One has to compute the second derivatives of the free energy (7) with respect to {B^{ab}} at the 1RSB solution; the Taylor expansion of the free energy around the 1RSB solution then defines a quadratic form in the fluctuations η^{ab}, with a ≠ b and c ≠ d. For a, b, c and d belonging to the same group, the eigenvalues of the intragroup matrices can be evaluated to order q². The behaviour of the eigenvalues in the ordered phase is obtained by substituting into these expressions the values of the parameters q and x pertinent to the continuous or discontinuous transition. Close to the continuous transition temperature, one finds that the first two eigenvalues are positive in the range of validity of the solution, 2 < m < m*; the last one, to order t², is positive only for m > m*₂ ≃ 2.46. Thus, one finds a 1RSB-stable mean-field theory with a continuous transition only in the range m*₂ < m < m*. This lower limit is the same as that given in Ref. [4], obtained by means of a complementary perturbative calculation based on the full replica symmetry breaking (FRSB) ansatz near T_C. For m > m* the behaviour of the eigenvalues in the ordered phase is obtained by setting x = 1 and substituting for q the value q_D obtained in Eq. (18a). Upon inserting these values, one easily finds that all the fluctuations around the ordered 1RSB phase are finite and positive. Thus, within our approximation, one finds a 1RSB-stable mean-field theory with a discontinuous transition when m > m*.

V. DYNAMICAL TRANSITION

Generally, disordered systems with a discontinuous transition have a temperature T_G at which a dynamic instability appears. This temperature is called the glass temperature and lies above the transition temperature T_D, at which the replica symmetry breaks thermodynamically, whenever the latter breaking is discontinuous. In the soft-spin version of the Potts glass model [14] it has been shown, by means of dynamical studies of the mean-field theory, that there is indeed another transition at a temperature T_G > T_D, as in the p-spin model for p > 2 [15]. Both static and dynamic transitions in the Potts (p > 4) case have also been found in Refs. [6,7,12]. In the study of the thermodynamics of the quadrupolar glass, T_G can be computed by means of marginal stability [16]: by requiring the vanishing of the first and second derivatives of the free energy (10) with respect to q, ∂f/∂q|_{q=q_G} = 0 and ∂²f/∂q²|_{q=q_G} = 0, one finds, within our approximation, the dynamical transition temperature T_G and the corresponding discontinuous jump q_G of the quadrupolar glass model, where t_G = 1 − (T_G/T_C)² and q_G = q(T = T_G⁻). Again, by assuming the jump q_G near the temperature T_G to be small, one can control the approximation by letting m = m* + ε, ε ≪ 1. Within the approximation used, one finds, to leading order, q_G/q_D = 3/4. It is worth noting that exactly the same value for this ratio has been obtained for the Potts glass in Refs. [6,7,12], suggesting a sort of universality related to the common general structure of the free energy of the quadrupolar and Potts glass models (see also Ref. [4]). The results for q_D and q_G as a function of 1/log(m) are shown in Fig. 2. The ratio between the two transition temperatures, T_G/T_D, is very close to one. Of course, a naive extension of our results to large m is not possible, since we have assumed all the transitions to be at worst weakly discontinuous, which restricts us to the range of m close to m*. However, since the mean-field theory of the Potts glass is qualitatively very similar to that of the quadrupolar glass, one may get an idea of the large-m limit by considering the large-p limit of the Potts glass model. There, the ratio q_G/q_D stays close to 3/4 over a large range of p, though it increases for very large p, approaching unity. The ratio T_G/T_D grows very slowly with p [7]. This behaviour differs from that of the p-spin model, where the ratio q_G/q_D is not close to 3/4 even for small values of p and, in the limit p → ∞, converges to unity; moreover, the ratio T_G/T_D grows faster with p than in the Potts problem [7].
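The leading-order ratio q_G/q_D = 3/4 can be reproduced with a toy calculation. The sketch below uses a generic one-parameter quartic free energy at break point x = 1, f(q) = (τ/2)q² − (w/3)q³ + (y/4)q⁴, whose coefficients τ, w, y are placeholders rather than the actual expansion coefficients of Eq. (10); within this assumed form, the equal-free-energy and marginal-stability conditions give q_G/q_D = 3/4 exactly and place the marginal point above the thermodynamic one, mirroring T_G > T_D.

```python
import sympy as sp

# Generic quartic free energy at break point x = 1; tau, w, y are
# placeholders, not the model's actual expansion coefficients.
q, tau, w, y = sp.symbols("q tau w y", positive=True)
f = (tau / 2) * q**2 - (w / 3) * q**3 + (y / 4) * q**4

# Thermodynamic transition: f(q_D) = 0 together with f'(q_D) = 0
# (the trivial q = 0 root is divided out).
sol_D = sp.solve([sp.expand(f / q**2), sp.expand(sp.diff(f, q) / q)],
                 [q, tau], dict=True)[0]
# Dynamical transition (marginal stability): f'(q_G) = 0 and f''(q_G) = 0.
sol_G = sp.solve([sp.expand(sp.diff(f, q) / q), sp.diff(f, q, 2)],
                 [q, tau], dict=True)[0]

print(sp.simplify(sol_G[q] / sol_D[q]))       # q_G/q_D  -> 3/4
print(sp.simplify(sol_G[tau] / sol_D[tau]))   # tau_G/tau_D -> 9/8 (> 1)
```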
A useful and new representation of the phase diagram is obtained by plotting the phase boundary in the (T, 1/m) plane. One may identify three stable phases: the RS paramagnet and the 1RSB and FRSB glasses. In Fig. 3, the phases are labelled by their symmetry breaking and by the manner of onset from the paramagnet. The 1RSB transition is continuous for m*₂ ≤ m ≤ m*, whereas it is discontinuous above m*. Note that, at m = m*, the transition from RS passes continuously from continuous 1RSB (C1RSB) to discontinuous 1RSB (D1RSB) within the one-step RSB phase. The dotted line in the figure corresponds to the m > m* dynamical transition temperature given in Eq. (27). The situation is analogous to that of the Potts glass model, which shows a crossover from a continuous to a discontinuous transition as the number of Potts states increases [5,6]. The p > 2-spin Ising and spherical spin glasses also show transitions from C1RSB to D1RSB as an applied field is reduced, but they differ from the present problem in that the critical field also marks a maximum of the transition temperature, in contrast with the present monotonic variation with 1/m. For m < m*₂ the transition is continuous to full replica symmetry breaking. A phase line (not shown, but continuous) separates the one-step and full replica symmetry breaking phases within the RSB region.

VI. CONCLUDING REMARKS

In this paper we have investigated the quadrupolar glass model in the framework of the replica method. Upon introducing a simple one-step replica symmetry breaking ansatz, one finds a stable mean-field theory with a continuous or discontinuous transition, according to the value of the quadrupole dimension m. The transition is continuous to one-step replica symmetry breaking in the range of quadrupole dimension 2.46 < m < 3.37. For the discontinuous transition (m > 3.37) there are two different transition temperatures. We have computed the ratio q_G/q_D, where q_G and q_D are the order parameters associated, respectively, with the dynamical and thermodynamic transitions.

[Figure 3 caption: T is in units of J/k_B. For 2 < m < m*₂ the transition from the higher-temperature RS phase occurs within the full replica symmetry breaking mechanism. For m < m* the thermodynamic and dynamical transitions coincide. For m > m* the dynamical transition, denoted by the dotted line, is higher than the thermodynamic one (solid line). The plot is shown within the quartic approximation for the free-energy expansion in q.]

The ratio between the two transition temperatures, T_G/T_D, is also computed. Within the approximation used, the values of these ratios, q_G/q_D = 3/4 and T_G/T_D ≃ 1 (to leading order), are the same as those found in Refs. [6,7,12] for the Potts glass model. The results we have obtained confirm the general wisdom that the properties of the quadrupolar glass, with continuous (m < m*) and discontinuous (m > m*) transitions, are similar to those of the p < 4 and p > 4 Potts glass well studied in the literature [5,6,7,12,15]. Although this investigation focused on the quadrupolar glass phase, in the wider (J₀, J, T) space there should exist different types of ferromagnet, collinear and canted, see e.g. Ref. [10]. The full phase diagram should also include another curve, which may be captured by complexity arguments: in analogy with the p-spin spin glass, there should be another critical line T_compl associated with the onset of macroscopic complexity [17].
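The static line of the (T, 1/m) phase-boundary plot described above is straightforward to reproduce, using only T_RS = 2mJ/[k_B(m + 2)]. The sketch below (with J = k_B = 1) marks m*₂ and m* but omits the dynamical line for m > m*, whose coefficients are not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Static phase boundary T_C = T_RS(m) in the (T, 1/m) plane; J = k_B = 1.
m = np.linspace(2.05, 8.0, 400)
T_RS = 2 * m / (m + 2)

m_star2, m_star = 2.46, 3.37                      # boundaries quoted in the text
plt.plot(1 / m, T_RS, "k-", label="static transition $T_C = T_{RS}$")
for mm, lab in [(m_star2, "$m^*_2$"), (m_star, "$m^*$")]:
    plt.axvline(1 / mm, ls=":", label=f"{lab} = {mm}")
plt.xlabel("1/m"); plt.ylabel("$T$  [$J/k_B$]"); plt.legend()
plt.savefig("phase_boundary.png")
```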
Data on the safety of repeated MRI in healthy children

Purpose
To address the question of the safety of MRI for research in normal, healthy children, we examined MRI, neurocognitive and biometric data collected in a group of healthy, normally developing children who have participated in a 10-year longitudinal fMRI study.

Materials and methods
Thirty-one healthy children ranging in age from 5 to 7 years were enrolled between 2000 and 2002 and were tested yearly as part of a longitudinal study of normal language development. Twenty-eight of these children have completed multiple neuroimaging, neurocognitive and biometric exams. These children ranged in age from 5 to 18 years during the course of the study and were exposed to up to 10 annual MRI scans. Linear regression of the IQ (WISC-III) (Wechsler, 1991), executive function (BRIEF) (Gioia et al., 2002), and language (OWLS) (Carrow-Woolfolk, 1995) measures was performed against the number of years of exposure to MRI in the study. Body mass index (BMI) (Ogden et al., 2006) was also examined as a function of years and compared with normative values.

Results
The WISC-III Full Scale IQ (FSIQ) in our longitudinal cohort was higher than average at baseline. There was no significant change over time in mean FSIQ (p = 0.80), OWLS (p = 0.16), or BRIEF (p = 0.67). Similarly, over 10 years there were no significant changes in the Coding subtest of the WISC-III, and height and body mass index did not deviate from norms (50th percentile).

Conclusions
Examination of neurocognitive and biometric data from a decade-long, longitudinal fMRI study of normal language development in this small, longitudinal sample of healthy children in the age range of 5 to 18 years, who received up to 10 MRI scans, provides scientific evidence to support the belief that MRI poses minimal risk for use in research with healthy children.

Introduction
Examining the current literature on magnetic resonance imaging (MRI) for keywords relating to the biological effects of MRI turns up primarily articles relating to the operational hazards associated with MRI (Gangarosa et al., 1987) and to protecting patients and radiology personnel from the risks associated with ferromagnetic objects becoming projectiles in close proximity to MRI magnets (Gallauresi and Woods, 2008; Shellock and Crues, 2004). There is no question that the benefits outweigh the risks of MRI for clinical diagnostic purposes. However, for research in vulnerable populations such as children and minors, who are dependent on parents or guardians for consent to participate in research protocols, it is the responsibility of the research community to ensure that the risk is minimal if there is no direct benefit to the participant. Most Institutional Review Boards (IRBs) classify MRI as a minimal risk procedure, and therefore the risk/benefit ratio works in favor of approval for many research protocols involving children as human subjects. According to the NIH-sanctioned Collaborative Institutional Training Initiative (CITI) program (Braunschweiger and Goodman, 2007), minimal risk means "The probability (of occurrence) and magnitude (seriousness) of harm or discomfort (e.g., psychological, social, legal, economic) associated with the research are not greater than those ordinarily encountered in daily life (of the average person in the general population) or during the performance of routine physical or psychological examinations or tests."
Minimal risk, therefore, defines a low threshold of anticipated harm or discomfort associated with the research. This classification is based on a lack of evidence to the contrary. Over the course of three decades of MRI use in humans, no acute or long-term deleterious biological effects have been attributed to MRI exposure, aside from the obvious physical injuries that occur when ferromagnetic projectiles collide with people on their path along the flux lines of the superconducting magnets that power the MRI machines. Still, there is a dearth of literature describing systematic studies of MRI biological effects using scientific or epidemiological methods to produce evidence upon which to base a conclusion, or even to estimate how large such effects could be. This study aims to provide scientific evidence to test the hypothesis that MRI produces measurable adverse effects on cognitive and physical development in children who are exposed to repeated MRI scans between the ages of 5 and 18 years. While we are aware of no existing data supporting this hypothesis, and we do not expect our data to validate it, we are forced to test this positive hypothesis because it is not possible to reject the null hypothesis with any degree of certainty on the basis of one small-scale study such as the one reported here. Conversely, we expect to be able to reject the hypothesis that adverse effects will be found in our sample, and to use our data to set an upper bound on the magnitude of such effects if they exist. Further, we expect our results to provide justification for the classification of research using MRI as minimal risk. Much of the research involving the use of MRI in pediatric populations is aimed at understanding development and disorders of cognitive functions such as language and attention. Functional MRI of the developing brain exposes the brain and the entire human body to a static magnetic field, gradient magnetic field changes, and radio frequency (RF) electromagnetic fields (Haake et al., 1999). FDA guidelines and manufacturer limits prevent acute biological effects from RF heating and peripheral and vestibular nerve stimulation (Zaremba, 2003, 2008). While acute effects of MRI below these limits have not been reported, researchers must question whether MRI exposure of the cerebral cortex, brain stem, thalamus, and the neuroendocrine glands that moderate growth and development could produce long-term effects, even though mechanisms underlying such effects have not been described (Chou, 2007; Dini and Abbro, 2005; Robertson et al., 2009; Weiss et al., 1992). Continued vigilance for such effects is incumbent upon us as medical researchers. While we aim to improve child health through scientific investigations, harm to human research subjects, and particularly to a vulnerable population of children, is not an acceptable cost for such scientific advances. Here we examine the question of the safety of MRI from the point of view of its impact on physical and cognitive growth and development in healthy children. We address this question using MRI, cognitive, and biometric data that we have collected in a group of healthy, normally developing children who have participated in a longitudinal study of language development using fMRI for the past 10 years (Szaflarski et al., 2006).
Admittedly, our data set is limited, and the lack of significant MRI-related effects on cognitive and biometric measures does not preclude discovery of biological effects from repeated MRI in the future. However, the data permit us to establish an upper limit on how large an effect could be and still avoid detection using the gross biometric and cognitive assessments obtained in this longitudinal sample of healthy children. Controlling for relevant growth variables, we are also able to estimate the sample size needed to detect measurable effects at specified levels. A verifiable positive finding would have implications for research in children and could allow us to estimate the scale of the potential impact that MRI exposure might have on the selected biomarkers. Results of this study establish a baseline for MRI bioeffects and gauge the necessity and scale of prospective studies of MRI bioeffects in the future.

Materials and methods
A longitudinal cohort of 31 healthy children was enrolled between 2000 and 2002 at age 5 (n = 9), 6 (n = 7) or 7 (n = 15) years. Twenty-eight (13 girls, 15 boys) of these children have completed multiple years of annual neuroimaging, biometric and neurological exams, and cognitive testing, as listed in Table 1. Biometric data reported here include height, weight, and Body Mass Index (BMI) (Ogden et al., 2006). At each visit, MRI scanning was completed if possible given the child's status (e.g., orthodontic braces and medical status). Cognitive, developmental, and biological measures were recorded according to the schedule in Table 2 for the longitudinal cohort. IRB approval was obtained for the study, and informed consent was obtained from parents along with the assent of minor participants. We examined the longitudinal change in the Wechsler Intelligence Scale for Children, Third Edition (WISC-III) (Wechsler, 1991), administered to children prior to the first MRI and after the 3rd and 5th scans. Data from years 1, 3, and 5 for the FSIQ from the WISC-III are reported. In addition, the Coding subtest from the WISC-III was administered to all participants again in year 10 and is used to model the longitudinal trend across all scan years (1st, 3rd, 5th and 10th). We computed the linear regression of the Coding subtest scores for the WISC-III, accounting for the repeated nature of the data. The resulting line for the test with the corresponding 95% confidence interval is shown in Fig. 1.

Table 2. List and administration time of relevant neuroimaging, cognitive and biometric measurements for the longitudinal cohort (years in which each measure was administered).
- Neuroimaging (MRI): years 1-10 (annually)
- WISC-III/WPPSI-III (Wechsler, 1991): years 1, 3, 5
- Coding subtest: years 1, 3, 5, 10
- WASI: year 10
- OWLS (Carrow-Woolfolk, 1995), with listening comprehension, oral expression and oral comprehension subscales: years 1, 3, 5
- BRIEF, parent form (Gioia et al., 2000, 2002), with BRI, GEC and MI scores: years 6-10
- Weight and height: years 1-10

We also fit a linear regression for the FSIQ obtained from the WISC-III across the 1st, 3rd and 5th scan times, as displayed in Fig. 1 (right). In years 6-10, executive functioning was assessed annually by administering the parent form of the Behavior Rating Inventory of Executive Function (BRIEF) (Gioia et al., 2000). In this analysis we use the Global Executive Composite (GEC) score from the BRIEF as an overarching summary T-score with a mean of 50 and a standard deviation of 10.
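A minimal sketch of such a repeated-measures regression is given below, using simulated stand-in scores (28 children, Coding at years 1, 3, 5 and 10) rather than the study data; a random intercept per child plays the role of the repeated-measures correction.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: 28 children, Coding scores at years 1, 3, 5, 10,
# with a flat trend and spread comparable to the reported values.
rng = np.random.default_rng(1)
rows = [{"child": c, "year": y, "coding": 11.4 + rng.normal(0, 3)}
        for c in range(28) for y in (1, 3, 5, 10)]
df = pd.DataFrame(rows)

# Mixed-effects linear regression: a random intercept per child accounts
# for the repeated measurements on the same subjects.
model = smf.mixedlm("coding ~ year", df, groups=df["child"]).fit()
print(model.summary())   # the 'year' slope tests for change with MRI exposure
```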
As with the Wechsler scales above, we fit a regression model that accounts for the repeated-measures nature of the data to examine the relationship between the number of MRI scans and these scores. We plotted the fitted line with the corresponding 95% confidence interval, as shown in Fig. 2 (right). In addition, we examined the Oral and Written Language Scales (OWLS) (Carrow-Woolfolk, 1995), administered to children prior to the first MRI and after the third and fifth annual scans. These results are shown graphically in Fig. 2 (left). Finally, we evaluated the collected biometric data for weight, height, and Body Mass Index (BMI) in this cohort and compared them to the corresponding norms, using age- and sex-adjusted data from the National Center for Health Statistics (NCHS) of the Centers for Disease Control and Prevention (CDC). We used the 5th, 50th and 95th percentiles for BMI to illustrate the corresponding norms for our longitudinal cohort, as shown in Fig. 3.

Results
The mean and standard deviation of the Coding subtest obtained from the WISC-III at baseline and at year 10 were 11.4 (3.08) and 10.4 (3.5), with a p-value of 0.35. The plot and fit of the data across years 1, 3, 5 and 10 have a non-significant slope (p = 0.15), as illustrated in Fig. 1 (left). The mean BRIEF GEC scores in years 6-10 were 49.5 (9.3), 45.6 (7.5), 47.3 (11.2), 47.0 (10.0) and 46.1 (9.3), respectively (p = 0.67); again, the trend in the linear regression with the number of annual MRI scans does not reach significance (Fig. 2, right). Similarly, the body mass index did not deviate from the norms (50th percentile), and most of the measurements lie within the 5th and 95th percentiles of the CDC BMI chart over 10 years (Fig. 3). Note that the elevated scores for the cognitive measures in our cohort at baseline render comparisons with the population norms for the tests irrelevant. For example, the mean and standard deviation of the WISC-III FSIQ at baseline was 117.9 ± 13.5. Comparing our cohort directly with the norms (100 ± 15) might suggest that only higher-scoring children participate in MRI brain imaging research studies, which is not a point relevant to this study. Consequently, we focus primarily on the analysis of trends in biometric and cognitive scores over time, relative to normative trends.

Discussion
Adverse cognitive or biological effects from repeated MRI scans are not evident in the data from this longitudinal sample of children in the age range of 5 to 18 years, using the gross cognitive and growth-rate measures administered during the course of 10 years of exposure to annual MRI scans. The effect size, estimated as the least-squares mean difference between the scores at the last and first time points, is small (effect size for WISC FSIQ = 0.17, BRIEF = 0.31 and OWLS = 0.38) and shows no consistent positive or negative trends. This suggests that any changes due to repeated MRI scanning are likely to be very subtle and not clinically significant. Based on these effect sizes, estimated sample sizes of 280, 83, and 55 would be needed to detect significant positive or negative changes over time in FSIQ, BRIEF, and OWLS, respectively. These estimates are based on a five-year average exposure to MRI scans and 80% power. It is not possible to prove conclusively that deleterious effects do not occur with repeated MRI in children, and the present data can only be properly interpreted as setting an upper limit on the magnitude of any effect of repeated MRI in children in the specific age range of 5 to 18 years.
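The quoted sample sizes can be approximately re-derived with a standard power analysis; the sketch below assumes a two-sided one-sample (paired) t-test at α = 0.05 and 80% power, which reproduces values of the same order as those reported (the small differences plausibly reflect the exact test and variance assumptions used by the authors).

```python
from statsmodels.stats.power import TTestPower

# Sample sizes for the reported effect sizes, assuming a one-sample/paired
# t-test (an assumption; the authors' exact procedure is not specified here).
analysis = TTestPower()
for name, d in [("WISC FSIQ", 0.17), ("BRIEF", 0.31), ("OWLS", 0.38)]:
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"{name}: effect size d = {d:.2f} -> n ≈ {n:.0f}")
```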
In the present study, change over time for the cognitive measures was less than the standard error of these measures. This magnitude of change is within the range that would be expected when retesting children on these measures, regardless of whether they had received repeated MRI. Likewise, the distribution of BMI is not distinguishable from normal trends. We attempted to minimize practice effects on the WISC-III and the WASI by not repeating the tests every year. The tests were administered every other year in most cases, and the WASI was administered 5 years after the last exposure to the WISC for most children. The "Flynn Effect" is also known to result in increasing IQ scores in populations, related to increases in fluid and crystallized intelligence over time (Flynn, 1994). If practice effects or the Flynn Effect are present in our dataset, they would tend to inflate the cognitive test scores over time. Such an effect could be offset in our data by decreasing cognitive ability due to the repeated MRI exposures. There is no way to disambiguate these factors based on our retrospective study design and the data we have collected, and this is a limitation of the study. Other limitations of this retrospective study of the potential impact of repeated MRI exposure on physical and cognitive growth and development in healthy children include the small sample size, inconsistent cognitive testing due to the wide age range, and a lack of cognitive measures specifically designed to be sensitive to longitudinal trends. While each individual in the group may have a different trend for a specific measure collected at different time points, positive or negative variations in individual trends are expected. In most cases the variations in individual trends, upward or downward with time, fall within the standard deviations of the measures. To make the association between MRI exposure and neurobehavioral or biometric measures, we can only make statistical inferences from the group data. In this case we are able to estimate the significance of the trends relative to norms from the general population and generalize our findings to the larger population. At the group level, none of these trends is statistically significant. Despite the limitations described above, we are able to reject the first part of our initial hypothesis, that repeated exposure to MRI produces measurable adverse effects on neurocognitive development in children who are exposed to repeated MRI scans between the ages of 5 and 18 years. The biometric data in Fig. 3, although limited to BMI trends, also point to the rejection of the hypothesis that repeated exposure to MRI produces measurable adverse effects on physical development, though admittedly BMI is a very gross biometric measure and does not allow us to explore the impact of MRI on specific areas of growth and development. If direct evidence for an adverse interaction of magnetic fields or MRI with biological systems is identified, then researchers using MRI to study human development must pause to consider the implications. Until such a mechanism is discovered, we can only examine the relationships between MRI exposure and biological and behavioral measures of development using an epidemiological approach. Recent discussion of the safety of MRI for research in healthy children (Holland et al., 2010; Jiao, 2010; Prato et al., 2010) motivates us to use this approach to examine data from our longitudinal cohort of pediatric subjects as they grow into adulthood.
Future studies should compare participants who have had repeated MRI scans to a normative control group without exposure. Ideally, a prospective longitudinal study from birth to adulthood would be conducted in a large cohort with repeated MRI exposures and consistent cognitive assessments, using instruments designed to be administered repeatedly without influence from practice effects. There are a number of such instruments available, such as the ANAM (Kabat et al., 2001) and Cogstate (Falleti et al., 2006). Generally these tests are designed to detect subtle decline in cognitive ability due to brain injury or neurodegenerative diseases in adults; however, there are few such instruments with norms for children. By our own estimates of the effect sizes described above, cohorts of 100 to 300 children would be needed to detect significant changes over time using the gross measures available for this retrospective study. Using modern computer-based cognitive assessments designed to avoid practice effects in repeated administrations should improve sensitivity and might reduce sample-size requirements. However, this type of study would take decades to complete, and there are many disincentives to performing it, including cost, perceived risk to subjects, and reluctance in the medical community and among corporate interests to turn up any adverse effects of MRI in children. Given the lack of evidence for acute adverse effects from MRI scanning during its long history and widespread clinical use, it appears unlikely that such effects exist. The benefit of MRI for clinical diagnosis is unequivocal, and the medical-legal system in the United States weighs heavily in favor of using MRI in children to avoid missing a diagnosis or subjecting children to more invasive or risky procedures such as biopsies or X-rays. Consequently, it is unlikely that the ideal prospective, longitudinal MRI bioeffects study will ever be funded or conducted in children. Meanwhile, the data reported here provide some level of assurance that up to 10 MRI scans do not produce observable deleterious bioeffects in children, and the results can be used to define a framework for the design of a larger-scale study.

Conclusion
Examination of cognitive and biometric data from a decade-long longitudinal fMRI study of normal language development in this small, longitudinal sample of healthy children in the age range of 5 to 18 years, who received up to 10 MRI scans, provides evidence to support the belief that MRI poses minimal (if any) risk for use in research with healthy children.
Hormonal Therapy Resistance and Breast Cancer: Involvement of Adipocytes and Leptin

Obesity, a recognized risk factor for breast cancer in postmenopausal women, is associated with higher mortality rates regardless of menopausal status, which could in part be explained by therapeutic escape. Indeed, the adipose microenvironment has been described as influencing the efficiency of chemo- and hormonal therapies. Residual cancer stem cells could also play a key role in this process. To understand the mechanisms involved in the reduced efficacy of hormonal therapy on breast cancer cells in the presence of the adipose secretome, human adipose stem cells (hMAD cell line) differentiated into mature adipocytes were co-cultured with mammary breast cancer cells and treated with hormonal therapies (tamoxifen, fulvestrant). Proliferation and apoptosis were measured (fluorescence test, impedancemetry, cytometry), and the gene expression profile was evaluated. Cancer stem cells were isolated from mammospheres made from MCF-7. The impact of chemo- and hormonal therapies and of leptin was evaluated in this population. hMAD-differentiated mature adipocytes and their secretions were able to increase mammary cancer cell proliferation and to suppress the antiproliferative effect of tamoxifen, confirming previous data and validating our model. Apoptosis and the cell cycle did not seem to be involved in this process. The evaluation of gene expression profiles suggested that STAT3 could be a possible target; leptin, by contrast, did not seem to be involved. The study of isolated cancer stem cells revealed that their proliferation was stimulated in the presence of anticancer therapies (tamoxifen, fulvestrant, doxorubicin) and leptin. Our study confirmed the role of adipocytes and their secretome and, above all, the role of communication between adipose and cancer cells in interfering with the efficiency of hormonal therapy. Among the pathophysiological mechanisms involved, leptin does not seem to interfere with the estrogenic pathway but seems to promote the proliferation of cancer stem cells.

Introduction
Breast cancer is the most common cancer among women, with 523,000 new cases representing 13.4% of all cancer cases in Europe, and is the leading cause of cancer death in women (138,000 deaths, 16.2%) [1]. Many epidemiological studies have confirmed the link between obesity and the development of cancers such as colon, prostate and, more recently, ovarian cancer. Overweight and obesity are associated with a higher risk of postmenopausal breast cancer (RR = 1.12, 95% confidence interval [95% CI] = 1.09-1.15) [1], larger tumors, positive lymph-node status, and poorer outcomes regardless of menopausal status [2]. Furthermore, weight change during cancer treatment could also be associated with a poorer prognosis [3]. Despite the accumulation of evidence linking obesity to the development of breast cancer, this factor is rarely taken into account and could be decisive in the implementation of individualized treatment for overweight patients. The mechanisms by which obesity could interact with the development of cancer are complex and not fully understood. The importance of the mammary tumor microenvironment in the development, growth, and progression of cancer is widely recognized today. Interactions between the different cell types present in the adipose microenvironment are now being described.
This mammary adipose microenvironment is very heterogeneous and mainly consists of adipocytes (50% to 80% of the vascular fraction), as well as a set of cell types forming the stromal-vascular fraction and containing adipose stem cells, endothelial and immune cells, and fibroblasts, together with extracellular matrix (laminin, fibronectin, collagens, proteoglycans). Adipose tissue is now considered an endocrine organ capable of secreting soluble factors that can act on surrounding cells and on the composition of the extracellular matrix [4]. These are mainly growth factors, cytokines, adipokines, proteases, and vascular stimulation factors. Breast stroma can undergo phenotypic and functional changes, becoming active and providing a favorable environment for mammary tumor development [5]. Resistance of cancer cells to therapies has been proposed to explain the link between obesity and the higher mortality observed in breast cancer patients. Indeed, studies suggest such resistance for aromatase inhibitors [6] and for neoadjuvant chemotherapies such as anthracyclines and taxanes [7]. About 70% of breast cancers express estrogen receptors (ER+), and the most effective treatment is hormonal therapy, which blocks estrogen activity. Current hormonal therapies are mainly tamoxifen (Tx), an estrogen receptor antagonist; fulvestrant (Fv), a pure anti-estrogen responsible for the downregulation and degradation of this receptor; and aromatase inhibitors such as anastrozole and letrozole [8]. Unfortunately, de novo or acquired resistance can develop, leading to disease progression and increased mortality, which could be amplified in overweight women [9]. Some authors have demonstrated that aromatase inhibitors may be less efficient than tamoxifen in overweight and obese patients [10]. Tx resistance seems to be multifactorial, involving, for example, the loss of ER function or the disruption of signaling pathways (RTK, PI3K, NF-κB) [11]. The role of cancer stem cells (CSCs) in hormonal therapy resistance has also been investigated. CSCs share properties with normal stem cells, including the ability to self-renew and differentiate [12], but they also have additional specific features, such as uncontrolled proliferation and partial or abnormal differentiation, contributing to increased tumorigenicity and potentially driving metastases. CSCs, first discovered in liquid tumors, were subsequently found in solid tumors such as breast cancer, in which they could represent 0.1% to 2% of all breast cancer cells [13]. Breast CSCs express a particular phenotype based on the expression of markers such as CD44+/CD24−, the enzymatic activity of aldehyde dehydrogenase (ALDH), and the overexpression of transmembrane pumps (ATP-binding cassette, subfamily G, member 2; ABCG2) leading to the exclusion of the Hoechst 33342 fluorescent probe and the characterization of a cell population called the "side population" (SP) fraction [14]. Cells in this SP fraction may therefore be able to efflux toxic drugs, indicating their involvement in cancer drug resistance. The role of CSCs in resistance to hormonal therapy has also been investigated: Tx is able to promote CSC survival in MCF-7, since pretreatment with 4-hydroxytamoxifen (4-OH-Tx), a metabolite of Tx, raises their ability to form mammospheres [15]. Thus, a worse prognosis is observed in obese or overweight women treated with hormonal therapy, even though Tx remains more effective than aromatase inhibitors, the resistance mechanisms remaining unknown.
The aim of this study was therefore to investigate (i) how obesity could interfere with hormonal therapies, by evaluating the role of the adipose microenvironment in signaling pathways, and (ii) the behavior of isolated mammary CSCs in the presence of hormonal therapy and adipokines. All cells were maintained under mycoplasma-free conditions (MycoAlert™ PLUS Mycoplasma Detection Kit, Lonza, Basel, Switzerland).

Influence of the Adipose Secretome
To assess the specific role of the adipose secretome, the proliferation of mammary cancer cells (MCF-7 and MDA-MB-231) cultured with conditioned media (CM) obtained from the culture of mature adipocytes (MA) was evaluated using the iCELLigence technology, which allows automatic monitoring of cell adherence and proliferation in real time. CM were collected after 48 h of culture of MA in DMEM/F12 supplemented with FBS (10%) and glutamine (1%). Before use, they were kept under a nitrogen atmosphere at −80 °C. After 24 h of adhesion in the iCELLigence system, cells were exposed to CM (diluted 1:1 in fresh complete adipose cell medium) and/or tamoxifen (Tx, IC₅₀ = 10 µM, Sigma-Aldrich) for 72 h. The impedance value of each well was measured by the iCELLigence system every 10 min for 72 h and expressed as cell index (CI) values. Data for cell adherence were normalized at 24 h, corresponding to the time of treatment, to give a normalized cell index. Three independent experiments were conducted.

Evaluation of Cell-Cell Interactions
A co-culture system was used, with breast cancer cells seeded at the bottom of the wells and MA seeded in inserts, allowing assessment of the interactions between the two cell types through a porous membrane (Transwell culture system, pore size 0.4 µm). Proliferation was measured after 72 h with the resazurin test (3 independent experiments; excitation 530 nm, emission 590 nm; Fluoroskan Ascent FL®, Thermo Fisher Scientific). Results were expressed as a percentage of cell growth ± SEM. Paired t-tests were used to compare gene expression levels with at least 2 valid pairs of values. Control of the false discovery rate due to multiple testing was done according to the Benjamini-Hochberg method for each comparison separately. Using ΔCt values, gene expression was plotted as a heatmap, paired with a two-way hierarchical cluster analysis (package "gplots") in R version 3.5.0. To infer how gene expression covaried with the factors "Tx treatment" and "co-cultured MCF-7", a multiple factor analysis (MFA) was carried out on the ΔCt values using the package "FactoMineR".

Phenotyping of SP Cells
Cells were incubated in cold PBS-2% FBS with directly conjugated primary antibodies or isotype controls, i.e., CD44-FITC (10 µL per test) or CD24-PE (10 µL per test) (BD Pharmingen, San Jose, CA, USA), for 20 min at 4 °C in the dark, according to the manufacturer's instructions. PI (5 µg/mL) was added to discriminate dead cells, and cell suspensions were filtered through a 40 µm cell strainer (BD Biosciences, San Jose, CA, USA) just prior to analysis.

Identification of ALDH1-Expressing Cells
Cells were analyzed using the ALDEFLUOR detection kit (StemCell Technologies, Vancouver, BC, Canada) according to the manufacturer's instructions. The activated BODIPY™-aminoacetaldehyde (BAAA) is a fluorescent, non-toxic substrate for ALDH that diffuses into viable cells. In the presence of ALDH1, BAAA is converted into BODIPY™-aminoacetate, which is retained inside the cells. The fluorescence intensity is proportional to ALDH1 activity.
A specific inhibitor of ALDH, diethylaminobenzaldehyde (DEAB), is used to control for background fluorescence.

Cytometry Analysis and Cell Sorting
Viable cells were analyzed and isolated by flow cytometry (BD FACSAria SORP, BD Biosciences) using an 85 µm nozzle. Data were acquired at 4 °C using BD Diva 7.0 software. The cytometer was equipped with a UV laser (OPSL 3.6, 100 mW, Coherent). Hoechst Blue and Hoechst Red were detected on a linear scale with a filter combination consisting of a 450/50-nm bandpass filter (Blue), a 670/30-nm bandpass filter (Red), and a 600-nm longpass filter to split the emission wavelengths. PI emission was measured on a logarithmic scale using a 695/40-nm bandpass filter and 488-nm excitation.

Statistics
Results were expressed as mean ± SEM. Statistical analysis was performed using the paired, two-tailed Student's t-test or three-way ANOVA in RStudio, and regression with Fisher's PLSD post-hoc test in StatView® software (SAS Institute Inc., Cary, NC, USA).

Crosstalk between Mammary Cancer Cells and Adipocytes Is a Key Element in the Resistance to Antiestrogen Therapy and Can Be Mediated by the Indirect Estrogen Pathway
In the presence of Tx, a selective estrogen receptor modulator (SERM), the proliferation of MCF-7 breast cancer cells was reduced (−50%) (Figure 1A,B), corresponding to expected results since the half-maximal inhibitory concentration (IC₅₀ = 10 µM) was used, as in our previous experiments [18]. When cancer cells were exposed to the adipose secretome through the use of CM (Figure 1A), MCF-7 proliferation was slightly increased (+130%, ns). The antiproliferative effect of Tx was reduced, but not significantly (−25% in the presence of CM and Tx vs. −50% with Tx only, ns). Co-culture experiments were then performed in order to evaluate the interactions between cancer and adipose cells in the presence of antiestrogens (Figure 1B). We found that MA and their secretions were able to double MCF-7 proliferation (+200%, p < 0.05 vs. control), to completely abolish the antiproliferative effect of Tx, and even to increase MCF-7 proliferation in the presence of Tx (+254%, p < 0.01 vs. control). These experiments confirmed the role of adipocytes and their secretome and, above all, of the crosstalk between adipose and cancer cells. Similar results were found (Figure 1C) with another antiestrogen agent, fulvestrant (Fv), a selective estrogen receptor degrader (SERD): the antiproliferative effect of Fv was completely reversed in the presence of adipocytes and their secretions, and proliferation was even increased (+200%, p < 0.05), suggesting that adipocytes can also interfere with this pure antiestrogen. Since our results suggested that adipocytes and their secretions could modify the antiproliferative activity of two distinct antiestrogens routinely used in breast cancer treatment, we co-cultured MDA-MB-231 cells, an ER− mammary cancer cell line, with hMAD-differentiated cells. Tx was able to decrease the proliferation of these ER− cancer cells (Figure 2A), and this effect was reduced in the presence of mature adipocytes. By contrast, fulvestrant had no effect on this cell line (Figure 2B).
These data suggested that the reduced antiproliferative effect of Tx observed in the presence of adipocytes was mediated by a pathway other than the direct estrogen receptor pathway.

Both Co-Culture and Tx Treatment Produce Specific Gene Expression Profiles
A multiple factor analysis (MFA) was conducted in order to evaluate the balance between the two qualitative variables (the "co-culture" and "Tx treatment" conditions) and the quantitative variables corresponding to the gene expression data. The MFA revealed that the most powerful dimension was associated with the condition "Tx treatment" (dimension 1 explained 51.37% of the variability); the second dimension, corresponding to the condition "co-culture", explained 37.75% of the variability (Figure 3A). Four distinct groups were clearly segregated: control cells; co-cultured cells; cells treated with Tx; and co-cultured cells treated with Tx. The barycenters of "Tx-treated cells" and "non-treated cells" showed that Tx treatment accentuated the differences between groups along dimension 1 (green lines). Similarly, the barycenters of "co-cultured cells" and "non-co-cultured cells" showed that the co-culture accentuated the differences between groups along dimension 2 (red lines). Blue lines correspond to the influence of gene expression. A circle-correlation analysis was performed (Figure 3B), and a summary of the MFA is supplied in Figure 3C. Hierarchical cluster analysis (Figure 3D) segregated the cell types according to Tx treatment, which had a high impact on gene expression.
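The original two-way clustered heatmap was produced with the R package "gplots"; the sketch below shows an equivalent construction in Python on simulated ΔCt values (the gene list and conditions mirror those analyzed here, but the numbers are random placeholders, not study data).

```python
import numpy as np
import pandas as pd
import seaborn as sns

# Simulated dCt matrix (genes x conditions); values are random placeholders.
rng = np.random.default_rng(2)
genes = ["ESR1", "ESR2", "STAT3", "BCL2", "BAX", "AKT1", "CCND1", "MYC", "TNF", "LEP"]
conditions = ["ctrl", "coculture", "Tx", "coculture+Tx"]
dct = pd.DataFrame(rng.normal(0, 1, (len(genes), len(conditions))),
                   index=genes, columns=conditions)

# clustermap clusters rows (genes) and columns (conditions) simultaneously,
# mirroring the two-way hierarchical cluster analysis described in the text.
g = sns.clustermap(dct, method="average", metric="euclidean", cmap="vlag")
g.savefig("dct_clustermap.png")
```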
Apoptosis and the Cell Cycle Do Not Seem to Be the Biological Pathways Involved in Tx Resistance in Our Experiments
The correlation circle analysis (Figure 3B) showed that Tx treatment was positively correlated with the expression of MYC and negatively correlated with BCL2, AKT1, and CCND1. Gene expression analysis (Figure 4) showed that the expression of BCL2, which was increased in co-cultured MCF-7 cells, was greatly reduced in the presence of Tx (co-cultured MCF-7: relative expression [RE] = 1.48, p < 0.05; co-cultured MCF-7 + Tx: RE = 0.49, p < 0.05). The expression of BAX was not modified between conditions. AKT1 and CCND1 (cyclin D1) were under-expressed in the presence of Tx (AKT1: RE = 0.68, p = 0.16; CCND1: RE = 0.35, p < 0.05), and their expression remained stable when cells were cultured with mature adipocytes. When we evaluated the apoptotic process with an annexin V-FITC/PI apoptosis assay, we found that Tx decreased the percentage of live cells (ns), even in the presence of the adipose secretome (p < 0.05) (Figure 5). In parallel, Tx increased the percentage of cells in late apoptosis, even in the presence of CM.
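The relative expression (RE) values above come from qPCR. The calculation is not spelled out in the text, but assuming the standard 2^−ΔΔCt method, it would look like this (all Ct values are invented):

```python
# Hypothetical illustration of the standard 2^-ddCt relative-expression
# calculation commonly used for qPCR data. The paper reports RE values
# but does not spell out the method; all Ct values here are invented.
def relative_expression(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """RE = 2^-((Ct_gene - Ct_ref) - (Ct_gene_ctrl - Ct_ref_ctrl))."""
    ddct = (ct_gene - ct_ref) - (ct_gene_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# e.g., BCL2 in co-cultured MCF-7 vs. control, normalized to a housekeeping gene
print(relative_expression(ct_gene=24.3, ct_ref=18.0,
                          ct_gene_ctrl=24.8, ct_ref_ctrl=17.9))  # ~1.5-fold
```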
STAT3 Could Be a Target in Tx Resistance in the Presence of an Adipose Microenvironment
Concerning the adipokines and the estrogen pathway, the correlation circle (Figure 3B) revealed that the condition "Tx treatment" was positively correlated with the expression of TNF, and the condition "co-culture" was positively correlated with the expression of STAT3 and ESR2. The evaluation of gene expression (Figure 4) revealed that ESR1 was significantly under-expressed during Tx exposure and that its expression returned to a level comparable to that of the control in the presence of adipose cells. qPCR analyses also showed a significant increase in STAT3 expression in the presence of the adipose secretome and Tx, suggesting that this pathway may be a target in the observed drug resistance.

Leptin Does Not Seem to Be Involved in the Reduction of the Antiproliferative Effect of Tamoxifen
We then investigated whether leptin, a major adipokine whose concentrations are increased in obese people, could be involved in the resistance to hormonal therapy. An anti-leptin antibody was used at a concentration of 0.5 µg/mL (according to the supplier's recommendations), well above the highest leptin concentration measured in the media ([leptin] = 3.1 ng/mL in CM), thus neutralizing all the leptin present in our experiments. When MCF-7 mammary cancer cells were co-cultured with adipocytes, the anti-leptin antibody had no effect (Figure 6), suggesting that leptin was not the adipokine involved in the decreased antiproliferative effect of Tx. The ANOVA only revealed the discrimination between co-cultured and non-co-cultured cells. The analysis of gene expression did not indicate the involvement of leptin, adiponectin, or their receptors in the lower efficiency of Tx.
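As a sanity check on the neutralization conditions above, 0.5 µg/mL of antibody is a large molar excess over 3.1 ng/mL of leptin. A back-of-the-envelope calculation, assuming typical molecular weights (~150 kDa for IgG, ~16 kDa for leptin; neither is stated in the text):

```python
# Back-of-the-envelope molar-excess check for the leptin neutralization.
# Molecular weights are typical literature values (IgG ~150 kDa,
# leptin ~16 kDa), assumed here rather than taken from the paper.
antibody_nM = (0.5e-3 / 150_000) * 1e9  # 0.5 ug/mL = 0.5e-3 g/L -> ~3.3 nM
leptin_nM   = (3.1e-6 / 16_000)  * 1e9  # 3.1 ng/mL = 3.1e-6 g/L -> ~0.19 nM
print(f"antibody ~ {antibody_nM:.2f} nM, leptin ~ {leptin_nM:.2f} nM, "
      f"molar excess ~ {antibody_nM / leptin_nM:.0f}x")
```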
Anticancer Treatment and Adipokines Increase the Proliferation of Isolated SP Cells
The identification of ALDH1+ cells (Figure 7) permitted us to identify 0.1% of ALDH1+ CSCs in parental MCF-7 cells. This number was obtained by subtracting the percentage of ALDH1+ (DEAB+) cells from the percentage of ALDH1+ (DEAB−) cells. When mammospheres were formed after 3 weeks of culture, the number of ALDH1+ cells in the population was doubled. The CD24low/CD44+ fraction was similarly increased in mammospheres in comparison with parental MCF-7 cells (from 1.3% in parental MCF-7 cells (Figure 8A) to 3% in mammospheres (Figure 8B)). The SP fraction was likewise increased in mammospheres compared to parental MCF-7 cells (from 0.5% in parental MCF-7 cells (Figure 8C) to 3% (Figure 8D)). In both MCF-7 (Figure 8E) and mammosphere culture (Figure 8F) conditions, the ABC transporter inhibitor verapamil was able to inhibit Hoechst exclusion, resulting in the disappearance of SP cells from the gate and thus confirming the proper identification of SP cells. The impact of isolated CSCs on anticancer therapy resistance and the influence of adipokines were evaluated on isolated SP cells after exposure to Tx, Fv, Doxo, or leptin. Cell proliferation was measured at 24 h by resazurin assay (Figure 9). All the anticancer molecules used increased SP cell proliferation after 24 h of treatment. In the same way, SP cell proliferation was also significantly increased after leptin treatment.
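The 0.1% ALDH1+ figure above is a background-subtracted percentage, with the DEAB-inhibited control defining background fluorescence. A trivial sketch of that bookkeeping (event counts are invented placeholders):

```python
# Sketch of the ALDH1+ background subtraction described above: the
# DEAB-treated control defines background fluorescence, and its
# ALDH1+ percentage is subtracted from the uninhibited sample.
# Event counts are invented placeholders.
def aldh1_positive_pct(events_pos, events_total):
    return 100.0 * events_pos / events_total

pct_no_deab = aldh1_positive_pct(events_pos=60, events_total=50_000)  # 0.12%
pct_deab    = aldh1_positive_pct(events_pos=10, events_total=50_000)  # 0.02%
print(f"ALDH1+ CSC fraction ~ {pct_no_deab - pct_deab:.2f}%")         # ~0.10%
```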
Discussion
Increased BMI and obesity increase breast cancer mortality, and some studies highlight the role of the adipose microenvironment in the resistance to cancer treatment [1]. Despite the high efficacy of Tx treatment, a relapse is observed in 10% to 40% of patients, depending on initial nodal status and tumor grade [19]. Through tridimensional and co-culture models, we have recently shown that the secretome of adipocytes from obese women is able to reduce the antiproliferative activity of Tx [18]. In the present study, we aimed to identify the biological pathways and the involvement of cancer stem cells in the resistance to hormonal treatment. Using an adipose cell line, which permitted us to eliminate interindividual variations, we confirmed that the adipose secretome and, above all, the interactions between adipose and breast cancer cells participated in the lower efficacy of antiestrogen treatments (Tx and Fv). The mechanisms involved did not seem to be apoptosis and the cell cycle, but rather the indirect estrogen pathway. This activation may involve adipokines through activation of the JAK/STAT pathway. The role of leptin, a major adipokine secreted by adipose tissue, has been widely investigated [20,21], and we have shown that it stimulates the proliferation of breast cancer cells but not that of normal cells [20,22]. The different angiogenic processes (proliferation, migration, and invasion of endothelial cells) are also favored by leptin. In addition, this adipokine is able to reduce the effectiveness of antineoplastic molecules such as 5-fluorouracil, taxol, or vinblastine, as well as antiestrogens such as Tx [23].
Clinical studies have confirmed a positive association between serum leptin levels and breast cancer risk, particularly in overweight/obese women [24]. However, our results did not show an involvement of leptin in the resistance to hormonal therapies in our model. We hypothesized that the JAK/STAT pathway could be involved. In the literature, Janus kinases (JAK) and signal transducer and activator of transcription (STAT) proteins, particularly STAT3, are described as among the most promising targets for cancer treatment. STAT3 is described as both a transcriptional activator and an oncogene that is tightly regulated under physiological conditions. STAT3 is constitutively activated in all breast cancer subtypes and predominantly in triple-negative cancers [25]. A large number of molecules are able to activate the STAT3 pathway: first, cytokines (IL-6, IL-8, IL-11, oncostatin, IL-10, IL-32) and growth factors, which bind to receptors with tyrosine kinase activity (RTK) such as EGFR, HER, and VEGFR, but also to non-RTK (Src) receptors. The activation of STAT3 results in the activation of cell proliferation, survival, invasion, and angiogenesis, and also in the epithelial-mesenchymal transition (EMT) [25,26]. Recently, authors highlighted the role of IL-6 in therapeutic resistance [27] and in inducing an EMT phenotype. By blocking the secretion of IL-6 by adipocytes and cancer cells, they managed to decrease proliferation, migration, invasion, and EMT, suggesting a paracrine role of this adipokine [28]. Thus, the different actors of the mammary adipose microenvironment and the cancer cells themselves can activate the STAT3 pathway in an autocrine or paracrine manner. A large proteomic study of adipose tissue samples collected next to mammary tumors identified a large number of hormones, cytokines, and growth factors involved in various biological pathways such as signal transduction, cell growth, immune response, and apoptosis [29]. Recently, conditioned media of mammary adipose tissue from women with breast cancer were shown to promote the proliferation, adhesion, and migration of mammary epithelial cells, contrary to conditioned media from the adipose tissue of non-cancer patients, suggesting the importance of soluble adipose tissue factors in the vicinity of tumor cells [30]. In addition, mammary cancer cells may alter the phenotype of adipocytes, which in turn would promote tumor aggressiveness and local invasion [31]. The adipocytes located near the tumor are called "cancer-associated adipocytes" (CAA) and are characterized by a loss of lipid content, a decrease in the expression of late markers of adipogenesis, and an overexpression of inflammation markers (IL-6, IL-1β) and proteases (MMP-11, PAI-1) [31]. A recent study confirmed that CAA are smaller than adipocytes located more than 2 cm from the tumor or adipocytes from a breast without a tumor. Moreover, it seems that these adipocytes more strongly express versican (a proteoglycan involved in the binding of cells to the extracellular matrix), AdipoR1 (a compensatory phenomenon following the decrease in adiponectin expression), and CD44 (a role in adhesion and migration), and, on the contrary, show a decrease in the expression of adiponectin and perilipin (an MA marker) [32]. We had previously shown the role of leptin in interfering with anticancer treatments [23], and herein we wanted to see if leptin had any activity on cancer stem cells (CSCs).
Indeed, there is emerging evidence that CSCs are involved in breast carcinogenesis and in the resistance to anticancer therapies in breast cancer. However, the low number of these cells makes their analysis and functionality difficult to study. Under these conditions, mammosphere formation increased the percentage of SP cells and of the ALDH1+ and CD24low/CD44+ fractions, and offered a better way to determine the functionality of these cells. Our results on the treatment of isolated SP fractions showed that CSCs were resistant to anticancer treatments such as Tx, Fv, and doxorubicin. It was recently found that Tx had the ability to promote the survival of cells with stem cell properties in MCF-7 cells, since a pretreatment with 4-OH-Tx raised the ability of these cells to form mammospheres [15]. Conversely, Ao et al. showed that treatment with 4-OH-Tx or Fv induced a decrease in cell proliferation, even if 4-OH-Tx or Fv did not affect the ability of these cells to form mammospheres or their tumorigenicity [15]. The difference between the two studies can be explained by the differences in the concentrations of 4-OH-Tx used and in the technical design. Our results also indicated that leptin was able to increase SP cell proliferation. It was previously shown that leptin is involved in the stimulation and maintenance of breast CSCs [33] and that CSC survival, evaluated through mammosphere formation, increases in response to leptin, along with an increase in leptin receptor gene expression [34]. Adipokines such as leptin could therefore be involved in CSC-induced resistance to anticancer treatment, especially in overweight patients. Thus, the adipose secretome and cellular interactions seem to play a key role in resistance to hormonal therapy. This resistance cannot be explained by a single process. Among the pathophysiological mechanisms involved, leptin does not seem to interfere with the estrogenic pathway but could favor the proliferation of CSCs. Further studies are needed to identify targets and facilitate the implementation of personalized care for overweight breast cancer patients, who should be considered a unique patient population.
Immunomodulatory Activities of a Fungal Protein Extracted from Hericium erinaceus through Regulating the Gut Microbiota
A single-band protein (HEP3) was isolated from Hericium erinaceus using chemical separation combined with pharmacodynamic evaluation methods. This protein exhibited immunomodulatory activity in lipopolysaccharide-activated RAW 264.7 macrophages by decreasing the overproduction of tumor necrosis factor-α, interleukin (IL)-1β, and IL-6, and downregulating the expression of inducible nitric oxide synthase and nuclear factor-κB p65. Further research revealed that HEP3 could improve the immune system via regulating the composition and metabolism of the gut microbiota to activate the proliferation and differentiation of T cells, stimulate intestinal antigen-presenting cells in mice with high-dose cyclophosphamide-induced immunotoxicity, and play a prebiotic role in the case of excessive antibiotics in inflammatory bowel disease model mice. Additional experiments also showed that HEP3 could be used as an antitumor immune inhibitor in tumor-bearing mice. The results of the present study suggested that fungal protein from H. erinaceus could be used as a drug or functional food ingredient for immunotherapy because of its immunomodulatory activities.

INTRODUCTION
Mushrooms are rapidly becoming recognized as a promising source of novel proteins. Fungal immunomodulatory proteins (FIPs) are small-molecule proteins extracted from the fruiting bodies of some higher basidiomycetes (mushrooms). FIPs have a similar structure and immune function to lectins and immunoglobulins and were first extracted from Ganoderma lucidum in 1989. Different kinds of FIPs have since been extracted from G. lucidum, G. tsugae Murrill, Flammulina velutipes, and Volvariella volvacea (1)(2)(3)(4). FIPs have exhibited many beneficial functions in previous studies, including antitumor (5) and antiallergy (6,7) activities and the ability to stimulate immune cells to produce cytokines (8,9). Several proteins, such as lectins (10), lignocellulolytic enzymes (11)(12)(13)(14), protease inhibitors (15,16), and hydrophobins (17)(18)(19), have shown unique features and could offer solutions to several medical and biotechnological problems (such as microbial drug resistance, low crop yields, and demands for renewable energy). These properties, along with the absence of toxicity, render these biopolymers ideal compounds for developing novel functional foods or nutraceuticals as consumers' consciousness of and demand for healthy food increase. The large-scale production and industrial application of some fungal proteins prove their biotechnological potential and establish higher fungi as a valuable, although relatively unexplored, source of unique proteins. Hericium erinaceus, belonging to the division Basidiomycota and class Agaricomycetes, is both an edible and a medicinal mushroom. It is popular across the continents as a delicacy and is used as a replacement for pork or lamb in Chinese vegetarian cuisine. It is rich in active constituents such as diterpenoid compounds, steroids, polysaccharides, and proteins, making it a good natural resource (18). Previous studies have shown the effectiveness of H. erinaceus in improving cognitive impairment (20), stimulating nerve growth factors (21) and nerve cells (22), exerting hypoglycemic effects (23), and protecting against gastrointestinal cancers (24,25).
H. erinaceus is also processed into different kinds of products (beverages, cookies, oral liquids, and so on) sold in supermarkets and drugstores. Until now, little has been studied about the proteins from H. erinaceus (26). A previous study revealed, using the Coomassie Brilliant Blue G-250 method, that the content of total proteins in H. erinaceus was up to 20 mg/100 g, indicating that the proteins in H. erinaceus might be good active ingredients and hence should not be ignored. Therefore, the aim of this study was to evaluate the immunomodulatory activities of FIPs extracted from the fruiting bodies of H. erinaceus using cell and animal experiments and to reveal the underlying mechanism. This study might lay a foundation for the application of the nutritional and medicinal value of H. erinaceus.

Plant Material and Protein Extraction
The fresh fruiting bodies of H. erinaceus were collected from the Research Laboratory of Edible Mushrooms of the Guangdong Institute of Microbiology, China, in June 2015, and identified by Prof. Xie Yizhen of the Guangdong Institute of Microbiology. Fresh fruiting bodies (5,000 g) of H. erinaceus were pureed in a blender (Philips HR2095/30, Royal Philips, Amsterdam, Netherlands), and extracts were prepared by the methods shown in Presentation S1 in the Supplemental Material. The solutions were combined, filtered after acidification to pH 4.3 with dilute acetic acid, and then mixed with (NH4)2SO4 to 80% saturation. The resulting solution was kept in a refrigerator at 4°C overnight and then centrifuged at 5,000 rpm for 20 min at 4°C. The supernatant was removed. The precipitate was dissolved in 5 mL of pH 8.0 Tris-HCl buffer and lyophilized in a vacuum freeze dryer (Alpha 1-4 LD plus, Martin Christ, Osterode, Germany) to obtain the crude protein extract (Figure 1A). Further purification was done using membrane separation technology combined with an activity evaluation experiment in rats with trinitrobenzenesulfonic acid solution (TNBS)-induced inflammatory bowel disease (IBD). The protein extracts were analyzed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). SDS-PAGE (10% w/v) was performed on a Mini-PROTEAN II gel apparatus (Bio-Rad Laboratories, Inc., USA) as described by Laemmli and Favre (27). The gels were stained with Coomassie Brilliant Blue R-250, and a protein molecular weight standard (Amersham Biosciences, Sweden) was used as a reference. As shown in Figure 1B, the extracts contained many kinds of proteins, with the majority having a molecular weight of 37-100 kDa; some had a molecular weight of 50-60 kDa (HEP3, Figure 1B). The proteins were isolated and purified using membrane separation technology combined with Sephadex G-75 chromatography (Sigma-Aldrich Co. LLC, USA).

Animals
This study used 5- to 6-week-old male Sprague-Dawley rats (weighing 180-220 g), 4- to 5-week-old male BALB/c mice (weighing 16-20 g), and male Kunming mice (weighing 18-22 g), all purchased from the Guangdong Medical Laboratory Animal Center, Guangzhou, China. The animals were kept in the specific-pathogen-free Animal Laboratory of the Guangdong Institute of Microbiology, in a temperature- (23 ± 1°C) and humidity- (55 ± 10%) controlled room under a 12-h light/dark cycle (lights off at 17:00). The animals were given free access to sterilized food and water. The experimental protocols were approved by the Animal Ethics Committee of the Guangdong Institute of Microbiology, and all experimental procedures conformed to the National Institutes of Health Guide for the Care and Use of Laboratory Animals. All efforts were made to minimize the number of animals used.
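For the 80% (NH4)2SO4 saturation step in the extraction above, the paper does not give salt amounts; a commonly used empirical formula for ~20 °C (an assumption here, not a detail from the text) is g/L = 533(S2 − S1)/(100 − 0.3·S2):

```python
# Hypothetical helper for the 80% ammonium sulfate saturation step.
# Uses a common empirical formula for ~20 C (assumed; not stated in
# the paper): grams of (NH4)2SO4 per liter to go from S1% to S2%
# saturation = 533 * (S2 - S1) / (100 - 0.3 * S2).
def ammonium_sulfate_g_per_L(s1_pct, s2_pct):
    return 533.0 * (s2_pct - s1_pct) / (100.0 - 0.3 * s2_pct)

print(f"{ammonium_sulfate_g_per_L(0, 80):.0f} g/L to reach 80% saturation")  # ~561 g/L
```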
Cell Culture
The RAW 264.7 macrophage, HIEpiC, and CC531 cell lines were obtained from Shanghai Aolu Biological Technology Co., Ltd. (China). They were maintained in Dulbecco's modified Eagle medium or RPMI-1640 supplemented with 10% fetal bovine serum at 37°C in a humidified atmosphere of 95% air and 5% CO2 and seeded into 75-cm2 culture dishes. On reaching 80% confluence, the cells underwent digestive transfer culture after fusion growth at a density of 5 × 10^4 cells/mL.

Anti-Inflammatory Evaluation in IBD Model Rats
After a 7-day adaptation period, the animals were randomly divided into four groups (normal, model, HEP [100 mg/(kg·day) of proteins extracted from H. erinaceus], and 5-aminosalicylic acid groups), with six rats in each group, housed three per cage. The rats were fed a standard diet, and water was freely available. After 24 h of fasting, the rats were anesthetized by intraperitoneally injecting 2% sodium pentobarbital (0.2 mL/100 g). The rats were intubated from the anus using latex tubing of 2 mm diameter (lubricated with edible oil before use), gently inserting the tubing about 8.0 cm into the lumen. Then, 150 mg/kg of TNBS (dissolved in 50% ethanol; Sigma-Aldrich, MO, USA) solution was injected through the latex tubing, and the rats were hung upside down for 30 s to enable the mixture to fully seep into the lumen without leakage. The rats in the HEP group were treated by intragastric administration starting 1 day after TNBS induction. After 14 days of treatment, the rats were anesthetized by intraperitoneally injecting 2% sodium pentobarbital (0.25 mL/100 g). Blood was collected by the abdominal aortic method, and the serum was obtained by centrifugation (1,500 rpm, 10 min). The serum was then used to monitor the production of the cytokines interleukin (IL)-1α, IL-2, IL-8, IL-10, IL-11, and IL-12; interferon (IFN)-γ and tumor necrosis factor (TNF)-α; vascular endothelial growth factor (VEGF); human macrophage inflammatory protein-1α (MIP-α); and macrophage colony-stimulating factor (M-CSF) and myeloperoxidase (MPO).

[Figure 2 | Effects of HEP on TNBS-induced rats. Normal group; model group, induced by TNBS enema; HEP group, the crude protein extract-treated group after TNBS enema; and positive control group, treated with 100 mg/(kg·day) of 5-aminosalicylic acid after TNBS enema. After treatment for 14 days, the cytokines IL-1α (A), IL-2 (B), IL-8 (C), IL-10 (D), IL-11 (E), IL-12 (F), IFN-γ (H), TNF-α (G), VEGF (I), MIP-α (J), M-CSF (K), and MPO (L) were measured. The assays were carried out according to the procedures recommended in the enzyme-linked immunosorbent assay kit manual. Values are means ± SDs of three independent experiments. #P < 0.05 vs the normal group; *P < 0.05, **P < 0.01 vs the TNBS-treated group.]
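The serum cytokine levels above were read out with ELISA kits, which typically back-calculate concentrations from a standard curve. A minimal sketch of the common four-parameter logistic (4PL) fit is shown below; the standards and optical densities are invented, since the text only states that the kit manual was followed.

```python
# Minimal 4PL standard-curve sketch for ELISA readouts, as kits
# typically prescribe. Standards and ODs below are invented; the paper
# only states that the kit manual was followed.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL: d + (a - d) / (1 + (x / c)**b); a = low asymptote, d = high."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0])  # pg/mL standards
od   = np.array([0.10, 0.18, 0.33, 0.60, 1.02, 1.55, 2.00])     # hypothetical ODs
popt, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.2, 100.0, 2.3], maxfev=10_000)

# invert the fitted curve to read a sample concentration off its OD
a, b, c, d = popt
sample_od = 0.85
sample_conc = c * (((a - d) / (sample_od - d)) - 1.0) ** (1.0 / b)
print(f"sample ~ {sample_conc:.0f} pg/mL")
```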
The colons obtained from the rats were fixed in 4% paraformaldehyde at pH 7.4 for further pathological observation.

Immunomodulatory Activity on RAW 264.7 Macrophages
After incubating RAW 264.7 macrophages with HEP3 (0-200 µg/mL) for 4 h, followed by an additional 24 h of treatment with lipopolysaccharide (LPS; 1 µg/mL), the supernatant was used to monitor the production of the cytokines IL-1β, IL-6, and TNF-α and of nitric oxide (NO), and the intracellular levels of inducible nitric oxide synthase (iNOS) and nuclear factor-κB (NF-κB) p65. The assays were carried out according to the procedures recommended in the enzyme-linked immunosorbent assay (ELISA) kit manuals, purchased from USCN Life Science Inc. (Wuhan, China).

Effect on the Cyclophosphamide-Induced Immunosuppression Mouse Model
The animals were randomly divided into four groups (n = 10): normal, model, and HEP3-treated [200 and 100 mg/(kg·day)] groups. Immunosuppression was induced by intraperitoneally injecting cyclophosphamide [cyclophosphamide-induced group (CTX), 80 mg/kg] once a day for 3 days, while the mice in the normal group were intraperitoneally injected with saline as a control. All mice had free access to tap water and food (ad libitum). On day 14, the mice were sacrificed, and the serum, spleen, and cecal contents were isolated for further analysis.

Prebiotic Effect of HEP3 on TNBS-Induced Mice
All animals were randomly divided into nine groups (n = 9): control; model; model and high-dose antibiotics; HEP3 [100 mg/(kg·day)]; Bifidobacterium; HEP3 and high-dose antibiotics; HEP3 and Bifidobacterium; Bifidobacterium and high-dose antibiotics; and HEP3 and Bifidobacterium and high-dose antibiotics. All the antibiotics were given for 4 days. Then, IBD was induced with TNBS, followed by 7 days of drug treatment and induction with TNBS again, and finally followed by another 4 days of drug treatment. The model mice were prepared using a TNBS (150 mg/kg) enema according to the procedure described in the section "Anti-Inflammatory Evaluation in IBD Model Rats." After treatment, the mice were anesthetized by intraperitoneally injecting 2% sodium pentobarbital (0.25 mL/100 g). Blood was collected by the abdominal aortic method, and the serum was obtained by centrifugation (1,500 rpm, 10 min). The serum was then used to monitor the production of the cytokines granulocyte-macrophage colony-stimulating factor (GM-CSF), IFN-γ, IL-10, IL-12, IL-17α, IL-4, TNF-α, and VEGF. The colons and spleens obtained from the mice were fixed in 4% paraformaldehyde at pH 7.4 for further pathological observation, and the cecal contents were collected for 16S rRNA analysis.

Antiaging Protective Effect on d-Galactose-Induced Senescent Cells
The HIEpiC cells were induced by 40 g/L d-galactose for 72 h and co-incubated with or without different concentrations of HEP (0-200 µg/mL). The methyl thiazolyl tetrazolium (MTT) assay was conducted to assess cell viability. Senescence-associated β-galactosidase staining (operational procedure according to the kit's instructions) was used to identify senescent cells. The levels of malondialdehyde (MDA) and the activities of total superoxide dismutase (T-SOD) and glutathione peroxidase (GSH-Px) were measured. The protein concentration of the cells was determined using the Coomassie Brilliant Blue G-250 assay. The enzyme activities, MDA level, and protein content were all determined using detection kits purchased from the Nanjing Jiancheng Bioengineering Institute (Nanjing, Jiangsu, China).
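As noted in the next paragraph, these readouts are normalized to each sample's protein concentration and expressed as a percentage of untreated controls; a minimal numpy sketch of that normalization (all numbers invented):

```python
# Sketch of the normalization described in the methods: each activity
# is divided by its sample's protein concentration, then expressed as a
# percentage of the untreated control. All numbers are invented.
import numpy as np

activity  = np.array([12.0, 9.5, 11.2])   # e.g., SOD units per sample, hypothetical
protein   = np.array([0.82, 0.65, 0.71])  # mg/mL protein per sample, hypothetical
ctrl_norm = 14.5                          # control activity per mg protein, hypothetical

pct_of_control = (activity / protein) / ctrl_norm * 100.0
print(np.round(pct_of_control, 1))
```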
The procedures were performed according to the manufacturer's instructions. The levels were normalized to the protein concentration of each sample and expressed as a percentage of non-treated controls.

Antitumor Experiment
The CC531 cells were cultured in RPMI-1640 medium (containing 10% calf serum) and placed in an incubator at 37°C with 5% CO2 and saturated humidity. The culture medium was replaced every 2 days, and the adherent cells were digested using 0.05% trypsin when the cells reached 80% confluence after a 7-day adaptation period. Logarithmic-phase CC531 cells were prepared at a concentration of 1.0 × 10^7 cells/mL. Each mouse was injected subcutaneously with 0.2 mL of cell suspension. Two weeks later, the minimum and maximum diameters of the tumor body were measured. Then, 24 moderately sized tumor-bearing mice were randomly divided into treatment groups and a model group, with eight mice in each group, and another eight normal mice served as the normal group. The dose volume was 0.2 mL per mouse per day. The model and normal groups were given an equivalent volume of phosphate-buffered saline (PBS). Three weeks later, the mice were anesthetized by intraperitoneally injecting 2% sodium pentobarbital (0.25 mL/100 g), decapitated, and dissected. Blood was collected from the orbit, and the serum was obtained by centrifugation (1,500 rpm, 10 min). The serum was then used to monitor the production of the tumor-associated cytokines TNF-α, interferon (IFN)-γ, M-CSF, transforming growth factor (TGF), and VEGF. All the assays were carried out according to the procedures recommended in the ELISA kit manual. The mice were sacrificed by cervical dislocation. The tumor tissue was stripped off, and the tumor inhibition rate (TIR) was calculated. The samples were stored in liquid nitrogen for further use.

[Figure 4 | Effect of HEP3 on mice with cyclophosphamide-induced immunotoxicity. Body weight changes (A), thymus index (B), spleen index (C), neutral red engulfment (D), splenocyte proliferation (E), platelets (F), white blood cells (G), the tissue structure of the spleen (H), CD3+ (I), CD4+/CD8 (J), CD4+ (L), CD8+ (M), CD28+/CD8 (K), and naive T cells (N). CT is the control group treated with vehicle only, CTX is the cyclophosphamide-induced group (intraperitoneal injection of 80 mg/kg), HEP3-D is the group treated with 100 mg/kg HEP3 plus 80 mg/kg cyclophosphamide, and HEP3-G is the group treated with 200 mg/kg HEP3 plus 80 mg/kg cyclophosphamide. Values are means ± SDs of six independent experiments. #P < 0.05 vs the control group; *P < 0.05, **P < 0.01 vs the CTX group.]

Microbiome Analysis
Fresh fecal samples were collected before the fasting of the rats and stored at −80°C. Frozen microbial DNA isolated from mouse cecal samples, with total mass ranging from 1.2 to 20.0 ng, was stored at −20°C. The microbial 16S rRNA genes were amplified using the forward primer 5′-ACTCCTACGGGAGGCAGCA-3′ and the reverse primer 5′-GGACTACHVGGGTWTCTAAT-3′. Each amplified product was concentrated via solid-phase reversible immobilization and quantified by electrophoresis using an Agilent 2100 Bioanalyzer (Agilent, USA). After quantifying the DNA concentration using a NanoDrop spectrophotometer, each sample was diluted to a concentration of 1 × 10^9 molecules/µL in Tris-EDTA buffer and pooled. Then, 20 µL of the pooled mixture was used for sequencing with the Illumina MiSeq sequencing system according to the manufacturer's instructions.
The resulting reads were analyzed as described in a previous study (28).

Hematoxylin and Eosin (HE) Staining and Immunohistochemical Analysis
Tissues from the mice or rats were freshly excised and fixed in 10% triformol. Once the samples were fixed, dehydration, clarification, and inclusion were carried out. After the blocks were obtained, sections were cut using a microtome (Microm HM325, Germany) at a thickness of 5 µm. Sections of hydrated and deparaffinized tissues were stained with HE for histological observation. From each colon, 10 sections were analyzed by three independent observers (JM, EM, and RMC). The paraffin-embedded slices of colon tissue (4 µm) were incubated overnight at 4°C with anti-NF-κB p65, anti-Foxp3, anti-IL-10, and anti-TNF-α primary antibodies; all the antibodies were purchased from Abcam (Cambridge, UK). The slices were then washed with PBS and incubated with horseradish peroxidase-conjugated secondary antibody for 1 h at room temperature. After washing with PBS again, the slices were developed using 3,3′-diaminobenzidine as a chromogen and counterstained with hematoxylin. Images were acquired using a Leica DM2500 system (Leica Microsystems, Germany).

Statistical Analysis
All data were expressed as means ± SDs of at least three independent experiments. The significant differences between treatments were assessed with one-way analysis of variance or Student's t-test at P < 0.05 using the Statistical Package for the Social Sciences (SPSS; Abacus Concepts, CA, USA) and Prism 5 (GraphPad, CA, USA) software.

RESULTS
Anti-Inflammatory Effect on IBD Model Rats
An IBD rat model was prepared to evaluate the immune-enhancing effect of HEP (crude protein from H. erinaceus). After treatment with the TNBS enema, the rats in all groups except the control displayed loss of appetite with reduced activity, lethargy, and ruffled fur, along with bloody stools or stools containing occult blood, and weight loss. However, these symptoms disappeared from day 9 or 10. The results of the experimental treatments in terms of the Disease Activity Index are shown in Figure 1D. The rats in the HEP-treated group showed a significant improvement compared with the TNBS-treated group. Massive inflammatory cell infiltration was observed in the colonic mucosa and submucosa of TNBS-induced rats under a light microscope. Treatment with HEP did not completely resolve this inflammation, but it markedly reduced the number of inflammatory cells (Figure 1C). All sections were observed under the same conditions using light microscopy (Figure 1C). Brown particles were considered positive cells. The percentages of Foxp3- and IL-10-positive cells in rats in the model group were significantly lower than in the normal group (P < 0.05), while the percentages of TNF-α- and NF-κB p65-positive cells were significantly higher (P < 0.05). After treatment with HEP, the percentages of Foxp3- and IL-10-positive cells significantly increased compared with the model group, and the percentages of TNF-α- and NF-κB p65-positive cells significantly decreased (P < 0.05).
After treatment with 100 mg/(kg·day) of HEP, all the cytokine levels were restored to near normal; the cytokines IL-1α (Figure 2A), IL-2 (Figure 2B), IL-8 (Figure 2C), IL-10 (Figure 2D), IL-11 (Figure 2E), IL-12 (Figure 2F), IFN-γ (Figure 2H), TNF-α (Figure 2G), VEGF (Figure 2I), MIP-α (Figure 2K), and M-CSF (Figure 2J), as well as MPO activity (Figure 2L), were regulated significantly better than in the positive control group (P < 0.05), as shown in Figure 2. Cumulatively, all these results suggested that HEP had an effective anti-inflammatory effect on IBD rats.

HEP3 Is a FIP in LPS-Activated RAW 264.7 Macrophages
A membrane separation technology method was used, and a single-band protein (HEP3, Figures 1B, 2 and 3) was isolated and purified to further target the active protein in H. erinaceus. The RAW 264.7 macrophages were then used to further evaluate the immunomodulatory activities. The results showed that after incubating the RAW 264.7 macrophages with HEP3 for 12 h, followed by an additional 12 h of treatment with LPS (1 µg/mL), TNF-α production was significantly stimulated, and IL-1β and IL-6 were also significantly induced, as shown in Figure 3A. However, the overproduction of TNF-α (Figure 3A, b), IL-1β (Figure 3A, c), and IL-6 (Figure 3A, d) was considerably reduced by 0.05-0.20 mg/mL HEP3 treatment, indicating that HEP3 was able to suppress the LPS-induced production of inflammatory cytokines in the RAW 264.7 macrophages. HEP3 did not show any harmful effect at a concentration of 1.25 mg/mL. The results also revealed that HEP3 at 0.05-0.20 mg/mL effectively suppressed NO secretion (Figure 3A, e), with no significant difference compared with the control at the high concentration. HEP3 (0.05-0.20 mg/mL) significantly inhibited the LPS-induced iNOS expression (Figure 3A, f). This suggests that HEP3 probably suppressed NO secretion by downregulating the expression of iNOS in the LPS-stimulated RAW 264.7 macrophages. All the results revealed that HEP3, a 52-kDa protein extracted from H. erinaceus, is a FIP.

HEP3 Reversed d-Galactose-Induced HIEpiC Cell Senescence
As shown in Figure 3B, the number of blue-stained cells in the model group [induced by 40 mg/mL of d-galactose for 72 h (Figure 3B, a); the d-galactose-induced senescent cells are not shown] was obviously higher than in the normal group (P < 0.05); HEP3 could reduce the number of senescent cells, especially in the high-dose group, and promote cell proliferation (Figure 3B, b-e). The antioxidant protective activity was assessed by measuring the intracellular levels of MDA, GSH-Px, and SOD. After exposure of the cells to 40 mg/mL of d-galactose for 72 h, the intracellular MDA level was significantly elevated to 201% of the control value, while GSH-Px and SOD levels were substantially attenuated to 51.2% and 45.6% of the control value, suggesting that d-galactose induced marked oxidative stress. When the cells were co-incubated with HEP3 at concentrations of 0.05, 0.10, and 0.20 mg/mL, intracellular MDA production was significantly reduced (189%, 156%, and 114% of the control value, respectively; Figure 3B, h) compared with the d-galactose group. HEP3 also increased the GSH-Px (72%, 85%, and 101% of the control value, respectively; Figure 3B, i) and SOD levels (71%, 87%, and 93% of the control value, respectively; Figure 3B, g) compared with the d-galactose group.
These experimental findings indicated that HEP3 treatment could significantly reduce the d-galactose-induced oxidative stress on the HIEpiC cells.

Improvement in Clinical Parameters
The immune response of mice with high-dose cyclophosphamide-induced immunotoxicity was monitored to further understand the immunomodulatory activity of the protein extracted from H. erinaceus. As shown in Figure 4, all the immune indexes, including the thymus (Figure 4B) and spleen (Figure 4C) indexes, platelets (Figure 4F) and white blood cells (Figure 4G), neutral red engulfment (Figure 4D), and splenocyte proliferation (Figure 4E), were enhanced (P < 0.05) compared with the CTX group; the tissue structure of the spleen also improved (Figure 4H). Moreover, the CD3+ (Figure 4I), CD4+ (Figure 4L), CD8+ (Figure 4M), CD28+ (Figure 4K), and naive T cells (Figure 4N) were measured using flow cytometry (FACSCalibur, Becton Dickinson, USA). All the mentioned parameters were activated compared with the high-dose CTX group (P < 0.05), indicating that HEP3 could activate T cells. The results of the present study showed that HEP3 could reverse the high-dose cyclophosphamide-induced immunotoxicity in mice.

Recapitulating the Gut Microbiota Composition
The gut microbiota has been shown to have a significant influence on the immune system of organisms. The changes in the gut microbiota of the high-dose cyclophosphamide-induced group and normal group mice are shown in Figure 5. The Venn diagram (Figure 5A), principal component analysis (PCA; Figure 5B), and heatmap (Figure 5C) results showed that high-dose cyclophosphamide markedly changed the gut microbiota composition compared with the normal group: the relative abundances at the genus level of Oscillospira, Prevotella, Helicobacter, and Bilophila decreased, and those of Jeotgalicoccus, Staphylococcus, Acinetobacter, Aerococcus, Lactobacillus, Corynebacterium, Rikenella, Enterobacter, Proteus, Anaerotruncus, and Trabulsiella increased. These findings indicated a relationship between the gut microbiota and the immune system. After treatment with HEP3, the gut microbiota differed from that in both the high-dose cyclophosphamide-induced and the normal groups (Figure 6). The rarefaction curve (Figure 6A) showed that HEP3 could maintain the diversity of the population: the Chao1, ACE, Simpson, and Shannon indexes of the normal group (Chao1 = 3,191), the low-dose HEP3-treated group (P < 0.05), and the high-dose group (3,415.14; 3,540.04; 0.96; 7.33, respectively) were better (P < 0.01). The PCA (Figure 6B) successfully distinguished between treatment groups. The cartogram of the microbiota at the phylum level is shown in Figure 6C, revealing that the abundance of Actinobacteria (Figure 6C, a), Tenericutes (Figure 6C, f), and TM7 (Figure 6C, e) increased, whereas that of Bacteroidetes (Figure 6C, b) and Firmicutes (Figure 6C, c) decreased. In the HEP3-treated groups [100 and 200 mg/(kg·day)], the abundance of Actinobacteria, Bacteroidetes, and Proteobacteria changed significantly (P < 0.05 compared with the high-dose CTX group) and was close to normal (P > 0.05). Moreover, the Venn diagram (Figure 6D) results revealed that HEP3 could change the microbiota composition of the cecal contents. The altered diversity of the gut microbiota was also observed at the genus level, as shown in Figure 7.
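The Chao1, ACE, Simpson, and Shannon values quoted above are standard alpha-diversity estimators. A minimal numpy sketch of three of them on a toy OTU count vector (ACE omitted for brevity; counts are invented):

```python
# Minimal alpha-diversity sketch for an OTU count vector: Shannon,
# Simpson, and Chao1, as referenced above. Counts are invented.
import numpy as np

counts = np.array([120, 80, 40, 10, 5, 2, 1, 1, 1, 0])
counts = counts[counts > 0]
p = counts / counts.sum()

shannon = -np.sum(p * np.log(p))     # Shannon index (natural log)
simpson = 1.0 - np.sum(p ** 2)       # Gini-Simpson index
f1 = np.sum(counts == 1)             # singletons
f2 = np.sum(counts == 2)             # doubletons
# Chao1 = S_obs + F1^2 / (2*F2); bias-corrected form when F2 = 0
chao1 = len(counts) + f1 ** 2 / (2 * f2) if f2 > 0 else len(counts) + f1 * (f1 - 1) / 2
print(f"Shannon={shannon:.2f}  Simpson={simpson:.2f}  Chao1={chao1:.1f}")
```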
After treatment with HEP3, the diversity of Corynebacterium, Bacteroides, Enterobacter, Acinetobacter, Desulfovibrio, and Lactobacillus increased, while the abundance of some pathogenic or conditionally pathogenic bacteria decreased. All the statistical results are shown in Figure 7; the outlier data samples Z3, M3, and G4 were excluded (Figures 7A,B). A hierarchical tree was also built using the GraPhlAn software (29), as shown in Figure 6E, revealing that Firmicutes, Clostridia, Clostridiales, Lachnospiraceae, Bacilli, Lactobacillales, Lactobacillus, Bacteroidetes, Bacteroidia, and Bacteroidales were the dominant groups, which can serve as key target bacteria when evaluating the immunomodulatory activity of HEP3 in further studies. The metabolic alterations were analyzed to determine the relationship between the relative abundance of Kyoto Encyclopedia of Genes and Genomes (KEGG) metabolic pathways and immunotoxicity (Figures 7C,D); metabolism, genetic information processing, and environmental information processing all differed to varying degrees. After treatment with CTX, most metabolic activities slowed down, while after treatment with HEP3, almost all the characteristic indexes recovered to normal levels or better, indicating that HEP3 could balance the metabolic activities of the gut microbiota to maintain immunity.

HEP3 Enhanced Immunity through the Gut Microbiota
HEP3 Markedly Relieved the Tissue Damage and Inflammation Induced by TNBS Combined with Antibiotics
An IBD mouse model was prepared after treatment with broad-spectrum antibiotics to confirm the relationship between the immunomodulatory activity of HEP3 and the gut microbiota. As shown in Figure 8D, the colon tissues were seriously damaged in the TNBS plus antibiotics-treated group compared with those induced by TNBS alone, as were the splenic tissues (Figure 8E). All the cytokine levels deviated from those of the normal and TNBS groups, as the cytokines GM-CSF, IFN-γ, IL-10, IL-12, IL-17α, IL-4, TNF-α, and VEGF were secreted significantly differently (P < 0.05 or P < 0.01), as shown in Figure 8C. Meanwhile, the LPS levels (Figure 8B) were higher than those in the TNBS group. These results implied that excess antibiotics resulted in more serious damage and inflammation. After treatment with HEP3, Bifidobacterium, or HEP3 + Bifidobacterium, all the symptoms and parameters of IBD recovered to near normal, especially in the HEP3 + Bifidobacterium-treated group, as shown in Figure 9. Cumulatively, all these results suggested that HEP3 and Bifidobacterium had effective anti-inflammatory effects in IBD and might act synergistically. However, the mechanism needs further investigation.

HEP3 Significantly Promoted the Engraftment of Bifidobacterium
The bacterial composition was analyzed at the genus level to clarify the synergistic action between HEP3 and Bifidobacterium, especially the engraftment ability of Bifidobacterium. The results showed that the relative abundance of Bifidobacterium and other probiotics obviously increased (P < 0.05, Figure 10), with greater diversity and more stable community structures.
As a result, immunity was significantly enhanced: the expression of TNF-α (Figure 9A), NF-κB (Figure 9B), and IL-17 (Figure 9C) in the HEP3- and Bifidobacterium-treated group decreased compared with the model (P < 0.05) and TNBS + antibiotics (P < 0.01) groups, while the expression of Foxp3 (Figure 9D) increased (P < 0.01), indicating that HEP3 could alleviate the high-dose antibiotic-induced destruction of the intestinal microecology and play an effective prebiotic role. The tumor inhibition rate is shown in Figure 11B [TIR = (average tumor weight of the model group − average tumor weight of the experimental group)/average tumor weight of the model group × 100]. The levels of β2-MG, TNF-α, IFN-γ, M-CSF, TGF, and VEGF, related to immunity or inflammation, improved to near normal (P > 0.05), as shown in Figure 11C. All the results demonstrated that HEP3 had a strong antitumor inhibitory activity and could be used as a FIP for the treatment of tumors.

DISCUSSION
The immunomodulatory and antitumor activities of fungal proteins have been widely studied in recent years, following those of polysaccharides and terpenoids (30)(31)(32)(33)(34). H. erinaceus, an edible medicinal mushroom, is processed into a variety of products (beverages, cookies, oral liquids, and so on) sold in supermarkets and drugstores. However, none of these products is based on the proteins of H. erinaceus. It is thought that proteins extracted from the fruiting bodies of H. erinaceus have immunomodulatory and antitumor properties. A 50- to 55-kDa single-band protein was isolated in this study from the crude protein extracts using an alkaline extraction and acid precipitation method, membrane separation technology, and a pharmacodynamic evaluation method. The evaluations revealed that HEP3 had strong activity against inflammation and immune hypofunction and could be used for treating IBD, hypoimmunity, or even tumors. Immune factors play a predominant role in the pathogenesis of IBD (35,36). Cytokines, including IL-1, IL-2, IL-12, TNF-α, VEGF, and MIP-α, are proinflammatory, while IL-8, IL-10, IL-11, IFN-γ, and M-CSF are anti-inflammatory. These cytokines have many biological activities as signaling molecules, mainly regulating the immune response, participating in immune cell differentiation, development, and tissue repair, mediating inflammation, and stimulating hematopoietic and other functions. Anti-inflammatory and immunosuppressive treatments reduce and limit the damage caused by IBD (37). The evaluation tests showed that HEP had a strong anti-inflammatory activity in IBD model rats and mice, indicating that HEP is a functional food ingredient for immunoregulation. Further, a single-band protein (HEP3) was isolated using membrane separation technology, and RAW 264.7 macrophages were employed to evaluate its immunomodulatory activities. The results revealed that HEP3 modulated the TNF-α, IL-1β, and IL-6 responses. It also suppressed the LPS-induced production of inflammatory cytokines in the RAW 264.7 macrophages by suppressing NF-κB DNA-binding activity, followed by the downregulation of iNOS activity, eventually resulting in a decrease in NO production. However, the detailed molecular mechanism and characteristics need to be revealed in further studies.
Growing empirical evidence has shown that the diversity of the gut microbiota in IBD patients is reduced (38,39). Consistent observations of an altered gut microbiota composition in IBD patients are a reduction in Firmicutes and an increase in Proteobacteria, the same as in the cyclophosphamide-induced mice (Figure 5). In this study, after treatment with 80 mg/(kg·day) of cyclophosphamide for 4 days, the composition of the cecal content microbiota changed significantly compared with the normal group, as shown in Figures 5 and 6, revealing that the gut microbiome plays an important role in immune regulation and host defense. Previous studies have demonstrated that the gut microbiota has a barrier function that protects the host from intestinal pathogen attacks (40) and regulates immunity by modulating the proliferation and differentiation of T cells, stimulating intestinal antigen-presenting cells, and producing active bacterial metabolites (41,42). Previous studies have also shown that the efficacy of the anticancer immunomodulatory agent CTX relies on intestinal bacteria (43,44), that high doses often damage the intestinal mucosa and metabolism, and that probiotic bacteria such as Lactobacillus and Bifidobacterium can reduce intestinal mucosal injury and improve intestinal metabolism and the intestinal microbiota (45,46). In this study, CD3+, CD4+, CD8+, CD28+, and naive T cells were inhibited in mice with high-dose cyclophosphamide-induced immunotoxicity and were restored after treatment with HEP3 (Figure 4); the immunohistochemistry of colon tissues in the IBD model rats showed the same results, in that Foxp3, IL-10, TNF-α, and NF-κB p65 improved to near normal (Figures 1C and 9), indicating that HEP3 might improve immune function via regulating the proliferation and differentiation of T cells with the help of the gut microbiota, though many more details remain to be revealed. The IBD model mice were prepared by TNBS enema after treatment with a wide range of broad-spectrum antibiotics to explore whether the gut microbiota took part in the immunity activated by HEP3. As shown in Figures 8-10, without the microbiota, the colon tissues were easily damaged and inflamed (antibiotics-treated groups) compared with the TNBS-only group, which verified that the gut microbiota form a biological barrier on the intestinal mucosal surface. With the help of HEP3, Bifidobacterium abundance increased significantly (P < 0.05), and the colon tissue damage, inflammation, other probiotics, and community diversity and structures also improved significantly. These results confirmed that HEP3 has immunomodulatory activities and could serve as a good prebiotic. Lipopolysaccharide, mainly secreted by Bacteroides spp., B. vulgatus, and Desulfovibrio spp. (45,46), is regarded as a stimulating factor for inflammation (Figure 8B). In this study, the levels of LPS were reduced after treatment with HEP3 and Bifidobacterium, and the abundance of Bacteroides spp., B. vulgatus, and Desulfovibrio spp. decreased, revealing that HEP3 inhibited the proliferation of these bacteria and hence reduced the secretion of LPS. How HEP3 influences the proliferation of Bacteroides spp., B. vulgatus, and Desulfovibrio spp. needs further exploration. This study also found that antibiotics rapidly reduced the diversity and destroyed the stability of the whole ecological system (Figure 12A). Some special foods might help in controlling this situation (Figure 12B).
These results are consistent with previous reports that antibiotics are among the factors that most strongly influence the gut microbiota (47). HEP3 is a protein, and its digestion and absorption require many proteases and peptidases produced by bacteria. In turn, proteins and their degradation products serve as important nitrogen sources and growth factors, or even energy sources, for some anaerobic organisms (48). Therefore, HEP3 can significantly influence the diversity, structure, and metabolism of the host and its microorganisms. As shown in Figures 7 and 10, the diversity and structure recovered with HEP3 treatment, and some metabolic pathways were reactivated to near normal. Beyond the improvement in the IBD rats and mice, this study found that although the changes in the gut microbiota structure of the high-dose cyclophosphamide-induced mice were not entirely consistent, they could still improve the disease, and that a steady gut microbiota is extremely important for health. The aging of intestinal mucosal cells is one of the causes of inflammation and immunotoxicity. D-galactose, a reducing sugar, can induce senescence in rodent cells through the overproduction of reactive oxygen species and advanced glycation end products (49)(50)(51). Antiaging ability plays an important role in maintaining the immune system (52) and protecting living organisms from inflammation. In this study, HEP clearly reversed the D-galactose-induced oxidative stress (increasing the GSH-Px and SOD levels while reducing the MDA level) in the HIEpiC cells, implying that the antiaging activity is also an impetus for enhancing immunity. Mushrooms produce many bioactive proteins, including FIPs, ribosome-inactivating proteins, lectins, ribonucleases, antibacterial/antifungal proteins, laccases, and other proteins (32,53,54). Although increasing reports are available on the isolation, purification, and functions of mushroom proteins, the mechanisms of their actions (e.g., immunomodulation, antiproliferation, antivirus, antimicrobial activity, etc.) are still poorly understood. Therefore, novel technologies hold promise in this respect, and the relationship between structure and bioactivity should be considered. In summary, a single-band protein (HEP3) isolated from HEP exhibited immunomodulatory activities and could be used as a drug or functional food ingredient for immunotherapy in gastrointestinal diseases. Moreover, HEP3 could improve the immune system by regulating the composition and metabolism of the gut microbiota to activate the proliferation and differentiation of T cells and stimulate the intestinal antigen-presenting cells, and hence play a prebiotic role.

Author Contributions

CD, ZC, YJ, LJ, SJ, XY, and LG conceived and designed the experiments. CD, ZC, YJ, LJ, SJ, and LG performed the experiments. CD, ZC, and YJ analyzed the data. CD and ZC wrote the paper and edited the manuscript. All authors read and approved the final manuscript.

Funding

This work was supported by the China National Ministry of Science and Technology Plan Projects (2013BAD16B00), Guangdong Science and Technology Plan
Conducting Nanofibers and Organogels Derived from the Self-Assembly of Tetrathiafulvalene-Appended Dipeptides

We demonstrate the nonaqueous self-assembly of a low-molecular-mass organic gelator based on an electroactive p-type tetrathiafulvalene (TTF)−dipeptide bioconjugate. We show that a TTF moiety appended with a diphenylalanine amide derivative (TTF-FF-NH2) self-assembles into one-dimensional nanofibers that further lead to the formation of self-supporting organogels in chloroform and ethyl acetate. Upon doping of the gels with electron acceptors (TCNQ/iodine vapor), stable two-component charge transfer gels are produced in chloroform and ethyl acetate. These gels are characterized by various spectroscopy (UV−vis−NIR, FTIR, and CD), microscopy (AFM and TEM), rheology, and cyclic voltammetry techniques. Furthermore, conductivity measurements performed on TTF-FF-NH2 xerogel nanofiber networks formed between gold electrodes on a glass surface indicate that these nanofibers show a remarkable enhancement in conductivity upon doping with TCNQ/iodine. A previous report described hydrogen-bonded conducting nanofibers based on conjugates of p-type tetrathiafulvalene, an amino acid (instead of peptides), and a long hydrocarbon chain; that system forms gels only in aromatic liquid crystals rather than in common organic solvents.

■ INTRODUCTION

There is significant current interest in the fabrication of functional organic nanomaterials based on electronically active π-conjugated chromophores, with diverse proposed applications in next-generation optoelectronic and bioelectronic devices.1−6 One challenge in this area is the difficulty in obtaining a high degree of organization among the constituent electroactive components, ultimately affecting the overall performance of the devices.7 The supramolecular assembly of π-conjugated chromophores into one-dimensional (1D) nanostructures with a high aspect ratio provides a potential strategy to tune the photonic/electronic properties, as it facilitates long-range intermolecular charge delocalization of the π-electron cloud.8−10 In this respect, low-molecular-mass organic gelators (LMOGs) have attracted interest because of their ability to self-assemble into entangled three-dimensional (3D) fibrous network structures driven by multiple, weak intermolecular forces such as π−π stacking, van der Waals, electrostatic, metal coordination, charge transfer, and H-bonding interactions.
11,12 A variety of supramolecular strategies have been developed to demonstrate the self-assembly of organogelators13−16 based on π-electron-deficient (n-type) aromatic building blocks, including naphthalene bisimides17 and perylene bisimides,18,19 as well as π-electron-rich (p-type) aromatic building blocks, including triphenylenes,20 oligo(p-phenylenevinylenes),21 oligothiophenes,22 porphyrins,23 phthalocyanines,24 merocyanines,25 and tetrathiafulvalenes (TTFs).26−33 In particular, TTF-based LMOGs have been extensively investigated for the development of organic conducting nanomaterials27−30 and stimuli-responsive materials.26,31 It has been shown that TTF and its derivatives can form charge transfer complexes with various electron acceptors such as tetracyano-p-quinodimethane (TCNQ) and iodine, and the resultant charge transfer complexes exhibit high electrical conductivity.26,32,33 One powerful strategy for the formation of highly organized π-conjugated LMOGs focuses on bioconjugates of π-conjugated chromophores and self-assembling peptides,34−36 in which the precise orientation and packing of the π-conjugated chromophores can be optimized by the intermolecular H-bonding of peptide motifs.37−40 In recent years, various functional organogels have been reported based on bioconjugates of peptides and ferrocene,41 azobenzene,41 stilbene,42 and pyrene.41,43 Supramolecular self-assembly of bioconjugates of optoelectronically active (including p- or n-type) π-conjugated chromophores and peptides in organic solvents has also been reported.44,45 We have very recently reported the first example of a fluorescent organogel based on bola-amphiphile-type bioconjugates of n-type perylene bisimide and dipeptides.46 However, this system forms gels only in high-boiling-point polar organic solvents such as DMF and DMSO and was not tested for electroconductance.46 To the best of our knowledge, conducting nanofibers and organogels derived from the self-assembly of TTF−peptide bioconjugates have not been reported, and such nanostructures have not been used to measure conductivity directly.5,32,47,48 Herein, we investigate the supramolecular self-assembly of various bioconjugates of short peptides and TTF. For this purpose, we chose two types of functional TTF−peptide bioconjugates by simply varying the length and sequence of the peptide motifs present (Figure 1a). One is a TTF-appended aromatic dipeptide derivative 1 (TTF-FF-NH2): that is, a TTF moiety functionalized with a diphenylalanine amide derivative (FF-NH2). The FF49 sequence was selected as a suitable self-assembling peptide motif because it has been widely employed by us50 and others49,51−55 previously. The other conjugate made use of an aliphatic tripeptide derivative 2 (TTF-L3-OMe): that is, a TTF moiety functionalized with a trileucine methyl ester derivative (L3-OMe). The L3 peptide sequence was selected based on our previous results using 9-fluorenylmethoxycarbonyl (Fmoc) peptides.47,56 The self-assembly of these bioconjugates is driven by the π−π stacking of TTF moieties as well as the intermolecular H-bonding of peptide motifs (Figure 1).
The primary objectives of this work are (i) characterization of the nanostructures formed in nonaqueous media using various spectroscopy, microscopy, and rheology techniques; (ii) investigation of the effect of charge transfer interactions between TTF and TCNQ/iodine vapor on the self-assembling electroactive nanostructures; and finally (iii) measurement of the electrical conductivity of the electroactive xerogel networks formed both in the absence and presence of charge transfer, in ambient air at room temperature (Figure 1).

Figure 1. (a) Molecular structures of p-type TTF−peptide bioconjugates: TTF−diphenylalanine amide 1 (TTF-FF-NH2) and TTF−trileucine methyl ester 2 (TTF-L3-OMe) derivatives, and the TCNQ acceptor used in this study. Schematic representation of (b) the method of electrical conductivity measurements of xerogel nanofiber networks drop-casted between gold electrodes on a sodium-free glass substrate and (c) the proposed mechanism of the nonaqueous self-assembly process of TTF−peptide bioconjugates and the formation of charge transfer and mixed-valence states upon doping with TCNQ and iodine, respectively.

Synthesis. The TTF−peptide bioconjugates 1 and 2 were synthesized as reported in the Supporting Information. All reactions were carried out in oven-dried glassware and magnetically stirred. Thin layer chromatography (TLC) was performed on Merck silica gel 60 F254 plates. All compounds were visualized either by a UV light source (254 nm) or by dipping in basic permanganate solution. Column chromatography was carried out using silica gel 60 (230−400 mesh). High-resolution mass spectra (ESI-HRMS) were recorded on a Thermo Electron Exactive. 1H and 13C nuclear magnetic resonance (NMR) spectra were recorded on a Bruker AV400 spectrometer in deuterated solvents. All chemical shifts (δ) are quoted in ppm, and coupling constants (J) are given in Hz. Residual signals from the solvents were used as an internal reference.

Gelation Experiments. The TTF-FF-NH2 gelator (11.7 mg, 20 mM) was mixed in a particular organic solvent (1 mL) taken in a screw-capped sample vial. The mixture was vortexed, sonicated for a few seconds, and heated until the solid was completely dissolved. The sample vial was cooled to room temperature and left for at least a few hours under ambient conditions. Gelation was considered to have occurred when a homogeneous solid-like material was obtained that exhibited no gravitational flow. In the case of chloroform, gelation was observed 30 min after cooling to room temperature. Samples were incubated at room temperature for at least 12 h before further characterization, unless otherwise stated. In the case of the charge transfer donor−acceptor organogels, the TTF-FF-NH2 donor (2.93 mg) and the TCNQ acceptor (1.02 mg) were mixed together in 250 μL of the solvent (chloroform/ethyl acetate) taken in a screw-capped vial. Self-supporting organogels were also obtained in this case by following the above procedure. The organogels were then characterized after equilibration for a few hours under ambient conditions. Similar gelation tests were also performed on TTF-L3-OMe (12.9 mg, 20 mM) added to a particular organic solvent (1 mL). The results are summarized in Table S1 of the Supporting Information.
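As a cross-check on the gelation recipes, the quoted masses and the 20 mM target imply molecular weights of roughly 585 g/mol for TTF-FF-NH2 and 645 g/mol for TTF-L3-OMe; these values are inferred from the quantities above, not stated in the text. A minimal Python sketch of the conversion:

```python
def concentration_mM(mass_mg, mw_g_per_mol, volume_mL):
    """Molar concentration in mM from sample mass, molecular weight, and volume."""
    return (mass_mg / mw_g_per_mol) / (volume_mL / 1000.0)

# Molecular weights inferred from the recipes (11.7 mg and 12.9 mg per 1 mL at 20 mM).
print(concentration_mM(11.7, 585.0, 1.0))  # TTF-FF-NH2  -> 20.0 mM
print(concentration_mM(12.9, 645.0, 1.0))  # TTF-L3-OMe  -> 20.0 mM
```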
UV−Vis Absorption Spectroscopy. UV−vis absorption spectra were recorded on a Jasco V-660 spectrophotometer. Samples were prepared in quartz cuvettes with 1 cm path length. The absorbance of the TTF-FF-NH2 solution in chloroform was measured in the absence and presence of TCNQ. The total chromophore concentration was 5 mM in this case.

UV−Vis−NIR Absorption Spectroscopy. UV−vis−NIR absorption spectra were recorded on a Shimadzu UV-2600 spectrophotometer with an ISR-2600 Plus integrating sphere attachment equipped with two detectors (photomultiplier and InGaAs detector). The measurable wavelength range was 220−1400 nm. The gel sample in chloroform (20 mM) was prepared by depositing the hot solution of TTF-FF-NH2 inside a quartz cuvette with 1 cm path length, and the cuvette was placed horizontally. After about 30 min, the solution developed into a self-supporting organogel on the sidewall of the quartz cuvette. The absorbance of this xerogel (dried for 30 min) was measured from 220 to 1400 nm. After doping with iodine by exposing the sample cuvette to iodine vapor for 50 min in a sealed container with iodine crystals (∼10 mg), the absorbance of the gel was measured again. The appearance of a new absorption band near 850 nm accompanied by an increase in the absorbance indicated the formation of cation radical species and the corresponding mixed-valence states over time, which led to a change in color of the gel from yellow to dark brown.

Oscillatory Rheology. The mechanical properties of the organogels were investigated by dynamic frequency sweep experiments carried out on a strain-controlled rheometer (Kinexus Pro) employing parallel plates of 20 mm diameter with a 0.5 mm gap. The experiments were performed at 20 °C, and this temperature was controlled throughout the experiment using an integrated electrical heater. Additional precautions were taken to minimize solvent evaporation and to keep the sample solvated: a solvent trap was used, and the internal atmosphere was kept saturated. To ensure the measurements were made in the linear viscoelastic regime, an amplitude sweep was performed, and the results showed no variation in the elastic modulus (G′) and viscous modulus (G″) up to a strain of 1%. The dynamic moduli of the gels were measured as a function of frequency, with frequency sweeps carried out between 0.1 and 100 Hz. The measurements were repeated three times to ensure reproducibility, with the average data shown.

Fourier Transform Infrared (FTIR) Spectroscopy. FTIR spectra were recorded on a Bruker Optics Vertex 70 spectrophotometer. The spectra were taken in the region between 800 and 4000 cm−1 with a resolution of 1 cm−1 and averaged over 20 scans. Spectra were background-subtracted to correct for atmospheric interference. For all measurements, nondeuterated solvents (Sigma-Aldrich, UK) were used directly. The organogels were loaded between two CaF2 windows using a 5 μm PTFE spacer. The spectra were recorded 24 h after cooling the heated solution to room temperature under ambient conditions. For xerogel doping measurements, the instrument was used in attenuated total reflectance (ATR) mode. The gel sample in chloroform was allowed to dry on the diamond crystal, after which the spectra were taken. Samples were doped by placing I2 crystals next to the dried sample and covering the sample with a beaker for 2 min, after which the beaker and I2 were removed.

Circular Dichroism (CD). CD spectra were measured on a Jasco J600 spectropolarimeter in a 0.1 mm path length cylindrical cell, with 1 s integration, a step resolution of 1 nm, a response of 0.5 s, a bandwidth of 1 nm, and a slit width of 1 mm.
The freshly prepared samples (hot solution) were directly added to the cell using a pipet, and the spectra were recorded after equilibration for 24 h. The high tension (HT) voltage values reach their maximum below wavelengths of 250 nm due to the high extinction coefficient of these semiconductor chromophores under these conditions, and the CD spectra could not be recorded in this wavelength region.

Atomic Force Microscopy (AFM). For AFM experiments, 20 μL of sample solution (as a gel/sol) was drop-casted onto a freshly cleaved mica surface (G250-2 mica sheets, 1 in. × 1 in. × 0.006 in.; Agar Scientific Ltd., Essex, UK). Each sample was air-dried overnight in a dust-free environment prior to AFM imaging. For comparison, the sample solution was also drop-casted for AFM imaging on the gold-coated glass surfaces used for electrical conductivity measurements. The images were obtained by scanning the mica/glass surfaces in air under ambient conditions using a Veeco diINNOVA scanning probe microscope (Veeco/Bruker, Santa Barbara, CA) operated in tapping mode. The AFM measurements were obtained using sharp silicon probes (RTESPA; Veeco Instruments SAS, Dourdan, France). AFM scans were taken at 512 × 512 pixel resolution and produced topographic images of the samples in which the brightness of features increases as a function of height. Typical scanning parameters were as follows: tapping frequency 326 kHz, integral and proportional gains 0.1 and 0.3, respectively, set point 0.5−0.7 V, and scanning speed 1.0 Hz. AFM images were collected from two different samples and at random spot surface sampling (at least five areas). The images were analyzed using NanoScope Analysis software version 1.40.

Transmission Electron Microscopy (TEM). TEM images were captured using a LEO 912 energy-filtering transmission electron microscope operating at 120 kV fitted with a 14-bit/2K Proscan CCD camera. Carbon-coated copper grids (200 mesh) were glow-discharged in air for 30 s. The support film was touched onto the gel surface for 3 s and blotted down using filter paper. Negative stain (20 μL, 1% aqueous methylamine vanadate obtained from Nanovan, Nanoprobes) was applied, and the mixture was dried down again using filter paper. Each sample was allowed to dry afterward for a few minutes in a dust-free environment, and the dried specimens were then imaged using the microscope.

Cyclic Voltammetry (CV). Cyclic voltammetry (CV) measurements were performed on a CH Instruments 660A electrochemical workstation with iR compensation using anhydrous chloroform as the solvent. The electrodes were a platinum disk, a platinum wire, and a silver wire as the working, counter, and reference electrodes, respectively. All solutions contained substrates at concentrations of ca. 0.1 mM together with n-Bu4NPF6 (0.1 M) as the supporting electrolyte. The measurements are referenced against the E1/2 of the Fc/Fc+ redox couple. The oxidation of the gel was performed by carefully inserting a platinum gauze (as the working electrode) into the gel phase.

Electrical Conductivity Measurements. Plane-parallel gold contacts were thermally evaporated on sodium-free glass substrates of 1 × 1 cm2 with varying gap lengths of approximately 30, 50, 70, and 90 μm and a gap width of 1000 μm.
The substrates were prepared by drop-casting a solution of TTF-FF-NH2 in chloroform (either as a hot solution or after cooling the heated solution to room temperature) and leaving them overnight under ambient conditions for structure formation and solvent evaporation. On the following day, the samples were placed in a vacuum chamber for an hour prior to the measurements. Similarly, the substrates doped with TCNQ were prepared by drop-casting a mixture of TTF-FF-NH2 and TCNQ (1:1 ratio) in chloroform and leaving them overnight under ambient conditions for structure formation and solvent evaporation, while iodine doping was achieved by exposing the drop-casted TTF-FF-NH2 substrates (without TCNQ) to iodine vapor for about 30 min in a sealed container with iodine crystals (∼20 mg). The current−voltage (I−V) characteristics were measured in ambient air at room temperature using a Signatone probe station and an Agilent B1500A semiconductor parameter analyzer. The voltage was swept from 0 to 20 V and in some cases up to 100 V. Temperature-dependent measurements were achieved by placing the sample on a thermoelectric heater/cooler in an in-house-built stainless steel chamber connected to the Agilent B1500A semiconductor parameter analyzer. The temperature of the Peltier element was controlled by an electric circuit with thermistor feedback via LabVIEW. The spring-loaded probes made electrical contact to the gold electrodes, and the measurements were taken from 20 to ∼80 °C. The leakage current of the measurement system was subtracted from the I−V measurement of the samples. Two sets of samples were measured to obtain the dark conductivity of the gels of 1 in chloroform before and after doping with TCNQ/iodine.

■ RESULTS AND DISCUSSION

The syntheses of 1 and 2 were carried out by the condensation of TTF-substituted carboxylic acids and the corresponding free amine derivatives of diphenylalanine amide (FF-NH2) and trileucine methyl ester (L3-OMe). Details of the synthesis and characterization are reported in the Supporting Information. The analytical and spectroscopic data for 1 and 2 are fully consistent with their molecular structures. At first, the gelling abilities of both compounds 1 and 2 were tested in a number of common organic solvents. Typically, the compound (1 or 2) at 20 mM concentration was taken in an organic solvent, sequentially vortexed, sonicated, and heated until the solid was dissolved completely, and then left to stand at room temperature for a few hours under ambient conditions. It was observed that 1 formed self-supporting yellow organogels particularly in chloroform (transparent gel formed within 30 min), ethyl acetate (slightly opaque gel formed within 2−3 h), and acetone (opaque gel formed within 2−3 h), which are stable in the gel state for several months. Formation of the gel in acetone required sonication for a few minutes before cooling the hot solution of 1 to room temperature, while direct cooling of the hot solution without sonication led to precipitation and no gel was formed. Similar observations were reported elsewhere.26,57 Tetrahydrofuran, methanol, and DMSO were found to be good solvents for 1, while it was found to precipitate in acetonitrile, cyclohexane, and benzene.
These observations suggest differential self-assembly of 1 in various solvents, which is likely to depend on the balance between the intermolecular π−π stacking of the hydrophobic TTF units and the H-bonding of the diphenylalanine peptide motifs, similar to what was observed for perylene bisimide-appended peptides.46 In contrast, gelation was not observed for compound 2, as it was either completely soluble or precipitated in these common organic solvents under ambient conditions. A complete list of solvents tested for 1 and 2 is summarized in Table S1 of the Supporting Information. The contrasting behavior of 1 and 2 can be reasonably explained by the fact that the presence of both aromatic (instead of aliphatic, for 2) and terminal free amide (instead of methyl ester, for 2) groups in the peptide backbone of 1 gives rise to additional π−π stacking interactions combined with extended H-bonding, offering a vital contribution to the overall thermodynamic stability of the aggregates of 1 compared with the amorphous aggregates of 2. Therefore, only the self-assembly of 1 was studied in more detail. We first focused on the morphological investigation of 1 in different solvents, e.g., chloroform, ethyl acetate, and tetrahydrofuran, by atomic force microscopy (AFM) as well as transmission electron microscopy (TEM). Tapping-mode AFM imaging of dried chloroform and ethyl acetate gel samples of 1 on mica revealed the formation of a dense network of entangled nanofibers of up to several micrometers in length (Figure 2a,b). The difference in transparency of the gels in the various solvents (transparent, slightly opaque, and opaque gels in chloroform, ethyl acetate, and acetone, respectively) is due to the variation in solubility with respect to temperature and assembly kinetics, combined with the network properties of the nanofibers obtained. Also, TEM imaging of negatively stained (1% aqueous methylamine vanadate as the staining agent) dried gel samples of 1 in chloroform (on a carbon-coated copper grid) similarly revealed the presence of micrometer-long nanofibers (Figure S1 in the Supporting Information). In contrast, both AFM (Figure 2c) and TEM (see Figure S1 in the Supporting Information) images of a solution of 1 in tetrahydrofuran showed the presence of vesicles with variable diameters. It is noteworthy that the self-assembly of 1 into vesicles (instead of nanofibers), which lack the ability to generate a cross-linked network, further explains the nongelation behavior of 1 in tetrahydrofuran. We then turned our attention to investigating the self-assembly behavior of 1 in chloroform in further detail, owing to its transparent gel nature and rapid self-assembly compared with the ethyl acetate system. Having established the stable self-assembling nanofibers of 1 in chloroform, the ability to form charge transfer complexes within the π-stacked TTF assemblies of 1 was further investigated. This can be approached in two ways. One approach is the partial oxidation of the TTF moieties (giving rise to a mixed-valence state), which can be done either electrochemically or chemically, while the other approach is the chemical doping of the gels with electron acceptors such as TCNQ and iodine vapor. In order to investigate the first approach, the electrochemical properties of the TTF moiety of 1 in chloroform were first analyzed by cyclic voltammetry, which displayed two quasi-reversible one-electron oxidation waves, in which the first oxidation occurred at ca. E1/2 = +0.04 V (vs Fc/Fc+) and the second oxidation at ca.
E1/2 = +0.35 V (vs Fc/Fc+), corresponding to the formation of TTF radical cation (TTF•+) and TTF dication (TTF2+) species, respectively (Figure S2 in the Supporting Information). The electrochemical oxidation of the TTF moiety of gel 1 in chloroform (20 mM) was also observed by applying an oxidation potential of +0.8 V for about 30 min, after which the gel was transformed into a dark brown-green suspension (Figure 3a). As reported previously,26 such a relatively low first oxidation potential can be reached by chemical oxidizing agents such as iron(III) perchlorate (Fe(ClO4)3) and nitrosonium hexafluorophosphate (NOPF6). In this context, we examined the state of the gel of 1 in chloroform by carefully adding NOPF6 on top of the gel surface and observed that the transparent yellow gel was destroyed gradually, leading to a dark brown suspension within 2 h that eventually precipitated after about 10 h (Figure 3b). These findings prompted us to evaluate the chemical doping capability of the gels and further investigate the subsequent formation of charge transfer complexes in the gel phase of TTF donors with various electron acceptors such as TCNQ and iodine vapor. We first mixed 1 with TCNQ in chloroform in a 1:1 ratio. Upon cooling of the heated solution, a dark brown gel was observed (Figure 3c). The significant change in color of the gel from yellow (without TCNQ) to dark brown (with TCNQ) is indicative of the formation of a charge transfer complex (TTF+/TCNQ−) between the TTF of 1 and the TCNQ moieties. This charge transfer complex formation was further confirmed by UV−vis absorption spectroscopy. As shown in Figure 4a, the appearance of new absorption bands centered at 748 and 850 nm in chloroform corresponds to the formation of TCNQ radical anion species (TCNQ•−),33 and the increase in the absorbance in the 600−700 nm region corresponds to the formation of TTF radical cation species (TTF•+).58 The presence of both of these characteristic radical anion/cation species further confirmed the formation of a charge transfer complex between the TTF of 1 and TCNQ. This is an added advantage of the present system compared with previously reported organogelators composed of modified TTF derivatives (1,4-dithiol-ring-fused TTF26 and monopyrrolo-annulated TTF33), where the gel was destroyed upon the addition of TCNQ because the electrostatic repulsions between positively charged TTF units (TTF•+/TTF2+) would possibly impair the intermolecular H-bonding of the adjacent urea/amide groups (instead of dipeptides) in polar organic solvents such as 1,2-dichloroethane26 and ethanol.33 All these observations clearly indicate that stable two-component charge transfer gels can be developed by doping the gel of 1 with TCNQ, even in polar organic solvents.59 We then characterized the gel of 1 in chloroform and further investigated the effect of charge transfer interactions on the gel network after doping with 1 equiv of TCNQ. Frequency-sweep measurements by oscillatory rheology for the gel of 1 in chloroform at 20 °C indicated that the storage modulus G′ is significantly higher than the loss modulus G″, confirming the viscoelastic behavior of the gel. The stiffness was found to be about 27 kPa (Figure 4b). Interestingly, doping the gel of 1 with TCNQ slightly enhanced the stiffness, up to 34 kPa (Figure 4b), indicative of the charge transfer complexation between the TTF and TCNQ moieties.
Furthermore, the morphological investigation of the TCNQ-doped charge transfer gels in chloroform was carried out by tapping-mode AFM, which clearly disclosed the development of a dense network of entangled nanofibers of up to several micrometers in length (Figure 4c). These results strongly suggest that the charge transfer nanofibers of 1 were formed after doping with TCNQ, giving rise to the possibility of long-range charge delocalization of the π−π associated TTF and TCNQ moieties along the length of the nanostructures. The supramolecular interactions governing the self-assembly of bioconjugate 1 in chloroform before and after doping with TCNQ were further evaluated. The likely role of H-bonding in the self-assembly of 1 was investigated by FTIR spectroscopy. H-bonding contributions to the formation of secondary structures were probed by monitoring the absorption of the gel of 1 in the amide I region. The spectrum of the gel of 1 in chloroform showed a strong amide I band at 1645 cm−1, a characteristic band for the formation of aggregated intermolecular β-sheet-like secondary structures in chloroform (Figure 4d).28,45 Additionally, the presence of another strong band at 1670 cm−1 corresponds to the hydrogen bonding of the free terminal CONH2 amide groups present in 1. Furthermore, the formation of the charge transfer complex (TTF+/TCNQ−) in the TCNQ-doped gel of 1 in chloroform was confirmed by the presence of a TCNQ radical anion (TCNQ•−) peak at 2182 cm−1 in addition to the neutral TCNQ peak at 2223 cm−1, a characteristic signature of charge transfer complexation (Figure 4d). The spectra of the TCNQ-doped charge transfer gel of 1 also showed a dramatic enhancement of the peak at 1645 cm−1, which now significantly dominates the peak at 1670 cm−1, indicating the formation of a reinforced intermolecular β-sheet-like H-bonding network (Figure 4d). CD spectroscopy was further used to investigate the relative intermolecular orientations of the TTF units within the self-assembled nanostructures, which give rise to specific CD signals emerging from the helical supramolecular orientation (rather than from the inherent molecular chirality) of the self-assembling building blocks. Notably, the CD spectrum of the gel of 1 in chloroform displayed excitonic Cotton effects centered at approximately 285, 346, and 383 nm, which correspond to π−π* transition peaks originating from the helical orientation of the TTF units (Figure S4).30,60 All spectroscopy, microscopy, and rheology results consistently indicate that the presence of additional charge transfer interactions within the self-assembled nanostructures, thereby perhaps allowing a more ordered coassembly of the TTF donors of 1 and the TCNQ acceptors, gives rise to the formation of reinforced charge transfer gels, likely with better conducting pathways along the length of the nanofibers.40 Similarly, such charge transfer complexation within the stacked TTF assemblies of 1 can also be obtained by doping with iodine. UV−vis−NIR and FTIR spectroscopy were employed to confirm the charge transfer complexation in xerogel networks (surface-dried films) of 1 upon exposure to iodine vapor in a sealed container. To this end, we prepared the gel samples of 1 in chloroform on the sidewall of a quartz cuvette, dried them for 30 min, and exposed the xerogel sample to iodine vapor for 50 min.
The corresponding UV−vis−NIR absorption spectra of the doped gel sample in Figure 4e revealed the appearance of a broad absorption band at around 850 nm, which is characteristic of the formation of a complete charge transfer state between dimeric cation radical species of stacked TTF units (TTF+I−). Additionally, the significant increase in the absorbance in the higher wavelength region (>600 nm) indicates the formation of a more conducting partial charge transfer (mixed-valence) state between neutral and cation radical species of TTF moieties ((TTF)(I)n, where n < 1) within the stacked TTF assemblies of 1.32 Also, the charge transfer complex formation in the xerogel networks of 1 upon exposure to iodine vapor for 2 min was confirmed by the significant increase in the absorbance over time in the region above 1700 cm−1 in the FTIR spectra (Figure 4f), indicating the formation of a mixed-valence state within the stacked TTF assemblies of the xerogel of 1.32 The electrical conductivity of the nanofibers obtained from xerogel samples of 1 from chloroform was then measured. The conductivity measurements were performed by drop-casting the gel samples between gold electrodes deposited on a sodium-free glass surface. All measurements for undoped and doped xerogel samples of 1 showed approximately linear I−V characteristics (Figure 5a). This indicates Ohmic rather than rectifying contacts. The conductivity of the xerogel of 1 was calculated from

R = ρL/(dW),

where R is the resistance of the xerogel film, ρ is its resistivity, L is the length of the gap between the gold contacts, d is the thickness of the film, and W is the width of the gold contacts. With σ denoting the conductivity of the film and G its conductance, it follows that

σ = 1/ρ = GL/(dW).

The dark conductivity was measured as a function of temperature between 20 and 80 °C for both undoped and iodine-doped xerogel samples of 1 obtained from chloroform and fitted with the exponential dependence

σ(T) = σ0 exp[−Ea/(kBT)],

where T is the absolute temperature, kB is the Boltzmann constant, Ea is the activation energy of the conductivity, and σ0 is the conductivity prefactor. The activation energy Ea can be found as the slope of the straight line on a semilogarithmic plot of the conductivity versus the reciprocal thermal energy. Whether band theory for structurally ordered materials, tight-binding models for weakly disordered systems, or hopping models for localized charges in strongly disordered materials are employed, the activation energy represents the energy barrier for the drifting charge carriers to reach the transport level or hop to the neighboring electronic site. The conductivity of the undoped xerogel of 1 shown in Figure 5b was σ20°C = 1.9 × 10−10 S cm−1 (typical of an undoped semiconductor), and the activation energy was Ea = 0.67 eV. Interestingly, the conductivity of the xerogel of 1 doped with TCNQ was remarkably enhanced to σ20°C = 3.6 × 10−4 S cm−1, with an activation energy of Ea = 0.18 eV. In comparison, the conductivity of the xerogel of 1 after iodine doping (30 min) increased only to σ20°C = 6.4 × 10−7 S cm−1 (still markedly more conducting in the charge transfer state), with a corresponding activation energy of Ea = 0.53 eV. In particular, the remarkable enhancement in the conductivity for the TCNQ-doped xerogels is likely to depend on the strong intermolecular charge transfer interactions between the TTF and TCNQ moieties as well as the length of the stacks obtained.
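The two relations above reduce the analysis to a slope extraction. The following Python sketch applies σ = GL/(dW) and a least-squares Arrhenius fit for Ea; the gap and width match the geometry quoted in the Methods, but the film thickness, conductance, and temperature series are placeholder assumptions, not measured values from this work.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def conductivity(G_S, L_cm, d_cm, W_cm):
    """sigma = G * L / (d * W), following R = rho * L / (d * W)."""
    return G_S * L_cm / (d_cm * W_cm)

def activation_energy_eV(temps_C, sigmas):
    """Least-squares slope of ln(sigma) versus -1/(kB*T), from
    sigma(T) = sigma0 * exp(-Ea / (kB * T)); the slope equals Ea."""
    xs = [-1.0 / (K_B * (t + 273.15)) for t in temps_C]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Placeholder geometry/conductance: 50 um gap, 1000 um width, assumed 1 um film.
print(f"sigma = {conductivity(1e-12, 50e-4, 1e-4, 0.1):.1e} S/cm")

# Synthetic Arrhenius data generated with Ea = 0.67 eV; the fit recovers it.
Ea_true, s0 = 0.67, 1.0
temps = [20, 35, 50, 65, 80]
sigmas = [s0 * math.exp(-Ea_true / (K_B * (t + 273.15))) for t in temps]
print(f"Ea = {activation_energy_eV(temps, sigmas):.2f} eV")
```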
The σ20°C of four samples of the undoped xerogel of 1 with varied L and d was the same within ±17%, while Ea showed a standard deviation of ±0.07 eV. After doping with TCNQ/iodine, the σ20°C values fell within ±35−45%, while Ea exhibited a smaller standard deviation of ±0.03 eV. Given the fibrous morphology of the xerogel samples of 1 (Figures 2a and 4c), some deviations are expected. Better connectivity of the densely populated nanofiber areas would lead to higher conductivity. In addition, since the nanofibers exhibit random orientation (Figures 2a and 4c), the effect of the direction of the applied electric field should be insignificant. The increase in the conductivity upon TCNQ/iodine doping of the xerogel samples of 1 indicated the formation of a more conducting charge transfer state (Figure 4e,f), resulting in an increased conductivity prefactor. The significant reduction in the activation energy after TCNQ doping of the xerogel samples of 1 suggests tighter packing of the TTF moieties due to strong charge transfer interactions between the TTF and TCNQ moieties. Additionally, the temperature-dependent conductivity measurements between 20 and 80 °C reveal an exponential dependence of the conductivity on temperature, which is typical for semiconductor materials. Interestingly, our observations are similar to the values reported by Kitamura et al., who showed that nanofiber networks composed of a TTF moiety functionalized with L-isoleucine in aromatic liquid crystals (instead of common organic solvents) exhibit a conductivity of ca. 3 × 10−10 S cm−1 when undoped and 2 × 10−7 S cm−1 after iodine doping for 2 min, which further increased to 3 × 10−5 S cm−1 after 1 week, while doping the fibers with TCNQ increased the conductivity to 1 × 10−5 S cm−1.32 All these results consistently indicate that the two-component charge transfer nanofibers of TTF−dipeptide bioconjugates, formed upon doping with TCNQ/iodine vapor, exhibit good conductivity and clearly behave as semiconducting nanomaterials.

■ CONCLUSIONS

In summary, we successfully demonstrated the supramolecular self-assembly of p-type TTF−dipeptide bioconjugates into 1D nanofibers that further lead to the development of self-supporting gels in common organic solvents such as chloroform and ethyl acetate. It was also shown that the gels in chloroform (or ethyl acetate) can be doped with TCNQ/iodine vapor to form stable two-component charge transfer gels. The charge transfer complexation was confirmed by various spectroscopy, microscopy, and rheology techniques. Moreover, the investigation of the conductivity of the nanofiber xerogel networks formed between gold electrodes on a glass surface showed a remarkable enhancement in the conductivity from 1.9 × 10−10 S cm−1 (undoped) to 3.6 × 10−4 S cm−1 when doped with TCNQ and 6.4 × 10−7 S cm−1 when exposed to iodine vapor. This approach certainly provides potential opportunities for the bottom-up fabrication of cost-effective functional biomaterials that may find applications in interfacing biology with electronic devices such as organic and bioinspired solar cells, smart biomaterials, and biosensors.

Supporting Information. Synthesis and experimental details, additional spectroscopy and microscopy results, and copies of 1H and 13C nuclear magnetic resonance spectra.
Common Reconstructions in the Successive Refinement Problem with Receiver Side Information

We study a variant of the successive refinement problem with receiver side information where the receivers require identical reconstructions. We present general inner and outer bounds on the rate region of this variant and present a single-letter characterization of the admissible rate region for several classes of the joint distribution of the source and the side information. The characterization indicates that the side information can be fully used to reduce the communication rates via binning; however, the reconstruction functions can depend only on the Gács-Körner common randomness shared by the two receivers. Unlike existing (inner and outer) bounds on the rate region of the general successive refinement problem, the characterization of the admissible rate region derived for several settings of the variant studied requires only one auxiliary random variable. Using the derived characterization, we establish that the admissible rate region is not continuous in the underlying source distribution, even though the problem formulation does not involve zero-error or functional reconstruction constraints.

I. INTRODUCTION

THIS paper considers a common-receiver reconstructions (CRR) variant of the successive refinement problem with receiver side information where the source reconstructions at the receivers are required to be identical (almost always). An encoder is required to compress the output of a discrete memoryless source (DMS) into two messages:
• a common message that is reliably delivered to both receivers, and
• a private message that is reliably delivered to one receiver.
Each receiver has some side information jointly correlated with the source (and with the other receiver's side information) and is required to output a source reconstruction that meets a certain fidelity requirement. The CRR condition requires that these reconstructions be identical to one another. The CRR problem described above can be viewed as an abstraction of a communication scenario that could arise when conveying data (e.g., meteorological or geological survey data, or an MRI scan) over a network for storage in separate data clusters storing (past) records of the data. The records, which serve as side information, could be earlier survey data or a previous scan, depending on the specific application. The framework considered here is the source coding problem that arises when data is to be communicated over a degraded broadcast channel to two receivers that have prior side information, and the three terminals (the transmitter and the two receivers) use a separate source-channel coding paradigm [1]. The problem of characterizing the achievable rate-distortion region of the general successive refinement problem with receiver side information is open [2]-[4]. The version of the successive refinement problem where the private message is absent, known as the Heegard-Berger problem, is also open [4]-[6]. However, complete characterizations exist for specific settings of both the successive refinement and Heegard-Berger problems. For example, the rate region of the successive refinement problem is known when the side information of the receiver that receives one message is a degraded version of the side information of the other receiver [2]. Similarly, the Heegard-Berger problem has been solved when the side information is degraded [5], mismatched degraded [7], or conditionally less noisy [8].
Additionally, the HB problem has also been solved under list decoding constraints (closely related to logarithmic-loss distortion functions) [9], degraded message sets [10], and many vector Gaussian formulations [11], [12]. The common reconstruction variant of the Wyner-Ziv problem was first motivated and solved by Steinberg [13]. Common reconstructions in other problems were then considered in [14]. Benammar and Zaidi recently considered the HB problem under a three-way common reconstructions condition with degraded message sets [10], [15]. In our previous work [16], we characterized the rate region for several cases of the HB problem with the CRR requirement. In this work, we present single-letter inner and outer bounds for the rate region of the successive refinement problem with receiver side information and the CRR requirement (termed the SR-CRR problem). For several specific cases of the underlying joint distribution between the source and the side information random variables (including those in our previous work [17]), we prove that the inner and outer bounds match and therefore yield a characterization of the rate region. The characterization indicates that while the receiver side information can be fully utilized for reducing the communication rate by means of binning, only the Gács-Körner common randomness between the random variables (i.e., both auxiliary and side information) available to the two receivers can be used for generating the reconstructions. This feature is also seen in our characterization of the HB problem with the CRR requirement in [16]. The single-letter characterization of the rate region of the SR-CRR problem derived in this work is unique in the sense that it is the first rate-region formulation where the Gács-Körner common randomness explicitly appears in the single-letter constraint corresponding to the receiver source reconstructions. Unlike the best-known bounds for the successive refinement problem, the characterization of the SR-CRR rate region (when the source satisfies a certain support condition) requires only one auxiliary random variable that is decoded by both receivers. Thus, the CRR requirement obviates the need for a second auxiliary random variable to absorb the private message. The paper is organized as follows. Section II-A introduces some basic notation; Section II-B reviews the concept of Gács-Körner common randomness; and Section II-C formally defines the successive refinement problem with the common receiver reconstruction constraint. The paper's main contributions are summarized in Section III, including the single-letter characterization of the rate region and the proof of the discontinuity of the characterization with the source distribution. The reader is directed to the respective appendices for the proofs of the results contained in Section III. Finally, Section IV concludes this work.

A. Notation

The probability of an event E is denoted by P(E), and E denotes the expectation operator. Lastly, for ε > 0, the set of ε-letter-typical sequences of length n according to pmf p_X is denoted by T_ε^n(p_X), and GK_{X,Y} denotes the Gács-Körner common randomness defined next.

B. Gács-Körner Common Randomness

Given two jointly correlated random variables X and Y, the Gács-Körner common randomness between X and Y [19] is the random variable Z with the largest entropy such that H(Z|X) = H(Z|Y) = 0. This notion of common randomness will play a key role in this paper. To define it, we introduce the following terminology.
Given (X, Y) ∼ p_XY on X × Y, let G_{X,Y}[p_XY] denote the bipartite graph with left nodes X, right nodes Y, and an edge between x ∈ X and y ∈ Y if and only if p_XY(x, y) > 0. Now define an equivalence relation on Y by y1 ∼ y2 if and only if y1 and y2 are in the same connected component of G_{X,Y}[p_XY], and let the Gács-Körner mapping GK_{X,Y} in (6) be any mapping satisfying (7). Of course, there are multiple choices for the Gács-Körner mapping in (6). However, all such choices are equivalent in the sense that if GK_{X,Y}^1 and GK_{X,Y}^2 both satisfy (7), then each is a function of the other. As an illustration, let X = {a, b, c, d, e, f, g, h} and let Y = {α, β, γ, δ, ε, ζ, η}, and consider the pmf p_XY given in (9). Figure 1 illustrates the bipartite graph representation of p_XY and depicts one possible choice for the Gács-Körner mapping GK_{X,Y} satisfying the requirement in (7). Notice that G_{X,Y}[p_XY] contains three connected components, and hence GK_{X,Y} is a ternary RV taking values in {b, d, g} if the chosen mapping is the one illustrated in the figure. Note that for this pmf, there are 4 × 2 × 2 = 16 equivalent choices for the mapping. From the definition above, two properties of the Gács-Körner common randomness are evident.
• The Gács-Körner common randomness between two random variables is symmetric, i.e., each of GK_{X,Y}(Y) and GK_{Y,X}(X) determines the other. Note, however, that these two common randomness variables take values over different alphabets even though each is a function of the other, i.e., GK_{X,Y}(Y) and GK_{Y,X}(X) are random variables over Y and X, respectively.
• Since the Gács-Körner common randomness depends on the pmf p_XY only through the bipartite graph, the Gács-Körner common randomness variables computed using two pmfs p_XY and q_XY are identical if: (a) G_{X,Y}[p_XY] and G_{X,Y}[q_XY] are graph-isomorphic; and (b) the probabilities of the components of p_XY and those of q_XY are permutations of one another. To illustrate this, consider the p_XY of (9) and the pmf q_XY given in (10). The Gács-Körner common randomness between X and Y computed using either p_XY or q_XY yields a ternary random variable with the pmf [7/20, 3/10, 7/20].
In the remainder of this work, for a given pmf p_XY over X × Y, we will assume an arbitrary but fixed choice for the Gács-Körner common randomness mapping that satisfies (7), without explicitly specifying this mapping. For notational ease, we will also drop the argument and abbreviate the mapping by GK^{X,Y}. It is to be assumed that the argument is always the second random variable in the superscript, and consequently, the Gács-Körner common randomness GK^{X,Y} is a random variable over the alphabet Y of the second variable. This is quite simply only a notational bias, since the Gács-Körner common randomness is indeed symmetric. Before we proceed to formally state the problem investigated and the main results, we present two results that pertain solely to the Gács-Körner common randomness and that we will need in the main section of this work. Both results present decompositions of the Gács-Körner common randomness between two random variables when additional information about the support of the joint pmf of the two variables is known. Lemma 1: Suppose that the support set of (A1, …) satisfies … Then, … Proof: The proof can be found in Appendix A. Lemma 2: … Proof: Define a constant random variable W over a singleton alphabet, say {w}, and let q_W denote the degenerate pmf of W. Define q_WXYZ = q_W q_XYZ. Then, one can see that …, and an application of Lemma 1 yields …, where (a) follows since W is a constant RV.
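The connected-component construction above translates directly into code. The following Python sketch (ours, not from the paper) labels the components of the bipartite support graph and thereby realizes one valid choice of the mapping GK_{X,Y}.

```python
from collections import defaultdict

def gacs_korner_mapping(p_xy):
    """Label each y by the connected component of the bipartite support graph
    of p_XY; the map y -> label realizes one valid Gacs-Korner mapping."""
    x_adj, y_adj = defaultdict(set), defaultdict(set)
    for (x, y), p in p_xy.items():
        if p > 0:
            x_adj[x].add(y)
            y_adj[y].add(x)
    y_label, seen_x, seen_y, label = {}, set(), set(), 0
    for y0 in y_adj:
        if y0 in seen_y:
            continue
        stack = [("y", y0)]
        while stack:  # depth-first search over the bipartite support graph
            side, node = stack.pop()
            if side == "y" and node not in seen_y:
                seen_y.add(node)
                y_label[node] = label
                stack.extend(("x", x) for x in y_adj[node])
            elif side == "x" and node not in seen_x:
                seen_x.add(node)
                stack.extend(("y", y) for y in x_adj[node])
        label += 1
    return y_label

# Toy pmf with two components, {a,b} x {0,1} and {c} x {2}.
p = {("a", 0): 0.2, ("a", 1): 0.2, ("b", 1): 0.3, ("c", 2): 0.3}
print(gacs_korner_mapping(p))  # {0: 0, 1: 0, 2: 1}: GK is binary, masses 0.7/0.3
```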
C. Problem Setup: Successive Refinement with the CRR Constraint

Let a pmf p_SUV on S × U × V, a reconstruction alphabet Ŝ, and a bounded distortion function d : S × Ŝ → [0, D̄] be given. We assume that D̄ ∈ (0, ∞). A DMS (S, U, V) ∼ p_SUV emits an i.i.d. sequence. As illustrated in Fig. 2, the engineering problem is to encode the source S^n into a common message M_uv communicated to both receivers and a private message M_v communicated only to the receiver having the side information V^n, so that the following three conditions are satisfied:
1) A receiver with access to the side information U^n and the common message M_uv can output an estimate Ŝ^n of S^n to within a prescribed average (per-letter) distortion D.
2) A receiver with access to the side information V^n and both (common and private) messages M_uv and M_v can output an estimate S̃^n of S^n to within a prescribed average (per-letter) distortion D.
3) The estimates Ŝ^n and S̃^n (both defined on the set Ŝ^n) are identical to one another almost always.
The aim then is to characterize the rates of the common and private messages that need to be communicated to achieve the above requirements. The following definition formally defines the problem of successive refinement with receiver side information and common receiver reconstructions (CRR). The main problem of interest in this paper is to characterize the D-admissible rate region R(D). Define D by …

III. MAIN RESULTS

In this section, we present inner (achievability) and outer (converse) bounds for the D-admissible rate region R(D), and we show that these bounds are tight in a variety of nontrivial settings. We characterize these bounds through the following three rate regions, defined over three corresponding spaces of auxiliary random variable pmfs.

A. Three single-letter rate regions and their properties

Definition 3: For k ∈ N, let P*_{D,k} denote the set of all pmfs q_ASUV defined on A × S × U × V such that (A, S, U, V) ∼ q_ASUV satisfies the following conditions: … for some (A, S, U, V) ∼ q_ASUV ∈ P*_{D,k}. Definition 5: For k ∈ N, let P‡_{D,k} denote the set of all pmfs q_ABCSUV defined on A × B × C × S × U × V such that (A, B, C, S, U, V) ∼ q_ABCSUV satisfies the following constraints: … for some (A, B, C, S, U, V) ∼ q_ABCSUV ∈ P†_{D,k}. We can establish the following preliminary inclusions between the three rate regions defined above. Lemma 3: For any k ∈ N, R*_k(D) ⊆ R‡_k(D). Proof: By simply choosing B = C = A with |A| ≤ k, we cover all rate pairs that lie in R*_k(D), and hence the inclusion follows. Lemma 4: … Proof: First, note that P†_{D,k} ⊆ P‡_{D,k}. So we are done if we show that the RHS of (28b) is numerically smaller than that of (30b) for any pmf in P†_{D,k}. To do so, pick p_ABCSUV ∈ P†_{D,k} and consider the following argument: … Here, (c) follows by dropping variables in the conditioning, and finally (d) follows by reintroducing U in the second term of (31) without numerically affecting the terms. Finally, the claim follows by noting that (32) is bounded below by the RHS of (30b), thereby completing the proof of this claim. While the above two inclusions hold true for all DMSs p_SUV, we can establish stronger results if we know something more about p_SUV. Specifically, if we know that the pmf p_SUV satisfies the full-support condition of (33), then the following reverse inclusion also holds, albeit with some alphabet-size readjustment.
In other words, when the full-support condition is met, any rate pair that meets (28a) and (28b) (with auxiliary RVs A, B, and C) also meets (26a) and (26b) for a different auxiliary RV A with an appropriately larger alphabet. Lemma 5: If the support of (S, U, V) ∼ p_SUV satisfies … Proof: Lemma 5 is proved in Appendix B. Observe that each of the three rate regions defined above (Definitions 4, 6, and 8) can potentially be enlarged by merely increasing k. In other words, we are guaranteed to have … Since we do not impose restrictions on how we encode the source, we can allow the alphabet sizes of the auxiliary RVs to be finite but arbitrarily large. Hence, it makes sense to introduce the following notation for the limiting rate regions, allowing the alphabets of the auxiliary random variables to be any finite set. Definition 9: … However, from a computational point of view, it is preferable that the sequence of sets in (35) not grow indefinitely with k. The following result ensures that this, indeed, does not happen. It quantifies the bounds on the alphabet size of the auxiliary random variables beyond which there is no strict enlargement of these regions. Lemma 6: For all integers k ∈ N, we have the following: … Proof: The proof of the claim for R‡(D) is presented in detail in Appendix C. The proofs of (37a) and (37c) are almost identical to that of (37b), and the differences are highlighted in Remarks 5 and 6 in Appendix C. We conclude this section with two properties of the above three regions.

B. A single-letter characterization for R(D)

In this section, we present our main results on the single-letter characterization of the D-admissible rate region. The first two present inner and outer bounds sandwiching the D-admissible rate region using the three limiting rate regions given in Definitions 4, 6, and 8. Theorem 1: For any D ∈ [D, D̄], the regions R*(D) and R†(D) are inner bounds to the D-admissible rate region R(D) of the successive refinement problem with the CRR constraint, i.e., … Proof: The inclusion (a) follows from Lemmas 3 and 6 above, and a proof of the inclusion (b) can be found in Appendix F. Theorem 2: For any D ∈ [D, D̄], the rate region R‡(D) is an outer bound to the D-admissible rate region R(D) of the successive refinement problem with the CRR constraint, i.e., … Proof: The proof of the inclusion in (a) can be found in Appendix G. In the absence of the CRR constraint, Steinberg and Merhav's original solution to the physically degraded side information version of the successive refinement problem required three auxiliary random variables (later simplified to two by Tian and Diggavi [20]) and two reconstruction functions. Benammar and Zaidi's solution to their formulation of the successive refinement problem with a common source reconstruction required two auxiliary random variables and a reconstruction function [10], [15]. The following result, which is the main result of this work, establishes a single-letter characterization of the D-admissible rate region R(D) for several cases of side information. Unlike other characterizations, the rate region R(D) is completely described by a single auxiliary random variable and a single reconstruction function whose argument is not the side information and the auxiliary random variable, but the Gács-Körner common randomness shared by the two receivers.
Theorem 3: If the DMS (S, U, V) ~ p_SUV falls into one of the following cases, then the inner and outer bounds above coincide, yielding a single-letter characterization of R(D).

Since in each of the cases in Theorem 3 the characterization is given by precisely one auxiliary random variable, it is natural to ask when a quantize-and-bin strategy is optimal. In this strategy, the auxiliary random variable A into which the encoder encodes the source is simply the reconstruction that the receivers require. The encoder, upon identifying a suitable sequence of reconstruction symbols, simply uses a binning strategy to reduce the rate of communication prior to forwarding the bin index to the receivers. Thus, in this strategy, all three terminals (i.e., the transmitter included) are aware of the common reconstruction. To analyze cases in which the quantize-and-bin approach is optimal, we define the corresponding rate region.

Definition 10: The quantize-and-bin rate region R*_QB(D) is defined by the corresponding union, taken over all reconstruction RVs Ŝ on Ŝ satisfying the distortion constraint.

Clearly, by setting A = Ŝ in the proof of Theorem 1 detailed in Appendix F, we infer that R*_QB(D) ⊆ R*(D). Consequently, the quantize-and-bin region is always achievable. The following result establishes three conditions under which the quantize-and-bin strategy is not merely achievable but optimal.

Theorem 4: If one of the following three conditions holds, then the quantize-and-bin strategy is optimal, i.e., R*_QB(D) = R*(D).

Proof: The proof can be found in Appendix J.

C. A Binary Example

In this section, we present a binary example with S = Ŝ = {0, 1} and with the reconstruction distortion measure d being the binary Hamming distortion measure. As illustrated in Figure 3, let {p^{ρ,δ}_SUV : ρ, δ ∈ [0, 1]} denote a family of DMSs such that (a) S is an equiprobable binary source; (b) S - U - V forms a Markov chain; and (c) the channels p_{U|S} and p_{V|U} are binary symmetric channels with crossover probabilities ρ and δ, respectively. Note that for any 0 < ρ, δ < 1, the pmf p^{ρ,δ}_SUV satisfies the conditions of both Case A and Case B of Theorem 4 above. Hence, the quantize-and-bin strategy is optimal, and the optimal tradeoff between communication rates on the common and private links can be obtained without having to time-share between various operating points (corresponding to different average distortions). The following result presents an explicit characterization of the D-admissible rate region R(D)[p^{ρ,δ}_SUV] for this class of sources.

Proof: If the distortion D ≥ 1/2, then we can trivially meet the distortion requirement by setting Ŝ^n = S̃^n = 0^n. Consider the non-trivial range of distortions 0 ≤ D < 1/2. For δ ∈ (0, 1), the corresponding joint pmf p^{ρ,δ}_SUV satisfies S - U - V and S(U, V) = S(U) × S(V). Therefore, Case B of Theorem 4 applies, and we have R(D) = R*_QB(D). We first show that the right-hand side of (42) is an inner bound for R*_QB(D). Choose Ŝ as illustrated in Figure 4. Thus, from the above, we conclude that the RHS of (42) is achievable. To establish that choosing Ŝ according to Fig. 4 suffices to cover the entire rate region, we need to show that the LHS of the above equation is an outer bound for R*_QB(D). To that end, we proceed as follows: (a) follows from Case B of Theorem 4 of Sec. III-A and the definition of R*_QB(D); (b) creates an outer bound by relaxing the optimization problem; and (c) follows from using Steinberg's common-reconstruction function for the binary symmetric source (see (14) of [13]) to obtain the solutions to the two minimizations in (46). The channel p_{S|Ŝ} that simultaneously minimizes both optimization problems in (46) is precisely the choice in Fig. 4.
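The resulting region is easy to evaluate numerically. The sketch below is our own and rests on an assumption: that the two minimizations in (46) give the region the Steinberg form r_uv ≥ h(ρ ⋆ D) - h(D) and r_uv + r_v ≥ h((ρ ⋆ δ) ⋆ D) - h(D), with h the binary entropy and ⋆ binary convolution; the function names are ours.

import numpy as np

def h(p):
    """Binary entropy in bits, with h(0) = h(1) = 0."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def conv(a, b):
    """Binary convolution a * b = a(1 - b) + b(1 - a)."""
    return a * (1 - b) + b * (1 - a)

def region_corner(rho, delta, D):
    """Corner point of the conjectured region (42):
    r_uv >= h(rho * D) - h(D), r_uv + r_v >= h((rho * delta) * D) - h(D)."""
    r_uv = max(h(conv(rho, D)) - h(D), 0.0)
    r_sum = max(h(conv(conv(rho, delta), D)) - h(D), 0.0)
    return r_uv, max(r_sum - r_uv, 0.0)

for D in (0.0, 0.1, 0.3, 0.5):
    print(D, region_corner(0.2, 0.2, D))
# D = 0.5 requires no communication; the corner moves outward as D shrinks,
# matching the qualitative behaviour described in the text.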
Notice carefully that the proof of the above result uses one important aspect of Steinberg's characterization of the quantize-and-bin rate region for the point-to-point rate-distortion problem:

• When the source and the receiver side information are related by a binary symmetric channel, the optimal reverse test channel p_{S|Ŝ} is a binary symmetric channel with crossover probability D, independent of the crossover probability of the channel relating the source and the receiver side information (see (14) of [13]).

Consequently, the following general observation holds.

Remark 3: The D-admissible rate region for any source q_SUV that meets the conditions of Case A or Case B of Theorem 4, and for which q_{U|S} and q_{V|S} are binary symmetric channels with crossover probabilities ρ and ρ ⋆ δ, respectively, is given by (42).

Consider the two pmfs given in Fig. 5 in support of the remark. A simple computation will yield that, for both pmfs, (a) S is an equiprobable binary source; (b) q_{U|S} is a binary symmetric channel with crossover probability 0.2; and (c) q_{V|S} is a binary symmetric channel with crossover probability 0.32. (Indeed, with ρ = δ = 0.2, the binary convolution gives ρ ⋆ δ = 0.2(0.8) + 0.2(0.8) = 0.32.) Remark 3 assures that the quantize-and-bin strategy is optimal for both these sources and that R(D) is given by (42); the accompanying figure illustrates the region for distortions D between 0 and 0.3. The rate region for each of these values of D is bounded by two lines: one with slope -1, corresponding to the message sum rate r_uv + r_v, and one with infinite slope, corresponding to the common message rate r_uv. When D = 0.5, no communication is required, and the rate region is the entire non-negative quadrant. As D is made smaller, the D-admissible rate region shrinks, and the minimum required communication rate for the common message increases. The admissible rate region shrinks until eventually D = 0; the corresponding admissible rate region lies entirely outside the figure except for its vertex, which is located at the top-right corner of the figure.

D. On the Discontinuity of R(D)

In source coding problems, the continuity of rate regions with the underlying source distributions allows small changes in the source distribution to translate to small changes in the boundary of the rate region. Continuity is therefore essential in practice: it allows the communications system engineer to estimate the source distribution and use the estimate to choose a suitable system operating point. When a single-letter characterization of a source coding rate region is known, it is possible to establish its continuity w.r.t. the underlying source distribution using the continuity of Shannon's information measures on finite alphabets [21, Chap. 2.3]. For example, [22, Lem. 7.2] considers the continuity of the standard rate-distortion function, and [23] and [24] study the semicontinuity of various source networks. However, the rate regions of certain source-coding problems are known to be discontinuous in the source distribution, especially when they involve zero-error or functional reconstruction constraints [22, Ch. 11], [23], [25]. Despite the absence of any such reconstruction constraints, it turns out that the D-admissible rate region studied here is discontinuous in the pmf p_SUV. The discontinuity arises, rather, from the fact that we require the two reconstructions, generated at two different locations in the network, to agree (albeit with vanishing block error probability).
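Numerically, the jump responsible for this behaviour is easy to exhibit. The following is a minimal sketch of our own, using the standard characterization of the Gács-Körner common part of (U, V) as the connected-component index of the bipartite graph on the support of p_UV; all names are ours.

import numpy as np
from itertools import product

def gk_entropy(p_uv):
    """Entropy (bits) of the Gacs-Korner common part of (U, V): the index
    of the connected component of the bipartite support graph of p_uv."""
    nu, nv = p_uv.shape
    parent = list(range(nu + nv))        # union-find over the U- and V-vertices
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in product(range(nu), range(nv)):
        if p_uv[u, v] > 0:
            parent[find(u)] = find(nu + v)
    mass = {}                            # probability mass of each component
    for u, v in product(range(nu), range(nv)):
        if p_uv[u, v] > 0:
            mass[find(u)] = mass.get(find(u), 0.0) + p_uv[u, v]
    probs = np.array(list(mass.values()))
    return max(float(-(probs * np.log2(probs)).sum()), 0.0)

def p_uv_bsc(delta):
    """Joint pmf of (U, V): U equiprobable (true for any rho, since S is
    equiprobable) and V obtained from U through a BSC with crossover delta."""
    bsc = np.array([[1 - delta, delta], [delta, 1 - delta]])
    return 0.5 * bsc

for delta in (0.0, 1e-6, 0.1):
    print(delta, gk_entropy(p_uv_bsc(delta)))
# -> 1.0 bit at delta = 0, but 0.0 for every delta > 0: the entropy of the
#    Gacs-Korner common randomness jumps at delta = 0.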
Intuitively, in each of the cases where a single-letter characterization of the D-admissible rate region is known, the discontinuity can be attributed to the Gács-Körner common randomness in the argument of the single-letter reconstruction function; the Gács-Körner common randomness, and more precisely its entropy, is easily seen to be discontinuous in the pmf p_SUV. We illustrate this phenomenon by a simple example. Recall the D-admissible rate region R(D)[p^{ρ,δ}_SUV] of the binary example given above. We now establish the discontinuity of this problem by showing that the region does not behave continuously as δ → 0. Suppose that (S, U, V) ~ p^{ρ,0}_SUV and 1 > D > ρ > 0. Since δ = 0, we have U = V, and consequently neither Case A nor Case B of Theorem 4 is applicable in identifying the D-admissible rate region. However, since D > ρ and U = V, we can obviously achieve the distortion D at (r_uv, r_v) = (0, 0) by simply choosing Ŝ^n = S̃^n = U^n = V^n, since this yields an average distortion of ρ < D. From (51) and (52), we see that the D-admissible rate region of p^{ρ,δ}_SUV does not approach that of p^{ρ,0}_SUV as δ → 0.

Remark 4: The above argument can also be used to show the discontinuity at δ = 1.

IV. CONCLUSIONS

In this work, we looked at a variant of the two-receiver successive refinement problem with the common receiver reconstructions requirement. We presented general inner and outer bounds for this variant. The outer bound is unique in the sense that it is the first information-theoretic single-letter characterization where the source reconstruction at the receivers is explicitly achieved via a function of the Gács-Körner common randomness between the random variables (both auxiliary and side information) available to the two receivers. Using these bounds, we derived a single-letter characterization of the admissible rate region and the optimal coding strategy for several settings of the joint distribution between the source and the receiver side-information variables. Using this characterization, we then established the discontinuity of the admissible rate region with respect to the underlying source distribution, even though the problem formulation does not involve zero-error or functional reconstruction constraints.

Then, there must exist a positive integer k ∈ ℕ and sets such that the above equations hold if and only if p_{A₁A₂}(a₁, a₂) > 0 and p_{UV}(u^(k), v^(k)) > 0.

APPENDIX B
PROOF OF LEMMA 5

Let (r_uv, r_v) ∈ R‡_k(D). Then, there must exist a pmf p_ABCSUV ∈ P‡_{D,k} and a function f meeting the corresponding constraints. For this choice of p_ABCSUV, let us define η*. Since (33) holds, we are guaranteed that η* > 0. Then, for any (a, b, c) ∈ A × B × C and (u, v) ∈ U × V, the stated bounds on r_v + r_uv follow. Hence, it follows that (r_uv, r_v) ∈ R*_{k²}(D).

APPENDIX C
PROOF OF LEMMA 6

We begin by bounding the alphabet size of the auxiliary RV A, and then present bounds for the alphabet sizes of B and C, respectively. To bound the size of A, we need to preserve the following quantities. We begin by fixing š ∈ S. Define the following |S| + 6 continuous, real-valued functions on P(B × C × S), the set of all pmfs on B × C × S, and let π ∈ P(B × C × S). Note that preserving condition (25) is not straightforward because of the presence of the Gács-Körner common randomness function. Consequently, this condition has to be split into two parts, which have to be combined together non-trivially. However, this approach requires the application of the Support Lemma [26, p. 631] infinitely many times, along with a suitable limiting argument.
To preserve (25), define for each m ∈ ℕ a continuous function ψ_{7,m} : P(B × C × S) → ℝ. Note that ψ_{7,m} links the distortion requirement with the probability that the reconstructions differ. Pick any pmf p_ABCSUV ∈ P‡_{D,k}. Then, by definition, there exist functions f : A × C × V → Ŝ and g : A × B × U → Ŝ meeting the required constraints. Consequently, combining the above with (68a)-(68g), we obtain the stated conclusion. After possibly renaming elements, we may assume that the alphabet of each of the auxiliary RVs A_m is the same, say A*. Note that the optimal reconstruction functions f_m, g_m (see (69)) for the choice q_{A_m BCSUV} meeting (73h) satisfy the corresponding constraints. Since the number of functions from the set A* × B × U (or A* × C × V) to Ŝ is finite, the sequence {(f_m, g_m)}_{m ∈ ℕ} must contain infinitely many repetitions of at least one pair of reconstruction functions, say (f_{ω₀}, g_{ω₀}). Let {q_{A_{m_j} BCSUV}}_{j ∈ ℕ} be the subsequence of {q_{A_m BCSUV}}_{m ∈ ℕ} with (f_{m_j}, g_{m_j}) = (f_{ω₀}, g_{ω₀}). By the Bolzano-Weierstrass theorem [27], we can find a subsequence {q_{A_{m_{j_l}} SUV}}_{l ∈ ℕ} ⊆ {q_{A_{m_j} SUV}}_{j ∈ ℕ} that converges to, say, q*_{A*BCSUV}. Since {ψ_s}_{s ∈ S\{š}}, ψ₁, ψ₂, ψ₃, ψ₄, ψ₅, ψ₆ and {ψ_{7,m}}_{m ∈ ℕ} are continuous in their arguments, by taking appropriate limits of (73a)-(73e), we see that the constraints are met in the limit. Note that the above equality holds because (f_{ω₀}, g_{ω₀}) = (f_{m_{j_l}}, g_{m_{j_l}}) for all l ∈ ℕ. A similar argument applies to the remaining functionals. Thus, we may, without loss of generality, restrict the size of the alphabet of A to |S| + 6. It then follows from Definitions 5 and 6 of Sec. III-A that considering random variables A and B with |A| > |S| + 6 and |B| > |Ŝ|^{|U|} does not enlarge the region, i.e., we can identify a different A* and B* using the above argument that operate at the same rate pair. We are now left only with bounding the size of C, for which we can repeat steps similar to those for A. This time, we preserve: (a) the distribution p_ABS; (b) three information functionals, H(S|A, C, V), H(S|A, C, U, V) and H(S|A, B, C, U, V); and (c) the reconstruction constraint. Proceeding as in the case of the random variable A, we conclude that |C| ≤ |S|(|S| + 6)|Ŝ|^{|U|} + 3 suffices.

Remark 5: Since R*_k(D) has only one auxiliary random variable, the proof of (37a) of Lemma 6 follows closely the portion of the above proof corresponding to the reduction of the size of A alone. For the purposes of Lemma 6, we only need to preserve two information functionals, namely H(S|A, U) and H(S|A, V). Hence, the proof will only use {ψ_s : s ∈ S \ {š}}, ψ₁, ψ₂, and ψ_{7,m} to conclude that |A| ≤ |S| + 2 suffices.

Remark 6: For the proof of (37c) of Lemma 6, we only need to preserve four information functionals, namely H(S|A, U), H(S|A, V), H(S|A, B, U) and H(S|A, C, V). Hence, the proof will only use {ψ_s : s ∈ S \ {š}}, ψ₁, ψ₂, ψ₃, ψ₄ and ψ_{7,m} to conclude that |A| ≤ |S| + 4 suffices. The bound for B is the same as in the above proof. The final argument for bounding C requires the preservation of (a) the distribution p_ABS; (b) the information functional H(S|A, C, V); and (c) the reconstruction constraint.

APPENDIX D
PROOF OF LEMMA 7

Since R*(D) ⊆ ℝ², we only need to show that the line segment between any two points in R*(D) lies completely within R*(D). To do so, pick (r′_uv, r′_v), (r″_uv, r″_v) ∈ R*(D) and λ ∈ [0, 1].
Then, by definition, we can find pmfs q_{A′SUV}, q_{A″SUV} ∈ P*_{D,|S|+2} such that

r′_uv ≥ I(S; A′|U), (87a)
r′_v + r′_uv ≥ I(S; A′|V), (87b)
r″_uv ≥ I(S; A″|U), (87c)
r″_v + r″_uv ≥ I(S; A″|V). (87d)

Further, there must also exist functions f′ : A′ × U → Ŝ, g′ : A′ × V → Ŝ, f″ : A″ × U → Ŝ, and g″ : A″ × V → Ŝ meeting the distortion and CRR constraints. Without loss of generality, we may assume that the alphabets of A′ and A″ are disjoint, i.e., A′ ∩ A″ = ∅. Let us define Ã ≜ A′ ∪ A″. Now, define a joint pmf q*_{TÃSUV} over {0, 1} × Ã × S × U × V as follows: 1. Let T ∈ {0, 1} be an RV such that P[T = 0] = λ. Then, by definition, T is a function of Ã (since T = 0 if and only if Ã ∈ A′), and T is independent of (S, U, V). Let q*_{Ã,S,U,V} be the marginal of (Ã, S, U, V) obtained from q*_{T,Ã,S,U,V}. Then, the following hold: (i) q*_{Ã,S,U,V} ∈ P*_{D,2|S|+4}. This follows by defining f̃ : Ã × U → Ŝ and g̃ : Ã × V → Ŝ from (f′, g′) and (f″, g″), and verifying (97b). Hence, it follows that any point that is a linear combination of (r′_uv, r′_v), (r″_uv, r″_v) ∈ R*(D) is a point in R*_{2|S|+6}(D), which, by Lemma 6, is identical to R*(D). Hence, the claim of convexity follows.

APPENDIX E
PROOF OF LEMMA 8

From Definition 4, we can find q_{A_i SUV} ∈ P*_{D_i,k} and functions f_i : A_i × U → Ŝ and g_i : A_i × V → Ŝ meeting the corresponding constraints. Perhaps after a round of renaming, we may assume that the alphabets of the A_i's are identical, i.e., A_i = A. Since there are only a finite number of functions from A × U or A × V to Ŝ, the sequence (f₁, g₁), (f₂, g₂), ... must contain infinitely many copies of some pair of functions, say (f_ω, g_ω). Let {i_j}_{j ∈ ℕ} be a subsequence of indices such that (f_{i_j}, g_{i_j}) = (f_ω, g_ω) for all j ∈ ℕ. Consider the sequence of pmfs {q_{A_{i_j} SUV}}_{j ∈ ℕ}. Since A × S × U × V is a finite set, by the Bolzano-Weierstrass theorem [27], a subsequence of pmfs must be convergent. Let {i_{j_k}}_{k ∈ ℕ} be one such subsequence, and let q_{A_{i_{j_k}} SUV} → q̊_{ASUV}. By the continuity of the information functional [21], the rate constraints carry over in the limit. Further, the distortion and reconstruction constraints carry over as well. Note that in the above two arguments, we have used the fact that (f_{i_{j_k}}, g_{i_{j_k}}) = (f_ω, g_ω) for all k ∈ ℕ. Combining (101), (102), and (103), we see that q̊_{ASUV} ∈ P*_{D,k}. Lastly, using the continuity of the information functional [21], we see that (r_uv, r_v) ∈ R*_k(D).

APPENDIX F
PROOF OF THEOREM 1

We build a codebook using the marginals p_A, p_{B|A} and p_{C|A} obtained from the chosen joint pmf. The codebooks for the three auxiliary RVs are constructed as follows, using the structure illustrated in Fig. 7.

• For each triple (i, i′, i″) ∈ ⟦1, 2^{nR_a}⟧ × ⟦1, 2^{nR′_a}⟧ × ⟦1, 2^{nR″_a}⟧, generate a random codeword A^n(i, i′, i″) ~ p^n_A independently of all other codewords. Note that, by the choice of rates, the total rate of the A-codebook is given by (106).
• For each triple (i, i′, i″) ∈ ⟦1, 2^{nR_a}⟧ × ⟦1, 2^{nR′_a}⟧ × ⟦1, 2^{nR″_a}⟧ and each pair (ℓ, ℓ′) ∈ ⟦1, 2^{nR_b}⟧ × ⟦1, 2^{nR′_b}⟧, generate a random codeword B^n(i, i′, i″, ℓ, ℓ′) ~ ∏_{k=1}^{n} p_{B|A_k(i,i′,i″)} independently of all other codewords. Note that, by the choice of rates, the total rate of the B-codebook is given by (107).
• Similarly, for each triple (i, i′, i″) ∈ ⟦1, 2^{nR_a}⟧ × ⟦1, 2^{nR′_a}⟧ × ⟦1, 2^{nR″_a}⟧ and each pair (l, l′) ∈ ⟦1, 2^{nR_c}⟧ × ⟦1, 2^{nR′_c}⟧, generate a random codeword C^n(i, i′, i″, l, l′) using ∏_{k=1}^{n} p_{C|A_k(i,i′,i″)} independently of all other codewords. Note that, by the choice of rates, the total rate of the C-codebook is given by (108).
Upon receiving a realization s^n of S^n, the encoder does the following. It searches for a triple (i, i′, i″), a pair (ℓ, ℓ′), and a pair (l, l′) such that (A^n(i, i′, i″), B^n(i, i′, i″, ℓ, ℓ′), C^n(i, i′, i″, l, l′), s^n) ∈ T^n_ε[p_ABCS]. Using (106)-(108) and invoking the lossy source coding theorem [26, p. 57] and the Covering Lemma [26, p. 62], we see that the search succeeds with high probability. 3. The encoder conveys (i, ℓ) to both receivers, and (i′, l) to the receiver with side information V^n. Note that this strategy corresponds to the rates in (110). Further, for any (i, i′, i″, ℓ, ℓ′, l, l′), by the Markov Lemma [26, p. 296], we are guaranteed that (111) holds. Thus, for sufficiently large n, the probability that the encoder fails to find a tuple (i, i′, i″, ℓ, ℓ′, l, l′) such that the corresponding codewords and the source and side-information realizations are jointly ε-typical can be made arbitrarily small. Moreover, as a consequence of the Packing Lemma [26, p. 46], we are also guaranteed that (112) holds. From (109), (111), and (112), we can choose n large enough such that, when averaging over all realizations of the random codebooks, the probability with which all of the following events occur is at least 1 - ε: (a) the encoder is able to identify indices (i, i′, i″, l, l′, ℓ, ℓ′) such that the corresponding codewords and the realization of S^n are jointly ε-letter typical; (b) the identified codewords and the realizations of (S^n, U^n, V^n) are jointly 2ε-letter typical; (c) the receiver with side information V^n identifies the indices (i″, l′) determined by the encoder; and (d) the receiver with side information U^n identifies the indices i′, i″ determined by the encoder. Then, there must exist a realization of the three codebooks {a^n(i, i′, i″)}, {b^n(i, i′, i″, ℓ, ℓ′)}, and {c^n(i, i′, i″, l, l′)} such that the above four events occur simultaneously with probability at least 1 - ε. For this realization of the codebooks, with probability at least 1 - ε, the realizations of the source and side information (s^n, u^n, v^n) and the selected codewords will be jointly 2ε-letter typical, i.e.,

(a^n(i, i′, i″), b^n(i, i′, i″, ℓ, ℓ′), c^n(i, i′, i″, l, l′), s^n, u^n, v^n) ∈ T^n_{2ε}[p_ABCSUV]. (113)

Note that letter typicality ensures that the support of the empirical distribution q̄_ABCSUV induced by the tuple (a^n(i, i′, i″), b^n(i, i′, i″, ℓ, ℓ′), c^n(i, i′, i″, l, l′), s^n, u^n, v^n) matches that of q_ABCSUV. Thus, it follows that whenever (113) holds, the per-letter reconstructions agree for each k = 1, ..., n. Lastly, since (113) holds with probability at least 1 - ε, it is also true that the reconstructions at either receiver offer an expected distortion of no more than D(1 + 2ε) with probability at least 1 - ε. The proof is complete by noting that ε can be chosen arbitrarily small.

APPENDIX G
PROOF OF THEOREM 2

Here, in (a), we let A_i := (M_uv, U^{i-1}) and B_i := (U^n_{i+1}, S^{i-1}); in (b), we introduce the time-sharing random variable Q that is uniform over {1, ..., n}; in (c), we use the fact that Q is independent of (S_Q, U_Q); and in (d), we denote Ā := (A_Q, Q) and B̄ := (B_Q, Q). Similarly, in (a), we denote C_i := (M_v, V^{n\i}) and use the chain rule; in (b), we make use of the uniform time-sharing random variable Q; and in (c), we use the independence of Q and (S_Q, U_Q, V_Q) and define Ā := (A_Q, Q), B̄ := (B_Q, Q), and C̄ := (C_Q, Q). Note that (120) holds for each i ∈ {1, ..., n} and each (a, b, c, ā, b̄, u); in other words, S̃ satisfies (121). Using the above notation, we can then verify (122). Now, we have to establish the existence of auxiliary RVs such that the RHS of (122) is, in fact, zero.
To do so, we make use of the two pruning theorems in Appendix H. The first step is to allow only realizations of the auxiliary random variables for which the reconstructions agree most of the time, and to prune out the rest. To this end, define the set E. By a simple application of Markov's inequality, one can argue that (123) holds. Similarly, using Property (e) of Theorem 5 of Appendix H, we see that (124) holds. From (123), (124) and Property (a) of Theorem 5 of Appendix H, we obtain the stated bound. Since S(Å, B̊, C̊) = E, for any (ā, b̄, c̄) ∈ E, we can invoke Property (b) of Theorem 5 of Appendix H. Now, let us proceed by pruning p_{ÅB̊C̊S} further using Pruning Method B, defined in (159) of Appendix H. Invoking Property (a) of Theorem 6 of Appendix H, it follows that (138) holds.

APPENDIX H
PRUNING THEOREMS

We now present two pruning theorems that concern any five random variables (A₁, A₂, S, B₁, B₂) where (A₁, A₂) - S - (B₁, B₂) forms a Markov chain. The theorems will be applied to the CRR successive refinement problem, where S will be identified as the source, A₁ and A₂ will be associated with auxiliary random variables, and B₁ and B₂ are the side-information random variables available at the two receivers. The pruning theorems help us understand how small alterations to the marginal pmf of (A₁, A₂, S) change:
• the joint pmf of (A₁, A₂, S, B₁, B₂) with respect to variational distance, and
• information functionals such as I(S; A₁|B₁), I(S; A₂|B₂) and I(S; A₁A₂|B₁B₂).

Figure 8 illustrates the two kinds of pruning. In Pruning Method A, we have (A₁, A₂, S) ~ p_{A₁A₂S} as shown in Figure 8(a). We select an appropriate threshold 0 < δ < 1 and consider any subset E ⊆ A₁ × A₂ with P[(A₁, A₂) ∈ E] ≥ 1 - δ. We then take the joint pmf p_{A₁A₂S} and construct a new joint pmf p̃_{Ã₁Ã₂S} whose support set satisfies S(Ã₁, Ã₂, S) ⊆ E × S. In other words, the edges belonging to E^c × S in the bipartite graph of p_{A₁A₂S}, indicated by red dashed lines in Figure 8(a), are removed (and the remaining edges are scaled appropriately) to define p̃_{Ã₁Ã₂S}. In Pruning Method B, illustrated in Figure 8(b), we first select an appropriate threshold 0 < δ < 1 and consider any subset of edges E ⊆ A₁ × A₂ × S. Edges that are not in E (shown by red dashed lines in Fig. 8(b)) are removed, and the probability mass of the remaining edges is scaled appropriately to construct a new p̃_{Ã₁Ã₂S} such that S(Ã₁, Ã₂, S) ⊆ E. The precise details of the two kinds of pruning, and the required results pertaining to them, are elaborated below. Then, by Markov's inequality, the mass of the pruned set is suitably small. Further, for every (a₁, b₁) ∈ D, Lemma 2.5 of [22] guarantees the required bound. Finally, Assertion (e): the proof of (e) is identical to that of (d), the only difference being the commencement of the argument; the remaining steps are identical to those in (d), with the exception that all four variables (either (Ã₁, Ã₂, B₁, B₂) or (A₁, A₂, B₁, B₂)) appear in the conditioning.

Proof: The proof follows the same steps as that of Theorem 5; the difference lies in the evaluation of the probabilities in the normalization term of (159). Assertion (a): since p(s|a₁, a₂) ≤ δ for (a₁, a₂, s) ∈ E, the required bound follows, and the analogous bound holds for any s ∈ S(S). Assertion (d): an application of Pinsker's inequality followed by Jensen's inequality yields the claim.
The remaining steps are identical to those in (d), with the exception that all four variables, i.e., either (Ã₁, Ã₂, B₁, B₂) or (A₁, A₂, B₁, B₂), appear in the conditioning.

APPENDIX I
PROOF OF THEOREM 3

In each case, the overall approach is to show that a rate pair (r_uv, r_v) that is included in the outer-bound rate region R‡(D) is also included in R(D). To establish that, in each case, we pick a rate pair (r_uv, r_v) ∈ R‡_k(D) with k = |S|(|S| + 6)|Ŝ|^{|U|} + 4, and establish explicitly that the rate pair is also an element of R*_k(D) for some k. The proof is then complete by invoking the achievability of R*_k(D) proved in Theorem 1 of Sec. III-B.

Case A: The proof follows from the following series of arguments. Then, from (187), we see that E[d(S, f(GK_{ÃV,ÃU}))] ≤ D.

Case D: In this case, since H(S|U) = 0, the receiver with side information U^n does not require the encoder to communicate any message. Hence, we see that if (r_uv, r_v) ∈ R(D), then any (r′_uv, r′_v) ∈ R(D) provided r′_uv + r′_v ≥ r_uv + r_v, i.e., only the sum-rate constraint is relevant. Then, by Lemma 10 of Appendix K, for each (a, c, u) ∈ S(A, C, U), there must exist an η(a, c, u) such that the stated condition holds.
Packet method of data transmission in radar and radio navigation upper-air sounding systems

The use of the packet method for transmitting coordinate and telemetry information to the ground station in radiosondes of upper-air sounding systems can significantly reduce the influence of interference arising in the radio channel, increase the reliability of reception of the transmitted information, and reduce the cost of measuring the meteorological parameters of the atmosphere on the scale of the aerological network of the Russian Federation.

Introduction

Domestic radar-type sounding systems (SRs) use the direction-and-range method to calculate the coordinates, speed and direction of movement of the radiosonde in the free atmosphere. The angular coordinates, azimuth and elevation, are measured by the method of conical scanning. A distinctive feature of domestic SRs is the measurement of slant range (R_n) by a radio-pulse method to an aerological radiosonde (ARS) of the MRZ-3 type, equipped with a super-regenerative transceiver (SPP) providing an active response signal at a distance of at least 250 km [1-4]. Experience in developing the navigation SR "Polyus" showed the advantages of transmitting telemetry information (TI) about the meteorological parameters of the atmosphere (MPA) from the ARS to the base station in the form of digital packets [5-7]. To improve the tactical and technical characteristics of the radar SR and to ensure maximum compatibility of its MPA sounding data with the data received by the radio navigation SR "Polyus", it was necessary to modernize the radar SR by introducing the packet method of transmitting TI from the ARS to the base radar [8,9]. To this end, a microprocessor radiosonde MRZ-3MK operating in packet mode for transmitting telemetric information was developed and tested.

Problem statement

The traditional method of transmitting TI from the ARS is to transmit information about the MPA during one cycle of duration Tcyc = 20 s (Fig. 1) [1-4]. During this time interval, the ARS measuring generator sequentially generates, for 5 s each, the frequency of the reference (calibration) channel with period Trc = 0.7 ms and the frequencies of the measuring channels of temperature Tt and humidity Tu. For example, with the temperature sensor connected, the period of the measuring generator varies within Tt = 0.8-60.0 ms over the entire range of ambient temperatures from minus 90 °C to plus 50 °C. The TI is coded by pulse-frequency manipulation (PFM) of the repetition rate of the radio pulses (800 kHz) emitted by the radiosonde's SPP. Reception of the ARS TI, and the processing and formation of meteorological information, is carried out in the ground-based radar by calculating the meteorological parameters from the values of the received periods Top, Tt and Tu, taking into account the known calibration coefficients of the ARS transmitter and sensors. With this transmission method, the following problems arise when receiving and processing telemetric information: 1. The long interval of time needed to obtain meteorological information (Tcyc = 20 s) creates the problem of its stable reception during signal fading caused by ARS swaying, since the signal fading interval at the output of the radar receiver is usually 1-3 s.
The appearance of even one signal fade during the information transmission cycle makes it impossible to calculate the MPA and obtain the measurement result. 2. Another problem occurs when the frequency of the telemetric signals produced by spurious amplitude modulation of the ARS transmitter's radiation falls within the channel bandwidth of the radar's angular automatics, which disrupts the stability of automatic tracking of the ARS in angular coordinates. 3. The third problem is related to limiting the maximum value of the period of the telemetric frequencies of the ARS measuring generator to Ti max = 60.0 ms. This limitation does not allow the use of modern temperature sensors with a small time constant and high resistance (about 2-4 MΩ) at low temperatures of the order of minus 80-90 °C. 4. Additional difficulties arise when receiving a pulsed telemetric signal in the radar receiver when the period of the video pulses changes within the significant range Ti = 0.7-60.0 ms, since their duty cycle and the constant component of the signal change by tens of times, which complicates the operation of the threshold device of the pulse-period meter. Therefore, to improve the tactical and technical characteristics of a radar SR and to ensure maximum compatibility of its MPA sounding data with the data received by the radio navigation SR "Polyus", it is necessary to modernize the radar SR by introducing the packet method of transmitting telemetric information (TI) from the ARS to the base radar.

Theory of optimal reception of packet telemetry information in radar SRs

In order to substantiate the methods of optimal reception of the digital ARS signal in radar SRs and to elaborate technical recommendations for the development of on-board and ground equipment, some explanations are necessary regarding the signal transformations in the linear and digital parts of the radar receiver [10-15]. From the antenna output to the input of the low-noise amplifier (LNA), the radar receives the realization of a random signal y_in(t): the mixture of the transmitted binary signals u_i,in(t), a sequence of SPP radio pulses at the 1680 MHz carrier frequency, and random interference n_in(t):

y_in(t) = u_i,in(t) + n_in(t). (1)

Information about the MPA is transmitted in the form of a digital code by pulse-frequency manipulation (PFM) of the SPP subcarrier frequency. The central value of the subcarrier frequency (the radio-pulse repetition frequency) of the SPP is 800 kHz. The deviation of the subcarrier frequency is within Δf = ±15 kHz. Then, by means of a mixer and a heterodyne, the spectrum of the radio pulses is transferred to the intermediate frequency of the radar receiver. After the amplitude detector, the video pulses, with frequency fs = 800 ± 15 kHz and duration Δτ = 0.25 µs, enter the narrow-band intermediate-frequency amplifier, in which the first harmonic of the video-pulse frequency is extracted. Next, a harmonic signal containing the telemetry information in the form of PFM is fed to the input of the frequency detector (FD). From the FD output, a mixture of the digital binary signal and interference, y(t), is fed to the correlator and the threshold device implemented on the microcontroller (MCU) (Fig. 2). A generalized block diagram of the telemetry channel of the radar receiver is shown in Fig. 2.
In the process of optimal processing in the linear part of the receiver, the spectrum of the analog signal y_in(t) is correspondingly converted and transferred to the intermediate frequency, and then fed to the frequency detector (FD). As a result of these transformations, at the FD output a signal y(t) is formed which is the sum of the binary digital signals u_i(t) and random interference n(t). (2)

At the FD output of the radar receiver, a multipolar digital binary signal of the MPA code sequence is formed. Next, this signal is converted into y_k(t), a digital sequence of unipolar binary video pulses of the MPA code sequence, and enters the input of the correlator (Fig. 3). In accordance with the designations adopted in Fig. 3, the main parameters of the signals have the following physical meaning: u_k¹(t) is the signal of symbol 1; u_k⁰(t) is the signal of symbol 0; τ_k¹ is the duration of the symbol-1 signal; τ_k⁰ is the duration of the symbol-0 signal; and τ_ti = τ_k¹ + τ_k⁰ is the period of the digital-stream clock pulses (in practice, τ_k¹ = τ_k⁰). The theory of receiving telemetry information in the form of this digital stream reduces to the Bayesian problem of statistical recognition of binary-code symbols. After receiving the signal y_k(t), hypotheses can be made about which of the stream signals u_k^i(t) was received: 0 or 1. This is a problem of binary-code symbol recognition, so it belongs to the class of binary problems associated with the detection of signals with known parameters, which is typical for digital communication systems with synchronization [16]. Next, we consider the transmission of MPA symbols by a binary code with "active zero"; because of the PFM of the subcarrier frequency, the energies of these signals are the same. The symbols 1 are transmitted by the signal u_k¹(t), and the symbols 0 by the signal u_k⁰(t) (Fig. 3). Erroneous decisions on the k-th digit symbol of the code sequence can consist either in receiving 1 when 0 was transmitted, or in receiving 0 when 1 was transmitted. In the problem considered, the costs of losses for erroneous reception of the symbols of the k-th digit can be taken as identical, C₁₀(k) = C₀₁(k), since either error leads to the loss of information about the symbol. If the symbol is received correctly, the losses are zero. Further, it is assumed that the quantity ζ_j transmitted by the MPA digital code takes any value within its physical feasibility, from 0 to ζ_j max. In this case, the a priori probabilities of transmission of the symbols 0 and 1 in each digit are, with high likelihood, statistically the same: P₀(k) = P₁(k) = 0.5. When solving the binary problem of detecting a digital-code symbol, one of four outcomes is possible: 1. The hypothesis of absence of a signal, H₀, is accepted and the hypothesis H₀ is true; 2. The hypothesis of signal reception, H₁, is accepted and the hypothesis H₁ is true; 3. The hypothesis of absence of a signal, H₀, is accepted but the hypothesis H₁ is true; 4. The hypothesis of signal reception, H₁, is accepted but the hypothesis H₀ is true. Introducing the conditional probability densities p(y|u₀) and p(y|u₁) of the realization y(t), given that the signals u₀ and u₁ were transmitted respectively, it is possible to use the ratio of these conditional probability densities as a one-dimensional random variable from which to develop the optimality criterion. The form of the conditional density functions of the signal y(t) for the transmitted digital symbols 0 and 1, corresponding to Fig. 3, is shown in Fig. 4.
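In the standard Bayesian form, consistent with the symmetric costs and equal priors assumed above (our rendering of the textbook expression), the decision statistic is

\[
\Lambda(y) \;=\; \frac{p(y \mid u_1)}{p(y \mid u_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta \;=\; \frac{P_0\,(C_{10} - C_{00})}{P_1\,(C_{01} - C_{11})} \;=\; 1. \tag{3}
\]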
The relation (3) is called the likelihood ratio. Thus, in the Bayesian interpretation, the optimality criterion for detecting the information symbol reduces to a threshold comparison: Λ(y) > η for H₁, or Λ(y) < η for H₀, where η is the threshold ratio of probabilities chosen from the costs of right and wrong decisions. The threshold voltage u_p is determined according to the optimality criterion η for a particular technical design of the decision-making device. Often in practice, instead of (3), the logarithm of the likelihood ratio, ln Λ(y), and the criterion ln η are used. In this case, the probabilities of reception of the symbols 0 and 1 in each digit are assumed to be equal, so η = 1 and ln η = 0. Therefore, the criterion of optimal detection of a symbol of any digit against a background of white noise takes the corresponding form. In the case of deterministic signals u_i(t) and interference n(t) representing normal white noise bounded by the frequency band Δf, the Bayes criterion for deciding on the choice of a hypothesis for a known realization of the received signal y(t) can be presented in the general form (6), where N₀ is the noise power spectral density and η is the threshold ratio of conditional probability densities (4). As mentioned earlier, the radar SR under consideration uses pulse-frequency manipulation (PFM) of the SPP subcarrier frequency fs to transmit digital information, i.e., a binary code with "active zero" is used [16]. In particular, the transmission of 1 uses a signal u₁(t) at the frequency fs = 813 kHz, and the transmission of 0 uses a signal u₀(t) at the frequency fs = 787 kHz. Both signals have the same energy E and duration τ_i¹ = τ_i⁰. The signal-to-noise ratios for u₁(t) and u₀(t) at the input of the correlation device are therefore the same. In this case, the optimality criterion can be represented as a correlation integral (8): the decision is made in favor of the symbol for which the correlation integral takes the greater value (Fig. 5); a brief software sketch follows below. The considered approach to processing the information signal in the radar receiver provides the highest probability of correct detection of the symbols of the entire transmitted digital packet and, ultimately, the correct determination of the MPA. The calculation of the correlation functions of the signals u₁(t) and u₀(t) is carried out in the microcontroller of the radar receiver. Fig. 6 compares the theoretical dependences of the single-symbol error probability P_err on the signal-to-interference power ratio q² = P_s/P_i for three types of modulation: PAM, PFM and PPM. The minimum calculated signal-to-noise ratio q = y(t)/n(t) in the telemetry channel adopted in the design of radar SRs is q = 3, which corresponds to a signal-to-noise power ratio q² = P_s/P_i of approximately 10 dB. Further, it is assumed that the signal spectrum is matched to the bandwidth of the receiver. The probability of error for receiving an elementary symbol is then: for PAM, P_m = 10⁻²; for PFM, P_m = 10⁻³; for PPM, P_m = 10⁻⁵. It should be emphasized that the use of PPM can reduce the probability of error even further (below P_m = 10⁻⁵), but it requires appropriate modernization of the radiosonde and radar equipment.
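The correlation receiver of relation (8) can be prototyped directly. The sketch below is our own: only the two subcarrier frequencies come from the text, while the sample rate, symbol duration and noise level are illustrative assumptions.

import numpy as np

FS = 10e6                 # sample rate, Hz (assumed for this sketch)
T_SYM = 0.21e-3           # symbol duration, s (assumed ~ tau_ti / 2)
F1, F0 = 813e3, 787e3     # PFM subcarrier frequencies for symbols 1 and 0

t = np.arange(int(FS * T_SYM)) / FS
u1 = np.cos(2 * np.pi * F1 * t)       # reference signal for symbol 1
u0 = np.cos(2 * np.pi * F0 * t)       # reference signal for symbol 0

def detect(y):
    """Decide in favor of the symbol whose reference signal gives the
    larger correlation integral, as in relation (8)."""
    return 1 if np.dot(y, u1) > np.dot(y, u0) else 0

# Example: symbol 1 observed in white Gaussian noise at amplitude SNR q = 3.
rng = np.random.default_rng(0)
y = u1 + rng.normal(0.0, 1.0 / 3.0, t.size)
print(detect(y))   # -> 1 with overwhelming probability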
Since 20 packets are transmitted during the observation of one information cycle T_C = 2 s, correlation processing of all packets with coherent accumulation of the signal energy theoretically increases the signal-to-noise ratio q in the telemetry channel by about 20 times, corresponding to a gain of about 13 dB (10·log₁₀ 20 ≈ 13 dB). When using the incoherent accumulation method, the signal-to-noise ratio q increases by about 6.5 dB (≈ 10·log₁₀ √20). In accordance with the graph shown in Fig. 6, the probability of error in the optimal processing of all packets during the information cycle for the PPM method is reduced to below P_m^k = 10⁻⁹. These theoretical results are confirmed by field-test data [14]. Further, it should be noted that, under the considered conditions of receiving telemetry information and taking relation (11) into account, the technically simplest criterion for optimal reception of the information symbol 0 or 1 is a threshold voltage η = U_p determined by the corresponding ratio. On the basis of the above theoretical provisions, the structure of the full digital MPA transmission code was developed in [11]. That article also discusses the structure of the protocol and the test results of the modernized sounding system (SR) of the "Vector-M" radar together with the MRZ-3MK operating in the packet mode of telemetry transmission.

Development of a digital telemetry transmission method from the radiosonde to the SR radar

On the basis of the results obtained in the development of the navigation SR "Polyus", technical means were developed and the radar SR was modernized by integrating a packet method of telemetry transmission from the ARS to the base radar [11-14]. These works examine features of the development of the microprocessor radiosonde MRZ-3MK and the protocol parameters of digital packet telemetry transmission. The features of the packet method of forming telemetry information in an ARS of the MRZ-3MK type are as follows. The block diagram of the ARS is shown in Fig. 7. The aerological radiosonde comprises: the primary unit (sensors) and secondary (measuring) MPA converters; an electronic control unit; and a microcontroller (MCU), on which are implemented the interface device, the computer and shaper of packet telemetry information (PTI), and the shaper of the modulating voltage for the SPP microwave module and frequency modulator. The implementation of the ARS on a universal microcontroller (based on the Cortex-M3 core with a 100 MHz working clock frequency) allows the following tasks to be solved in hardware and software:
- forming the main time cycle of the ARS;
- controlling the operation of the secondary measurement converter and measuring the time parameters of the signals at its output;
- measuring the output voltage of analog sensors (humidity sensor);
- providing the formation, coding and transmission of an information packet containing the calculated and measured values of the MPA, as well as the parameters of the current operating mode;
- supporting the ARS pre-flight preparation mode.

Because the information frequency band of the MPA telemetry does not exceed ΔF < 0.5 Hz [8-10], it must be updated at a rate of at least once every two seconds. The period of the information-stream clock pulses, τ_ti, is selected in accordance with the given bandwidth of the radar telemetry channel, Δf = 5.0 kHz, and the spectral width of the digital pulses: τ_ti = 2/(5.0 kHz) ≈ 0.42 ms. (2) The volume of the information packet is 240 bits.
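These design figures are mutually consistent. A quick arithmetic check (our own, reading the packet volume as 240 bits = 30 bytes, as detailed in the next paragraphs):

# Consistency check of the packet-timing figures quoted in the text.
bandwidth_hz = 5.0e3                 # radar telemetry channel bandwidth
tau_ti = 2.0 / bandwidth_hz          # clock-pulse period: 0.4 ms (quoted as ~0.42 ms)
packet_bits = 30 * 8                 # 30-byte packet -> 240 bits
t_packet = 0.42e-3 * packet_bits     # ~0.10 s, i.e. ~100 ms per packet
phys_rate = packet_bits / t_packet   # ~2400 bit/s physical channel rate
packets = 2.0 / t_packet             # ~20 packets per 2-second information cycle
print(tau_ti, t_packet, round(phys_rate), round(packets))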
The duration of the fragment of the information packet containing all the information about the MPA is T_P = 0.42 ms × 240 bits ≈ 100 ms. Redundancy built into the structure of the information packet allows individual bit errors caused by interference to be corrected, and the repeated duplication of packets counters fading of the signal. The information rate of data transmission in the telemetry channel is 1200 baud. The information bits in a packet are encoded with a self-synchronizing line code of the Manchester-II type (a brief encoder sketch is given at the end of this section). Measurement of the meteorological parameters of the atmosphere in the ARS proceeds as follows: the MPA act on the sensor unit and secondary transducers of the ARS, whose output signals, in the form of video pulses of the measuring periods, are fed to the input of the MCU telemetry module. Information about the measuring periods of the reference channel, temperature and humidity, in the form of a digital code, is supplied to the computing device and the packet telemetry information (PTI) generator. In the MCU processor, a digital packet containing 240 bits of telemetry information received from the ARS sensors is formed within T_P = 100 ms. Transmission of the MPA in the form of digital packets is performed in simplex mode via the radio channel from the radiosonde to the base radar. The transmission time of one packet of meteorological information is T_P = 100 ms. This ensures multiple transmissions of packets during one cycle, with the information updated at least once every 2 seconds, which significantly increases the reliability of telemetry reception under fading of the radiosonde signal. The physical speed of data transmission in the channel is 2400 bit/s. The structure of one variant of the information packet transmitted by the ARS is shown in Fig. 8. The total packet length is 30 bytes × 8 = 240 bits. At a transmission rate of 2400 bit/s, this means that 20 identical packets are transmitted in 2 seconds. This redundancy makes it possible, in the simplest case, to dispense with noise-resistant coding of the digital stream: erroneous bits are recovered by correlating multiple adjacent packets. The ARS radio signal is optimally processed in the telemetry channel of the radar receiver. To decode the telemetry data packets, the radar was modernized by implementing a special subprogram as part of the radar system software. In general, the use of packet transmission of telemetry information in radar SRs makes it possible to: 1. Reduce the duration of the information transfer cycle to 1-2 seconds, thereby increasing the reliability of reception of telemetric information under strong fading of the ARS signal and defining the vertical profile of the MPA in more detail; 2. Reduce the level of spurious amplitude modulation of the ARS signal, owing to the homogeneous nature of the PTI, and increase the stability of automatic tracking of the ARS in angular coordinates; 3. Remove the restrictions on the duration of the ARS transducer periods and expand the possibility of using different types of MPA sensors; 4. Owing to the small duty cycle of the information pulses, significantly simplify the conditions for receiving and processing the signal in the radar receiver, thereby increasing the reliability of reception of telemetry information. In the course of modernizing the MARL-A and "Vector-M" radars by modifying the software to decode the packet information transmitted about the MPA, testing showed that the performance of the domestic MARL-A and "Vector-M" radars included in the SR increases significantly.
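As noted above, the packet bits are carried by a Manchester-II line code. A minimal encoder sketch (our own illustration, not the MRZ-3MK firmware; one of the two conventions in common use is assumed):

def manchester2_encode(bits):
    """Manchester-II (biphase-L): every bit maps to a pair of half-bit
    levels with a guaranteed mid-bit transition, so the receiver can
    recover the clock from the data stream itself.
    Convention assumed here: 1 -> (1, 0), 0 -> (0, 1)."""
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

# Encode one hypothetical telemetry byte, MSB first.
byte = 0xA7
bits = [(byte >> i) & 1 for i in range(7, -1, -1)]
print(manchester2_encode(bits))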
During 2013-2014, flight tests of the ARS MRZ-3MK were performed at the Central Aerological Observatory (CAO) of Roshydromet (Dolgoprudny) with serial MARL-A and "Vector-M" radars equipped with the improved software [15]. On the basis of the test results, the decision was made to start mass production of the ARS MRZ-3MK for the aerological network of Roshydromet and for the Vostochny and Baikonur cosmodromes.

Conclusion

A theoretical analysis of the conditions for optimal reception of telemetric information about the MPA from the ARS on board to the ground radar in packet mode was performed. A description of the improved radiosonde MRZ-3MK, which implements the packet method of telemetric information transfer, is presented. The results of the research and experimental launches show that the modernization of the serial ARVC MARL-A and "Vector-M" to provide operation with the ARS MRZ-3MK comes down to modifying the software of the radar telemetry blocks to decode the received digital packets of meteorological information. To improve the reliability of meteorological information in SRs, the application of PPM for the transmission of telemetry information, together with coherent accumulation of the information-signal energy in the radar receiver, should be considered promising.
Grimace Scores: Tools to Support the Identification of Pain in Mammals Used in Research

Simple Summary

The ability to identify and assess pain is paramount in animal research to address the 'refinement' principle of the 3Rs (Reduction, Refinement, Replacement), satisfy public acceptability of animal use in research and address ethical and legal obligations. Many physiological, behavioural and physical pain assessments are commonly used, but all have their limitations. Grimace scales are a promising adjunctive behavioural pain assessment technique in some mammalian species used in research. This paper reviews the extant literature studying pain assessment techniques in general, and grimace scales specifically, in animal research. The results indicate that the grimace scale technique is simple, can be used spontaneously at the 'cage side', is non-invasive in its application, highly repeatable, reliable within and between observers, and easy to train and use. The use of grimace scales should be more frequently considered as an important parameter of interest in research and animal wellbeing. Further research into the use of grimace scales is required to develop scales for a wider range of animal species, to increase applicability in studies specifically related to pain assessment, and to further validate the technique.

Abstract

The 3Rs, Replacement, Reduction and Refinement, is a framework to ensure the ethical and justified use of animals in research. The implementation of refinements is required to alleviate and minimise the pain and suffering of animals in research. Public acceptability of animal use in research is contingent on satisfying ethical and legal obligations to provide pain relief along with humane endpoints. To fulfil this obligation, staff, researchers, veterinarians, and technicians must rapidly, accurately, efficiently and consistently identify, assess and act on signs of pain. This ability is paramount to uphold animal welfare, prevent undue suffering and mitigate possible negative impacts on research. Identification of pain may be based on indicators such as physiological, behavioural, or physical ones. Each has been used to develop different pain scoring systems with potential benefits and limitations in identifying and assessing pain. Grimace scores are a promising adjunctive behavioural technique in some mammalian species to identify and assess pain in research animals. The use of this method can be beneficial to animal welfare and research outcomes by identifying animals that may require alleviation of pain or humane intervention. This paper highlights the benefits, caveats, and potential applications of grimace scales.

Introduction

The 3Rs, Replacement, Reduction and Refinement, is a fundamental framework used internationally to ensure the ethical and justified use of animals in research [1]. The implementation of refinements is required to alleviate and minimise the pain and suffering of animals used in research. Public acceptability of animal use in research is contingent on satisfying the ethical and legal obligations to provide appropriate pain relief along with humane endpoints for potentially painful procedures. To fulfil this obligation, staff, researchers, veterinarians, and technicians must rapidly, accurately, efficiently and consistently identify and assess signs of pain in their target species, and act accordingly.
The ability to identify and assess pain and suffering is paramount to animal welfare in research, to prevent undue suffering and any possible consequent negative impact on research outcomes. Identification of pain may be based on several indicators, such as physiological, behavioural, or physical ones. Each of these has been used to develop different pain scoring systems, with potential benefits and limitations in identifying and assessing pain. Grimace scores are a promising adjunctive behavioural technique in some mammalian species to facilitate the identification and assessment of pain in research animals. The use of this method can be beneficial to animal welfare and research outcomes by identifying animals that may require alleviation of pain or humane intervention. A discussion of the benefits of grimace scales, including their potential applications, is included in this paper.

Eligibility Criteria

The inclusion criteria were: publication in English; assessments of pain in research animals and livestock; mammalian grimace scales in animals; studies key to the development of grimace scales; and studies that used facial units as an indicator of pain in animals.

Search Strategy

The search strategy aimed to find only articles published in English or translated into English. There was no restriction on the date of publication. Articles were searched for between June and July 2020. Keywords used to search all databases and reference sources were: animal grimace score, animal grimace scale, animal pain assessment, animal pain indicators, animal pain face, and animal pain scales; the NC3Rs website was also searched. All papers were retrieved and downloaded into EndNote X8.0.1, with any duplicates removed.

What is pain and why does it matter?

The International Association for the Study of Pain defines pain as: 'An unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage' [2]. Pain can be further categorised as acute, visceral or chronic. Acute pain serves an evolutionary and adaptive function to signal and avoid potential or actual damage to tissues. This type of pain may result from an injury or surgical wound [3-5]. Visceral pain is due to the activation of stretch or pressure receptors in visceral organs. Unalleviated or poorly treated acute pain can progress to chronic pain. The latter is the result of neuroplastic changes occurring within the nervous system, rendering the body more sensitive to pain; these changes can even create sensations of pain without any external stimulus [3-7]. It is important to be able to manage and assess all types of pain in research mammals and avoid the inadvertent development of chronic pain. While the types of pain may manifest differently, research staff must be able to assess and alleviate pain to maintain optimal animal wellbeing (mental and physical). There are moral, legal and ethical obligations that require those working with animals to manage pain [5,7-13]. The recognition, assessment and treatment of pain is essential to public support of, and the acceptability of, the use of animals for research [7,12,13]. Using the precautionary principle, animal ethics committees and research staff must acknowledge the potential for pain [5,10,12,14]. They must also consider the experimental and animal welfare consequences of pain and take steps to ensure pain is adequately managed during procedures [3-5,15].
Regulatory frameworks often apply, as a precaution, the anthropomorphic principle, by which any procedure causing, or expected to cause, pain in humans may produce pain in animals. The European Union, United Kingdom and Australian regulations operate on this principle and require the alleviation of pain for research animals. Exceptions to the alleviation of pain may be granted for studies that measure pain and/or distress. However, even in these exceptional cases, there is always a maximum threshold level of pain before intervention is required [5,10,12,14-17]. Although the management of pain and the associated humane interventions will vary with the nature of the experimental outcomes, researchers are required to intervene according to predetermined criteria to alleviate pain or, if necessary, humanely euthanise animals. For practical purposes, animal users can only fulfil this obligation for humane intervention if they are able to identify pain rapidly, consistently and accurately in their target species. Unalleviated pain results in alterations in animal behaviour, physiology, and physical states [18-20]. These changes can be identified in various ways through behavioural observation, biochemistry, haematology, endocrinology and physical alterations in locomotion or posture [4,21]. In addition to the suffering of the animal, these changes may impact experimental outcomes and become a confounder by increasing experimental variability and producing negative affective states. Conversely, positive emotional states of animals are linked to less experimental variability and more robust experimental results [22,23]. The full spectrum of potential confounders arising from unmitigated pain is not entirely understood; however, the literature supports that there are experimental and animal welfare benefits in identifying and subsequently treating pain and alleviating negative effects on animals used for research [3,17,19,21-24].

Pain Faces

Humans are known to display a series of facial expressions linked to the experience of pain [25,26]. These so-called 'pain faces' are used in human medicine to detect, as well as assess, pain in non-verbal humans (e.g., infants) [7,25,26]. These pain faces can be used to develop grimace scales and capitalise on the human propensity to focus on the facial area [27,28]. These pain faces are also conserved in many non-human mammalian species [26,29-34], making them a naturally useful basis for the identification and assessment of pain. However, as with any technique, grimace scales have benefits and limitations, which are important to acknowledge and take into consideration before their use.

Pain Assessment Requirements

There are a series of important considerations when determining whether a method is an appropriate test for identifying and/or assessing pain. The testing method must reliably produce the same result independent of the observer and of the number of times an animal is observed; these are known, respectively, as interobserver and intraobserver agreement (a worked example follows below). It should also be consistent between testing timepoints and observers [7,35,36]. An ideal method should be easy to train and not require specialist knowledge or equipment [7,33,36-38].
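In practice, agreement on categorical or ordinal grimace scores is often summarised with a chance-corrected statistic. The sketch below uses Cohen's kappa as one common choice (our own illustration with invented scores, not data from the cited studies; the reviewed papers also use other statistics, such as intraclass correlation).

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical scores (e.g., FAU scores 0-2)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum(pa[c] * pb[c] for c in set(pa) | set(pb)) / n**2
    return (observed - expected) / (1 - expected)

# Two observers scoring one facial action unit (0/1/2) on ten animals.
print(cohens_kappa([0, 1, 2, 1, 0, 2, 1, 1, 0, 2],
                   [0, 1, 2, 0, 0, 2, 1, 2, 0, 2]))  # -> ~0.71, substantial agreement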
To determine the validity of a pain assessment technique, the animals should be tested before the painful stimulus, after the introduction of the painful stimulus, and once pain relief has been provided. The test should demonstrate an absence of pain before the painful stimulus, an increase in pain at the introduction of the painful stimulus, and a subsequent reduction in observed pain on delivery of an appropriate analgesic [7,36]. Ideally, the test should be able to demonstrate a dose-responsive relationship to pain based on the administration of appropriate analgesia [40][41][42][43][44]. The specificity and sensitivity of a test are also crucial to ensure animals are correctly identified when pain or welfare concerns arise. If the specificity is too low, there is a risk of pain being incorrectly identified, potentially leading to unnecessary interventions such as pain relief or humane euthanasia [3,7,15,36]. Alternatively, if the sensitivity is too low, experimental animals may reach their threshold for intervention while being inaccurately identified as not painful, and therefore remain in pain, possibly even beyond their humane endpoint. An appropriate method demonstrates both high sensitivity and high specificity to ensure the correct assessment and management of arising pain or welfare issues [3,7,15,36]. Cage- or pen-side pain identification techniques should rely on spontaneous rather than retrospective indicators of pain. This ensures humane intervention can be applied promptly, with animals not left in distress for any extended length of time [9,45,46]. The assessment of pain should preferably be non-invasive, to avoid the risk of eliciting a pseudo-analgesic stress response that inhibits the ability of the observer to detect pain accurately [47][48][49][50]. Techniques such as assessing the quality of nest-building in mice [3,15,18,[51][52][53] or the degree of burrowing in rodents are non-invasive, observational, proxy measures of wellbeing and potentially of pain [3,15,18,54,55].

Confounders to Pain Identification
Some caveats must be kept in mind when selecting a pain assessment technique. Many pain assessment indicators may be ambiguous. The choice of a pain identification tool or methodology must be specific to the species and validated for the procedures or experimental work being performed [56][57][58][59][60]. It is well accepted that not all animals demonstrate the same signs of pain, even for a similar nociceptive stimulus [36,61]. Many research animals are prey animals and, as such, are prone to hide signs of pain or to demonstrate a freeze response, rendering pain assessment challenging [21,46,50,[62][63][64][65][66]. The types of procedures or experiments performed should not obscure the ability of the technique to detect pain [39,[56][57][58][67]. Pain identification should be consistent across the species regardless of sex, strain or breed; however, differences in pain thresholds between sexes or strains may exist [67,68]. Additionally, some natural behaviours (e.g., the flehmen response, aggression) [3,4,32,[69][70][71][72] and physiological indicators (e.g., cortisol, heart rate) [3][4][5][7,15,17,21,66] may be equivocal and require differentiation. Whenever possible, the chosen technique should accurately identify an animal in pain independent of the procedure or behaviour performed, species, affective or physiological state, sex, strain or breed.
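Looping back to the sensitivity and specificity requirement above, the short sketch below scores a hypothetical screen of eight animals against a reference assessment. The data and function name are invented for illustration only; they are not results from any of the cited studies.

```python
def sensitivity_specificity(predicted_pain, actual_pain):
    """Fraction of truly painful animals flagged (sensitivity) and of
    pain-free animals correctly cleared (specificity)."""
    pairs = list(zip(predicted_pain, actual_pain))
    tp = sum(p and a for p, a in pairs)        # painful and flagged
    tn = sum(not p and not a for p, a in pairs)  # pain-free and cleared
    fn = sum(not p and a for p, a in pairs)    # painful but missed
    fp = sum(p and not a for p, a in pairs)    # pain-free but flagged
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening outcomes versus a reference assessment.
flagged = [True, True, False, True, False, False, True, False]
painful = [True, True, True, False, False, False, True, False]
sens, spec = sensitivity_specificity(flagged, painful)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.75, 0.75
```

Low sensitivity corresponds to the dangerous case described above (painful animals left past their humane endpoint), while low specificity drives unnecessary interventions.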
Non-Grimace Scale Pain Assessment
The individual expression, magnitude and experience of pain can vary between animals [67,68]. There are known difficulties in measuring the magnitude of a particular animal's pain or distress, which can make the absolute degree of pain challenging to assess [4,21,62,68,[73][74][75]. Ideally, a pain assessment technique should ensure accurate pain identification with minimal opportunity for experimental outcomes to be confounded by experimental procedures, sex, breed/strain or species. At present, there is no single non-invasive, low-cost behavioural, physical or physiological pain assessment technique that is spontaneous, pain-specific, easy to train and quick to use (Table 1) [5,15,16,21,38,46,53,78,[86][87][88][89]. With the exception of some behavioural ethograms [63,78,90,91], other methods are unable to give a reliable dose-dependent response to pain. While many pain identification methods have their uses and benefits, their value in the cage- or pen-side management of animal pain and/or in the timely and appropriate application of humane intervention is limited. Thus, a myriad of techniques has been developed in an attempt to assess and capture the various expressions of pain in animals. These tools usually revolve around three dimensions: behavioural, physiological and physical [5,73]. Table 1 categorises and reviews some commonly used assessments [3,4,7,8,15,21,80] in terms of their dimension, timeliness (spontaneity), non-invasiveness, ease of training, and low cost with minimal or no equipment requirements. * Ultrasonic vocalization monitoring requires special equipment; ** e.g., egg or milk; *** eggs can easily be counted; **** obvious signs of lameness are easy to train, but more subtle lameness may be more difficult.

Grimace Scales in Animals
Grimace scales are proving to be a useful methodology for the identification of pain that meets most of the prerequisites for identifying and assessing pain in research animals. A range of species-specific grimace scales has been developed (Table 2) and used in a wide range of experimental studies and research settings (Table 3). The initial methodology for mapping pain and developing a facial action coding system (FACS) was established in humans [97,98]. A FACS is an anatomical classification system used to map facial movements and the facial muscle areas involved in contraction and relaxation. Photographs and videos scored by blinded observers serve as the basis of facial mapping for a FACS. A FACS offers the ability to code and identify expressions of pain via the individual components of facial expressions, known as facial action units (FAUs) [99]. FAUs consistent with the expression of pain can then be used to develop a pain face or 'grimace' [99]. Regions of the face found to change during the expression of pain include the eye, nose, cheek, mouth, ear and whiskers [8,81,100]. The position or carriage of the head is also found to change in some species [33,64,68,85,101,102]. The FAUs related to the expression of a grimace face in mammalian animals used in research are included in Table 4. From this known 'grimace face', the severity of the pain experienced can be objectively scored from images and/or film of animals in a known naturally occurring (e.g., lameness, mastitis) [37,82,85] or experimentally induced (e.g., plantar incision) [41,44,103] state of pain. Table 2 summarises many of the available studies that demonstrate successful use of grimace scales in research animals.
The table outlines which species-specific grimace scales have been validated, shown to be pain-specific, demonstrated a dose-dependent relationship, been used in real time and been easy to use. The different pain states to which they are applicable are also listed. In all but one species (guinea pigs) [63,78,90], observers were found to correctly, reliably and objectively identify pain in animals when using facial expressions or facial action units [70,82]. * Validated by a reduction in grimace scores on receipt of pain relief and/or corroborated with other pain behaviours or testing; N/R denotes a lack of available publications. Table 3. Grimace scale facial action units by species (its rows cover the validated pain states, for example sepsis in the rat [121]). Control animals (negative or positive) were also included throughout this process, and a simple species-specific grimace scale was developed [25,33,41,107]. The scoring system most commonly used in grimace scales is a three-point scale that determines whether a specific FAU is not present (score = 0), moderately present (score = 1) or obviously present (score = 2) [41,45,100]. The scale must then demonstrate a dose-dependent change in pain scores on the delivery of analgesia [7]. Further research is typically performed to ensure the applicability of the grimace scale across multiple pain scenarios or environments, sexes, strains/breeds, ages, and types and lengths of painful stimuli [7]. The scoring system can be used in three ways. Firstly, it can determine the absence or presence of pain. Secondly, it can offer some distinction between intensities of pain via the summation of total scores; a change of two or more points is considered a legitimate alteration in pain intensity [133]. Thirdly, a threshold score can be set to offer guidance to research staff as to when to intervene to provide pain alleviation or humane euthanasia of research animals. The process of developing a grimace scale is time intensive, but once a scale is developed and validated it is relatively easy to train research staff to use it [7,38,70,85,101].

Advantages and Uses
Grimace scales have been applied across numerous research models, species and environmental contexts [41,128] (Table 4). They can also be used to detect pain in existing pain research models as well as in analgesic drug studies [40][41][42][45,60,77,109,110,[128][129][130]. Grimace scales offer the ability to detect and assess the severity of pain, determine the potential benefit of any analgesic intervention and assist in identifying humane interventions. The technique is of practical value because it can be used at the cage or pen side as a spontaneous indicator of pain [39,41,55,75,92]. As a methodology, it has the added benefit of being easy to teach to a range of observers, including research staff, clinical veterinarians, animal scientists and undergraduate and graduate students [38,41,55,75,129]. Overall, the grimace scale methodology appears to be acceptably conserved and validated across a number of mammalian species and a range of experiments. It is likely the technique can be applied across an even greater range of mammalian species and experimental settings (Tables 3 and 4). However, a careful systematic assessment will always be required to ensure applicability, accuracy and validity. Grimace scale facial expressions are proving to be a useful complement [81] to existing tools in the assessment of animal wellbeing.
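The three-point, per-FAU scoring described above lends itself to a very small worked example. The sketch below sums hypothetical mouse-style FAU scores and applies an intervention threshold; the FAU names are the facial regions named in this review, but the threshold value and function are illustrative assumptions only (real thresholds are set per species and per ethics protocol).

```python
# Hypothetical mouse-style FAUs; the set and count vary by species.
FAUS = ["orbital_tightening", "nose_bulge", "cheek_bulge",
        "ear_position", "whisker_change"]
INTERVENTION_THRESHOLD = 6  # study-specific; set by the ethics protocol

def grimace_score(fau_scores):
    """Sum of per-FAU scores, each 0 (absent), 1 (moderate) or 2 (obvious)."""
    assert all(fau_scores[f] in (0, 1, 2) for f in FAUS)
    return sum(fau_scores[f] for f in FAUS)

observation = {"orbital_tightening": 2, "nose_bulge": 1, "cheek_bulge": 1,
               "ear_position": 2, "whisker_change": 1}
total = grimace_score(observation)
print(f"total={total}, pain present={total > 0}, "
      f"intervene={total >= INTERVENTION_THRESHOLD}")  # total=7 -> intervene
```

This mirrors the three uses named above: presence/absence (any non-zero score), intensity (the summed total), and a threshold for humane intervention.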
The scores generated from a grimace scale should be used in conjunction with the context in which the animal is scored, its history, the procedure performed and the general parameters of wellbeing and signalment (sex, strain, species). When used appropriately, it is an excellent method of identifying pain and an adjunct to maintaining animal wellbeing in research studies [3,64,70,82,85,87]. The technique has the potential to improve pain detection in research animals and to give observers (e.g., research staff) a better opportunity to provide analgesia, administer humane euthanasia or identify animals requiring reassessment. Grimace scales can thus be a vital tool for mitigating the experience of pain in animals and refining animal welfare outcomes [41,60,66,75,76,82,100,114,128]. Unlike other types of pain assessment, grimace scales are spontaneous and usable in real time [7,45,55,76,87,91,92,101]. They can also be matched and corroborated against other known indicators of pain or painful diseases including, but not limited to, lameness [37,64,82,85], cortisol [70], behavioural ethograms [81,85,91,92], acute laminitis [37], and mastitis and foot rot [82]. A future area for development is the use of software automation in the development and scoring of facial expressions. Scoring software, along with video cameras installed in enclosures, may be able to enhance and hasten the development of grimace scales, offer highly accurate grimace scores for animals in pain, and allow the remote monitoring and scoring of affected animals [41,59,134,135]. Another benefit of the system is its simplicity, as it enables staff to distinguish a painful face from a non-painful one. The three-point scale is thought to be very useful in reducing subjectivity and offers observers greater clarity, confidence and support as to when to administer pain relief or humane intervention [7,75,81]. Reductions in grimace scores have been shown to occur on the application of pain relief [33,35,41,45,82,85,100,102] in a dose-dependent manner [40,41,128]. Therefore, grimace scales have the potential to assess both the presence and the severity of pain. Their use can alert research staff to animal discomfort that may require additional monitoring, assessment or analgesia. Grimace scales are a non-invasive method for the detection of pain [7,81,100]. Many of the animals utilised in research are known 'prey' species with a high degree of stoicism and an evolutionary adaptation to minimise expressions of pain or poor welfare states [50,[63][64][65][74]. Consequently, an ideal pain identification and assessment technique should be non-invasive and should reduce the opportunity for these prey animals to minimise their expression of pain, as well as the potential for stress-induced analgesia [47][48][49][50]. Both experienced and inexperienced observers can identify pain with significant intraobserver and interobserver agreement [41,57,60,66,76,82,100,114]. A further benefit of using grimace scales to identify and assess pain is that extensive animal experience is not required. In the published studies, observers varied in their background experience of research and animal work and in their training in pain assessment techniques, ranging from students (undergraduate and postgraduate) to veterinarians, animal care professionals, and early- to late-career researchers [33,38,70,75,101,106,114,116].
Another favourable feature of grimace scales is that neither a natural empathy nor an innate understanding of animal behaviour is necessary, nor is a belief in the ability of an animal to experience pain. Through the use of a grimace scale, pain identification and assessment can be made more objective (for or against the presence of pain). It also requires research staff to formally record a score and monitor animals for signs of pain, which can offer a more precise framework for determining when humane intervention or pain relief is needed [36]. The apparent usefulness of grimace scales could be related to several factors. One is that the method capitalises on the innate human tendency to focus on the facial area when observing an animal [28]. Interestingly, many FAUs (orbital tightening, ear position and the cheek area) appear to be conserved across mammalian species [33,41,45,82,100] (Table 3) and may be tapping into an evolutionarily conserved repertoire of known FAUs. This may help explain how identifying even a few potentially evolutionarily conserved FAUs can still be useful in detecting pain [35]. It is supported by statistical modelling, which has identified the FAUs most strongly correlated with a pain face, thereby offering the potential to isolate which FAUs are critical for use in a grimace scale (i.e., statistically significant) and which may detract from the scale (e.g., equines have four critical FAUs and mice have two) [35]. It may also explain why grimace scales are one of the few techniques proven to be robust across several different mammalian species when compared with other pain assessment techniques [34]. However, by using only the minimum required number of FAUs to score pain, the ability to determine appropriate intervention thresholds and to assess pain intensity may be reduced. The use of a FACS and subsequent combinations of FAUs appears to be an excellent method of identifying changes in facial features consistent with the experience of pain in animals [33,40,41,76,82,85,92,101,104,106,114,130]. The grimace scale method meets many of the requirements of an ideal pain identification technique and is a reliable and validated method of assessing pain in many of the commonly used research animals [33,35,40,41,45,70,82,85,100,106,114].

Limitations
Like any tool, grimace scales have their caveats and limitations. Pain and grimace scales take considerable time to develop [3,41,61,62,99]. FAUs can be species-specific, with each FAU requiring validation and, ideally, statistical modelling and weighting to determine its significance in the system [8]. As a result, the number of FAUs varies amongst species, with mice having five FAUs [100], rats four [41], sheep three to five [70,82], lambs five [107], equines six [37], ferrets three [106], cattle three to four [68,85], rabbits five [45], and pigs and piglets three [91,114]. While some FAU movements appear to be tightly conserved (e.g., orbital tightening), others vary amongst species. These variations can even be contradictory between species and may be due to the age of the animal [107,115] and/or the musculature of the face. The nose and philtrum tend to be the areas of greatest variation amongst mammals [37,82]. For example, rats and rabbits [41,45] flatten their noses when in pain, while mice and ferrets bulge theirs [100,106]. Therefore, each species requires the development of its own precise facial or grimace scoring system.
Currently, several commonly used mammalian research species either do not have grimace scales or have scales that are yet to be fully developed. These include hamsters, dogs, guinea pigs and non-human primates. Further work is needed to develop grimace scales for these species and to determine their validity. Pain expression and threshold levels can also vary slightly amongst breeds or strains [67,68]. To minimise these variations, baseline grimace scores should be taken for every cohort, daily, for approximately three days before the initiation of an experiment or potentially painful stimulus [75]. False positives are known to occur in a small range of scenarios, such as sedation/anaesthesia and sleep [41,56,75,100], or during bouts of aggression [32]; grimace scales should therefore not be used at those times. Additionally, facial variation may occur between individuals. As a result, absolute scores may be less important than a change in the score of two points or more [133], and a 'trends-based' approach may be more useful. There are also times when grimace scales can produce a false negative, with animals not demonstrating a pain face during a known painful procedure. For example, ear clipping in mice did not produce any changes in grimace scores [57], and neither did experimentally induced gastrointestinal mucositis in rats [58]. There is discussion around differences in the length of time after a painful stimulus for which an animal may display a pain face and hence a grimace score. Early peer-reviewed publications questioned whether grimace scales remained useful for more than 24 h after a painful event [59,100]. More recent studies have demonstrated that pain can be identified in animals via grimace scales for more than 24 h, and even more than 14 days, after a painful stimulus [55,82,108,120,128,129]. From the recent literature and available publications, it is clear the technique has applications beyond its initial use. The history of the animal, the species, the breed/strain, the environmental context, the procedures performed and the general parameters of wellbeing must all be considered when using grimace scores [7,64,75,81]. There is still research to be conducted to explore the use of grimace scales. Not every grimace scale has been fully validated (ferret, piglet, lamb), and additional species may yet benefit from their development (goats or other small mammals). Preliminary work suggests that guinea pigs do not appear to be good candidates for facial pain scales: studies using behavioural ethograms that included elements of commonly and strongly conserved facial expressions (e.g., orbital tightening) did not find any significant correlation between these expressions and pain [63,78,90]. It may be that grimace scales are not appropriate for this species, or that the FAUs associated with pain differ from those of other mammalian species. Many scales have only been used in specific settings or studies and need further work to determine whether they are affected by common agricultural or animal procedures, such as restraint in lambs [107] or piglets [115]. There is still variability in the available literature as to the length of time for which a grimace score can be detected in some species and studies [58,128], as well as its range of applicability [56]; both should be further explored.
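Returning to the baselining and trends-based approach described earlier in this section, the logic can be expressed in a few lines. The three-day baseline and the two-point delta are taken from the text above; the function name and the simple averaging rule are illustrative choices, not a published protocol.

```python
def needs_reassessment(baseline_scores, current_score, delta=2):
    """Flag a rise of >= delta points over the animal's own baseline rather
    than relying on an absolute cut-off, per the trends-based approach."""
    baseline = sum(baseline_scores) / len(baseline_scores)
    return current_score - baseline >= delta

# Hypothetical daily baseline scores collected for ~3 days pre-procedure.
print(needs_reassessment([1, 2, 1], 4))  # True: 4 - 1.33 >= 2
print(needs_reassessment([1, 2, 1], 2))  # False: within normal variation
```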
While grimace scales have been developed and validated for several mammalian species, there are known species-specific variations in the expression of pain faces (as in guinea pigs) that may make the development of a grimace scale unsuitable or require a different approach.

Application and Summary
In an ideal situation, a single pain identification technique would suffice across all species and scenarios; this currently does not exist and may never exist, given that pain is an individual, multifactorial experience [61]. Nonetheless, the growing body of literature demonstrates that, overall, pain faces in mammalian species are expressed and can be identified during most procedures, pain types and contexts. Most of the variation found when using grimace scores to identify and assess pain is in the strength of association, the magnitude of certainty and the consistency of grimace score expression. Even with these variations, grimace scales appear to be good at detecting pain in mammals [41,45,64,70,85,87,91,[100][101][102][106,114]. A more standardised approach to studies and to the use of grimace scales would help to reduce minor confounding elements (e.g., handling, conspecifics) and to identify areas of improvement. Future studies, and the day-to-day practical and experimental application of the technique, would benefit from a formally validated and consistent training program, complete with video and photographic materials. A standard training program would be useful for grimace score users, as it has been for other pain scoring systems [38,46,86]. The development, training and implementation of grimace scales could be further enhanced by technologies such as automated or semi-automated software for scale development and for scoring via video surveillance [41,59,134,135]. These nascent technologies are often unfeasible at present due to cost, infrastructure constraints and their early stage of development, but in the future they may play a greater role in grimace scoring systems. The identification and mitigation of pain fulfil an essential and required aspect of refinement when working with animals in research. As yet, no single indicator or technique is considered sufficient for the identification and assessment of pain. Several peer-reviewed publications have advocated that multiple measures of animal welfare and pain should be employed to mitigate the potential negative effects of pain on animal welfare and research outcomes [3,4,9,15,21,61,64]. Using a combination of relevant retrospective and spontaneous techniques, applied on a case-by-case basis, maximises the opportunity to detect and assess pain in research animals; it minimises the chance of pain going undetected and maximises the opportunity to preserve animal welfare and research outcomes. While there are known limitations, grimace scales are useful tools for at least identifying potential indicators of pain [60]. Used alongside other parameters of pain and/or animal wellbeing, they are likely to increase the ability of research staff to identify and assess pain in mammals and to offer appropriate humane interventions. At this time, grimace scales are a promising and important pain identification tool; however, further work should be performed in a consistent manner to validate existing work and to explore new applications to other species, conditions and experimental studies.
To achieve good animal welfare and research outcomes and to meet legal and ethical obligations, it is paramount to utilise a consistent and accurate pain identification method. The use of a grimace score can assist in fulfilling these obligations by identifying pain and allowing timely intervention via analgesia or humane endpoints. Grimace scales are thus proving to be a valuable tool with a myriad of applications. Their use can offer improvements in animal welfare and more robust animal research outcomes [9,64]. While grimace scales are not without limitations, a growing body of evidence suggests they can be significantly useful adjuncts in the detection and assessment of pain across a variety of species and research studies [7,35,41,66,76,100]. When used correctly by trained individuals, along with an animal's history and basic wellbeing criteria, grimace scales can be a practical, accurate and easy method of identifying pain in research animals, providing refinements in experimental animal welfare and outcomes [38,61,75]. Future applications could focus on different types of experimental studies, new species, neonates, standardisation of training protocols, and the correlation of multiple observations over time.

Conclusions
While there are some identified limitations, grimace scales appear to be a valid tool for pain assessment in many mammalian animals and have many benefits compared with non-grimace pain assessment techniques. Given the simplicity of spontaneous use, non-invasive application, repeatability of results, interobserver and intraobserver reliability and ease of training, grimace scales should be more frequently considered as an important parameter of interest in research and animal wellbeing. In addition, the technique has the capacity to satisfy the requirement for refinement in accordance with the 3Rs. Additional research into the use of grimace scales is required for other species, for pain-related and other specific studies, and for further validation. Acknowledgments: Special thanks to Manuel Christie, Natalie Roadknight, Kat Littlewood, and Natarsha Williams for their assistance in the preparation of this manuscript. Conflicts of Interest: The authors declare no conflict of interest.
Comparison of performance of self-expanding and balloon-expandable transcatheter aortic valves
Objective: To evaluate the flow dynamics of self-expanding and balloon-expandable transcatheter aortic valves pertaining to turbulence and pressure recovery. Transcatheter aortic valves are characterized by different designs that result in different valve performance and outcomes. Methods: Assessment of transcatheter aortic valves was performed using self-expanding devices (26-mm Evolut [Medtronic], 23-mm Allegra [New Valve Technologies], and small Acurate neo [Boston Scientific]) and a balloon-expandable device (23-mm Sapien 3 [Edwards Lifesciences]). Particle image velocimetry assessed the flow downstream. A Millar catheter was used for the pressure recovery calculation. Velocity, Reynolds shear stresses, viscous shear stresses, and pressure gradients were calculated. Results: The maximal velocity at peak systole obtained with the Evolut R, Sapien 3, Acurate neo, and Allegra was 2.12 ± 0.19 m/sec, 2.41 ± 0.06 m/sec, 2.99 ± 0.10 m/sec, and 2.45 ± 0.08 m/sec, respectively (P < .001). Leaflet oscillations with the flow were clear with the Evolut R and Acurate neo. The Allegra showed the smallest range of Reynolds shear stress magnitudes (up to 320 Pa) and the Sapien 3 the largest (up to 650 Pa). The Evolut had the smallest viscous shear stress magnitude range (up to 3.5 Pa) and the Sapien 3 the largest (up to 6.2 Pa). The largest pressure drop at the vena contracta occurred with the Acurate neo transcatheter aortic valve, with a pressure gradient of 13.96 ± 1.35 mm Hg. In the recovery zone, the smallest pressure gradient was obtained with the Allegra (3.32 ± 0.94 mm Hg). Conclusions: Flow dynamics downstream of different transcatheter aortic valves vary significantly depending on the valve type, without a general trend depending on whether valves are self-expanding or balloon-expandable. Deployment design did not have an influence on flow dynamics.
Current commercially available transcatheter aortic valves (TAV) are either self- or balloon-expandable. During the past 2 decades, tremendous improvements in TAV designs and materials have taken place to optimize valve performance and maximize its benefits. 1 Metals were replaced (stainless steel vs cobalt chromium) to ensure stronger and more efficient anchoring, skirts were added and later modified to limit regurgitation, and valve profiles were altered to allow minimal interference with the downstream flow. Despite these improvements, the interaction of each TAV with the flow in the aortic root is associated with nonphysiological flow properties compared with flow through a native annulus. Clinical, in vitro, and in silico studies have shown that TAV performance varies with valve type (self-expanding vs balloon-expandable), 2-5 the unique design of each valve within the same type group, 6-8 the deployment (axial and commissural), [9][10][11] and the surrounding patient-specific anatomy. [12][13][14] It is important to evaluate the flow downstream of the aortic valve because it instructs directly on the performance parameters and, ultimately, on durability (after sufficient follow-up).
The turbulence of the flow downstream of the TAV informs on the pressure drop across the valve and explains some of the reasons behind differences in pressure recovery among different valves, as identified by different measurement modalities such as echocardiography and catheterization. 15 The turbulence of the flow downstream of the TAV also informs on the forces that the platelets and red blood cells undergo, in the context of general blood damage such as platelet activation, thrombus formation, and hemolysis. 2,16 In this study, we aim to characterize the differences in the resulting flow dynamics and pressure recovery downstream of multiple self-expanding and balloon-expandable TAVs.

METHODS
The hemodynamic assessment of a 26-mm Evolut (Medtronic), a 23-mm Sapien 3 (Edwards Lifesciences), a small Acurate neo (Boston Scientific), and a 23-mm Allegra (New Valve Technologies) transcatheter heart valve was performed in a left heart simulator under pulsatile physiological conditions. These sizes are equivalent in that they treat similar-sized annuli (20-23 mm). For the study, the TAVs were implanted into a rigid test chamber described in previous publications. 2,3,9,17 The aortic pressures ranged from 80 to 120 mm Hg, the peak aortic flow rate was set at 24 L/min, and the heart rate at 60 beats per minute. The fluid used in the experiments was a water-glycerin mixture (60/40 by volume) with properties similar to those of blood (density of 1060 kg/m3 and kinematic viscosity of 3.5 cSt). The valves were placed in the same annulus of the same aortic root as described in previous studies. 2 Flow data were acquired using ultrasonic flow probes (HXL; Transonic Inc), and pressures at all measurement locations were measured with a Millar catheter (ADInstruments Inc). The Millar catheter was inserted along the centerline of the aortic valve chamber. Pressure was recorded at every axial location along the ascending aorta, at intervals of 5 mm downstream of the valves and 1 mm inside the valves. Position 0 mm corresponds to the most upstream (ventricular) measurement, and position 120 mm corresponds to the last measurement point in the measurement region of the chamber. Fifty consecutive cardiac cycles of aortic pressure, ventricular pressure, and flow rate data were recorded at a sampling rate of 100 Hz at every measurement location. The mean transvalvular pressure gradient (PG) is defined as the average of the positive pressure difference between the ventricular and aortic pressure curves during forward flow. The peak PG was obtained from the instantaneous pressure waveforms.
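To make the gradient definitions above concrete, here is a small sketch of how mean and peak transvalvular PG could be computed from sampled waveforms. The toy waveforms are invented stand-ins shaped roughly like one cardiac cycle at 60 beats per minute sampled at 100 Hz; only the definitions (positive ventricular-aortic difference during forward flow; instantaneous maximum) come from the text.

```python
import numpy as np

def transvalvular_gradients(p_ventricle, p_aorta, flow):
    """Mean PG: average positive ventricular-aortic pressure difference during
    forward flow; peak PG: maximum instantaneous difference."""
    dp = np.asarray(p_ventricle) - np.asarray(p_aorta)
    mask = (np.asarray(flow) > 0) & (dp > 0)
    return dp[mask].mean(), dp.max()

t = np.linspace(0.0, 1.0, 100)                     # one cycle, 100 Hz
pulse = np.where(t < 0.35, np.sin(np.pi * t / 0.35), 0.0)  # systolic shape
p_lv = 10 + 110 * pulse                            # ventricular pressure (mm Hg)
p_ao = 80 + 30 * pulse                             # aortic pressure (mm Hg)
q = 20 * pulse                                     # aortic flow (L/min)
mean_pg, peak_pg = transvalvular_gradients(p_lv, p_ao, q)
print(f"mean PG = {mean_pg:.1f} mm Hg, peak PG = {peak_pg:.1f} mm Hg")
```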
High-speed en face recording of valve opening and closing was performed at a frame rate of 1000 Hz. Particle image velocimetry experiments were performed to assess the flow downstream of each TAV. The flow was seeded with fluorescent poly(methyl methacrylate)-rhodamine B particles with an average diameter of 10 μm. A laser sheet, created by a pulsed neodymium-doped yttrium lithium fluoride single-cavity diode-pumped solid-state laser coupled with external spherical and cylindrical lenses, illuminated the region of interest while high-speed images of the fluorescent particles within the downstream region were acquired. Time series recordings were acquired at a temporal resolution of 500 Hz. Phase-locked recordings were acquired to calculate the resulting flow statistical parameters (Reynolds shear stress [RSS]) over 250 images. The RSS, an established metric for evaluating turbulence and any associated blood damage potential, is a statistical quantity used to describe a turbulent flow field. 18,19 The principal RSS is calculated as per Equation 1:
$\mathrm{RSS} = \rho \sqrt{\left(\tfrac{\overline{u'u'} - \overline{v'v'}}{2}\right)^{2} + \left(\overline{u'v'}\right)^{2}}$, (1)
where $\rho$ is the blood density and $u'$ and $v'$ are the instantaneous velocity fluctuations in the x and y directions, respectively. In addition, the viscous shear stress (VSS) was computed as per Equation 2, and probability density functions (PDF) were calculated and plotted:
$\tau = \mu \left(\tfrac{\partial u}{\partial y} + \tfrac{\partial v}{\partial x}\right)$, (2)
where $\tau$ is in Pa and $\mu$ is the dynamic viscosity in N s/m2.

Statistics
The results are presented as mean ± SD. Statistical analysis was performed using JMP Pro version 15.2.0 (SAS Institute Inc). All data were distributed normally; therefore, a t test was used for paired comparison between the vena contracta and recovery zone for each valve, along with a Tukey test for unpaired comparison of the vena contracta and recovery zone gradients of all valves. The instantaneous VSS over the cardiac cycle are plotted as PDFs. The PDF displays all the values (the full range) of a certain parameter distributed over a certain region of interest and gives the relative or differential likelihood (frequency) of any value. The area under the probability density function curve is always equal to 1, so the PDF can also be considered a normalized histogram. 20
Abbreviations and Acronyms: PDF = probability density function; PG = pressure gradient; RSS = Reynolds shear stress; TAV = transcatheter aortic valve; VSS = viscous shear stress.

RESULTS
Downstream Flow Field
Figure 1 shows the averaged flow velocity downstream of each of the TAVs at the acceleration, peak systole, and deceleration phases. The dark streaks of red and blue vorticity contours represent the shear layers corresponding to the jet boundaries, and the distance between them represents the width of the jet. As the flow starts accelerating, reaching the tip of the fully open valve leaflets, it separates from the leaflet tip and travels as a free shear layer, a region of concentrated vorticity and an indicator of flow rotation. Because the resulting shear layers and jet stability are consequences of the interaction between flow and leaflets, it is important to visualize the opening of the valves. Videos 1 through 4 show the gradual opening of each of the valves (Evolut R, Sapien 3, Acurate neo, and Allegra, respectively). Leaflet flutter is clear with the Evolut R and Acurate neo but less noticeable with the Sapien 3 and the Allegra. From a different angle, Videos 5 through 8 show the flow as imaged in the experiments, highlighting the leaflet motion during the cardiac cycle. The maximal velocity at peak systole obtained with the Evolut R, Sapien 3, Acurate neo, and Allegra was found to be 2.12 ± 0.19 m/sec, 2.41 ± 0.06 m/sec, 2.99 ± 0.10 m/sec, and 2.45 ± 0.08 m/sec, respectively (P < .001). Comparing the valves pairwise, significant differences were found except between the Sapien 3 and the Allegra (P = .957).

Downstream Flow Turbulence
Figure 2 shows the principal RSS at different phases of the cardiac cycle. The maximum RSS occurs during peak systole, where the flow is maximal. The dark blue patches indicate an elevated RSS magnitude; the more prevalent elevated RSS magnitudes are, the more turbulent the flow is considered to be. The fluctuations observed in the RSS contours follow, in their evolution and distribution in the flow field, those seen in Figure 1.
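To illustrate how the principal RSS of Equation 1 and the PDFs are typically computed from phase-locked PIV data, here is a hedged numpy sketch. The array shapes, random fluctuation fields and magnitudes are invented stand-ins for the 250 phase-locked frames; only the formula and the normalized-histogram reading of the PDF come from the text.

```python
import numpy as np

RHO = 1060.0  # working fluid density (kg/m^3), as in the experiments

def principal_rss(u_prime, v_prime):
    """Principal Reynolds shear stress (Equation 1) from phase-locked
    snapshots; inputs are (n_snapshots, ny, nx) fluctuation fields in m/s."""
    uu = (u_prime * u_prime).mean(axis=0)
    vv = (v_prime * v_prime).mean(axis=0)
    uv = (u_prime * v_prime).mean(axis=0)
    return RHO * np.sqrt(((uu - vv) / 2.0) ** 2 + uv ** 2)

# Synthetic fluctuation fields standing in for 250 phase-locked PIV frames.
rng = np.random.default_rng(1)
u_p = 0.3 * rng.standard_normal((250, 64, 64))
v_p = 0.2 * rng.standard_normal((250, 64, 64))
rss = principal_rss(u_p, v_p)

# Normalised histogram approximating the PDF; it integrates to 1.
pdf, edges = np.histogram(rss.ravel(), bins=50, density=True)
print(f"max RSS = {rss.max():.0f} Pa, "
      f"PDF integral = {(pdf * np.diff(edges)).sum():.2f}")
```

The VSS of Equation 2 would be computed analogously, from spatial gradients of the instantaneous velocity fields (for example via numpy.gradient) scaled by the dynamic viscosity.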
To quantify the RSS distribution more accurately, Figure 3, A, shows the probability density functions of RSS for each of the TAVs at peak systole. The Acurate neo and the Evolut R present the largest distributions of RSS in all 3 phases (ie, acceleration, peak systole, and deceleration). The Allegra shows the smallest range of RSS magnitudes (up to 320 Pa), followed by the Evolut (up to 600 Pa) and then the Acurate neo and Sapien 3 (up to 650 Pa). In the literature, a limit of 100 Pa has been reported as appropriate for evaluating potential blood damage. 21 Any value that exceeds 100 Pa is considered elevated enough to be associated with blood damage potential. In Figure 3, A, the Sapien 3, Acurate neo, and Allegra show equal distributions of RSS <100 Pa; the Evolut shows a higher prevalence within this limit. For RSS exceeding 100 Pa, the Allegra still shows the lowest distribution. The Evolut and the Sapien 3 show equal and the largest distributions up until an RSS limit of 210 Pa. When 210 < RSS < 450 Pa, the Acurate neo shows the largest likelihood of developing elevated RSS. When RSS > 450 Pa, the Evolut shows the highest likelihood of elevated RSS up until 600 Pa. The Acurate neo shows more elevated likelihoods than the Sapien 3 when RSS > 600 Pa, as both of these valves reach such elevated magnitudes. To evaluate the actual shear force per unit area experienced by blood elements, we calculated the instantaneous VSS for each of the valve flow fields and plotted the probability density function of the VSS in Figure 3, B. All the instantaneous VSS magnitudes obtained were lower than 10 Pa, a threshold associated with potential blood damage. 18 The Evolut was shown to have the smallest magnitude range (up to 3.5 Pa), followed by the Allegra (up to 4.8 Pa), the Acurate neo (up to 5.5 Pa), and then the Sapien 3 (up to 6.2 Pa).

Pressure Recovery
The importance of accounting for pressure recovery is that it permits identification of the true PG across the TAV and, accordingly, a more accurate assessment of performance. Figure 4 shows the variations of PGs along selected locations in the aortic root with the 4 different valves, and Figure 5 shows the variations of the corresponding standard deviations. A box-and-whisker plot of the instantaneous measurements is provided in Figure E1. The results are plotted from the ventricular side upstream of each valve to the downstream side, up to the end of the aortic testing chamber (at 120 mm). As the flow crosses the valve, the PG decreases from the ventricular side to the aortic side until it reaches a minimum at the vena contracta (where the jet is narrowest and maximum jet velocity occurs). After that, the recovery process starts through a gradual increase in PG along the various points. All the valves follow this expected pattern of pressure change along positions in the aortic root. The largest pressure drop at the vena contracta occurs with the Acurate neo TAV, where the minimal pressure reaches 13.96 ± 1.35 mm Hg. The PGs with the Sapien 3, Evolut, and Allegra reach 10.54 ± 0.51 mm Hg, 10.64 ± 0.38 mm Hg, and 11.89 ± 0.61 mm Hg, respectively. The 23-mm Sapien 3 showed the smallest PG at the vena contracta. The location of the vena contracta varied with each valve: that of the Acurate neo was the closest to the valve entrance, and that of the Allegra was the furthest from it.
At 120 mm, in the recovery zone, the smallest PG was obtained with the Allegra (3.32 ± 0.94 mm Hg), followed by the Sapien 3 (3.68 ± 0.76 mm Hg) and then the Evolut R (4.77 ± 0.87 mm Hg); the largest PG was obtained with the Acurate neo (5 ± 1.21 mm Hg). All differences in PGs were statistically significant (P < .001) except for the Allegra and Sapien 3 at the vena contracta (P = .1399) and the Acurate neo and Sapien 3 in the recovery zone (P = .2105). The largest pressure recovery (the difference between the PG at the vena contracta and the PG at 120 mm) was obtained with the Acurate neo (8.96 mm Hg), followed by the Allegra (7.79 mm Hg), then the Sapien 3 (6.86 mm Hg), and then the Evolut R (4.47 mm Hg). From Figure 5, the fluctuations in the SDs are higher with the self-expanding valves than with the Sapien 3. All differences in pressure recovery were statistically significant (P < .001).

DISCUSSION
In this study, we evaluated the hemodynamics downstream of 4 TAVs with variable leaflet position, 3 of which are self-expanding valves (26-mm Evolut R, small Acurate neo, and 23-mm Allegra) and 1 a balloon-expandable valve (23-mm Sapien 3). We report findings on flow turbulence and its relationship to the potential for thrombogenicity, and on pressure recovery and its relationship to the assessment of overall valve performance. The higher the RSS and the VSS, the more turbulent the flow is considered to be. Turbulence is an essential factor to assess after heart valve implantation because it can lead to blood damage such as platelet activation, thrombus formation, and hemolysis. Several studies have specified thresholds above which the forces on the platelets and the red blood cells are nonphysiological, leading to adverse effects related to blood damage. 18,22 Additionally, several clinical studies have pointed to the occurrence of thrombus formation and hemolysis after various generations of TAVs. These findings were dependent on the type of the valve implanted and on how every unique valve design influences the resulting flow. Therefore, it is important to assess how valve performance and behavior (eg, gradients, turbulence, and flutter) influence or correlate with clinical findings. [23][24][25][26] The connection between blood damage and valve durability has also been a subject of research in recent years. 27,28 Thus, understanding how every valve influences the resulting flow is important for relating the findings to future outcomes after TAV replacement. In this study, the RSS (or turbulent shear stresses) were evaluated to compare the resulting turbulence among the 4 valves. RSS is a pseudo-force and is often used to provide a statistical quantitative evaluation of the influence of turbulent fluctuations on the averaged velocity field at a given position in space. 18 The Allegra TAV showed the smallest range of RSS, indicating the lowest turbulence levels compared with the other valves. This result was also accompanied by a low leaflet flutter frequency (Videos 4 and 8), which helped to stabilize the flow and reduce the RSS. 2 The Acurate neo and the Evolut R showed elevated likelihoods of developing elevated RSS exceeding 0.1 kPa, a threshold adopted for blood damage initiation, 21 compared with the other valves. Both valves also showed elevated flutter frequency (Videos 1, 3, 5, and 7), contributing to the elevated RSS obtained in this study.
The elevated leaflet flutter could be attributed to the supra-annular design of the leaflets and the location of the leaflet tip in the Evolut R and the Acurate neo, in addition to the porcine pericardium material of the leaflets. Although this was clearly observed with these 2 self-expanding valves, the Allegra showed minimal flutter (comparable with the Sapien 3) despite having a supra-annular leaflet design. This may be due to the small stent cells (diamonds) and the compact frame of the Allegra compared with the more open stent designs of both the Evolut R (larger diamonds) and the Acurate neo (open frame). It may also be due to tissue thickness and leaflet geometry, which are most probably the main determinants of a complete circular opening and of the degree of leaflet fluttering at the time of peak flow; these in turn determine flow patterns, turbulence, shear stresses, pressure drop, and pressure recovery. Pressure recovery is an important phenomenon that instructs on the performance of the implanted valve. 29 As the jet expands downstream, its velocity starts decreasing and pressure is recovered, depending on several factors such as turbulence, the velocity of blood at the vena contracta, and the geometry of the aorta. 3,13,14,30 Several clinical studies have presented detailed comparisons between echocardiogram-based gradients (at the vena contracta) and catheterization-based gradients (in the recovery zone). [31][32][33][34][35] Some of these studies highlighted that balloon-expandable valves are characterized by higher gradients at the vena contracta and more elevated pressure recovery. 3,35 Some of these studies were inconclusive. 34 In this study, the Allegra TAV was characterized by the lowest PG in the recovery zone and one of the highest pressure recoveries among the 4 valves. The Allegra, as previously mentioned, was characterized by the smallest turbulence downstream of the valve. The Acurate neo was characterized by elevated turbulence, the most elevated PG at the vena contracta, and the most elevated PG at 120 mm; however, the pressure recovery obtained from the vena contracta to the 120-mm recovery zone was the highest. The turbulence downstream of the Evolut R was among the highest observed in this study, the PG at the recovery zone was the second most elevated, and the pressure recovery was the smallest. The effect of turbulence on the flow downstream of the valve was also clear from the large fluctuations in the standard deviations of the PGs at the different locations. This study shows that pressure recovery is valve-dependent, although it is hard to generalize the dependence to the self-expanding versus balloon-expandable distinction. With various valve types and designs, more experiments and more clinical outcomes are needed to assess the optimally performing valve type.
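As a concrete illustration of the pressure recovery bookkeeping discussed above, the sketch below locates the vena contracta on an axial pressure drop profile and reports the recovered gradient. The Gaussian-shaped profile is invented; only its endpoints are chosen to mimic the reported Acurate neo values (13.96 mm Hg at the vena contracta, roughly 5 mm Hg at 120 mm).

```python
import numpy as np

def pressure_recovery(positions_mm, pg_drop_mm_hg):
    """Vena contracta = station with the largest transvalvular drop; recovery
    is the portion of that drop regained by the last (120-mm) station."""
    pg = np.asarray(pg_drop_mm_hg)
    i_vc = int(np.argmax(pg))
    return positions_mm[i_vc], pg[i_vc] - pg[-1]

x = np.arange(0, 125, 5)                            # axial stations (mm)
pg = 5.0 + 8.96 * np.exp(-((x - 10) / 30.0) ** 2)   # toy Acurate neo-like profile
vc_mm, recovered = pressure_recovery(x, pg)
print(f"vena contracta at {vc_mm} mm, recovery = {recovered:.2f} mm Hg")
```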
Although differences in gradients and pressure recovery among the 4 types of valves were demonstrated in this study, these differences may not be clinically significant in terms of hemodynamic performance. However, the opening and closing characteristics and the degree of fluttering and turbulence downstream in the aorta may exert a relevant influence on durability and long-term outcomes.

Limitations
In this study, we used an idealized solid aortic root model that led to a perfectly circular TAV deployment, an advantage that may not be achieved in many patients due to the anatomical characteristics of the native valve and root. Patient-specific factors influence the flow patterns downstream of the valve, and these effects have not been characterized in the present study. However, we aimed at performing a highly controlled study that isolates the effect of each transcatheter heart valve independently of geometric or deployment-related considerations, similar to previous studies. 9,[36][37][38][39][40] Moreover, only the recommended axial deployment 41 was assessed. Additionally, we performed the hemodynamic assessment of these valves under 1 physiological set of conditions; whether these conclusions hold under different physiological scenarios is yet to be determined in further studies. Finally, we tested one sample of each valve type per experiment. Sample-to-sample variability is not anticipated because the manufacturing of these commercial valves is well established. It is also key to acknowledge that such ex vivo modeling does not factor in the biological aspects of platelet activation, which would come into play in vivo.

CONCLUSIONS
The hemodynamics downstream of 4 transcatheter aortic valves, 3 of which are self-expanding valves (26-mm Evolut R, small Acurate neo, and 23-mm Allegra) and 1 a balloon-expandable valve (23-mm Sapien 3), were assessed in this study under pulsatile conditions in vitro. There was a distinct trend of performance obtained with each valve, independent of whether it was self-expanding or balloon-expandable, as summarized in Figure 6. The Allegra, a self-expanding valve, and the Sapien 3, a balloon-expandable valve, were characterized by the lowest leaflet flutter and thus the lowest downstream turbulence. These results were supported by the lowest PG results along the pressure recovery zone and by minimal fluctuations, as evidenced by the SDs of the PG downstream of the valve.
Existence of martingale solutions and the incompressible limit for stochastic compressible flows on the whole space
We give an existence and asymptotic result for the so-called finite energy weak martingale solution of the compressible isentropic Navier--Stokes system driven by some random force in the whole spatial region. In particular, given a general nonlinear multiplicative noise, we establish the convergence to the incompressible system as the Mach number, representing the ratio between the average flow velocity and the speed of sound, approaches zero.

Introduction
In continuum mechanics, the motion of an isentropic compressible fluid is described by the density $\varrho = \varrho(t,x)$ and velocity $u = u(t,x)$ in a physical domain in $\mathbb{R}^3$ satisfying the mass and momentum balance equations given respectively by
(1.1) $\partial_t \varrho + \operatorname{div}(\varrho u) = 0, \qquad \partial_t(\varrho u) + \operatorname{div}(\varrho u \otimes u) = \operatorname{div} T + \varrho f.$
Here $f$ is some external force and $T$ the stress tensor. By Stokes' law, $T$ satisfies $T = S - pI$, where $p = p(\varrho)$ is the pressure and $S = S(\nabla u)$ the viscous stress tensor. Following Newton's law of viscosity, we assume that $S$ satisfies $S = \nu\,(\nabla u + \nabla^{T} u) + \lambda \operatorname{div} u \, I$ with viscosity coefficients satisfying $\nu > 0$, $\lambda + \frac{2}{3}\nu \geq 0$. For the pressure, we suppose the $\gamma$-law $p = \frac{1}{\mathrm{Ma}^2}\varrho^{\gamma}$, where $\mathrm{Ma} > 0$ is the Mach number and $\gamma > \frac{3}{2}$ the adiabatic exponent. In order to study the existence of solutions to system (1.1), it has to be complemented by initial and boundary conditions (very common are periodic boundary conditions, no-slip boundary conditions and the whole space). The existence of weak solutions to (1.1) has been shown in the fundamental book by Lions [23] and extended to physically reasonable situations by Feireisl [11,15], giving a compressible analogue of the pioneering work by Leray [22] on the incompressible case. These results involve the concept of weak solutions, where derivatives have to be understood in the sense of distributions. This concept has since become an integral technique in the study of nonlinear PDEs. In recent years, there has been an increasing interest in random influences on fluid motions. Such influences can take into account, for example, physical, empirical or numerical uncertainties, and are commonly used to model turbulence in the fluid motion. As far as we know, the first result on the existence of solutions to the stochastic compressible system is due to [34]. This was done in 1-D and later for a special periodic 2-D case in [33]; the latter mostly relied on existence arguments developed in [35]. In [13], a semi-deterministic approach based on results on multi-valued functions is used, in line with the incompressible analogue shown in [1]. A fully stochastic theory has been developed in [5], where the existence of martingale solutions has been shown in the case of periodic boundary conditions. This has been extended to Dirichlet boundary conditions in [32]. Compared to the stochastic compressible model, the incompressible system has been studied much more intensively. It first appeared in the seminal paper by Bensoussan and Temam [1], which is based on a semi-deterministic approach. Later, the concept of a martingale solution of this system was introduced by Flandoli and Gatarek [16]. For a recent survey on the stochastic incompressible Navier-Stokes equations, we refer the reader to [30], or to [29] for a general survey including deterministic results. The aim of this paper is to look at the situation on the whole space $\mathbb{R}^3$.
This is particularly important for various applications, especially those in which the extent of the fluid domain far exceeds the distance travelled by sound waves on the time scales of interest; see [14] for more details. Difficulties arise due to the lack of certain compactness tools which are available in the case of bounded domains. We shall study the system
(1.2) $\mathrm{d}\varrho + \operatorname{div}(\varrho u)\,\mathrm{d}t = 0, \qquad \mathrm{d}(\varrho u) + \big[\operatorname{div}(\varrho u \otimes u - S(\nabla u)) + \nabla p(\varrho)\big]\,\mathrm{d}t = \Phi(\varrho, \varrho u)\,\mathrm{d}W,$
in $Q_T = (0,T) \times \mathbb{R}^3$. A prototype for the stochastic forcing term will be given by
(1.3) $\Phi(\varrho, \varrho u)\,\mathrm{d}W \approx \varrho\,\mathrm{d}W_1 + \varrho u\,\mathrm{d}W_2,$
where $W_1$ and $W_2$ are a pair of independent cylindrical Wiener processes. We refer to Sect. 2 for the precise assumptions on the noise and its coefficients. The first main result of the present paper is the existence of finite energy weak martingale solutions to (1.2); the precise statement is given in Theorem 2.4. We approximate the system on the whole space by a sequence of periodic problems (where the period tends to infinity). After showing uniform a priori estimates, we use the stochastic compactness method based on the Jakubowski-Skorokhod representation theorem. In contrast to previous works, we adapt it to the situation on the whole space, taking carefully into account the lack of compact embeddings. In order to pass to the limit in the nonlinear pressure term, we use properties of the effective viscous flux, originally introduced by Lions [23], similarly to [5]. A fundamental question in compressible fluid mechanics is the relation to the incompressible model. If the Mach number is small, the fluid should behave asymptotically like an incompressible one, provided velocity and viscosity are small and we are looking at large time scales; see [21]. The problem has been studied rigorously in the deterministic case in [24,25,26] as a singular limit problem. A major difficulty to overcome is the rapid oscillation of acoustic waves, due to the lack of compactness. A stochastic counterpart of this theory has very recently been established in [3]. The limit $\varepsilon \to 0$ of the system
(1.4) $\mathrm{d}\varrho + \operatorname{div}(\varrho u)\,\mathrm{d}t = 0, \qquad \mathrm{d}(\varrho u) + \Big[\operatorname{div}(\varrho u \otimes u - S(\nabla u)) + \tfrac{1}{\varepsilon^2}\nabla \varrho^{\gamma}\Big]\,\mathrm{d}t = \Phi(\varrho, \varrho u)\,\mathrm{d}W,$
has been analyzed under periodic boundary conditions. Given a sequence of the so-called finite energy weak martingale solutions of (1.4) (see the next section for the definition), where $\varepsilon \in (0,1)$, its limit (as $\varepsilon \to 0$) is indeed a weak martingale solution to the following incompressible system:
(1.5) $\mathrm{d}u + \big[\operatorname{div}(u \otimes u) - \nu \Delta u + \nabla \overline{p}\big]\,\mathrm{d}t = \mathcal{P}\,\Phi(1,u)\,\mathrm{d}W, \qquad \operatorname{div} u = 0.$
Here $\overline{p}$ is the associated pressure and $\mathcal{P}$ is the Helmholtz projection onto the space of solenoidal vector fields. A major drawback of the approach in [3] is that the noise coefficient $\Phi(\varrho, \varrho u)$ has to be linear in the momentum $\varrho u$. This is due to the aforementioned lack of compactness of the momentum when $\varepsilon$ passes to zero; this cannot be improved even in the deterministic case. The situation on the whole space, however, is much better as a consequence of dispersive estimates for the acoustic wave equations, see Proposition 4.8. We apply them to the stochastic wave equation and hence are able to prove strong convergence of the momentum, see Lemma 4.11. Based on this, we are able to prove the convergence of (1.4) to (1.5) under much more general assumptions on the noise coefficients; see Theorem 2.6 for details. In Sect. 2, we state the required assumptions satisfied by the various quantities used in this paper, as well as some useful function space estimates. We define the concept of a solution, state the required boundary condition applicable in our setting and finally state the main results.
In Sect. 3, we are concerned with the proof of Theorem 2.4, giving the existence of martingale solutions on the whole space. Based on this result, we devote Sect. 4 to the proof of Theorem 2.6, the low Mach number limit on the whole space.

Preliminaries
Throughout this paper, the spatial dimension is $N = 3$ and we assume that $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, \mathbb{P})$ is a stochastic basis with a complete right-continuous filtration and $W$ is an $(\mathcal{F}_t)$-cylindrical Wiener process; that is, there exist a family of mutually independent real-valued Brownian motions $(\beta_k)_{k \in \mathbb{N}}$ and an orthonormal basis $(e_k)_{k \in \mathbb{N}}$ of a separable Hilbert space $U$ such that $W = \sum_{k \geq 1} \beta_k e_k$. We also assume that $\varrho \in L^{\gamma}_{\mathrm{loc}}(\mathbb{R}^3)$, $\varrho \geq 0$, and $u \in L^{2}_{\mathrm{loc}}(\mathbb{R}^3)$, so that $\sqrt{\varrho}\,u \in L^{2}_{\mathrm{loc}}(\mathbb{R}^3)$. Now let us set $q = \varrho u$ and assume that there exist a compact set $K \subset \mathbb{R}^3$ and some functions $g_k$ defined on $K$ which, in addition, satisfy suitable growth conditions. Then, if we define the map $\Phi(\varrho, \varrho u) : U \to L^1(K)$ by $\Phi(\varrho, \varrho u)e_k = g_k(\cdot, \varrho(\cdot), \varrho u(\cdot))$, we can use the embedding $L^1(K) \hookrightarrow W^{-l,2}(K)$, where $l > \frac{3}{2}$, to show that the Hilbert-Schmidt norm of $\Phi(\varrho, \varrho u)$ is uniformly bounded provided $\varrho \in L^{\gamma}_{\mathrm{loc}}(\mathbb{R}^3)$ and $\sqrt{\varrho}\,u \in L^{2}_{\mathrm{loc}}(\mathbb{R}^3)$; see [5, Eq. 2.3]. As such, the stochastic integral $\int_0^{\cdot} \Phi(\varrho, \varrho u)\,\mathrm{d}W$ is a well-defined $(\mathcal{F}_t)$-martingale taking values in $W^{-l,2}_{\mathrm{loc}}(\mathbb{R}^3)$. Lastly, we define the auxiliary space $U_0 \supset U$ via $U_0 = \{ v = \sum_{k \geq 1} c_k e_k : \sum_{k \geq 1} c_k^2/k^2 < \infty \}$ and endow it with the norm $\|v\|^2_{U_0} = \sum_{k \geq 1} c_k^2/k^2$. Then it can be shown that $W$ has $\mathbb{P}$-a.s. $C([0,T]; U_0)$ sample paths and that the embedding $U \hookrightarrow U_0$ is Hilbert-Schmidt; see [7].

2.1. Sobolev inequalities for the homogeneous Sobolev space.
As we shall see shortly, the compactness techniques used in this paper involve certain estimates whose constants must necessarily be independent of the size of the domain. We therefore require the homogeneous Sobolev space $D^{1,q}(O)$. Here $O$ is an exterior or an unbounded domain, for example $O = \mathbb{R}^3$. In particular, given a function $u \in D^{1,q}(O)$, we have, for any $1 \leq q < 3$, the inequality $\|u\|_{L^{3q/(3-q)}(O)} \leq c(q)\,\|\nabla u\|_{L^q(O)}$; see [17, Chapter II] for more details. Note that the constant above is independent of the size of $O$, unlike in the case of the usual Sobolev-Poincaré inequality. To continue, let us define the concept of a solution used in this paper (Definition 2.1). Among its requirements, an energy inequality holds, where $Q_T := (0,T) \times \mathbb{R}^3$ and where $H$ is the pressure potential for constants $a, \overline{\varrho} > 0$. (9) In addition, (1.2)$_1$ holds in the renormalized sense, that is, the renormalized continuity equation holds for any admissible renormalization.

Remark 2.2. The definition above also holds for functions defined on the periodic space $\mathbb{T}^3_L$ for any $L \geq 1$, rather than on the whole space $\mathbb{R}^3$. In that case, it even suffices to consider just smooth test functions which are not necessarily compactly supported. See for example [4,3,5].

Definition 2.3. If $\Lambda$ is a Borel probability measure on $L^2_{\mathrm{div}}(\mathbb{R}^3)$, then we say that $[(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P}), u, W]$ is a weak martingale solution of Eq. (1.5) with initial law $\Lambda$ provided: (1) $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ is a stochastic basis with a complete right-continuous filtration.

Existence of weak martingale solutions as defined in Definition 2.3 has been shown under suitable growth conditions on the noise term; we refer the reader to [27], albeit stated in the Stratonovich sense. A global-in-space existence result stated in the Itô form appears to be absent from the literature, although it is certainly expected. It is, however, a by-product of the singular limit problem that we study in this paper; see Theorem 2.6 below. For bounded domains, see for example [6,16].

Remark 2.5. The assumption that $(\varrho_\varepsilon - 1)/\varepsilon$ is bounded by $M$, given in the law above, is not restrictive and can actually be dropped. However, it is needed in the proof of Theorem 2.6 below.

Theorem 2.6.
Theorem 2.6. Let Λ be a given Borel probability measure on L²(R³) and, for ε ∈ (0, 1), let Λ_ε be a Borel probability measure on the corresponding state space, chosen so that the assumption on the initial law in Theorem 2.4 holds and so that the marginal law of Λ_ε corresponding to the second component converges to Λ weakly in the sense of measures. Then the corresponding sequence of solutions of (1.4) converges, in the sense made precise in Sect. 4, to u, where u is a weak martingale solution of (1.5) in the sense of Definition 2.3 with the initial law Λ and r ∈ (3/2, 6). Proof of Theorem 2.4 Let ̺_L and u_L be some density and velocity fields defined dP × dt a.e. (ω, t) ∈ Ω × [0, T] on the space T³_L such that ̺_L and u_L satisfy the so-called dissipative estimate, the existence of which is shown in [4, Eq. 3.2] for the particular choice of L = 1. We observe that [4, Eq. 3.2] is translation invariant and, as such, holds true for any fixed L ≥ 1. Also, the inequality is preserved if we replace H_δ(̺) by H(̺). As such, if we consider ψ = χ_[0,t], then we obtain the inequality (3.1). Moreover, due to (2.1), there is a compact set K ⊂ R³ on which the noise terms can be estimated for any 1 ≤ p < ∞, with a constant c_p that is independent of both k and L, where we have used ̺_L ≤ 1 + ̺_L^γ. Also, by the use of the Burkholder-Davis-Gundy inequality, Hölder's inequality and Young's inequality, the stochastic terms can be absorbed up to an arbitrarily small ǫ > 0. By taking the pth moment of the supremum in (3.1) and applying Gronwall's lemma, we obtain the inequality (3.2), where c_{p,ǫ,vol(K)} is, in particular, independent of L. Now, by the assumptions on Λ, the right-hand side of (3.2) is finite. As such, we obtain the uniform-in-L bounds (3.3). Note that the estimates in (3.3) are global but, unfortunately, do not include all necessary quantities. In the following, we derive local estimates with respect to balls B_r which will depend on the radius r > 0. A consequence of (3.3)₃ is (3.4). If B_r ⊂ T³_L, this follows in an obvious way from the definition of H. Otherwise, we cover B_r ⊂ R³ by tori to which ̺_L is extended by means of periodicity. The number of necessary tori depends on r but is independent of L; to see this, we notice that vol(B_r) ≈ c(π)r³ while vol(T³_L) grows like L³. Remark 3.1. We get (3.4) by making it the subject in (2.6) and using (3.3)₃,₄. However, we only obtain the estimate locally in space because of the constant term ̺̄ in the pressure potential (2.6); this term will blow up with the size of the torus if we try to obtain a global estimate. We observe that none of the bounds in (3.3) directly controls the amplitude of u_L. However, using the Sobolev-Poincaré inequality and γ > 3/2, the bound (3.5) holds and, consequently, in view of the bounds established in (3.3), (3.4) and the assumptions on the initial law, we can conclude that u_L ∈ L^p(Ω; L²(0, T; W^{1,2}(B_r))) (3.6) uniformly in L. Furthermore, for r > 0, we can use the (uniform in L but not in r) continuous embedding W^{1,2}(B_r) ֒→ L⁶(B_r) and Hölder's inequality to control the convective term dP × dt a.e. Since the radius of the ball above is chosen arbitrarily, we may conclude that ̺_L u_L ⊗ u_L is bounded in L^p(Ω; L²(0, T; L^{6γ/(4γ+3)}(B_r))), uniformly in L for r > 0, by using (3.3). 3.1. Higher integrability of density. For reasons that will become clear in the subsequent sections, it is essential to improve the regularity of the density. We give this in the following lemma (Lemma 3.2), in which the constant c is independent of L (but depends on r). Proof. If we set B³_{r,L} := B_r ∩ T³_L, then it is enough to prove the estimate on B³_{r,L} independently of L. The general case then follows by covering B_r by sets of the form B ∩ T³_L for a ball B.
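Before continuing with the proof of Lemma 3.2, we recall schematically the two properties of the Bogovskiǐ operator B that drive the argument below (stated for zero-mean f on a bounded John domain B, cf. [9, Theorem 5.2]), together with the shape of the conclusion of Lemma 3.2:

```latex
\operatorname{div}\mathcal{B}(f) = f \ \text{in } B, \qquad
\|\mathcal{B}(f)\|_{W^{1,q}_0(B)} \le c(q, \sigma_1, \sigma_2)\,\|f\|_{L^q(B)}, \quad 1 < q < \infty;

% higher integrability of the density (schematic form of Lemma 3.2)
\mathbb{E}\int_0^T\!\!\int_{B^3_{r,L}} \varrho_L^{\,\gamma+\Theta}\,\mathrm{d}x\,\mathrm{d}t \;\le\; c(r)
\qquad \text{uniformly in } L.
```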
First notice that, by combining (2.3) with the continuity property of the Bogovskiǐ operator B, the problem reduces to the observation that B³_{r,L} is a bounded John domain and hence satisfies the emanating chain condition with some constants σ₁ and σ₂ which are independent of the size of the torus. The fact that the constant c in (3.10) is independent of L therefore follows from the fact that the constant c in [9, Theorem 5.2] depends only on σ₁, σ₂ and q, together with the fact that c_q is independent of L. The idea now is to test the momentum equation with B(̺^Θ). To do this, however, we first replace the map ̺ → ̺^Θ with a function b(̺) ∈ C¹_c(R) and apply Itô's formula to the functional f(b, q) = ∫_{B³_{r,L}} q · B(b(̺)) dx. Since f is linear in q, no second-order derivative in this component appears. Also, the quadratic variation of b(̺) is zero, since the renormalized continuity equation is deterministic. Now, notice that the Bogovskiǐ operator commutes with the time derivative (but not with the spatial derivative) and, since the continuity equation is satisfied in the renormalized sense, we obtain the corresponding identity for the time evolution of B(b(̺)). As such, for b_L := b(̺_L), the identity (3.11) holds in expectation, where we have integrated by parts and used the fact that v = B(f) solves the equation div v = f. To improve the regularity of ̺, we aim at estimating J₈ in terms of the rest. To do this, we first denote the left-hand side of (3.11) by E J₀. Then, using (2.3), (3.3), (3.6), (3.7) and repeated applications of Hölder's inequality, we can show, just as in [5, Propositions 5.1, 6.1] for δ = 0, and noting that ∆⁻¹∇ and B enjoy the same continuity properties, that the remaining terms are bounded by constants c = c_{Θ,γ} which are, in particular, independent of L. Remark 3.4. In estimating J₂, we use instead the Bogovskiǐ operator in negative spaces, which can be found in [18, Proposition 2.1], [2] or [10]. Also, note the comment just after [18, Remark 2.2] about carrying over the properties of the Bogovskiǐ operator from a star-shaped domain onto the more common domains treated in the analysis of PDEs. The result follows by making E J₈ the subject and estimating it from above by the bounds on the remaining terms. 3.2. Compactness. We now show that our earlier estimates are not only uniform on the torus T³_L but, since each constant obtained is uniform in L, also hold locally on the whole space R³. We then proceed with the usual compactness arguments. Lemma 3.5. For any L ≥ 1, the bounds above hold on balls B_r ⊂ R³ uniformly in L. Proof. We will only show the first uniform estimate, as the rest can be done in a similar manner in conjunction with (3.3), (3.7) and Lemma 3.2. Let L, r ∈ N and let B_r ⊂ R³ be the ball of radius r centered at the origin. If B_r ⊂ T³_L, then we can directly deduce from (3.3)₂ that u_L ∈ L^p(Ω; L²(0, T; W^{1,2}(B_r))) (3.12) uniformly in L. Otherwise, we can use the same argument as in the justification of (3.4) above to get from (3.3)₂ that ‖u_L‖_{L^p(Ω;L²(0,T;W^{1,2}(B_r)))} ≤ c(p, r) for all r ∈ N (3.13), uniformly in L. That is, for any r ∈ N and any B_r ⊂ R³, (3.13) holds. By combining (3.12) and (3.13), we deduce the claimed local bound. For the compactness result, let us define the path space χ = χ_u × χ_̺ × χ_{̺u} × χ_W and let (1) µ_{u_L} be the law of u_L on χ_u, (2) µ_{̺_L} be the law of ̺_L on the space χ_̺, (3) µ_{̺_L u_L} be the law of ̺_L u_L on the space χ_{̺u}, (4) µ_W be the law of W on the space χ_W, and (5) µ_L be the joint law of u_L, ̺_L, ̺_L u_L and W on the space χ. Proposition 3.6. For an arbitrary constant c(r), which is uniform in r ∈ N, L ≥ 1 and R > 0, the set A_R := {u ∈ L²(0, T; W^{1,2}_loc(R³)) : ‖u‖_{L²(0,T;W^{1,2}(B_r))} ≤ c(r)R for all r ∈ N} is relatively compact in χ_u. Proof.
To see this, fix R > 0 and consider a sequence {u_n}_{n∈N} ⊂ A_R, so that ‖u_n‖_{L²(0,T;W^{1,2}(B_r))} ≤ c(r)R for all n ∈ N and all r ∈ N. Then, by a diagonal argument, we can construct a sequence {u^n_n}_{n∈N} ⊂ {u_n}_{n∈N} that is a common subsequence of all the sequences {u^m_n}_{n∈N} for all m ∈ {0} ∪ N, where u⁰_n := u_n. By uniqueness of limits, we can therefore conclude that u^n_n ⇀ u in L²(0, T; W^{1,2}(B_r)) for every r ∈ N. This finishes the proof. Proof. We first show that {µ_{u_L}; L ≥ 1} is tight on χ_u. To do this, we let R > 0; then, by Proposition 3.6, there exists a compact subset A_R ⊂ χ_u. Now, since (A_R)^C := {u_L ∈ L²(0, T; W^{1,2}_loc(R³)) : ‖u_L‖_{L²(0,T;W^{1,2}(B_r))} > c(r)R for some r ∈ N}, for any measure µ_{u_L} ∈ {µ_{u_L}; L ≥ 1} there exists an r ∈ N such that µ_{u_L}((A_R)^C) → 0 as R → ∞, where we have used (3.13) in the last step. This implies that {µ_{u_L}; L ≥ 1} is tight on χ_u. By using a similar argument, adapted to suit the compactness arguments in [5, Sect. 6], we can show that {µ_{̺_L}; L ≥ 1} and {µ_{̺_L u_L}; L ≥ 1} are also tight on χ_̺ and χ_{̺u}, respectively. Furthermore, µ_W is tight since it is a Radon measure on the Polish space χ_W. This finishes the proof. From Proposition 3.7, we cannot immediately use the Skorokhod representation theorem (via Prokhorov's theorem) to deduce that {µ_L; L ≥ 1} is relatively compact, since the path space χ is not metrizable. However, we may use instead the Jakubowski-Skorokhod representation theorem [20], which gives a similar result for more general spaces, including the quasi-Polish spaces in which these locally-in-space Sobolev functions live. Applying this yields the following result: Proposition 3.8. There exist a subsequence µ_n := µ_{L_n} for n ∈ N, a probability space (Ω̃, F̃, P̃) with χ-valued random variables (ũ_n, ̺̃_n, q̃_n, W̃_n), and corresponding 'limit' variables (ũ, ̺̃, q̃, W̃) such that • the law of (ũ_n, ̺̃_n, q̃_n, W̃_n) is given by µ_n = Law(u_{L_n}, ̺_{L_n}, ̺_{L_n}u_{L_n}, W), n ∈ N, • the law of (ũ, ̺̃, q̃, W̃), denoted by µ = Law(u, ̺, ̺u, W), is a Radon measure, • (ũ_n, ̺̃_n, q̃_n, W̃_n) converges P̃-a.s. to (ũ, ̺̃, q̃, W̃) in the topology of χ. To extend this new probability space (Ω̃, F̃, P̃) into a stochastic basis, we endow it with a filtration. To do this, let us first define a restriction operator r_t, defined for t ∈ [0, T] and X ∈ {χ_̺, χ_u, χ_W} as the restriction of a path to the time interval [0, t]. We observe that r_t is a continuous map. We can therefore construct P̃-augmented canonical filtrations for (̺̃_n, ũ_n, W̃_n) and (̺̃, ũ, W̃), respectively, by taking the σ-fields generated by the corresponding restricted processes. Proof. This follows in exactly the same manner as in [5, Proposition 5.6]. Then, using the convention ∂_i := ∂_{x_i} and some cut-off functions φ, ψ ∈ C^∞_c(R³), we may perform a computation similar to (3.11). That is, we apply Itô's formula to the functional f(g, q̃) = ∫_{R³} q̃ · φ(x)A_i[ψ(x)g] dx, where q̃ = ̺̃ũ, g = T_k(̺̃) and T_k is the usual truncation operator. Equivalently, we may test the momentum equation satisfied by the sequence of weak martingale solutions in Lemma 3.9 by φA_i[ψT_k(̺̃)]. We obtain (3.18) (assuming that L is large enough that spt φ ⊂ T³_L), where T_k, as defined above, replaces b in the definition of the renormalized equation given by (2.7). Remark 3.12. Notice that, since the approximate quantities in (3.17) are only defined locally in space, in order to apply the globally defined operators A it is essential to premultiply our functions by some φ ∈ C^∞_c(R³). Also, we observe that, since our noise term is a martingale, it vanishes when we take its expectation, as martingales are constant on average.
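The computation that this testing procedure is driving at is the effective viscous flux identity of Lions and Feireisl; schematically (the viscosity combination in front of div u depends on the precise form of S(∇u), so the constant here is indicative):

```latex
\overline{p(\varrho)\,T_k(\varrho)} \;-\; (2\mu+\lambda)\,\overline{T_k(\varrho)\operatorname{div} u}
\;=\; \overline{p(\varrho)}\;\overline{T_k(\varrho)} \;-\; (2\mu+\lambda)\,\overline{T_k(\varrho)}\,\operatorname{div} u,
```

where a bar denotes the weak limit of the corresponding approximate sequence; this is the content of Lemma 3.14 used below.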
Now notice that, by integration by parts and the use of the properties of the operators A_i and R_{ij} = ∂_i A_j, we may rewrite J₂, J₄, J₅ and J₆ so that (3.18) becomes (3.19). Remark 3.13. If we denote the left-hand side of (3.19) by Ẽ I₀, then we point the reader to the difference in the viscosity constant between I₀ and I₄. Similarly, for the limit processes we obtain the analogous identity, where a 'bar' above a function represents the (weak) limit of the corresponding approximate sequence of functions. Proof. See [5, Sect. 6.1] or the deterministic counterpart in [28, Eq. 7.5.23]. Now, by using the weak-strong pair (3.17)₁ and Lemma 3.14, we can pass to the limit in the crucial term I₆ to get Ẽ I₆ → Ẽ K₆. All other terms can be treated in a similar manner as in [5, Sect. 6.1], keeping in mind that the terms involving derivatives and cut-off functions are of lower order and hence easier to handle. In particular, we obtain the convergence Ẽ I₇ → Ẽ K₇ by observing that R_{ij} = ∂_j A_i. We have therefore shown the convergence of (3.19) to its limit counterpart. 3.4. Identification of the pressure limit. Showing that indeed p̄ = ̺̃^γ, or equivalently that ̺̃_n → ̺̃ strongly in L^p(Ω̃ × Q) for all p ∈ [1, γ + Θ), follows Feireisl's approach via the use of the so-called oscillation defect measure. This is a purely deterministic argument even in our stochastic setting, since it relies on the renormalized continuity equation. To avoid repetition, we refer the reader to [28, Sect. 7.3.7.3] or [11]. To confirm that it indeed applies in the stochastic setting, the reader may also refer to [5, Sect. 6.2 and 6.3]. We now conclude with the following lemma, which completes the proof of Theorem 2.4. 4.2. Compactness. To explore compactness for the acoustic equation, let us first define the path space χ = χ_̺ × χ_u × χ_{̺u} × χ_W and let (1) µ_{̺_ε} be the law of ̺_ε on the space χ_̺, (2) µ_{u_ε} be the law of u_ε on χ_u, (3) µ_{P(̺_ε u_ε)} be the law of P(̺_ε u_ε) on the space χ_{̺u}, (4) µ_W be the law of W on the space χ_W, and (5) µ_ε be the joint law of ̺_ε, u_ε, P(̺_ε u_ε) and W on the space χ. Then the following lemma, the proof of which is similar to [3, Corollary 3.7], holds true. Now, similarly to Proposition 3.8, we apply the Jakubowski-Skorokhod representation theorem [20] to get the following proposition. To extend this new probability space (Ω̃, F̃, P̃) into a stochastic basis, we endow it with the P̃-augmented canonical filtrations for (̺̃_ε, ũ_ε, W̃_ε) and (̺̃, ũ, W̃), respectively, constructed as in Sect. 3. Consequently, the uniform bounds shown in (4.1), (4.2) and (4.7) earlier hold for the corresponding random processes on this new space. In particular, the bound (4.8) holds uniformly in ε for p ∈ [1, ∞), where l > 5/2 and φ̃_ε = (̺̃_ε − 1)/ε. We now verify that the limit process indeed satisfies Definition 2.3; this will complete the proof of Theorem 2.6. Proof. The proof of this proposition will follow from the lemmata and propositions below. Proof. The proof of this lemma follows from combining Proposition 4.2 with Lemmata 4.6 and 4.9 and Proposition 4.8 below. Remark 4.7. Henceforth, we write '≲' for '≤ c' and '≂' for '= c', where c, which may vary from line to line, is some universal constant that is independent of ε but may depend on other variables. Proof. Let us define the function Ψ̃_ε = ∆⁻¹div(̺̃_ε ũ_ε), so that ∇Ψ̃_ε = Q(̺̃_ε ũ_ε). Then equation (4.3) becomes the acoustic system (4.12), whose first equation reads ε d(φ̃_ε) + ∆Ψ̃_ε dt = 0. We observe, however, that Eq. (4.12) is equivalent to an abstract evolution equation in which the usual wave operator is the infinitesimal generator of a strongly continuous semigroup S(·) = exp(A·). See for example [8].
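Writing (4.12) abstractly as ε d[φ̃_ε, Ψ̃_ε] + A[φ̃_ε, Ψ̃_ε] dt = ε dM_ε with A the wave operator, the mild formulation obtained after the rescaling takes the following schematic Duhamel form, in which h_ε stands for the deterministic forcing of the second component (both h_ε and the shape of the noise entry are reconstructions, not the paper's exact display):

```latex
\begin{bmatrix} \tilde\varphi_\varepsilon \\ \tilde\Psi_\varepsilon \end{bmatrix}(t)
= S\!\Bigl(\tfrac{t}{\varepsilon}\Bigr)
  \begin{bmatrix} \tilde\varphi_\varepsilon \\ \tilde\Psi_\varepsilon \end{bmatrix}(0)
+ \int_0^t S\!\Bigl(\tfrac{t-s}{\varepsilon}\Bigr)
  \begin{bmatrix} 0 \\ h_\varepsilon(s) \end{bmatrix} \mathrm{d}s
+ \int_0^t S\!\Bigl(\tfrac{t-s}{\varepsilon}\Bigr)
  \begin{bmatrix} 0 \\ \Delta^{-1}\operatorname{div}\,\Phi(\tilde\varrho_\varepsilon, \tilde q_\varepsilon) \end{bmatrix} \mathrm{d}\tilde W_\varepsilon .
```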
Also, since Φ := Φ(̺̃, ̺̃ũ) is a Hilbert-Schmidt operator and equation (4.12) is satisfied weakly in the probabilistic sense, it follows that this weak solution is also a mild solution; see for example [7, Theorem 6.5]. As such, after rescaling, we obtain the mild equation in which the semigroup S(t) propagates the solution of the homogeneous problem. The lemma below is crucial to the proof of Proposition 4.8 and is an adaptation of [31, Lemma 2.2] to our setting; cf. [12, Lemma 3.1]. Proof. For simplicity, we assume that γ = 1. The general case γ > 1 will then follow by rescaling δ below. Using Plancherel's theorem in t and x, together with the Cauchy-Schwarz inequality, we obtain the claimed bound. Moving on, we now consider a smooth cut-off function (with expanding support) η_r ∈ C^∞_0(B_{2r}) with η_r ≡ 1 in B_r for r > 0 and zero elsewhere. We now mollify the product of this cut-off function and our functions in (4.12) by means of spatial convolution with the standard mollifier. That is, if v is one of the functions in (4.12), we set v^κ = (η_r v) * ϕ_κ, where ϕ_κ is the standard mollifier. This we do to ensure that the regularized functions are globally integrable. First off, we note that, since (4.8)₄ holds uniformly in ε, for an arbitrarily small δ > 0 we can find a κ(δ) such that the regularization error is at most δ for any 1 ≤ p < ∞ and an arbitrary ball B ⊂⊂ B_r for r > 0. Then, using (4.16), (4.18) and Lemma 4.9, we obtain an estimate valid for any ball B ⊂ R³ whose constant is, in particular, independent of κ. So, by rescaling in time, i.e., setting s = t/ε so that ds = dt/ε, we get a bound with a constant that is independent of ε. Now, by the continuity of Q, (4.19), and the initial law defined in the statement of Theorem 2.4, we conclude the corresponding bound (4.21). Similarly, we have, for any ball B ⊂ R³, the analogous estimate, where we have used Jensen's inequality and Fubini's theorem in the first inequality, extended (0, t) to R and used the semigroup property in the second inequality, applied reasoning similar to (4.21) in the third inequality, and then used that (S(t))_t is a group of isometries on L² (extended by zero outside of the ball) in the last line. We have therefore obtained bounds of order ε for any ball B ⊂ R³. Now let us introduce the notation Φ̃^κ_ε(e_i) := (g_i(·, ̺̃_ε(·), q̃_ε(·)))^κ =: g^{ε,κ}_i. We notice that, for a continuous semigroup S(t) and a continuous operator Q, the quantity S(t)QΦ is Hilbert-Schmidt if Φ is Hilbert-Schmidt. As such, it follows from the Itô isometry that the stochastic term satisfies the same type of bound, where the argument involved extending s from (0, t) to R as well as Fubini's theorem. Now, using the semigroup property and estimates similar to (4.20) and (4.21), followed by the fact that the semigroup is an isometry with respect to the L²-norm, we obtain the corresponding bound; the last inequality follows because the noise term is assumed to be compactly supported in R³, see (2.1). We have therefore shown a bound whose constant is independent of ε. Combining this with the estimates from (4.24), we get from (4.15) an L²((0, T) × B) bound which holds for any ball B ⊂ R³. We also deduce from Eq. (4.19), together with the embedding L^∞(0, T; L^r(B)) ֒→ L²(0, T; L^r(B)) with r = 2γ/(γ+1) and the continuity of Q, that (4.26) Ẽ‖∇Ψ̃^κ_ε − ∇Ψ̃_ε‖²_{L²(0,T;L^r(B))} ≤ c_{δ,t} and Ẽ‖q̃^κ_ε − q̃_ε‖²_{L²(0,T;L^r(B))} ≤ c_{δ,t}, where δ is the arbitrary constant from (4.19), which is independent of κ and ε. As such, the constant c_{δ,t} can be made arbitrarily small by an appropriate choice of δ, so that lim_{κ↓0} Ẽ‖∇Ψ̃^κ_ε − ∇Ψ̃_ε‖²_{L²(0,T;L^r(B))} = 0 with r = 2γ/(γ+1).
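The mechanism behind the ε-smallness obtained above is the local energy decay of the wave group, in the spirit of [31, Lemma 2.2]; schematically, for a spatial cut-off η ∈ C_c^∞(R³) and v ∈ L²(R³):

```latex
\int_{\mathbb{R}} \bigl\| \eta\, S(t)\, v \bigr\|_{L^2(\mathbb{R}^3)}^2 \,\mathrm{d}t \;\lesssim\; \|v\|_{L^2(\mathbb{R}^3)}^2,
\qquad\text{so that}\qquad
\int_0^T \bigl\| \eta\, S\!\bigl(\tfrac{t}{\varepsilon}\bigr) v \bigr\|_{L^2}^2 \,\mathrm{d}t
= \varepsilon \int_0^{T/\varepsilon} \bigl\| \eta\, S(s)\, v \bigr\|_{L^2}^2 \,\mathrm{d}s
\;\lesssim\; \varepsilon\, \|v\|_{L^2}^2 .
```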
Proof. To avoid repetition, we refer the reader to [3, Proposition 3.13] for the proof of (4.29); we prove (4.30) below. Combining this with Proposition 4.8 finishes the proof. Finally, by combining (4.9) with Lemma 4.11, we finish the proof of Lemma 4.5.
Thigh-Derived Inertial Sensor Metrics to Assess the Sit-to-Stand and Stand-to-Sit Transitions in the Timed Up and Go (TUG) Task for Quantifying Mobility Impairment in Multiple Sclerosis Introduction: Inertial sensors generate objective and sensitive metrics of movement disability that may indicate fall risk in many clinical conditions including multiple sclerosis (MS). The Timed-Up-And-Go (TUG) task is used to assess patient mobility because it incorporates clinically-relevant submovements during standing. Most sensor-based TUG research has focused on the placement of sensors at the spine, hip or ankles; an examination of thigh activity in TUG in multiple sclerosis is wanting. Methods: We used validated sensors (x-IMU by x-io) to derive transparent metrics for the sit-to-stand (SI-ST) transition and the stand-to-sit (ST-SI) transition of TUG, and compared effect sizes for metrics from inertial sensors on the thighs to effect sizes for metrics from a sensor placed at the L3 level of the lumbar spine. Twenty-three healthy volunteers were compared to 17 ambulatory persons with MS (PwMS, HAI ≤ 2). Results: During the SI-ST transition, the metric with the largest effect size comparing healthy volunteers to PwMS was the Area Under the Curve of the thigh angular velocity in the pitch direction–representing both thigh and knee extension; the peak of the spine pitch angular velocity during SI-ST also had a large effect size, as did some temporal measures of duration of SI-ST, although less so. During the ST-SI transition the metric with the largest effect size in PwMS was the peak of the spine angular velocity curve in the roll direction. A regression was performed. Discussion: We propose for PwMS that the diminished peak angular velocity during SI-ST directly represents extensor weakness, while the increased roll during ST-SI represents diminished postural control. Conclusions: During the SI-ST transition of TUG, angular velocities can discriminate between healthy volunteers and ambulatory PwMS better than temporal features. Sensor placement on the thighs provides additional discrimination compared to sensor placement at the lumbar spine. INTRODUCTION Multiple Sclerosis (MS) is a progressive neurological disorder usually presenting in early adulthood whose manifestations include an unpredictable spectrum of motor, sensory and autonomic symptoms, usually accompanied by increasing levels of ambulatory dysfunction (1,2). The relapsing-remitting form of the disease (RRMS) involves attacks of sudden exacerbations of symptoms lasting days to weeks, caused by autoimmunity, inflammation and demyelination, followed by abatement of many (but not all) of the new symptoms during periods of remission. Although MS is currently without a cure or a known cause, the last decade has seen a renaissance in disease modifying treatments and symptomatic therapies (3). Researchers' goals are to find new medical and physiotherapy treatments that can improve function after an attack and prevent new attacks (4), greatly improving the quality of life of patients. Assessment of intervention efficacy fundamentally depends on making accurate measurements of disease progression and disability. Traditional Measurements of Disability Progression in MS Objective and precise measurements of movement disability (including weakness and attenuation of coordination and control) are needed to make clear assessments about interventional efficacy and disease symptom progression (5). 
However, the day-to-day variation in MS symptom severity, combined with the relapsing-remitting course of RRMS, undermines precise assessment of symptomatic progression at a given moment in time. Furthermore, the efficacy of new treatments is sometimes disputed because of issues associated with the disability outcome measures (6,7). Current interventions (including medications and physiotherapy) used to treat MS symptoms are often modestly effective, and may exert their clinical effects on only a small subpopulation of those treated. For example, fampridine (4-AP) was shown to elicit a 25% improvement in ambulation of MS patients (compared to 6% in placebo-treated patients), but only in 35% of such patients (8). There is a correlation between clinical progression, as implied by MRI measures of brain atrophy and gross tissue loss, and symptomatic progression, although more fine-grained MRI measures of disease activity such as T2 lesion load do not always correlate directly with overall symptomatic assessment such as with the Multiple Sclerosis Functional Composite Score (MSFC) (9) or with validated tools based on clinical judgment such as the EDSS (Expanded Disability Status Scale) (10). In summary, both research on and treatment of MS are characterized by uncertainty, because it can be difficult to quantify modest improvements due to treatments (11,12). Inertial Sensors and Other Metrics of Mobility Dysfunction In general, detailed measurements of gait function and mobility require a specialist gait laboratory setting (e.g., for optoelectronic motion capture) and are too costly, isolated and time-consuming for routine clinical use. Inertial Motion Units (IMUs) are a cost-effective, wearable subclass of wireless sensors based on Micro-Electromechanical Sensor (MEMS) technology, which often include a collection of accelerometers, gyroscopes and magnetometers, allowing the derivation of motion of various body segments; the choice of which body segment (e.g., ankle, hip, thigh, or a combination) will provide the minimal sensitivity needed to interpret the task remains controversial (13). Recent research has highlighted the opportunities for use of inertial sensors in MS (14), although most of this work has focused on home-based measures of total physical activity (15), with a comparatively smaller number of attempts to characterize walking in MS (16,17). By contrast, in other causes of movement disorder [e.g., Parkinson's disease (18), stroke (19), total knee arthroplasty (20), and elderly patients at risk of falling (21)], there is a broader range of data considering the strengths and weaknesses of the sensor metrics. Recently, at the level of the thigh, hip range of motion (ROM) has been found to be a useful metric to assess disability in MS during flat walking (17). In addition to walking, sensor measurements of ambulatory ability are broadened by a wide range of clinically-established tasks that the patient can perform. TUG and Other Tasks In the International Classification of Functioning [ICF (22)], the domain of activities can be broken down into capacity and performance. While direct tests of muscle strength arising during maximal isometric contraction can be measured with a force transducer, to assess clinically relevant disability, muscle actions are usually assessed within a more naturalistic context, such as walking a short distance, walking a longer distance (where fatigue and walking degradation are possible), or getting out of a chair and starting to walk.
The Timed-Up-And-Go (TUG) task (23) tests the time it takes for a patient to stand up from a seated position, walk 3 m, turn around 180°, walk back 3 m, turn around and sit back down again; the task begins when the clinician gives the signal to start, and it ends when the patient's body first returns to the seat pan of the chair. TUG duration is a modest predictor of frailty and falls (24), and TUG is a threshold test for independent living. In their original, non-instrumented format, most of these naturalistic tasks had only a single metric output, which was either time duration (e.g., TUG) or distance covered successfully (e.g., the 6 min walk). TUG can be effectively considered as six subtasks (Figure 1A): the sit-to-stand transition (SI-ST), walking 1 (away), turn 1 (180°), walking 2 (return journey), turn 2 (180°), and the stand-to-sit transition (ST-SI); in analyses, walking 1 and walking 2 are often bundled together because they represent nearly identical subtasks, and some analyses elide turn 2 with the ST-SI transition because the two subtasks usually do not have a clear boundary. A range of TUG-like variants also exists that shorten the walk or lengthen it [to 7 m each way (26)] in order to simplify the task for patients or to make the walking data more robust. The Sit-to-Stand Transition and the Thigh What makes TUG and TUG-like tasks different from other walking tasks (e.g., the Timed-25 Foot Walk or the 6 min Walk) is the inclusion of the sit-to-stand transition and the stand-to-sit transition [some researchers have also investigated aspects of the turns (16,27)]. The sit-to-stand transition (and the continuation into walking) is not only ecologically relevant for day-to-day living, but it is particularly affected in the frail elderly, who complain of stiffness after extended sitting. It is also highly dependent on extensor strength in the lower extremity, and is considered one of the most mechanically demanding of functional daily activities (28). The stand-to-sit transition is an indicator of control and balance during eccentric contraction of the extensors. For stroke, specific SI-ST metrics (such as rising speed or asymmetry of weight distribution) have been proposed as possible metrics for detecting improvement during the first year post-stroke (29,30). The asymmetry features are particularly important in stroke because of hemiparesis, although rising speed might potentially be useful in any movement disorder, including MS; to the best of our knowledge a similar investigation for MS has not occurred. Some groups have looked at single SI-ST transitions, or cycles of Sit-Stand-Sit transitions, which provide more uniform data about the SI-ST transition, because TUG often results in elision of the SI-ST transition and walking 1 when the first step (toe-off and swing) begins before or immediately at the completion of contralateral thigh extension. Compared to the ankles, the SI-ST transition has a profound effect on the directionality of the thigh segment (and on the torso as well). Known Sensor Metrics for TUG Extensive sensor-based research on TUG has been performed in a range of clinical conditions (31,32). A brief survey of this literature reveals at least 90 sensor metrics for TUG have been derived to recognize falling risk. In a 2014 systematic review of 53 sensor-based studies on the sit-to-stand transition (32), 84% of the studies used a sensor on the torso, at either the spine [e.g., L3 (33,34)] or the sternum [e.g., (18)].
Other studies have placed sensors on the shanks (16,27,35); only in a few cases was placement on the thigh segment (20,36,37), despite the fact that the thigh would be the most directly involved body segment during the SI-ST or ST-SI transition. The many metrics (based on all body segments) have included calculations based on temporal variables, linear acceleration variables, angular velocity variables, frequency variables, and descriptive statistics based on entropy (ApEn) and fractal dimension (d_F). Some groups have measured asymmetry in weight bearing (36). The derived temporal variables (and asymmetry) are the most clearly related to traditional gait measures (which are based on position and force), while sensor metrics are based on movement (angular velocity and linear acceleration).

FIGURE 1 | Clarification of methods. (A) shows a schematic of the entire TUG task divided into subtasks. (B) shows the approximate directions of pitch, roll and yaw (depending on precise sensor stability) as we describe in this study. Pitch is nominally rotation around the medio-lateral axis (i.e., within the sagittal plane), roll is nominally rotation around the dorso-ventral axis (i.e., within the coronal plane), and yaw is nominally rotation around the vertical (superior-inferior) axis (i.e., within the transverse plane).

In the current study we sought to compare a collection of transparent metrics of the SI-ST and ST-SI transitions, assessing whether there was added value when measurements were made with sensor placement at the thigh, compared to placement at the spine. We judged assessment value in terms of effect size (the rank biserial) of the association of a feature with its ability to distinguish middle-aged healthy participants from Persons with MS (PwMS). In addition to temporal measures, we examined a range of calibrated, transparent sensor metrics, as well as testing two different measures of the smoothness of signals. As a rough test of whether our metrics would be useful in examination of PwMS, we compared an ambulatory sample of MS patients [Hauser Ambulation Index (HAI) ≤ 2, no use of walking aids for short distances] to middle-aged healthy volunteers. Thus, our hypothesis is that there exists a set of thigh-based sensor metrics of pitch angular velocity that have a higher effect size in distinguishing PwMS from healthy volunteers than either the TUG stopwatch time or the published spine-based metrics. Finally, to roughly simulate the value of our features, we produced a step-wise logistic regression with multiple features. Volunteer Recruitment Seventeen PwMS (mean age ± sd = 53.06 ± 11.06, 13 female) were recruited from a local community MS center (MS Sussex), with approval from Staffordshire University ethics committee. Twenty-three healthy volunteers (age 46.13 ± 11.12, 14 female) were recruited from the university community via email. The exclusion criteria were that no participant had clinically relevant complicating diseases (other than MS) that would impact walking ability or walking rates. This included: not currently suffering from flu, cold, etc., no current leg/back injuries due to trauma, no loss of motivation due to obvious psychiatric symptoms (e.g., no major depression, bipolar disorder, psychosis), and no loss of walking ability or exercise tolerance due to another disorder: heart failure, recent myocardial infarction, COPD or other respiratory disorder.
Procedure The experimental procedure was approved by the university ethics committee, and the experiment was run according to the principles in the Declaration of Helsinki. Each participant was informed about the nature of the experiment, and they gave their informed consent for the experiment. Before each volunteer began, he/she filled in a demographic form (establishing their age and gender, estimated year of first symptoms, and year of receiving an MS diagnosis). Three of our sensors were noninvasively placed on the lateral aspect of their lower left thigh (the most distal part of the sensor was 5 cm above the superior border of the patella), lower right thigh, and the small of the back (at the level of L3). All sensors were worn over clothing using a lightweight Velcro elasticated webbing system for keeping the sensors in place. All participants wore standardized running shoes (Lonsdale) of the correct shoe size, in order to correct for differences in mobility due to shoe stiffness or heels; our team have a collection of different sizes of these running shoes to fit all participants. Sensors were placed on the lateral surfaces of thighs, to avoid interference with walking; sensors were orientated with the positive X-axis pointing superiorly (proximally). Fitting the sensors took 5 min, while removing the sensors took 3 min. In general the entire procedure for a single volunteer lasted 60 min (including rest time). The sensors had their data synchronized at the beginning and the end of the experiment by being affixed together and being subjected to sudden transient accelerations, interspersed with periods of non-movement. Tasks The timed-up-and-go (TUG) task was performed according to Steffen et al. (44). The task involves arising from a seated position, walking 3 m, turning around, walking back 3 m, turning around and sitting back down in the chair. Participants started in a chair with arms, with a tape mark on the floor showing the 3 m distance where they were supposed to turn around. Participants were given instructions to perform the task "as fast as possible, but safely, " and they were shown how to do the task. Stopwatch timing was done according to best practice (44,45), starting on the word "Go" and ending when the participant's buttocks first made contact with the seat of the chair; a sensor-based full length TUG duration feature was also calculated based on the attitude of the thigh. The TUG task was performed twice. Participants were also asked to perform several other walking and balance tasks, including a Timed-25-Foot-Walk [T25FW based on timing with a stopwatch, (46)], which was used to establish that participants were at the Hauser Ambulation Index [HAI, (47)] of 2 or below. None of the tasks were stressful or tiring, and participants were asked before each task if they needed a rest. Sensors and Data Analysis The sensors used were x-IMU by X-io (Bristol, UK), with three dimensions each of accelerometry, gyroscopy and magnetometry. These sensors are factory calibrated for gravitational acceleration (accelerometers) and angular momentum (gyroscopes), and they incorporate an onboard algorithm for estimation of heading and quaternions (48,49). These sensors have been validated for accuracy when measuring walking, both in terms of angular velocities and derived temporal gait metrics (50). 
Data from the three sensors in each x-IMU node was gathered at 128 Hz onto the onboard 32 GB micro SD cards (Sandisk Ultra Micro) with the sensors' Bluetooth transmission off (to extend battery charge). Time alignments between sensors and with other measurements and video tapes were performed using an automated event-based synchronization strategy [e.g., (51)]. Directions used (i.e., pitch, roll and yaw) are shown in Figure 1B. Binary file sensor data was transferred to a Windows 7 computer, and the binary files were converted into csv files using the manufacturer's provided Graphical User Interface. The csv files were read into Matlab, and all sensor data was aligned (based on the synchronization signals at the beginning and end of the experiment) with a purpose-made script; timing differences between sensors were interpolated linearly; at no point did the original sensor acquisition data differ between sensors by more than 50 ms (over the course of 90 min of acquisition). The relevant sensor data for each task was located by Matlab based on the event's start and finish time recorded by the sensor, and all data was low-pass filtered (2.5 Hz, 4th order Butterworth, 0 latency, Matlab filtfilt). Peaks were identified with a peak detector algorithm set to detect a minimum recovery of 20% of the range of the signal. Timing duration from the spine sensor was based on Weiss et al. (33,52), while all other angular velocity and duration measurements were derived as shown in Figure 2. Smoothness To test control of movement, repeated gait movements can be tested for variation, such as the Coefficient of Variation for any metric (e.g., step length) (53). For a single movement performed once (e.g., the SI-ST transition), inconsistent neural control (or loss of balance) may be reflected by a loss of smoothness (which is often measured as an increase in jerk for a continuous signal). In this study, we tested two different measures of smoothness: the normalized mean absolute jerk (54), which is one of the most commonly used measures of smoothness (smoothness 1), and the speed arc length (55), which has the advantage of being unit-free (smoothness 2).
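For concreteness, the two smoothness measures can be written for a speed (or angular velocity) profile v(t) on an arc [t₁, t₂] with peak value v_peak; refs. (54) and (55) fix the exact normalizations, so the following commonly used forms are indicative rather than verbatim:

```latex
\text{smoothness}_1 \;\sim\; \frac{t_2 - t_1}{v_{\mathrm{peak}}}\cdot
  \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} \bigl|\ddot v(t)\bigr|\,\mathrm{d}t
  \qquad\text{(normalized mean absolute jerk)},

\text{smoothness}_2 \;=\; -\int_{t_1}^{t_2}
  \sqrt{\Bigl(\frac{1}{t_2 - t_1}\Bigr)^{2} + \Bigl(\frac{\dot v(t)}{v_{\mathrm{peak}}}\Bigr)^{2}}\;\mathrm{d}t
  \qquad\text{(speed arc length, unit-free)}.
```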
Statistics Statistics were calculated within Matlab (MathWorks, Natick, MA, USA). To allow for peaks from different legs (and in different directions) to be compared, all peaks are the peak of the absolute value of the calibrated signal, and all means are also the mean of the absolute value of the calibrated signal. Graphical inspection of healthy and PwMS peak angular velocity data showed that it was approximately normally distributed; nevertheless, to allow for those features that were not normally distributed, for assessments of correlation between repeated attempts of the same task, an Intraclass Correlation Coefficient (ICC) was calculated (56). For unpaired comparisons between the means of two populations, the Wilcoxon Rank Sum test was used; this was corrected by the Holm-Bonferroni correction for multiple comparisons. For effect size calculations, the rank biserial was calculated. Participants The two cohorts compared in the main study were ambulatory persons with multiple sclerosis (PwMS) and middle-aged healthy volunteers. The PwMS were recruited via a local MS community center (MS Sussex Treatment Center). The baseline characteristics of the two groups are shown in Table 1. The two groups were not statistically significantly different in terms of height, weight, or age (although the mean age difference was >6 years). In all other measurements of disability and difficulty, the PwMS had significantly higher Beck Depression Index scores, MSWS-12 scores, FSS scores, MFIS scores, and T25FW times (which were on average 1.5 s longer than the times for healthy volunteers). This difference in mean T25FW is just over the established cut-off of 20% that suggests a clinically meaningful difference (46), and the mean of 6.02 s is almost exactly the 6 s cut-off established as clinically meaningful (57). Format of TUG Data Pitch gyroscope data from each sensor (and roll data from the lumbar spine sensor) were used to derive both the rate of movement during the sit-to-stand (and stand-to-sit) transitions, as well as the durations that these activities lasted. The features we calculated were based on finding peaks, calculating the peak attributes (maximum, start point, end point, 20% rise point, 80% return point), and from those points calculating the magnitude of the peak (angular velocity), the duration (time in seconds) of the peak's arc (where an arc is the geometric segment of the angular velocity curve), the mean angular velocity of the peak's arc, the area under the curve of the arc, and the smoothness of the peak's arc. Representative sensor data is shown in Figure 3. All traces in this figure are low-pass filtered (2.5 Hz) and factory-calibrated. Sharp peaks/troughs correspond to the thigh's role in swing phase, while wider simultaneous peaks/troughs are the stance phase of the contra-lateral lower limb. Panels A (healthy) and D (PwMS) show both left and right thigh pitch traces during the entire TUG task; each walking step is clearly identifiable from the swing phase (sharp peaks) and concurrent contra-lateral stance phase (wider, blunt peaks), as are the sit-to-stand and stand-to-sit transitions (wider and lower-amplitude changes). The turns are more easily identified by the traces for the yaw gyroscopes (not shown). The first half step ("step 1") that occurs immediately after standing up entails a small swing phase (in panel A it is the right thigh trace between 1.2 and 1.5 s) that peaks at a much lower angular velocity than other steps. Figure 3B is a close up of panel A during the sit-to-stand transition showing the relationship between the peaks of the spine pitch trace (black line) and the thigh traces. In previous studies (33,52), the spine pitch trace was the data used to derive the timing of the SI-ST and ST-SI transitions. For this volunteer, the first spine peak (intersection of black time course trace and left-most vertical gray line) is closely aligned with the initiation of thigh movements (red and dark blue circles), and the second spine peak/trough (right-most vertical gray line) is closely aligned with the beginning of the first half step (i.e., one possible end of the sit-to-stand transition). For the purposes of computer identification, zero-crossing points of the thigh traces (black squares) were used as markers for the end of SI-ST transitions.

FIGURE 3 | Representative sensor data during the TUG task. (D-F) show analogous traces for a PwMS; note that the different panels have slightly different scales on their axes. In addition to the pitch traces from the left thigh (red) and the right thigh (dark blue), (B,C,E,F) include a pitch trace from the lumbar spine sensor (black), to allow comparisons with previously published data features based on torso-mounted sensor data. The peaks/troughs for the thigh traces are magenta circles, and the peaks/troughs for the spine are shown as vertical gray lines. The start of the rise for the left thigh is a red circle, for the right thigh is a dark blue circle, and for the spine is a magenta diamond. Step end points are shown as black squares, and 20% rise and 80% return points are shown as cyan circles.
Panel C is a close up of panel A during the stand-to-sit transition, showing the relationship between the peaks of the spine pitch trace and the thigh traces; for this volunteer, the second spine peak (right-most vertical gray line) is closely aligned with the thighs' return to the seat pan of the chair (i.e., the end of the stand-to-sit transition), which is identified by the 80% return point (cyan circles). The delay of the thigh pitch traces (red and blue traces, between 10.8 and 11.3 s) in arriving at 0°/s (black squares) in this case is due to abduction/adduction of the thighs accompanied by thigh rotation, rather than a delay in sitting (i.e., the hands bracing against the fall downward). The first spine peak is delayed compared to knee and thigh flexion (cyan circle on red line at 9.8 s). The thigh activity of the right lower limb (dark blue) is a combination of the final shuffling step during Turn 2 (T2, starting at the dark blue circle) and the subsequent flexion of sitting down. The traces related to a PwMS in panel D show a similar set of activities as in panel A, although the actions are performed more slowly and with lower angular velocity peaks. The most noticeable difference is that in panel F the ST-SI transition is performed much more slowly and carefully. Figure 4 shows a close up view of the same left thigh pitch trace during the sit-to-stand transition from Figures 3A,B, along with the peak attributes and time points used to derive the features for these movements. A complete description of the arcs is provided in the Supplementary Materials. Arcs A-H correspond to the sit-to-stand transition, while arcs J-R correspond to the same attributes during the stand-to-sit transition (there is no arc I). Arcs E and N (not shown) correspond to a 1-s time period centered around the maximum (i.e., peak of the arc) of the SI-ST transition (arc E) and ST-SI transition (arc N). The peak (shown here as a black circle) is bracketed by the step end (to the right, black square) and the start of the rise (to the left, dark blue triangle). To avoid eccentricities arising from false starts and additional partial movements, the start of calculations is sometimes represented by the 20% rise point (cyan diamond, left), and the 80% return point (cyan circle, right). Features of SI-ST and ST-SI Transitions: Repeatability Before determining which features were most likely to be affected in our cohort by MS, we sought to determine which of the features were clearly repeatable. Because each of the participants performed the TUG task twice, we compared the value of each feature during the first attempt and the second attempt. We analyzed the correlation using the Intraclass Correlation Coefficient (ICC). The features we tested were based on the pitch angular velocity measurements from both thighs and the spine sensor, roll angular velocity measurements from the spine sensor, a range of smoothness metrics, and an omnibus measure of TUG duration based on the Anterior-Posterior accelerometer of the thigh. The calculations were the absolute value (magnitude) of the peak angular velocity, the many possible durations of the event (as determined by the arcs as explained in the methods and Figure 4), the magnitude of the mean angular velocities for those arcs, the area under the curve for those arcs, and the smoothness of each arc (see Methods).
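To make these feature definitions concrete, the following is a minimal sketch (in Python rather than the Matlab actually used in the study, with a prominence criterion standing in for the 20%-recovery rule of the Methods) of how the peak, duration, mean and area-under-the-curve of a dominant arc can be derived from a single pitch angular-velocity trace; the sampling rate and filter settings follow the values reported above:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 128.0  # sampling rate (Hz), as reported for the x-IMU nodes

def lowpass(signal, cutoff=2.5, order=4):
    """Zero-lag low-pass filter, analogous to the Matlab filtfilt step above."""
    b, a = butter(order, cutoff / (FS / 2.0), btype="low")
    return filtfilt(b, a, signal)

def arc_features(omega_pitch):
    """Peak, duration, mean and AUC of the largest |angular velocity| arc."""
    x = np.abs(lowpass(np.asarray(omega_pitch, dtype=float)))
    # Prominence threshold as a stand-in for the 20%-of-range recovery rule.
    peaks, props = find_peaks(x, prominence=0.2 * np.ptp(x))
    if len(peaks) == 0:
        return None
    k = int(np.argmax(x[peaks]))                  # index of the dominant arc
    left, right = int(props["left_bases"][k]), int(props["right_bases"][k])
    arc, dt = x[left:right + 1], 1.0 / FS
    return {
        "peak": float(arc.max()),                 # deg/s
        "duration": (right - left) * dt,          # s
        "mean": float(arc.mean()),                # deg/s
        "auc": float(np.trapz(arc, dx=dt)),       # integral of |omega|, deg
    }
```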
Each pitch feature was initially calculated for both left and right thighs (and also for the spine), and the final thigh features were the maximum of the two thigh values, the minimum of the two thigh values, the value associated with the thigh making the first step, and the value associated with the thigh making the second step. In broad terms, we started with 819 features (many of which were highly related), of which 152 had an ICC ≥ 0.60 [a good correlation according to (58)]. Representative plots showing selected correlations of four of the features are shown in Figure 5. The most correlated measurement arcs for the transitions are arcs J, K, N, and M, all of which encompass the entirety of the ST-SI peak (including the peak itself); the least correlated were arcs P and Q, both of which represent the first half of the ST-SI transition. The most consistent among the spine roll metrics are the SI-ST arcs that include the most possible time for unpredictable activity, including arcs B, F, E, and A, all of which had excellent correlations (ICC ≥ 0.75). The vast majority of smoothness metrics were poorly correlated, although a few were good (between 0.60 and 0.74). This may be expected, given that lack of smoothness would represent loss of control, which would perforce be inconsistent.
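For the repeatability analysis, here is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures — one common convention, which may differ in detail from the variant of ref. 56) applied to first vs. second TUG attempts; the demo values are illustrative only:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) for an (n_subjects, k_attempts) matrix of a single feature."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)        # per-subject means
    col_means = x.mean(axis=0)        # per-attempt means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)           # between-subjects mean square
    msc = ss_cols / (k - 1)           # between-attempts mean square
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Example: one feature, 5 volunteers, attempt 1 vs. attempt 2 (illustrative).
demo = np.array([[101.0, 98.0], [85.0, 88.0], [120.0, 118.0],
                 [90.0, 95.0], [110.0, 107.0]])
print(round(icc_2_1(demo), 3))
```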
Features of SI-ST and ST-SI Transitions: PwMS vs. Healthy In total 819 correlated features were tested, and they were compared between the healthy volunteers and the PwMS. The raw P-values (Wilcoxon Rank Sum test) and the effect sizes (rank biserial) are shown in Table 2, and the comparison for the highest-effect-size feature is illustrated in Figure 6A. The fastest 50% of healthy volunteers reach angular velocities that exceed all PwMS, while the slowest quartile of PwMS cannot reach angular velocities reached by all healthy volunteers [except for one healthy outlier, who was a tall (175 cm), middle-aged female who moved slowly and deliberately when getting in and out of the chair]. To illustrate the scale of those differences, a comparison of the total TUG task durations (as measured by stopwatch) is shown next to this plot (see Figure 6B). When comparing the effect sizes (Rank Biserial in Table 2) of MS in our cohort of the features, several observations arise. The features relating to the sit-to-stand transition have a larger effect size (and are more consistently relevant when discriminating PwMS from healthy volunteers) than the stand-to-sit transition. The angular velocity features (Area under the Curve, absolute peak and absolute mean) have larger effect sizes (and are more consistently relevant when observing PwMS) than the durations. In our hands, the effect sizes of the durations arising from the spine sensor [features 21 and 22 in this study, originally from Weiss et al. (52)] are smaller than those of the homologous features measured with thigh sensors; furthermore, spine pitch peak angular velocity features (features 14-16) have larger effect sizes than spine duration features (features 20 and 21). In our hands, in a univariate analysis, roll features of the spine sensor had low rank biserials compared to the other tested features; the exception was for smoothness features, four of which had P < 0.05, including feature 22 (Spine Roll Arc D smoothness 2). As stated above, smoothness features were less consistent than other features. Among non-smoothness features derived from the roll of the spine sensor, the largest effect size of MS was on the mean of the angular velocity during ST-SI (arc J), which was associated with a raw P = 0.067 (rank biserial = −0.345).

FIGURE 4 | The left thigh pitch trace during the sit-to-stand transition from Figure 3 is labeled with the relevant time markers and peak attributes used to calculate the features in this study. Arcs A-H correspond to the SI-ST transition, while arcs J-R represent the ST-SI transition. How these points were computationally derived is described in the methods; note that arcs E and N (not shown) are 1 s regions centered on the peak, and arc I does not exist.

Multivariate Analysis With Logistic Regression As an unplanned analysis, we sought to understand how these variables might work together, given that many of the features were based on similar or related measurements. Using a step-wise procedure (Matlab), we removed variables that were weak contributors (low absolute t-values) or were not robust when subsets of volunteers were selected for the model. A set of seven features were found and described in a logistic regression (see Table 3). The regression had an R² [coefficient of discrimination (59)] of 0.4708 based on 73 degrees of freedom for error. None of the pairs of variables had a coefficient of correlation above 0.69 (Table 4). To check for overfitting, combined data for healthy and PwMS volunteers were randomly split in half (training set), betas were re-derived for the seven robust features, and the remaining volunteers (test set) were compared to predicted values based on the new betas; in 100 attempts, the average correct prediction rate was 0.7982. This implies that these features may be consistent enough to be useful in assessing degrees of mobility/disability among MS patients.
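As a hedged sketch of the statistical pipeline described above — the Wilcoxon rank-sum screen with rank-biserial effect sizes and Holm correction, followed by the split-half refitting check — the following illustrates the computations; the stepwise feature selection itself is omitted, and the feature matrix X and labels y are hypothetical inputs:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression

def rank_biserial(a, b):
    """Rank-biserial effect size derived from the Mann-Whitney U statistic."""
    u = mannwhitneyu(a, b, alternative="two-sided").statistic
    return 1.0 - 2.0 * u / (len(a) * len(b))

def holm_reject(pvals, alpha=0.05):
    """Holm-Bonferroni step-down: boolean array of rejected hypotheses."""
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)
    reject = np.zeros(len(pvals), dtype=bool)
    for rank, i in enumerate(order):
        if pvals[i] > alpha / (len(pvals) - rank):
            break
        reject[i] = True
    return reject

def split_half_accuracy(X, y, n_rep=100, seed=0):
    """Refit betas on a random half and score prediction on the held-out half."""
    rng = np.random.default_rng(seed)
    acc = []
    for _ in range(n_rep):
        idx = rng.permutation(len(y))
        half = len(y) // 2
        model = LogisticRegression(max_iter=1000).fit(X[idx[:half]], y[idx[:half]])
        acc.append(model.score(X[idx[half:]], y[idx[half:]]))
    return float(np.mean(acc))
```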
DISCUSSION Inertial sensor metrics of gait and mobility variables, and their responsiveness to clinical conditions, are being explored for the differences elicited by sensor placement on different parts of the body (60). In this study of MS, we considered myriad TUG features (derived from previous studies of ambulatory disabilities of all kinds), and found informative metrics derived from thigh-positioned wearable inertial sensors that would be useful for estimating disability in PwMS, particularly with regard to strength and effort. We also compared a range of the best of the thigh-based metrics to spine-based metrics (which represent both strength and control), and ran a logistic regression on the results. We list seven non-overlapping features that may be useful together as complementary metrics in assessments of disability progression in MS, and also as metrics for clinical efficacy for interventions proposed to improve or limit disability in MS. In the present study, the test for whether these features may be useful for estimating disease progression was a comparison of a small community sample of PwMS with Hauser Ambulation Index scores ≤2 against a sample of middle-aged, healthy volunteers. Our novel contribution is to consider the combination of thigh and spine metrics in MS, as did Motta et al. (17) during a 1-min walking task. Our data specifically considers the case of TUG, which includes the SI-ST and ST-SI transitions; these transitions are particularly challenging activities in everyday life, and are especially revealing of the movement of the thigh segment.

FIGURE 5 | Selected correlations between the first and second attempts at the TUG task. (A) … (magenta circle in Figure 3C). (B) This can be compared directly to the same measurement during the Sit-to-Stand (SI-ST) transition (magenta circle in Figure 3B), which shows only fair correlation. (C) shows the absolute value of the mean angular velocity of the signal (as shown as arc F in Figure 4) during the Sit-to-Stand transition. (D) shows the correlations for the duration of the sit-to-stand phase (arc F).

As expected, we found that the total time duration of the TUG task as measured by stopwatch was a consistent and discriminatory feature (rank biserial = −0.473, P < 0.05) for these two cohorts; this is similar to a study of TUG in the elderly [Instrumental Activities of Daily Living (IADL) vs. no IADL] in which TUG duration was the most discriminatory feature (52), and to an MS vs. healthy comparison of the Timed 25 Foot Walk where overall velocity (which is usually measured as a stopwatch duration) was the most discriminatory mobility feature (53). In our cohorts we compared a wide variety of sensor-based micro-features of TUG to two timing features of TUG as a whole; we found that many of the thigh-derived sensor micro-features are reproducible and have high reliability, and that a collection of thigh pitch angular velocity features (including absolute values of the area under the curve, the peak and the mean) based on the sit-to-stand transition differed between MS and healthy with higher effect sizes (rank biserial) than total time duration of TUG; three of these features were statistically significantly different (between healthy and PwMS) by the stringent Holm-Bonferroni method of multiple comparisons. These features were all similar measurements of the area under the curve for pitch angular velocity for the SI-ST transition. Because the SI-ST transition is a demanding task for the musculature, and higher values for pitch angular velocity would be particularly demanding, we associate these variables with strength (28). This fits with previous research on patients with total knee arthroplasty that concluded that quadriceps weakness has a substantial impact on performance of the sit-to-stand task (20,61). We also tested temporal duration features based on the thigh SI-ST transition and previously published features based on the spine-derived SI-ST transition (52), and we found a set of such spine-derived features that were potentially useful, but those features resulted in lower effect sizes than the traditional stopwatch duration of TUG for our cohorts (and thus had lower effect sizes than the best angular velocity features). For both sit-to-stand and stand-to-sit transitions, spine data is discriminatory, but thigh data is more discriminatory for MS disability. We also measured many features suggesting that thigh pitch (or spine pitch) is much more discriminatory than spine roll. Some previous studies have found discriminatory features within the roll of the spine (37), within the stand-to-sit transition (26,33,62), and from jerk-related smoothness of angular velocity signals (21), all of which would reflect diminished balance and control rather than strength/weakness.
TABLE 2 | Raw P-values (Wilcoxon Rank Sum) and effect sizes (rank biserial) for the tested features. All angular velocities refer to pitch unless stated as roll. The features in each category are listed in order of the effect size (rank biserial); note that some features (13, 16, 17, 22, 23-27) are included as illustrative rather than as discriminatory features. 819 features were tested, so that with a Holm-Bonferroni method for multiple comparisons, only features 1, 2, and 3 (Area Under the Curve for arcs B, D and F) are significant. Arcs are as listed in Figure 4.

In our cohorts these types of features produced smaller univariate effect sizes, and those roll features that were reliable (ICC) did not reach raw P-values under P < 0.05 (except for feature 22). In a logistic regression we found that our initial hypothesis was supported: the movement of the thigh during the SI-ST transition was the most informative of all the TUG measures tested, and adding a thigh feature (feature 3) robustly improved a logistic regression compared to using only spine features with the total TUG duration. However, we were surprised to find that five of the seven robust features were from the spine sensor, three were related to roll, and two were related to smoothness; none of the other thigh features were independent or robust enough to stay in the analysis after the first one was included. Of the spine features, it is intuitive that healthy volunteers have a large pitch SI-ST peak (feature 14, implying torso strength and effort), and that PwMS have a larger roll peak during ST-SI (feature 26, implying loss of torso control). It also makes some sense that healthy volunteers would have a smoother roll in angular momentum in the 1 s surrounding the ST-SI peak (feature 27, arc N, Figure 4). It was interesting to find that the PwMS had a larger AUC of spine pitch in arc Q (feature 17); arc Q is the first half of the ST-SI transition, and when picked by our algorithm is made up primarily of Turn 2 of the TUG. It is less intuitive that the spine roll signal during most of the SI-ST transition (feature 22, arc D) would be smoother for MS patients than for healthy volunteers; presumably this relates to MS patients being slower and more cautious when rising (using the chair's arms), but none of the other calculations (peak, mean or duration) is discriminatory in this way. This hierarchy of discriminatory power (strength > control) seems to be supported by some other studies working on other ambulatory disorders. A previous study examining shank-mounted sensor metrics of TUG (as an entire task) in PwMS (16) found that their regression models for clinical disability metrics [EDSS and Multiple Sclerosis Impact Scale (MSIS-20)] incorporated many sensor metrics of angular velocity, including mean angular velocities, maximum angular velocities, and minimum (i.e., trough negative) angular velocities (all multiplied by patient height), while it rejected coefficients of variation and many gait duration features (e.g., mean stride time, mean swing time, mean double support %, turning time). In a study of the elderly (33,52), the range of the vertical accelerometry signal (located at the lumbar spine) was a discriminatory feature for identifying idiopathic fallers among the elderly, while SI-ST duration and ST-SI duration were not discriminatory. Relevance of Sensor Assessment of Mobility in the Clinic The use of inertial sensor technology in clinical assessment of disability is moving ahead rapidly in both MS and in disorders of mobility more generally.
Relevance of Sensor Assessment of Mobility in the Clinic

The use of inertial sensor technology in clinical assessment of disability is moving ahead rapidly, both in MS and in disorders of mobility more generally. The goal of such systems is to increase the resolution and consistency of measurements of ambulatory disability (e.g., might it be possible to consistently recognize a difference between an ambulatory equivalent of EDSS 4.2 vs. EDSS 4.3?). Only further sensor research on clinical populations will clarify whether this goal is even possible. Currently, a commercial system for measuring mobility during TUG that is operated by clinicians (i.e., not researchers or engineers) has been released and assessed by the UK's National Institute for Health and Care Excellence (63). Extensive research into this particular inertial sensor methodology has been driven by the manufacturer of this system, which places sensors near the ankles. In a cross-sectional study of early-stage relapsing-remitting MS, the ankle-based sensor system used a proprietary algorithm to produce an EDSS estimate that was shown to correlate moderately well (R² = 0.5) with clinician-assessed EDSS (16). More recently, the same system was able to predict the 90-day risk of falls.

Analysis Details

The clearest result here is that for univariate associations, the hierarchy of discrimination is broadly: area under the curve > mean/peak angular velocity > duration. This dominance by AUC was slightly unexpected, as mean/peak velocity features might be expected to vary inversely with duration measures; however, when considering the entire movement, velocity integrated over time (in effect, mean velocity multiplied by duration) is a more comprehensive measure of total effort and strength than the peak (or the mean) alone. It is worth noting that the ICCs for AUC features were generally not as high as for peak or mean features. Duration features were quite variable. The rationale for positioning wearable inertial sensors on the thighs to characterize the sit-to-stand and stand-to-sit transitions is that the activity of the thighs during these transitions is invariably both necessary and sufficient to achieve these actions, while the activity of the spine and torso is usually necessary but definitely not sufficient. For example, additional torso activity may occur during bodily adjustments or false starts, and torso activity can be suppressed when rising up or sitting down with the use of the chair's arms. Nevertheless, our regression favored spine metrics. Regarding false starts and bodily adjustments, it is slightly easier to detect the difference between healthy volunteers and PwMS from overall absolute peak angular velocity values, or from means derived from time segments that exclude the bottom 20% of activity (i.e., arc F in Figure 3B has a higher effect size than arc B). The values of pitch angular velocity are higher for healthy volunteers than for PwMS; the regions in the bottom 20% of activity may be associated with brief, abortive initiations of standing, which are inconsistent but common to both healthy volunteers and mildly affected PwMS, thus masking the appropriate durations or mean values of the transitions. Note also that the calculations of durations are made less valid (lower absolute effect size) by including the bottom 20% of activity; the rank biserial for SI-ST duration (maximum from either thigh) based on arc F (which excludes the lower 20%, see Figure 4) is −0.442, compared with −0.358 when based on arc B.
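A minimal sketch of the arc-style features discussed here is given below, assuming a fixed 20% threshold relative to the segment peak (the arc labels B and F follow the text; the windowing step that detects the transition is omitted):

```python
# Sketch of the arc-style features discussed above. `omega` is the pitch
# angular-velocity trace (deg/s) over a detected SI-ST window; the 20%
# threshold relative to the segment peak is an assumption following the text.
import numpy as np

def sist_auc(omega, fs, exclude_bottom=False):
    """Area under |pitch angular velocity| for one SI-ST segment (degrees).

    exclude_bottom=True drops samples below 20% of the segment peak
    (arc-F-like); False integrates the whole segment (arc-B-like). Dropped
    samples are treated as contiguous, which is adequate for a sketch.
    """
    omega = np.abs(np.asarray(omega, dtype=float))
    if exclude_bottom:
        omega = omega[omega >= 0.2 * omega.max()]
    return np.trapezoid(omega, dx=1.0 / fs)   # np.trapz on NumPy < 2.0

def max_of_two_thighs(left, right, fs, **kwargs):
    """Use the larger of the two thigh values, as for the strength features."""
    return max(sist_auc(left, fs, **kwargs), sist_auc(right, fs, **kwargs))
```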
By contrast, for area under the curve measures, where increased duration adds to the appearance of strength in the healthy participants, the bottom 20% of the curve adds slightly to the discrimination between MS and healthy (i.e., arc B has a greater absolute effect size than arc F). In general, strength measurements based on angular velocity had higher discriminatory power when the maximum of the two thighs was used (compared with the lesser value from the two thighs). Also, for spine roll features, where MS is associated with higher values of roll angular velocity than seen in healthy volunteers, this increased roll is easier to detect in longer segments that include the bottom 20% of the entire peak region.

Limitations

One limitation of the current study is that we did not make concurrent measurements of strength (e.g., the Oxford Scale for Muscle Strength Grading), nor did we estimate spasticity (e.g., the Modified Ashworth Scale); plainly there are differences in the types of MS mobility impairment (65), and there would be a difference in test results between a PwMS with flaccid paralysis and a PwMS with normal strength but a high level of spasticity. In future measurements of the SI-ST transition, measurements of strength and spasticity should accompany the sensor measurements, as this is often not done (16, 66). Another limitation is that for inertial sensor metrics to be justified for clinical use in assessing disability or mobility impairment, a longitudinal study needs to be performed. Such a longitudinal study would ideally show that clinically relevant disability progression (or amelioration due to therapeutic intervention) can be detected with more sensitivity and consistency by the sensor metrics than by the EDSS (or possibly the MSFC). Recognizing fine-grained differences against a "gold standard" measurement such as the EDSS will require agreement as to how to recognize (or cause) small changes in disability independently of the EDSS. Inconsistency between equally disabled patients (or between measurements from the same patient on different days) may affect many individual metrics, because patients may compensate for their disability with additional motivation; it would be expected that when this compensation occurs, there is a deterioration of performance control (e.g., spine roll during TUG) because of the speed-accuracy trade-off (67, 68). When considering speed and limb movements during walking tasks (e.g., T25FW), motivation (or lack thereof) can affect walking speed; however, lack of motivation alone is less likely to affect peak angular velocity during the SI-ST transition, because standing up slowly requires more prolonged effort than standing up quickly, due to the disadvantageous torque moments that have to be resisted during slow standing (69). The sensors used in this study recorded independently and were later synchronized using an automated synchronization protocol. While this produces accurate data synchronization, it prevents real-time analysis, which would be essential for clinical use. Since this data was gathered, the manufacturer of the sensors (x-io) has introduced a new generation of IMU sensors (NGIMU), which include WiFi communication and the use of one sensor as a master to calibrate all others on the network (70). In the future, these self-synchronizing sensors should be used for gathering data.
In our regression, we found a few features with smaller effect sizes (many of which relate to accuracy/control rather than speed/strength) that may be relevant for estimating disability in PwMS, particularly when assessing PwMS who have mild or almost no ambulatory dysfunction. Likewise, the many uncorrelated features rejected from the final list may include some usefully discriminating features that could serve as metrics of balance and control during movement. The generalizability of these results for PwMS may be limited by the precise format of the TUG task, as well as by the idiosyncrasies of PwMS. For example, Boonstra et al. (20) used a special sit-to-stand assay that differed from the TUG in several important respects: their chair did not have arms, their arthroplasty patients had to position their hands on their hips so that they could not use their arms to aid in standing, and the task did not continue directly into a walking task. Another feature of their protocol that differed from the current study is that their chair had an adjustable seat height so that the participants' knees always started at 90°. The precise position of the knees at the beginning of rising will affect measurements of activity, especially angular velocity. In the TUG protocol the participant is allowed to start with their legs in self-selected positions, which means that the first movement during TUG can include repositioning of the lower limbs into an optimum position for the sit-to-stand transition.

CONCLUSIONS

Our data suggest that positioning sensors on the thighs and measuring pitch angular velocities during the sit-to-stand transition can provide information relating to disability in multiple sclerosis that is more relevant (with larger effect sizes) than both (a) durations of sit-to-stand derived from a lumbar spine sensor, and (b) durations of the entire TUG task. Our data also suggest that adding a thigh sensor-based metric can increase discriminatory power compared with using a spine sensor alone, and that for mild to modest disability (HAI ≤ 2), features that reflect weakness (or strength) are more discriminatory than features that reflect loss of control or imbalance. Finally, the area under the curve, the peak and mean angular velocities, the durations, and the roll measures may provide more universal and broadly sensitive information if they are combined into a composite metric, although for any such metric to be adopted by the medical community, it would have to be transparent. Our regression included the SI-ST transition, the ST-SI transition, part of Turn 2, and overall gait performance (TUG stopwatch time), all of which contributed to the model.

FUNDING

The DAAD funded travel and master's studies for CO. BSMS's Independent Research Project funded a project by JaB.
Histamine Causes Pyroptosis of Liver by Regulating Gut-Liver Axis in Mice

Huangjiu often causes rapid drunkenness, and components such as β-benzyl ethanol (β-be), isopentanol (Iso), histamine (His), and phenethylamine (PEA) have been reported to be linked with intoxication. However, the destructive effect of these components on the gut microbiota and liver is unclear. In this study, we found that oral treatment with these components, especially His, stimulated the levels of oxidative stress and inflammatory cytokines in the liver and serum of mice. The gut microbiota community was changed, and the level of lipopolysaccharide (LPS) increased significantly. Additionally, the cellular pyroptosis pathway was assessed, and correlation analysis revealed a possible relationship between the gut microbiota and liver pyroptosis. We speculate that oral His treatment caused reprogramming of gut microbiota metabolism, and that the increased LPS modulated the gut-liver interaction, resulting in liver pyroptosis, which might pose health risks. This study provides a theoretical basis for the effects of Huangjiu, facilitating the development of therapeutic and preventive strategies for related inflammatory disorders.

Introduction

The cultivation and use of fermented wines can be traced back 7000 years. The typical production technology is a solid or semi-solid brewing mode of 'bilateral fermentation', with single or multiple grains such as glutinous rice, sorghum, wheat, corn, and rice as raw materials and wheat koji as a saccharifying starter [1,2]. During this open fermenting process, various yeasts, bacteria, fungi, molds, and other strains, inoculated naturally or cooperatively, carry out glucose metabolism, protein metabolism, and lipid metabolism, producing complex flavor compounds [2]. A variety of fermented wines have been developed and consumed in Asian countries. Recently, fermented foods have attracted public interest and are consumed more frequently in Western countries, because fermented foods, particularly fermented wine, have been shown to have health-promoting and protective effects. Huangjiu is a nutritive brewed wine made from rice, millet, corn, and other grains through cooking, saccharification, fermentation, filtration, and decoction [3]; thus Huangjiu is a complex mixture composed of many chemical components besides ethanol [4]. Carbohydrates, amino acids, organic acids, and lipids are the main functional components of Huangjiu [3]. However, there are compounds in Huangjiu that may adversely affect its flavor, taste, and even its safety [5]. Rapid drunkenness has always been one of the most important factors affecting consumers' choice of Huangjiu, which has seriously limited the development of the Huangjiu industry. It has been reported that compounds such as His, PEA, β-be, and Iso are linked with this intoxication.

Effects of His, PEA, β-Be, and Iso on Body Weight and Oxidative Stress

The results for body weight, liver weight, and serum and liver biomarkers such as MDA, GSH, and SOD are presented in Figure 1A-I. There was no significant difference in body weight among groups (Figure 1A). Although the liver weight of the β-be, His, Iso, and PEA groups was significantly decreased compared with Normal (p < 0.01, Figure 1B), there was no significant difference in the ratios of liver weight to body weight (Figure 1C). The serum GSH, SOD, and MDA of the Iso group were significantly higher than those of the other groups (p < 0.01, Figure 1D-F). The liver GSH, SOD, and MDA of the His group were significantly lower than those of any other group (p < 0.01, Figure 1G-I).
Effects of His, PEA, β-Be, and Iso on Inflammatory Cytokines in the Liver and Serum

H&E staining was performed to assess the effect of the different components on the liver. As shown in Figure 2A-F, among the five treated groups, the His group exhibited the most pronounced difference compared with the Normal group. Hyperchromatic nuclei and invisible nucleoli were present in the His group. Hepatocytes with two nuclei were common, and other hepatocytes were in the mitotic phase. Hepatocytes with pathological changes could also be seen in the His group, such as lytic cells, absent nuclei, white vacuoles in the cytoplasm, and vacuolated lesions. These lesions occurred over larger areas in the His group than in the other groups (Figure 2A-F). Meanwhile, the His group showed significantly higher TNF-α and IL-10 levels in both the liver and serum (p < 0.01, Figure 2G,H). However, the level of IL-6 showed no significant difference among groups (Figure 2I).

Effects of His, PEA, β-Be, and Iso on Pyroptosis in the Liver

As shown in Figure 3A-C, the levels of Cas-1, GSDMD, and IL-1β in the His group were significantly enhanced compared with those in the other groups (p < 0.01). A similar tendency was seen for the levels of Cas-1 and IL-1β in the Iso group. The level of gut LPS is presented in Figure 3D; the LPS level was significantly higher in the His group than in the other groups (p < 0.01).

Effects of His, PEA, β-Be, and Iso on Gut Microbial Community Structure and Composition

Total DNA was extracted from the feces samples, and the V3-V4 region of the 16S rRNA gene was amplified by PCR. Sequencing of the PCR products was performed using the Illumina MiSeq platform at Majorbio Bio-Pharm Technology Co. Ltd. (Shanghai, China). After removing low-quality and chimeric sequences, 780,661 high-quality sequencing reads were generated from the feces samples.
A total of 613 operational taxonomic units (OTUs) were generated from the high-quality sequences at 97% sequence similarity. The Ace and Simpson indexes of the gut microbiome in the His group and the β-be group decreased significantly (Figure 4A,D). The Shannon index of the gut microbiome in the His group and the Eth group decreased significantly (Figure 4C). However, there was no significant difference in the Chao index between the treated groups and the Normal group (Figure 4B). LEfSe analysis further confirmed the difference in gut microbial structure between the groups (Figure 4E). At the phylum level, 19 phyla were identified. The dominant phyla with relative abundance > 1% across all groups were Bacteroidetes (72.75%), Firmicutes (25.09%), and Proteobacteria (1.39%) (Figure 5A). At the genus level, 213 genera were identified. The dominant genera were norank_f__Muribaculaceae (32.47%), Prevotellaceae_UCG-001 (22.61%), Lachnospiraceae_NK4A136_group (3.24%), unclassified_f__Lachnospiraceae (6.54%), and Alistipes (4.86%) (Figure 5B). We found that His treatment decreased the relative abundance of norank_f__Muribaculaceae compared with the Normal group. The His, Eth, β-be, and PEA groups significantly increased the relative abundance of Prevotellaceae_UCG-001 (p < 0.01). The gut microbiome composition in His group mice differed substantially from that of the mice in the Iso, Eth, and PEA groups (Figure 5C,D).
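For reference, the Shannon and Simpson indices reported in Figure 4 can be computed from an OTU count vector as follows (an illustrative sketch, not the Majorbio pipeline; the counts are hypothetical):

```python
# Illustrative sketch (not the Majorbio pipeline): Shannon and Simpson
# diversity from one sample's OTU counts, as reported in Figure 4.
import numpy as np

def shannon(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def simpson(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return np.sum(p ** 2)   # Simpson dominance D; some pipelines report 1 - D

otus = [120, 80, 40, 5, 1]  # hypothetical OTU counts for one fecal sample
print(f"Shannon = {shannon(otus):.3f}, Simpson = {simpson(otus):.3f}")
```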
Correlation among Gut Microbiota, Liver Pyroptosis and Microbial Metabolism

In order to explore the potential relationship between the gut and the liver, we applied correlation analysis among the gut microbiota, gut LPS, and the levels of IL-1β, Cas-1, and GSDMD through Spearman correlation analysis (Figure 6). As shown in the figure, the relative abundance of Pseudogracilibacillus negatively correlated with the level of Cas-1 (p < 0.05) but significantly positively correlated with the level of GSDMD (p < 0.01); the relative abundance of Sporosarcina negatively correlated with the levels of gut LPS and IL-1β (p < 0.05); and the relative abundance of norank_f__Lachnospiraceae negatively correlated with the level of GSDMD but positively correlated with the levels of Cas-1 and IL-1β (p < 0.05 and p < 0.01, respectively). We also found that the relative abundance of Lachnospiraceae_NK4A136_group negatively correlated with the level of GSDMD (p < 0.05) and positively correlated with the level of Cas-1 (p < 0.05).

Discussion

The mouse model is one of the most commonly used models for investigating liver damage and has long been used to evaluate the pathological processes of alcoholic liver disease (ALD) [17], nonalcoholic liver disease (NALD) [11], and other liver injuries [10]. Compared with gavage, intraperitoneal injection greatly shortens the modeling cycle, but it carries a higher risk of death [18]. In the present study, we investigated the effect of four main components of Huangjiu (His, PEA, β-be, and Iso) on liver pyroptosis by oral treatment in mice. We first investigated the effects of the four components on body weight, liver weight, and the level of oxidative stress in the serum and liver of mice. Simultaneously, the levels of inflammatory cytokines in the liver and serum were examined.
The results showed that treatment with the four components, especially His, significantly stimulated the levels of oxidative stress and inflammatory cytokines in both serum and liver. To explore the underlying mechanism, we performed gut microbiota analysis and a pyroptosis pathway assay, and found that the gut microbial community structure and composition had changed, resulting in an increase in inflammatory LPS and subsequent pyroptosis of liver cells. Correlation analysis indicated a possible relationship between oral treatment, gut microbiota metabolism, and liver pyroptosis. Alcohol can induce an increase in endotoxin level, activate Kupffer cells, and trigger the excessive production of TNF-α, IL-1β, and IL-6, which participate in the pathogenesis of liver diseases [19]. TNF-α can induce SREBP1 expression, resulting in hepatic steatosis [20]; IL-1β and IL-6 are involved in hepatic steatosis and inflammation, which cause hepatic disease [21]. In this study, we found that, beyond the ethanol group, the expression of TNF-α, IL-1β, and IL-6 in the other treated groups also increased significantly, which might trigger liver inflammation directly or indirectly. Pyroptosis is a form of programmed cell death characterized by the formation of pores in the plasma membrane, cell swelling, and plasma membrane rupture, resembling necrosis rather than apoptosis [22]. IL-1β has long been considered an important pyrogen. IL-1β affects the body's innate and adaptive immunity through different mechanisms, significantly enhancing the body's response to infectious diseases [23]. Although the role of pyroptosis in host defense is largely unclear, it has been shown to inhibit the replication and proliferation of microorganisms in vivo [24]. The proinflammatory effect of IL-1β and pyroptosis can induce autoimmune and inflammatory diseases [25]. As recently reported, Gasdermin D (GSDMD) protein was present in nigericin-induced NLRP3 inflammasomes, where it is cleaved at the predicted site upon inflammatory activation and participates in pyroptosis and IL-1β secretion [14]. In the present study, the levels of GSDMD and IL-1β in the liver of the His group both increased significantly, indicating that His-induced pyroptosis occurred in the liver. Intestinal homeostasis is maintained by the interrelationship between the gut microbiota, the intestinal barrier, and the immune system. Imbalance in the gut microbiome often results in destruction of the intestinal barrier and immune system [26]. Under normal physiological conditions, intestinal epithelial cells, the microbiota, and immune cells collaboratively support the steady state of the intestinal system. Intestinal epithelial cells receive signals from the microbiota, such as microbial metabolites (LPS, short-chain fatty acids, etc.) or the microbes per se, so as to maintain the normal physiological function of the mucosal barrier [27]. LPS is associated with an excessive inflammatory response, mainly activating NF-κB by binding to the TLR4/CD14 complex of peripheral blood macrophages or microglia and increasing the production of cytokines such as IL-6 and TNF-α to cause an inflammatory response [28]. In this study, the level of gut LPS was observed to increase significantly in the His group. Meanwhile, the levels of liver IL-6 and TNF-α in the His group also increased, suggesting that the increase in gut LPS might cause liver inflammation, connecting the gut microbiota change to liver pyroptosis.
Nevertheless, the gut microbiota can also regulate host immune function through the action of metabolites or endotoxin [29]. At the same time, immune cells can directly or indirectly affect the growth of the microbiota by releasing cytokines or chemokines [30]. To further explore the underlying mechanism, the effects of His, PEA, β-be, and Iso on the gut microbiota were assessed by 16S rRNA sequencing. We found that His treatment increased the relative abundance of Proteobacteria at the phylum level, which is considered a microbial marker of dysbiosis [31]. At the genus level, His treatment enriched the relative abundance of Lachnospiraceae_NK4A136_group and norank_f__Lachnospiraceae, which belong to the Lachnospiraceae family and were also enriched by Fu brick tea in a previous study [32]. Through the correlation analysis among the gut microbiota, LPS, and the levels of IL-1β, Cas-1, and GSDMD, we found that the relative abundance of Lachnospiraceae_NK4A136_group negatively correlated with the level of GSDMD (p < 0.05) and positively correlated with the level of Cas-1 (p < 0.05). Lachnospiraceae_NK4A136_group has been reported to correlate negatively with obesity [33]. In our study, the body weight of mice in the His group decreased, and we speculate that Lachnospiraceae_NK4A136_group caused pyroptosis and inflammation in the liver, resulting in weight loss. In conclusion, treatment with the four main components of Huangjiu, especially histamine, caused metabolic disorder, oxidative stress, liver pyroptosis, and inflammation by regulating the gut microbiota and its metabolism. The possible mechanism is summarized and illustrated in Figure 7. To sum up, oral histamine treatment caused changes in the gut microbiota structure and composition, especially Lachnospiraceae_NK4A136_group and norank_f_norank_o_Clostridia_UGG-014, resulting in an increase in inflammatory LPS, which subsequently caused oxidative stress and inflammatory damage to the liver, further leading to liver pyroptosis, and may even pose health risks. Although the present study analyzed the destructive effects of the main components of Huangjiu on the liver and investigated the possible underlying mechanism, there were still some limitations. What is the exact source of gut LPS, and how do the main components of Huangjiu trigger its increase? Is there any synergistic or antagonistic effect among the four main components? Many unresolved questions remain. Further studies focusing on the combined effects of the main components, the effective removal of harmful components, and other possible harmful effects of Huangjiu on human health are needed. Overall, this study revealed a possible relationship between the gut microbiota and liver metabolism; the results might help guide the establishment of recommended limits for harmful components in Huangjiu, shed light on gut-liver axis research, and contribute to the development of the Huangjiu industry, and should be further investigated.
Animals and Experimental Treatment

Thirty-six male C57BL/6J mice (6-8 weeks old, 25-30 g, specific pathogen free) and fodder were obtained from Beijing Vital River Laboratory Animal Center [SCXK(Jing)2016-0006]. The mice were acclimated for 1 week before the experiments. The mice were housed in a room with controlled temperature (22 ± 2 °C), humidity (65% ± 5%), and a 12 h dark/light cycle. After the acclimation period, the mice were randomly divided into six groups (n = 6): Normal (normal control: fed normal saline, 15 mL/kg BW), Eth (15% (v/v) ethanol solution, 15 mL/kg BW), His (histamine dissolved in 15% (v/v) ethanol solution, 15 mL/kg BW), PEA (phenethylamine prepared in 15% (v/v) ethanol solution, 15 mL/kg BW), β-be (β-benzyl ethanol dissolved in 15% (v/v) ethanol solution, 15 mL/kg BW), and Iso (isopentanol prepared in 15% (v/v) ethanol solution, 15 mL/kg BW) (Figure S1). The dosages of these substances were set according to Peng et al. [34]. Assuming that a 60 kg adult drinks two 500 mL bottles of Huangjiu, we calculated the intake of each substance per kg of adult body weight from its concentration in Huangjiu, and the corresponding dose was determined from the intake per kg body weight of the mice (a worked sketch of this conversion follows this section). The specific parameters are shown in Table S1. All groups received normal food and water. The treatment was given by gavage once a day for 2 weeks; body weight was measured every other day. After the experiment, the mouse feces were collected in sterile sampling tubes and frozen at −80 °C for later use. At the end of the experiment, the mice were fasted for 12 h, anesthetized, and euthanized, and their serum was collected in endotoxin-free tubules. After labeling, the serum samples were centrifuged at 4 °C at 3000× g for 10 min and stored at −80 °C for further analysis. After serum collection, the liver and gut tissues of the mice were removed and rinsed in pre-cooled normal saline to remove adherent serum, and the surface moisture was blotted with filter paper. The tissue was weighed and cut into pieces; one part was soaked in a preconfigured solution of 4% (v/v) DPEC (prepared in PBS), and the other part was mixed with 9 volumes of pre-cooled 0.9% normal saline to prepare a 10% (w/v) homogenate, ground rapidly in a glass homogenizer in an ice-cold bath, and centrifuged at 12,000× g at 4 °C for 15 min; the supernatant was then collected and stored at −80 °C for later use.
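The dose rationale above can be sketched as follows; the histamine concentration in the example is a hypothetical placeholder, and no allometric scaling is assumed beyond the per-kg equivalence described in the text:

```python
# Sketch of the dose rationale above; the histamine concentration used here
# is a hypothetical placeholder, and the mouse dose is taken as the same
# mg/kg intake as the adult (no allometric scaling is assumed).
def adult_intake_mg_per_kg(conc_mg_per_L, volume_L=1.0, body_weight_kg=60.0):
    """Component intake (mg/kg) for a 60 kg adult drinking `volume_L` liters."""
    return conc_mg_per_L * volume_L / body_weight_kg

adult_dose = adult_intake_mg_per_kg(8.0)   # e.g., 8 mg/L histamine in Huangjiu
solution_mg_per_mL = adult_dose / 15.0     # gavage volume is 15 mL/kg BW
print(f"adult intake: {adult_dose:.3f} mg/kg; "
      f"gavage solution: {solution_mg_per_mL:.5f} mg/mL")
```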
Histopathological Analysis of Liver Tissues

Liver tissue fixed in 4% DPEC was dehydrated, embedded in paraffin, sectioned at 4 µm thickness, and stained with hematoxylin and eosin (H&E). The prepared slides were subsequently examined under a panoramic microscope (3DHISTECH Pannoramic 250, Hungary) [35].

Determination of LPS in Gut Tissues

Gut LPS levels were determined using ELISA assay kits obtained from Jiangsu Meimian Industrial Co., Ltd. (Yancheng, China). The experimental procedures followed the supplier's instructions.

Analysis of Gut Microbiome

16S rRNA analysis was performed on frozen feces to analyze the gut microbial community structure and composition. Fecal total DNA was extracted and sequenced. The universal primer pair 338F and 806R was used to amplify the V3-V4 hypervariable region of the 16S rRNA gene [19]. Purified amplicons were pooled in equimolar amounts and paired-end sequenced on an Illumina MiSeq PE300 platform (Illumina, San Diego, CA, USA) according to standard protocols by Majorbio Bio-Pharm Technology Co. Ltd. (Shanghai, China). Operational taxonomic units (OTUs) were clustered at a 97% similarity cutoff using UPARSE version 7.1 [36], and chimeric sequences were identified and removed. The taxonomy of each OTU representative sequence was analyzed with RDP Classifier version 2.2 [37] against the 16S rRNA database using a confidence threshold of 0.7. Alpha diversity was assessed by the Chao and ACE richness estimators and the Shannon and Simpson diversity indices, together with Sobs (observed richness) [38]. Beta diversity at the OTU level was analyzed by principal component analysis (PCA) and non-metric multidimensional scaling analysis (NMDS). The linear discriminant analysis effect size (LEfSe) method was used to identify differentially abundant taxa in each treatment group, with the LDA score threshold set at 4.0 [39].

Correlation Analysis

To further explore the relationships among His, PEA, β-be, and Iso, the gut microbiota, microbial metabolites, and the biomarkers mentioned above, Spearman correlation analysis was used to study the relationship between different gut microbiota and LPS, and between LPS and the biomarkers. A correlation was considered significant if the absolute correlation coefficient was greater than 0.6 and p < 0.05. Heat maps visualizing the results were generated using R software.

Statistical Analysis

The results are expressed as means ± SD (standard deviation) and were analyzed using the GraphPad Prism 8.0 program (GraphPad Software, San Diego, CA, USA). The data were evaluated using one-way ANOVA followed by Duncan's test; p < 0.05 was considered significant.
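The correlation screen described under Correlation Analysis (Spearman, keeping |r| > 0.6 with p < 0.05) can be sketched as follows; the data structures are illustrative, not the study's actual pipeline:

```python
# Sketch of the correlation screen described above: Spearman correlations
# between genus-level abundances and the measured markers, keeping only
# |rho| > 0.6 with p < 0.05 (the data structures here are illustrative).
from scipy.stats import spearmanr

def significant_correlations(taxa, markers, r_min=0.6, alpha=0.05):
    """taxa/markers: dicts mapping a name to a per-sample value vector."""
    hits = []
    for taxon_name, abundance in taxa.items():
        for marker_name, level in markers.items():
            rho, p = spearmanr(abundance, level)
            if abs(rho) > r_min and p < alpha:
                hits.append((taxon_name, marker_name, rho, p))
    return hits
```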
Differential hepatoprotective mechanisms of rutin and quercetin in CCl4-intoxicated BALB/cN mice

Aim: To investigate the mechanisms underlying the protective effects of quercetin rutinoside (rutin) and its aglycone quercetin against CCl4-induced liver damage in mice.

Methods: BALB/cN mice were intraperitoneally administered rutin (10, 50, and 150 mg/kg) or quercetin (50 mg/kg) once daily for 5 consecutive days, followed by the intraperitoneal injection of CCl4 in olive oil (2 mL/kg, 10% v/v). The animals were sacrificed 24 h later. Blood was collected for measuring the activities of ALT and AST, and the liver was excised for assessing Cu/Zn superoxide dismutase (SOD) activity and GSH and protein concentrations, and also for immunoblotting. Portions of the livers were used for histology and immunohistochemistry.

Results: Pretreatment with rutin and, to a lesser extent, with quercetin significantly reduced the activity of plasma transaminases and improved the histological signs of acute liver damage in CCl4-intoxicated mice. Quercetin prevented the decrease in Cu/Zn SOD activity in CCl4-intoxicated mice more potently than rutin. However, it was less effective in the suppression of nitrotyrosine formation. Quercetin and, to a lesser extent, rutin attenuated inflammation in the liver by down-regulating the CCl4-induced activation of nuclear factor-kappa B (NF-κB), tumor necrosis factor-α (TNF-α), and cyclooxygenase-2 (COX-2). The expression of inducible nitric oxide synthase (iNOS) was more potently suppressed by rutin than by quercetin. Treatment with both flavonoids significantly increased NF-E2-related factor 2 (Nrf2) and heme oxygenase-1 (HO-1) expression in injured livers, although quercetin was less effective than rutin at an equivalent dose. Quercetin suppressed the expression of transforming growth factor-β1 (TGF-β1) more potently than rutin.

Conclusion: Rutin exerts stronger protection against nitrosative stress and hepatocellular damage but has weaker antioxidant and anti-inflammatory activities and antifibrotic potential than quercetin, which may be attributed to the presence of a rutinoside moiety in position 3 of the C ring.

Introduction

Toxic liver injury may lead to acute liver failure, resulting in organ dysfunction. Numerous drugs and toxic substances can cause hepatic damage, with the severity of the changes proportional to the duration of toxic exposure [1]. Carbon tetrachloride (CCl4) poisoning is one of the most commonly used models of acute liver damage. The hepatotoxic effects of CCl4 have been attributed to the excessive production of free radicals [2]. Previous studies have shown that natural compounds with antioxidant activity can ameliorate CCl4-induced liver damage, thus preventing acute liver failure [3,4]. Rutin is a naturally occurring flavonol consisting of the aglycone quercetin and a rutinoside moiety in position 3 of the C ring (Figure 1). These widespread flavonoids, commonly found in various foods [5], exert numerous biochemical and pharmacological activities, such as antioxidant [6], anti-inflammatory [7], and antitumor activities [8]. However, the pharmacological effects of rutin and its aglycone may differ, suggesting that the presence of the rutinoside moiety is crucial for some of the protective effects of rutin. In several studies, rutin exerted anti-inflammatory activity, whereas quercetin was either ineffective against, or actually aggravated, the inflammatory response both in vivo and in vitro [9,10].
However, other authors have demonstrated that quercetin acted as a strong inhibitor of inflammation in an experimental model of rat colitis [11]. Interestingly, rutin was an effective anti-inflammatory agent in chronic inflammatory conditions, such as adjuvant arthritis, whereas quercetin, but not rutin, potently suppressed acute inflammation and reduced carrageenan-induced paw edema [7]. Similarly, the cytochromes CYP1A1 and CYP1B1 were strongly inhibited by quercetin, while rutin exerted no inhibition or only weak inhibitory potential [12]. Thus, the pharmacological activities of rutin and quercetin may differ substantially, which could be attributed to the presence of a sugar moiety in position 3 of the C ring. Previous investigations showed that rutin and quercetin can ameliorate chemically induced liver damage in rodents [13,14]. The objective of this study was to elucidate the molecular mechanisms of the hepatoprotective activity of rutin against acute toxic liver damage in mice and to compare this activity with that of its aglycone quercetin.

Reducing power assay

The reducing power of the samples was determined by the method of Oyaizu [15], as described previously [16]. Briefly, an aliquot of the sample (1.0 mL) at various concentrations (1.25-100 μg/mL) was mixed with phosphate buffer (0.2 mol/L, pH 6.6, 2.5 mL) and 1% potassium ferricyanide (2.5 mL). The mixture was incubated at 50 °C for 20 min. After the addition of 10% trichloroacetic acid (2.5 mL), the mixture was centrifuged at 1000× g for 10 min. The supernatant (2.5 mL) was mixed with distilled water (2.5 mL) and 0.1% iron(III) chloride (0.5 mL), and the absorbance was measured at 700 nm against an appropriate blank. All experiments were performed in triplicate. Trolox was used as a reference.

DPPH radical scavenging assay

The free radical scavenging activity of the samples was measured using the stable DPPH• radical according to the method of Blois [17], as described previously [18]. Briefly, a 0.1 mmol/L solution of DPPH• in ethanol was prepared, and this solution (0.5 mL) was added to the sample solution in ethanol (1.5 mL) at different concentrations (0.39-50 μg/mL). After the reaction had proceeded in the dark at room temperature for 30 min, the absorbance was measured at 517 nm. The capability to scavenge the DPPH• radical was calculated using the following equation: scavenging (%) = [(A0 − A1)/A0] × 100, where A0 is the absorbance of the control reaction and A1 is the absorbance in the presence of the sample, corrected for the absorbance of the sample itself. All experiments were performed in triplicate. Trolox was used as a reference.
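The percentage-inhibition formula above, together with the linear-regression IC50 estimate used later in the statistical analysis, can be sketched as follows (the absorbance values are hypothetical):

```python
# Sketch of the scavenging calculation and the linear-regression IC50
# estimate used in the statistics; the absorbance values are hypothetical.
import numpy as np

def inhibition_percent(a0, a1):
    """(%) = [(A0 - A1)/A0] * 100; a1 should already be blank-corrected."""
    return (a0 - a1) / a0 * 100.0

conc = np.array([3.125, 6.25, 12.5, 25.0])   # µg/mL, hypothetical dilution series
a1 = np.array([0.62, 0.51, 0.33, 0.12])      # sample absorbances at 517 nm
inh = inhibition_percent(0.70, a1)           # A0 = 0.70 for the control

slope, intercept = np.polyfit(conc, inh, 1)  # linear regression of % vs. conc
ic50 = (50.0 - intercept) / slope            # concentration giving 50% inhibition
print(f"IC50 ~ {ic50:.1f} µg/mL")
```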
Total antioxidant capacity assay

The total antioxidant capacity of rutin and quercetin was evaluated by the phosphomolybdenum method according to the procedure of Prieto et al [19], as described previously [16]. Briefly, the sample was dissolved in ethanol, and an aliquot of the solution (0.3 mL) was combined in a vial with the phosphomolybdenum reagent.

Nitric oxide radical scavenging assay

The nitric oxide (NO•) scavenging activity of the samples was determined according to the method described by Rai et al [20] with a slight modification. The NO• generated from sodium nitroprusside in an aqueous solution at physiological pH interacts with oxygen to produce nitrite ions, which were measured by the Griess reaction. Equal volumes of 10 mmol/L sodium nitroprusside in phosphate-buffered saline (pH 7.4) were mixed with different concentrations of the sample (0.39-50 μg/mL) and incubated at 25 °C for 150 min. After the incubation, 1.0 mL of the reaction mixture was mixed with 1% sulfanilamide (0.5 mL). After 5 min, 0.1% naphthylethylenediamine dihydrochloride (0.5 mL) was added, the solution was mixed, and the absorbance of the pink-colored chromophore was measured at 540 nm against the corresponding blank solution. Trolox was used as a standard. All experiments were performed in triplicate. The NO• scavenging activity was expressed as the percentage of inhibition according to the following equation: scavenging (%) = [(A0 − A1)/A0] × 100, where A0 is the absorbance of the control without a sample and A1 is the absorbance in the presence of the sample.

Animals

Male BALB/cN mice, 10-12 weeks old and weighing 23-25 g, were obtained from our breeding colony. The animals were housed under standard environmental conditions and had free access to tap water and a standard rodent diet (pellet, type 4RF21 GLP, Mucedola, Italy). All experimental procedures were approved by the Ethical Committee of the Medical Faculty, University of Rijeka.

Experimental design

The mice were divided into six groups, each containing five animals. The normal control (group I) received saline, and group II received CCl4 dissolved in olive oil (2 mL/kg, 10% v/v) intraperitoneally (ip). Rutin or quercetin, dissolved in 5% (v/v) DMSO, was administered ip at 10, 50, and 150 mg/kg (groups III, IV, and V, respectively) or 50 mg/kg (group VI), respectively, once daily for five consecutive days. Immediately after the last dose, the mice were given CCl4. The doses of rutin were selected on the basis of our preliminary studies (data not shown), whereas the middle dose of quercetin (50 mg/kg) was used for the comparison with rutin. The mice were sacrificed 24 h after the injection of CCl4. Blood was collected by cardiac puncture, and heparinized plasma was separated for the determination of ALT and AST activities. The gall bladder was removed, and the liver was carefully excised, washed with saline, blotted dry, and divided into samples. The tissue specimens were snap frozen in liquid nitrogen and stored at −80 °C if not used on the same day. The liver samples were used to assess Cu/Zn SOD activity and GSH and protein concentrations, and also for immunoblotting. Portions of the livers were immersed in 4% paraformaldehyde for histology and immunohistochemistry.

Determination of hepatotoxicity

The activity of transaminases (ALT and AST) in plasma was measured using a Bio-Tek EL808 Ultra Microplate Reader (BioTek Instruments, Winooski, VT, USA) according to the manufacturer's instructions.

Measurement of oxidative stress

Mouse livers were homogenized in 50 mmol/L phosphate-buffered saline (PBS), pH 7.4, using a Polytron homogenizer (Kinematica, Lucerne, Switzerland). The supernatants were separated by centrifugation at 15,000× g for 20 min at 4 °C (Beckman L7-65 Ultracentrifuge, Beckman, Fullerton, CA, USA) and used to measure Cu/Zn SOD activity and GSH content. The Cu/Zn SOD activity was determined as described previously [3]. The GSH content was evaluated according to Anderson [21], with modifications. Briefly, the supernatants were deproteinized with 1.25 mol/L metaphosphoric acid and centrifuged at 5000× g for 10 min at room temperature (Rotina 420R, Andreas Hettich GmbH, Tuttlingen, Germany). Then, 25 µL of the deproteinized sample was mixed in a cuvette with 700 µL of 0.3 mmol/L NADPH in PBS, 100 µL of 6 mmol/L DTNB, and water to give a final volume of 1.0 mL.
The reaction was started by the addition of 10 µL of GR (50 units/mL), and the absorbance was monitored at 405 nm for 25 min. The GSH concentration in the samples was determined using a standard curve generated with different GSH solutions under the same conditions. The protein content in the liver homogenates was estimated by the Bradford method [22].

Histopathology

Paraformaldehyde-fixed tissues were processed routinely, embedded in paraffin, sectioned, deparaffinized, and rehydrated using standard techniques [23]. Hepatocellular necrosis was evaluated by measuring the size of the necrotic area in hematoxylin and eosin (H&E) stained liver sections. The necrotic areas were manually selected, and their size was determined using Cell F v3.1 software (Olympus Soft Imaging Solutions, Münster, Germany).

Statistical analysis

The data were analyzed using StatSoft STATISTICA version 7.1 and Microsoft Excel 2000 software. Differences between the groups were assessed by one-way ANOVA and Dunnett's post hoc test. The values in the text are mean ± standard deviation (SD). For the in vitro studies, the concentration of sample providing 50% inhibition (IC50) was obtained by interpolation from a linear regression analysis. Differences at P < 0.05 were considered statistically significant.
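A minimal sketch of this statistical workflow (one-way ANOVA followed by Dunnett's post hoc test against the control) is shown below with synthetic ALT-like values; scipy.stats.dunnett requires SciPy 1.11 or later:

```python
# Sketch of the statistical workflow with synthetic ALT-like values:
# one-way ANOVA, then Dunnett's post hoc test against the control group
# (scipy.stats.dunnett requires SciPy >= 1.11).
import numpy as np
from scipy.stats import f_oneway, dunnett

rng = np.random.default_rng(1)
control = rng.normal(40, 5, 5)      # group I: saline (n = 5 per group)
ccl4 = rng.normal(160, 20, 5)       # group II: CCl4
rutin50 = rng.normal(90, 15, 5)     # group IV: rutin 50 mg/kg + CCl4

f_stat, p = f_oneway(control, ccl4, rutin50)
print(f"ANOVA: F = {f_stat:.1f}, p = {p:.3g}")

res = dunnett(ccl4, rutin50, control=control)
print(res.pvalue)                   # one p-value per treated group vs. control
```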
Liver weight and plasma activity of transaminases

The relative liver weight in CCl4-intoxicated mice decreased compared with that in the controls. Rutin dose-dependently prevented liver weight loss, significantly more than quercetin at the equivalent dose. The plasma AST and ALT activities significantly increased 24 h after CCl4 intoxication (P < 0.05). Treatment with rutin decreased the activity of these transaminases in a dose-dependent manner (Table 1). However, quercetin showed a weaker protective effect against hepatocellular damage than rutin at the equivalent dose.

Effect of rutin and quercetin on hepatic oxidative stress

Our results showed that CCl4 administration induced oxidative stress in mouse livers. The Cu/Zn SOD activity and GSH concentration were significantly lower compared with the control group (Table 1) (P < 0.05). Treatment with rutin elevated the Cu/Zn SOD activity and GSH concentration in a dose-dependent manner. Quercetin at 50 mg/kg significantly attenuated the decrease in these oxidative stress markers in CCl4-intoxicated mice, restoring Cu/Zn SOD activity more potently than rutin at 50 mg/kg (P < 0.05). The histological findings are shown in Figure 3. The livers of the control mice showed a normal morphology and architecture (Figure 3A). In the CCl4-intoxicated mice, severe hepatic damage with massive centrilobular necrosis was detected (Figure 3B). In the livers of mice treated with rutin at 10 mg/kg, no histological improvement was found (Figure 3C). Rutin at 50 mg/kg markedly reduced the size of the hepatic necrotic areas (Figure 3D), whereas high-dose rutin (150 mg/kg) almost completely prevented hepatocellular damage (Figure 3E). In mice treated with quercetin at 50 mg/kg, larger necrotic areas were present than in the livers of mice treated with rutin at the equivalent dose (Figure 3F and 3G).

Amelioration of hepatic inflammation by rutin and quercetin

To determine whether rutin or quercetin could reverse the acute liver inflammation induced by CCl4, we analyzed proinflammatory markers; the livers of control mice showed essentially no NF-κB immunopositivity (Figure 4A). In contrast, strong NF-κB immunoreactivity was detected in the CCl4-treated mice (Figure 4B). Immunohistochemical analysis revealed NF-κB p65 localization in both the cytoplasm and the nucleus of hepatocytes and in Kupffer cells. The low-dose rutin treatment (10 mg/kg) did not substantially affect NF-κB immunopositivity (Figure 4C). However, the higher doses of rutin, 50 and 150 mg/kg, progressively decreased NF-κB expression and prevented the accumulation of NF-κB p65 in the nuclei (Figure 4D and 4E). Quercetin had a more pronounced effect on NF-κB suppression than rutin at the equivalent dose (Figure 4F and 4G). CCl4 administration also increased the hepatic levels of TNF-α and COX-2 (Figure 5). Rutin dose-dependently reduced both TNF-α and COX-2 expression compared with the CCl4-treated mice. As with NF-κB, quercetin suppressed TNF-α and COX-2 expression more effectively than rutin at the equivalent dose.

Effect of rutin and quercetin on hepatic nitrosative stress

Analysis of the hepatic expression of iNOS and the formation of the NO•-dependent product, 3-NT, revealed no immunopositivity in the control mice (Figures 6A and 7A). CCl4 administration induced strong expression of iNOS and 3-NT (Figures 6B and 7B). Rutin dose-dependently reduced the iNOS (Figure 6C, 6D, and 6E) and 3-NT (Figure 7C, 7D, and 7E) immunopositivity compared with the CCl4-treated mice. However, quercetin was less effective than rutin in reducing both iNOS (Figure 6F and 6G) and 3-NT (Figure 7F and 7G) immunoreactivity.

Induction of the Nrf2/HO-1 pathway in the liver by rutin and quercetin

The hepatic expression of HO-1 (Figure 5), the cytoprotective enzyme that plays a critical role in the response to stressful conditions, and Nrf2, its upstream inducer [24], is shown in Figure 8. The livers of control mice were Nrf2 immunonegative (Figure 8A). The livers of CCl4-intoxicated mice were weakly Nrf2 immunopositive (Figure 8B). However, treatment with rutin resulted in a marked dose-dependent induction of Nrf2 (Figure 8C, 8D, and 8E) in the cytoplasm and nucleus of hepatocytes and Kupffer cells. Nrf2 immunoreactivity in hepatocytes of mice treated with quercetin (Figure 8F) markedly increased compared with the CCl4 group but was lower than in mice receiving rutin at the equivalent dose (Figure 8G). Similarly, rutin dose-dependently increased the hepatic HO-1 levels compared with CCl4-treated mice, whereas quercetin was a less potent up-regulator of HO-1 than rutin at the equivalent dose (Figure 5).

Antifibrotic potential of rutin and quercetin in the liver

The livers of control mice did not show substantial TGF-β1 immunopositivity (Figure 9A). In contrast, TGF-β1 was overexpressed in the livers of CCl4-intoxicated mice (Figure 9B). The administration of rutin dose-dependently ameliorated TGF-β1 expression (Figure 9C, 9D, and 9E). The hepatic TGF-β1 immunoreactivity in mice treated with quercetin (Figure 9F) was significantly lower than that in mice receiving rutin at the equivalent dose (Figure 9G) (P < 0.05).

Discussion

The results of the present study suggest the importance of the sugar moiety in position 3 of the C ring for the specific pharmacological activities of rutin and quercetin. The glycosylation of flavonoids reduces their antioxidant activity compared with the corresponding aglycones [25]. Our results showed that quercetin possessed higher reducing power and DPPH• and NO• free radical scavenging abilities in vitro than rutin.
Quercetin more potently prevented the decrease in Cu/Zn SOD activity in mouse livers. However, it was less effective in ameliorating protein nitrosylation, suggesting that the in vitro free radical scavenging ability of compounds does not necessarily correlate with their in vivo efficacy, and indicating that absorption and/or metabolism are key modulators of in vivo activity [26]. Additionally, the hepatic lesions and necrosis induced by CCl4 were markedly reduced in mice treated with rutin, whereas quercetin was less effective. The results of the current study showed that the amelioration of hepatic necrosis was closely related to the reduction of nitrosative, but not oxidative, stress. Reactive oxygen species (ROS) generated during CCl4 intoxication may induce NF-κB activation and consequently stimulate the production of cytotoxic and proinflammatory cytokines, such as TNF-α [27]. Our results showed that rutin, and more potently quercetin, suppressed both NF-κB and TNF-α expression in injured livers. NF-κB is also involved in the regulation of COX-2 and iNOS gene expression by binding to their promoter regions [28]. COX-2 and iNOS exert a prominent role under inflammatory conditions by producing prostaglandins and NO•, respectively [29,30]. Thus, the anti-inflammatory effect of quercetin and rutin in the CCl4-injured liver could be attributed to inhibition of the NF-κB pathway, which is in agreement with previous findings [31]. Additionally, increased NO• synthesis and superoxide generation could result in the formation of peroxynitrite [32] and the nitration of protein tyrosine residues [33], which actively contribute to the development of hepatic necrosis [34]. In this study, the activation and nuclear accumulation of NF-κB p65 in CCl4-intoxicated mice coincided with COX-2 and iNOS overexpression, which was prevented by both flavonoids. The nuclear presence of COX-2 and iNOS suggests their involvement in the regulation of some nuclear functions. However, rutin exhibited lower potency in reducing COX-2 expression than quercetin, which is in agreement with previous findings [35]. Interestingly, rutin suppressed the expression of both iNOS and 3-NT more effectively than quercetin; the latter effect coincided with the amelioration of hepatic necrosis. These findings agree with Raghav et al [36], who showed a stronger inhibition of iNOS than of COX-2 gene expression by rutin in murine macrophages. TGF-β, a pleiotropic growth factor, has been shown to stimulate fibroblast proliferation and increase the synthesis of ECM proteins [37]. Previously, we showed that the fibrogenic potential of the CCl4-injured liver is closely related to TGF-β1 expression [38]. In the current study, TGF-β1 expression markedly increased in both hepatocytes and non-parenchymal cells of the CCl4-intoxicated liver. Although Kupffer cells and HSCs are considered the main sources of TGF-β [39], hepatocytes may also produce TGF-β. In cultured hepatocytes, latent TGF-β was rapidly detectable during culture due to demasking of the mature TGF-β by calpains [40]. The nuclear presence of TGF-β1 could be attributed to increased nuclear membrane permeability during the activation of apoptosis [41]. Our results showed that treatment with both flavonoids markedly ameliorated the overexpression of TGF-β1, suggesting a reduction of the fibrogenic potential in the liver.
HO-1, the inducible isoform of heme oxygenase, plays a critical role in cell protection against acute and chronic liver injury [42]. The transcription factor Nrf2 is considered a key regulator of HO-1 expression [24]. In the current study, hepatic injury was associated with low expression of HO-1 and its upstream inducer Nrf2 but with high expression of NF-κB. The treatments with rutin and, to a lesser extent, quercetin were associated with increased Nrf2 and HO-1 expression in the liver, which coincided with the suppression of NF-κB activation. Most recently, Liu et al [43] showed that quercetin acts as an inducer of HO-1 in primary rat hepatocytes. Interestingly, the induction of HO-1 was found in quercetin-treated but not in rutin-treated rat glioma C6 cells [44] and in the hydrogen peroxide-induced apoptosis of RAW264.7 macrophages [45]. However, rutin acted as a strong HO-1 inducer in a rat model of liver ischemia-reperfusion injury [46], suggesting a cell type-dependent activation of this enzyme. Additionally, the nuclear translocation of HO-1 could be involved in the regulation of genes responsible for cytoprotection against oxidative stress [47].

In conclusion, the results of this study demonstrate that rutin and quercetin can ameliorate acute liver damage by at least four mechanisms: acting as scavengers of free radicals, inhibiting NF-κB activation and the inflammatory response, exerting antifibrotic potential, and inducing the Nrf2/HO-1 pathway. The rutinoside moiety in position 3 of the C ring could be responsible for the more pronounced protective effects against iNOS induction, nitrosative stress, and hepatocellular necrosis. The aglycone quercetin exerted higher antioxidant and anti-inflammatory activities and antifibrotic potential than rutin. The antioxidant actions of rutin and quercetin were partially responsible for their beneficial effects in injured liver tissue. However, the antioxidant properties of these flavonoids cannot solely explain the stronger protective activity of rutin against hepatocellular damage. Thus, the modulation of signaling pathways emerges as an important mode of action of these flavonoids and should be considered as a specific therapeutic strategy. The application of these flavonoids in medical practice should be further confirmed by conducting preliminary placebo-controlled clinical studies.
Non-amenability of R. Thompson's group F

We present a proof of non-amenability of R. Thompson's group F.

Definition 1.2 (Height function). Let G be a group. A function h : G → N ∪ {0} is called a height function on G if the following conditions hold: (i) h(xy) ≤ h(x) + h(y) for all x, y ∈ G; (ii) h(x) = h(x^{-1}) for all x ∈ G; (iii) h(1) = 0, where 1 ∈ G denotes the identity element of G.

Then Γ is not amenable.

Remark 1.4. By passing to a subgroup if necessary, we may and will assume that ξ and η generate Γ. Condition D-(ii) implies that for all g ∈ Γ and ε ∈ {−1, 1}, we have h(gξ^{εp}) ∈ {h(g) − p, h(g) + p}.

Theorem 1.3 applies to R. Thompson's group F to establish its non-amenability. We discuss this application in later sections.

Definition 1.5 (Shifts of height functions). Let G be a group and h : G → N ∪ {0} be a height function. Then for any g ∈ G we can define a function h_g : G → N ∪ {0} by letting h_g(x) = h(g^{-1}x) for all x ∈ G.

Remark 1.6. In general, h_g : G → N ∪ {0} is not a height function because h_g(1) ≠ 0. But it can be viewed as a shift of the function h : G → N ∪ {0}, in the sense that it measures height not with respect to 1 ∈ G but with respect to g ∈ G. Condition (D) can then be re-written in terms of the shifts of the original height function as well.

Remark 1.7. Notice that the equality h(xη^δξ^{-p}) = h(x) + p does not hold for all x ∈ Γ, i.e. we cannot force this condition globally, simply because the height of an element is always non-negative. In some situations, it is useful to consider a height function of type h : Γ → Z, i.e. with image in Z instead of N ∪ {0}. For example, let G = Sol(2, d), d ≥ 1, denote the free solvable group of derived length d on two generators a, b. Then the "height function" h(w(a, b)) = σ_b is naturally interesting, where σ_b denotes the sum of the exponents of b in the word w(a, b). For this height function the equality h(xη^δξ^{-p}) = h(x) + p is indeed satisfied for all x ∈ G, δ ∈ [−2, 2], where we set η = a, ξ = b^{-1}.

Remark 1.8. Roughly speaking, for a finite subset F ⊂ Γ, the condition h_g(xξ^{-p}) = h_g(x) + p for all x ∈ F means that in going from x to xξ^{-p} the height jumps up. In other words, the height jumps up along the ξ^{-p} shifts from the right. In groups, this happens at the expense of the height jumping down along the ξ^{p} shifts from the right for many x's in (some neighborhood of) F, i.e. when we go from x to xξ^{p}. However, the property of the height jumping up locally along ξ^{-p} shifts seems too strong in some examples of groups. Conditions (i)-(iv) of (D) offer a very interesting substitute: on the one hand, along the ξ^{-p} shifts, the height indeed jumps up for all but at most one x on a given horizontal segment of bounded length (condition (D)-(i)); on the other hand, for such "an unsuccessful" element x, there are many elements in a certain neighborhood of x such that the height actually jumps up along the ξ^{p} shift (condition (D)-(ii)). Moreover, one can deduce that there are many elements related to x in a special way where the height jumps up along both the ξ^{-p} shift and the ξ^{p} shift. Thus the existence of an "unsuccessful" element is compensated by the existence of "super-successful" elements.

Remark 1.9. The fact that we use an arbitrary height function makes the claims of the theorems not only very strong but also provides great flexibility in applications; the word-length function itself is one example, as the sketch below indicates.
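As a quick illustration (ours, not part of the source), the word-length function |·| with respect to any finite symmetric generating set of G is a height function in the sense of Definition 1.2:

$$|xy| \le |x| + |y|, \qquad |x^{-1}| = |x|, \qquad |1| = 0,$$

since concatenating geodesic words for x and y spells xy, reversing and inverting the letters of a geodesic word for x spells x^{-1}, and the empty word represents the identity.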
For example, very often very little is known about the Cayley metric of the group, so one can work with the most convenient height function instead.

We will need the following

Definition 1.10 (Horizontal and vertical lines). Let H be the cyclic subgroup generated by η, and V be the cyclic subgroup generated by ξ. A subset gH ⊂ Γ will be called a horizontal line (passing through g ∈ Γ), and a subset of the form gV will be called a vertical line (passing through g ∈ Γ). If x, y belong to the same horizontal line L ⊂ Γ, then we say x is on the left (right) of y if x = yη^n where n < 0 (n > 0).

Remark 1.11. Because of the conditions h(η) = h(η^{-1}) = 0 and the subadditivity of the height function h, if x, y ∈ Γ are on the same horizontal line, then h(x) = h(y): indeed, if y = xη^n then h(y) ≤ h(x) + h(η^n) = h(x) and, symmetrically, h(x) ≤ h(y). Even more generally, h_g(x) = h_g(y) for all g ∈ Γ. So height is constant on a fixed horizontal line.

Generalized binary trees

We will need some notions about binary trees and generalized binary trees of groups. Let F be a finite subset of a finitely generated group Γ. For us, a binary tree is a tree such that all vertices have valence 3 or 1 and one of the vertices of valence 3 is marked and called a root.

Definition 2.1 (Binary trees). A binary tree T = (V, E) of F is a finite binary tree such that V ⊆ F. The root vertex of T will be denoted by r(T). Vertices of valence 3 are called internal vertices and vertices of valence 1 are called end vertices. The sets of internal and end vertices of T are denoted by Int(T) and End(T) respectively.

Definition 2.2 (see Figure 1). A generalized binary tree T = (V, E) of F is a finite tree satisfying the following conditions:

(i) All vertices of T have valence 3 or 1. Vertices of valence 3 are called internal vertices, and vertices of valence 1 are called end vertices.

(ii) All vertices of T consist either of triples (i.e. subsets of cardinality 3) or of single elements of F. If a vertex has valence 3 then it is a triple; if it has valence 1 then it is a singleton. For two distinct vertices u, v ∈ V, their subsets, denoted by S(u), S(v), are disjoint. The union of all subsets (triples or singletons) representing all vertices of T will be denoted by S(T).

(iii) One of the vertices of T is marked and called the root of T. The root always consists of a triple and has valence 3, and it is always an internal vertex. We denote the root by r(T).

(iv) For any finite ray (a_0 = r(T), a_1, a_2, ..., a_k) of T which starts at the root, and for any i ∈ {0, 1, 2, ..., k}, the vertex a_i is called a vertex of level i.

(vi) One of the elements of each triple vertex is chosen and called a central element; the other two elements are called side elements.

(vii) If u is a vertex of level n ≥ 2 of T of valence 3, and v, w are the two adjacent vertices of level n + 1, then the set of all vertices which are closer to v than to u forms a branch of T beyond the vertex u. T has two branches beyond the vertex u; the second branch consists of the set of all vertices which are closer to w than to u.

(viii) Similar to (vii), we define the branches beyond the root vertex. So the tree consists of the root and the three branches beyond the root.

Definition 2.4. If T = (V, E) is a generalized binary tree of F ⊂ Γ and A ⊆ V, then the union of all subsets (triples or singletons) which represent the vertices of A will be denoted by S(A). In particular, the union of all subsets representing all vertices of T will be denoted by S(T).

Definition 2.6.
The set of end vertices of a generalized binary tree T = (V, E) will be denoted by End(T), and the set of internal vertices will be denoted by Int(T). Also, Single(T), Central(T), and Side(T) denote the sets of all singleton vertices, central elements, and side elements respectively.

Remark 2.7. By the definition of a generalized binary tree, Single(T) = End(T).

Definition 2.8. For a generalized binary tree T = (V, E), for all v ∈ V\{root(T)}, p(v) denotes the vertex which is adjacent to v such that level(p(v)) = level(v) − 1; and for all v ∈ V\End(T), n(v) denotes the set of vertices v′ which are adjacent to v such that level(v′) = level(v) + 1.

Remark 2.9. Here n stands for next and p stands for previous. If v is an internal vertex then n(v) consists of a pair of vertices unless v is the root.

We will need the following

Proof. The proof is by induction on |S(T)|. For the trivial generalized binary tree we have |S(T)| = 6 and |S(End(T))| = 3, so the inequality is satisfied. Let T be any non-trivial GBT, let v ∈ End(T) be such that level(v) is maximal, and let w = p(v). By definition, v is a singleton and w is a triple vertex. Since level(v) is maximal, n(w) consists of a pair of singleton vertices. Let u denote the other singleton vertex in n(w). Let also w = (a, b, c), and denote w′ = (a). By deleting u and v from T and replacing w with w′ we obtain a new GBT with a smaller set S(T), and the claim follows by induction.

Quasi-GBTs: We need even more general objects than GBTs, namely quasi-GBTs. A quasi-GBT is a somewhat degenerate form of a GBT. The major difference is that internal vertices of odd level are allowed to be pairs (instead of triples).

Definition 2.11. A quasi-GBT T = (V, E) of F is a finite tree satisfying the following conditions:

(i) All vertices of T have valence i ∈ {1, 2, 3}. Vertices of valence i ∈ {2, 3} are called internal vertices, and vertices of valence 1 are called end vertices.

(ii) All vertices of T consist of k-tuples of elements of F where k ∈ {1, 2, 3}. If a vertex has valence i then it is an i-tuple. For two distinct vertices u, v ∈ V, their subsets, denoted by S(u), S(v), are disjoint. The union of all subsets (triples, pairs or singletons) representing all vertices of T will be denoted by S(T).

(iii) One of the vertices of T is marked and called the root of T. The root is always an internal vertex and contains three elements. We denote the root by r(T).

(iv) For any finite ray (a_0 = r(T), a_1, a_2, ..., a_k) of T which starts at the root, and for any i ∈ {0, 1, 2, ..., k}, the vertex a_i is called a vertex of level i.

(vi) One of the elements of each internal vertex is chosen and called a central element; the elements of the vertex other than the central element are called side elements.

(vii) Vertices of even level are triples.

(viii) If u is a vertex of level n ≥ 2 of T of valence i ∈ {2, 3}, and v_1, ..., v_{i−1} are the adjacent vertices of level n + 1, then the set of all vertices which are closer to v_j than to u forms a branch of T beyond the vertex u. T has (i − 1) branches beyond the vertex u.

(ix) Similar to (viii), we define the branches beyond the root vertex.

Definition 2.12 (η-normal quasi-GBT). Let Γ be a finitely generated group satisfying condition (A)

Labeled quasi-GBTs: We will introduce a bit more structure on quasi-GBTs. Let Γ be a finitely generated group satisfying condition (A), let F ⊂ Γ be a finite subset of Γ, and let F_0 ⊆ F be partitioned into 2-element subsets {x, y} such that y ∈ {xξ, xξ^{-1}}. Thus every element in F_0 has a ξ-partner, which we denote by N_ξ(x); since ξ is fixed, we will drop it and simply write N(x).
By definition, N(N(x)) = x for all x ∈ F_0. Assume also that the following conditions are satisfied:

(L2) if w is an internal vertex then b is the central element of w and a is a side element of v.

Then we label the edge e by the element a^{-1}b. We will also write a = start(e), b = end(e). On the other hand, each vertex on the ray is labeled as well: all vertices b_2, b_3, ..., b_{k−1} are labeled by some elements l(b_2), l(b_3), ..., l(b_{k−1}). [However, notice that the labeling of these vertices actually depends on r; if r, r′ are finite level increasing rays passing through the vertex v and diverging at v, then the label of v with respect to r will differ from the label of v with respect to r′.] Then we associate to r the group element l(e_1)l(b_2)l(e_2)···l(b_{k−1})l(e_{k−1}), which we will denote by L(r).

Remark 2.14. Notice that in a labeled quasi-GBT we associate a word L(r) to every level increasing ray r. Notice also that the labeling structure of a generalized binary tree of F depends on the choice of the non-torsion element ξ ∈ Γ and of the subset F_0 ⊂ F which can be partitioned into pairs of ξ-partners, and it depends on the partitioning as well.

At the end of this section, we would like to introduce an important structure on quasi-GBTs, namely, the order. This notion will be crucial in the proof of Theorem 1.3 for guaranteeing that in building super-quasi-GBTs we do not get loops, so there is no obstacle in building the trees.

(ii) for all u, v, w ∈ V(T), if u < v and w belongs to the branch beyond v, then w < u. Then T with < will be called ordered.

Zigzags and other intermediate notions

In this section we will assume that Γ is a finitely generated group satisfying condition (A). A good and useful example of such a group is the group Z² itself. For every x ∈ Γ, we call the sets xH and xV the horizontal line and the vertical line, respectively, passing through x. A sequence Z = (x_1, x_2, ..., x_m) of elements of Γ will be called a zigzag if for all

The number m will be called the length of Z.

Balanced segments. We will be working with tilings of the group Γ into segments.

Definition 3.5 (Segments). Let x, y ∈ Γ belong to the same horizontal line L, with x on the left side of y, i.e. there exists n ∈ N such that y = xη^n. Then by seg(x, y) we denote the finite set of all points (elements) in between x and y, including x and y, i.e. seg(x, y) = {xη^i | 0 ≤ i ≤ n}. A finite subset I of L is called a segment if there exist x, y ∈ I such that I = seg(x, y).

Definition 3.6 (Balanced segments). A finite segment I will be called a balanced segment if |I| is divisible by 6.

Definition 3.7 (Leftmost and rightmost elements). Let I be a segment. Then there exists a unique element z_1 ∈ I such that for any z ∈ I there exists a non-negative integer n such that z = z_1η^n. The element z_1 will be called the leftmost element of I. Similarly, we define the rightmost element of I: there exists a unique element z_2 ∈ I such that for any z ∈ I there exists a non-positive integer n such that z = z_2η^n; z_2 will be called the rightmost element of I.

We would like to conclude this section by introducing some notions for labeled η-normal generalized binary trees.

Definition 3.10 (Starting element). Let T be a labeled η-normal quasi-GBT, and let a be the central element of root(T). The element N(a) will be called the starting element of T and denoted by start(T).

Definition 3.11 (Special quasi-GBTs).
A labeled η-normal quasi-GBT is called special if {a′} is an end vertex, where a′ is the starting element of T. In the proof of Theorem 1.3 the GBTs will be constructed piece-by-piece; in other words, some GBTs will be constructed as a union of elementary pieces. A trivial labeled η-normal GBT T is called an elementary piece. If T is special as a quasi-GBT then it is called a special elementary piece; otherwise we call it an ordinary elementary piece.

It is useful to observe that quasi-GBTs can be obtained from a GBT as follows. Let T be an η-normal and labeled GBT. Let also v_1, ..., v_k ∈ Int(T) be not necessarily distinct internal vertices, and let S_i = {v ∈ V(T) | v belongs to some level increasing ray r which starts at w_i}. By deleting ⋃_{1≤i≤k}({a_i} ∪ S(S_i)) from T we obtain an η-normal and labeled quasi-GBT. If T is a special GBT then we obtain a special η-normal and labeled quasi-GBT.

Definition 3.12. Let T be a labeled η-normal quasi-GBT. We say T is successful if it contains a triple vertex of odd level.

Generalized binary trees in partner assigned regions

In this section, we will be assuming that Γ is a finitely generated group satisfying conditions (A) and (D), ε < 1/(100|K|), F is a connected (K, ε)-Følner set, and {I(x)}_{x∈X} is a collection of pairwise disjoint balanced segments of length at most 1200 tiling Γ.

We will need the following notions. For any subset S′ ⊂ F_1 we will denote the minimal region containing S′ by R(S′).

Definition 4.2 (Partner assigned regions). A region S ⊆ F_1 is called partner assigned if there exist a subset S_0 ⊆ S and a function n : S_0 → {−p, p}. We will denote xξ^{n(x)} = N(x) and x = N(xξ^{n(x)}). Notice that the partners of elements from F_1 do not necessarily lie in F_1, but they always lie in F. In the proof of Theorem 1.3 we will be assigning partners to elements of F_1 before we start building the trees, so F_1 will be a partner assigned region. We will arrange the partner assignment to satisfy certain conditions enabling us to push the rays of the trees to higher levels, or at least not to let them come below a certain level.

Definition 4.3 (Zigzags and GBTs respecting partner assignment). Let S ⊆ F_1 be a partner assigned region and Z = (x_1, x_2, ..., x_n) be a quasi-zigzag in S. We say Z respects the partner assignment of S if for all i ∈ {1, ..., n − 1}, whenever x_i and x_{i+1} do not lie on the same horizontal line, we have x_{i+1} = N(x_i). We say a quasi-GBT T respects the given partner assignment if it is a labeled quasi-GBT with respect to this partner assignment. A partner assignment induces a labeling structure for a quasi-GBT T; from now on we will identify the notion of "labeled quasi-GBT" with the notion of a "quasi-GBT respecting the partner assignment". We will also identify the notion of a "balanced (quasi-)zigzag" with the notion of a "(quasi-)zigzag respecting the partner assignment", if the partner assignment is fixed. (Notice that if a quasi-zigzag satisfies our fixed partner assignment, then it is balanced; cf. Definitions 3.1 and 3.2.)

Definition 4.4. An element z ∈ F_1 is called successful if at least one of the following three conditions is satisfied. Otherwise, z is called unsuccessful. More generally, for any region S ⊆ F_1, we say z ∈ S is a successful element of S if either at least one of conditions (i), (iii) is satisfied or N(z) ∉ S.

Now we will introduce certain partner assignments for regions in F_1. First, we describe a natural tiling and partner assignment of the group Z².
Namely, for every m, n ∈ Z, we let I_{m,n} = {(x, m) ∈ Z² | 360n ≤ x < 360(n + 1)}. The partner assignment is defined as follows: for every (x, y) ∈ Z², N(x, y) = (x, y + p) if y is even, and N(x, y) = (x, y − p) if y is odd.

Remark 4.5. In the group Z² with the standard generating set η = (1, 0), ξ = (0, 1), let us assume the above tiling {I_{m,n}}_{m,n∈Z} and partner assignment, and let p = 1, h(x, y) = |y| for all (x, y) ∈ Z². Then for every quasi-zigzag Z in Z² which starts at an element g ∈ Z² and respects the given partner assignment,

Remark 4.6. The above tiling and partner assignment immediately induce the pullback tiling by balanced segments of length 360 and a partner assignment through the epimorphism π : Γ → Z² which satisfies the following condition: if x, y ∈ Γ are on the same horizontal line and N(x) = xξ^p, then N(y) = yξ^p. We will fix this tiling in Γ and the partner assignment for the rest of the paper. Notice that any zigzag Z = (x_1, ..., x_m) which respects this tiling and the partner assignment is necessarily balanced.

We now would like to introduce more structure on the partner assigned regions as well as on quasi-GBTs in partner assigned regions.

Definition 4.7 (Suitable/unsuitable segments). A segment I(x), x ∈ X, is called suitable if for some (consequently, for any) g ∈ I(x) we have π(g) = (x, y) ∈ Z² where y ∈ Z_odd. If I(x) is not suitable then it is called unsuitable.

Remark 4.8. Notice that, by the definition of the partner assignment, the segment I(x) is suitable iff for some (consequently, for any) g ∈ I(x) we have N(g) = gξ^{-p}.

For the rest of the paper we make the following assumptions: let Γ be a finitely generated amenable group satisfying conditions (A) and (D); let {I_k(x)}_{x∈X, 0≤k≤359} be a tiling of Γ by segments of length 3; and for all x ∈ X, let I(x) be a balanced segment of length 360 × 3 = 1080 such that I(x) = ⋃_{0≤k≤359} I_k(x); we assume that the tiling {I(x)}_{x∈X} is obtained by the pullback of the tiling in Remark 4.5. All the labeled η-normal GBTs and quasi-GBTs will respect this tiling {I(x)}_{x∈X} and the fixed partner assignment, and all the elementary pieces will respect the tiling {I_k(x)}_{x∈X, 0≤k≤359}. The notion of a region will be understood with respect to this tiling, i.e. a subset S ⊆ F_1 will be called a region if it is a union of tiles intersected with F_1.

The intervals I_k(x), 0 ≤ k ≤ 359, will be called short intervals, and the intervals J_l(x), 0 ≤ l ≤ 179, will be called medium intervals. All the labeled η-normal quasi-GBTs will respect the tiling {I_k(x)}_{x∈X, 0≤k≤359} unless specifically stated otherwise, in which case they will respect the tiling {J_l(x)}_{x∈X, 0≤l≤179}.

We will assume that all the labeled GBTs and quasi-GBTs have a root consisting of a suitable short segment. We may and will assume that there is no unsuitable segment I_k(x) ⊂ F_1 such that for some z ∈ I_k(x), N(z) ∉ F_1. Also, we will assume that F_1 has at least as many suitable medium intervals as non-suitable ones. If u, v are on the same horizontal line and u = vη^n, n ∈ Z, we will write d(u, v) = |n|. For g ∈ Γ, |g| will denote the length of g in the left invariant Cayley metric of Γ with respect to the generating set {ξ, η}.

Figure 2. This is a special labeled quasi-GBT with a root vertex root(T), starting element start(T), and different types of vertices a(T), b(T), c(T) and d(T). A vertex is a k-tuple (k ∈ {1, 2, 3}) iff it contains k big black dots; an empty circle indicates that the element is omitted, and the small dot means the element is not in F_1.
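Returning to the Z² model of Remark 4.5, the tiling and partner assignment are small enough to check by machine. The following Python sketch (ours; the representation of points as pairs is an ad hoc choice) assumes the reading p = 1, η = (1, 0), ξ = (0, 1), and verifies that N is an involution and that an element is suitable exactly when its partner is the ξ^{-p} shift.

```python
# Sketch of the Z^2 tiling and partner assignment of Remark 4.5
# (assumes p = 1, eta = (1, 0), xi = (0, 1), h(x, y) = |y|).

P = 1

def tile_index(x, y):
    """Return (m, n) with (x, y) in I_{m,n} = {(x, m) : 360n <= x < 360(n+1)}."""
    return (y, x // 360)

def partner(x, y):
    """N(x, y) = (x, y + p) for even y, and (x, y - p) for odd y."""
    return (x, y + P) if y % 2 == 0 else (x, y - P)

def is_suitable(x, y):
    """A segment is suitable iff its elements have odd second coordinate."""
    return y % 2 == 1

# N is an involution: N(N(g)) = g for all g.
for g in [(x, y) for x in range(-720, 720, 97) for y in range(-5, 6)]:
    assert partner(*partner(*g)) == g

# Suitable elements are exactly those whose partner is the xi^{-p} shift.
for x, y in [(0, y) for y in range(-5, 6)]:
    assert is_suitable(x, y) == (partner(x, y) == (x, y - P))

# Partners always lie on the same vertical line, one tile row apart.
assert tile_index(*partner(10, 2))[0] == tile_index(10, 2)[0] + P
```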
Completeness

We make the assumptions of Section 4. In this section we introduce the key notion of completeness of quasi-GBTs. The completeness of the sequence T_1, T_2, ..., T_m simply means that we do not stop the trees unnecessarily.

Definition 5.1 (Complete sequence of quasi-GBTs). Let T_1, T_2, ..., T_n be a sequence of mutually disjoint labeled η-normal quasi-GBTs of F. We say this sequence is pre-complete if for every i ∈ {1, ..., n}: (i) the root of T_i consists of a triple I_k(y) ⊂ F_1, y ∈ X, 0 ≤ k ≤ 359, where I(y) is a suitable segment; (ii) the singleton vertices (i.e. end vertices) of T_i belong to unsuitable segments; (iii) if v is a vertex of even level of T_i then v is a triple and

Definition 5.2 (Successful complete sequence). Let T_1, T_2, ..., T_n (*) be a complete sequence of labeled η-normal quasi-GBTs of F and let I be an unsuitable short segment in F_1. We say (*) is successful at I if one of the following conditions holds:

Notice that if a pre-complete sequence T_1, T_2, ..., T_n satisfies the condition ⋃_{1≤i≤m} S(T_i) ⊇ F_1, then it is complete unless there exists an unsuitable short segment I witnessing the failure. In this case, we will say that the sequence (T_1, ..., T_m) is unsuccessful at I.

It will be useful to observe that labeled quasi-GBTs can be obtained from labeled GBTs by simply deleting some branches. Let T be a labeled GBT such that all the internal vertices of T are triples of the form (z, zη^{n_1}, zη^{n_2}) for some n_1, n_2 ∈ Z; moreover, all end vertices are of even level. Let S_i = {v ∈ V(T) | v belongs to some level increasing ray r which starts at w_i}. By deleting ⋃_{1≤i≤k}({a_i} ∪ S(S_i)) from T we obtain an ordinary labeled quasi-GBT R. If T is special (or respects the partner assignment, or a tiling) then we will say that R is special (or respects the partner assignment, or a tiling). Notice that the definition of a labeled quasi-GBT as in the above construction agrees with Definitions 2.11 (quasi-GBT) and 2.13 (labeled quasi-GBT).

The following proposition will be very useful in the proof of Theorem 1.3.

Proof. Let us recall that {J_l(x)}_{x∈X, 0≤l≤179} forms a tiling of Γ; also, every J_l(x) has length 6 and contains exactly two short intervals; we will call these short intervals neighbors. Notice also that the quasi-GBTs T_1, ..., T_m respect the tiling {I_k(x)}_{x∈X_0, 0≤k≤359}. We may assume that if v ∈ V(T_i), 1 ≤ i ≤ m, and S(v)\F_1 ≠ ∅, then v is an end vertex; moreover, F_1 contains at least as many suitable segments as non-suitable ones. We will also assume that F_1^- contains at least as many unsuitable short intervals at which T_1, T_2, ..., T_m are successful as F_1^+ (and we will ignore the successfulness in F_1^+). We may furthermore assume that for some 1 ≤ m′ < m,

As agreed in Section 4, the labeled η-normal quasi-GBTs T_1, ..., T_m respect the tiling {I_k(x)}_{x∈X, 0≤k≤359}. Notice that for such a quasi-GBT T, if I_1, I_2 are two neighboring short intervals, then for all 1 ≤ i ≤ m, either S(T_i) ∩ I_1 = ∅ or S(T_i) ∩ I_2 = ∅. In the proof below, we will construct labeled η-normal quasi-GBTs which respect just the tiling {J_l(x)}_{x∈X, 0≤l≤179}. Recall that, by the agreement, the root and, more generally, all vertices of even level of any labeled η-normal quasi-GBT consist of a suitable short interval. These are the neighboring short intervals, and at I_i^- the sequence T_1, ..., T_m is successful. We will call these intervals

The main idea is to construct GBTs Ω = (T_1, ...
, T_M) of F such that the following conditions hold: (c1) every suitable short interval in F_1 forms an internal vertex of one of the GBTs from Ω; (c2) every unsuitable medium interval in F_1 contains an internal vertex of one of the GBTs from Ω; (c3) every successful medium interval contains either two internal vertices or one internal vertex and one starting element of GBTs from Ω.

Let now T be a labeled η-normal quasi-GBT and R be a labeled quasi-GBT such that S(R) ∩ S(T) = ∅. We introduce the following types of operations.

Breaking up a quasi-GBT: Let v be a pair vertex of T, S(v) = {a, b}, level(a) = level(N(a)) + 1, level(b) = level(N(b)) − 1. Let also w be a vertex of T such that N(b) ∈ S(w). Then w is a triple vertex and is adjacent to two vertices w_1, w_2. Let V_i = {v ∈ V(T) | v belongs to some level increasing ray r which starts at w_i} for all i ∈ {1, 2}. Then S := {b} ∪ S(w) ∪ S(w_1) ∪ S(w_2) ∪ S(V_1) ∪ S(V_2) forms a labeled η-normal quasi-GBT, while S(T)\S forms another labeled η-normal quasi-GBT. Hence, T can be broken into two labeled η-normal quasi-GBTs. If we break T at all of its k pair vertices (notice that k ≤ 3), then we obtain k + 1 η-normal quasi-GBTs. We will call these the η-normal quasi-GBTs obtained as a result of the break up.

Gluing of type 1: Let v be as above, let R be special, and let σ_1 = {c_1} be its starting vertex such that v and σ_1 belong either to the same or to neighboring short intervals. Then S(T) ∪ S(R) forms a new labeled quasi-GBT R′, where root(R′) = root(T) and the pair vertex v is replaced with a triple vertex.

Gluing of type 2: Let R be special, σ_1 = {c_1} be its starting vertex, and σ_2 = {c_2} be the end vertex of T such that c_1, c_2 belong to the same short interval. Then the union S(T) ∪ S(R) forms a new labeled quasi-GBT R′ such that root(R′) = root(T).

Collecting special quasi-GBTs: Let R_1, R_2, R_3 be disjoint special labeled quasi-GBTs with starting elements s_1, s_2, s_3 respectively. Then the set S(R_1) ∪ S(R_2) ∪ S(R_3) forms a GBT R′ with root(R′) = (s_1, s_2, s_3). All internal vertices of R′ consist of a triple on some horizontal line, except perhaps the root.

Now we are ready to start the proof. Inductively, for any i ∈ {1, ..., m′} we will associate a finite set of labeled quasi-GBTs C_i = (S_1^{(i)}, ..., S_{n_i}^{(i)}).

Step: Assume that, for some i ∈ {1, ..., m′ − 1}, the collection C_i has been constructed. Let Σ = {σ_1, ..., σ_k} be a maximal subset of End(T_{i+1}) such that the following two conditions are satisfied: (i) if 1 ≤ q_1 < q_2 ≤ k then σ_{q_1} and σ_{q_2} do not belong to the same short interval; (ii) for all q ∈ {1, ..., k}, σ_q belongs to F_1. If Σ = ∅ then we let C_{i+1} = C_i ∪ T_{i+1}. Otherwise, σ_1 := {c_1} belongs to F_1^-. Let also σ_1 belong to the short interval I. Then we have two cases:

Case 1: I\⋃_{1≤q≤n_i} S_q^{(i)} contains an element besides c_1. Then, by completeness, we have two sub-cases: a) There exists j ∈ {1, ..., i} such that the starting element s of T_j belongs to the same short interval I as σ_1. Then we perform gluing of type 2 between σ_1 and s. b) Otherwise. In this case we do not perform any operation and go to σ_2.

Case 2: I\⋃_{1≤q≤n_i} S_q^{(i)} contains no element besides c_1. Then we do not perform any operation and, by going to σ_2, apply the process to σ_2, and so on.

Notice that as a result of the construction, the collection D = {S_1, ..., S_n} of labeled quasi-GBTs satisfies the following conditions:

Step: For j ∈ {1, ...
, m − m′}, we perform break up operations at all pair vertices of T_{m′+j}, and let Φ_1, ..., Φ_r be the set of all quasi-GBTs and Ψ = {s_1, ..., s_r} be the set of starting elements obtained as a result of the break up. Let also K_0 ⊆ {1, ..., r} be the unique maximal subset such that for all t ∈ K_0, if s_t belongs to the short interval I then I ⊂ F_1^+. For all t ∈ K_0, let s_t belong to the short interval I_t (recall that I_t ⊂ F_1^+). Let I_t′ be the neighboring short interval in F_1^-. Then we have one of the following two cases:

Case 1: I_t′ contains a pair vertex of some labeled quasi-GBT S from D = D_0. Then we perform gluing of type 1 between S and Φ_t.

Case 2: I_t′ consists of three starting elements of some three labeled quasi-GBTs. Then we collect these labeled quasi-GBTs into one GBT. We denote the resulting collection of GBTs by D̃. Notice that conditions (c1)-(c3) are satisfied for D̃, i.e.: (i) every suitable short interval in F_1 forms an internal vertex of one of the GBTs from D̃; (ii) every unsuitable medium interval in F_1 contains an internal vertex of one of the GBTs from D̃; (iii) every successful medium interval contains either two internal vertices or one internal vertex and one starting element of GBTs from D̃.

Let R_1, ..., R_n denote the set of GBTs obtained as a result of the process. Then the following conditions hold: (i) for every suitable medium interval J ⊆ F_1, J ⊂ ⋃_{1≤i≤n} S(Int(R_i)); (ii) for every unsuitable medium interval J ⊆ F_1, |J ∩ ⋃_{1≤i≤n} S(Int(R_i))| ≥ ; moreover, since the number of suitable medium intervals is not less than the number of unsuitable ones, we obtain that |F_1 ∩ ⋃_{1≤i≤n} S(Int(R_i))| ≥ (3/4)|F_1| + (1/3)L, where L denotes the number of successful medium intervals.

Let Ω be a maximal subset of F_1 such that for any two distinct elements of Ω the balls of radius 35p centered at them are disjoint; then |Ω| ≥ |F_1|/|B_{70p}|. Now, by the assumption, every ball of radius 20p centered at F_1 has a non-empty intersection with a successful medium interval. Then every ball of radius 35p centered at F_1 (in particular, each ball B_{35p}(x), x ∈ Ω) contains a successful medium interval; thus we obtain that L ≥ |Ω| ≥ |F_1|/|B_{70p}|. Then, by Lemma 2.10, we obtain that

We make the assumptions of Section 4. The following notions will be needed.

Definition 6.1. Let x_1, x_2 ∈ X, A_1 ⊆ I(x_1), A_2 ⊆ I(x_2). We say A_1, A_2 are connected by a quasi-zigzag if for some u_1 ∈ A_1, u_2 ∈ A_2 there exists a quasi-zigzag Z such that Z starts at u_1 and ends at u_2. If Z is also balanced, then we say A_2 is connected to A_1 by a balanced quasi-zigzag. We say the pairs (I_1, I_2) and (I_3, I_4) are connected with non-interfering zigzags if there exist zigzags Z_1, Z_2 in F_1 such that I_1 and I_2 are connected by Z_1, I_3 and I_4 are connected by Z_2, and moreover there exists no (x, k) ∈ Y_0 such that Z_i ∩ I_k(x) ≠ ∅ for both i ∈ {1, 2}.

Remark 6.5. In the above definition, the sets S_n are defined inductively. Notice that the set S_n ∪ N(S_n) may contain a large region even if q_1 = q_2 = ··· = q_n = 0. In general, since h_1, ..., h_n are arbitrary non-negative integers, by taking them sufficiently big we also obtain that the entire F_1 (more precisely, F_1 ∪ N(F_1) ∪ (F_1\F_1′) = F_1 ∪ (F_1\F_1′)) is an extremal region. This observation will be used in the sequel, but the most interesting case of an extremal region is when (q_1, ..., q_n) = (0, ..., 0); this observation will be crucial in our study. A somewhat more general case, 0 ≤ q_i ≤ 2p for all 1 ≤ i ≤ n, is also interesting, but we will not be using it in this paper.
Notice also that S_n is a union of suitable segments. Moreover, although S_n is a region, R(g_1, ..., g_n; q_1, ..., q_n) may not be; we hope that calling it an extremal region will not cause confusion.

Definition 6.7 (Minimal elements). Let I(x), x ∈ X, be a suitable segment in the extremal region R(g_1, ..., g_n), and let I ⊆ I(x) be a subsegment. We say z ∈ I is a minimal element of I if z is unsuccessful with respect to some h_{g_i}, 1 ≤ i ≤ n, and there is no u ∈ I and j ∈ {1, ..., i − 1} such that u is unsuccessful with respect to h_{g_j}. We say z is an absolutely minimal element of I if it is minimal and, for every other minimal u ∈ I,

min{j : z is unsuccessful w.r.t. h_{g_j} but u is successful w.r.t. h_{g_j}} < min{j : u is unsuccessful w.r.t. h_{g_j} but z is successful w.r.t. h_{g_j}}.

Condition (i) of (D) states that on a given horizontal segment of length 2 at most one element is unsuccessful with respect to the same h_g. Condition (ii) then states that, given an unsuccessful element u, one can relate to it an element u′ in a certain special way such that not only is u′ successful, but an even nicer property holds for u′, namely h(u′ξ^{-p}η^{i_1}ξ^{p}η^{i_2}ξ^{-p}η^{i_3}ξ^{p}) = h(u′) + 4p for certain values of i_1, i_2, i_3. Thus the existence of an unsuccessful element u is compensated by the existence of u′, which is related to u and is even better than successful. We will materialize this idea in the proof. First, we need the following notions.

Definition 6.8 (Successfully related elements). Let u ∈ I_k(x) ⊂ F_1 belong to a suitable short interval, and let u′ ∈ I_k(x). We say the element u′ is

Definition 6.9 (Successfully related special elementary pieces). Let (g_1, ..., g_N) be a finite sequence of elements of Γ, and let u ∈ I = I_k(x) ⊂ F_1, x ∈ X_0, 0 ≤ k ≤ 359, be such that u is an absolutely minimal element of I. Let T be a labeled η-normal special elementary piece of F. We say T is successfully related to u if N(u) is a starting element of T (so, in particular, it is an end vertex). In this case we also say T is successfully related to I.

Definition 6.10. Let T be a special elementary piece successfully related to u ∈ I for some suitable short segment I = I_k(x) = {u, u_1, u_2}, x ∈ X_0, 0 ≤ k ≤ 359. Let A ⊆ F. We say T is successful relative to A if, for all i ∈ {1, 2}, whenever N(u_i) belongs to an unsuitable short segment I_i ⊂ F_1, A contains all elements of I_i which are successfully related to u.

Let us emphasize that if a suitable short segment I = I_k(x) = {z_1, z_2, z_3} ⊂ F_1 in the extremal region R(g_1, ..., g_N), where z_1 and z_3 are the leftmost and rightmost elements of I respectively, admits a minimal element but not an absolutely minimal element, then I has exactly two minimal elements, namely z_1 and z_3. Let T be an ordinary elementary piece such that root(T) = I; we will denote the vertex of T which contains N(z_i) by v_i. Then both elements of I_2\S(v_2) are successfully related to either z_1 or z_2; N(z_1)η is successfully related to z_3 and N(z_3)η^{-1} is successfully related to z_1. These observations are useful, but in the sequel we will choose the sequence (g_1, ..., g_n) dense enough that every suitable short segment will contain an absolutely minimal element.

Now we would like to introduce a notion of partial (or linear) order on the set of suitable short intervals of F_1 as well as on the set of all elements belonging to the suitable short intervals of F_1.
The proof of Theorem 1.3 is based on constructing a complete sequence of elementary pieces or quasi-GBTs, and the order we introduce will indicate where to start the next quasi-GBT in the sequence at every step.

Partial order: We now introduce a strict partial order on extremal regions (we say a relation ∼ on a set S is a strict partial order if it is transitive and irreflexive, i.e. there is no x ∈ S such that x ∼ x). By prolonging the sequence of origins, this partial order can be made a linear order. Notice that, since the functions h_g : Γ → N ∪ {0} are constant on the horizontal lines, the definition does not depend on the choices of u_1, u_2. If y_2 ≺ y_1 then we also say I_{k_1}(y_1) is bigger than I_{k_2}(y_2). We also say I_{k_1}(y_1) is strongly bigger than I_{k_2}(y_2) if condition (c1) holds.

Once we define the order ≺ on the set of suitable short segments of an extremal region (and on the set of elements which belong to suitable short segments of the extremal region), we can restrict it to any subset. It is then clear from condition D-(iii) that for any region R ⊆ F_1 (in particular, for the region F_1 itself) we can choose a long enough sequence g_1, ..., g_L of origins such that for the choice (q_1, ..., q_L) = (0, ..., 0) the partial order ≺ on the set of suitable short segments of R (and on the set of elements which belong to suitable short segments of R) becomes a linear order. Indeed, in defining the partial order ≺, we observe that R_0(g_1) ⊇ R_0(g_1, g_2) ⊇ ··· ⊇ R_0(g_1, g_2, ..., g_L). Notice that R_0(g_1, g_2, ..., g_L) is never empty. If it contains only one suitable short segment of F_1, then this segment is the biggest in our order. But if it contains more than one such segment, then we look into the question of which one of them contains unsuccessful elements with respect to h_{g_i} for the least possible i.

This way of viewing our definition of the partial order motivates a more general notion of extremal regions with constraints. We will avoid defining this notion in its most general natural version, and rather restrict ourselves to a certain type (two types) which will be used in the sequel. First, we need the following

Definition 6.11. Let Z = (x_1, ..., x_n) be a zigzag in F_1 which respects the fixed tiling and partner assignment. We say Z belongs to the class R_left (R_right) if, for all 1 ≤ i ≤ n − 1 with x_i^{-1}x_{i+1} = ξ^p, x_i is not the leftmost (rightmost) element of any unsuitable short segment.

Definition 6.12 (Extremal regions with constraints on the left). Let g_1, g_2, ..., g_n ∈ Γ be distinct elements and I_1, I_2, ..., I_n be a sequence of suitable short segments of F_1. Let the sub-region S ⊆ F_1 be such that there exist a sequence (S_0, S_1, ..., S_n) of subsets and sequences (h_1, ..., h_n) and (q_1, ..., q_n) of non-negative integers such that

We will write R(g_1, ..., g_n; q_1, ..., q_n; I_1, ..., I_n) = S ∪ N(S) ∪ (F_1\F_1′), and call R(g_1, ..., g_n; q_1, ..., q_n; I_1, ..., I_n) the extremal region of the sequence (g_1, ..., g_n). By replacing condition (iv) with a corresponding condition, we obtain the notion of an extremal region with constraints on the right. Notice that conditions (i)-(v) imply that for all 1 ≤ i ≤ n − 1, I_{i+1} is connected to I_i with a zigzag from R_left.

In the definition of the partial order above, by replacing R(g_1, ..., g_n; q_1, ..., q_n) with R(g_1, ..., g_n; q_1, ..., q_n; I_1, ...
, I_n) everywhere, we define a partial order with constraint on the left (right). We will use partial orders both with and without constraints. Unless said otherwise, "a partial order" will have no constraint. We now fix the sequence (g_1, ..., g_L) and study suitable short segments with respect to the order ≺ that it defines.

Definition 6.13. Let I = I_k(x) = {u, v, w} ⊂ F_1 be a suitable short segment where u is the leftmost and w is the rightmost element, and let ≺ be a linear order on P(F_1). We will say I is of

If T is an elementary piece with root at I = I_k(x), we say T is of type 1 (or type 2, 3, 4) if I is of type 1 (or type 2, 3, 4 respectively). We also need the following notions.

Definition 6.14 (Exceptional regions). Let I = {u_1, ..., u_m} ⊆ I(x) ⊂ P(F_1), where the elements of the segment are listed from leftmost to rightmost. We say I is positively exceptional if u_1 ≺ u_2 ≺ ... ≺ u_m, and negatively exceptional if u_m ≺ u_{m−1} ≺ ... ≺ u_1; we say I is exceptional if it is either positively or negatively exceptional. Similarly, we say a region R ⊆ F_1 is exceptional if either all segments I ⊆ P(R) are positively exceptional or they are all negatively exceptional.

Definition 6.15 (Neighbor segments). Let I_{k′}(x′), I_{k″}(x″) be suitable short segments in F_1. We say they are neighbors if there exist an unsuitable segment I_k(x) ⊆ F_1 and u, v ∈ I_k(x) such that uξ^p ∈ I_{k′}(x′), vξ^p ∈ I_{k″}(x″). I_k(x) will be called a connecting segment.

Remark 6.16. In other words, two short intervals are neighbors if there is a quasi-zigzag of length 4 connecting them. Notice that by condition D-(iii) the connecting segment is unique.

Definition 6.17 (Connected components). A sub-region of F_1 is called connected if any two suitable short segments in it are connected by a quasi-zigzag respecting the tiling {I_k(x)}_{x∈X, 0≤k≤359}. If R is a maximal connected sub-region of F_1, then R ∪ N(R) is called a connected component of F_1.

Now we are ready to start the proof of the theorem. Without loss of generality we may and will assume that F_1 is connected (otherwise we consider each connected component separately). Then, by Remark 2.3, if u, v ∈ F_1 belong to the suitable intervals of F_1, then either both h(u), h(v) are even or both are odd; without loss of generality again, we may and will assume that h(u) is even whenever u belongs to a suitable segment of F_1. We can choose a sequence g_1, ..., g_L such that the partial order ≺ imposed on P(F_1) is linear and the partial order (still denoted by ≺) imposed on X(F_1) is strongly linear, i.e. given any two suitable short intervals indexed by elements of X(F_1), one is strongly bigger than the other. Then, without loss of generality and by shifting the tiling of the group Γ by k ∈ {1, 2} units to the right if necessary, we may assume that either F_1 is an exceptional region or there exists a finite collection of short segments I_{k_i}(x_i), 1 ≤ i ≤ M, such that one of the following conditions holds:

Case A: all short segments I_{k_i}(x_i) are of type 1;

Case B: all short segments I_{k_i}(x_i) are of type 3, and all of them have a neighbor of a different type which precedes the short segment I_{k_i}(x_i);

Case C: all short segments I_{k_i}(x_i) are of type 4, and all of them have a neighbor of a different type which precedes the short segment I_{k_i}(x_i).

We will first assume that for all short intervals I_{k_i}(x_i), 1 ≤ i ≤ M, one of the cases A, B, C holds.
We will start the proof by describing the base (more precisely, the first step) of the inductive process. Consider the extremal region R_1 = R_0(g_1, ..., g_N). Since the order ≺ is linear, R_1 contains only one short segment. Let I_{k_0}(x_0) be this short segment. Then I_{k_0}(x_0) has an absolutely minimal element; let z be this element. Then we build a complete labeled η-normal special elementary piece T_1 with root(T_1) = I_{k_0}(x_0) and starting element at N(z) such that T_1 is successfully related to I_{k_0}(x_0), and we carry the inductive process by applying it to F_2. We continue the process of building special labeled η-normal pieces T_1, T_2, ..., T_N and regions F_1, F_2, ..., F_N inductively such that:

(i) the sequence T_1, T_2, ..., T_N is complete;

(iv) for every i ∈ {1, ..., N}, T_i is a special elementary piece such that root(T_i) is the biggest short interval I in the set P(F_i) with respect to the linear order ≺, and T_i is successfully related to I relative to ⋃_{1≤j≤i−1} S(T_j) (so, in particular, the starting element of T_i is N(z) where z is the biggest element of I);

(v) for all 1 ≤ i ≤ N, S(T_i) is a subset of an extremal region R_0(g_1, ..., g_L) of F_i.

Let I_{k_i}(x_i) = (u_i, v_i, w_i), where u_i is the leftmost and w_i is the rightmost element. Now we assume Case A, by the assumption of which we have a connecting segment I_0 whose leftmost and rightmost elements are α and β respectively. Then α is successfully related to u_i and β is successfully related to w_i. Then α, β ∈ ⋃_{1≤j≤N_i−1} S(T_j), because otherwise I_{k_i}(x_i) would not be the biggest element of P(F_{N_i}). But then, because of completeness, both α and β are starting elements of some elementary pieces T_{j_1}, T_{j_2}, where j_1 < N_i, j_2 < N_i. Hence the special piece T_{N_i} is successful relative to ⋃_{1≤j≤N_i−1} S(T_j); moreover, the sequence T_1, ..., T_{N_i} is successful at I_0.

Now let us assume Case B (Case C is similar to Case B). Let I′ = (u, v, w) be the short suitable segment preceding I_{k_i} which is a neighbor of I_{k_i} and has a different type. Let us assume it has type 4 (the other cases are similar or easier). Let also u be the leftmost and w the rightmost element of I′, and let I_1, I_2, I_3 be the unsuitable short segments containing N(u), N(v), N(w) respectively. Then one of these segments is the connecting segment of I_{k_i} and I′. Suppose first that I_3 is the connecting segment. Since I′ has type 4, both elements in I_3\{N(w)} are successfully related to u; hence I′ cannot precede I_{k_i}, which contradicts our assumption. Let now I_2 be the connecting segment. Then I_2 = (α, N(v), β), where α, β are the leftmost and the rightmost elements respectively. Then α is successfully related to u and β = N(w_i); hence T_{N_i} is successful relative to ⋃_{1≤j≤N_i−1} S(T_j), and T_1, ..., T_{N_i} is successful at I_2. Finally, let I_1 be the connecting segment. Then N(u) ∈ I_1 is the starting element of one of the T_j, 1 ≤ j < N_i; moreover, the set (I_1\{N(u)}) ∩ ⋃_{1≤j≤N_i−1} S(T_j) either contains the starting element of one of the T_j or consists of a pair vertex of T_{N_i}. Hence again T_{N_i} is successful relative to ⋃_{1≤j≤N_i−1} S(T_j), and T_1, ..., T_{N_i} is successful at I_1.

Thus we obtain that either all but at most M of the suitable short segments of F_1 are of type 3, or all but at most M of the suitable short segments of F_1 are of type 4. But notice that if Γ is amenable, then for all ε > 0 it admits (F, ε)-Følner sets as well (i.e. one can replace K with F).
This implies the following intermediate proposition, which is interesting in its own right.

Proposition 6.18. If Γ is amenable and satisfies conditions (A) and (D), then for all ε > 0, Γ admits a (K, ε)-Følner set F which is either positively exceptional or negatively exceptional.

By Proposition 6.18 we may assume that there exists a sequence C which induces a linear order ≺_1 on F_1 which is either positively exceptional or negatively exceptional. On the other hand, by conditions D-(iii) and D-(iv), there exists a sequence C′ which induces a linear order on X(F_1) and P(F_1) with right constraint such that no suitable segment I_k(x) in F_1 is of type 3, and there exists a sequence C″ which induces a linear order on X(F_1) and P(F_1) with left constraint such that no suitable segment I_k(x) in F_1 is of type 4. Thus we have one of the following two cases:

Case 1. There exist sequences C_1, C_2 inducing linear orders ≺_1, ≺_2 on X(F_1) and P(F_1) such that ≺_1 is positively oriented and ≺_2 is a linear order with right constraint.

Case 2. There exist sequences C_1, C_2 inducing linear orders ≺_1, ≺_2 on X(F_1) and P(F_1) such that ≺_1 is negatively oriented and ≺_2 is a linear order with left constraint.

These two cases are symmetric, and we will be assuming we are in Case 1. Then all suitable short segments in F_1 are of type 3 with respect to the order ≺_1, and no suitable short segment in F_1 is of type 3 with respect to the order ≺_2. Because of the right constraint, then, any suitable short segment I in F_1 is either of type 2 or of type 4, and the rightmost element of I is the least element of it.

If we have at least (1/(900|B_{50p}|))|F| distinct pairs (I_{2j−1}, I_{2j}), 1 ≤ j ≤ m, of suitable short segments in F_1 such that these pairs are connected with mutually non-interfering zigzags, then again we obtain a complete sequence of labeled η-normal quasi-GBTs covering F_1 and being successful in at least m non-suitable short segments. Then by Proposition 5.4 we again obtain a contradiction. Thus we may assume that there exists a subset Y_1 with |Y_1| ≥ (1/(100|B_{50p}|))|F| such that, with respect to the ordering ≺_2, either for all (x, k) ∈ Y_1 the suitable segment I_k(x) is of type 2, or for all (x, k) ∈ Y_1 the suitable segment I_k(x) is of type 4. Let us assume the latter case (the former case is very similar).

Now we will be working with both of the orderings ≺_1 and ≺_2. Let g_1 be the least element of C_1. Without loss of generality we may assume that Y^{(1)} = Y^{(1)}_odd. Let us observe that if for some (x, k) ∈ Y^{(1)}_odd the suitable segment I_k(x) contains an element u with h_{g_1}(uξ^{-p}) < h_{g_1}(u), then the following three conditions hold:

For all non-negative integers n, let also G_{1,n} and H_{1,n} denote the corresponding sets. Then for all distinct i, j ∈ Z_+, there is no segment with non-trivial intersections with both G_{1,i} and G_{1,j}, and similarly, there is no segment with non-trivial intersections with both H_{1,i} and H_{1,j}; in addition, if i − j ∉ {0, 1}, then again no segment intersects both G_{1,i} and H_{1,j}. However, if i − j ∈ {0, 1}, u ∈ G_{1,i}, v ∈ H_{1,j}, and u, v belong to the same short segment I_k(x), then this segment is necessarily unsuitable; moreover, if i − j = 0, then v is the rightmost element of I_k(x), while if i − j = 1, then v is not the rightmost element of I_k(x). Then v ∈ G_{1,n} for some n and I_k(x)\{v} ⊆ H_{1,n−1}, hence I_k(x)\{v} ⊆ ⋃_{1≤k≤n_1} S(T_k). Let j be the smallest number such that (I_k(x)\{v}) ∩ S(T_j) ≠ ∅. Then there exists a suitable short segment I_l(z) forming a vertex w of T_j (i.e.
S(w) = I_l(z)) such that for a vertex w′ ∈ n(w) we have S(w′) ∩ (I_k(x)\{v}) ≠ ∅. We make the key observation that if (l, z) ∈ Δ, then S(w′) = I_k(x)\{v} (in particular, w′ is a pair vertex) and the central element of w′ is the leftmost element of the segment I_k(x).

Now let us consider the other vertices of T_i; let v_1, v_2 ∈ Γ\{v} form end vertices of T_i, where v_1 is the leftmost element of some unsuitable segment I_{k′}(x′) and v_2 is the middle element of some unsuitable segment I_{k″}(x″). Let j_1, j_2 be the smallest numbers such that (I_{k′}(x′)\{v_1}) ∩ S(T_{j_1}) ≠ ∅ and (I_{k″}(x″)\{v_2}) ∩ S(T_{j_2}) ≠ ∅. Then there exist suitable short segments I_{l_1}(z_1), I_{l_2}(z_2) forming vertices w_1, w_2 of T_{j_1}, T_{j_2} such that for some vertices w_1′ ∈ n(w_1), w_2′ ∈ n(w_2) we have S(w_1′) ∩ (I_{k′}(x′)\{v_1}) ≠ ∅ and S(w_2′) ∩ (I_{k″}(x″)\{v_2}) ≠ ∅. Then we again make a key observation: if (l_1, z_1) ∈ Δ, then S(w_1′) = I_{k′}(x′)\{v_1} (in particular, w_1′ is a pair vertex) and the central element of w_1′ is the rightmost element of the segment I_{k′}(x′). On the other hand, if (l_2, z_2) ∈ Δ, then S(w_2′) is an end vertex consisting of either the rightmost or the leftmost element of I_{k″}(x″).

Let now C_1 = (g_1, g_2, g_3, ...), where g_i is the i-th least element of C_1 for all i ≥ 1. We now define Y_2 = Y_1\Ω_1 and inductively define sequences G_i, H_i, Ω_i, Y_i, 1 ≤ i ≤ r, and the special elementary pieces T_1^{(i)}, ..., T_{n_i+m_i}^{(i)}, and we stop the process when for all suitable short segments I_k(x), (x, k) ∈ Y_0, there exists a ball B_u(10p) of radius 10p centered at some u ∈ I_k(x) such that B_u(10p) ∩ ⋃_{1≤i≤r} H_i = ∅.

On the set Y we have a linear order ≺_1. This order induces a complete sequence of elementary special GBTs Θ_1, ..., Θ_s which cover F_tail. The quasi-GBTs T_{n_i+m_i}^{(i)}, 1 ≤ i ≤ r, are successfully related to elements of Ω_i. Besides, we have already observed that for any of these elementary pieces, if it is rooted at a suitable short segment I_1 = {z_1, z_2, z_3}, with z_1 being the leftmost and z_3 the rightmost element, and a special elementary piece from the list T_1^{(i)}, ..., T_{n_i}^{(i)} is rooted at a neighboring suitable short segment I_2 = I_k(x) with (k, x) ∈ Δ, then the following holds true: if I is the unsuitable segment connecting I_1 and I_2 (such a segment is unique), then the sequence T_1^{(1)}, ..., Θ_s is successful at I in both of the cases N(z_1) ∈ I and N(z_3) ∈ I (in the case N(z_2) ∈ I the sequence can even be unsuccessful). Thus we obtain that there exist m unsuitable short intervals I_1, ..., I_m and n unsuitable intervals I_{m+1}, ..., I_{m+n} such that the sequence T_1^{(1)}, ..., Θ_s is successful at all I_k, 1 ≤ k ≤ m, unsuccessful at all I_k, m + 1 ≤ k ≤ m + n, and neither successful nor unsuccessful at all other unsuitable short intervals of F_1. Then, by Proposition 5.4, we obtain a contradiction.

Notice that if none of Case A, Case B, Case C can be guaranteed, then, loosely speaking, most of F_1 (100 percent in the limit) consists of regions which are either positively exceptional or negatively exceptional. Moreover, the places where positively and negatively exceptional regions meet have insignificant cardinality (zero percent in the limit). This is a rather extreme case; however, it seems difficult (if possible) to take care of it with just the inequalities of condition (D) if we exclude the conditions h(g^{(q)}ξ^p) = h(g^{(q)}) + p, 1 ≤ q ≤ 2.
On the other hand, having the group F in mind as the major application, the linear order ≺ seems to be in agreement with a bi-order of F, and this seems to cooperate with the possibility of this extreme case. Since F does not have many interesting quotients, it is impossible to achieve one of the cases A-C by taking quotients; it is clear that, for example, the following type of condition rules out the possibility described above (i.e. the existence of large enough exceptional regions), so it guarantees the existence of one of the cases A, B, C:

(*) for all ε ∈ {−1, 1}, there exist k ∈ N, r ∈ Z, and m_1, ..., m_k ∈ N\{1, 2}, n_1, ..., n_k ∈ {1, 2} such that

Indeed, by condition (*), there cannot be an exceptional sub-region R of F_1 which contains a ball of radius R = 2pk + (m_1 + n_1) + ... + (m_k + n_k) + r (notice that we have two such quantities, one for each value of ε; the radius R can be taken as the maximum of these quantities). Hence we obtain a result which is interesting in itself:

Theorem 6.19. Suppose Γ satisfies conditions (A), (*) and the following weaker version of condition (D): there exists an odd integer p such that for all g ∈ Γ, (i) for at least one δ ∈ {0, 1}, the equality h(gη^δξ^{-p}) = h(g) + p is satisfied. Then Γ is non-amenable.

Remark 6.20. Notice that in the above statement of Theorem 6.19 we have somewhat weakened condition D-(ii). The additional strength of that condition is needed only in our arguments beyond Proposition 6.18.

Part 2: Application to R. Thompson's group F

In 1965 Richard Thompson introduced a remarkable infinite group F that has two standard presentations: a finite presentation with two generators and two relations, and an infinite presentation that is more symmetric. Basic properties of F can be found in [1] and [2]. Non-amenability of F was conjectured by Ross Geoghegan in [4]. The standard isomorphism between the two presentations of F identifies A with X_0 and B with X_1. For convenience, let A = {A, B} = {X_0, X_1}, let X = {X_0, X_1, X_2, ...}, and let Free(A) and Free(X) denote the free groups of rank 2 and of countably infinite rank with bases A and X, respectively.

Normal Form

Recall the following basic fact about elements in free groups.

Proposition 7.1 (Syllable normal form). There is a natural one-to-one correspondence between non-trivial elements in Free(X) and words X_{n_1}^{e_1} X_{n_2}^{e_2} ··· X_{n_k}^{e_k} with nonzero integer exponents and distinct adjacent subscripts.

A word of the form described in Proposition 7.1 is called the syllable normal form of the corresponding element in Free(X). The terminology refers to the language metaphor under which an element of X is a letter, a finite string of letters and their formal inverses is a word, and a maximal sub-word of the form X_n^e is a syllable. We will use the following result on normal forms for elements in Thompson's group F; for a proof of this result see [2].

Theorem 7.2 (Thompson normal forms). Every element in F can be represented by a word W of the form

X_{n_0}^{e_0} X_{n_1}^{e_1} ... X_{n_k}^{e_k} X_{m_l}^{-f_l} ... X_{m_0}^{-f_0},

where the e's and f's are positive integers and the n's and m's are non-negative integers satisfying n_0 < n_1 < ... < n_k and m_0 < m_1 < ... < m_l. If we assume, in addition, that whenever both X_n and X_n^{-1} occur, so does either X_{n+1} or X_{n+1}^{-1}, then this form is unique and is called the Thompson normal form of this element.
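As a tiny worked illustration (ours, not from the source), consider the two-letter word X_1X_0. The defining relation restated in Remark 7.4 below gives X_nX_m = X_mS(X_n) for n > m, hence

$$X_1 X_0 = X_0 X_2,$$

and X_0X_2 is already in Thompson normal form: the subscripts of the positive part increase (0 < 2), there is no negative part, and the extra uniqueness condition is vacuous because no generator occurs together with its inverse.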
In order to cleanly describe the rewriting process used to convert an arbitrary reduced word into its equivalent Thompson normal form (and to explain the reason for the final restrictions), it is useful to introduce some additional terminology.

Definition 7.3 (Shift map). Let S denote the map that systematically increments subscripts by one. For example, if W = X_{n_1}^{e_1} X_{n_2}^{e_2} … X_{n_k}^{e_k}, then S(W) is the word X_{n_1+1}^{e_1} X_{n_2+1}^{e_2} … X_{n_k+1}^{e_k}. More generally, for each i ∈ N, let S^i(W) denote i applications of the shift map to W; thus S^i(W) = X_{n_1+i}^{e_1} X_{n_2+i}^{e_2} … X_{n_k+i}^{e_k}. Note that this process can also be reversed, a process we call down-shifting, so long as all of the resulting subscripts remain non-negative. Also note that a shift of an odd word, up or down, remains an odd word.

Remark 7.4 (Rewriting words). Using the shift notation, the defining relations of F can be rewritten as follows: for all n > m, X_n X_m = X_m S(X_n). More generally, let W = X_{n_1}^{e_1} X_{n_2}^{e_2} … X_{n_k}^{e_k} be a reduced word and let min(W) denote the smallest subscript that occurs in W, i.e. min(W) = min{n_1, n_2, …, n_k}. It is easy to show that for all words W, ….

Definition 7.5 (Core of a word). Let W be a reduced word with min(W) = m. By highlighting those syllables that achieve this minimum, W can be viewed as having the form W = W_0 X_m^{e_1} W_1 X_m^{e_2} W_2 … X_m^{e_l} W_l, where the e's are nonzero integers and each word W_i is a reduced word with min(W_i) > m, always allowing for the possibility that the first and last words, W_0 and W_l, might be the empty word. We begin the process of converting W into its Thompson normal form by using the rewriting rules described above to shift each syllable X_m^{e_i} with e_i positive to the extreme left and each such syllable with e_i negative to the extreme right. This can always be done at the cost of increasing the subscripts in the subwords W_i. If we let pos and neg denote the sums of the positive and negative e's, respectively, then W is equivalent in F to a word of the form W′ = X_m^{pos} W_0′ W_1′ … W_l′ X_m^{neg}, where W_i′ is an appropriate upward shift of the word W_i. The appropriate shift in this case is the sum of the positive X_m exponents in W to the right of W_i plus the absolute value of the sum of the negative X_m exponents in W to the left of W_i. The resulting word W_0′ W_1′ … W_l′ between X_m^{pos} and X_m^{neg} is called the core of W and denoted Core(W). The construction of the core of a word is at the heart of the process that produces the Thompson normal form.

Remark 7.6 (Producing the Thompson normal form). Let W be a reduced word and let W′ = X_m^{pos} Core(W) X_m^{neg} be the word representing the same element of F produced by the process described above. If the first letter of W′ is X_m, the last letter is X_m^{−1}, and min(Core(W)) > m + 1, then we can cancel an X_m and an X_m^{−1} and down-shift Core(W) to produce an equivalent word whose core has a smaller minimal subscript. We can repeat this process until the extra condition required by the normal form is satisfied with respect to the subscript m. At this stage we repeat this entire process on the new core, the down-shifted Core(W). After a finite number of iterations, the end result is an equivalent word in Thompson normal form. From the description of the rewriting process, the following proposition should be obvious.

Proposition 7.7 (Increasing subscripts).
If W is a word with min(W) = n and a non-trivial Thompson normal form W′, then min(W′) is at least n. In particular, when min(W) > m, the words W and X_m^e, e nonzero, represent distinct elements of F.

Example 7.8. Consider the following word:

W = (X_2^{−3} X_5^{2}) X_0^{4} (X_1^{5} X_3^{−2}) X_0^{−1} (X_1^{7}) X_0^{2} (X_3 X_4)

It has min(W) = 0, pos = 6, neg = −1. Pulling the syllables with minimal subscripts to the front and back produces the equivalent word

W′ = X_0^{6} (X_8^{−3} X_{11}^{2}) (X_3^{5} X_5^{−2}) (X_4^{7}) (X_4 X_5) X_0^{−1}

with Core(W) = X_8^{−3} X_{11}^{2} X_3^{5} X_5^{−2} X_4^{8} X_5. Note that we needed to combine two syllables in order for the core to be in syllable normal form. The process of reducing this to Thompson normal form would further cancel an initial X_0 with a terminal X_0^{−1} and down-shift the core, because min(Core(W)) = 3 > 0 + 1. The new word is

W″ = X_0^{5} (X_7^{−3} X_{10}^{2} X_2^{5} X_4^{−2} X_3^{8} X_4)

and the new core is X_7^{−3} X_{10}^{2} X_2^{5} X_4^{−2} X_3^{8} X_4.

We are now close to claiming that F satisfies the conditions of Theorem 1.3, but for that we first need to introduce the height function ….

Proof. It is clear that the function h : F → N ∪ {0} is subadditive and that h(g) = h(g^{−1}) for all g ∈ F. We will verify conditions (i), (iii), and condition (ii) for ε = 1; for ε = −1 it is verified similarly. We will also verify condition (iv), but for the values +1 and −1 of ε our arguments will be somewhat different from each other. Let g ∈ F and let W = X_{n_0}^{e_0} X_{n_1}^{e_1} … X_{n_k}^{e_k} X_{m_l}^{−f_l} … X_{m_0}^{−f_0} be the Thompson normal form of g. For any ε, δ ∈ Z, let W(ε, δ) = gη^δ ξ^ε.
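To make the rewriting process of Remark 7.4 through Remark 7.6 concrete, here is a minimal Python sketch (ours, not the authors'; names such as split_core are hypothetical) that encodes a word as a list of (subscript, exponent) syllables and reproduces the core computation of Example 7.8.

```python
# Minimal sketch (not from the paper): the shift map of Definition 7.3
# and the core construction of Definition 7.5, with a word encoded as a
# list of (subscript, exponent) syllables, e.g. X_2^-3 X_5^2 -> [(2, -3), (5, 2)].

def shift(word, i):
    """S^i: add i to every subscript."""
    return [(n + i, e) for (n, e) in word]

def merge(word):
    """Combine adjacent syllables with equal subscripts (syllable normal form)."""
    out = []
    for n, e in word:
        if out and out[-1][0] == n:
            n0, e0 = out.pop()
            if e0 + e:
                out.append((n0, e0 + e))
        else:
            out.append((n, e))
    return out

def split_core(word):
    """Return (pos, Core(W), neg) with W ~ X_m^pos * Core(W) * X_m^neg in F."""
    m = min(n for n, _ in word)
    # Split at the minimal-subscript syllables: W0 X_m^{e_1} W1 ... X_m^{e_l} Wl.
    pieces, exps = [[]], []
    for n, e in word:
        if n == m:
            exps.append(e)
            pieces.append([])
        else:
            pieces[-1].append((n, e))
    pos = sum(e for e in exps if e > 0)
    neg = sum(e for e in exps if e < 0)
    core = []
    for i, piece in enumerate(pieces):
        # Shift W_i by the positive X_m exponents to its right plus the
        # absolute value of the negative X_m exponents to its left.
        right_pos = sum(e for e in exps[i:] if e > 0)
        left_neg = -sum(e for e in exps[:i] if e < 0)
        core += shift(piece, right_pos + left_neg)
    return pos, merge(core), neg

# Example 7.8: W = (X_2^-3 X_5^2) X_0^4 (X_1^5 X_3^-2) X_0^-1 (X_1^7) X_0^2 (X_3 X_4)
W = [(2, -3), (5, 2), (0, 4), (1, 5), (3, -2), (0, -1), (1, 7), (0, 2), (3, 1), (4, 1)]
print(split_core(W))
# -> (6, [(8, -3), (11, 2), (3, 5), (5, -2), (4, 8), (5, 1)], -1)
```

Running the sketch returns pos = 6, neg = −1, and the core X_8^{−3} X_{11}^{2} X_3^{5} X_5^{−2} X_4^{8} X_5, matching Example 7.8, including the merging of the two X_4 syllables.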
A Combination of Nootropic Ingredients (CAF+) Is Not Better than Caffeine in Improving Cognitive Functions

Many nootropic compounds claim to have positive effects on cognitive performance. In this study, we tested the effects of the nootropic compound CAF+ on cognitive functioning. CAF+ contains a combination of ingredients that have separately been shown to boost cognitive performance, including caffeine, l-theanine, vinpocetine, l-tyrosine, and vitamins B6/B12. We examined whether CAF+ would improve cognitive functions in healthy young participants, and whether it would be more effective than caffeine. We used a randomized double-blind placebo-controlled three-way cross-over design to examine the performance of 21 healthy young participants on a test battery aimed at measuring memory performance, attention, and sensorimotor speed. Our main outcome measure was participants' performance on the Verbal Learning Test (VLT). Subjective alertness, heart rate, and blood pressure were also monitored. Participants were tested at 30 and 90 min after treatment. We found that after 90 min, the delayed recall performance on the VLT was better after caffeine than after CAF+ treatment. Further, caffeine, but not CAF+, improved performance in a working memory task. In a complex choice reaction task, caffeine improved the speed of responding. Subjective alertness was increased as a result of CAF+ at 30 min after administration. Only caffeine increased diastolic blood pressure. We conclude that in healthy young students, caffeine improves memory performance and sensorimotor speed, whereas CAF+ does not affect cognitive performance at the dose tested.

Introduction

Nootropics, also known as smart drugs, are compounds that enhance cognitive performance. There are two types of nootropics: synthetic and natural compounds. While both have been widely researched and can effectively increase functions such as memory and attention, natural nootropics are associated with a safer side-effect profile and have even been suggested to make the brain healthier (see Suliman et al. 2016). There is considerable experimental evidence demonstrating that the natural ingredient caffeine has positive effects on cognitive functions in healthy volunteers. This has been shown in young (e.g., Hogervorst et al. 1999) as well as old subjects (Lorist et al. 1995). Caffeine is considered to be safe, within a certain dose range, and can easily be administered. Although coffee and caffeine have been found to have beneficial effects (Einöther and Giesbrecht 2013), there is still further potential to improve cognitive functions. Other natural ingredients have been suggested to have beneficial effects on cognitive functions. One example is vinpocetine (a semisynthetic derivative of the vinca alkaloid vincamine, extracted from the plant Vinca minor), which has been shown to improve memory functions in humans (Subhan and Hindmarch 1985). Vitamins B6 and B12 have also been found to have positive effects on brain functions in healthy subjects (Bryan et al. 2002). Interestingly, the amino acid tyrosine, which is a precursor of dopamine, has been shown to exhibit both cognition-enhancing and stress-reducing effects (Banderet and Lieberman 1989; Brady et al. 1980; Deijen and Orlebeke 1994). A similar effect is suggested for l-theanine (Nathan et al. 2006), although its combined effects with caffeine are more extensively researched.
L-theanine is well known for its synergistic effects with caffeine: together, the two lead to larger improvements in cognition than either alone (Camfield et al. 2014; Einother et al. 2010; Haskell et al. 2008), and l-theanine eliminates the vasoconstrictive effects of caffeine (Dodd et al. 2015). It should be noted that the study by Dodd et al. does not show evidence of a synergistic effect of caffeine and l-theanine on cognition. However, relatively lower doses were used in their study compared to previous studies, which may suggest that the combined effects depend on the dose. It could be suggested that other combinations of natural ingredients may also lead to synergistic effects, as with caffeine and l-theanine. This could be based on the assumption that they work via unique molecular pathways which, when stimulated together, create an additive effect. For example, caffeine is assumed to work via an adenosine mechanism (Einöther and Giesbrecht 2013), whereas vinpocetine has been suggested to act via a phosphodiesterase type 1 mechanism (Filgueiras et al. 2010). Another possibility is that the effects of certain nootropics may improve the efficacy of others. As mentioned, the anti-sympathetic properties of l-theanine may allow caffeine's arousal-inducing effects to be more potent (Dodd et al. 2015). This can be related to the well-known inverted U-shaped relation between stress/arousal and cognitive performance (Baldi and Bucherelli 2005), in which case it is possible that the effects of l-theanine place participants more optimally on this curve for the specific task at hand. As l-tyrosine may also exhibit stress-protective effects, combining it with an arousal-inducing nootropic may also improve participants' efficacy on a cognitive task. Finally, it is possible that a combination of different nootropics affects entirely different pathways than those that they affect when administered separately. For these reasons, it would be interesting to combine a number of natural nootropics and investigate their effect on human cognition. Indeed, several natural nootropic blends exist and are available on the market. The main problem with these blends is that the ingredients are often separately known to have cognition-enhancing effects, but regarding their combined effects, the experimental evidence from randomized controlled trials is lacking. With the current study, we tackle part of the problem by investigating one of these nootropic blends. We used a randomized, double-blind, placebo-controlled three-way cross-over design to examine the effects of the natural nootropic blend CAF+ on cognitive markers of memory and attention, and on mood and physiology. CAF+ contains 100 mg caffeine, 200 mg l-theanine, 40 mg vinpocetine, 300 mg l-tyrosine, 1 mg vitamin B12, and 20 mg vitamin B6. These ingredients have different mechanisms of action and a possible additive effect on brain function. In order to evaluate the synergistic effects of this combination of different ingredients in CAF+, we compared its effects with the well-established effects of caffeine. This study is the first randomized controlled trial to examine the effects of CAF+ on cognitive performance in young healthy volunteers. Based on the previous research discussed above, we hypothesized that CAF+ would improve cognitive performance more than caffeine. This could have great potential for treating age-associated memory impairments such as dementia disorders.
The primary objective of this experiment was to establish the effects of CAF+ on cognition, especially memory, in healthy adults. As our secondary objective, we measured performance on other cognitive tasks after CAF+: working memory performance using an n-back task, response inhibition and focused attention using the Stroop task, complex scanning and visual tracking using the digit symbol substitution test (DSST), and motor speed using simple, choice, and incompatible reaction time tasks. Another secondary objective was to establish that potential performance differences were concomitant with, but not primarily due to, mood changes. The Bond and Lader evaluation form was used to measure the subjective state and perceived alertness (McNair et al. 1971). Finally, blood pressure and heart rate were measured to evaluate drug effects on these basic physiological parameters.

Participants

All experimental procedures were approved by the Medical Ethics Committee of Maastricht University and performed in accordance with the 1975 Declaration of Helsinki, as revised in 2008. Twenty-one healthy participants (10 male, 11 female; mean age = 21.7 years, SD = 3.1, range = 18-31) were included. The participants were recruited from Maastricht University via advertisements. They were screened with a medical questionnaire and a urine hCG level test to exclude pregnancy. Other criteria which excluded participation in this study were: having a (history of) cardiac, hepatic, renal, pulmonary, neurological, gastrointestinal, hematological, or psychiatric illness. With regard to psychiatric illness, volunteers who had suffered from depression, bipolar disorder, anxiety disorder, panic disorder, psychosis, or attention deficit hyperactivity disorder were excluded from participation. Volunteers with a first-degree relative with a psychiatric disorder or a history of a psychiatric disorder were also excluded. Other exclusion criteria were excessive drinking (> 20 glasses of alcohol-containing beverages a week), lactation, use of medication other than oral contraceptives, use of recreational drugs from 2 weeks before until the end of the experiment, and any sensory or motor deficits which could reasonably be expected to affect test performance. All participants had to sign an informed consent form before inclusion and received a financial reward for their participation.

Design and Treatment

In this study, a double-blind, placebo-controlled cross-over design was used. Participants were not allowed to drink alcohol 24 h before testing or to use caffeine or smoke on the test day. There were three test days on which participants received either a placebo, 100 mg caffeine, or CAF+ (consisting of 100 mg caffeine, 200 mg l-theanine, 40 mg vinpocetine, 300 mg l-tyrosine, 1 mg vitamin B12, and 20 mg vitamin B6). Test days were separated by a wash-out period of at least 7 and at most 14 days. For each test session, participants performed all tasks three times: once without taking a capsule (pre-test), once 30 min after having taken a capsule, and once 90 min after having taken a capsule. In each session of 30 min, a battery of cognitive tasks had to be completed, a questionnaire was taken, and heart rate and blood pressure were measured (see Fig. 1).

Cognitive Test Battery

The battery consisted of an adjusted version of the Rey Auditory Verbal Learning Test (Lezak 1995), an n-back task, the Stroop color-word task, the digit symbol substitution test, and a reaction time task.
Verbal Learning Test

Verbal memory was assessed using a list of 15 Dutch monosyllabic words, of which participants had to remember as many as possible. All words were presented one by one on a computer screen, after which the participant was asked to name as many words as they could remember from this list. Twenty minutes later, the participant was again requested to recall as many words as possible. Parameters obtained from this task were the number of correctly recalled words immediately after presentation of the list (immediate recall), and the number of correctly recalled words 20 min after presentation of the word list (delayed recall).

N-Back Task

This task was designed to measure working memory. A sequence of numbers was presented to the participant, and the task consisted of indicating when the presented number matched the one from n steps earlier in the sequence. In this study, we used a 0-back, 1-back, and 2-back task, in which the 0-back was a simple focused attention/speed task and the 1- and 2-back required accessing information from working memory. Parameters obtained from this task were the number of correct responses, i.e., the number of times the participant pressed the button when the number matched the one from n steps earlier, and reaction times.

Stroop Color-Word Task

The Stroop task induces interference and assesses response inhibition and focused attention. In this task, color names (in Dutch) were printed in colored ink, and participants were asked to name the color of the ink instead of the words themselves. However, the color names and the color of the ink were mostly incongruent, to induce interference. The colors used in this task were blue, red, green, and yellow. Parameters obtained from this task were the number of errors made and the time to complete the task.

Digit Symbol Substitution Test

This test assessed complex scanning and visual tracking. On the computer screen, a series of nine numbered symbols was shown that represented a "key". The participant was then presented with a series of parallel boxes that contained a symbol in the top half of the screen and a number in the lower half of the screen. The symbol and number had to be matched to form the key by responding to the number in the lower half of the screen with a mouse click. Parameters obtained from this task were the number of correct responses made within 3 min and reaction times.

Reaction Time Task

This task was divided into three parts. In the first part, the participant had to react as soon as a button lit up in the center of a response box, by pressing that button (simple reaction time). In the second part, one of three possible buttons would light up (choice reaction time). Finally, one of three possible buttons could light up again, but now the button to the right of the lit button had to be pressed (incompatible choice reaction time). In all three tasks, the participant was instructed to keep a red button pressed before and after pressing the target button. Responses had to be made as quickly as possible. Parameters obtained from all versions of this task were reaction times, calculated as the time needed to release the red button, and movement times, calculated as the time needed to move from the red button to the target button.

Questionnaire

In order to capture the subjective feelings of the participants during the treatment period, we used the Bond and Lader visual analogue scale (Bond and Lader 1974). We used nine items to capture subjective feelings of "alertness".
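As a concrete picture of the n-back scoring described above, the following minimal sketch (ours, not the study's software; the function name is hypothetical) counts hits and false alarms for one block.

```python
# Minimal sketch (not the study's implementation): scoring an n-back block.
# A "correct response" is a button press for a stimulus that matches the one
# presented n steps earlier.

def nback_score(stimuli, presses, n):
    """Return (hits, false_alarms) for an n-back block.

    stimuli: list of presented numbers, in presentation order.
    presses: set of indices at which the participant pressed the button.
    """
    targets = {i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]}
    hits = len(targets & presses)          # presses on true targets
    false_alarms = len(presses - targets)  # presses on non-targets
    return hits, false_alarms

# Example: a 2-back block with targets at indices 2 and 5.
stimuli = [3, 7, 3, 1, 9, 1, 4]
presses = {2, 5, 6}                        # two correct presses, one spurious
print(nback_score(stimuli, presses, 2))    # -> (2, 1)
```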
Blood Pressure and Heart Rate

Blood pressure and heart rate were measured using a calibrated device. This was done before each assessment: at baseline, at 30 min, and at 90 min (see Fig. 1, "physiology").

Statistical Analysis

SPSS was used for the data analyses. For all variables, the mean was calculated per treatment condition. However, for reaction and movement times, the median was used for the analyses, since these data were not normally distributed. A repeated-measures design was used with within-subject factors treatment (placebo, caffeine, CAF+) and time (30 and 90 min). We also tested for simple effects (treatment effects for each time point separately). A Sidak post-hoc analysis was used to investigate differences between treatment conditions in more detail. We did not include the baseline performance in the analyses because it was found that there were still practice effects, and the data showed various outliers. Since these practice effects interfere with the treatment effects, we excluded these data. All data were tested at a p value of 0.05.

Verbal Learning Test

No overall treatment effect was found on the immediate recall trials, or at the separate 30 and 90 min time points (all F's(2,19) < 2.11, p ≥ 0.149). However, on the delayed recall, a significant overall treatment effect was evident (F(2,19) = 5.87, p < 0.010). Analysis of simple effects revealed that there was a significant difference between conditions on delayed recall performance at 90 min (F(2,19) = 4.31, p < 0.029). A post-hoc analysis using Sidak's procedure (α = 0.05) further revealed that participants remembered on average 2.4 words fewer when they had been given a CAF+ capsule than when they had been given a capsule containing caffeine (see Fig. 2). No significant differences were found at 30 min (F(2,19) = 1.48, n.s.).

N-Back Task

For all versions of this task, no overall treatment effects were found on the accuracy scores (all F's(2,19) < 3.17, p ≥ 0.065). Simple effects revealed that a statistically significant difference existed at 90 min on the 2-back task (F(2,19) = 5.30, p < 0.015). Sidak's post-hoc analysis (α = 0.05) showed that caffeine improved accuracy scores significantly from 92 to 95%, as presented in Fig. 3. No effects on accuracy scores were found at 30 min for this task (F(2,19) = 1.00, n.s.). The analysis did not reveal reaction times to be significantly affected in any of the task conditions (all F's < 3.05, p ≥ 0.071).

Digit Symbol Substitution Test

The overall treatment and time point analyses did not reveal significant effects of placebo, caffeine, or CAF+ on accuracy scores (all F's < 0.93, p ≥ 0.411), number of correct answers (all F's < 0.81, p ≥ 0.462), or reaction times (all F's < 0.90, p ≥ 0.425).

Reaction Time Task

It was found that on the choice and incompatible choice reaction time tasks, there was an overall treatment effect on participants' movement times, i.e., the time needed to move from the red button to the target button (choice: F(2,19) = 7.58, p < 0.004; incompatible choice: F(2,19) = 6.74, p < 0.006). Analysis of simple effects showed that on both tasks, this difference was significant between the caffeine and CAF+ conditions.

Bond and Lader

The analysis did not show an overall treatment effect (F(2,19) = 2.25, n.s.), but when analyzed per time point, it was revealed that participants reported being more alert with CAF+ than with placebo after 30 min (F(2,19) = 4.37, p < 0.027), see Fig. 5. Alertness was not found to be affected 90 min after treatment (F(2,19) = 1.86, n.s.).
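For illustration, Sidak-corrected pairwise comparisons of the kind reported above can be approximated as in the following sketch (ours; the study used SPSS, and its post-hoc contrasts come from the repeated-measures model rather than raw paired t-tests, so this is a simplification on simulated data).

```python
# Minimal sketch (not the study's SPSS pipeline): Sidak-adjusted pairwise
# comparisons between three within-subject treatment conditions.
from itertools import combinations
import numpy as np
from scipy import stats

def sidak_pairwise(scores):
    """scores: dict mapping condition name -> per-subject values (paired)."""
    pairs = list(combinations(scores, 2))
    m = len(pairs)  # number of comparisons (3 for three conditions)
    adjusted = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(scores[a], scores[b])
        adjusted[(a, b)] = 1 - (1 - p) ** m  # Sidak adjustment
    return adjusted

# Simulated data for 21 subjects, as in the study; effect sizes are arbitrary.
rng = np.random.default_rng(0)
base = rng.normal(10, 2, 21)
scores = {"placebo": base,
          "caffeine": base + rng.normal(1.0, 1, 21),
          "CAF+": base + rng.normal(-0.5, 1, 21)}
for pair, p in sidak_pairwise(scores).items():
    print(pair, round(p, 4))
```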
Discussion

In this study, we examined the acute effects of CAF+ and caffeine on different cognitive functions. This was done in a double-blind placebo-controlled randomized cross-over design. We found that caffeine improved working memory performance in the n-back task at the most difficult level and outperformed CAF+ treatment on the delayed verbal memory task. Further, caffeine increased the speed of responding in the incompatible reaction time task. On a physiological level, caffeine increased diastolic blood pressure, whereas CAF+ did not have an effect on blood pressure. Interestingly, CAF+ increased subjective alertness. Based on these data, it is concluded that caffeine improved cognitive performance and that CAF+ did not. It is well known that caffeine has various effects on cognitive functions. This was replicated in the current study, further supporting this well-documented effect. The hypothesis of the current study was that CAF+ would have stronger effects than caffeine alone. This hypothesis could not be confirmed; CAF+ did not have an effect on cognitive performance. In the verbal memory task, we found that performance after CAF+ treatment was worse compared to the caffeine treatment. Also, in the reaction time task, CAF+ treatment was associated with slower response times as compared to caffeine. These data indicate clearly that CAF+ does not improve cognitive functions in young healthy volunteers at the dose used. However, it could be speculated that chronic treatment with CAF+ may have positive effects. The dose used for the caffeine capsule (100 mg) is in the range of reported cognition-enhancing effects (McLellan et al. 2016). The same dose was used in the CAF+ formulation. CAF+ also contains other ingredients that may boost the brain systems underlying cognitive performance in young subjects. It could be argued that the combination of the extra ingredients in addition to the caffeine may constitute a dose that is too high to have beneficial effects on cognition and would have placed participants non-optimally on the inverted-U curve of stress/arousal, leading to less than optimal performance on the tasks. However, CAF+ includes ingredients that both promote and reduce arousing drive, as discussed in the introduction. For this reason, it is not likely that the addition of the ingredients in CAF+ necessarily causes a shift to the right on the inverted-U curve. Similarly, it has been shown in young and healthy students that the effects of caffeine only benefited memory during participants' non-optimal phase (Sherman et al. 2016). In the current study, it may have been the case that the possible de-arousing effects of l-theanine and l-tyrosine in CAF+ did not sufficiently place participants in their non-optimal phase in order for the arousal-promoting ingredients to benefit task performance. Additionally, the inverted-U may be shifted for different tasks (Salehi et al. 2010). The tasks in this study were performed sitting quietly, with no accessory motions. It should be considered whether the CAF+ dose can be optimally developed for such tasks, which have limited to no stress component. Finally, ingredients may have a different pharmacological profile when added together than when administered alone.

[Fig. 3: Effects of the three treatment conditions on accuracy scores in the 2-back task (means + S.E.M.). At 90 min, participants were on average 3% more accurate in their responses after caffeine than with a placebo pill (p < 0.015).]
For example, it has repeatedly been shown that caffeine and l-theanine added together have different effects on cognition and mood, leading to cognitive benefits not seen when administered alone (Camfield et al. 2014; Einother et al. 2010; Haskell et al. 2008). More research is needed to look into the mechanisms underlying such effects. Since our participants were young and healthy, it may have been the case that they already performed at their maximum level, leaving little room for improvement. On the other hand, it may reasonably be expected that a cognition enhancer would also be able to improve cognition in such a population. This was demonstrated in two studies examining the effects of the cognitive enhancer methylphenidate in healthy young volunteers: methylphenidate was found to decrease response times and improve episodic memory in one study (Linssen et al. 2014) and to improve declarative memory, attention, and response inhibition in another study (Linssen et al. 2012). We did find some effects of caffeine, although it must be mentioned that the absolute differences were relatively small. Further studies could explore the effects of CAF+ in older participants (> 40 years old), who generally perform less well on the current tasks when compared to their younger counterparts. It is interesting to note that CAF+ had effects on a subjective measure of alertness. Participants reported being more alert with CAF+ as compared to the placebo treatment, but only 30 min after intake of the capsule. A comparable study by Giesbrecht et al. (2010) looked at the combination of caffeine and l-theanine on cognition and subjective alertness in young adults. They found that attention switching improved but performance on the other cognitive tasks did not, while subjective alertness was increased overall. Similar to our results, no differences in heart rate were found, ruling out the influence of this factor on the feelings of alertness. It could be argued that the combination of caffeine and l-theanine helps focus attention, but not enough to show improved performance in a majority of cognitive tasks. Additionally, our findings suggest that the (subjective) effects of CAF+ appear quite quickly and disappear after 90 min, which was not the case in the aforementioned study. As the doses of caffeine and l-theanine in CAF+ are higher than the doses of these ingredients in the study by Giesbrecht et al. (40 mg vs 100 mg caffeine; 97 mg vs 200 mg l-theanine), the possibility of an inverted-U dose-response relationship may also exist for subjective alertness. In other studies that use combined ingredients with a higher dose of caffeine, such as Red Bull Energy Drink, results differ. Red Bull contains 80 mg of caffeine and, among other ingredients, vitamins B6 and B12. Wesnes et al. (2017) found that in a similar population of young volunteers, Red Bull improved cognition and subjective alertness. However, alertness was not improved by the sugar-free version of the drink. On the other hand, Kammerer et al. (2014) did not find improvements in cognition, although self-reported alertness was not measured in that study. Finally, this study also shows that caffeine intake exerts some effects on physiological measures.

[Fig. 5: Participants were found to feel more alert 30 min after ingestion of CAF+ than after taking a placebo pill (p < 0.027).]
An increased (diastolic) blood pressure is a well-known effect of caffeine, which tends to peak 1-2 h after intake (Mort and Kruse 2008), as the current study's findings also show. Interestingly, treatment with CAF+ did not result in a blood pressure increase, or at least mitigated the blood pressure increase due to caffeine alone. This is in line with previous research (Dodd et al. 2015) and suggests that the additional ingredients of CAF+ may act against the arousing influence of caffeine. Another well-known effect of caffeine is its ability to increase alertness (Mikalsen et al. 2001). While we were not able to find a statistically reliable effect of caffeine on alertness, the mean values on the alertness scale were higher after caffeine treatment (30 and 90 min) than after placebo treatment. Furthermore, alertness was measured subjectively by means of a questionnaire here, forming only an indirect indication of participants' alertness levels. In conclusion, the current study did not show a positive effect of CAF+ on cognitive functions. Future research should focus on including more complex and/or stressful tasks or investigating stress-induced cognitive deficits. Additionally, the doses of the different ingredients could be adjusted, and inclusion of older participants may be another approach to explore the potential cognition-enhancing effects of CAF+ or other combinations of nootropic ingredients.
Current state and prospects of gout treatment in Korea

Effective management of gout includes the following: appropriate control of gout flares; lifestyle modifications; management of comorbidities; and long-term urate-lowering therapy (ULT) to prevent subsequent gout flares, structural joint damage, and shortening of life expectancy. In addition to traditional treatments for gout, novel therapies have been introduced in recent years. Indeed, new recommendations for the management of gout have been proposed by various international societies. Although effective and safe medications to treat gout have been available, management of the disease has continued to be suboptimal, with poor patient adherence to ULT and failure to reach the serum urate target. This review outlines recent progress in gout management, mainly based on the latest published guidelines, and specifically provides an update on efficient strategies for implementing treatment, the efficacy and safety of specific medications for gout, and cardiovascular outcomes of ULT. In particular, we reviewed gout management approaches that can be applied to a Korean population.

INTRODUCTION

Gout is the most common inflammatory arthritis, which is induced by hyperuricemia and subsequent monosodium urate (MSU) crystal deposition in joints and other tissues [1]. Gout has a negative impact on the quality of life of patients due to extreme joint pain, and various comorbidities associated with the disease can be life-threatening. The prevalence and incidence of gout are increasing not only in Korea, but also in many other countries worldwide [2-4]. Despite its increasing prevalence and incidence, and the availability of effective and safe medications to treat gout, management of the disease has continued to be suboptimal, with poor patient adherence to urate-lowering therapy (ULT) and failure to attain the therapeutic level of serum urate (SUA) [5,6]. The optimal management of gout requires a multifarious approach, including mitigation of gout flare symptoms, lifestyle modification, patient education, management of comorbidities, and, particularly, long-term ULT to dissolve MSU crystals and prevent future gout flares. Moreover, gout management should be individualized for each patient based on their comorbid conditions or concomitant medications. In this review, we provide an update on the treatment of gout mainly based on recently published guidelines, including the 2020 American College of Rheumatology (ACR) guidelines [7], the 2017 British Society for Rheumatology (BSR) guidelines [8], and the 2016 European League Against Rheumatism (EULAR) recommendations [9].

Lifestyle modification and diet therapy

Results from a diet and genetics meta-analysis showed that the effect of diet or individual food items on SUA levels was small [11]. However, dietary factors may trigger gout flares, and patients with gout frequently seek advice on the dietary management of gout. Above all, a significant dose-response relationship between alcohol consumption, regardless of alcoholic beverage type, and the risk of recurrent gout flares was observed in a case-crossover study [12]. Hence, it is recommended that patients with gout limit their alcohol intake regardless of disease activity [7]. Regarding other dietary factors, a purine-rich diet was reportedly associated with an increased risk of gout flares [13], and a high-fructose diet was associated with a high risk of incident gout [14]. Accordingly, purine intake and high-fructose intake should be limited in patients with gout [7].
Although vitamin C supplementation has been shown to lower SUA levels [15], vitamin C at a modest dose (500 mg/day) was insufficient as monotherapy or as an adjunct to standard ULT [16]; therefore, it is no longer recommended in patients with gout [7]. Recently, the APLAR suggested that the evidence for limiting purine-rich foods to lower SUA levels or prevent gout flares in patients with gout is insufficient [10]. Since excessive food restrictions may reduce patients' compliance with medical treatment, it is better to focus on treatment using ULT and to restrict mainly alcohol and high-fructose intake [17,18]. Weight loss approaches are also conditionally recommended for obese or overweight patients [7]. A large cohort study demonstrated that obesity was associated with a higher risk of incident gout and that changes in body mass index were associated with the risk of recurrent gout flares in a dose-responsive manner [19]. Similarly, weight loss through bariatric surgery or diet also demonstrated clinically relevant reductions in SUA levels and gout flare frequency [20,21].

Education for patients and primary care physicians

Drug adherence in patients with gout worldwide is very poor [22,23]. Adherence rates were the lowest in patients with gout when comparing drug adherence rates among patients with gout, hypertension, hypercholesterolemia, type 2 diabetes mellitus, hypothyroidism, osteoporosis, and seizure disorders [24]. To overcome this problem, education for primary care physicians is essential, in addition to education for patients with gout. Accordingly, patient education is emphasized in almost all treatment guidelines [7,9,10].

Acute gout flares

The use of topical ice on inflamed joints has been shown to reduce pain [25] and has been conditionally recommended as an adjuvant treatment in patients experiencing a gout flare [7]. Additional non-pharmacological care for gout flares includes rest of acutely affected joints, mobility assistance, and hydration [8].

Asymptomatic hyperuricemia

There is no universally accepted definition of hyperuricemia; however, it is typically defined as an SUA level > 7.0 mg/dL [26]. Asymptomatic hyperuricemia is a condition characterized by hyperuricemia without any symptoms or signs of MSU crystal deposition disorders, such as gout, urolithiasis, and urate nephropathy [27]. Although asymptomatic hyperuricemia is associated with an increased risk of hypertension, chronic kidney disease (CKD), and cardiovascular (CV) disease [28-32], hyperuricemia itself has not been established as a causal factor in any of these diseases. Among patients with asymptomatic hyperuricemia, ULT with febuxostat has been shown to significantly reduce incident gout flares over a 3-year period; however, the incidence of gout was low in both the febuxostat and placebo groups (0.9% vs. 5.9%) [33], which would correspond to a 3-year number needed to treat with febuxostat of 24 patients to prevent a single gout flare. In addition, among those with asymptomatic hyperuricemia with SUA levels > 9 mg/dL, only 22% developed gout within 5 years [34]. Given that the benefits of ULT do not outweigh the costs or risks associated with treatment for most patients with asymptomatic hyperuricemia, including those with comorbid CKD or CV disease, initiation of ULT is recommended against in those with asymptomatic hyperuricemia [7]. However, when a patient's SUA level is > 9 mg/dL, individualized ULT can be considered based on each patient's lifestyle or comorbidities.
Acute gout flares

Gout flares are induced by the activation of the NLR family pyrin domain containing 3 (NLRP3) inflammasome by MSU crystals, with interleukin 1β (IL-1β) production and a subsequent cascade of other pro-inflammatory cytokines and chemokines [35,36]. The major goals of treatment for gout flares are pain control and rapid suppression of inflammation. Early treatment with colchicine, non-steroidal anti-inflammatory drugs (NSAIDs), or glucocorticoids (oral or injectable) is recommended as first-line therapy for gout flares [7-9]. Head-to-head clinical trials comparing first-line anti-inflammatory agents with different mechanisms of action have demonstrated similar efficacy between low-dose colchicine, NSAIDs, and oral glucocorticoids for treating gout flares [37-40]. Regarding safety issues, naproxen (750 mg/day for 7 days) caused fewer side effects than low-dose colchicine (1.5 mg for 4 days) [37], while indomethacin (150 mg/day for 2 days followed by 75 mg/day for 3 days) resulted in more minor adverse events than prednisolone (30 mg/day for 5 days) [38]. When colchicine is the chosen agent, low-dose colchicine (1.0 to 1.2 mg immediately, followed by 0.5 to 0.6 mg after an hour) is recommended instead of high-dose colchicine (4.8 mg) due to their comparable efficacy and the lower risk of adverse effects associated with low-dose colchicine [7,41]. While the ACR, EULAR, and APLAR do not prioritize among the three first-line therapies, the choice of anti-inflammatory agent generally depends on the comorbid conditions and concurrent medications of each patient [7,9,10]. For instance, NSAIDs should be avoided in patients with renal impairment, peptic ulcer disease, cardiac disease, and concomitant anticoagulant use. Colchicine should not be administered to patients with severe renal impairment or severe liver disease, or in combination with strong inhibitors of cytochrome P450 3A4 and/or P-glycoprotein, such as cyclosporin, ketoconazole, clarithromycin, and verapamil [9,42]. Moreover, high-dose glucocorticoids should be avoided in patients with active infection or uncontrolled diabetes. In contrast, intravenous, intramuscular, or intra-articular injections are preferred in patients who are unable to take oral medications [7]. Intra-articular glucocorticoid injection can also be considered for the treatment of acute monoarticular gout [8,43]. For patients who have had recurrent flares, treatment selection is typically driven by patient preference based on past experiences of efficacy or adverse events associated with ULT [7]. In addition, for patients with severe gout flares, for example when multiple joints are involved, combination therapy, such as colchicine and an NSAID, or colchicine and glucocorticoids, may be considered [8,9,44]. Given that IL-1 has emerged as a crucial cytokine in gout flares, IL-1 inhibitors, including canakinumab, anakinra, and rilonacept, have been used to treat gout flares in Western countries [45-47]. However, IL-1 inhibitors are not available in Korea. Dapansutrile (OLT1177) is a novel anti-inflammatory agent, an orally active β-sulfonyl nitrile molecule that selectively inhibits the NLRP3 inflammasome in neutrophils and human monocyte-derived macrophages and the subsequent activation of IL-1β [48]. Further studies are needed to confirm the clinical potential of dapansutrile in gout flares.
Prophylaxis against mobilization flares

The experience of gout flares occurring in the first few months after ULT initiation is one reason for stopping ULT [49]. There are two main strategies for decreasing the risk of gout flares during this period. First, concurrent anti-inflammatory prophylaxis therapy is strongly recommended during the first 3 to 6 months of ULT [7-9]. While low-dose colchicine is the most indicated agent, with a low-dose NSAID as an alternative in cases of intolerance or contraindication to colchicine [8-10], the ACR guidelines also indicate prednisone/prednisolone for prophylaxis therapy [7]. Previous studies have shown that concomitant administration of naproxen 500 mg/day or colchicine 0.6 mg/day for 3 to 6 months effectively reduced gout flares [50-52]. Among gout patients in Korea, colchicine (62.3%) was reported as the most commonly prescribed initial prophylactic agent, followed by NSAIDs (39.9%), in a multicenter retrospective cohort study [53]. Regarding the duration of prophylaxis therapy, the 2017 BSR and 2016 EULAR guidelines recommended continuing prophylaxis during the first 6 months of ULT [8,9], and the 2020 ACR guidelines recommend 3 to 6 months [7]. While there are no currently available Korean guidelines on this issue, prophylaxis therapy for more than 6 months from the initiation of ULT, together with achieving the target SUA at the time of stopping prophylaxis, was associated with fewer gout flares in Korean patients with gout [53]. In terms of the colchicine dose for prophylaxis therapy, low-dose colchicine (0.6 mg/day) was shown to prevent gout flares with fewer adverse events when compared with a regular dose (1.2 mg/day) of colchicine among Korean gout patients [54,55]. Second, ULT should be initiated at a low dose and gradually increased to reduce the risk of flares. The current guidelines recommend starting doses of allopurinol and febuxostat at ≤ 100 and ≤ 40 mg/day, respectively, with lower allopurinol doses in patients with CKD [7,9]. A randomized open-label trial (the FORTUNE-1 study) demonstrated that starting febuxostat at a low dose with stepwise dose increases, as well as fixed-dose febuxostat with concomitant low-dose colchicine prophylaxis, effectively prevented gout flares compared to fixed-dose febuxostat alone; however, there was no significant difference in the incidence of gout flares between stepwise increases in the febuxostat dose and low-dose colchicine prophylaxis [51].

Long-term ULT

All patients with gout should be informed that gout is a chronic disease with MSU crystal deposition and that long-term ULT is required to suppress tophi and prevent subsequent gout flares and joint damage.

Indications for initiation of ULT

The 2020 ACR guidelines for gout management strongly recommend ULT for all patients with frequent gout flares (≥ 2 annually), subcutaneous tophi, and/or evidence of radiographic damage due to gout [7]. Initiation of ULT was conditionally recommended for patients experiencing their first flare with comorbid moderate-to-severe CKD (glomerular filtration rate [GFR] < 60 mL/min/1.73 m2), urolithiasis, or a very high SUA level of > 9 mg/dL [7]. Similar recommendations for ULT indications have been made by the EULAR [9]. All patients with recurrent flares, tophi, urate arthropathy, and/or renal stones were indicated for ULT, and initiation of ULT close to the time of first diagnosis was also recommended in young patients (< 40 years), those with a very high SUA level of > 8 mg/dL, and/or those with comorbidities [9].
There are some discrepancies among clinical guidelines regarding whether ULT should be initiated during an acute gout flare. While the EULAR does not provide guidance on this issue [9], the BSR guidelines discourage the initiation of ULT during a gout flare and recommend postponing ULT until the acute inflammation has resolved [8]. Instead, the ACR conditionally recommends ULT initiation during a flare [7], based on two randomized controlled trials (RCTs) showing that ULT initiation during this period did not significantly extend the duration or severity of the flare [56,57], and also considering the conceptual benefits of time efficiency and of flare symptoms serving as a powerful motivator for ULT initiation. If the patient's inflammation and pain are severe, we recommend starting anti-inflammatory treatment first and ULT a week after the inflammation subsides; if the patient's inflammation is not severe and the pain is tolerable, simultaneous anti-inflammatory treatment and ULT can be considered.

Treat-to-SUA target

Long-term ULT based on the treat-to-SUA-target protocol has been proven to suppress gout flares, reduce urate crystal deposition, and prevent joint damage in gout [58,59]. The treat-to-SUA-target approach, with a target SUA level of < 6.0 mg/dL, is recommended by the ACR and EULAR [7,9]. A lower target SUA level of ≤ 5.0 mg/dL is recommended by the BSR for all patients with gout [8], and by the EULAR for those with a high urate burden, such as tophaceous gout [9]. Following ULT initiation at a low dose, the dose should be progressively titrated using serial SUA measurements to achieve and maintain the target SUA [7-9]. Currently available urate-lowering agents act through three mechanisms: xanthine oxidase (XO) inhibition (allopurinol and febuxostat), promotion of renal urate excretion (probenecid and benzbromarone), and catalysis of uric acid to water-soluble allantoin (pegloticase). Table 2 summarizes each of these agents for the treatment of gout.

1) Allopurinol

The first-line ULT recommended for patients with gout is allopurinol, a purine-based inhibitor of XO that was first used in 1966 [7-9]. Although rare, potentially life-threatening allopurinol hypersensitivity syndrome (AHS) typically develops within the first few months of treatment with allopurinol [60]. In particular, AHS as it appears in Koreans is unique and life-threatening [61]. Risk factors for AHS include the presence of the HLA-B*5801 allele, CKD, old age, concomitant diuretic use, and a high initial dose of allopurinol [62]. Therefore, in subgroups of Southeast Asian and African ethnicities, with a relatively high prevalence of HLA-B*5801, pre-testing for HLA-B*5801 is conditionally recommended before starting allopurinol [7,62]. The positive rate of HLA-B*5801 in Koreans is reportedly 12.2% [63], while the positive rate in Caucasians is only 0.7% [63,64]. In addition, HLA-B*5801 genotyping prior to treatment with allopurinol was less costly and more effective than treatment without genotyping among gout patients with CKD in Korea over a time period of 12 months [65]. Hence, it is recommended that Korean patients with gout, especially those with renal insufficiency, undergo the HLA-B*5801 test before starting allopurinol. Recently, Korean national health insurance has begun to cover this test at a reasonable cost. Although genetic factors (presence of HLA-B*5801) and reduced renal function are not modifiable, the initial allopurinol dose can be adjusted.
Hence, allopurinol should be started at a low dose, such as 100 mg/day in general, or ≤ 50 mg/day for those with CKD (GFR < 60 mL/min/1.73 m2) [7]. Once patients with gout are established on an initial low dose of allopurinol, the dose can be safely increased in 100-mg increments every month, or in 50-mg increments for those with renal impairment, until the target SUA is reached [66]. The average dose of allopurinol needed to achieve a target SUA of < 6.0 mg/dL was reported to be approximately 400 mg/day [66]. Although the United States Food and Drug Administration (FDA)-approved maximal dose of allopurinol is 800 mg/day [67], many patients with gout are not treated with the maximum permitted doses of allopurinol in real-world clinical settings [68,69]. A retrospective healthcare claims database study in Korea showed that the mean maximal dose of allopurinol used was 248 mg/day, with only 6.9% of allopurinol users receiving a maximal dose of > 300 mg/day [69].

2) Febuxostat

Febuxostat is a non-purine XO inhibitor (XOI) that is more selective and potent than allopurinol [70]. It is used as a second-line ULT for patients with gout [7-9], mainly due to concerns about the CV safety of febuxostat versus allopurinol in patients with gout and CV comorbidities raised by the Cardiovascular Safety of Febuxostat and Allopurinol in Patients with Gout and Cardiovascular Morbidities (CARES) trial [71]. However, Koreans have a much higher risk of AHS than Westerners; therefore, the Korean FDA continues to maintain febuxostat as a first-line ULT along with allopurinol, and it is the most commonly used ULT in Korea [72]. Moreover, febuxostat showed a significantly higher persistence rate than allopurinol among Korean patients with gout after adjusting for confounding factors [73]. Dose adjustment of febuxostat is not necessary for patients with mild or moderate renal impairment, since its main route of elimination is hepatic [70]. Among Korean patients with gout, febuxostat demonstrated good urate-lowering efficacy and renal safety even in cases of stage 4-5 CKD (GFR < 30 mL/min/1.73 m2) not yet on dialysis [74], and it was also efficacious and well tolerated in those undergoing dialysis [75]. The initial dose of febuxostat suggested for Korean gout patients on dialysis was 20 to 40 mg/day [75]. The maximal dose of febuxostat approved by the FDA is 80 mg/day, and co-administration of azathioprine or 6-mercaptopurine with febuxostat is contraindicated [76]. Febuxostat at a dose of 80 or 120 mg/day has shown better urate-lowering efficacy than allopurinol at a dose of 300 mg/day in RCTs [77-79].

3) Comparative CV risk between allopurinol and febuxostat

The CARES trial showed comparable rates of adverse CV events between febuxostat (up to 80 mg/day) and allopurinol (up to 600 mg/day); however, all-cause mortality and CV-related death were higher with febuxostat than with allopurinol among gout patients with coexisting CV disease [71]. This result led to an FDA black box warning that patients taking febuxostat should be monitored for signs and symptoms of myocardial infarction (MI) and stroke [76]. However, the limitations of the CARES trial should be considered when interpreting the results, including a high dropout rate and the fact that most deaths (approximately 85%) occurred after ULT cessation [71]. Moreover, the absolute CV risk of febuxostat is uncertain due to the absence of a placebo group.
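Purely as an illustration of the titration logic described above (not clinical guidance; the function and its defaults are our own framing of the thresholds stated in the text), the monthly allopurinol dose-escalation rule could be sketched as follows.

```python
# Illustrative sketch only, not clinical guidance: the dose-titration rule
# described in the text (start low, step up monthly until the serum urate
# target is reached, with smaller starting doses and steps in CKD).

def next_allopurinol_dose(current_dose_mg, sua_mg_dl, gfr,
                          target_sua=6.0, max_dose=800):
    """Return the suggested next monthly dose (mg/day) under the stated rules."""
    if current_dose_mg == 0:                      # initiation
        return 50 if gfr < 60 else 100            # lower start in CKD
    if sua_mg_dl < target_sua:                    # target reached: hold dose
        return current_dose_mg
    step = 50 if gfr < 60 else 100                # smaller increments in CKD
    return min(current_dose_mg + step, max_dose)  # FDA maximum 800 mg/day

# Example: patient with CKD (GFR 45), SUA 8.2 mg/dL, currently on 100 mg/day.
print(next_allopurinol_dose(100, 8.2, 45))  # -> 150
```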
In contrast, a large observational study including 99,744 older Medicare patients with gout demonstrated no difference between febuxostat and allopurinol initiators in CV risk, including MI, stroke, coronary revascularization, heart failure, or all-cause mortality [80]. In addition, the Febuxostat versus Allopurinol Streamlined Trial (FAST), conducted among 6,128 patients with gout aged 60 years or older with at least one additional CV risk factor, also showed a comparable risk of adverse CV events and of all-cause and CV mortality between febuxostat and allopurinol [81].

4) Benzbromarone

Uricosuric agents, including probenecid and benzbromarone, can be used alone or in combination with an XOI in patients who are resistant or intolerant to XOIs [8,9,82,83]. Uricosurics can induce urolithiasis due to their mechanism of action and should therefore be avoided in patients with current or previous urolithiasis. Furthermore, all patients who are on uricosurics should receive adequate hydration; however, neither checking urinary uric acid levels nor receiving alkalinizing agents is recommended [7]. Benzbromarone has been shown to be effective and safe in general, even for gout patients with CKD; however, it was not approved in the USA and was withdrawn from several European countries due to hepatotoxicity associated with its use [84]. Nevertheless, the estimated risk of hepatotoxicity of benzbromarone in Europe is less than 1:17,000 [85]. Hence, patients treated with benzbromarone should undergo liver function tests. Benzbromarone is the only uricosuric agent available in Korea, where it can be used as a second-line ULT. Lesinurad, a urate transporter-1 inhibitor indicated in combination with an XOI, was discontinued in the USA in February 2019 by the marketing-authorization holder [86] and was withdrawn in Europe in July 2020 [87]. Clinical trials have been conducted to determine the efficacy and safety of dotinurad, another novel drug with selective urate reabsorption inhibitor properties [88].

5) Comparative CV risk between uricosuric agents and allopurinol

Unlike for XOIs, data on the CV safety of uricosuric agents are limited. In a large Medicare study, probenecid was associated with a reduced risk of CV events and all-cause mortality compared with allopurinol [69]. Similarly, a large population-based cohort study of Korean patients with gout reported a decreased risk of composite CV events and all-cause mortality associated with benzbromarone compared with allopurinol [69]. However, these studies could not prove causality, and further studies are required to confirm whether uricosuric agents are favored over XOIs in terms of CV outcomes.

6) Recombinant uricase

Pegloticase is a recombinant uricase conjugated to monomethoxypolyethylene glycol, which is administered as an intravenous infusion every 2 weeks [89]. RCTs of pegloticase over 6 months demonstrated a reduced frequency of gout flares, resolution of tophi, and improved patient-reported outcomes, including pain, physical function, and health-related quality of life, among patients with chronic gout who were refractory or intolerant to conventional ULTs [90]. Overall, 41% of the patients treated with pegloticase developed anti-pegloticase antibodies with loss of SUA-lowering efficacy [91]. Moreover, pegloticase infusion reactions were associated with anti-pegloticase antibodies and loss of response [89].
Therefore, for patients treated with pegloticase, SUA levels should be monitored prior to each infusion, and treatment should be stopped if the SUA level increases to > 6 mg/dL, specifically on two consecutive measurements. Due to concerns about toxicity and cost, pegloticase is recommended for patients with severe symptomatic tophaceous gout in whom the target SUA cannot be reached with standard ULTs, including XOIs and uricosuric agents, alone or in combination [7-9]. The concomitant use of immunosuppressive agents, such as methotrexate, azathioprine, and mycophenolate, has been attempted to reduce the development of anti-drug antibodies and infusion reactions [92]. However, pegloticase is not yet available in Korea.

MANAGEMENT OF COMORBIDITIES AND CONCOMITANT MEDICATIONS

Medications for associated metabolic conditions, including losartan, fenofibrate, and SGLT2 inhibitors, have shown modest urate-lowering efficacy and a lower risk of incident gout [93-95]. Regarding the concurrent use of these medications in patients with gout, losartan is recommended for the treatment of hypertension when feasible; however, adding or switching cholesterol-lowering agents to fenofibrate is conditionally recommended against, despite its urate-lowering effects, considering the side effects of the medication [7]. Moreover, given that thiazide diuretics are associated with increased SUA levels [96], it is recommended that hydrochlorothiazide be switched to an alternative antihypertensive agent if such a change is feasible in patients with gout [7]. However, since there are few practical alternatives to low-dose aspirin, discontinuing low-dose aspirin is not recommended among those receiving this medication [7].

CONCLUSIONS

Gout is a chronic disease characterized by MSU crystal deposition, which requires long-term ULT. Patients with recurrent gout flares, tophaceous gout, or structural joint damage due to gout should undergo ULT to achieve and maintain a target SUA level of < 6 mg/dL. Currently available SUA-lowering agents include XOIs, uricosuric agents, and recombinant uricase. To date, allopurinol has had a higher incidence of life-threatening side effects than febuxostat in Koreans, and there is no direct evidence that febuxostat increases the risk of CV disease in Korean patients with gout. Therefore, febuxostat is used as a first-line ULT along with allopurinol in Korea. HLA-B*5801 testing should be performed to prevent AHS in Korean patients with gout prior to ULT with allopurinol. Administration of concomitant anti-inflammatory prophylaxis therapy is recommended during ULT to prevent gout flares. The first-line anti-inflammatory agents used to treat gout flares include low-dose colchicine, NSAIDs, and glucocorticoids. Additionally, adequate management of lifestyle factors and concomitant medications is recommended. In particular, educating patients and primary care physicians about gout should be emphasized to increase compliance with long-term ULT. This review is not an official guideline of the Korean College of Rheumatology (KCR); the KCR will publish Korean guidelines for the early management of gout in 2022. Further studies are needed to address the efficacy and safety of novel ULTs and anti-inflammatory agents in Korean patients with gout.
Effect of Food Intake on the Pharmacokinetics of a Novel Methylphenidate Extended-Release Oral Suspension for Attention Deficit Hyperactivity Disorder
We conducted an open-label, single-dose, randomized, crossover study in healthy adults to assess the impact of food on the bioavailability of 60 mg methylphenidate extended-release oral suspension (MEROS; Quillivant XR™), a long-acting stimulant for the treatment of attention deficit hyperactivity disorder, by comparing the pharmacokinetic parameters under fed and fasting conditions. When MEROS 60 mg was administered under fed conditions compared with fasting conditions, the exposure of methylphenidate (d-enantiomer) was higher, with a mean area under the plasma concentration-vs-time curve from time 0 to the last quantifiable concentration (AUC0-t) of 160.2 ng·h/mL vs 140.4 ng·h/mL and a mean AUC extrapolated to infinity (AUC0-inf) of 163.2 ng·h/mL vs 143.7 ng·h/mL, respectively. The ratios of the ln-transformed geometric means for methylphenidate AUC0-t and AUC0-inf were 119.5% (90% CI, 115.7% to 123.5%) and 119.0% (90% CI, 115.2% to 122.8%), respectively, within the standard 80% to 125% bioequivalence acceptance range, indicating no food effect on the overall exposure (rate and extent). There was a small increase in the peak plasma concentration (127.6% [90% CI, 119.9% to 135.8%]). However, this effect was small and not likely to be clinically significant. Overall, MEROS 60 mg was safe under both fed and fasting conditions when administered to healthy volunteers in this study.
According to the product label, for patients ages 6 years and above, the recommended starting dose is 20 mg given orally once daily in the morning. Dosage may be increased weekly in increments of 10 to 20 mg. Daily dosage above 60 mg is not recommended. The formulation contains approximately 20% immediate-release and 80% extended-release methylphenidate. MEROS has demonstrated onset of action in 45 minutes and duration of action through 12 hours postdosing. 5 The relative bioavailability of 60 mg of MEROS compared with 60 mg of immediate-release methylphenidate hydrochloride (HCl) oral solution (given as two 30-mg doses 6 hours apart) is 95%. 6 This article describes the assessment of the impact of food on the bioavailability of MEROS 60 mg in healthy adult subjects by comparing the pharmacokinetic (PK) parameters under fed and fasting conditions. The safety of MEROS 60 mg administered under fed and fasting conditions was also examined.

Study Design
This was an open-label, single-dose, randomized, 3-period, 3-treatment crossover study in healthy male and female adults under fasting and fed conditions. The main objectives of this study were to assess the relative bioavailability of a single 60-mg dose of MEROS (Quillivant XR™, Pfizer Inc, New York, New York; concentration equivalent to 5 mg/mL methylphenidate HCl) compared with 60 mg immediate-release methylphenidate HCl oral solution, dosed 30 mg twice daily, and to assess the impact of food on the relative bioavailability of MEROS by comparing the PK parameters under fasting and fed conditions. The rate and extent of absorption (relative bioavailability) of MEROS relative to immediate-release methylphenidate HCl oral solution have been reported in detail elsewhere. 6 The effects of food on the bioavailability of MEROS, based on pharmacokinetic data collected during 2 of the 3 treatment periods (60 mg MEROS under fasting and fed conditions), are reported here.
The study was conducted in accordance with the guidelines set forth by the International Conference on Harmonisation Guidelines for Good Clinical Practice, the Code of Federal Regulations for Good Clinical Practice, and the Declaration of Helsinki regarding the treatment of human subjects in a study. The study protocol and the consent form were approved by an institutional review board (St. Charles Community Institutional Review Board, St. Charles, Missouri) prior to the conduct of any study procedures. Screening assessments to determine study eligibility occurred within 28 days prior to the first dose of study drug in the first treatment period (period 1). Subjects were admitted to the clinic at least 10.5 hours prior to day 1 dosing, were required to stay for PK sampling for 24 hours after day 1 dosing, and returned to the clinic for a blood collection at 36 hours postdose. Following a 7-day washout period, subjects returned to the clinical center to be dosed with the alternative treatment as per the randomization schedule (period 2). Study medication was administered by an oral dosing syringe and followed by administration of 8 fl oz of room-temperature water. Enrolled subjects received either a single 60-mg oral dose of MEROS administered at hour 0, 30 minutes after initiation of an FDA-standardized high-fat, high-calorie test meal preceded by an overnight fast of at least 10 hours (test treatment), or a single 60-mg oral dose of MEROS administered at hour 0 after an overnight fast of at least 10 hours (reference treatment). The test meal consisted of 2 eggs cooked in butter, 2 strips of bacon, 2 slices of toast with butter, 4 oz of hash brown potatoes, and 8 fl oz of whole milk. All subjects were required to remain upright during the first 10 hours postdosing. No fluid was allowed from 1 hour predose to 1 hour postdose except that included with the dose and the high-fat, high-calorie test meal. Throughout the study, standardized meals and beverages were served, and meals were the same in content and quantity during each confinement period. The following meals were served at 4, 7, 11, and 14 hours postdose, respectively: a small fat-free snack, a standardized meal, dinner, and a snack. When fluids were not restricted, standardized beverages were allowed ad lib.

Study Participants
The study inclusion/exclusion criteria are described in detail in another publication. 6 Eligible subjects were healthy male and female individuals aged ≥18 years at the time of the first dosing, with a body mass index of 18 to 32 kg/m², who provided written informed consent and were able to complete the screening process within 28 days prior to first dosing. Subjects were deemed healthy if there were no clinically relevant abnormalities documented by the medical history, full physical examination (including but not limited to an evaluation of the cardiovascular, gastrointestinal, respiratory, and central nervous systems), vital sign assessments, electrocardiogram, clinical laboratory assessments, and the Columbia-Suicide Severity Rating Scale.
Excluded from the study were subjects with any evidence or history of a clinically significant disorder involving the cardiovascular, respiratory, renal, gastrointestinal, immunologic, hematologic, endocrine, or neurologic system(s) or psychiatric disease, including those who had received treatment for asthma within the past 5 years; those with positive hepatitis B surface antigen, hepatitis C antibody, or HIV antibody serology results; those with a history of glaucoma, structural cardiac abnormalities, seizures, hypertension, Tourette syndrome, or tics; and those with a history of treatment for depression, anxiety, tension, or agitation. In addition, subjects with a clinically significant illness during the 4 weeks prior to the first dosing, those who reported receiving an investigational drug within 30 days prior to the first dosing, those who were pregnant, lactating or breastfeeding, those smoking or using tobacco and/or nicotine products, and those with a history of treatment for alcoholism, substance abuse, or drug abuse within the past 2 years were not allowed in the study. Reported difficulty fasting or consuming standardized meals, or reported intolerance to fatty foods or inability to consume a high-fat diet, was also exclusionary. Prescription and nonprescription medications, other than hormonal contraceptives and hormone replacement therapy, were not allowed for a period of 14 days and 7 days, respectively, prior to period 1 dosing and through the end of the study; cytochrome P450 enzyme inducers were restricted for 28 days prior to period 1 dosing and through the end of the study; and monoamine oxidase inhibitors were not allowed from 14 days prior to period 1 dosing through 14 days after the final dose of the study medication.

Bioanalytical Methods
After collection, samples were stored in an ice bath or Kryorack® until processed. Plasma was separated from whole blood by centrifugation at approximately 3000 revolutions per minute for 10 minutes at 4°C, transferred into duplicate 8-mL polypropylene tubes, and stored frozen at -20°C (range ±10°C) within 1.5 hours of collection until assayed. Plasma samples were analyzed for d- and l-methylphenidate concentrations; however, the d and l enantiomers of methylphenidate differ in their activities and PK properties, with much higher pharmacological activity and exposure observed for the d-enantiomer; [7][8][9] hence, the results reported here are based on data for the d-enantiomer of methylphenidate. A validated high-performance liquid chromatographic-tandem mass spectrometric method was used to determine plasma d-methylphenidate concentrations. d-Methylphenidate was quantitated using a liquid-liquid extraction procedure with l-amphetamine (Cerilliant, Round Rock, Texas) as the internal standard. Each 100-μL aliquot of quality-control sample (d-threo-methylphenidate; Chemtos, Austin, Texas) and 100-μL plasma study sample was mixed with 500 μL of internal working standard solution (5.00 ng/mL) and 100 μL of 10% ammonium hydroxide solution and vortexed; 5.00 mL of n-heptane was then added. Following centrifugation, the organic layer was transferred to a culture tube and evaporated at 40°C. The residue was reconstituted in 700 μL of reconstitution solution, and an aliquot was injected onto the liquid chromatographic-tandem mass spectrometric system. The liquid chromatography system used a 150 × 4.6 mm (5-μm particle size) SUPELCO Chirobiotic V column (Sigma-Aldrich Corp, St.
Louis, Missouri) with an isocratic flow of 83:17 (v:v) mobile phase A:mobile phase B at a flow rate of 1.6 mL/min. Mobile phase A consisted of 0.25% ammonium trifluoroacetate solution in methanol, and mobile phase B consisted of 0.25% ammonium trifluoroacetate solution in deionized water. Positive ions were detected in the multiple-reaction monitoring mode with precursor→product ion pairs of 234.0→84.0 m/z for d-methylphenidate and 136.0→119.0 m/z for l-amphetamine. All samples were run in a single day. The lower limit of quantitation for d-methylphenidate in plasma samples was 0.10 ng/mL (calibration range 0.100 to 20.0 ng/mL). Nine concentrations were used for the standard calibration curves; percentage bias was within ±3.8%, and r² was greater than 0.997.

Pharmacokinetic Evaluation
The PK parameters were estimated for methylphenidate using a noncompartmental approach in SAS® software (SAS Institute Inc., Cary, North Carolina). The PK parameters were the area under the plasma concentration-vs-time curve (AUC) from time 0 to the time of the last quantifiable concentration (AUC0-t); the AUC from time 0 extrapolated to infinite time (AUC0-inf), calculated as the sum of AUC0-t plus the ratio of the last measurable plasma concentration to the terminal rate constant; the maximum plasma concentration (Cmax); the time to Cmax (Tmax); the terminal elimination rate constant (Kel); and the terminal half-life (t½).
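To make these definitions concrete, the following minimal Python sketch reproduces the noncompartmental calculations on synthetic concentration data (the study itself used SAS, and the values below are illustrative, not study data): AUC0-t by the trapezoidal rule, Kel from a log-linear fit of the terminal phase, and AUC0-inf = AUC0-t + Clast/Kel.

import numpy as np

# Synthetic (hypothetical) plasma concentration-time data, not study data.
t = np.array([0.5, 1, 2, 4, 6, 8, 12, 24, 36])                    # h postdose
c = np.array([2.1, 6.5, 11.0, 15.8, 13.2, 10.1, 5.3, 1.1, 0.25])  # ng/mL

auc_0_t = np.trapz(c, t)                      # linear trapezoidal rule

# Terminal rate constant: slope of ln(concentration) over the last points.
slope, _ = np.polyfit(t[-4:], np.log(c[-4:]), 1)
kel = -slope                                   # 1/h
t_half = np.log(2) / kel                       # terminal half-life, h

auc_0_inf = auc_0_t + c[-1] / kel              # extrapolation to infinity
print(f"AUC0-t = {auc_0_t:.1f} ng*h/mL, AUC0-inf = {auc_0_inf:.1f} ng*h/mL, "
      f"t1/2 = {t_half:.2f} h")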
Statistical Analyses
Based on a fixed type-I error of 5% and an estimated intrasubject coefficient of variation of 20% obtained from a pilot study, a sample size of 20 was required to provide at least 80% power to detect a difference between test and reference treatments, assuming a test/reference ratio of 95% to 105%. A total of 30 subjects were enrolled to account for potential dropouts. Standard noncompartmental methods were used to calculate PK parameters for methylphenidate (d-enantiomer) plasma concentrations. Analyses of variance (ANOVA) using SAS software were performed on the ln-transformed PK parameters AUC0-t, AUC0-inf, and Cmax and on the untransformed PK parameters Tmax, Kel, and t½, with sequence, treatment, and period as fixed effects and subject within sequence as a random effect. Ratios of means and corresponding 90% CIs were calculated using the treatment least-squares means for the ln-transformed AUC0-t, AUC0-inf, and Cmax; CIs were expressed as a percentage relative to the least-squares means of the reference treatment. Exposure equivalence was concluded if the 90% CIs for the ratios of adjusted geometric means for AUC0-inf and Cmax for d-methylphenidate were completely within the boundaries of 80% to 125%. Descriptive statistics were used to summarize all PK parameters. For the statistical analysis, subject sample values below the lower limit of quantitation were reported as 0.

Study Population
The study was conducted at a single site in the United States (Cetero Research-St. Charles, St. Charles, Missouri) from March 15 to March 31, 2010. Thirty subjects were enrolled and randomized to receive study treatment. All 30 subjects (25 male, 5 female) received a 60-mg dose of MEROS and were included in the safety analyses. Two subjects withdrew consent for personal reasons, and the remaining 28 subjects completed the study. Subjects who completed the study included 23 males and 5 females with a mean age of 36.8 years (range 19 to 68 years), 65.5% of whom were white (for a summary of demographic data for subjects included in the PK analyses, see Table 1). One subject did not finish the high-fat, high-calorie test meal prior to dosing and was excluded from the PK analysis; hence, 28 subjects were included in the PK analysis under fasting conditions, and 27 subjects were included in the PK analysis under fed conditions.

Pharmacokinetics
The mean plasma methylphenidate concentration-time profiles following single-dose administration of MEROS 60 mg under the fed and fasted states are shown in Figure 1. Following attainment of Cmax, mean methylphenidate plasma concentrations declined in parallel, irrespective of food intake (Figure 1). The exposure (AUC) of methylphenidate was higher when MEROS 60 mg was administered under fed conditions compared with fasting conditions, with a mean AUC0-t of 160.2 ng·h/mL vs 140.4 ng·h/mL and a mean AUC0-inf of 163.2 ng·h/mL vs 143.7 ng·h/mL, respectively (Table 2). The rate of exposure (Cmax) of methylphenidate was also higher when MEROS 60 mg was administered under fed conditions compared with fasting conditions (Cmax of 17.0 ng/mL vs 13.6 ng/mL, respectively), and absorption of methylphenidate was more rapid under fed conditions, with median Tmax occurring at 4.0 hours and 5.0 hours, respectively (Table 2). When compared with fasting conditions, under fed conditions the average Cmax and AUC0-t of methylphenidate increased by 25% and 14%, respectively. Food intake did not affect the mean t½ values (fed conditions, 5.24 hours; fasting conditions, 5.65 hours) (Table 2). Variability (coefficient of variation [CV]) estimates under fed and fasting conditions for AUC0-t (CV = 49.1% and 50.6%, respectively), AUC0-inf (CV = 49.2% and 50.7%, respectively), and Cmax (CV = 45.5% and 42.6%, respectively) were similar (Table 2). When MEROS 60 mg was administered under fed compared with fasting conditions, the 90% CIs for the AUC0-t and AUC0-inf geometric mean ratios fell within the standard 80% to 125% bioequivalence criteria. The ratios of the ln-transformed geometric means and 90% CIs for methylphenidate AUC0-t and AUC0-inf were 119.5% (90% CI, 115.7% to 123.5%) and 119.0% (90% CI, 115.2% to 122.8%), respectively, indicating no effect on the overall exposure to MEROS after administration with food. The ratio of ln-transformed geometric means (90% CI) for Cmax was 127.6% (119.9% to 135.8%), which falls outside of the standard 80% to 125% bioequivalence criteria, indicating a slightly increased rate of absorption of MEROS after administration in the fed condition (Table 3).

Safety
Overall, 12 of 30 subjects receiving MEROS 60 mg under fed or fasting conditions reported a total of 25 treatment-emergent AEs; all AEs were mild in intensity. Ten subjects reported 15 AEs after receiving MEROS under fed conditions compared with 6 subjects (10 AEs) after receiving MEROS under fasting conditions; 4 subjects reported AEs under both fed and fasting treatments. Headache (fed: n = 5/29 [17.2%]; fasting: n = 3/28 [10.7%]) was the most commonly reported AE and was generally considered to be possibly related to the treatment. AEs are listed by fed and fasting conditions in Table 4. There were no deaths, serious AEs, or discontinuations due to AEs, nor were there any clinically significant abnormalities in laboratory test data, vital signs, or electrocardiograms, and no subject had suicidal ideation or behavior.
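Before turning to the discussion, the bioequivalence computation itself is easy to sketch. The study used a crossover ANOVA with sequence, period, and treatment effects; the simplified Python version below, on simulated paired log-normal AUC values, shows how the geometric mean ratio and its 90% CI are formed and checked against the 80% to 125% bounds.

import numpy as np
from scipy import stats

# Simulated within-subject AUC pairs (hypothetical); a paired analysis of
# ln-transformed values stands in for the study's full crossover ANOVA.
rng = np.random.default_rng(0)
auc_fasted = rng.lognormal(mean=np.log(140), sigma=0.5, size=28)
auc_fed = auc_fasted * rng.lognormal(mean=np.log(1.19), sigma=0.2, size=28)

d = np.log(auc_fed) - np.log(auc_fasted)       # within-subject log ratios
se = d.std(ddof=1) / np.sqrt(len(d))
t_crit = stats.t.ppf(0.95, df=len(d) - 1)      # 90% CI uses the 95th percentile

gmr = np.exp(d.mean())
lo, hi = np.exp(d.mean() - t_crit * se), np.exp(d.mean() + t_crit * se)
print(f"GMR = {gmr:.3f}, 90% CI = ({lo:.3f}, {hi:.3f})")
print("within 80%-125% bounds:", lo >= 0.80 and hi <= 1.25)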
Discussion
It is important to establish whether there are any food effects for once-daily extended-release methylphenidate formulations in children with ADHD, because the dose is typically administered after breakfast. This study in healthy adult subjects demonstrated that food intake has a small impact on the bioavailability of MEROS 60 mg, slightly increasing both the rate and extent of absorption. Consistent with the extended-release nature of MEROS, the rate of exposure (Cmax) of MEROS was lower than that of the immediate-release methylphenidate HCl oral solution when administered under fasting conditions. The presence of food reduced the time to peak concentration of MEROS by approximately 1 hour compared with fasting conditions. Following the intake of a high-fat, high-calorie meal prior to administration of MEROS 60 mg, the average Cmax of MEROS increased by 25%, and the extent of exposure of MEROS (AUC0-inf) increased by 14%, compared with administration of MEROS under fasting conditions. These changes are not likely to be clinically significant. The food effect exhibited by MEROS differs from that of other methylphenidate products. For immediate-release methylphenidate formulations, food intake prolongs the time to peak concentration by approximately 1 hour. 10 A similar effect has been observed with once-daily solid dosage forms of extended-release formulations of methylphenidate (such as tablets and capsules) when administered with food. The relative bioavailability of long-acting OROS®, an osmotic controlled-release tablet formulation of methylphenidate HCl, is similar to that of immediate-release methylphenidate under fed and fasting conditions. Studies have shown that peak plasma concentrations and AUC for OROS® were approximately 10% to 30% higher, and the time to peak concentration was delayed by approximately 1 hour, in the presence of food. 11,12 Various tablet and capsule formulations of extended-release methylphenidate HCl also exhibit delayed PK profiles similar to that of OROS® when administered with food. 11,13 The formulation of MEROS is unique, with a matrix formulation that is more resilient than other formulations to degradation by stomach acid, making the drug release rate less likely to be altered by food intake and gastric transit time. MEROS is supplied as a powder that is reconstituted with water by the pharmacist prior to dispensing and is composed of cationic drug-polymer complexes, consisting of a d,l-threo-methylphenidate racemic mixture bound to matrix particles via an ion-exchange mechanism. Eighty percent of these drug complexes are coated with an extended-release coating polymer, which allows for release of methylphenidate throughout the day; the remaining 20% are left uncoated and act as immediate-release methylphenidate. 5 The immediate-release component provides the fast absorption rate needed to achieve a 45-minute onset of efficacy, 5 and food should not impact that outcome. The suspension formulation of MEROS provides a treatment option for children with ADHD who struggle with solid dosage forms, ie, those who are unwilling or unable to swallow tablets/capsules or to use sprinkles or transdermal patches. Based on Cmax, bioequivalence was not observed for MEROS 60 mg administered under fed and fasting conditions. However, the difference in Cmax between fed and fasted conditions was small (geometric mean ratio 127.6% [90% CI, 119.9% to 135.8%]).
In an open-label, randomized crossover study, the Cmax for MEROS 60 mg administered in a fasted state was substantially lower (geometric mean ratio 69.13% [90% CI, 63.72% to 75.00%]) than that of an equivalent dose of an immediate-release methylphenidate HCl oral solution (30 mg twice daily); 6 thus, the increased Cmax observed when MEROS was administered after a high-fat, high-calorie meal was still lower than that observed with an equivalent dose of the reference treatment administered fasted. Further, the 90% CIs for the AUC0-t and AUC0-inf geometric mean ratios fell within the standard 80% to 125% bioequivalence acceptance range, indicating no food effect on the overall (rate and extent) exposure in these healthy adult subjects. These results suggest that MEROS can be administered with or without food for the treatment of ADHD. The shorter Tmax observed in the fed condition suggests that taking MEROS with food may shorten the time it takes for the medicine to start working.

Conclusions
This study in healthy adult subjects demonstrated no food effect on the overall (rate and extent) exposure of MEROS; there was a small increase in the peak plasma concentration (but not in overall exposure). The effect was small and likely without clinical implications. Overall, MEROS administered as a single oral dose of 60 mg (12 mL of oral suspension equivalent to 25 mg methylphenidate HCl per 5 mL [5 mg/mL]) demonstrated a favorable safety profile in healthy adult subjects under fasting and fed conditions. The AEs reported during the study were anticipated and are common among the AEs reported following administration of methylphenidate. 6
Experimental realization of dual task processing with a photonic reservoir computer
We experimentally demonstrate the possibility of processing two tasks in parallel with a photonic reservoir computer based on a vertical-cavity surface-emitting laser (VCSEL) as a physical node with time-delay optical feedback. The two tasks are injected optically by exploiting the polarization dynamics of the VCSEL. We test our reservoir with the very demanding task of nonlinear optical channel equalization as an illustration of the performance of the system and show the simultaneous recovery of two signals with an error rate of 0.3% (3%) for a 25 km (50 km) fiber distortion at a processing speed of 51.3 Mb/s.

I. INTRODUCTION
Building energy-efficient systems for data-processing tasks currently performed by digital computers is one of the focus problems that photonic reservoir computing is trying to address. A reservoir computing system is a specific kind of neural network with a recurrent topology, i.e., coupling signals and information do not propagate unidirectionally through the network structure. The training, which consists of adjusting the interconnection weights between the neurons of this particular structure, is usually difficult and data intensive, as the number of weights scales with the square of the network size. This also implies that a physical architecture with many tunable degrees of freedom must be designed, which represents a significant technical challenge for the development of efficient hardware platforms. A reservoir computing system overcomes these hurdles by not realizing the training through internal weight adjustments: The internal weights are kept fixed, and only a readout layer unidirectionally connected to the recurrent network is trained. This can be achieved at the readout with a simple linear regression. 1,2 This is specifically interesting as it allows the use of physical components for a hardware implementation of a neural network. Several architectures using this specific principle already exist. [3][4][5][6][7] However, realizing a large physical neural network remains a technical challenge, especially with photonic devices. Hence, a solution was proposed with time-delay reservoir computing: Instead of using many physical neurons, only one physical neuron is used, and several virtual neurons are temporally spread along a delay line. 8 The time separation between virtual neurons is set to be smaller than the physical neuron's response time so that the neurons remain in a sustained transient dynamics, which effectively translates into time-multiplexed interconnections between the virtual neurons. In that framework, adding neurons only requires lengthening the delay line. Several photonic architectures use this specific technique, with either an optoelectronic 4,9,10 or an all-optical [11][12][13][14][15][16][17] delay line. The vertical-cavity surface-emitting laser (VCSEL) is a good candidate to realize a time-delay reservoir computer and process data in optical networks, as it is widely used in optical telecommunication networks. Among the VCSEL's specificities are light emission along two orthogonal linear polarization modes and a higher modulation bandwidth than edge-emitting lasers. 18 We have already proven numerically 19 and experimentally 20 that a VCSEL-based time-delay reservoir computer is able to efficiently perform computation tasks, with state-of-the-art performance on various tasks such as chaotic time-series prediction and nonlinear WIFI channel equalization.
Parallel processing of two tasks was originally proposed in Ref. 13 using the single-mode dynamics of a laser diode. Using the multimode polarization dynamics of a laser diode has also been considered to perform several tasks simultaneously. It has been shown theoretically that using the two longitudinal modes of an edge-emitting laser, 17 the two modes of a semiconductor ring laser, 15 or the two polarization modes of a VCSEL 21 enables parallel processing with a time-delay reservoir computing architecture. We thus experimentally address here the question of whether a VCSEL-based photonic reservoir, which exhibits two polarization modes, is able to efficiently perform two tasks consisting of the recovery of two optical signals distorted by a fiber. In this article, we present an experimental realization of a reservoir computer processing two tasks simultaneously. This reservoir computer is based on the time-delay reservoir architecture, using a VCSEL as a physical node. The two tasks are injected optically into the two polarization modes of the VCSEL. By carefully choosing the operating point of the reservoir computer, we show the possibility to tune the performance of the system on each processed task. As an illustration, we test our reservoir on nonlinear optical channel equalization. This task is very demanding, as signals sent in optical fiber are distorted by several effects, such as chromatic dispersion and the Kerr effect. 22 More specifically, we are able to recover two signals simultaneously distorted by 25 km and by 50 km of fiber and sent at 25 Gb/s, with a mean error rate of 0.3% at 25 km and of 3% at 50 km, at a processing speed of 51.3 Mb/s.

II. METHOD
The experimental setup is depicted in Fig. 1. The reservoir itself is the same as the one we have previously studied in Ref. 20: It comprises a VCSEL (Raycan) as a physical node, which emits light at 1552.75 nm for the dominant linear polarization mode (LPx) and at 1552.89 nm for the depressed polarization mode (LPy). The bias current of the VCSEL is set at 4.5 mA, which corresponds to 1.5 times the threshold current. This choice of pumping current is based on the previous numerical analysis we conducted in Ref. 19, showing that a pumping current close to threshold leads to high memory capacity and overall computing performance for the time-delay VCSEL-based reservoir computer. The feedback loop is made of SMF-28 single-mode fiber (standard telecommunication fiber), resulting in a delay line of τ = 39.4 ns. As only one calculation step can be performed per round-trip, this length sets the processing speed of the system; the speed could be increased by reducing the length of the delay line, which was not possible in our case. To optimize our use of the VCSEL dynamics, we set the inter-node delay to θ = 0.04 ns according to previous simulations 19 and the frequency limitations of the experimental components (i.e., oscilloscope, arbitrary waveform generator, and modulators): The optimal delay between virtual nodes that best exploits the VCSEL's transient response is θ* = 0.02 ns; however, the modulation bandwidth of our arbitrary waveform generator (AWG) is limited to 25 GHz. For the training and testing of the reservoir, we use only one out of every two nodes, separated by 2θ = 0.08 ns, due to the memory limitation of the computer performing the training, thus leading us to consider N = 492 nodes instead of N = 984.
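The bookkeeping between the delay line, the inter-node spacing, and the network size can be sketched in a few lines of Python (values taken from the text; the aggregate-rate estimate assumes one input symbol per round-trip and per polarization mode):

# Virtual-node bookkeeping from the parameters quoted above (sketch only).
tau = 39.4e-9       # feedback delay (s)
theta = 0.04e-9     # inter-node delay (s)

n_virtual = round(tau / theta)    # ~985 virtual nodes along the loop
n_used = n_virtual // 2           # one node out of two kept -> ~492
rate = 2 / tau                    # two symbols (one per mode) per round-trip
print(n_virtual, n_used)
print(f"aggregate rate ~ {rate / 1e6:.1f} Mb/s")  # ~50.8 Mb/s, close to the
                                                  # 51.3 Mb/s quoted in the text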
Considering an increasing number of virtual nodes while keeping the feedback delay fixed, we observed numerically an improvement of the performance up to Nth = 100. Beyond this threshold value, increasing the size of the virtual network only leads to marginal improvement in the RC performance. In our experiment, we chose N = 492 > Nth for experimental convenience, using only half of the accessible virtual nodes to speed up the training phase without compromising the performance. There is also a polarization controller (P.C.) to control the optical polarization along the feedback loop. Finally, an optical attenuator Keysight 81577A (Att.) is used to control the feedback strength. In this article, the results presented are obtained with the isotropic feedback configuration, i.e., the orientations of the two VCSEL polarization modes (LPx,y) are preserved in the external cavity prior to being fed back. According to the results obtained in Ref. 19, there is an optimum operating point for each value of the feedback strength while varying the injection power. This is why we set the feedback attenuation η to 17 dB, to guarantee that enough power is injected to find this best operating point. The input layer is primarily composed of an arbitrary waveform generator (AWG) AWG70002A from Tektronix, a tunable laser Tunics T100S from Yenista, and two Mach-Zehnder modulators (MZx,y) with a bandwidth of 12.5 GHz. Both modulators work in their linear regime. The light emitted by the tunable laser is split into two beams and sent into the two modulators. The wavelength of this laser is set to 1552.82 nm so that it is equally separated from the wavelengths of the main and depressed polarization modes of the VCSEL, as presented in Fig. 2. By doing so, we ensure that, when the power is the same in both linear polarization modes at the output of the modulators, the power is equally distributed among the two linear polarization modes of the injected VCSEL. Shifting the frequency of the master laser toward one of the polarization modes of the VCSEL leads to a more efficient optical injection in this mode and therefore enhances the response of this mode at the expense of the response of the other mode, for which the optical injection is reduced. The two different masked input streams, corresponding to the two tasks Tx,y to be processed, are used to drive the two modulators and are generated by the AWG at 25 GS/s for each stream. The output power of each modulator is controlled by an optical attenuator built into the modulator. This allows the injected powers Pinj,x and Pinj,y of the tasks Tx,y to be changed independently. At the modulator outputs, the optical polarization of the input stream containing Tx is aligned with the main polarization mode (LPx) of the VCSEL, and that of the input stream containing Ty with the depressed polarization mode (LPy). An example of input streams is given in Fig. 1(b). Both beams are then recombined and sent into the reservoir computer. The response of the reservoir is recorded at the output layer: The signal is first amplified with an erbium-doped fiber amplifier (EDFA) from Lumibird. Then, the two polarization modes of the VCSEL are separated and recorded with two photodiodes (Newport 1544-B, 12 GHz bandwidth) connected to an oscilloscope (Tektronix DPO 71604C, 16 GHz bandwidth) with two channels at 50 GS/s. Examples of the experimental time series recorded for each polarization mode of the VCSEL are given in Fig. 1(c).
The signal-to-noise ratio (SNR) has been experimentally measured at 21 dB. With the high-resolution optical spectrum analyzer BOSA from Aragon Photonics, we can study the spectral dynamics of the system in different configurations. Figure 2(a) shows the experimental optical spectrum of the reservoir computer without injection and with optical feedback. The VCSEL is lasing at 1552.72 nm, the wavelength of its dominant polarization mode. The dominant mode LPx of the VCSEL has a spectral width of 5.72 GHz with an attenuation of 17 dB in the feedback loop. The two smaller side peaks are induced by the undamped relaxation oscillations of the VCSEL, 23 whose frequency is measured at 3.73 GHz. Figure 2(b) presents the spectrum of the reservoir with injection but without a modulating input: Under this condition, the VCSEL emits light only in its dominant polarization mode, with the wavelength of the master laser at 1552.82 nm. We notice that the slave laser exhibits wave-mixing dynamics and that it is not locked to the master laser. When the master laser is modulated, its spectrum broadens and overlaps the two wavelengths of the VCSEL, as shown in Figs. 2(c) and 2(d). This allows the VCSEL to react to the master laser and to respond according to the modulated input. This response also broadens the spectra of the two polarization modes of the VCSEL. The dominant polarization mode LPx is detuned from the modulated input by 9.45 GHz. We also observe that injecting more power into the depressed mode LPy forces its emission, even though this mode does not lase when the VCSEL is free-running. We have tested the dual-tasking performance of our reservoir at solving a nonlinear optical channel equalization task, which aims at reconstructing a transmitted signal from the distorted signal available at the channel's output alone. We have chosen a single-mode optical fiber as the telecommunication channel. The distortion introduced by this channel is simulated using the nonlinear Schrödinger equation, which models the propagation of a signal in the fiber. This equation reads as follows 24 :

∂E/∂z = −(α/2)E − i(β2/2)(∂²E/∂t²) + iγ|E|²E,

where E(z, t) is the slowly varying envelope of the optical field, α is the attenuation of the fiber, β2 is the second-order dispersion coefficient, and γ refers to the nonlinearity of the fiber. We have chosen the coefficients of the SMF-28 fiber, the single-mode silica fiber used for long-haul transmission. From the distorted signal, we extract two features b(1)n and b(2)n for each bit bn, which are the time-average values of the upper half and the lower half of the distorted signal for the duration of one bit.
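A compact split-step Fourier integration of this equation is sketched below in Python. The fiber coefficients are typical textbook values for standard single-mode fiber at 1550 nm, assumed here for illustration rather than taken from the paper, and the on-off-keyed input is a placeholder for the experimental 25 Gb/s streams.

import numpy as np

# Split-step Fourier sketch of the lossy nonlinear Schrodinger equation.
# Coefficients are typical SMF values at 1550 nm (assumed, not from the paper).
alpha = 0.046        # attenuation, 1/km (~0.2 dB/km)
beta2 = -21.7e-24    # group-velocity dispersion, s^2/km
gamma = 1.3          # Kerr nonlinearity, 1/(W km)

rb, sps, nbits = 25e9, 16, 256            # bit rate, samples per bit, bits
dt = 1 / (rb * sps)
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, nbits)
power = np.repeat(bits, sps) * 1e-3       # on-off keying, 1 mW peak (assumed)
E = np.sqrt(power).astype(complex)        # field envelope, sqrt(W)

L_km, nsteps = 25.0, 250
dz = L_km / nsteps
w = 2 * np.pi * np.fft.fftfreq(E.size, dt)              # angular frequency grid
lin = np.exp((-alpha / 2 + 0.5j * beta2 * w**2) * dz)   # loss + dispersion step

for _ in range(nsteps):
    E = np.fft.ifft(np.fft.fft(E) * lin)                # linear half
    E *= np.exp(1j * gamma * np.abs(E)**2 * dz)         # Kerr step

print(f"mean output power: {np.mean(np.abs(E)**2) * 1e3:.3f} mW")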
The input of the reservoir is realized by masking each feature value for five consecutive bits, hence using 10 different masks (one per input value) of 985 values, which are then summed together. This masked signal is temporally rescaled so that each symbol duration is τ. The masked input of the reservoir Jn−2(t) at step n − 2 reads

Jn−2(t) = Σ(i=1 to 10) Mi(t) ui,

where (u1, …, u10) = (b(1)n−4, b(2)n−4, …, b(1)n, b(2)n) collects the ten feature values and Mi(t) is one of the ten different masks. A graphical illustration of the preprocessing is given in Fig. 1(d). At the output of the reservoir, we train the system by linear regression with N = 492 nodes to recover the bits bn−2. For each node, we use as a state the values of the optical power of the two orthogonal polarization modes (LPx and LPy). Two different linear regressions are performed, one for each task Tx and Ty, using the whole state of the reservoir. The equations of the regressions are S × ωx = bTx and S × ωy = bTy, where S is the reservoir's state matrix containing the powers associated with the dominant (LPx) and depressed (LPy) polarization modes, ωi is the vector containing the readout-layer weights obtained from linear regression, and bTi is the vector containing the target output of task Ti. Exploiting the two LP modes for each regression stems from the nonlinear mixing of the two input data streams in the VCSEL dynamics, so that each polarization mode contains part of the information of both processed tasks. For the training of the reservoir, we use 20 000 samples, i.e., sliding blocks of five consecutive distorted bits. Since we record the optical power of the LPx,y modes for the 492 nodes, the size of S is 20 000 × 984. The performance of the reservoir is tested on 5380 samples and measured using the bit error rate (BER).
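The readout just described reduces to two ordinary least-squares problems sharing one state matrix. The Python sketch below mirrors the shapes quoted in the text (scaled down so it runs quickly); the state matrix is random placeholder data standing in for the recorded optical powers, so the resulting BERs are meaningful only as a demonstration of the pipeline.

import numpy as np

# Dual-task linear readout: one state matrix S (both LP modes), two
# independent regressions. S is random placeholder data here; the experiment
# used 20 000 x 984 training states and 5380 test samples.
rng = np.random.default_rng(2)
n_train, n_test, n_feat = 2000, 500, 984

b_x = rng.integers(0, 2, n_train + n_test).astype(float)   # targets, task Tx
b_y = rng.integers(0, 2, n_train + n_test).astype(float)   # targets, task Ty
S = rng.normal(size=(n_train + n_test, n_feat))
S[:, 0] += b_x                     # plant some task information in the states
S[:, 1] += b_y

def ber(S, b):
    w, *_ = np.linalg.lstsq(S[:n_train], b[:n_train], rcond=None)
    pred = (S[n_train:] @ w) > 0.5                         # threshold readout
    return np.mean(pred != (b[n_train:] > 0.5))

print("BER Tx:", ber(S, b_x), "BER Ty:", ber(S, b_y))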
As already stated, for each value of the feedback strength, there is a corresponding optimal injection power for the reservoir computer. 19 That is why we vary only the injected power while keeping the value of the feedback strength fixed. This reduces the dimension of the parameter space to explore to find the best experimental operating point. By finding the best operating point, we ensure that our VCSEL-based reservoir computing system has a combination of large memory capacity (i.e., long fading memory) and large computational ability (i.e., good aptitude for approximation and generalization), as demonstrated in our previous numerical analysis. 19 Furthermore, we aim at showing the tunable parameters that can control the performance on the two processed tasks Tx and Ty. Figures 4 and 5 present the influence of the injection power ratio Pinj,y/Pinj,x on the performance of the two processed tasks. To produce these figures, we first find the best operating point for each value of this ratio: We sweep the value of Pinj,x (an example is provided in Fig. 3), and Pinj,y is then fixed by the value of the ratio. As a result, we find the value of Pinj,x that minimizes the mean BER for both Tx and Ty. This optimal value is then reported in the graph (this is why Figs. 4 and 5 do not contain any information on the effective injected power). Figure 3 shows an example of the method used to produce the performance figures. We first present the influence of the injected power on the performance of both tasks Tx and Ty in Fig. 3 for the two lengths of fiber recovered: 25 km (a) and 50 km (b). In this figure, the injection ratio Pinj,y/Pinj,x is fixed to 0.3. We observe that there is an optimal injected power that yields the best mean performance, at Pinj,x = 0.09 mW for 25 km and at Pinj,x = 0.2 mW for 50 km. We report only this best value in the figures.

III. RESULTS
The results for the channel equalization of 25 km of propagation in the fiber are presented in Fig. 4(c). Figures 4(a) and 4(b) present an example of the signal at the input and output of the optical fiber, respectively. We observe that the performance on tasks Tx and Ty varies with the injection ratio Pinj,y/Pinj,x. If this ratio is smaller than 2, task Tx is performed better than task Ty. When this ratio is higher than 2, the trend is reversed, and task Ty is performed better. This can be explained by a polarization switching in the VCSEL output induced by optical injection (i.e., the roles of the dominant and depressed polarization modes of the VCSEL are exchanged 27 ). This phenomenon therefore increases the SNR of the task Ty injected in the depressed polarization mode. The system is able to provide a BER of 0.04% for task Tx when the dominant mode is strongly injected (with an injection ratio Pinj,y/Pinj,x of 0.2). The other task is processed with lower performance in this case, with a BER of 1.6%. When the power ratio is greater than 0.5, the average performance of the reservoir reaches a plateau with a BER of 0.35%. The ratio of injected power in the polarization modes can thereby be used to choose easily how performance is split between the two processed tasks. While processing a single nonlinear channel equalization task, the reservoir computer exhibits a BER of 0.08%. We notice that the performance of our VCSEL-based reservoir on a single task is comparable to that achieved with a single-mode laser diode with a more complex modulation format and a similar propagation distance. 26 However, processing two tasks instead of one lowers the average performance of the system. To analyze the impact of the nonlinear transformation induced by our VCSEL-based reservoir on the task, we compare it to a stand-alone linear regression (a linear classifier). Toward this end, the linear classifier is operated under the same conditions as the reservoir computer: One classifier is used to process the two tasks, with the same dimension and a similar injection power ratio as in the photonic reservoir computer. We also use the same input features, with identical sizes for the training and testing sets (20 000 samples for training and 5380 for testing). Finally, similar SNR conditions are considered. To meet this last condition, as the VCSEL introduces additional noise, we added white noise to the input signal to achieve an SNR of 21 dB before performing the stand-alone linear regression. Under these similar operating conditions, the stand-alone linear regression provides at best a BER slightly lower than 1%, and the mean BER of the two tasks is ∼3.2% at the best operating point identified in our experiment (i.e., for a ratio in the range of 0.6-3). The reservoir computer is thus able to improve the performance on the two tasks by approximately one order of magnitude. We also provide results on the dual channel equalization of the propagation in 50 km of single-mode fiber. Since the distortion of the signal is more pronounced [Fig. 5(b)], the mean performance of the reservoir computer is expected to be lower than after a 25 km transmission. The performance of the reservoir computer is given in Fig. 5(c). We still observe a similar trend: The polarization switching of the VCSEL occurs for an injection ratio Pinj,y/Pinj,x ∼ 1, and the best achieved BER for one task is 1.6%. The best mean performance is 2.2%, achieved for an injection ratio of 0.7. The system performing this single task exhibits a BER of 1.9%, which is slightly below the performance previously reported. 16 Contrary to the equalization of the shorter optical fiber, processing two tasks simultaneously slightly decreases the mean performance of the system compared to processing a single task. The performance of the stand-alone linear regression (linear classifier) is presented in Fig. 5(d). The test has been realized under the same conditions as those used for the reservoir computer. The linear classifier achieves a best BER of 7.5%.
When both processed signals are balanced, the linear classifier exhibits its best mean performance, with a mean BER of 8.4%. Using the nonlinear effects in our VCSEL-based photonic reservoir computer under similar SNR conditions thus provides a significant benefit, improving the performance on the signal-recovery task by a factor of 5. The relatively low range of power used for the input signal propagating in the fiber is consistent with the range of powers used in telecommunication networks. Furthermore, it does not significantly trigger the Kerr nonlinearity. Equalizing both linear distortion and a strong Kerr effect remains a challenge for current digital signal processing (DSP)-based techniques for optical channel equalization. 28 To analyze how the Kerr effect would affect the performance of the reservoir, we sent into the fiber two signals with a large pulse-amplitude modulation depth of 0.5 W and recovered two signals simultaneously at the output of the fiber. This power is large enough to trigger the Kerr nonlinearity (as only a few tens of mW are necessary) and make the task more complex to solve. Under these new conditions and otherwise similar parametric and operating conditions, our reservoir can recover two signals simultaneously with an optimal mean BER of 8.9% for a 25 km fiber distortion and a mean BER of 17.9% for a 50 km distortion. A degradation of at least one order of magnitude is observed in these conditions, with a level of recovery unsuitable for telecom applications. However, the level of power was quite large, and no specific optimization was performed for this modified task: A more efficient training-set size, a larger reservoir, or adapted preprocessing with more peripheral bits might achieve a better level of performance. This work is left for future studies.

IV. CONCLUSION
We have realized an experimental photonic reservoir computer architecture capable of processing two tasks simultaneously. This reservoir is a time-delay reservoir computer using a VCSEL as a physical node. The two different inputs are made by injecting two different optical signals, each being aligned with a different polarization mode of the VCSEL. Using this system, we have performed, as an illustration, two signal-recovery tasks simultaneously when the signal generated at 25 Gb/s is distorted by propagation in a 25 km or 50 km long SMF-28 optical fiber. We have been able to recover two signals with a BER of 0.3% at a total processing speed of 51.3 Mb/s for a 25 km distortion and with a BER of 3% at the same bit rate for a 50 km distortion. On both tasks, the reservoir improves the performance by a factor of 5-10 compared to processing the input signal directly under similar SNR conditions. Current telecommunication networks use digital signal processing (DSP) to mitigate the effects of the optical fiber, 29 as it allows signals to propagate over several thousands of kilometers with a BER of ∼10^-3, compatible with forward error correction, but at the expense of substantial computational resources. Our result also shows that there is still a significant margin of improvement before considering our system a viable alternative to the best DSP approaches, despite achieving a level of performance comparable to existing photonic-based machine learning techniques on this particular task. 30
Nevertheless, this result is a first step showing that analog photonic reservoir computing could be envisioned for such dual-tasking on optical channel equalization. We proved in our previous work that the bimodal dynamics of the VCSEL allows better computational performance than a single-mode dynamical system. This is due to more complex dynamics that are well suited to performing computation. Here, we proved experimentally that we can exploit the bimodal dynamics of the VCSEL to process two tasks simultaneously. This suggests that using a system exhibiting more dynamical modes would allow scaling up the number of tasks to be processed simultaneously. However, performing several tasks simultaneously slightly degrades the mean computational performance of the system. There is thus a trade-off between the number of tasks to be processed and the individual performance on each task considered. Moreover, we hypothesize that the physics underlying the coupling mechanism between modes may also influence the performance of the reservoir computer, for instance, using the longitudinal modes of a laser 17 or the two modes of a semiconductor ring laser 15 instead of the polarization modes of the VCSEL. This may constitute an interesting framework for future studies of multimode reservoir computing.
Role of RNF213 polymorphism in defining quasi-moyamoya disease and definitive moyamoya disease.
OBJECTIVE Quasi-moyamoya disease (QMMD) is moyamoya disease (MMD) associated with additional underlying diseases. Although the ring finger protein 213 (RNF213) c.14576G>A mutation is highly correlated with MMD in the Asian population, its relationship to QMMD is unclear. Therefore, in this study the authors sought to investigate the RNF213 c.14576G>A mutation in the genetic diagnosis and classification of QMMD.
METHODS This case-control study was conducted among four core hospitals. A screening system for the RNF213 c.14576G>A mutation based on high-resolution melting curve analysis was designed. The prevalence of RNF213 c.14576G>A was investigated in 76 patients with MMD and 10 patients with QMMD.
RESULTS There were no significant differences in age, sex, family history, and mode of onset between the two groups. Underlying diseases presenting in patients with QMMD were hyperthyroidism (n = 6), neurofibromatosis type 1 (n = 2), Sjögren's syndrome (n = 1), and meningitis (n = 1). The RNF213 c.14576G>A mutation was found in 64 patients (84.2%) with MMD and 8 patients (80%) with QMMD; no significant difference in mutation frequency was observed between cohorts.
CONCLUSIONS There are two forms of QMMD, one in which the vascular abnormality is associated with an underlying disease, and the other in which MMD is coincidentally complicated by an unrelated underlying disease. It has been suggested that the presence or absence of the RNF213 c.14576G>A mutation may be useful in distinguishing between these disease types.
Identifying the genetic factors underlying such disorders contributes not only to the development of new methods of molecular-targeted therapy but also to better genetic differential diagnosis, appropriate risk assessment, and onset prediction. Furthermore, better understanding of these genetic factors might lead to the establishment of new diagnostic criteria and disease concepts. In recent years, ring finger protein 213 (RNF213) has been identified as a disease-susceptibility gene for MMD, and a single missense mutation in RNF213 (c.14576G>A, p.R4859K, rs112735431) is frequently found in studies of East Asian populations. [5][6][7] However, the relationship between this gene mutation and QMMD is unclear, as it has received little attention, and the findings among those few studies vary considerably. Thus, the purpose of the current study was to investigate the role of the RNF213 c.14576G>A mutation in the genetic diagnosis and classification of QMMD by examining the mutation frequency in patients with MMD and those with QMMD.

Patients
Our case-control study was conducted among four core hospitals. Genetic analysis of the RNF213 c.14576G>A mutation was performed in patients with MMD or QMMD who visited one of these hospitals between August 2014 and August 2018 and agreed to inclusion in the study. This study was approved by the ethics committee of each hospital and the ethics committee of Nippon Medical School. Written informed consent was acquired from all participants. The following information was obtained from all patients: sex, age at diagnosis, family history, onset symptoms, underlying diseases, and the presence or absence of lifestyle diseases.
Diagnosis
We confirmed that the diagnoses of MMD and QMMD complied with the respective diagnostic criteria detailed in the Japanese guidelines outlined in "Recommendations for the Management of Moyamoya Disease: A Statement from Research Committee on Spontaneous Occlusion of the Circle of Willis (Moyamoya Disease) [2nd Edition]." 1 Patients with underlying diseases or steno-occlusive lesions due to arteriosclerosis were excluded from an MMD diagnosis. MMD diagnosis was made using imaging procedures, including MRI, MRA, and cerebrovascular angiography. Some patients with MMD were diagnosed using MRI/MRA according to the criteria that stenosis or occlusion is found in the area centered on the terminal portions of the intracranial internal carotid arteries and that abnormal vascular networks are found in the basal ganglia, as observed on MRA images. Most patients were diagnosed using cerebrovascular angiography according to the criteria that stenosis or occlusion is found in the area centered on the intracranial internal carotid arteries and that abnormal vascular networks (i.e., moyamoya blood vessels) are observed in their vicinity in the arterial phase. Underlying diseases in patients with QMMD were identified based on the medical history obtained from the patient or a family member and on medical records. Family history was obtained from either the patient or a family member. In the current study, a diagnosis of QMMD was defined as stenosis or occlusion observed at the end of the internal carotid arteries and in the vicinity of the anterior and middle cerebral arteries, together with the involvement of abnormal vascular networks. Patients with unilateral lesions who had underlying diseases were included in the QMMD group. The following conditions have been reported as underlying diseases: arteriosclerosis, autoimmune diseases, meningitis, von Recklinghausen's disease, brain tumor, Down syndrome, head injuries, radiotherapy, hyperthyroidism, and some other diseases. 1 In this study, arteriosclerosis was excluded as an underlying disease because of its known association with the RNF213 c.14576G>A mutation. 8

DNA Extraction and RNF213 Genotyping
After written informed consent was obtained from included patients, peripheral blood samples were collected. Genomic DNA extraction from blood was performed using a GENOMIX kit (Talent). RNF213 c.14576G>A (exon 61) genotype screening was performed by small amplicon genotyping (SAG) based on high-resolution melting curve analysis 9 and confirmed by Sanger sequencing. Polymerase chain reaction (PCR) primers for c.14576G>A were designed to flank the mutation, leaving only a single base, including the mutation, between the primers. The forward primer used was 5′-GCAAGTTGAATACAGCTCCATCA-3′, and the reverse primer was 5′-TGTGCTTGCTGAGGAAGCCT-3′. PCR conditions were as follows: initial denaturation at 95°C for 2 minutes, followed by 45 cycles at 94°C for 30 seconds and annealing at 67°C for 30 seconds. After PCR, high-resolution melting curve analysis was performed in 96-well plates using a LightScanner (Idaho Technology Inc.), during which data were collected from 55°C to 97°C at a ramp rate of 0.101°C per second.

Data Analysis
Comparative analyses of age at onset, sex, onset symptoms, family history, and the frequency of the RNF213 c.14576G>A mutation were performed between the MMD and QMMD cohorts.
In addition, univariate analyses were performed to compare age, sex, mode of onset, and risk factors for arteriosclerosis (smoking, hypertension, diabetes mellitus, dyslipidemia, and ischemic heart disease) between patients with and those without the gene mutation. All values are expressed as mean ± SE or median and IQR. The Mann-Whitney U-test and Fisher's exact test were used to assess statistical significance because of the heterogeneity and small sample size. All analyses were performed at a significance level of p < 0.05 using a commercially available software package (JMP Pro 13, SAS Institute Inc.).

Results
The two RNF213 c.14576G>A genotypes, namely c.14576GG (wt/wt) and c.14576AG (mut/wt), were determined by a modified SAG method for all patients in our study (Fig. 1). A total of 86 patients (76 with MMD and 10 with QMMD) were included in the study. Patient characteristics are presented in Table 1. The mean age at onset was 35 years for all patients, 34 years (range 1-77 years) for patients with MMD, and 43.4 years (range 20-62 years) for patients with QMMD. Our study cohort consisted of 61 females (55 with MMD and 6 with QMMD) and 25 males (21 with MMD and 4 with QMMD). In addition, a positive family history was found in 15.8% of patients with MMD (12/76) and 10% of patients with QMMD (1/10). The onset symptoms of MMD in our cohort were cerebral ischemic disease in 48 patients, hemorrhage in 11, headache in 7, no symptoms in 4, and another onset type in 6. The onset symptoms of QMMD in our cohort were ischemia in 6 patients, hemorrhage in 0, headache in 2, no symptoms in 1, and another onset type in 1. We found no significant differences in age, sex, family history, and onset symptoms between the MMD and QMMD groups. The RNF213 c.14576G>A mutation was found in 64 patients (84.2%) in the MMD group and 8 patients (80%) in the QMMD group; there was no significant difference in mutation frequency between the two groups (p = 0.584).
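For reference, the mutation-frequency comparison can be reproduced from the reported counts with Fisher's exact test (the authors used JMP; the Python sketch below uses SciPy, so the exact p-value may differ slightly from the reported p = 0.584, but the non-significant conclusion is the same).

from scipy.stats import fisher_exact

# 2 x 2 table of RNF213 c.14576G>A carriers vs non-carriers, from the text.
table = [[64, 76 - 64],   # MMD:  64 carriers, 12 non-carriers
         [8, 10 - 8]]     # QMMD:  8 carriers,  2 non-carriers
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3f}")   # non-significant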
Discussion
In this study, 80% of patients with QMMD carried the RNF213 c.14576G>A mutation. As this prevalence was not significantly different from that found in patients with MMD, our MMD and QMMD cohorts had similar rates of gene mutation. Autoimmune diseases were the most common underlying diseases in patients with QMMD in this study, as cases of atherosclerosis were excluded. Furthermore, the prevalence of gene mutation in patients with QMMD in this study was extremely high compared with that found in previous studies on gene mutation in patients with QMMD. This finding suggests that many patients with conventional MMD who coincidentally had a concurrent underlying disease were included as patients with QMMD in our study. Based on these findings, we speculate that, among patients diagnosed with QMMD according to the current diagnostic criteria, those carrying this gene mutation should be classified as having definitive MMD complicated by a coincidental underlying disease, whose vascular abnormalities are unlikely to be improved by treatment of the underlying disease. In contrast, for patients with QMMD lacking this gene mutation, vascular abnormalities might have occurred due to the underlying disease, and thus, treatment of the underlying disease may be effective in improving QMMD. According to the definitions used in this study, all patients with moyamoya-like vascular abnormalities and any underlying disease received a diagnosis of QMMD, regardless of whether there was a causal relationship between the underlying disease and the vascular abnormalities. 1 Therefore, patients with intracranial vascular abnormalities caused by the underlying disease are considered together with patients who coincidentally have MMD and an independent underlying disease. Thus, under the current criteria, idiopathic MMD is not sufficiently distinguishable from secondary MMD, and QMMD is not inherently the same as secondary MMD. Treatment modalities available for QMMD are similar to those available for MMD; however, certain medical treatments are effective for some patients with underlying diseases such as hyperthyroidism or autoimmune diseases. [2][3][4] Because the underlying diseases affect vascular structures, their treatment can lead to improvement in vascular abnormalities. Therefore, advances in treatment strategies for underlying diseases may lead to more patients with QMMD experiencing improvement as a result of nonsurgical medical treatment in the future. However, there is currently no method to preoperatively identify those patients with a direct causal relationship between an underlying disease and vascular abnormalities; therefore, indications for surgery and the appropriate timing of interventions are not clear. 1 Moreover, although predicting features of the pathological course, such as progression of vascular abnormalities, onset timing, presence or absence of aggravation, and differences in phenotype, is important in patient management, no unified view has yet been derived regarding QMMD because of the diversity of underlying diseases. [7][8][9][10][11] For example, in clinical practice, we found a high prevalence of patients with QMMD and hyperthyroidism whose vascular abnormalities were not improved by the treatment of hyperthyroidism; such patients with QMMD were also found in the current study. 4,9,12 It is inferred that QMMD encompasses both cases of secondary MMD causally related to the underlying disease and cases of idiopathic MMD only weakly associated with it, even when the underlying disease is the same. Thus, distinguishing MMD secondary to the underlying disease is important for treatment selection, in addition to better understanding the disease and its pathophysiology. To better discriminate between MMD and QMMD, genetic diagnosis may be informative.
A missense mutation in RNF213 (c.14576G>A, p.R4859K, rs112735431) is highly correlated with MMD. The mutation is found at a high frequency in patients with MMD belonging to East Asian populations; approximately 80% of Japanese patients with MMD carry this mutation, and it shows a high correlation with clinical phenotypes such as early onset and disease exacerbation. 5 In our study, the RNF213 c.14576G>A mutation was found in 84.2% of patients with MMD, a frequency similar to that found in previous reports. 6,7,13 Furthermore, this finding demonstrates the validity of using the SAG approach by high-resolution melting curve analysis for the identification and genotyping of the RNF213 c.14576G>A mutation in our cohorts. This method is a convenient one-step, single-tube method to detect specific mutations, and it is faster, simpler, and lower in cost compared with other approaches requiring separation or labeled probes. 14 Data from previous reports and the current study are summarized in Table 4. In previous studies, the percentage of patients with QMMD and the RNF213 c.14576G>A mutation varied considerably, as follows: 0% (0/9) in the study by Miyawaki et al., 15 66.7% (12/18) in Morimoto et al., 16 18.7% (3/16) in Phi et al., 17 100% (1/1) in Chong et al., 18 11.9% (5/42) in Zhang et al., 19 and 53.3% (8/15) in Nomura et al. 20 In comparison, we found that 80% (8/10) of our patients with QMMD had the mutation, indicating a high frequency. The average frequency reported in all previous reports was 37.6% (35/93). [15][16][17][18][19][20] In addition, mutation frequency varied according to the underlying disease, as follows: 48% (12/25) of patients had NF1, 48.5% (16/33) had hyperthyroidism, 13.6% (3/22) had arteriosclerosis, 100% (3/3) had rheumatism, 33.3% (1/3) had Down syndrome, and 0% (0/3) had undergone radiotherapy. The differences in these results may be related to the inclusion and exclusion of underlying diseases with a high prevalence, such as hyperthyroidism and arteriosclerosis. Although a consistent definition of QMMD was applied to all studies, those by Phi et al., 17 Chong et al., 18 and Nomura et al. 20 were limited to a single underlying disease; moreover, Morimoto et al. 16 and Miyawaki et al. 15 excluded arteriosclerosis as an underlying disease, whereas Zhang et al. 19 included all underlying diseases. Specifically, patients with QMMD accompanied by hyperthyroidism tended to have a high positive rate of variants. Hyperthyroidism has a high prevalence, which suggests that there are many patients with conventional MMD who coincidentally have concurrent hyperthyroidism. Therefore, the current diagnostic criteria, by which all such patients are classified as having QMMD, do not fully capture the true disease condition with respect to underlying diseases such as hyperthyroidism, and they confound efforts to accurately determine the best treatment strategy and predict therapeutic effects. The considerable variation found among previous reports suggests that patients with MMD truly secondary to an underlying disease and patients with independent, conventional MMD are grouped together under a diagnosis of QMMD. Functional analysis has demonstrated that RNF213 is implicated in angiogenesis; however, the mechanism has not yet been clarified.
In addition, as many gene carriers are patients with atherosclerotic lesions, pulmonary hypertension, or cardiovascular disease, as well as MMD, it is necessary to investigate the association between the RNF213 c.14576G>A mutation and each underlying disease group associated with QMMD in the future. [21][22][23] The RNF213 c.14576G>A mutation is found in 80% of Japanese patients with MMD. In contrast, it has been reported that the mutation is seldom found in patients with MMD from other populations, such as Europeans. 24 The reason for these population-specific differences is not known, and a number of factors related to the RNF213 c.14576G>A mutation, such as its function, distribution, and effect, require additional clarification. Our study demonstrated the inaccuracy of the current definition of QMMD and suggested that the presence or absence of genetic mutations may be helpful for accurately diagnosing QMMD. However, no clinical variables showed significant associations with the presence or absence of gene mutations, as shown in Table 3, warranting further investigations in a larger population of patients. A limitation of this study is that the overall sample size was small due to the low prevalence of the disease, resulting in limited statistical power. 25,26 Thus, large-scale studies in larger populations are warranted. Our next short-term focus is to investigate whether, among patients with QMMD whose underlying diseases are amenable to medical treatment, the effect of that treatment differs based on the presence or absence of the RNF213 c.14576G>A mutation.
Conclusions
Our findings suggest that the RNF213 c.14576G>A mutation might be a useful marker contributing to the accurate diagnosis of QMMD, and that it may, therefore, aid in the selection of appropriate treatment strategies for some patients with QMMD as well as accurate prediction of disease conditions.
Self-reported snoring is associated with nonalcoholic fatty liver disease
Although nonalcoholic fatty liver disease (NAFLD) is associated with obstructive sleep apnea syndrome (OSAS), studies on the direct relationship between NAFLD and snoring, an early symptom of OSAS, are limited. We evaluated whether snorers had a higher risk of developing NAFLD. The study was performed using data from the Tongmei study (cross-sectional survey, 2,153 adults) and the Kailuan study (ongoing prospective cohort, 19,587 adults). In both studies, NAFLD was diagnosed using ultrasound; snoring frequency was determined at baseline and classified as none, occasional (1 or 2 times/week), or habitual (≥3 times/week). Odds ratios (ORs) and hazard ratios (HRs) with 95% confidence intervals were estimated using logistic and Cox models, respectively. During the 10-year follow-up in Kailuan, 4,576 individuals with new-onset NAFLD (each confirmed by at least two positive ultrasound reports) were identified. After adjusting for confounders including physical activity, perceived salt intake, body mass index (BMI), and metabolic syndrome (MetS), multivariate-adjusted ORs and HRs for NAFLD comparing habitual snorers to non-snorers were 1.72 (1.25–2.37) and 1.29 (1.16–1.43), respectively. These associations were greater among lean participants (BMI < 24) and similar across other subgroups (sex, age, MetS, hypertension). Snoring was independently and positively associated with a higher prevalence and incidence of NAFLD, indicating that habitual snoring is a useful predictor of NAFLD, particularly in lean individuals.
Results
In the present study, 2,153 participants (mean age 41.4 years) in the Tongmei study and 19,587 participants (mean age 52.7 years) in the Kailuan cohort were included (Fig. 1). In Tongmei, the prevalence of NAFLD diagnosed via abdominal ultrasound was 29.6% (638/2,153), and those participants were more likely to be men, aged ≥45 years, and to exhibit higher daily total energy intake, snoring, MetS and its components, higher body mass index (BMI), and elevated alanine transaminase (ALT), aspartate aminotransferase (AST), and gamma glutamyl transpeptidase (GGT). During the 10-year follow-up (follow-up rate 84.8%, 21,422/25,268; the calculation is detailed in the Supplementary Information), 4,576 patients with incident NAFLD were identified in Kailuan. Those patients were more likely to be women, aged 45-65 years, not single, physical labourers, and non-smokers, and they were more likely to work on the surface, have high school as their highest education level, engage in sedentary behaviour for <4 hours per day, have moderate or high perceived salt intake, and exhibit habitual snoring, MetS and its components, higher BMI, and elevated ALT, C-reactive protein (CRP), and serum uric acid (SUA) (Tables 1 and S1). Compared with non-snorers, habitual snorers had a higher prevalence and incidence of NAFLD after adjusting for potential confounders; this was not the case for occasional snorers. Consistent results were obtained in sensitivity analyses (Table 2). In Tongmei, habitual snoring was still associated with the prevalence of NAFLD in each stratum after stratifying by sex, age (<45 vs. ≥45 years), workplace (underground vs. surface), occupation type, BMI, MetS, arterial hypertension, and waist circumference (WC); however, the association was only significant in participants who did not have simple overweight or hyperglycaemia, and in those who had hypertriglyceridemia or low high-density lipoprotein cholesterol (HDL-C).
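The ORs and HRs above come from multivariable logistic and Cox models. The sketch below shows how such models could be fit in Python with statsmodels and lifelines; the file names and column names are hypothetical placeholders, and the covariate list is abbreviated relative to the full adjustment set described in the Methods.

```python
# Sketch of the two model types used for NAFLD risk: logistic regression
# for the cross-sectional Tongmei data, Cox regression for the Kailuan
# cohort. All file/column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("tongmei.csv")  # hypothetical file: one row per participant

# Odds ratios for prevalent NAFLD; snoring coded none/occasional/habitual.
logit = smf.logit(
    "nafld ~ C(snoring, Treatment('none')) + age + C(sex) + bmi + mets",
    data=df,
).fit()
print(np.exp(logit.params))      # ORs
print(np.exp(logit.conf_int()))  # 95% CIs

# Hazard ratios for incident NAFLD, with person-time from baseline to
# first diagnosis or last follow-up (as described in the Methods).
cohort = pd.read_csv("kailuan.csv")  # hypothetical file
cph = CoxPHFitter()
cph.fit(cohort[["followup_years", "incident_nafld", "habitual_snorer",
                "age", "male", "bmi", "mets"]],
        duration_col="followup_years", event_col="incident_nafld")
cph.print_summary()  # HRs with 95% CIs, cf. reported HR 1.29 (1.16-1.43)
```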
Obesity, BMI, and hypertriglyceridemia modified the effects of habitual snoring on NAFLD (Fig. 2, Tables 3 and S2). The association between habitual snoring and NAFLD prevalence was stronger among lean participants (BMI < 24 or those with normal BMI and WC) and patients with hypertriglyceridemia. In Kailuan, habitual snoring was still associated with the incidence of NAFLD in each stratum according to sex, age, occupation type, BMI, MetS, arterial hypertension, hypertriglyceridemia, and HDL-C; the association was only significant in participants who were surface workers, who did not have simple central obesity or hyperglycaemia, and who had normal WC. Obesity, BMI, and hyperglycaemia modified the effects of habitual snoring on NAFLD (Fig. 3, Tables 4 and S3). The association between habitual snoring and NAFLD risk was greater among lean participants (BMI < 24 or those with normal BMI and WC) and those who did not exhibit hyperglycaemia. Interestingly, occasional snoring was still not associated with NAFLD after stratification in either population, except in participants aged <45 years and those who were mental (versus manual) labourers in Kailuan. Heterogeneous effects were detected after stratifying by age and occupation type in Kailuan (Figs. 2 and 3; Tables 3, 4, S2 and S3). The association between occasional snoring and NAFLD risk was stronger among young participants (aged <45 years) and mental labourers.
Discussion
In this study, we established an association of self-reported snoring with NAFLD in a sampling-based cross-sectional population, and we validated this association in an independent large prospective cohort. Our findings indicated that self-reported snoring was significantly associated with a higher risk of subsequently developing NAFLD during a 10-year follow-up. These associations were independent of known risk factors for NAFLD such as obesity (defined according to BMI and WC in the present study), MetS and its components, age, smoking, sedentary behaviour, physical inactivity, elevated CRP, elevated SUA, and high salt intake 1,16,17 . To our knowledge, the present study is the first to provide evidence of a direct association between snoring and an increased risk of NAFLD. The association between habitual snoring and NAFLD was greater among lean participants (BMI < 24 or those with normal BMI and WC) in both populations. These results suggest that the presence of elevated BMI (or elevated BMI and WC) may buffer the effects of snoring on NAFLD. This is concordant with previous studies in which strong associations were found between obesity and the increased prevalence and development of snoring 15 . In stratification analyses of two independent populations, habitual snoring was still significantly associated with increased prevalence and incidence of NAFLD in each stratum after stratifying by sex, age, MetS, arterial hypertension, and BMI in both analyses. Interestingly, the risk of NAFLD was significantly associated with habitual snoring in participants who did not have hyperglycaemia and who had normal WC, in both analyses; however, this was not the case in participants who exhibited hyperglycaemia (in both analyses) or elevated WC (in the cohort analysis). These results suggest that hyperglycaemia and elevated WC may be stronger risk factors for the development of NAFLD than habitual snoring. High BMI, elevated WC, the presence of diabetes, and the presence of MetS are well-known primary risk factors for the development of NAFLD that are usually concurrent with NAFLD 1 .
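The effect-modification tables for these analyses report RERI (relative excess risk due to interaction; see the abbreviations accompanying Table 3 below). As a minimal sketch, RERI can be computed from the three non-reference effect estimates, here using ORs as approximations of relative risks; the numbers are illustrative only, chosen to mirror the buffering pattern described above.

```python
# RERI (relative excess risk due to interaction): the departure of the
# joint effect from additivity. or_11 = habitual snoring AND elevated
# BMI, or_10 = snoring only, or_01 = elevated BMI only, each vs. the
# doubly unexposed reference. ORs stand in for relative risks here.
def reri(or_11, or_10, or_01):
    return or_11 - or_10 - or_01 + 1.0

# Illustrative values: a negative RERI means the joint effect is smaller
# than additive, consistent with elevated BMI "buffering" the
# snoring-NAFLD association described above.
print(reri(or_11=2.0, or_10=1.7, or_01=1.8))  # -0.5
```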
Lean NAFLD is also not uncommon; however, it represents a clinical challenge because the diagnosis of NAFLD may be delayed or missed in such cases owing to the absence of the aforementioned common comorbidities 18 . Notably, the results of the present study suggest that habitual snoring may be a useful early indicator of NAFLD even in the absence of common comorbidities. Some inconsistent results pertaining to simple central obesity and simple overweight were obtained in the two study populations. This may be owing to different distributions of body types among these populations. In Tongmei, similar proportions of participants had simple central obesity (12.0%) and simple overweight (12.7%), and the largest proportion of participants (43.1%) exhibited both forms of obesity. In Kailuan, 27.3% of participants had simple overweight and only 7.4% had simple central obesity, and the largest proportion of participants (45.5%) exhibited normal BMI and WC. Furthermore, compared with non-snorers who had normal BMI and WC, the ORs and HRs of NAFLD in participants with elevated BMI and/or elevated WC were dramatically increased. This confirms that controlling weight and WC is very important in the management of NAFLD. The proportions of NAFLD in men and women were inconsistent between the present two populations; this conflict is common, as previously reported 19 . Interestingly, female non-snorers and male snorers had similar risks of NAFLD in both populations. In stratified analyses, however, habitual snoring was consistently and significantly associated with increased risk of NAFLD among both men and women. In a recently reported cross-sectional study, self-reported snoring status was compared with polysomnography results; in that study, women tended to under-report their snoring and men tended to over-report snoring 20 . This apparent sex difference in self-reporting may contribute to the comparatively lower risk in men than in women. Snoring is an early symptom of OSAS 12 , and OSAS has been incorporated in two prediction models of nonalcoholic steatohepatitis in morbidly obese patients, to optimize the selection of patients for liver biopsy 21 . The prevalence of OSAS in the general population is relatively low 30 , however, and the diagnosis of OSAS relies on polysomnography. Habitual snoring is common in the general population and can be easily detected by co-sleepers. Therefore, we speculate that self-reported habitual snoring can be incorporated in prediction models of NAFLD. This association was confirmed in the present cohort study, but needs to be validated in more extensive populations. The mechanisms involved in the association between snoring and NAFLD have not been elucidated, but several explanations for the causal relationship between OSAS and NAFLD have been suggested 22 . Sleep-disordered breathing leads to chronic intermittent hypoxia, which may cause liver injury, lipid deposition, inflammation, and fibrogenesis via activation of hypoxia-inducible factor, nuclear factor kappa-light-chain-enhancer of activated B cells, or the induction of endoplasmic reticulum stress, tissue inflammation, and insulin resistance 22,23 . OSAS increases the number of micro-arousals, the accumulation of which causes sleep fragmentation and reduces its restorative value 24,25 . In a randomized controlled trial, it was concluded that the sound of snoring probably increased the number of micro-arousals 26 .
Collectively, these potential mechanisms may constitute the pathophysiological basis of the association between snoring and increased risk of NAFLD. Liver biopsy is the gold standard for NAFLD diagnosis, but biopsy is not feasible in a large population-based study. Ultrasound is a widely accessible imaging technique for the detection of fatty liver in clinical and population settings owing to its relatively low cost and verified safety 27 . To minimize the effects of misclassification via ultrasound in the present study, two separate analyses were conducted. In one analysis, at least two positive determinations via ultrasound were required to qualify a participant as a new NAFLD case 16 ; in a separate analysis, an alternative definition of "at least one positive report" was used. Similar results were obtained using either definition. In the present study, the presence and frequency of snoring was based on self-reporting that was undoubtedly influenced by input from participants' families, and this may have resulted in under- or over-reporting 15,20 . Notably, snoring can be detected by co-sleepers; detection, quantification, and data acquisition using more objective methods were beyond the scope of the present study, which used data derived from very large population-based cohorts. With regard to future studies, it has been reported that low-cost no-contact or contact microphones that do not affect sleep quality are effective, and acoustic analysis of snoring is now considered a highly accurate diagnostic tool for OSAS relative to polysomnography 28 . Further studies using such methods are encouraged, to confirm the findings of the present study. Effective treatments are also available for snoring, such as low-level continuous positive airway pressure (CPAP), oropharyngeal exercises, oral appliance therapy, and the use of specific types of pillows [29][30][31][32][33] . A recent review indicated that CPAP, the first-line treatment for OSAS, may be beneficial with regard to liver disease in people with OSAS, independent of metabolic risk factors 34 . The two populations included in the current study were occupation-based, so caution should be used in extrapolating the results to more general populations. Compared with population-based studies, the common issue of an unbalanced ratio of men to women existed, because the majority of coal mine staff are men. Interestingly, the sex ratio in Kailuan was close to that in the general population, which was at least partly because alcohol drinkers were excluded from the analysis and the proportion of male drinkers was larger than that of female drinkers. Furthermore, interaction analysis consistently indicated that there were no significant differences in ORs or HRs between men and women. Detailed dietary information and a history of OSAS at baseline were obtained in Tongmei, and high total energy intake was a risk factor for NAFLD in a crude model. Comparable dietary information and OSAS history were not obtained in Kailuan, however, so the two populations could not be compared in this regard. Moreover, individuals with genotype 3 HCV infection were not excluded in this study because only a history of HCV infection was collected in Tongmei and Kailuan; genotype testing was not feasible in these two large population-based studies. However, the prevalence of genotype 3 HCV infection is low in Chinese populations 11,35 .
Lastly, although several potential confounders were adjusted for in the models, because the current investigation was an observational study, the present results may have been affected by additional independent NAFLD risk factors and snoring risk factors that could not be incorporated into the analysis because they were unavailable, such as myopenia measured via body composition, genetic susceptibility, neck circumference, or cranio-facial differences; in addition, anatomical aspects such as single- or multi-level obstruction, muscle tonus, and the length of the upper airway may influence the intensity of snoring 1,15,36 .
Conclusion
Snoring is a common condition that may be associated with the prevalence and 10-year incidence of NAFLD. Habitual snoring may be particularly useful as a low-cost, non-invasive, and convenient predictor of NAFLD, especially in individuals who do not exhibit common comorbidities. Further research investigating the underlying mechanisms involved in the association between snoring and NAFLD is warranted, as are prospective studies investigating the effects of attenuating snoring symptoms on NAFLD.
Methods
The Tongmei and Kailuan studies have been described in detail previously 16,[37][38][39] . Both the Tongmei study and Kailuan study consisted of face-to-face interviews, clinical examinations, and acquisition of laboratory data. These studies were conducted in compliance with the Declaration of Helsinki, and the protocols were reviewed and approved by the Ethics Committees of Shanxi Medical University and Kailuan General Hospital, respectively. Written informed consent was obtained from all study participants before data collection. Exclusion criteria included (1) self-reported alcohol consumption, or missing alcohol consumption history data; (2) liver cirrhosis; (3) presence of diseases such as OSAS, thyroid disease, or cancer; (4) taking a drug that could potentially affect snoring or NAFLD, or long-term use of sedative-hypnotic drugs; and (5) missing ultrasound data or data pertaining to other covariates. In the Kailuan study, we additionally excluded (6) participants with NAFLD at baseline; (7) participants without follow-up data; (8) participants who self-reported drinking during follow-up; and (9) participants with liver cirrhosis during follow-up.
Data collection and definitions.
Blood pressure measurement, anthropometry, overnight fasting blood specimen collection, physical examination, and abdominal ultrasound were performed in the morning by trained and certified nurses, physicians, or experienced radiologists who were blinded to the laboratory findings, in accordance with standard protocols and techniques 40 . In face-to-face interviews, each participant was asked about demographics, lifestyle, nutrition, and physical activity, and participants' medical history was collected via self-administered questionnaires. In the Tongmei study, physical activity level and sedentary behaviour were assessed using the International Physical Activity Questionnaire (IPAQ) 41 , and a validated semi-quantitative food frequency questionnaire was used to obtain data reflecting dietary intake in the past year 42 . Notably, no nutrition survey was involved in the Kailuan baseline survey. Laboratory staff assessed blood biochemical indexes and blood glucose using automatic analysers (Tongmei: SIEMENS ADVIA 1800 at the General Hospital of Datong Coal Mining Group; Kailuan: Hitachi 747 at the Central Laboratory of the Kailuan General Hospital). C-reactive protein and serum uric acid were only tested in Kailuan.
Alcohol consumption was ascertained using a structured questionnaire, including the consumption of beer, wine, and spirits.
Definitions and calculations.
NAFLD was diagnosed by experienced radiologists via abdominal ultrasonography (Tongmei: portable MyLab 30CV, Biosound Esaote; Kailuan: HD-15, Philips) at recruitment in both studies; NAFLD was monitored biennially from 2008 to 2017 in Kailuan. The criteria for determination of NAFLD suggested by the Chinese Liver Disease Association were used 1 as previously described 16 . Alternative causes, such as alcohol consumption and systemic diseases or medications before a diagnosis of NAFLD, were ruled out according to the history of drinking, drug use, and diseases. Owing to the relatively low sensitivity and specificity of ultrasonography for detecting moderate or severe liver steatosis compared with histology 16,27 , NAFLD was defined as positive liver steatosis determined via ultrasonography, and incident NAFLD was defined as patients without NAFLD at baseline and with at least two reports of positive liver steatosis at any time from 2008 to 2017 16 .
Table 3. Effect modification of snoring on NAFLD in Tongmei: OR (95% CI), p value. † Adjusted for age (<45 or ≥45 years), sex, marital status (single, married, divorced/widowed/separated), education (illiterate/primary, junior high school, senior high school or college, bachelor or higher), income (≤4000, >4000-6000, >6000 RMB), workplace (underground/surface), occupation type (mental labour/light physical labour/heavy physical labour), current tobacco smoking (yes, no), perceived salt intake (low, medium, high), degree of IPAQ (low, moderate, high), degree of sedentary behaviour (low, moderate, high), total energy intake per day (low, moderate, high), elevated serum liver enzymes (no/yes), obesity (normal, central, overweight, both), and MetS (no/yes). Abbreviations: CI, confidence interval; OR, odds ratio; HR, hazard ratio; SD, standard deviation; IPAQ, International Physical Activity Questionnaire; MetS, metabolic syndrome; BMI, body mass index; RERI, relative excess risk due to interaction.
For cases of incident NAFLD, person-time of follow-up was calculated from the date of the 2006 survey (baseline) to the date of the first NAFLD diagnosis; for the remainder, person-time of follow-up was calculated from the date of baseline to the date of the last follow-up. Snoring status was self-reported by participants, and was often ascertained with the assistance of family members, in response to the question "Have you ever snored while asleep?" In both studies, there were three response choices for that question: "never", "occasionally (1 or 2 times/week)", and "habitually (≥3 times/week)". MetS was diagnosed based on the presence of any three of the following five factors 1 : (1) elevated waist circumference: waist circumference >90 cm in men and >85 cm in women; (2) arterial hypertension: arterial blood pressure ≥130/85 mmHg or on antihypertensive therapy; (3) hypertriglyceridemia: fasting serum triglycerides ≥1.7 mmol/L or on lipid-lowering medication; (4) low HDL-C: fasting serum HDL-C < 1.0 mmol/L in men or <1.3 mmol/L in women; and (5) hyperglycaemia: fasting serum glucose ≥5.6 mmol/L or a history of type 2 diabetes mellitus.
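The MetS definition above is a simple any-three-of-five rule; the sketch below transcribes the stated cutoffs directly, with argument names that are hypothetical placeholders for the corresponding study variables.

```python
# Metabolic syndrome per the five criteria above (any three of five).
# Field names are hypothetical placeholders.
def has_mets(sex, wc_cm, sbp, dbp, on_bp_meds, tg_mmol, on_lipid_meds,
             hdl_mmol, glucose_mmol, has_t2dm):
    criteria = [
        wc_cm > (90 if sex == "M" else 85),             # elevated WC
        sbp >= 130 or dbp >= 85 or on_bp_meds,          # arterial hypertension
        tg_mmol >= 1.7 or on_lipid_meds,                # hypertriglyceridemia
        hdl_mmol < (1.0 if sex == "M" else 1.3),        # low HDL-C
        glucose_mmol >= 5.6 or has_t2dm,                # hyperglycaemia
    ]
    return sum(criteria) >= 3

# Example: a man meeting the WC, triglyceride, HDL-C, and glucose criteria.
print(has_mets("M", 95, 128, 80, False, 1.9, False, 0.9, 5.8, False))  # True
```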
Obesity was defined based on both BMI and WC, and included four categories: normal (normal BMI and WC), simple central obesity (normal BMI and elevated WC), simple overweight (elevated BMI and normal WC), and both forms of obesity (elevated BMI and WC). Physical activity and sedentary behaviour were defined as low, moderate, or high in accordance with IPAQ guidelines 41 . Total energy intake per day was calculated based on China Food Composition 30 , and categorized according to tertiles. In the Kailuan study, physical activity and sedentary behaviour were evaluated based on answers to questions pertaining to the frequency of physical activity and the duration of sedentary behaviour, respectively. Salt intake was self-reported as low, medium, or high, as described previously 16 . Elevated serum liver enzymes were defined as any of ALT, AST, or GGT above the upper normal limit (40, 45, and 58 U/L, respectively). Elevated SUA was defined as >420 μmol/L in men and >357 μmol/L in women. Current smokers were those who had smoked at least one cigarette per month during the past year.
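Likewise, the four obesity categories can be assigned mechanically from BMI and WC. The sketch below assumes BMI ≥ 24 as "elevated" (matching the lean cutoff of BMI < 24 used in the analyses) and reuses the sex-specific WC cutoffs from the MetS definition; neither assumption is stated explicitly in this section.

```python
# The four obesity categories above, from BMI and waist circumference.
# Assumed cutoffs: BMI >= 24 as "elevated" (consistent with the lean
# cutoff of BMI < 24) and the sex-specific WC cutoffs from the MetS
# definition (>90 cm men, >85 cm women).
def obesity_category(sex, bmi, wc_cm):
    high_bmi = bmi >= 24
    high_wc = wc_cm > (90 if sex == "M" else 85)
    if high_bmi and high_wc:
        return "both forms of obesity"
    if high_bmi:
        return "simple overweight"
    if high_wc:
        return "simple central obesity"
    return "normal"

print(obesity_category("F", 22.5, 88))  # simple central obesity
```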
Circular RNA identified from Peg3 and Igf2r
Circular RNA is a newly discovered class of non-coding RNA generated through the back-splicing of linear pre-mRNA. In the current study, we characterized two circular RNAs that had been identified through NGS-based 5'RACE experiments. According to the results, the Peg3 locus contains a 214-nucleotide-long circular RNA, circPeg3, that is detected in low abundance in the neonatal brain, lung and ovary. In contrast, the Igf2r locus contains a group of highly abundant circular RNAs, circIgf2r, showing multiple forms with various exon combinations. In both cases, the expression patterns of circPeg3 and circIgf2r among individual tissues are quite different from those of their linear mRNA counterparts. This suggests potential unique roles played by the identified circular RNAs. Overall, this study reports the identification of novel circular RNAs specific to mammalian imprinted loci, suggesting that circular RNAs are likely involved in the function and regulation of imprinted genes.
Introduction
Circular RNA is a newly discovered class of non-coding RNAs that are produced through the back-splicing of linear pre-mRNA [1,2]. In back-splicing, the splicing acceptor site of an upstream exon is joined to the splicing donor site of its downstream exon. In eukaryotic genes, the exons localized on the 5'-side tend to be included in circular RNA more frequently than those on the 3'-side. In particular, the 2nd exon is the exon most frequently included as part of circular RNA [3,4]. Circular RNA is very stable due to its unusual circular structure, which lacks the 5' cap and 3' Poly-A tail [3]. As a consequence, circular RNA detection had been elusive until recent advancements in high-throughput sequencing, although some circular RNAs are quite ubiquitous and abundant in vivo [1,2,5]. Since their initial discovery in RNA viruses, recent studies indicate that circular RNAs are well conserved across mammals, ranging from mice and pigs to humans [1,2]. In terms of physiological roles, circular RNAs are closely associated with various diseases, particularly cancers, Alzheimer's disease, other neurological diseases, and diabetes. Thus, many circular RNAs have been recently recognized as biomarkers with potential for clinical diagnosis and as therapeutic targets [6][7][8]. In some cases, circular RNA has been shown to function as a molecular sponge to remove microRNAs as a means of regulating transcription [9,10]. Besides these known functions, circular RNAs are predicted to be involved in many biological processes, including brain development, cellular stress, and aging [11,12]. Nevertheless, the detailed mechanisms by which circular RNAs are involved in these processes are currently unknown. In mammalian genomes, a subset of genes are expressed only from one allele due to an epigenetic mechanism termed genomic imprinting, by which one allele is usually repressed by DNA methylation and histone modifications [13,14]. Imprinted genes tend to be clustered in specific regions of chromosomes, forming imprinted domains. The imprinting (mono-allelic expression) of several genes in a given domain is controlled through small genomic regions, termed Imprinting Control Regions (ICRs) [13,14]. ICRs obtain allele-specific DNA methylation during gametogenesis, which is then maintained throughout the lifetime after fertilization [13,14]. Many cis-regulatory elements are involved in the imprinting control of a given domain.
In particular, alternative promoters located upstream of ICRs are known to be involved in establishing gametic DNA methylation on ICRs [15,16]. To identify these alternative promoters for the Peg3 domain, we previously performed several sets of Next Generation Sequencing (NGS)-based Rapid Amplification of cDNA Ends (RACE) experiments [17,18]. Indeed, one of the identified alternative promoters, termed U1, is involved in establishing DNA methylation on the ICR of the Peg3 domain [19,20]. While analyzing the sequence data from the 5'RACE experiments, we serendipitously identified unusual circularized RNA transcripts that had been derived from several imprinted genes. In the current study, we have further characterized these potential circular RNAs with a series of RT-PCR experiments. According to the results, two circular RNAs, circPeg3 and circIgf2r, identified from the Peg3 and Igf2r imprinted loci, respectively, appeared to be genuine in vivo transcripts. Also, the expression patterns of these circular RNAs were quite distinct from those of their linear mRNA counterparts, suggesting potential unique roles for circular RNAs in genomic imprinting.
Circular RNA identified from 5'RACE experiments
Several sets of Next Generation Sequencing (NGS)-based Rapid Amplification of cDNA Ends (RACE) experiments were performed as part of ongoing efforts to identify upstream alternative promoters for several imprinted genes (Fig 1). For this series of experiments, total RNA was isolated from various mouse tissues, including hypothalamus, neonatal brain, ovary and testis. The isolated total RNA was first reverse-transcribed with gene-specific primers that had been derived from the 2nd exons of the imprinted genes Snrpn, Zac1, Gtl2, Dlk1, Igf2r, and Peg3, and of non-imprinted Myc, which served as a control. These cDNAs were further processed with G-tailing followed by nested PCR amplification for NGS runs. We obtained a total of 2.5 million reads for this set of genes (S1 File). Initial inspection indicated that the majority of reads were derived from three categories of transcripts: normally spliced transcripts with Exon1 (E1) and Exon2 (E2), unspliced transcripts, and alternative transcripts starting from upstream alternative exons/promoters (U1) (Fig 1). Detailed inspection also revealed a fraction of raw reads with unusual exon combinations. The sequences from the 2nd exons of several genes were connected to the sequence of the 2nd exon itself through its 3'-side, or to its downstream exons, forming potential circular RNAs via back-splicing events. The genes with these potential circular RNAs include Peg3, Dlk1, and Igf2r. In the case of Peg3 and Dlk1, the circularized transcripts also contain previously unknown small exons that are derived from the 1st intron of each gene (S2 File). Interestingly, these exons are not included as part of the linear mRNA, but are unique to the circular RNAs, and were thus named circular RNA-specific exons (circE). In the case of Igf2r, multiple forms of circular RNAs were detected, showing various exon combinations involving Exon 2 through 12. In terms of abundance, the sequence reads corresponding to the two circular RNAs, termed circPeg3 and circDlk1, accounted for less than 1% of the total number of raw reads. On the other hand, the circular RNAs from the Igf2r locus, termed circIgf2r, accounted for about 20 to 50% of the total reads derived from each of the four tissue libraries, suggesting that circIgf2r may be a predominant group of transcripts in vivo.
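The signature of these back-spliced reads is a junction in which the 3' (donor) end of a downstream exon runs directly into the 5' (acceptor) start of an upstream exon, an order impossible on the linear transcript. A minimal sketch of how such junction-spanning reads can be flagged is shown below; the sequences are invented placeholders, not the real Peg3 exons.

```python
# Sketch of back-splice junction detection in RACE reads.
def backsplice_junction(donor_exon, acceptor_exon, k=12):
    """Junction k-mer created when the 3' (donor) end of a downstream
    exon is back-spliced onto the 5' (acceptor) start of an upstream exon."""
    return donor_exon[-k:] + acceptor_exon[:k]

def find_backspliced_reads(reads, donor_exon, acceptor_exon, k=12):
    junction = backsplice_junction(donor_exon, acceptor_exon, k)
    return [r for r in reads if junction in r]

exon2 = "ATGGCTAGCTAGGATCCGATCGTT"   # placeholder for the Peg3 2nd exon
circE = "GGCATTACCGGTTAAGCTTGGCAA"   # placeholder for the circE exon
reads = [
    exon2 + circE,   # runs off the 3' end of exon 2 into circE: back-spliced
    circE + exon2,   # normal genomic/splice order: not back-spliced
]
print(find_backspliced_reads(reads, exon2, circE))  # only the first read
```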
Circular RNA from the Peg3 locus
According to the results, the predicted circular RNA from the Peg3 locus, circPeg3, is 214 nucleotides (nt) long, and is made of two exons: the 2nd exon of Peg3 and the circE exon, which are 85 and 129 nt in length, respectively (Fig 2). The circE exon is localized 1.6 kb upstream of the 2nd exon of Peg3 (chr7: 6,680,855-6,680,983 in mm9). This genomic region is still part of the 4-kb Peg3-DMR (Differentially Methylated Region), an Imprinting Control Region for the Peg3 imprinted domain. This DMR is also known to have many YY1 binding sites. In fact, the 129-nt-long circE exon also contains one YY1 binding site (S2 File). The predicted circPeg3 was further confirmed through a nested RT-PCR scheme involving two sets of primers with divergent orientation, which were derived from the circE exon (R1/F1 and R2/F2 in Fig 2B). For this test, we used a panel of cDNAs that had been prepared from total RNA isolated from various mouse tissues (Fig 2A). As expected, high levels of Peg3 expression were detected from 14.5-dpc (days postcoitum) embryo and placenta as well as neonatal brain [21][22][23]. Medium to low levels of expression were also detected from the remaining tissues (the third panel in Fig 2A). In contrast, low levels of circPeg3 were detected from neonatal brain, lung, and ovary after two rounds of nested PCR, indicating that the expression levels of circPeg3 are overall very low in the observed tissues. This is also consistent with the small number of sequence reads corresponding to circPeg3 detected in the initial NGS runs, which accounted for less than 1% of the total reads. We repeated this series of RT-PCR experiments with another set of biological replicates, successfully detecting low levels of circPeg3 from various tissues, including neonatal brain, thymus, lung, liver, kidney, fat, and ovary (the second panel in Fig 2A). It is salient to note that there was no correlation between the expression levels of Peg3 and the detection of circPeg3 among the tested tissues. Overall, this series of RT-PCR analyses confirmed the presence of low-abundance circPeg3 in various tissues in vivo.
Formation of circPeg3 in various mutant alleles
We further tested the formation of circPeg3 in several mutant alleles targeting the Peg3 locus (Fig 3). The 4-kb genomic interval encompassing the bidirectional promoter for Peg3 and Usp29 has been hypothesized to be an ICR for this imprinted domain, and thus this region has been targeted multiple times through mouse knockout experiments. One of the mutant alleles, termed KO2, contains a deletion of this 4-kb genomic region [21]. Since the circE exon is also localized within the same deleted region, we tested the formation of circPeg3 in mutant animals with either paternal or maternal transmission of the KO2 allele (lanes 3 and 4 in Fig 3). The results indicated that circPeg3 was detected in the mutant with maternal transmission but not with paternal transmission of the KO2 allele, confirming that circPeg3 originates from the paternal allele of Peg3. This also agrees with the paternal-specific expression of the linear mRNA counterpart of Peg3 [22,24,25]. We further tested the formation of circPeg3 in additional mutant animals. The CoKO allele contains a 7-kb insertion of an expression cassette containing the β-galactosidase and neomycin resistance genes at the 5th intron of the Peg3 locus (Fig 3B).
In this mutant allele, the transcription of Peg3 becomes truncated due to the two Poly-A signals included in this expression cassette [26,27]. circPeg3 was not detected in the animals with paternal transmission of the CoKO allele (lane 5), thus suggesting that the formation of circPeg3 may require transcription of the entire length of the Peg3 locus. The DelKO allele contains a deletion of the 1-kb genomic interval encompassing Exon 6 of Peg3, yet the transcription and subsequent splicing of this mutant allele have been shown to be normal in previous studies [27]. circPeg3 was also detected in the animals with paternal transmission of DelKO (lane 6), confirming that the deletion of Exon 6 has no obvious effect on the formation of circPeg3. Finally, the 4-kb genomic interval of the Peg3-DMR was inverted in one mutant allele, termed Invert [28]. In this mutant allele, the promoters and 1st exons are exchanged between Peg3 and Usp29. The circE exon is also part of the inverted region, and is thus localized in the direction of Usp29 in this mutant allele. According to the previous study, the transcription and splicing of both genes occur normally in this mutant, but as fusion transcripts [28]. The 1st exon of Peg3 is now connected to the 2nd exon of Usp29, while the 1st exon of Usp29 is connected to the 2nd exon of Peg3. Nevertheless, circPeg3 was not detectable in the animals with paternal transmission of the inverted allele (lane 7 in Fig 3), thus suggesting that the formation of circPeg3 may require both the circE and 2nd exons of Peg3. Overall, this series of analyses concluded that circPeg3 originates from the paternal allele, and further that the formation of circPeg3 requires transcription of the entire Peg3 locus as well as the 2nd exon of Peg3.
Circular RNA from the Igf2r locus
The potential circular RNA from the Igf2r locus, termed circIgf2r, was also characterized in a similar way as described above (Fig 4). According to the results from the NGS runs, multiple forms of circIgf2r likely exist with various exon combinations. Also, circIgf2r is likely a group of major transcripts, based on the observation that the sequence reads corresponding to circIgf2r accounted for 20 to 50% of the total reads in a given library (S1 File). In this case, a slightly modified nested PCR scheme was employed to detect the multiple forms of circIgf2r. First, an initial RT-PCR was performed with a set of divergent primers that were derived from the 2nd exon of Igf2r. As demonstrated for circPeg3 (Fig 2), the same cDNA panel was also used for circIgf2r detection. This initial PCR amplified a large number of PCR products that were readily detectable from multiple tissues (Fig 4A). Yet, the sizes of these PCR products were discrete but not contiguous, showing a difference of about 100 bp between two adjacent PCR products. The Igf2r locus is made of 48 exons, with each exon averaging 100 bp in length, particularly from Exon 2 through 10 (Fig 4B). Thus, this may be an indication that this initial PCR amplified multiple forms of circIgf2r with different exon combinations. To test this possibility, we performed the 2nd nested PCR with a slightly modified scheme. In this scheme, the first primer targeted the 2nd exon of Igf2r in the reverse direction (E2R3), while a set of second primers was designed from individual exons in the forward direction. According to the results from the NGS runs, Exon 2 was often connected to Exons 4, 6, 7 and 10.
Thus, a set of four primers, including E4F1, E6F1, E7F1, and E10F1, were designed and used individually with the E2R3 primer to detect the corresponding circular RNA (Fig 5). PCR with the three combinations of primers other than the E2R3-E10F1 combination successfully amplified the target products based on their expected sizes, ranging from 118 to 141 bp in length. The shortest form of circIgf2r, E2-E3-E4, was detected from heart and ovary (top panel), whereas the longest form, E2-E3-E4-E5-E6-E7, was detected from 14.5-dpc embryos, neonatal brain, thymus, kidney, and ovary (bottom panel). This series of PCR also detected another form, E2-E3-E4-E5, in several tissues, although this particular form had not been previously observed in the NGS runs. The two forms, E2-through-E5 and E2-through-E7, seemed to be detected more frequently than the other two, E2-through-E4 and E2-through-E6, in the tissues examined so far (Fig 5B). This series of analyses was repeated with another set of biological replicates (S3 File). Ten out of the 15 detected cases were reproducible between these two biological replicates. Overall, this series of analyses confirmed that circIgf2r exists as multiple forms with various exon combinations in vivo.
Discussion
In the current study, two circular RNAs, circPeg3 and circIgf2r, identified from the Peg3 and Igf2r imprinted loci, were further characterized by a series of RT-PCR analyses. circPeg3 is a relatively low-abundance transcript detected in various mouse tissues. In contrast, circIgf2r represents a group of highly abundant transcripts displaying multiple forms with various exon combinations. According to the results, the expression patterns of these two circular RNAs among individual tissues were quite different from the expression patterns of their linear mRNA counterparts, suggesting potential unique roles played by these two circular RNAs. The experimental strategy used for the current study, NGS-based 5'RACE, appeared to be quite successful for identifying circular RNA (Fig 1). Three potential circular RNAs were identified from the seven genes tested so far, and two of these turned out to be genuine in vivo transcripts according to the results of RT-PCR analyses (Figs 3 and 5). This success may be attributable to several factors. First, using unpurified total RNA rather than purified mRNA may have increased the chance of identifying circular RNA, given the fact that circular RNAs lack Poly-A tails [3]. This is also likely one of the main reasons why we previously failed to detect highly abundant circular RNAs despite numerous trials of RNA-seq experiments. Second, reverse transcription starting from the 2nd exons of individual genes may have also increased the odds of identifying circular RNA, since the 2nd exons of individual genes are most frequently involved in the formation of circular RNAs [3,4]. Finally, recent advancements in NGS-based sequencing approaches have definitely increased the chances of finding low-abundance circular RNA, as demonstrated through the successful identification of circPeg3. Overall, although serendipitous rather than planned, the current approach turned out to be an effective approach for identifying circular RNA. The expression patterns of the two circular RNAs, circPeg3 and circIgf2r, are quite different from those of their linear mRNA counterparts among the tested tissues.
In the case of the Peg3 locus, circPeg3 was detected in neonatal brain, lung, and ovary, although the expression levels of the linear mRNA counterpart, the Peg3 transcript, were very low in these tissues, especially in ovary and lung (Fig 2). This is also the case for the Igf2r locus. In this case, the expression of each form of circIgf2r appeared to be unique to each tissue. For instance, the frequent forms detected in the ovary were the E2-through-E4 and E2-through-E7 exon combinations, whereas the frequent form in lung, kidney, and testis was the E2-through-E5 combination (Fig 5). It is interesting to note that none of the tissues examined so far have all four forms. Thus, it is unlikely that these various forms of circIgf2r represent erroneous byproducts generated during the splicing process of the linear pre-mRNA. In a similar context, it is also relevant to note that the small exon specific to circular RNA, circE in the case of circPeg3, contains a DNA-binding site for YY1. Since YY1 binding sites are quite ubiquitous in mammalian genomes [29], it is reasonable to predict that some small RNAs, such as miRNAs or piRNAs, may also contain YY1 binding sites. In this case, this binding site within circPeg3 could function as a bait to attract these miRNAs, thus rendering circPeg3 a potential miRNA sponge [9,10]. It is currently unknown whether a circular RNA plays functions similar to those of its corresponding host gene. Nevertheless, it is relevant to note that both imprinted genes, Peg3 and Igf2r, play significant roles in mammalian reproduction [1,2]. Also, Peg3 is a well-known tumor suppressor, and its promoter region is usually hypermethylated in patients with ovarian and breast cancers [25]. The expression levels of Peg3 tend to be very low in dividing cells, such as stem cells, whereas they tend to be very high in differentiated cells, such as muscle cells and neurons. These patterns were also demonstrated in the results of RT-PCR (the third panel in Fig 2A), showing high expression levels in neonatal brains versus low expression levels in thymus, lung and liver. Interestingly, the detection of circPeg3 was more obvious and consistent in the tissues with low expression levels of Peg3. Thus, this may be an indication that circPeg3 might play roles opposing those of Peg3, promoting cell division. If this is the case, characterizing the precise roles of circPeg3 should be of great interest in the near future. In conclusion, the two circular RNAs identified from the mouse Peg3 and Igf2r loci are genuine in vivo transcripts.
Ethics statement
All the experiments related to mice were performed in accordance with National Institutes of Health guidelines for the care and use of animals, and were also approved by the Louisiana State University Institutional Animal Care and Use Committee (IACUC), protocol #16-060.
NGS-based 5'RACE experiments
Tissues were collected from hypothalamus, testis, liver, heart, and kidney of an adult male mouse (WT); ovaries were collected from an adult female (WT); a whole head was used from a one-day-old neonate (WT). The tissues were subjected to total RNA isolation using the Trizol RNA isolation kit (Invitrogen). The resulting total RNA (2.5-5 μg) was mixed with gene-specific primers corresponding to the second exons of Peg3, Gtl2, Dlk1, Igf2r, Snrpn, Zac1, and Myc (S4 File), and reverse-transcribed using the M-MuLV reverse transcriptase (New England Biolabs, Cat. No. M0253S).
The cDNA products were purified using phenol/chloroform extraction and ethanol precipitation. The 3′ ends of the purified cDNA were further modified by a tailing reaction using dGTP and terminal deoxynucleotidyl transferase according to the manufacturer's protocol (New England Biolabs, Cat. No. M0315S). The tailed cDNA was amplified using the tail-long primer (5′-GGTTGTGAGCTCTTCTAGATCCCCCCCCCCCCNN-3′) and internal gene-specific primers to check for quality (S4 File) [17,18]. The amplified cDNA was re-amplified with a set of nested primers, the tail-out primer (5′-GGTTGTGAGCTCTTCTAGA-3′) and additional internal gene-specific primers, to increase the possibility of detecting low-abundance transcripts. The PCR products were further purified, multiplexed, and sequenced according to a next generation sequencing (NGS) protocol [17,18]. Additional RT-PCR reactions were also performed to monitor the quality of the cDNA and the specificity of the tailing reaction, using gene-specific forward and reverse primers derived from the 1st and 2nd exons, respectively (S4 File). The first round of NGS-based 5'RACE included the hypothalamus, neonate head, ovary, and testis tissues to identify potential alternative promoters for Snrpn, Zac1, Gtl2, Dlk1, Igf2r, and Myc. The second round of NGS-based 5'RACE included the testis, ovary, heart, and kidney tissues to identify potential alternative promoters for Peg3 (S1 File). The results from these NGS runs have been deposited in the SRA database (SRA Accession No. SRP156941).
RT-PCR
Several sets of cDNA panels were generated using total RNA isolated from various mouse tissues. In brief, the total RNA was isolated from each tissue using the Trizol RNA isolation kit (Invitrogen). The isolated total RNA (2.5-5 μg) was mixed with random hexamers, and subsequently reverse-transcribed using the M-MuLV reverse transcriptase (New England Biolabs, Cat. No. M0253S). The resulting cDNAs were used as templates for detecting circular RNAs and linear mRNA transcripts for each imprinted gene. Nested PCR schemes were used for the detection of circular RNA with the following PCR conditions. The initial PCR for circular RNAs was performed with the following parameters: 35 cycles of 30 seconds at 95°C, 30 seconds at 60°C, and 30 seconds at 72°C; the annealing temperature for circIgf2r was 60°C. One μl of the initial PCR products was diluted 10-fold with water, and the diluted PCR products were subsequently used as templates for the second PCR with the following parameters: 20 to 25 cycles of 30 seconds at 95°C, 30 seconds at 62°C, and 30 seconds at 72°C. We also performed a set of control experiments with the AMV reverse transcriptase (New England Biolabs, Cat. No. M0277S) to ascertain that the RT-PCR products obtained through the M-MuLV reverse transcriptase were derived from genuine circular RNAs (S5 File). The sequences of the primers used for these PCRs are available (S2 and S4 Files).
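The divergent-primer logic used throughout these RT-PCR experiments can also be illustrated computationally: primers that point away from each other on the linear genome can only yield a product when the template is circular, because the "outward" paths reconnect across the back-splice junction. The sketch below simulates this by doubling the circular sequence; all sequences and primers are invented placeholders.

```python
# Why divergent primers detect circles: doubling the circular sequence
# converts the wrap-around amplicon into an ordinary substring search.
def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def pcr_product_size(template, fwd, rev, circular=False):
    """Return the amplicon size for primer pair (fwd, rev), or None.
    rev is given in primer orientation, i.e., it matches the reverse
    complement of the template strand."""
    search_space = template + template if circular else template
    i = search_space.find(fwd)
    j = search_space.find(revcomp(rev), i + len(fwd))
    if i == -1 or j == -1:
        return None
    size = j + len(rev) - i
    return size if (not circular or size <= len(template)) else None

circ = "ATGGCTAGCTAGGATCCGATCGTTGGCATTACCGGTTAAGCTTGGCAA"  # exon2+circE joined
fwd = "GGTTAAGCTTGG"             # points toward the back-splice junction
rev = revcomp("ATGGCTAGCTAG")    # points back the other way: divergent pair

print(pcr_product_size(circ, fwd, rev, circular=False))  # None: linear template
print(pcr_product_size(circ, fwd, rev, circular=True))   # 27: spans the junction
```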
Psychometric properties of a brief version of the COVID‐19 Stress Scales (CSS‐B) in young adult undergraduates
Abstract
We extracted items to create a brief version of the COVID‐19 Stress Scales (i.e., CSS‐B) and examined its psychometric properties in young adults. A sample of 1318 first‐ and second‐year undergraduates from five Canadian universities (mean [SD] age = 19.27 [1.35] years; 77.6% women) completed an online cross‐sectional survey that included the CSS‐B as well as validated measures of anxiety and depression. The 18‐item CSS‐B fit well on both a 5‐factor and a hierarchical model, indicating that the five CSS‐B dimensions may be factors of the same over‐arching construct. The CSS‐B factor structure displayed lower‐order and higher‐order configural and metric invariance across sites but not scalar invariance, indicating that the intercepts/means were not consistent across sites. The CSS dimensions were positively related to measures of general anxiety and depression, but not so strongly as to indicate that they are measuring the same construct. The CSS‐B is a valid measure of COVID‐19 stress among young adults. It is recommended that this shorter version of the scale be considered for use in longer surveys to avoid participant fatigue.
and each subscale showed good internal consistency (i.e., Cronbach's α > 0.80; Taylor et al., 2020b). The five dimensions are all positively correlated with one another, with relatively strong correlations (i.e., 0.41-0.73; Taylor et al., 2020b). As such, Taylor et al. (2020a) deemed a total sum score of the entire scale a useful measure of overall pandemic-related distress, in addition to the specific subscale scores. The CSS has since been translated into at least three languages (Arabic: Abbady et al., 2021; Mahamid et al., 2021; Persian: Khosravani et al., 2021; Turkish: Demirgöz Bal et al., 2021). The scale's original 5-factor structure has been shown to hold in a general sample of Palestinian adults with a narrower age range than the original scale development sample (Mahamid et al., 2021), a Persian sample with anxiety disorders and obsessive-compulsive disorders (Khosravani et al., 2021), and Egyptian and Saudi university students (17-36 years old; Abbady et al., 2021). Research to date has shown that the CSS dimensions are positively related to general anxiety, depression, and stress/distress (Khosravani et al., 2021; Mahamid et al., 2021; Taylor et al., 2020b). The CSS dimensions were also positively related to other COVID-19 distress measures, specifically Ahorsu et al.'s (2020) Fear of Coronavirus-19 Scale (FCV-19S; Khosravani et al., 2021; Mahamid et al., 2021) and Arpaci et al.'s (2020) COVID-19 Phobia Scale (C19P-S; Khosravani et al., 2021). Total CSS scores have been linked to negative thoughts and emotions during social isolation (e.g., stressed, bored, sad, lonely) and coping behaviours during social isolation (e.g., online shopping, increased eating, increased alcohol consumption, seeking medical help online; Taylor et al., 2020a). Taylor et al. (2020b) also found that CSS dimensions were positively linked to retrospective ratings of obsessive-compulsive checking and contamination symptoms pre-pandemic. Similarly, Khosravani et al. (2021) found that the CSS was related to measures of obsessive-compulsive disorder (OCD) symptoms and anxiety disorder symptoms in a sample of those with OCD or anxiety disorders.
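The internal-consistency figures quoted here and later in the paper (e.g., α > 0.80 for the original subscales; 0.90 and 0.89 for the GAD-7 and PHQ-9 in this sample) are Cronbach's alpha values. A minimal sketch of the computation, on simulated 0-4 item responses, is given below; it illustrates the formula only and is not the authors' analysis code.

```python
# Cronbach's alpha for a subscale: alpha = k/(k-1) * (1 - sum(item
# variances) / variance of the total score). `items` is a participants
# x items array of 0-4 responses (simulated here).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))  # shared construct driving all items
items = np.clip(np.round(2 + latent + rng.normal(scale=0.8, size=(500, 3))), 0, 4)
print(round(cronbach_alpha(items), 2))  # roughly 0.8 for these correlated items
```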
Young adults have experienced considerable stressors since the onset of the pandemic, including disruptions to education and employment opportunities, as well as to key rites of passage such as graduation. In fact, COVID stress (as measured via the overall CSS) positively predicted future career anxiety in a sample of final-year college students (Rahmadani & Sahrani, 2021). Many mental health issues first onset during young adulthood (e.g., depression; Klein et al., 2013), and the pandemic has enhanced this vulnerability (Lopez-Nunez et al., 2021; Qian & Yahara, 2020). A review of risk factors for psychological symptoms during the pandemic revealed that young adulthood and student status were two important risk factors (Xiong et al., 2020). As a result, assessing COVID-19 related distress among this group is particularly important. While the 36-item CSS is clearly a psychometrically sound measure that contains appropriate coverage of the various domains of distress involved in the conceptualization of the COVID Stress Syndrome, it does have one important disadvantage: its relatively long length makes it unfeasible for use in shorter surveys. If a shorter version could be developed that continued to tap the five main domains of the COVID Stress Syndrome in young adult undergraduates, it could be readily incorporated into university student surveys. This would allow for tracking of the reduction or maintenance of students' pandemic-related distress over time, and permit comparison of distress levels across institutions in regions with differing infection rates and restrictiveness of containment strategies. We developed a brief version of the original CSS ("the Brief CSS" or "CSS-B") and then examined its psychometric properties (structural validity, internal consistency, and convergent validity in terms of its association with general mental health measures) in a multi-site sample of young adult university students. As Taylor et al. (2020a) stated that a total sum score could be used for the full scale, both a lower-order and a higher-order (hierarchical) model will be tested. Further, given the considerable variability in infection rates and public health protocols across provinces and municipalities, we also assessed whether the CSS psychometric properties hold across five Canadian post-secondary institutions.
| Participants
One thousand three hundred and eighteen participants from five Canadian universities completed an online cross-sectional survey. Sites 1 and 2 are located in Nova Scotia, Site 3 is in Ontario, Site 4 in British Columbia, and Site 5 in Quebec. All study sites are in major cities except for Site 2. The majority of participants were female (79.4%), while 20.5% were male (0.2% did not respond). Similarly, the majority of participants identified as a woman (77.6%), while 20.1% identified as a man, 0.2% as trans, 1.5% as non-binary, and 0.3% as other (0.3% did not respond). They were recruited through multiple means, including direct email, social media advertisements, and the SONA participant recruitment system. Data were collected between February and April 2021. Participants were required to be in either their first or second year of undergraduate study and be 18-25 years of age. Participants were compensated with either bonus points towards one of their psychology courses or Amazon gift cards. Participation was entirely voluntary and was not a requirement for any course or programme of study. Participants' mean age was 19.27 years (SD = 1.35).
The majority of participants were full-time students (86.5%) and White (66.9%). There was an approximately even split between the first (53.4%) and second (46.6%) year of study.
| Measures and procedure
Following informed consent, participants answered questions related to their general mental well-being, behaviours and experiences during the COVID-19 pandemic, and demographic questions. Research Ethics Board (REB) approval was received from each university study site. Distress during the pandemic (in the last 30 days) was measured using 18 of the 36 items from the COVID-19 Stress Scales (CSS; Taylor et al., 2020b). The CSS measures five factors: (1) COVID danger and contamination fears, (2) COVID fears about economic consequences, (3) COVID xenophobia, (4) COVID compulsive checking and reassurance seeking, and (5) COVID traumatic stress symptoms. Participants were asked to report on the various kinds of worries they experienced related to COVID-19 since returning to class in the winter term. The full scale uses 6 items per factor (except COVID danger and contamination fears, which has 12 items), while the current study used the top 3 items (6 for COVID danger and contamination fears) that showed the strongest loadings on each factor in the Taylor et al. (2020b) factor analytic solution of the original CSS. For details on which items were used, see the CFA factor loadings in Table 2. As with the original CSS, response options ranged from 0 (not at all) to 4 (extremely). Participants' general mental well-being was assessed through two measures: the 7-item Generalized Anxiety Disorder scale (GAD-7; Spitzer et al., 2007) and the 9-item Patient Health Questionnaire (PHQ-9; Kroenke et al., 2001). The GAD measures (generalized) anxiety while the PHQ measures depression. Participants were asked to rate how often they were bothered by each of the symptoms in the last 30 days on a scale from 1 (not at all) to 4 (nearly every day). Items were summed to yield total scores for the GAD and PHQ. The current study yielded Cronbach's alphas of 0.90 for the GAD and 0.89 for the PHQ.
| Confirmatory factor analysis
Confirmatory factor analysis (CFA) was conducted to confirm the scale's factor structure. All model tests were based on the covariance matrix and used ML estimation as implemented in Mplus 7.4. A 1-factor model (i.e., all 18 items loading on a single factor), a 5-factor model (i.e., COVID danger and contamination fears, COVID fears about economic consequences, COVID xenophobia, COVID compulsive checking and reassurance seeking, and COVID traumatic stress symptoms), and a hierarchical model (i.e., items loading onto 5 factors which then all load onto one higher-order factor) were tested. Comparative fit index (CFI) and Tucker-Lewis index (TLI) values greater than 0.95 indicate good fit to the data (Hu & Bentler, 1999), while values above 0.90 indicate adequate model fit. The Root Mean Square Error of Approximation (RMSEA) and the Standardized Root Mean Square Residual (SRMR) should be less than the recommended cutoff of 0.08 (Kelloway, 2015). Chi-squared values and degrees of freedom are reported, but comparisons of model fit were made using the difference in the other fit indices, as chi-squared difference tests are impacted by large sample sizes, where even small differences can become significant (Alavi et al., 2020; Putnick & Bornstein, 2016).
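For illustration only: the paper fit these models in Mplus 7.4, but the same 5-factor and hierarchical CFA specifications can be sketched in Python with the semopy package. The item names (dan1..tra3) are hypothetical stand-ins for the 18 CSS-B items, and the fit-statistic call reflects semopy's API, not the analysis actually run here.

# Illustrative sketch: CSS-B CFA models in semopy (lavaan-style syntax).
import pandas as pd
import semopy

MODEL_5F = """
danger   =~ dan1 + dan2 + dan3 + dan4 + dan5 + dan6
economic =~ eco1 + eco2 + eco3
xeno     =~ xen1 + xen2 + xen3
checking =~ chk1 + chk2 + chk3
trauma   =~ tra1 + tra2 + tra3
"""

# Hierarchical variant: the five factors load onto one higher-order factor.
MODEL_HIER = MODEL_5F + "css =~ danger + economic + xeno + checking + trauma\n"

def fit_and_report(desc: str, data: pd.DataFrame) -> None:
    model = semopy.Model(desc)
    model.fit(data)
    stats = semopy.calc_stats(model)  # includes CFI, TLI, RMSEA among others
    print(stats[["CFI", "TLI", "RMSEA"]])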
As such, the criteria used for determining differences in model fit were as follows: models were deemed significantly different if ΔCFI > 0.01, ΔRMSEA > 0.015, and ΔSRMR > 0.03 (Putnick & Bornstein, 2016). The fit indices for the CSS-B suggested that the model with the best fit is the 5-factor or hierarchical model (see Table 1). The 5-factor model fit substantially better than the 1-factor model (χ² difference test), and the CFI and TLI are both above 0.90. The RMSEA and SRMR were both less than 0.08. While model fit was slightly higher for the more parsimonious 5-factor model, invariance testing (below) was done on the hierarchical model, as Taylor et al. (2020a) accepted a total sum score. All factor loadings for the hierarchical model were greater than 0.50 (see Table 2).
| Group invariance
As data were collected at Canadian universities across multiple provinces, CSS scores may have differed across groups. Table 3 provides the means and standard deviations across sites. Measurement invariance tests were conducted across sites using Mplus 7.4 (MLR estimation) following Rudnev et al. (2018). The invariance models were tested in the following order: configural invariance; metric invariance of the first-order (lower-order) factors; metric invariance of the first- and second-order factors (i.e., lower- and higher-order); scalar invariance of the first-order factors; and scalar invariance of the first- and second-order factors. Invariance was determined by comparing changes in the comparative fit index (CFI) between successive models; a change of less than or equal to 0.01 is considered evidence of invariance (Zimprich et al., 2012). Note that if the ΔCFI was greater than 0.01 and the model did not show invariance, the subsequent model was not tested. As confirmation, changes in RMSEA and SRMR were also examined using the same thresholds as for the ΔCFI comparisons (i.e., ΔRMSEA < 0.015 and ΔSRMR < 0.03 as evidence for invariance; Putnick & Bornstein, 2016). As shown in Table 4, the configural invariance by site model provided an adequate fit to the data, indicating that the factor structure held across sites. Tukey's HSD post hoc tests were conducted to examine individual site differences. Danger and contamination fears were higher in Site 3 than any other site (S1 SE = 0.08, p < 0.001; S2 SE = 0.08, p < 0.001; S4 SE = 0.09, p < 0.001; S5 SE = 0.11, p < 0.01). Danger and contamination fears were lower in Site 2 than any other site (S1 SE = 0.07, p < 0.001; S3 SE = 0.08, p < 0.001; S4 SE = 0.08, p < 0.01; S5 SE = 0.10, p < 0.001). Economic fears were higher in Site 3 than any other site (S1 SE = 0.07, p < 0.001; S2 SE = 0.07, p < 0.001; S4 …).
| CSS-B validity
As shown in Table 5, all five CSS dimensions are positively correlated with one another and with both measures of general anxiety and depression. Disattenuated correlations are a way to control for measurement error and are calculated by dividing the raw Pearson correlation coefficient by the square root of the product of the two scales' Cronbach's alphas (Hancock, 1997; Kenny, 2011). Even after controlling for measurement error, the correlations between the CSS, anxiety, and depression show that these three constructs are related but different. Additionally, the disattenuated correlation between COVID danger and contamination fears and the total CSS sum score is almost 1.0 and is higher than the correlation between the total score and any other dimension. This indicates that this dimension appears to contribute to the total score more than the other dimensions.
This makes sense, as this dimension contains more items and, in this sample, has a higher mean than the other dimensions.
| DISCUSSION
The current study found that the five dimensions of the brief CSS fit adequately on a higher-order model. This is consistent with Taylor et al.'s (2020a) decision to assess the full CSS scale as a total score. Furthermore, our fit indices are similar to Taylor et al.'s (2020b) original 5-factor lower-order CFA findings (i.e., RMSEA = 0.05, SRMR = 0.04, CFI = 0.93 in their US sample). While reported CSS dimensions may have differed across university sites, the measure's structure held across sites. Specifically, the scale had metric invariance at the higher order but did not have scalar invariance. This indicates that the factor loadings held across sites but the intercepts (or factor means) did not. This is likely due to the differing infection rates and public health restrictions across sites. In a similar vein, COVID traumatic stress symptoms were higher in Site 3 (located in Toronto, Ontario) than any other site. Infection numbers in participants' locations likely impacted provincial restrictions and students' COVID-19 stress. Considering that infection rates and accompanying restrictions differed by study site, the lack of scalar invariance shown in the current study should be expected. Like others (e.g., Mahamid et al., 2021; Taylor et al., 2020b), the current study found that the CSS dimensions and overall CSS total score were positively related to anxiety and depression. That said, the CSS is measuring something distinct from general anxiety and depression. The disattenuated correlations (i.e., correlation coefficients corrected for measurement error) were significant but not overly strong, indicating that the CSS-B is measuring something correlated with, but separate from, general mental health symptoms. The current study showed that the shortened CSS measure is still reliable and valid. Furthermore, its psychometric properties held in a sample of young adult university students, a population that is especially vulnerable during the pandemic (Xiong et al., 2020). We were also able to show that the CSS-B dimensions differed by study site, indicating that infection rates and public health policies/restrictions influence COVID-related distress.
| LIMITATIONS AND FUTURE RESEARCH
While the results demonstrate the scale properties of the CSS-B, the current study did not include the full CSS, so we were unable to directly compare the reliability and validity of the CSS-B with the original CSS. Future research should ensure that the CSS-B captures the full domain of the COVID Stress Syndrome. That said, while we cut the scale in half, we ensured that we still measured each dimension of the syndrome (rather than choosing only a few select dimensions). The current study relied on cross-sectional, self-report data, increasing the likelihood of common method variance (CMV), which can inflate relationships artificially or otherwise bias the data (Doty & Glick, 1998; Lindell & Whitney, 2001; Malhotra et al., 2017). That said, disattenuated correlations control for measurement error. Additionally, the CFA supported a multifactor solution, whereas CMV would enhance the likelihood of support for a unidimensional, rather than multidimensional, factor solution (Harman, 1976).
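Since disattenuated correlations are central to the validity argument above, here is a minimal sketch (Python) of the computation as defined in the text: the raw Pearson r divided by the square root of the product of the two scales' Cronbach's alphas. The numeric inputs are illustrative, not study values.

# Minimal sketch: correlation corrected for measurement error (disattenuation).
import math

def disattenuated_r(r_xy: float, alpha_x: float, alpha_y: float) -> float:
    return r_xy / math.sqrt(alpha_x * alpha_y)

# e.g., an observed r of 0.50 between a subscale (alpha = 0.85)
# and the GAD-7 (alpha = 0.90):
print(round(disattenuated_r(0.50, 0.85, 0.90), 3))  # ~0.572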
Multi-source data (e.g., other-rated behaviour) are recommended to further examine the relationships between the CSS and other variables in future research. For instance, self-report data could be collected on the CSS while a spouse or roommate provides behavioural ratings. Longitudinal research should be conducted to examine the stability of the measure over time as well as its longitudinal associations with variables such as general mental health outcomes and pandemic-related behaviour (e.g., adherence to public health guidelines, substance use, excessive eating). That said, because COVID distress should be a state tied to a specific event rather than a trait, we would not expect the measure to be stable over long periods of time. In fact, measurement stability would be cause for concern, as continued high scores could indicate long-lasting mental health consequences of the pandemic. That said, people with high scores now may be those who stay relatively higher than others over time, even as pandemic-related stress habituates and declines with reopening and with the reduced risk of serious illness afforded by vaccines. It is unclear whether COVID-19 distress could become chronic. It would be interesting to examine who is most susceptible to continued COVID distress over time, even as things return to 'normal' (e.g., are traits like anxiety sensitivity related to relative maintenance of COVID stress over time?). It is also important to note that the majority of our sample was female (79.4%). While there was still a decent sample size of males (n = 270), we had very little representation of those falling outside the gender binary, which may limit the generalizability of our findings to these groups. While we tested for invariance across study sites (i.e., universities), the experiences of students within these sites may not have been that similar. Since many university classes were not in person and most of these campuses were closed during data collection, it is unclear how many students were "on site" in the university's city (or even in Canada). Further city- and country-wide research is needed. That said, we did observe site differences that would be expected. Taylor et al. (2021) found that trait resilience and optimism were negatively correlated with CSS dimensions, while health anxiety proneness and intolerance of uncertainty were positively correlated with CSS dimensions. Future research should examine other factors that may impact CSS scores, such as other personality traits (e.g., the Big Five). Research should also determine whether CSS scores impact behaviour during the pandemic (e.g., self-care behaviours, unhealthy coping methods, student academic performance).
| CONCLUSION
As the COVID-19 pandemic was a novel, unprecedented event, research is being conducted quickly. As many variables are often of interest to researchers, there is a need for shorter scales that can quickly capture constructs while avoiding participant survey fatigue. The current study found that a brief version of the CSS was structurally valid and possessed partial measurement invariance across institutions with varying levels of pandemic impact; the subscales were internally consistent and showed expected overlap with general anxiety and depression, establishing construct validity.
Various studies have found that the pandemic environment, social distancing, and lockdowns have negatively impacted mental health (e.g., Ahmed et al., 2020; Tang et al., 2020; Zajacova et al., 2020). It is important that researchers continue to examine mental health as the pandemic continues and once it is behind us. It is unclear at what rate these elevated stress symptoms will subside following the pandemic, or whether they will subside at all in the short term (Taylor et al., 2020a). As such, researchers should continue to use COVID-specific scales to assess stressors related to the pandemic.
LAMOST Spectrograph Response Curves: Stability and Application to Flux Calibration The task of flux calibration for LAMOST (Large sky Area Multi-Object Spectroscopic Telescope) spectra is difficult due to many factors, for example the lack of standard stars, flat fielding over a large field of view, and the variation of reddening between different stars, especially at low Galactic latitudes. Poor selection, bad spectral quality, or extinction uncertainty of standard stars might not only introduce errors into the calculated spectral response curve (SRC), but also lead to failures in producing the final 1D spectra. In this paper, we inspected spectra with Galactic latitude |b| ≥ 60° and reliable stellar parameters, determined through the LAMOST Stellar Parameter Pipeline (LASP), to study the stability of the spectrographs. To guarantee that the selected stars had been observed by each fiber, we selected 37,931 high-quality exposures of 29,000 stars from LAMOST DR2, with more than 7 exposures for each fiber. We calculated the SRCs for each fiber for each exposure, and computed the statistics of the SRCs for the spectrographs over both fiber variations and time variations. The result shows that the average response curve of each spectrograph (henceforth ASPSRC) is relatively stable, with statistical errors ≤ 10%. From the comparison between each ASPSRC and the SRCs for the same spectrograph obtained by the 2D pipeline, we find that the ASPSRCs are good enough to be used for the calibration. The ASPSRCs have been applied to spectra which were abandoned by the LAMOST 2D pipeline due to the lack of standard stars, increasing the number of LAMOST spectra by 52,181 in DR2. Comparing the same targets with SDSS, the relative flux differences between the SDSS spectra and the LAMOST spectra calibrated with the ASPSRC method are less than 10%, which underlines that the ASPSRC method is feasible for LAMOST flux calibration.
Introduction
The LAMOST is a quasi-meridian reflecting Schmidt telescope with an effective aperture of ∼4 m and a field of view (FoV) of 5 degrees in diameter. At the focal plane, 4,000 robotic optical fibers with an aperture size of 3.3 arcsec projected on the sky relay the target light to 16 spectrographs, with 250 fibers each (Cui et al. 2012; Deng et al. 2012). Preceded by a one-year Pilot Survey, the LAMOST Regular Surveys started in September 2012. The wavelength range of LAMOST covers 3,700 to 9,000 Å and is recorded in two arms, a blue arm (3,700-5,900 Å) and a red arm (5,700-9,000 Å), with a resolving power of R ∼ 1,800. A final spectrum is obtained by merging several exposures and connecting the two wavelength bands. Raw data from the LAMOST surveys are reduced with the LAMOST 2D pipeline (Luo et al. 2015). The procedures used by the 2D pipeline, similar to those of SDSS (Stoughton et al. 2002), aim to extract spectra from the CCD images and then calibrate them. The main tasks of the 2D pipeline include fiber tracing, flux extraction, wavelength calibration, flat fielding, sky subtraction, flux calibration, multi-exposure co-addition and the connection of the two wavelength bands. Since the data reduction steps are the reverse of the data acquisition process, we should first understand the data acquisition process of LAMOST, which can be simplified as follows:
F_o(j, λ) = [F_i(j, λ) · d_s(λ) + sky_r(λ)] · d_f(λ) · d_p(λ) + scatter(j, λ) + C_k(j, λ) + B.  (1)
In this equation, F_o(j, λ) is the observed signal, where j denotes the j-th fiber and λ denotes the wavelength; F_i(j, λ) is the target signal before passing through the atmosphere; d_s(λ) is the extinction function, including atmospheric and interstellar reddening; sky_r(λ) is the sky background; d_f(λ) is the fiber transmission function, a random number selected from a Gaussian distribution with a mean of 0.9 and variance of 1.0; d_p(λ) is the spectral response function due to the dispersion of the spectrograph; scatter(j, λ) is the scattered light, including symmetrical scattering and the cross-contamination of fibers; C_k(j, λ) is the parameter compensating for cosmic rays; and B is the CCD background. The purpose of the LAMOST flux calibration is to remove the spectral response curve (SRC) from the observations. Considering that d_f(λ) is divided out during flat fielding, the SRC of a spectrograph can be simplified as shown in equation 2, which includes only d_s(λ) and d_p(λ):
SRC(λ) = d_s(λ) · d_p(λ).  (2)
In the real flux calibration process, d_s(λ) and d_p(λ) are treated as a single SRC, by which each single exposure is divided. For the LAMOST 2D pipeline, the selection of standard stars is the first step of flux calibration (Song et al. 2012). The pipeline selects standard stars automatically by comparing all the observed spectra with the KURUCZ library produced from the ATLAS9 stellar atmosphere model data (Castelli et al. 2004). For each of the 16 spectrographs, several high-quality spectra with SNR ≥ 10, 5,750 K ≤ T_eff ≤ 7,250 K, log g ≥ 3.5 dex and -1.0 dex ≤ [Fe/H] ≤ 0 dex are selected as standard stars. In practice, the LAMOST 2D pipeline first picks out standards with temperatures in the range 6,000-7,000 K; if there are not enough stars in this range, the pipeline extends the range to 5,750-7,250 K. If more than 3 standard stars are found for a spectrograph, the SRCs of the spectrograph can be derived by comparing the observed spectra with synthetic spectra (using the corresponding parameters from the KURUCZ spectral library). Because the 2D pipeline estimates the parameters by simple fitting with the KURUCZ models, the parameters have large uncertainties for stars with [Fe/H] < -1.0 dex; considering also that the number of metal-poor stars in each spectrograph is small, the 2D pipeline uses the metallicity cut of -1.0 dex ≤ [Fe/H] ≤ 0 dex for the selection of the standards. Unfortunately, in the current LAMOST 2D pipeline, when there are not enough suitable standard stars, data reduction for the spectrograph has to be suspended. In this paper, to rescue the unsuccessful spectra, we propose a novel flux calibration method based on a stability analysis of the SRCs. Thanks to the more than 2 million spectra with reliable stellar parameters in DR2, we are able to statistically measure the instrument stability. Through the stellar parameters, the SRC of each fiber can be obtained. By averaging the SRCs in each spectrograph, we can obtain an average spectrograph SRC (ASPSRC) and use it to calibrate the spectra of each spectrograph without pre-selecting flux standard stars, assuming the ASPSRC is sufficiently stable. This flux calibration method can rescue spectra from LAMOST which were abandoned by the 2D pipeline. The paper is organized as follows. Section 2 details the procedures used to create the ASPSRC for each spectrograph. The accuracy analysis of the ASPSRC and its application to flux calibration are presented in Section 3. We conclude with Section 4, which summarizes and discusses the results.
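As a minimal sketch (Python/pandas) of the standard-star cuts just described for the 2D pipeline: the column names (teff, logg, feh, snr) are hypothetical stand-ins for a LASP parameter catalogue, while the thresholds are those quoted in the text.

# Minimal sketch: the 2D pipeline's two-pass standard-star selection.
import pandas as pd

def select_standards(cat: pd.DataFrame) -> pd.DataFrame:
    # First pass uses 6,000-7,000 K; relax to 5,750-7,250 K if fewer than 3 stars.
    base = (cat["snr"] >= 10) & (cat["logg"] >= 3.5) & cat["feh"].between(-1.0, 0.0)
    strict = cat[base & cat["teff"].between(6000, 7000)]
    if len(strict) >= 3:
        return strict
    return cat[base & cat["teff"].between(5750, 7250)]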
Selection of the Sample
Work by Xiang et al. (2015) shows that variations of the SRCs exist; this was demonstrated using stars in densely populated fields, which however suffer from high interstellar extinction. To study the variations of the SRCs, one should instead use stars with low extinction. Therefore, we selected stars at high Galactic latitude to analyze the instrument response (Fitzpatrick et al. 1999, 2007). To obtain a good approximation of the ASPSRCs, we require as many flux standard stars as possible. To ensure the quality of the sample, the stellar parameters from LASP (Wu et al. 2011a,b) were used to select the F-stars with the highest signal-to-noise ratios (SNRs). We selected stars with 6,000 K ≤ T_eff ≤ 7,000 K, log g ≥ 3.5 dex and Galactic latitude |b| ≥ 60° (Xiang et al. 2015). The ASPSRCs are derived from a great number of standard stars instead of a group of several standards as in the 2D pipeline, and 90% of the metallicities of the stars from which the averaged SRCs are generated lie in the range [Fe/H] ≥ -1.0 dex. The accuracy of the parameters measured by LASP is good enough even for metal-poor stars and will not affect the averaged result; thus we did not apply a metallicity cut in this sample selection. With the benefit of the large sample of targets that satisfied the above parameter space, we find that there are sufficient and appropriate exposures across all fibers and spectrographs to allow us to use them as standards. Fig 1 shows the histogram of the number of standards per spectrograph from DR2, which indicates at least 7 standards per fiber (with 250 fibers in each spectrograph, this is equivalent to at least 1,750 individual exposures, as shown in Fig 1). Fig 2 shows the histogram of their effective temperatures, mostly located in the vicinity of 6,100 K (i.e., F8-type stars).
Spectral Response Curves
Let F_o(λ) and F_i(λ) denote the measured and intrinsic spectral flux density; thus
F_o(λ) = F_i(λ) · d_s(λ) · d_p(λ),
where d_s(λ) is the combined atmospheric and interstellar extinction, and d_p(λ) the telescope and instrumental response. In this work, we adopted synthetic flux as F_i(λ), calculated using SPECTRUM based on the ATLAS9 stellar atmosphere model data released by Fiorella Castelli. Synthetic spectra of 1 Å dispersion from the KURUCZ library were used; the spectra were then degraded to the LAMOST spectral resolution by convolving with a Gaussian function. Only spectra with a constant micro-turbulent velocity of 2.0 km/s and zero rotation velocity were adopted, since these two parameters have little effect on the spectral energy distribution (SED) at a given temperature (Grupp 2004). The interstellar extinction can be neglected owing to our selection of high-latitude standards; however, the atmospheric extinction cannot be separated from the instrumental response. The SRCs in this paper therefore include atmospheric extinction, and its variations are included in the uncertainty of the SRCs. It is generally assumed that the SRCs are smooth functions of wavelength. In order to derive the SRCs, we applied a low-order piecewise polynomial fit to the ratios of the observed and synthetic spectra of the standard stars.
Derivation of the ASPSRCs
For the 250 fibers in each spectrograph, at least 1,800 good SRCs (through multi-exposures) were derived (excluding the bad fibers). We chose the fitted SRCs rather than the direct ratios of observed to synthetic spectra to estimate the ASPSRC because the direct ratios are susceptible to noise.
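As a minimal sketch (Python/numpy) of the SRC derivation described above, one can fit a smooth polynomial to the ratio of an observed spectrum to its synthetic template. A single Chebyshev fit is used here for brevity, whereas the paper uses a low-order piecewise polynomial; the arrays are assumed inputs.

# Minimal sketch: derive one fiber's SRC from an observed/synthetic spectrum pair.
import numpy as np

def fit_src(wave, observed, synthetic, degree=5):
    good = np.isfinite(observed) & (synthetic > 0)        # guard against bad pixels
    ratio = observed[good] / synthetic[good]              # raw response estimate
    coeffs = np.polynomial.chebyshev.chebfit(wave[good], ratio, degree)
    return np.polynomial.chebyshev.chebval(wave, coeffs)  # smooth SRC on the grid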
We concentrate on the relative rather than the absolute flux calibration, such that, for a given spectrograph, the SRCs yielded by the spectra of the individual standard stars were divided by the average of their SRCs (i.e., the SRCs were scaled to a mean value of unity). We expect that there is a unified response for a given spectrograph across exposures at different times and using different plates. The red curves in Fig 5, Fig 6 and Fig 7 show the SRCs scaled in this way. We calculated the mean values of the absolute and relative uncertainties for the g, r and i-bands, which are presented in Table 1. Table 1 shows that for all 16 spectrographs, the uncertainties are smaller than 8% for both the g and i-bands. The r-band is located at the edge of both arms, and thus, due to the low sensitivities there, the uncertainties for the r-band are much larger (for example, Spectrograph No. 5 can differ by up to 11.13%). This means the fluxes and centroids of the lines located at the junction of the blue and red arms (such as Na D at λ5,892 Å) are sometimes not credible.
Time Variations
Generally, the LAMOST observational season spans nine months, from September to the following May. Still, we can conclude that spectrographs No. 4, No. 11, No. 15 and No. 16 were more stable than the others during the DR2 period.
Flux Calibration Based on ASPSRCs
The spectral flux calibration of target objects is generally achieved by obtaining separate measurements of spectrophotometric standard stars (Oke et al. 1990; Hamuy et al. 1992, 1994) on the same observing night with the same instrumental setup. However, for a large spectroscopic survey, obtaining separate measurements of sufficient standard stars for each night and each spectrograph becomes impossible, and an alternative strategy has to be adopted. In the case of the Sloan Digital Sky Survey (York et al. 2000), F turn-off stars within the FoV are used to calibrate the spectra. These standards are preselected based on photometric colors and are observed simultaneously with the targets (Stoughton et al. 2002; Yanny et al. 2009). The intrinsic SEDs of F turn-off stars are well determined by theoretical models of stellar atmospheres, and the effects of interstellar extinction can be characterized and removed using the all-sky extinction map of Schlegel et al. (1998) (Schlegel et al. 1998; Schlafly et al. 2010). Without a photometric system for LAMOST, and lacking extinction values especially at low Galactic latitudes, the standard stars are not pre-assigned. Usually, the flux standard stars are selected from the spectra in each spectrograph after observation. Sometimes the selection of the standard stars fails, and the spectrograph of the plate has to be abandoned by the LAMOST 2D pipeline. This is indeed why the ASPSRC method is important, as using fixed instrumental response curves can recover some of these abandoned plates.
Co-adding the Multi-exposures
To improve the SNRs and overcome the effect of cosmic rays, each field is designed to be exposed multiple times. The spectra of each single exposure may be on different scales due to variations in the observational conditions. Since they are divided by the same ASPSRC, spectra on different scales cannot simply be co-added (see Fig 13, top panel). The monochromatic AB magnitude is defined as the logarithm of a spectral flux density with a zero point of 3631 Jansky (Oke et al. 1983), where 1 Jansky = 1 Jy = 10^-26 W Hz^-1 m^-2 = 10^-23 erg s^-1 Hz^-1 cm^-2.
If the spectral flux density is denoted f_ν, the monochromatic AB magnitude is:
m_AB = -2.5 log10( f_ν / 3631 Jy ).
Actual measurements are always made across some continuous range of wavelengths. The bandpass AB magnitude is defined similarly, with the zero point corresponding to a bandpass-averaged spectral flux density of 3631 Jansky:
m_AB = -2.5 log10( ∫ f_ν (hν)^-1 e(ν) dν / ∫ 3631 Jy (hν)^-1 e(ν) dν ),
where e(ν) is the equal-energy filter response function. The (hν)^-1 term assumes that the detector is a photon-counting device such as a CCD or photomultiplier. The synthetic magnitude can be obtained by convolving the flux spectra with the SDSS g and i band transmission curves (Hamuy et al. 1992, 1994). We adopted the g and i filter zero points from Pickles (2010). The spectra are then scaled by comparing the synthetic magnitudes with the photometric magnitudes. The scale coefficients SC(g) and SC(i) follow from this comparison as the ratios of the photometric to the synthetic band fluxes, i.e. SC = 10^(-0.4 (m_phot - m_syn)), and are multiplied with the observed spectra. The spectra of Fig 13 were scaled using the method described above. The rescaled spectra can then be co-added and the final spectra derived, as shown in Fig 13 (bottom panel). It should be noted that this method is subject to the SNR of the spectra, since the synthetic magnitudes depend on the quality of the spectra. This method needs the g and i-band photometric magnitudes for each target; thus we cross-matched LAMOST targets with Pan-STARRS1 (Tonry et al. 2012) within 3 mas. The LAMOST sources are selected from multiple catalogs with multi-band photometry; consequently, not all the LAMOST targets overlap with Pan-STARRS1. By cross-matching, we found that about 80% of the LAMOST targets are included in Pan-STARRS1. For those targets not in the Pan-STARRS1 catalog, SDSS PetroMag ugriz magnitudes were adopted. For the remaining targets without such photometry, we have to use only the overlap between the blue and red arms, in a very small wavelength range, to connect the two arms, which might lead to a piecing discontinuity if the signal-to-noise ratio is too low in the overlap. For the final spectra, spline fitting with strict flux conservation is adopted to re-bin the spectra to a common wavelength grid. Once the flux is co-added by this method, the blue and red arms are pieced together directly and the SEDs are consistent with the target colors. For the targets which do not have photometry in the optical band but have multiple exposures, we scaled the flux of the multi-exposures to the flux of the exposure with the highest SNR. After the multi-exposures are co-added, the blue and red arms are pieced together by adjusting one of the scales (using the overlaps) to yield the final spectra.
Accuracy Analysis for Flux Calibration through the ASPSRCs
Before discussing the accuracy of the ASPSRCs, we studied the SRCs of the DR2 plates derived from the LAMOST 2D pipeline to further confirm the stability of the LAMOST spectrograph response curves. Each ASPSRC is consistent with the average of the SRCs from the LAMOST 2D pipeline. Table 1 shows that the mean uncertainties of the ASPSRCs are smaller than 10%, consistent with the 1σ uncertainties of the SRCs at high Galactic latitude from the 2D pipeline. To verify the feasibility of applying the ASPSRCs to flux calibration, we selected stars observed by both LAMOST and SDSS. We cross-matched the abandoned targets of LAMOST DR2 with SDSS DR12 and obtained 1,746 spectra of 1,702 stars with SNRs higher than 6. We calibrated the LAMOST spectra abandoned by the 2D pipeline and divided them by the spectra of the same sources from SDSS.
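Before the LAMOST/SDSS ratio test is examined, here is a minimal sketch (Python/numpy) of the exposure-scaling step described earlier in this section, under the stated assumption that SC = 10^(-0.4 (m_phot - m_syn)). The filter-curve arrays are assumed inputs (e.g., the SDSS g-band transmission), and the spectrum is taken to be f_ν sampled on a wavelength grid in Ångströms.

# Minimal sketch: synthetic bandpass AB magnitude and the scale coefficient SC.
import numpy as np

def synthetic_ab_mag(wave, flux_nu, filt_wave, filt_resp):
    """Bandpass AB magnitude for a photon-counting detector."""
    resp = np.interp(wave, filt_wave, filt_resp, left=0.0, right=0.0)
    # For f_nu integrated over wavelength, the (h*nu)^-1 photon weighting
    # becomes a 1/lambda factor; the constants cancel in the ratio.
    num = np.trapz(flux_nu * resp / wave, wave)
    den = np.trapz(3631e-23 * resp / wave, wave)  # 3631 Jy in erg s^-1 cm^-2 Hz^-1
    return -2.5 * np.log10(num / den)

def scale_coefficient(m_photometric, m_synthetic):
    # Multiplying the spectrum by SC matches its synthetic magnitude
    # to the catalogue photometry.
    return 10.0 ** (-0.4 * (m_photometric - m_synthetic))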
The ratios of the two spectra were calculated and then scaled to median values of unity, and the results are shown in Fig 14. The ratios yield an average that is almost constant around 1.0 over the whole spectral wavelength coverage, except in the sky emission line regions; residuals in the oxygen and water vapor bands of the Earth's atmosphere are attributed to uncertainties in the flat-fielding and sky subtraction. The standard deviation is less than 10% at wavelengths from 4,500 Å to 8,000 Å, but at both edges the standard deviation increases to 15% due to the rapid decline of the instrumental throughput. The results show that flux calibration using the ASPSRCs achieves a precision of ∼10% between 4,100 Å and 8,900 Å. For the bright and very bright plates, most can be calibrated successfully by the 2D pipeline. However, for the LAMOST faint plates (F-plates) of DR2, the flux-calibration failure rate of the 2D pipeline is around 9%, and for the medium plates (M-plates) the failure rate is around 8%. Fig 15 to Fig 17 show the spectra of galaxies, QSOs and stars rescued from the abandoned plates. We compared the rescued spectra with those of SDSS DR12 (the former are plotted with black curves and the latter with red curves). Most match their corresponding SDSS spectra quite well, with differences of only a few percent in their continua. For LAMOST 20130208-GAC062N26B1-sp13-112, the red-arm spectrum shows turbulent components for spectrograph No. 13; this is explained by problems caused by the cooling system of the spectrograph's CCD. For LAMOST 20140306-HD134348N172427B01-sp10-014, the SED from the ASPSRC method is bluer than that of SDSS. We believe this is due to the fact that we do not separate the Earth's atmospheric extinction from the response of the spectrograph. Generally, the variations of the optical atmospheric extinction curve can be described by low-order polynomials (Patat et al. 2011). The atmospheric extinction curve included in the ASPSRC is an average one, and multiplication by a low-order polynomial is required to obtain the real atmospheric extinction curve at the time the target was observed. Therefore, some spectra calibrated using the ASPSRCs need low-order polynomial corrections to match the SDSS spectra. The atmospheric extinction at LAMOST will be studied in depth and integrated into this work. Overall, the ASPSRC flux calibration achieves a precision of ∼10% over the LAMOST wavelength range. The potential uncertainties and temporal variations of the atmospheric extinction generally do not impact the final accuracy of the spectral lines, though they do affect the shapes of the deduced SEDs (at the level of low-order polynomials).
Rescuing the Abandoned Targets
For LAMOST DR2, there are 1,095 spectrographs on 385 plates which were abandoned by the 2D pipeline due to the failure to find standard stars. We started with the 2D pipeline for fiber tracing, flux extraction, wavelength calibration, flat fielding and sky subtraction. The ASPSRCs were then adopted to calibrate the 195,694 spectra in the 1,095 spectrographs. After the flux calibration and co-addition, the LAMOST 1D pipeline was employed to classify the spectra and measure the radial velocities for stars and the redshifts for galaxies and QSOs. Based on a cross-correlation method, the 1D pipeline recognizes the spectral classes and simultaneously determines the radial velocities or redshifts from the best-fit correlation function. The 1D pipeline produces four primary classifications, namely STAR, GALAXY, QSO and UNKNOWN.
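As a minimal sketch (Python/numpy) of the LAMOST/SDSS ratio test described at the start of this passage: divide each recalibrated LAMOST spectrum by its SDSS counterpart on a common wavelength grid, normalize each ratio to a median of unity, then examine the mean and scatter per wavelength. All arrays are assumed inputs.

# Minimal sketch: per-wavelength mean and scatter of LAMOST/SDSS flux ratios.
import numpy as np

def compare_spectra(lamost_fluxes, sdss_fluxes):
    """lamost_fluxes, sdss_fluxes: 2-D arrays (n_stars, n_pixels) on one grid."""
    ratios = lamost_fluxes / sdss_fluxes
    ratios /= np.median(ratios, axis=1, keepdims=True)  # scale each ratio to 1.0
    mean_ratio = np.nanmean(ratios, axis=0)             # should hover around 1.0
    scatter = np.nanstd(ratios, axis=0)                 # <10% over 4,500-8,000 A
    return mean_ratio, scatter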
It is difficult to recognize galaxy and QSO spectra and determine their redshifts; because the SNRs of galaxy and QSO spectra are relatively low, the LAMOST 1D pipeline does not work as well for them as it does for stellar classification. An additional independent pipeline, the Galaxy Recognition Module (GM for short), has been designed for galaxy recognition and redshift measurement. After the 1D pipeline has run, the GM automatically identifies galaxies and measures their redshifts by recognizing lines. The redshifts of galaxies are determined from the line centers. Before the line centers are measured, a Gaussian function with a sigma of 1.5 times the wavelength step is applied to the spectra to suppress noise. The continua, smoothed by a median filter, are divided out to complete the normalization. Data points that exceed 2σ in a normalized spectrum are selected as emission line candidates, and a set of Gaussian functions is then used to fit the lines. All the line centers are compared with line lists stepped in redshift (z) by increments of 0.0005. If most of the lines match heavily weighted lines such as Na D, Mg b, CaII H or CaII K for absorption galaxies, or Hα, OII, Hβ, OIII or NII for emission galaxies, the spectrum is classified as a galaxy, and the corresponding z is the raw redshift of the spectrum. For QSOs, however, the classifications and measurements depend heavily on visual inspection. We combined the classifications of the GM, the 1D pipeline and expert inspection; the final classifications of the spectra of the 1,095 spectrographs are presented in Table 2. In total, 52,181 additional spectra have been recognized in DR2 and will be officially released in the Third Data Release (DR3) of the LAMOST Regular Survey. The fraction of objects rescued is about 52,000/2,000,000 (∼2.6%). For the rescued 52,181 targets, we evaluated the quality by plotting magnitude against SNR for galaxies and QSOs, and for stars. For galaxies and QSOs, most of the magnitudes are spread between 17.0 and 19.0, as shown in Fig 18. This is close to the limit of LAMOST observations; consequently, the majority of their SNRs do not reach 10. To reduce the differences in SNRs due to differences in exposure times, all of the SNRs in this paper were scaled to 5,400 s. For stars, there are two peaks in the distribution of magnitudes, as shown in Fig 19. The magnitudes of the A, F, G and K-type stars range from 13.0 to 17.0, and those of the M-type stars range from 15.0 to 18.0. The SNRs of the stars are higher than those of the galaxies and QSOs; however, most are below 30, which is comparatively low for stars. With the exception of M-type stars, we selected stars with SNRs in the r band larger than 2.0 for the release. Therefore, an obvious cut is seen in the bimodal point distributions of early- and late-type stars in Fig 19. For the F, G and K-type stars, by running LASP, we parameterized those with SNRs in the g band larger than 6.0 for nights with a dark moon and 15.0 for nights with a bright moon. The final stellar parameter coverage is presented in Fig 20.
Revision of the 2D Calibration
To minimize potential errors introduced by poor sky subtraction, the current LAMOST 2D pipeline (v2.7) scales the sky spectrum so that the flux intensities of its sky emission lines match those of the target spectrum from which it will be subtracted. It is assumed that the emission lines are homogeneous across the FoV of an individual spectrograph (about 1 deg).
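As a minimal sketch (Python/numpy) of the GM line-matching scheme described earlier in this subsection: step a trial redshift in increments of 0.0005 and count how many measured line centers fall near redshifted rest-frame lines. The rest wavelengths (in Å) are standard values; the matching tolerance and redshift ceiling are assumed parameters, and the actual GM additionally weights the lines.

# Minimal sketch: grid search for the redshift that matches the most lines.
import numpy as np

REST_LINES = {"CaII K": 3933.7, "CaII H": 3968.5, "Hbeta": 4861.3,
              "OIII": 5006.8, "Mgb": 5175.4, "NaD": 5892.9, "Halpha": 6562.8}

def best_redshift(measured_centers, z_max=0.5, tol=2.0):
    rest = np.array(list(REST_LINES.values()))
    best_z, best_hits = 0.0, 0
    for z in np.arange(0.0, z_max, 0.0005):  # grid spacing quoted in the text
        shifted = rest * (1.0 + z)
        hits = sum(np.min(np.abs(shifted - c)) < tol for c in measured_centers)
        if hits > best_hits:
            best_z, best_hits = z, hits
    return best_z, best_hits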
However, the continuum sky background and the sky emission lines originate from very different sources and are excited by different mechanisms, so their emission levels are unlikely to scale linearly. In fact, even amongst the sky emission lines, lines from different species may behave quite differently in their temporal and spatial variations (Oliva et al. 2015). Consequently, scaling the sky spectra by the measured fluxes of sky emission lines risks subtracting an incorrect level of sky background. For a minority of spectra, the telluric bands of the standard stars are severely under-subtracted, and as a result the SRCs of the standards are over-fitted (see Fig 21). The oxygen band is under-subtracted in the spectra of the standards; this causes the over-fitted SRC to contain the oxygen band as well, introducing artificial spectral lines into all the spectra of the spectrograph and making classification by the 1D pipeline difficult. Fig 22 shows a spectrum calibrated with such an over-fitted SRC from Fig 21 (using the 2D pipeline), plotted with black curves. We recalibrated the spectra using the ASPSRCs, presented with red curves in Fig 22. After recalibration, the spectrum was classified as F0 by the 1D pipeline (an improvement over its previous 'UNKNOWN' classification). Comparing the ASPSRCs with the SRCs from the LAMOST 2D pipeline, we found that 6 spectrographs from DR2 have this problem, and all 6 of the plates were observed on nights with a very bright moon. The ASPSRC method has been used to correct this problem, and the spectra of these 6 spectrographs will be released in LAMOST DR3.
Analysis and Discussion
We have applied the ASPSRCs to the flux calibration for LAMOST; however, there are still some uncertainties in the ASPSRCs caused by the individual SRCs. The variations in the shape of the SRCs might be attributed to several factors. First of all, although we selected the standard stars from high Galactic latitudes to minimize the effects of variations in interstellar extinction, the effect of the Earth's atmospheric extinction still exists. Typical atmospheric extinction curves are smooth functions of wavelength over the LAMOST wavelength coverage (Bongard et al. 2013; Cullen et al. 2011), and this is usually true for the variations of atmospheric extinction as well, which can be well represented by low-order polynomials. Therefore, the mean atmospheric extinction curve included in the ASPSRCs does not affect the spectral lines of the calibrated spectra. Secondly, for spectra with SNRs lower than about 10, the discrepancies between SRCs increase rapidly, along with some systematic differences (Xiang et al. 2015). To minimize the uncertainties introduced by spectral SNRs, we selected standard stars with SNRs larger than 20 to obtain the ASPSRCs. In addition, errors in the stellar atmospheric parameters of the standard stars also cause variations in the SRCs. For flux standard stars of 5,750 K ≤ T_eff ≤ 6,750 K, an error of 150 K in T_eff can lead to a maximum uncertainty of 12% in the shape of the stellar SED and will thus change the shape of the SRC derived from it. Uncertainties caused by errors in log g are negligible (for an estimated uncertainty of 0.25 dex in log g, the effect is about 1% over the whole wavelength range). Metallicity mainly affects the blue-arm spectra at wavelengths less than 4,500 Å. An error of 0.2 dex in [Fe/H] can change the SED shape between 3,800 Å and 4,500 Å by approximately 3%, while the effects at wavelengths greater than 4,500 Å are only marginal (Xiang et al. 2015).
For this reason, we removed candidate standards whose uncertainties in T_eff were larger than 150 K. The advantage of the ASPSRCs comes from averaging: each ASPSRC is the mean of many individual instrument response curves and atmospheric extinction curves, so although the individual SRCs carry the uncertainties introduced by the influencing factors discussed above, averaging largely eliminates their effects. Our experiments show that the combined effect of all the influencing factors on the accuracy of the flux calibration is less than 10% during the DR2 period. The average SRCs are presented in Table 3 to Table 6. One can use them to calibrate spectra from the LAMOST DR2 catalogue. For spectra observed after DR2, new ASPSRCs will need to be produced to track variations of the instrument.
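As a minimal sketch (Python/numpy) of the ASPSRC construction summarised above: reject candidate standards with large T_eff uncertainty, scale each fitted SRC to a mean of unity, then average over all exposures in a spectrograph. The array shapes are assumptions for illustration.

# Minimal sketch: build one spectrograph's ASPSRC from its individual SRCs.
import numpy as np

def build_aspsrc(srcs, teff_err, max_teff_err=150.0):
    """srcs: (n_exposures, n_pixels) fitted SRCs for one spectrograph;
    teff_err: (n_exposures,) T_eff uncertainties of the matching standards."""
    keep = teff_err <= max_teff_err                               # cut noisy standards
    normed = srcs[keep] / srcs[keep].mean(axis=1, keepdims=True)  # scale to unit mean
    aspsrc = normed.mean(axis=0)                                  # average response
    scatter = normed.std(axis=0)                                  # <~10% per Table 1
    return aspsrc, scatter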
Measurement of Physical Activity in Adults with Muscular Dystrophy: A Systematic Review There is little consensus about the measurement of physical activity in adults with muscular dystrophy. This systematic review summarizes the evidence for the measurement properties of direct and indirect measures of physical activity in adults with muscular dystrophy. A two-phase search for peer-reviewed articles identified, firstly, studies which measured physical activity in this population and, secondly, studies reporting the measurement properties of activity measures. Methodological quality was assessed using COSMIN guidelines and a best evidence synthesis conducted. Phase 1 included 53 studies identifying 63 measures, including accelerometers, direct observation, heartrate monitors, calorimetry, positional sensors, activity diaries, single scales and questionnaires. Phase 2 included 26 studies of measurement properties for 32 measures. The methodological quality of the included studies was low; only 2 were rated good. There was insufficient evidence to robustly recommend any physical activity measures, and further research is required to validate measures of physical activity for adults with muscular dystrophy. Based on the findings of this review, measures with potential for further study have been highlighted.
Introduction
The aim of this review was to appraise measures of physical activity for the assessment of adults with muscular dystrophy. Effective physical activity measurement is important to evaluate outcomes in randomised controlled trials (RCTs), to monitor disease progression and to make recommendations for optimising physical activity [1]. For adults with muscular dystrophy, physical activity has been linked to health benefits, such as improved fitness and self-management [2,3]. However, more research reporting quantified physical activity levels is required to determine optimal activity for adults with muscular dystrophy and to evaluate potential risks, such as exercise-induced damage to dystrophic muscle [4,5].
Figure 1. Diagram created by author (SRL) based on common clinical practice and physical activity measurement analysis frameworks applied in the literature [8,9].
Physical activity is defined as behaviours involving bodily movements and energy expenditure [6,7]. Measurement of physical activity can be defined using a well-recognised conceptual framework [8,9] which considers the Frequency, Intensity, Timing and Type (FITT) of activity, or overall measurement encompassing these parameters (see Figure 1). The qualities of measurement tools can be defined in terms of measurement characteristics and properties, according to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) taxonomy [10] (see Figure 2). The measurement characteristics (generalizability and interpretability) of physical activity measures are variable because they depend on population and setting, and because there are numerous diverse ways to measure physical activity. These include indirect self-report tools, such as diaries and questionnaires, and direct tools which record the physiological consequences of activity, including bodily movements, metabolism and cardio-respiratory responses [11,12].
The characteristic pros and cons of these measurement tools (such as ease of use, burden, range and ability to capture FITT parameters) have been discussed for healthy individuals [8,12,13,14,15], older adults [9], wheelchair users [16] and people with neuromuscular diseases [17]. However, it is not known which measures might be most suitable for the assessment of physical activity specifically in adults with muscular dystrophy, which is characterised by progressive weakness, heterogeneous presentations and variable function. It is therefore important to ascertain the generalizability and interpretability of physical activity measures in adults with muscular dystrophy to aid selection of appropriate measurement tools. The measurement properties (reliability, responsiveness and validity) of physical activity measures have also been investigated in multiple studies in various populations, including other neurological, rheumatological, oncological or pulmonary conditions and healthy, elderly, disabled or cognitively impaired individuals [9,12,13,17]. However, the cumulative evidence is inconclusive due to conflicting reports, varied study designs, the diversity of measures and a lack of consensus about gold standard criterion measures. Furthermore, the reliability, responsiveness and validity of physical activity measures established in other populations may not be transferable to adults with muscular dystrophy, who may have very different muscle, metabolic and cardiac functioning [18,19,20,21]. Thus, the measurement properties of physical activity measures when used with adults with muscular dystrophy remain unclear. To the authors' knowledge, this is the first review to examine population-specific evidence for the reliability, responsiveness and validity of physical activity measurement in adults with muscular dystrophy. The objectives of this review were, firstly, to identify direct and indirect physical activity measures used to assess adults with muscular dystrophy in a range of study designs and to describe their generalizability and interpretability; secondly, to appraise the evidence of reliability, responsiveness and validity for physical activity measures in studies which included adults with muscular dystrophy; and finally, based on a narrative synthesis, to make recommendations, where possible, for the selection of suitable physical activity measurement tools for use with adults who have muscular dystrophy.
Methods
The protocol was registered on PROSPERO in July 2017 (Registration Number CRD42017070514) and follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines [22].
Search
The search was conducted in 2 phases. Phase 1 was designed to identify physical activity measures used to assess adults with muscular dystrophy and to describe their generalizability and interpretability. Phase 2 was designed to identify and appraise evidence for their reliability, responsiveness and validity. Both searches were run in Journals @ Ovid Full Text. In phase 1, Ovid search terms were expanded from 'muscular dystrophy', 'physical activity' and 'measure'. In phase 2, the search strategy was informed by the previous search results, and additional 'measurement property' search terms were added. (For the full search, see supplementary material appendix I.)
Study Selection
Studies were selected by 3 reviewers using the eligibility criteria listed in Table 1. (In phase 1: SRL and CW (10% sample); in phase 2: SRL, FS and CW (10% sample)).
Disagreements were resolved by consensus discussion at this stage and throughout (arbitrated by CW). Studies of any design were included if they had measured physical activity in any adult(s) with muscular dystrophy. Only studies where the measurement of activity spanned more than 10 minutes were included, as shorter bouts of activity are not considered to contribute to recommended daily activity tallies [23,24,25]. The FITT framework (see Figure 1) was applied to ensure that only studies intending to quantify physical activity overall, or in 3 or more FITT parameters, were included. When several reports pertained to the same study, the most recent or most comprehensive article in terms of physical activity measurement was selected. In phase 2, inclusion was further limited to full-text articles that evaluated reliability, responsiveness and validity. A scarcity of physical activity measurement evaluation studies in adults with muscular dystrophy was anticipated, so inclusion encompassed not only studies that overtly reported validity, reliability and responsiveness but also those that included hypothesis testing which incidentally indicated the measurement properties of physical activity measures.
Data Extraction
The data extraction form was developed a priori, customised from previously published extraction tools [26,27] (see supplementary material appendix II). In phase 1, descriptive data were extracted by a single reviewer (SRL). In phase 2, 2 reviewers (SRL and FS) independently extracted the data.
Methodological Quality
In phase 1, methodological quality assessment was unnecessary because the data were descriptive only. In phase 2, methodological quality was independently assessed by 2 reviewers (SRL and FS) using the COSMIN guidelines [28] to rate the evidence supporting measure reliability, responsiveness or validity as excellent, good, fair or poor.
Synthesis
In phase 1, the physical activity measures identified were described, listed and categorised. Their generalizability was quantified in terms of number of studies, number of participants, demographics (including age range, gender, diagnoses and mobility) and environment. Their interpretability was considered in terms of FITT measurement scope (i.e. capture of how often and for how long different activities were carried out, and at what intensity, for example light, moderate or vigorous), timeframe, mode, metric and range of scores. In phase 2, the evidence for the validity, reliability and responsiveness of physical activity measures, with its methodological quality rating, was listed for each included study. A narrative synthesis was carried out, considering the strength and consistency of the evidence.
Eligibility criteria (Table 1, excerpt). Intervention: physical activity, including free-living activities or prescribed exercise, lasting 10 minutes or more, where physical activity can be defined as "behaviour that involves human movement, resulting in physiological attributes including increased energy expenditure and improved physical fitness" (page S11) [7], or "any bodily movement produced by skeletal muscles that requires energy expenditure beyond resting expenditure" (page 3109) [6] and "the execution of a task or action by an individual" (page 577) [121]. No physical activity: the following types of activities: activity lasting less than 10 minutes (or for an undisclosed time period); activities during therapy sessions; activities performed at relative rest (e.g.
sleep or nocturnal movements, resting activity, mouth or tongue exercises, small-muscle hand exercises and pulmonary muscle training); and functional activity milestones (e.g. interval analysis of loss of ambulation). Comparison and outcomes: studies reporting measurement of physical activity, encompassing 3 or more FITT parameters via a battery of measurement approaches, or encompassing overall physical activity measurement.
Phase 1
Study selection is summarised in Figure 3a. Agreement between reviewers (SRL and CW) was 90% and 87% for abstract and full-text screening respectively, with full agreement after consensus discussion. Included articles are listed and described in Table 2; 63 physical activity measures were identified (see Table 3 and supplementary Table 3a).
Generalizability
Activity measurement was generalizable across gender, mobility (from independent walking to wheelchair use), age (from teenagers to the elderly) and different muscular dystrophy diagnoses. Myotonic dystrophy and facioscapulohumeral dystrophy were the most commonly assessed. Indirect measures were used to assess larger numbers of participants than direct measures, particularly standardised questionnaires (n=1567). Of the direct measures, the greatest number of participants were assessed using accelerometry (n=731). Free-living physical activity was most usually assessed, especially by questionnaires, diaries, continuous heartrate monitoring and accelerometry. However, prescribed activities were also assessed at home and in other environments, including the gym, sports pitch and laboratory, where activity was monitored by indirect calorimetry, periodic heartrate monitoring and some training logs. The most generalizable tools within each category, used in the most studies and participants with the widest spectrum of demographics, included 2 standardised questionnaires (the International Physical Activity Questionnaire (IPAQ) and the Physical Activity Scale for Individuals with Physical Disabilities (PASIPD)), activity logs, Polar heartrate monitors and triaxial ankle accelerometers (although the only accelerometer used to assess non-ambulant participants was wrist-worn [46]). For full measure descriptions and categorizations, see Table 3a in the supplementary material.
Interpretability
Indirect measures collected activity spanning 3 days to a year (or a lifetime), some in real time, including activity diaries of 3 days to 6 months, and others by recall, including standardised questionnaires, often over 7 days. In contrast, all direct measures recorded activity in real time, from 10 minutes to 6 months. Most recording periods lasted 1-14 days, except for direct observation, periodic heartrate monitoring and indirect calorimetry, which were conducted over shorter timeframes of 10-90 minutes. There was great variability in the metrics of activity measures, making it difficult to compare activity measurement ranges (see Table 3 and supplementary Table 3a).
Phase 2
Study selection is summarised in Figure 3b. Agreement between the 2 reviewers (SRL and FS) was 86%, 87%, 91% and 86% for abstract screening, full-text screening, data extraction and COSMIN ratings respectively, with full agreement after consensus discussion. Evidence for the reliability, responsiveness and validity of 32 physical activity measures is listed in Table 4 (and supplementary Table 4a).
Only 5 included studies [42,46,47,71,75] had as their primary objective the evaluation of measurement properties of a physical activity measure; the remaining 21 articles were included for incidental measurement properties from hypothesis testing relating to other objectives. No studies were rated as excellent; 2 were rated as good [36,42], 12 as fair and 11 as poor. This was largely due to low sample sizes and incidental measure evaluation.

Reliability and Responsiveness
There was very little evidence for reliability or responsiveness testing of any physical activity measures. Of the indirect measures, there was good quality evidence of internal consistency of the PASIPD from an evaluative study including 372 participants, an estimated 7% of whom had muscular dystrophy [42]. There was fair quality evidence of internal consistency of the Physical Self-Description Questionnaire (PSDQ-S) from an evaluative study including 50 participants, 8% of whom had muscular dystrophy [75]. There was also incidental report of moderate to high test-re-test reliability of the Canada Fitness Survey (CFS) [43,76]. Of the direct measures, there was poor quality evidence of good test-re-test reliability of the StepWatch accelerometer [71] and moderate measurement error of Ubitrak (a Wi-Fi and GPS (Global Positioning System) movement tracker) [47]. There was poor quality, incidental evidence of inter-rater reliability between the K4 b2 and Oxycon Mobile indirect calorimetry gaseous analysers [60], and responsiveness of a pedometer compared to the Physical Activity Scale for the Elderly was tenuously indicated, as neither detected significant changes in physical activity post intervention [41].

Validity
There was a small amount of evidence supporting the validity of 2 indirect measures (see Table 4). The strongest evidence was for the PASIPD, which had good quality evidence of significant discriminative validity between extreme groups. There was no good quality evidence supporting the validity of any direct measures. However, there was some collective, low-quality evidence concerning accelerometry and heartrate monitoring. There was cumulative, predominantly incidental, evidence of discriminative and convergent validity of accelerometry, which was stronger for triaxial accelerometers.

Discussion
The main finding of this systematic review is that physical activity has been measured in numerous and various ways across the 53 studies assessing adults with muscular dystrophy. There is no consensus about the most generalizable or interpretable activity measurement tools for this group. Furthermore, evidence is limited about measure reliability, responsiveness and validity for the assessment of physical activity in adults with muscular dystrophy. Only 5 studies overtly evaluated the measurement properties of physical activity measures and none have provided high quality evidence of reliability, responsiveness and validity.

Direct Measures
Despite the paucity of evidence for reliability, responsiveness and validity of direct measures of physical activity in adults with muscular dystrophy, tools like accelerometry and heartrate monitoring might have potential. As demonstrated in the literature [9,14,15,16,77] and by the studies identified in this review, accelerometry and heartrate monitoring are both fairly generalizable and interpretable.
Accelerometry can capture free-living activity over the medium (days/weeks) to long term (months) and can detect frequency, absolute intensity and timing, also yielding an overall quantification of physical activity. Although accelerometry cannot discern relative exertion or type of activity, it is adaptable, relatively inexpensive and unobtrusive. In terms of measurement properties, tentative construct validity of accelerometry has been indicated in this review, with the best evidence in support of triaxial devices. Multi-plane movement detection, although not integral for regular walking, may be more suited to irregular torsions [78], characteristic of abnormal mobility in adults with muscular dystrophy [79]. Furthermore, for healthy people and those with chronic diseases, multiaxial devices have also demonstrated stronger criterion validity and lower measurement error than uniaxial devices [80]. Similarly, the triaxial GENEActiv has been validated over 6 minutes or less in adults with myotonic dystrophy [81], with construct validity supported incidentally in a high quality RCT [82] (too recent for inclusion in this systematic search), and the biaxial StepWatch has been extensively validated in ambulant people with Multiple Sclerosis and Parkinson's Disease and in children with Duchenne's Muscular Dystrophy [71,83,84,85]. In contrast, criterion validity was reportedly low and measurement error unacceptably high for the uniaxial Digi-walker over 2 minutes in ambulant adults with neuromuscular diseases, including muscular dystrophy [86]. In this review, there was more evidence for generalizability of accelerometer placement on the ankle than the trunk or wrist, although it came only from ambulant participants; wrist placement, by contrast, better encompassed a range of mobility including wheelchair users [46]. In the literature, wrist accelerometry has been linked to non-ambulant assessment [87] and lower measurement error at slow walking speeds [88], which may become relevant as muscular dystrophy progresses [79]. Thus, triaxial accelerometry, placed at the ankle or wrist, represents a potential tool for the assessment of physical activity in adults with muscular dystrophy, subject to establishing robust reliability, responsiveness and validity in both ambulant and non-ambulant people. Heartrate monitoring may also have potential, particularly for monitoring compliance with, and recording intensity of, prescribed exercise interventions in adults with muscular dystrophy. In this review there were tentative indications of construct validity for Polar devices. They are generalizable and can record frequency, timing and relative intensity of exertion, which is particularly useful for quantifying prescribed activity [89].

Indirect Measures
The same reservations about energy expenditure extrapolations must be applied to indirect measures that estimate metabolic expenditure. Additional caution is also necessary when interpreting questionnaire scores due to the potential for self-report, recall and/or social desirability bias, which usually produce overestimations [8]. However, indirect, self-report measures of physical activity for adults with muscular dystrophy are widely generalizable, inexpensive, acceptable and easy to use [9,11,12]. Several standardized questionnaires were identified as having potential in this review.
The PASIPD had the strongest evidence supporting its reliability and validity, which is consistent with evidence from other populations, including strong test-re-test reliability [92,93], discriminative validity [94] and low [92,93,95,96] to moderate [94] criterion validity. However, significant overestimation measurement error has been reported [95]. In terms of interpretability, the PASIPD comprehensively covers FITT and is sensitive to disabled and low-level activities, although it is unsuitable for comparisons with non-disabled populations. The IPAQ, BPAQ and PSDQ-S are suitable for comparison with other populations; the BPAQ and PSDQ-S are situation specific to bone health [97] and self-perception [98] respectively. The IPAQ is the most generalizable questionnaire identified in this review and various versions are available, including short, long and modified versions (more sensitive to lower activity intensities and non-ambulant mobility [99,100]), with evidence from other populations of validity [105] and predominantly overestimation measurement error [104,106]. Thus, if acceptable reliability, responsiveness and validity can be established and energy expenditure scores are treated circumspectly, both the PASIPD and IPAQ have potential for the assessment of physical activity in adults with muscular dystrophy. Activity diaries also have potential as generalizable and interpretable activity measures, especially those designed to span FITT, which are often used for prescribed activity monitoring. In addition, diaries might have potential as an adjunctive activity measure. Supplementary activity logs have been shown, for example, to mitigate IPAQ overestimation [107] and to improve criterion validity and measurement error [104]. Diaries have also been advocated alongside direct activity measures [14,15] and, in this review, diaries appeared to strengthen interpretation of heartrate monitoring and indirect calorimetry equivalence [53,54,55]. Activity diaries are, therefore, not only useful for monitoring prescribed activity; they may also have an application as adjuncts to enhance interpretability of free-living physical activity measurement.

Implications
Clearly, all physical activity measures have limitations, both general and specific to adults with muscular dystrophy. These must be considered in study design, and some authors have compiled checklists to aid measure selection [17,108]. There is also an argument, reflected by the findings of this review, for a multi-measurement approach, where multiple, complementary activity measures are employed, to improve the interpretation of physical activity measurement [14,15,16,80] and potentially improve measurement properties [104,107]. Recall bias can be neutralised by triangulation with real-time measurement, and social desirability responding can be minimised by the knowledge that responses will be verified directly [109]. Recording both relative and absolute activity, by heartrate monitoring plus accelerometry or GPS, can enrich physical activity data interpretation and has also been shown to improve measurement properties [110][111][112][113]. Thus, diaries, heartrate monitoring and, possibly, GPS might be suitable adjuncts to standardised questionnaires or accelerometry. A multi-measurement approach is recommended for the assessment of physical activity in adults with muscular dystrophy.
The current lack of research evaluating measurement properties of physical activity measurement in adults with muscular dystrophy means that authors should be encouraged to report study-level reliability and validity of the measures employed in trials or observational studies. In addition, measure evaluation studies are required to determine the validity, reliability and responsiveness of physical activity measures for use with adults with muscular dystrophy. The evidence, both evaluative and incidental, compiled in this review was predominantly low quality-rated, often linked to sample sizes below the 50-100 participant threshold set by COSMIN for high quality-ratings [27]. Sample size challenges include the rarity of adults with muscular dystrophy and study designs restricted to single diagnoses and/or separating ambulant and non-ambulant participants [1,17]. In larger samples, it is also difficult to find an activity measure suitable to encompass activity heterogeneity within and between muscular dystrophy diagnoses [114,115] and stages of disease progression [116,117,118]. Restrictive sampling is advocated for experimental designs [1]. However, to optimise statistical power, a larger, heterogeneous sample (with whole and sub-group analyses) is recommended for future evaluative studies where measurement properties are to be elucidated. For evaluative research, it is also difficult to identify a gold-standard criterion measure of physical activity. In the wider physical activity literature, criterion measures include calorimetry, accelerometry and direct observation [8,12,13,119]. Due to burden and cost, direct observation and indirect calorimetry are limited to smaller samples and short timeframes (<1 day). Calorimetry by doubly labelled water is suitable over a timeframe of 1-2 weeks, but burdensome. Energy expenditure calculations should also be viewed with caution because calorimetry is likely to be impacted by metabolic abnormalities and progressive physiological changes in muscular dystrophy [18,19,90]. In contrast, direct observation has inherent content validity [119] and, in this review, it was interpretable and generalizable in 13 studies. Thus, it represents a suitable, initial gold-standard criterion for short-term validation. Accelerometry is generalizable in larger samples and over various timeframes. Thus, accelerometry, with prior validation against direct observation, might represent a suitable criterion against which to validate other activity measures for adults with muscular dystrophy.

Strengths and Limitations
To the authors' knowledge, this is the first systematic review of the measurement characteristics and properties of physical activity measures specifically for adults with muscular dystrophy. The review employed a broad, sensitive search strategy, 3 independent reviewers and rigorous COSMIN appraisal. However, there are some limitations. These include, firstly, the exclusion of non-English language articles, which means relevant articles published in other languages may have been missed. Nevertheless, a recent review of physical activity measures in adults and children with neuromuscular diseases [17] did not identify additional measurement approaches beyond those identified in this review, which suggests no pertinent measures were missed. Secondly, there is potential for bias in phase 1, as only a 10% sample was second-reviewed and there was no methodological appraisal.
However, the descriptive nature of phase 1 was straightforward, and the methodological quality of the studies did not impact description of the tools. Thirdly, COSMIN methodology was developed for patient-reported outcome measures and, as such, its participant number cut-offs may be too stringent for direct measure evaluation, where smaller participant numbers can be statistically robust [120]. Finally, risk of reporting bias was introduced by the inclusion of incidental hypothesis testing (indicative of discriminative or convergent relationships, for which null findings are less frequently reported); thus the case for construct validity might have been artificially inflated.

Conclusions
Accelerometry, heartrate monitoring, direct observation, calorimetry, GPS, questionnaires and diaries have been used to assess physical activity in adults with muscular dystrophy. They were largely generalizable for adult age ranges, both genders, and ambulant and non-ambulant people with a range of muscular dystrophy diagnoses. However, interpretability varied between measures and there was insufficient evidence to support their reliability, validity or responsiveness for use with adults who have muscular dystrophy. Measures identified as having most potential in this review included multi-axial accelerometry and the PASIPD questionnaire. Future evaluative studies of these, and/or other, physical activity measures for use with adults with muscular dystrophy are required. Future evaluative study design should consider direct observation as a fundamental criterion and should maximise sample size. Study design should include an awareness of activity measure limitations (in general and specific to muscular dystrophy) and the potential for improved interpretability by multi-measurement.

Appendix 1. Full search
1. EXP Muscular Dystrophies
2. Muscular dystrophy
3. Facioscapulohumeral
4. Limb girdle muscular dystrophy
5. Becker's muscular dystrophy
6. Myotonic dystrophy
7. Sarcoglycanopathy
8. Duchene muscular dystrophy
9. 1 or 2 or 3 or 4 or 5 or 6 or 7 or 8
10. Wheelchair
11. 9 or 10
12. Physical activity
13. EXP human activities/ or "activities of daily living"/ or EXP social participation/ or EXP exercise/ or EXP circuit-based exercise/ or EXP cool-down exercise/ or EXP muscle stretching exercises/ or EXP physical conditioning, human/ or EXP plyometric exercise/ or EXP resistance training/ or EXP running/ or EXP swimming/ or EXP walking/ or EXP warm-up exercise/ or leisure activities/ or recreation/ or dancing/ or gardening/ or EXP sports/ or EXP athletic performance/ or EXP physical endurance/ or EXP physical fitness/ or baseball/ or basketball/ or EXP bicycling/ or boxing/ or football/ or golf/ or gymnastics/ or hockey/ or martial arts/ or mountaineering/ or racquet sports/ or return to sport/ or running/ or jogging/ or skating/ or snow sports/ or soccer/ or EXP sports for persons with disabilities/ or swimming/ or "track and field"/ or volleyball/ or weight lifting/ or wrestling/ or youth sports/
Patient priorities in herpes simplex keratitis

Objective Herpes simplex keratitis (HSK) is a sight-threatening disease and a leading cause of infectious corneal blindness. Involving patients in setting the research agenda maximises patient benefit and minimises research waste. With no published patient involvement exercises, patients' priorities in HSK are unclear. The objective of this study is to explore patients' priorities for research in HSK.
Methods A literature review of publications in the year preceding recruitment of patients identified nine domains of research interest. A questionnaire was sent to participants asking them to rank these in order of priority. The ranking results were given a weighted-average score, and a thematic analysis was undertaken for the narrative data.
Results Thirty-seven patients participated in the survey. Top priorities for patients were risk factors for recurrence of infection, diagnostic tests and treatment failure. The narrative data revealed three key clinical needs: difficulties in long-term symptom control, the need for rapid access care in acute infection and the desire for more accessible information.
Conclusion This study highlighted three major issues in our current approach to HSK. First, there may be a misalignment between research efforts and patient priorities. Second, high-quality patient information is not widely available. This may hamper patients' abilities to make informed decisions and contribute towards research. Third, clinical service priorities are of equal importance to patients as research. Researchers and clinicians are encouraged to address both needs in parallel.

What is already known about this subject?
► Involving patients in setting the research agenda ensures research benefits those who ultimately live with a condition and prevents research waste. To date, there are no published patient involvement exercises for herpes simplex keratitis (HSK), and as such it has been difficult to ensure patient priorities are being addressed.
What are the new findings?
► This survey constitutes the first published exploration of patients' priorities for research in HSK. We undertook a patient involvement exercise, conducted in the West Midlands, UK. We found that top research priorities for patients were knowledge of modifiable risk factors for disease recurrence, development of accurate and rapid diagnostic tests, and more understanding of how/when treatment failure occurs.
How might these results change the focus of research or clinical practice?
► Our narrative data give a new insight into patients' urgent clinical needs, which should be addressed in parallel to research. Our group emphasised the need for better symptom control (during and between flare-ups), rapid access to specialist ophthalmic care and high-quality patient information resources.
Introduction
Understanding patients' perspectives is vital for directing the research agenda. Clinicians, academics and the pharmaceutical industry are all key stakeholders in driving research forward, but their priorities are not always aligned with those of patients. 1 It has been argued that involving patients in research ensures the benefit to those who ultimately live with the disease and therefore prevents research waste. 2 In the UK, organisations such as the James Lind Alliance and INVOLVE (a National Institute for Health Research funded advisory group) have been major driving forces in facilitating public involvement in healthcare research. 3 Similarly, the Patient-Centered Outcomes Research Institute in the United States was set up to support patient-centred research, and to ensure funding is directed at research questions critical to the patient. 4

Herpes simplex keratitis (HSK) can be a painful and debilitating disease and, when severe, can take a remitting and relapsing course with gradual loss of sight over time. 5 The virus is usually acquired early in life, after which it resides in the trigeminal root ganglion in a quiescent state. Years later, the virus travels to the ocular surface via the trigeminal nerve, and has the potential to damage all layers of the cornea. 6 Making an initial diagnosis of HSK can be difficult due to non-specific clinical signs, as well as the low sensitivity and relatively low uptake of corneal PCR assays and conjunctival swabs. 7 Usually, there is a need to start empirical treatment in the absence of confirmatory tests. Repeated infections can accumulate blinding complications such as scarring, neovascularisation, persistent epithelial defects, corneal melt, neurotrophic keratitis and secondary bacterial infection. [8][9][10] The mainstay of treatment is topical antiviral therapy for epithelial disease and/or topical steroids for stromal complications. 11 Oral antiviral therapy as long-term prophylaxis has been shown to significantly reduce the risk of recurrences, but there is increasing recognition that HSV resistance can occur in up to one-third of patients on oral antiviral therapy for over a year. 12 There are currently no published studies of patients' priorities in research for HSK. The Sight Loss and Vision Priority Setting Partnership produced a list of priorities for research in 12 categories of eye diseases in 2014; however, HSK was not ranked in the list of priorities for corneal and external eye disease. 13
Patient-reported outcome measures (PROMs) are an assessment of health status that comes directly from the patient and are increasingly used in clinical effectiveness research, and in health policy and commissioning decisions. There is growing interest in the use of PROMs in ophthalmology; however, none have been developed specifically for HSK. Understanding of the patient's perspective in ocular surface disease has focused primarily on dry eye disease (DED), and published work is centred on the development of PROMs for symptom control and quality of life (QoL). 14 Assessment tools, such as the Ocular Surface Disease Index 15 and Impact of Dry Eye on Everyday Life, 16 allow assessment of a patient's QoL and vision-related functioning, and the National Eye Institute Visual Function Questionnaire-25 (VFQ-25) has shown that the degree of visual impairment confers a worse QoL. 17 While DED and HSK share several QoL implications, there is a wide spectrum of HSK-specific consequences (such as fear of relapse, demanding treatment regimens, neurotrophic keratitis and immunosuppressive treatment) that are not addressed by existing tools. Reynaud et al have published the only QoL study in HSK thus far, focusing on patients during quiescent disease. 18 They found levels of QoL impairment in quiescent HSK to be comparable with other sight-threatening diseases such as anterior uveitis, cataract, graft-versus-host disease and Sjögren-related DED. However, this study was limited by the lack of a QoL tool specific to HSK, and instead used a combination of the National Eye Institute VFQ-25, 19 the Glaucoma QoL 20 and the Ocular Surface Disease QoL questionnaire 21 to assess the various dimensions of living with HSK. From the literature that is currently available, our understanding of the patient's perspective in HSK is incomplete. We formed the first HSK patient participation group in the West Midlands, UK with three aims: (1) to recognise patients as stakeholders for research and clinical care in HSK, (2) to provide an opportunity for patients to steer the direction of future research and (3) to understand the patient's perspective on living with HSK.

Methods
The survey took place across four regional hospitals in the West Midlands, UK. Patients were recruited at clinic appointments by five corneal specialists, sequentially during a 6-month time frame. During consultation, all patients with an established diagnosis of HSK were invited to participate in the survey. A literature search was conducted for publications relating to HSK in the year preceding recruitment of patients (2014). The search strategy was pragmatic in approach and deliberately targeted. Database searches were restricted to PubMed/MEDLINE, with the search term 'Herpes Simplex Keratitis' expanded as follows: Publications were grouped based on their clinical relevance and classified into nine key domains (online supplementary file). The number of publications for each domain was calculated as a proportion of all research relating to HSK in that year, to serve as an indicator for the research priorities of the scientific community (table 1). We distributed the survey online via SurveyMonkey (SurveyMonkey, San Mateo, California, USA) for patients who had access to the internet (at https://www.surveymonkey.com; accessed 14 August 2017), and by telephone interview for those who could not access the internet.
We asked participants to rank these nine domains of interest in order of importance and to share thoughts on each domain. For each patient, the order of the nine domains was randomised automatically by SurveyMonkey. There was also free-text space at the end for patients to comment on anything not covered by the nine domains. Ranking results were given a weighted-average score using SurveyMonkey's standard formula. 22 Thematic analysis was undertaken by XL and GW for all narrative data using the protocol described by Braun and Clarke. 23 First, coding of data and identification of potential themes was conducted separately by XL and GW, then key themes were agreed on by consensus. XL, GW and PS defined and named the identified themes and highlighted representative quotes which supported each theme.

Results
Priorities of the scientific community
Review of the literature identified nine areas. These nine areas were reworded in plain English for patients: (1) risk factors for recurrence of infection, (2) how quickly the infection can be treated, (3) when there is failure to treat the infection, (4) developing tests to guide our treatment more effectively, (5) uncertainties about disease resistance to treatment, (6) the need for long-term treatment, (7) risk factors for developing infection, (8) impact of the disease on quality of life and (9) the frequency of hospital visits (table 1). All patients in the HSK Patient Participation Group were sent a survey based on these nine domains of research.

Priorities of patients
Fifty-six patients from five centres in the West Midlands agreed to take part in the survey. Participating patients ranged from mild to severe disease, with varying lengths of diagnosis. Forty per cent of participants were male and 60% were female. The mean age of participants was 57 (range 19-89) years. Socioeconomic background was categorised using the Index of Multiple Deprivation (IMD) as measured by postal code. Median IMD of the group was 6 (range 1-10). 24 Thirty-seven responses to the survey were received, of which six were completed via telephone. The weighted ranking score of patients for each area of interest is shown in figure 1. The highest rated domains were: risk factors for recurrence of infection (weighted ranking score 6.16), developing tests to guide treatment more effectively (5.57) and when there is failure to treat the infection (5.35). The domain ranked least important was the frequency of hospital visits (3.97).
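SurveyMonkey's standard weighted-average formula, referenced in the Methods, gives the largest weight to the highest rank (rank 1 of 9 carries weight 9, rank 9 carries weight 1) and divides the weighted sum by the number of respondents. A minimal Python sketch with invented response counts, not the study's raw data:

```python
# Weighted-average score for a ranking question, in the style of
# SurveyMonkey's formula: with 9 ranks, rank 1 carries weight 9 and rank 9 weight 1.
# The response counts below are invented for illustration.

def weighted_rank_score(counts_by_rank):
    """counts_by_rank[i] = number of respondents placing the domain at rank i+1."""
    n_ranks = len(counts_by_rank)
    total = sum(counts_by_rank)
    weighted = sum((n_ranks - rank) * count        # rank is 0-based here
                   for rank, count in enumerate(counts_by_rank))
    return weighted / total

# 37 hypothetical respondents ranking one domain across 9 positions:
counts = [10, 7, 5, 4, 3, 3, 2, 2, 1]
print(round(weighted_rank_score(counts), 2))       # -> 6.57
```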
Thematic analysis
Using qualitative research techniques and thematic analysis, we also identified three themes from the narrative data. Representative quotes from table 2, grouped into the three themes (controlling symptoms, access to the ophthalmologist and the need for more information; HSK, herpes simplex keratitis), include the following.

Theme 1 (controlling symptoms): 'I have to take drops every day for the last 4 years, it seems unlikely I will ever be free of them'; 'Worse bit is putting the drops in'; 'Need to get quicker pain relief'; 'Anything which would make treatment more effective especially drops, rather than ointment'.

Theme 2 (access to the ophthalmologist): 'Early identification for front line non-specialist professionals, for example, Opticians, general practitioners'; 'Direct access to ophthalmology department without need to involve the general practitioner'; 'The biggest failure in the system is the inability of general practitioners to recognise it and their misdiagnosis'; 'Hospital access by request should a worry arise unexpectedly'; 'Diagnosis needs to be a lot quicker-more specialists need to be assigned'.

Theme 3 (the need for more information). Questions we have answers to (patient education and modifiable risk factors): 'How can patients self-identify (recurrence of infection)?'; 'What are the risk factors and why?'; 'I've never explored or had it explained to me why I got the infection'; 'Are there any lifestyle factors that could be avoided to prevent recurrence?'; 'People need to ensure they are not causing or exacerbating risk factors.'; 'The possibility of resistance to treatment and how to deal with it should be analysed, and the information made available to patients.'; 'I would like to know how to advise others to prevent them suffering the same infection.'; 'I think it is also important to support the patient emotionally.'; 'Is it something you catch or is it something already in your system?'. Questions we are still looking into (setting the research agenda): 'Can it go from one eye to the other?'; 'If long term treatment will reduce a recurrence happening again?'; '(what is the) likelihood of complications, the effectiveness at preventing further recurrences and the necessary duration of treatment?'; 'What are the risks of long term treatment?'; 'What can be done to improve outcomes where there has been a late diagnosis and damage has been done?'; 'Why has (failure to treat the infection) taken place?'; 'How quickly does catastrophic blindness happen?'; 'Establishing whether prolonged treatment causes resistance and therefore optimal treatment duration.'

Theme 1: controlling symptoms
A prominent theme was difficulty in controlling symptoms (example quotes in table 2). In many cases, this translated to the need for frequent eye-drops. Acute exacerbations require topical antiviral and steroid therapy every few hours. In patients with complicating DED or high intraocular pressure (IOP), there may be further treatment with lubricating or IOP-lowering drops. Some patients describe the practical difficulties with frequent eye drops: 'I am on fourteen drops a day for the last year, it is difficult for me to keep up,' and some patients are dependent on others to administer their medication: 'I rely on my wife to put in my drops because my hands don't work well-I cannot even use a knife and fork.' Some symptoms persist despite treatment: 'The eye is never 80% comfortable even when well. Therefore, long-term treatment would be great.'

Theme 2: access to the specialist
Participants found it difficult to gain rapid access to the specialist (table 2). In the UK, patients may not have direct access to an ophthalmologist without referral from a family/general practitioner.
One patient commented that 'the time difference between a general practitioner referral and a consultation appointment is important, and pathology of the disease is not always fully understood at primary care level.' Others pointed out that 'diagnosis needs to be a lot quicker-more specialists need to be assigned,' and that HSK 'needs to be spotted by normal ophthalmologists.' It seems that some patients are experiencing significant delays to diagnosis and treatment, which has the potential to cause irreversible damage. One patient suggested developing 'a treatment pack which can be kept on standby by the patient for instant treatment of flare ups,' and others asked the question 'is self-diagnosis acceptable?'

Theme 3: the need for more information
In asking the patient for their priorities, we received many questions in return (table 2). We have categorised these into two groups: questions we have the answer to (which should be made widely available as patient information) and questions without clear answers (which should form the basis of setting the research agenda). Questions ranged from 'can it go from one eye to the other?' and 'what the risk factors are and why?' to more challenging ones such as 'why has (failure to treat the infection) taken place?' and 'what can be done to improve outcomes when there has been a late diagnosis and damage has been done?' Some patients pointed out the need for information early on: 'I've never had it explained to me why I got the infection' and 'how quickly does catastrophic blindness happen?' Many questions also centred around lifestyle changes and modifiable risk factors: 'Are there any lifestyle factors that could be avoided to prevent recurrence?' as well as ways in which patients can play a more active role in managing their disease: 'How can patients self-identify (recurrence of infection)?'

Discussion
With a large clinical and economic impact, significant research efforts are directed towards HSK, including the development of better diagnostic tools, treatment strategies and vaccination. [25][26][27][28] To our knowledge, this study is the first published report of patient priorities for HSK research. Through this qualitative exercise, we have begun to identify what is most important for the patient. Our literature search highlighted disparities between the research priorities of patients and the scientific community. The top priorities for research were: risk factors for recurrence of infection; developing tests to guide treatment more effectively and failure to treat the infection. Significant efforts have already been made to understand the viral and host factors influencing infection and reactivation, but this remains poorly understood. We know that certain triggers such as hormonal changes, fever, psychological stress and ultraviolet light exposure may induce reactivation, but the underlying mechanisms remain unclear. Improving diagnosis and monitoring with the use of novel imaging techniques has been an area of rapid growth in recent years and continues to expand. Newer techniques such as anterior segment optical coherence tomography and in vivo confocal microscopy, as well as automated imaging analysis platforms, are providing more accurate ways of visualising and quantifying disease. [28][29][30] Treatment failure results in irreversible scarring which may require corneal grafting to preserve vision. The complicating factor in HSK is that the insult of surgery itself may trigger viral reactivation and cause graft failure.
Research efforts have focused on understanding the mechanisms of graft rejection and the effectiveness of antiviral treatment to reduce the risk of graft failure. 31 Surprisingly, frequency of hospital visits was ranked lowest in priority. One patient commented that 'regular and predictable consultations not only provide reassurance, but a better chance of correct treatment,' while another asked 'would less time between visits help catch it earlier?' It is worth considering whether more routine reviews improve outcomes, or whether they are a source of reassurance for patients. This is especially important, as anxiety impairs QoL in patients with HSK even while in remission. 18 An unintended outcome of this study was the extent to which patients also used the survey to draw attention to their priorities for clinical care. They highlighted significant areas of unmet need, such as poor access to specialists during times of acute infection. Patients asked if it was acceptable to 'self-diagnose' acute infection and use 'rescue treatment packs' at home. In chronic obstructive pulmonary disease, National Institute for Health and Care Excellence guidelines recommend the use of rescue packs containing steroids and antibiotics kept at home for acute exacerbations. 32 A similar set-up in HSK may benefit patients in acute infection; however, we need strategies that enable them to do so safely. An area requiring attention is patient education. High-quality information provision is an intervention which has been shown to positively impact patients' experiences and health behaviours. As such, providing accessible information is now firmly embedded in health policy. 33 Currently, there is little in the way of published literature and digital resources for HSK. Efforts should concentrate on patient education that is effective, engaging and accessible for the wider population. This study represents the first open invitation for patients with HSK to express what is most important to them, as well as the first examination of whether current research is aligned with the priorities of patients. Despite being a leading cause of infectious corneal blindness, HSK did not feature in the top priorities ranked in the corneal and external diseases category of the Sight Loss and Vision Priority Setting Partnership Survey. From the disparity of research priorities demonstrated by our study, we feel that a more in-depth priority setting exercise for HSK patients would be beneficial in directing future research. Our study has several limitations. Our initial scope of the literature, which informed the nine domains of research interest, only included studies in the year preceding our survey. This was not intended to be an exhaustive review of the literature, but rather an indication of where research efforts within HSK were focused at the time. Nineteen patients who originally agreed to take part did not complete the survey and it is unclear whether the drop-out rate may have introduced bias to our findings. For this initial qualitative scoping exercise, we did not collect baseline characteristics of participants; therefore, further evaluation is required to determine the needs and priorities of different patient groups. Our study suggests that elderly patients have concerns specific to coping with the demands of frequent drops. With an ageing population, it is vital that clinicians consider the elderly patient's physical and cognitive limitations, and the feasibility of their management plans. 34
Furthermore, centre and patient variation in treatment regimen may exist, including, for example, the need to treat concurrent elevation in IOP, which may have exacerbated the need for drop frequency. More in-depth interviews among different patient groups are required to explore the full spectrum of patient priorities. Our geographical reach was limited to the West Midlands, UK, where guidelines and the structure of service provision differ from other areas. It is, however, an area of ethnic and socioeconomic diversity, with the largest non-white regional population outside of London. 35 Further investigation encompassing a wider cohort is needed. This study has highlighted three issues in our current approach to HSK. First, the research agenda has so far been set without published knowledge of what is important for the patient. This can lead to a disparity between the priorities of patients and the scientific community. We have identified several areas of key importance to patients for future research to bridge this gap. Second, there is a lack of easily accessible patient information on HSK. This hampers their ability to make informed decisions relating to their own clinical care and limits their ability to contribute towards research. Third, clinical service priorities are of equal importance to patients as research. Clinicians and researchers should be aware that a patient's urgent desire for better clinical care today may outweigh the uncertain benefits of research tomorrow, even if the latter might lead to a cure. To understand patients' true priorities, the enquiry must be open and unbiased. Our survey is the first attempt to engage patients with HSK in a dialogue about setting research priorities. Patients are eager to fulfil their role as key stakeholders for research; therefore, it is vital that patients are provided with a platform to voice their priorities and establish partnerships with researchers. We intend to expand our patient participation group for this purpose and continue to share our findings with the scientific and clinical community.

Contributors XL and GW: project conceptualisation, methodology, design, patient recruitment, data analysis and manuscript drafting. SK, PM, AP and MQ: responsible for clinical care of participants, patient recruitment and manuscript review. KS: patient representative, analysis of results and manuscript review. AKD and PS: study design, providing expertise in qualitative analysis, review of manuscript.
Funding XL and AKD receive funding from the Wellcome Trust (grant number 200141/Z/15/Z).
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Author note This paper is dedicated to the memory of our wonderful colleague, Vinette Cross, who recently passed away.
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
HF-EPR, Raman, UV/VIS Light Spectroscopic, and DFT Studies of the Ribonucleotide Reductase R2 Tyrosyl Radical from Epstein-Barr Virus

Epstein-Barr virus (EBV) belongs to the gamma subfamily of herpes viruses, among the most common pathogenic viruses in humans worldwide. The viral ribonucleotide reductase small subunit (RNR R2) is involved in the biosynthesis of nucleotides, the DNA precursors necessary for viral replication, and is an important drug target for EBV. RNR R2 generates a stable tyrosyl radical required for enzymatic turnover. Here, the electronic and magnetic properties of the tyrosyl radical in EBV R2 have been determined by X-band and high-field/high-frequency electron paramagnetic resonance (EPR) spectroscopy recorded at cryogenic temperatures. The radical exhibits an unusually low g1-tensor component at 2.0080, indicative of a positive charge in the vicinity of the radical. Consistent with these EPR results, a relatively high C-O stretching frequency associated with the phenoxyl radical (at 1508 cm−1) is observed with resonance Raman spectroscopy. In contrast to mouse R2, EBV R2 does not show a deuterium shift in the resonance Raman spectra. Thus, the presence of a water molecule as a hydrogen bond donor moiety could not be identified unequivocally. Theoretical simulations showed that a water molecule placed at a distance of 2.6 Å from the tyrosyl-oxygen does not result in a detectable deuterium shift in the calculated Raman spectra. UV/VIS light spectroscopic studies with metal chelators and tyrosyl radical scavengers are consistent with a more accessible dimetal binding/radical site and a lower affinity for Fe2+ in EBV R2 than in Escherichia coli R2. Comparison with previous studies of RNR R2s from mouse, bacteria, and herpes viruses demonstrates that finely tuned electronic properties of the radical exist within the same RNR R2 class Ia.

Introduction
Ribonucleotide reductase (RNR) catalyzes the conversion of ribonucleotides to the corresponding deoxyribonucleotides in all living organisms via a radical-based chemical mechanism, thereby providing and controlling the pool of precursors necessary for DNA synthesis and repair [1,2,3,4]. Based on differences in the generation of the free radical, the amino acid sequence, and the overall quaternary structure [1], RNRs can be grouped into three different classes (I, II, and III). Class I RNR, the most common class, found in almost all eukaryotic organisms, is oxygen dependent and consists of two subunits, designated R1 and R2 [2]. These can assemble into enzymatically active homodimeric tetramers (R1₂R2₂) and higher order oligomers [5,6,7]. Most R2s generate a stable tyrosyl radical in the vicinity of a dimetal-oxygen cluster [3,6]. This radical is transferred from the dimetal radical site in R2, over a distance of ca. 35 Å, to the active site of the R1 subunit, where it forms a thiyl radical. Its transfer is likely facilitated by a conserved array of hydrogen-bonded amino acids [2,8,9]. Herpes viruses are ubiquitous eukaryotic pathogens infecting a large variety of animal species. They share a similar architecture, with a double-stranded DNA genome encased within a proteinaceous cage. Many herpes viruses, including those of the α- and γ-subfamilies, encode an active RNR in their genomes. Most likely, viral DNA replication relies on de novo synthesis of deoxyribonucleotides by the viral RNR under conditions in which the host cell RNR is inactive, such as in non-dividing cells [10].
Herpes viral R2 subunits have limited amino acid sequence similarity compared with mammalian and bacterial homologues. Yet, previous studies of the RNRs from herpes simplex virus (HSV) 1 and 2 indicated a similar catalytic mechanism as in Escherichia coli class Ia RNR [11,12]. Epstein-Barr virus (EBV) is classified in the gamma subfamily of herpes viruses and represents one of the most common pathogenic viruses in humans worldwide. EBV R2 has low sequence similarity with mouse R2 (27.0% sequence identity and 53.7% sequence similarity, as calculated with the blosum62 matrix), human p53R2 (26.7% and 52.7%), herpes simplex virus (HSV) 1 (36.7% and 59.6%), HSV 2 (36.4% and 60.9%) and E. coli R2 (27.0% and 53.7%). Amino acid sequence alignments indicate that the equivalent amino acid for the conserved metal-coordinating aspartate in most R2s is a glutamate (Glu61 in EBV R2). The virus may induce the development of several diseases such as infectious mononucleosis, and is associated with neoplasms, including lymphomas and carcinomas [13]. In this context, the viral RNR is an important drug target for EBV [14,15]. Certain small organic molecules are potential inhibitors of herpes viral RNRs and thus can be employed for the treatment of herpes viral infections by targeting the R2 tyrosyl radical site. For example, hydroxyurea (HU), a potent reductant, diminishes the EBV genome in Burkitt's lymphoma cell lines in vitro, whereas prolonged exposure may lead to HU resistance in some cell lines [16]. Generally, an understanding of the specificity of inhibitors requires detailed knowledge of the accessibility, reactivity and the electronic and magnetic characteristics of the radical site [17]. The tyrosyl radical from several different R2s has been characterized, showing that the magnetic properties of the radical unit are strongly correlated with the dihedral angles of its tyrosyl β-methylene protons [18,19,20,21] (Figure 1). Applying high-field/high-frequency electron paramagnetic resonance (HF-EPR) allows measurement of the g-tensor anisotropy of the radical center with high accuracy. Previous studies of tyrosyl radicals demonstrated that the g-values are sensitive probes for the local electrostatic environment. HF-EPR measurements showed that there are clear differences in the g-tensor anisotropy between different RNR R2s from class Ia [22,23,24]. Such differences are assumed to arise from variations in the hydrogen bonding interaction with a nearby hydrogen, for both the mouse R2 radical and the Y_D radical from photosystem II (PS II) [22]. In this work the physicochemical properties of the tyrosyl radical from EBV R2 were characterized experimentally and theoretically with the use of X-band EPR, HF-EPR, resonance Raman (rRaman) spectroscopy and density functional theory (DFT). Further, we report ultraviolet/visible light (UV/VIS) spectroscopic experiments probing the dissociation of iron, in different oxidation states, and of the radical, with metal chelators and radical scavengers, respectively. We compare our results with previous observations for RNR R2s from mouse, bacteria, and other herpes viruses and discuss possible implications.

Results and Discussion
Low temperature EPR spectra of the EBV R2 tyrosyl radical at two different microwave frequencies
The free tyrosyl radical from metal-free EBV R2 reconstituted with Fe2+ has been studied in the temperature range T = 3-30 K at two EPR microwave frequencies, 9.6 GHz (X-band) and 285 GHz (high-frequency/field, HF-EPR).
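For orientation, the EPR resonance condition hν = g·μB·B fixes the field position of each g-tensor component at a given microwave frequency. The short Python sketch below, using the g-values reported in this work, illustrates why the anisotropy is resolved at 285 GHz but compressed into roughly 1 mT at X-band.

```python
# Resonance fields B = h*nu / (g * mu_B) for the EBV R2 g-tensor components
# at the two microwave frequencies used in this work.
h = 6.62607015e-34        # Planck constant, J s
mu_B = 9.2740100783e-24   # Bohr magneton, J/T

def resonance_field(nu_hz, g):
    return h * nu_hz / (g * mu_B)

for nu, label in [(9.6e9, "X-band (9.6 GHz)"), (285e9, "HF-EPR (285 GHz)")]:
    fields = [resonance_field(nu, g) for g in (2.0080, 2.0043, 2.0021)]
    spread_mT = (max(fields) - min(fields)) * 1e3
    print(f"{label}: B ≈ {fields[0]:.4f}-{fields[2]:.4f} T, "
          f"g1-g3 spread ≈ {spread_mT:.1f} mT")
```

At 9.6 GHz the three components span about 1 mT around 0.34 T, well within the composite linewidth, whereas at 285 GHz they spread over about 30 mT near 10.1 T.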
The formation of the tyrosyl radical in EBV R2 occurs with ~1 radical per R2 dimer, which is similar to our earlier studies on human and mouse p53R2 and mouse R2 under similar reconstitution conditions. These values are higher than those obtained for HSV 1 and HSV 2 R2, which were calculated to be 0.3-0.6 radical per R2 dimer [21,25,26,27,28,29]. The observed first-derivative EPR spectra are shown in Figure 2A and 2B, respectively. The X-band EPR envelope of EBV R2 (Figure 2A, Obs) is similar to the spectra observed for HSV 1 and mouse R2 as well as for HSV 2 R2 [12,23,24,27,29,30]. As expected, the anisotropic g-tensor components are poorly resolved at this microwave frequency. The HF-EPR (285 GHz) spectrum, together with g_iso from X-band, on the other hand clearly shows the g-tensor anisotropy, with values at g1 = 2.0080, g2 = 2.0043, and g3 = 2.0021 (Figure 2B, Obs). The spin concentration of the tyrosyl radical was determined as one spin equivalent per dimer, and remained constant in the temperature range examined. The observed total resonance width (ΔB ≈ 6.5 mT) is characterized by a composite resonance line, due to the presence of several anisotropic hyperfine splitting components (A), in analogy with previous analyses of the mouse and HSV 1 R2 spectra at 9.6 GHz. This phenomenon arises from the interaction of the unpaired spin on the phenolate with magnetically different hydrogen nuclei of the radical-carrying tyrosine backbone as well as the β1,2-H protons (Figure 1 and Figure 3). The EPR signal lineshape and resonance features depend on the rotational configuration of the tyrosyl ring. Therefore, spectral simulations that fit both the X-band and high-field EPR envelopes (Figure 2A and 2B, Sim), followed by comparison with values from the literature, allowed an estimate for the rotational angle θ of ~30° (see Scheme 1 for the definition of the rotational angle θ, and the DFT-generated spin density at UB3LYP/6-311++G** of the radical model in Figure 3). According to previous theoretical DFT studies on model tyrosyl radical systems, such an angle falls at a local minimum in the potential energy curve described by the rotational motion of the phenoxyl plane with respect to the α,β-carbons [18]. The simulation parameters used are collected in Table 1, together with previously determined g, A and θ values for the tyrosyl radical from E. coli R2 [23,31], mouse R2 [23,24,27,32,33], HSV 1 R2 [23,24,27,32], Salmonella typhimurium R2 [34] and PS II Y_D [22,34], included for comparison.
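The θ ≈ 30° estimate rests on the McConnell-type cos² dependence of the β-methylene proton hyperfine couplings on the ring rotation angle. The sketch below uses typical literature constants for tyrosyl radicals (B0 ≈ 0, B2 ≈ 5.8 mT, C1 spin density ρ ≈ 0.38); these are assumed illustrative values, not parameters fitted to EBV R2, and the θ and θ − 120° convention for the two β protons is one common choice.

```python
import math

# McConnell-type relation for beta-methylene proton hyperfine couplings:
#   A_iso = rho_C1 * (B0 + B2 * cos^2(theta))
# B0 ~ 0 and B2 ~ 5.8 mT are typical literature values for tyrosyl radicals,
# and rho_C1 ~ 0.38 is a typical C1 spin density; all three are assumptions here.
B0, B2, rho_C1 = 0.0, 5.8, 0.38

def a_iso(theta_deg):
    return rho_C1 * (B0 + B2 * math.cos(math.radians(theta_deg)) ** 2)

theta = 30.0  # ring rotation angle estimated for EBV R2
for label, ang in [("beta-1", theta), ("beta-2", theta - 120.0)]:
    print(f"{label}: theta = {ang:6.1f} deg -> A_iso ≈ {a_iso(ang):.2f} mT")
```

With θ = 30°, one β proton carries a large coupling (~1.7 mT) while the other is nearly silent, which is why the composite X-band envelope is dominated by a single large β-proton splitting.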
Previous work demonstrated that measurement of the g-value anisotropy can be used as a probe for the presence of a hydrogen bond to the phenol oxygen of the tyrosyl radical. The absence of a hydrogen bond, as occurring for the tyrosyl radical in E. coli R2, results in a large g1-value of ca. 2.0090 [22,23,31]. In contrast, when a positive charge is located in close vicinity such that a hydrogen bond is formed with the tyrosyl phenol oxygen, the spin density on the phenolic moiety is lowered proportionally (Figure 3B), resulting in smaller g1-values [18,20,22,23,32,33,35,36]. In the crystal structure of the diferric (Fe3+-O2−-Fe3+) form of mouse R2 without a tyrosyl radical, two water molecules are present in close proximity of the tyrosyl radical site [37]. Furthermore, in the Y107• radical site of the Y122F/W107Y double mutant of E. coli RNR R2, the radical forms a strong H-bond and displays a low g1-value (g1 = 2.0069 [23]). In the latter case, one of the water molecules in the crystal structures is located substantially closer to the tyrosyl radical than found in mouse R2 [23,38]. The g1-value of p53R2 [39] has not been reported, but is indicated in chapter 1 of reference [2] to be low (it is possible that the diiron center shows a mixed valence signal). Therefore, the perturbation of the g-tensor values of the tyrosyl radical in R2s usually originates from the local electrostatic environment [40]. The energies of the excited states associated with the radical unit, which are characterized by the nonbonding molecular orbitals of the phenoxyl oxygen, can be modified in such a way that spin-orbit mixing occurs between low-lying excited states and the ground state, leading to alterations in the g-anisotropy. The sensitivity of the g1-values of the tyrosyl radical to the local chemical environment must also be reflected in electrochromic shifts in infrared (IR), Raman (see Figure 3) and rRaman spectra. The observed g1-value of the tyrosyl radical in EBV R2 is lower than that of E. coli R2, but slightly higher than those of both mouse R2 and HSV 1 R2. Therefore, in EBV R2 spin-mixing and a hydrogen bond to the tyrosyl radical can be present. If a hydrogen bond is present, it must be weaker than those observed in mouse and HSV 1 R2. However, the molecular basis of such an interaction cannot be determined by simple continuous-wave EPR experiments alone (vide infra). Davies ²H (D) electron nuclear double resonance (ENDOR) studies of mouse and HSV 1 R2 demonstrated that, while both display identical g1 values, the distance (d) between the radical and the exchangeable H/D (probably originating from a water molecule) is slightly shorter in HSV 1 R2 (d = 1.86 Å) than in mouse R2 (d = 1.89 Å) [27]. Thus, if a similar interaction were present in EBV R2, the expected distance to the hydrogen should be slightly larger (1.9 Å ≤ d ≤ 2.6 Å). Moieties other than an exchangeable hydrogen from a water or hydroxide molecule, such as amino acid side-chains located in proximity of the tyrosyl radical site, could also contribute to the decrease of the g1-component. As found for example in PS II Y_D (D2-Tyr160 from the cyanobacterium Synechocystis), the low g1-value of the tyrosyl radical arises from positive charges located close to the phenol-oxygen (see Table 1). In this case, the hydrogen bonding interaction is not formed with a water or hydroxide molecule, but with a neighbouring histidine residue. Its imidazole moiety can serve as an acceptor for the phenolic proton upon oxidation, forming a hydrogen bond to the phenolic oxygen of the neutral tyrosyl radical, as demonstrated by combined mutagenesis studies and Mims electron spin echo ENDOR measurements [41]. Thus, any shift of the g1-tensor component derived from HF-EPR measurements must be analyzed with caution as an indication for the presence of a water or hydroxide molecule in close proximity of the tyrosyl radical. EPR relaxation measurements allow closer investigation of the chemical environment of the tyrosyl radical through observation of the progressive microwave power saturation properties of the EPR spectra in the temperature range T = 26-100 K (Figure 4). When an effective magnetic coupling occurs between the radical and the diferric metal-oxygen cluster, the radical relaxation properties, as indexed by the half-saturation value (P1/2), become larger than those observed in the absence of magnetic coupling.
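P1/2 is commonly extracted by fitting the progressive power saturation curve S(P) = K·√P/(1 + P/P1/2)^(b/2), where b is an inhomogeneity parameter. A minimal sketch of such a fit on synthetic data (the data points are generated, not measured; only the functional form is taken from standard saturation analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

# Progressive microwave power saturation: the EPR signal amplitude S follows
#   S(P) = K * sqrt(P) / (1 + P/P_half)**(b/2)
# where P_half is the half-saturation power and b the inhomogeneity parameter.
# The data points below are synthetic, generated only to illustrate the fit.

def saturation(P, K, P_half, b):
    return K * np.sqrt(P) / (1.0 + P / P_half) ** (b / 2.0)

P = np.logspace(-3, 1, 15)                       # microwave power, mW
S = saturation(P, 1.0, 0.14, 1.3)                # noiseless synthetic signal
S_noisy = S * (1 + 0.02 * np.random.default_rng(0).standard_normal(P.size))

popt, _ = curve_fit(saturation, P, S_noisy, p0=[1.0, 0.1, 1.0])
print(f"fitted P_1/2 ≈ {popt[1]:.3f} mW")        # ~0.14 mW, as reported for EBV R2
```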
Compared with previous studies on mouse and HSV 1 R2 (P1/2 = 1.2 mW and P1/2 = 3 mW at 30 K, respectively), the resonance saturation for the EBV R2 radical occurs at a lower applied microwave power (P1/2 = 0.14 mW), though higher than for E. coli R2 (P1/2 = 0.05 mW) [30,42]. Thus, the magnetic interaction between the iron-oxygen cluster and the tyrosyl radical must be weaker than that estimated for both mouse and HSV 1 R2. The presence of a glutamate in EBV R2 (Glu61) at the amino acid sequence position of the common iron-coordinating aspartate in HSV 1 and mouse R2 may contribute to these observed differences. As for E. coli R2 and S. typhimurium R2, the P1/2 value derived here for EBV R2 is larger than that observed for a free tyrosyl radical lacking any magnetic interaction with another paramagnetic center [42]. For EBV R2 we did not succeed in trapping the mixed-valence (Fe2+Fe3+) complex or in detecting integer-spin signals (with g-values in the range 8 ≤ g ≤ 16) of the diferrous site with the chemical mediators employed in this study. This electronic characteristic distinguishes EBV R2 from mouse R2, where both signals can be detected, and from HSV 1 R2, which exhibits the mixed-valence signal under mildly reducing conditions [43,44]. The mouse R2 integer-spin EPR signal can be explained by the relatively small zero-field splitting parameters, designated D or d, which make it observable by X-band EPR [9,45]. Thus, there are clear differences in the electronic characteristics of the tyrosyl radical site in EBV R2 compared with previously studied R2s [21,27].
Resonance Raman spectroscopic studies of the EBV R2 tyrosyl radical
In order to investigate in more detail the basis for the low g1-value, rRaman spectroscopy was employed. (Figure 2 caption: (A) The upper spectrum (Obs) shows the 9.6 GHz (X-band) resonance envelope recorded at T = 26 K with a microwave power of 0.08 mW and a modulation amplitude of 0.4 mT; the lower spectrum shows its computer simulation (Sim). (B) The upper spectrum (Obs) shows the 285 GHz resonance envelope recorded at T = 15 K with a modulation amplitude of 0.4 mT and microwave power in the mW range; the lower spectrum shows its computer simulation (Sim). The derived spin-Hamiltonian parameters are listed in Table 1. doi:10.1371/journal.pone.0025022.g002) rRaman and IR spectroscopy are excellent complements to HF-EPR spectroscopy, because even weak interactions between the tyrosyl radical and molecules in its close proximity can be detected. Tyrosyl radicals show a characteristic light absorption maximum at ca. 410 nm (see below) that is usually used for excitation in rRaman spectroscopy of the radical. Vibrational spectroscopy can identify redox-linked structural changes associated with electron transfer reactions, as shown for the tyrosyl radical (Y122•) in E. coli RNR, where such an electron transfer reaction has been shown to be coupled to a conformational change in the R2 structure [46]. One vibrational characteristic of the radical, the phenoxyl ν7a mode (Wilson notation), with components of the C4-O stretching vibration (Figure 5), is a sensitive marker for hydrogen bonds in this context. This vibrational mode is strongly enhanced when rRaman is employed at excitation frequencies νex around 405-415 nm, i.e. very close to the light absorption maximum of the tyrosyl radical.
Without a nearby hydrogen-bonding interaction, the ν7a vibration of the tyrosyl radical is observed between 1497 and 1501 cm^-1 in E. coli R2, whereas in mouse R2 it is observed at 1515 cm^-1 [47]. In mouse R2 a water molecule with exchangeable hydrogens is present in close proximity (O_phenol···H distance of 1.89 Å), as demonstrated by pulsed ENDOR [27]. As shown in Figure 5 (lower trace), the EBV R2 tyrosyl radical displays a C4-O stretching vibration at slightly higher energy than that observed in E. coli R2, with the ν7a maximum at 1508 cm^-1, which is at lower energy than that observed for mouse R2. Our rRaman results are therefore consistent with our HF-EPR data on EBV R2, with the tyrosyl radical displaying a g1-value between those observed for mouse and E. coli R2. The observation of such an intermediate-frequency mode in EBV R2 is, in principle, supportive of a hydrogen-bonded tyrosyl radical with a water molecule as hydrogen bond donor. If present, such a hydrogen bond is weaker than in mouse R2.
Theoretical analyses of the influence of a hydrogen bond to a water molecule on the tyrosyl radical mode frequency and EPR spectra
In order to test the impact of a water molecule in the proximity of the tyrosyl radical on the Raman spectra, DFT calculations were performed using a simplified model, the p-ethylphenoxyl radical, which has previously been demonstrated to be a good model system [36]. The molecular backbone of the radical unit in EBV R2 is characterized by a rotational configuration of the tyrosyl ring, with η constrained at 30° as derived from our EPR results. Such analysis gives an estimate of the trends in the C4-O stretching vibration (B3LYP/6-31G(d,p)) in the presence or absence of a hydrogen-bonded water molecule. The calculated Raman spectra are shown in Figure 6; the ν7a frequency obtained in the absence of water (Figure 6A) is identical to that obtained for the tyrosyl radical by Johnson and coworkers [48]. The calculated ν7a vibration in this simplified model closely agrees with the Fourier-transform IR analyses of the tyrosinate and tyrosyl radical by Barry and coworkers (tyrosyl radical, ν7a = 1516 cm^-1), who explicitly considered the effect of an amino acid backbone in their theoretical calculations (UB3LYP/6-31++G(d,p)), as well as with our calculations using the entire tyrosyl radical backbone computed at a higher level of theory (UB3LYP/6-311++G(d,p), ν7a = 1496 cm^-1, Figure 3) [49]. The value does not differ substantially from results obtained using the p-methoxyphenoxyl radical as a model system at a different level of theory (BPW91/6-31G**, ν7a = 1496 cm^-1) [50]. When a water molecule is placed at a distance of 1.93 Å in the phenoxyl plane (ω = 2.6° after optimization), the ν7a vibration shifts significantly to higher wavenumber, 1529 cm^-1 (Figure 6B), but decreases slightly to 1526 cm^-1 (Figure 6C) if the water molecule is placed outside the phenoxyl plane (ω ≈ 42° after optimization). When the 30° constraint on the rotational configuration of the tyrosyl ring is lifted, this angle refines upon optimization to slightly less than 90° (η = 89.1°) [18]. Here a water molecule placed in the phenoxyl plane at the same distance of 1.93 Å gives a similar ν7a shift, to 1529 cm^-1 (Figure 6D), but a larger decrease, to 1519 cm^-1 (Figure 6E), when it is moved out of the phenoxyl plane (ω ≈ 42° after optimization).
Upon placement of the water molecule further away (2.60 Å) (Figure 6F and 6G), the perturbation of the ν7a vibration is reduced proportionally (ν7a = 1518 cm^-1 and 1519 cm^-1, respectively; Δν = 8-9 cm^-1). This is accompanied by an increase of the phenoxyl (-O•) Mulliken spin density together with a decrease in the C4-O bond length: both properties tend toward the structural and electronic parameters calculated without nearby water. By placing a positive charge (Li+) close to the phenoxyl oxygen (O•-Li+ = 2.60 Å, in the phenoxyl plane, ω ≈ 0°), the perturbation of the ν7a vibration becomes very large by comparison (Figure 6I, ν7a = 1554 cm^-1) (see also Information S1). The C4-O stretching frequency (ν7a vibration) is characterized experimentally by a narrow bandwidth of approximately 5 cm^-1, as shown in Figure 5. This vibration is thus expected to be influenced by i) the angle between the phenol plane and the vector from a hydrogen-bonding neighboring molecule to the phenol oxygen, ii) the distance of the hydrogen-bonded molecule from the phenol oxygen, and iii) the conformational orientation of the phenoxyl group with respect to the protein backbone, represented here by a simple ethyl group. The impact of molecular conformation on the calculated g-tensor and hyperfine tensor (A_H) components, and their further modulation by dielectric effects associated with the medium, has been discussed thoroughly in other reports and is not further addressed here [35,36,50,51,52].
Basis for the lack of deuterium shift in EBV R2 rRaman spectra
In order to probe the presence of a water molecule with exchangeable hydrogens located in proximity to the tyrosyl radical in EBV R2, we experimentally investigated the frequency shift of the tyrosyl C4-O rRaman stretching vibration in deuterated EBV R2. In mouse R2, deuterium exchange of the protein clearly results in a 5 cm^-1 shift to a lower energy of 1510 cm^-1 compared with spectra of the non-deuterated protein; this effect was not observed in E. coli R2 [47]. In the former, exchangeable hydrogens from a water molecule are located in proximity to the phenol plane, as detected by ENDOR. Interestingly, in EBV R2 a similar deuterium-induced shift in the rRaman spectrum was not observed (Figure 5, upper trace). Yet this result does not preclude the presence of a water molecule as hydrogen bond donor in close vicinity of the tyrosyl radical, for the following reasons: i) assuming a buried location of the water in the protein interior, its hydrogens may not be readily exchangeable with deuterium; ii) the distance from an immobilized water molecule to the tyrosyl residue could be too large to be detectable in our rRaman measurements. This scenario would be consistent with the theoretical Raman frequency shift calculated with a water molecule close to the phenoxyl plane at a distance of 2.60 Å from the radical site (Figure 6G); in this case any deuterium shift is expected to be extremely small (<2 cm^-1); iii) the absence of a deuterium shift could also arise from interaction with a positive charge located close (2.60 Å) to the tyrosyl radical site. However, the theoretically calculated effects on the phenoxyl oxygen spin density and the ν7a vibration, using a Li+ placed at 2.60 Å from the phenoxyl oxygen to mimic such a nearby charged group, are too large (ν7a = 1554 cm^-1, Figure 6I); thus, the latter scenario seems unlikely; iv)
the most probable explanation is a water molecule as hydrogen bond donor in EBV R2, with overall structural relations to the tyrosyl radical similar to those in HSV 1 R2. This arrangement is characterized by a water-to-tyrosyl-radical distance of >2.5 Å together with a large angle (ω) between the phenoxyl plane and the C4-O_phen···H_water plane (Figure 6H; calculated ν7a = 1520 cm^-1 and hydrogen bond C4-O···H distance of 2.60 Å). This structural organization would suitably explain the lack of an observable deuterium shift for EBV R2.
Preliminary observations of the interaction of EBV R2 with radical scavengers and metal-ion chelators probed by UV/VIS spectroscopy
Certain radical scavengers, such as hydroxyurea (HU, long used in the clinical treatment of some cancers), function as potent inhibitors of RNR activity and are attractive drug candidates [2,53]. HU reduces the tyrosyl radical, and large differences in HU reactivity have been documented between E. coli R2 on the one hand and mouse R2 and HSV 2 R2 on the other. The tyrosyl radicals of mouse R2 and HSV 2 R2 are more reactive and are scavenged substantially faster than that of E. coli R2. The radical quenching also proceeds more rapidly with some hydrophobic radical scavengers, such as p-alkoxyphenol compounds, than with HU [28,54,55]. Upon reconstitution of the dimetal site of metal-free EBV R2 with Fe2+ and exposure to air, relatively strong and characteristic light absorption bands are formed, with distinctive features for the diferric center and the tyrosyl radical apparent at ~320 and 360 nm and at ~410 nm, respectively (Figure 7, light grey line). The strong 320-360 nm bands originate from oxo-to-Fe3+ charge-transfer (CT) transitions of the dimer. When the diferric, active EBV R2 protein was incubated under aerobic conditions with HU, quenching of the tyrosyl radical signal occurred slowly at T = 277 K, with total radical quenching after ~25 min. The quenching rate was substantially increased at higher temperature (T = 293 K), where the absorption signal associated with the tyrosyl radical disappeared after 7 min. In mouse R2 the reaction of the diferric center with HU leads to reduction of Fe3+ followed by rapid dissociation of Fe2+ from the protein [56]. For EBV R2, such an effect does not appear prominent (Figure 8). Thus, the reductive ability of HU towards the diferric center in EBV R2 is intermediate between those observed for mouse R2 (very fast) and E. coli R2 (ineffective) [55]. In order to obtain preliminary insight into the dynamics of this reaction in EBV R2, the reconstituted, active tyrosyl-radical-containing ferric protein was incubated under anaerobic conditions in the presence of HU and bathophenanthroline sulphonate, a potent Fe2+ chelator. Fe2+ complexation by bathophenanthroline sulphonate was monitored through the formation of an absorption band at λmax = 535 nm, as described previously [56,57]. The Fe2+ complex formed more slowly (Figure 8A), in parallel with a decrease in the absorption band at 410 nm associated with tyrosyl radical quenching. To examine this further, an analogous experiment was carried out with HU and desferrioxamine, which chelates Fe3+ with formation of a typical light absorption at ~430 nm [53]. In contrast to mouse R2, the reaction with EBV R2 did not result in this characteristic absorption after 3 hrs (data not shown).
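To make such comparisons of quenching rates quantitative, a trace of the 410 nm band can be fitted to an exponential decay. The sketch below is illustrative only: the trace is synthetic, chosen merely to mimic complete quenching in ~25 min at 277 K, and the assumption of pseudo-first-order kinetics is ours, not a result derived from the data.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, A0, k, A_inf):
    # Pseudo-first-order loss of the tyrosyl-radical band at 410 nm.
    return A_inf + (A0 - A_inf) * np.exp(-k * t)

t = np.linspace(0.0, 25.0, 26)                  # time (min)
rng = np.random.default_rng(1)
A410 = decay(t, 0.12, 0.18, 0.01) + 0.002 * rng.standard_normal(t.size)

popt, _ = curve_fit(decay, t, A410, p0=(0.1, 0.1, 0.0))
print("k_obs = %.3f min^-1, t_1/2 = %.1f min" % (popt[1], np.log(2) / popt[1]))
```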
From these preliminary results we conclude that iron dissociated from EBV R2 in the presence of HU is predominantly in the Fe2+ state. This implies that HU reacts with both the radical and Fe3+ at the diiron site in the protein, as probably also occurs in mouse R2, and that Fe2+ is more weakly bound than Fe3+ in the dimetal site (Table 2) [54,58]. The mobilization of Fe3+ from EBV R2 by catechol, both a potent metal chelator and a radical scavenger [28], was followed spectrophotometrically. The smaller catecholate compound is a more effective Fe3+ chelator than the larger desferrioxamine, as it reacts with EBV R2 without any addition of HU. The reaction resulted in a rapid decline of the absorbance at both 360 nm and 410 nm (Figure 8B), together with the formation of blue-violet catecholate-to-Fe3+ CT transitions around 600 nm. Thus, in these experiments with EBV R2, scavenging of the tyrosyl radical occurs concomitantly with dissociation of the diferric center, similar to the analysis for HSV 2 R2 [28]. In contrast to E. coli R2, in EBV R2 catechol appears to react with the diferric center. The data imply that the reactivity of the tyrosyl radical with catechol in EBV R2 is substantially higher than for E. coli R2 and very similar to that of HSV 2 R2 (Table 2).
Conclusion
Through a combination of EPR, HF-EPR, UV/VIS and rRaman spectroscopy, the electromagnetic characteristics of a spectroscopically unique tyrosyl radical from EBV R2 have been determined. The g1-value from EPR and HF-EPR and the rRaman shift indicate that the tyrosyl radical is hydrogen bonded, yet deuterium exchange of the protein had no effect on the rRaman spectra. Therefore, the g1-value and rRaman results cannot be explained by a hydrogen bond with structural properties identical to that found in mouse R2. If a hydrogen bond to the tyrosyl radical is present in EBV R2, our spectroscopic data and DFT analysis indicate that it has characteristics more similar to those in HSV 1 R2, with a hydrogen outside the phenoxyl plane. In contrast to mouse and HSV 1 R2, where evidence for a hydrogen-bonded water molecule with exchangeable hydrogens has been observed, we could not clearly identify a water molecule as the hydrogen bond donor for the tyrosyl radical in EBV R2. The observation of a spectroscopically distinct tyrosyl radical in EBV R2 is a further indication of the variation among tyrosyl radicals of class Ia R2s in particular and in proteins in general. The preliminary UV/VIS spectroscopic studies carried out on EBV R2 showed that iron is most likely more weakly bound than in E. coli R2, but more strongly than in mouse R2. Differences can also be seen in the impact of the radical scavenger HU, which has a reduced effect on the diferric center compared with mouse R2, where the reaction leads to reduction of the diferric center and dissociation of Fe2+. Our findings indicate that the EBV R2 protein has evolved a different kind of tyrosyl radical site than that of the seemingly simpler E. coli R2.
Materials and Methods
All high-grade chemicals were purchased from Sigma or Fluka unless stated otherwise.
Protein expression and purification
The BaRF1 gene (accession number YP_401656; encoding full-length EBV R2) was cloned from a B95-8-derived bacterial artificial chromosome by recombinatorial cloning (Gateway™, Invitrogen) in the laboratory of Prof. Jürgen Haas (Max von Pettenkofer Institut, Gene Center, Ludwig-Maximilians University, Munich).
BaRF1 was then subcloned with the LR recombination reaction into a pTH27 plasmid carrying a coding sequence for an N-terminal polyhistidine tag (one-letter amino acid sequence: MGPHHHHHHLESTSLYKKA-GSA), using the Gateway system (Invitrogen) according to the manufacturer's instructions. E. coli strain BL21 (DE3) (Stratagene) was transformed with the product of the LR reaction, and small-scale soluble protein expression was verified as described [59,60]. DNA sequencing (MWG Biotech, Germany) used the T7 forward and reverse promoter sequences in the plasmid as primers. Recombinant EBV R2 was produced in BL21 (DE3) E. coli cells in phosphate-buffered terrific broth (Formedium) supplemented with carbenicillin (Duchefa; final concentration 50 µg/ml). Cells were grown at 37 °C to an OD600 of 1-1.3. Protein expression was induced by addition of isopropyl-β-D-thiogalactopyranoside (Anatrace; final concentration 0.2 mM), and the cells were grown for ~16 hrs at 18 °C. The cells were harvested by centrifugation at 6500 rpm for 40 min. The cell pellet was resuspended in a buffer of 50 mM Na2HPO4/NaH2PO4 (Scharlau), pH 8.0, 500 mM NaCl (Scharlau), 10% (v/v) glycerol (Saveen Werner), 0.5 mM tris(2-carboxyethyl)phosphine (TCEP) (buffer A) containing 10 mM imidazole, 0.08 mg/ml lysozyme, 10 units/ml recombinant DNase I (Roche), and ethylenediaminetetraacetic acid (EDTA)-free protease inhibitor tablets (Roche) (1 tablet/50 ml). Resuspended pellets were stored at 253 K until use. After sonication on an ice bath, cell debris was removed by centrifugation at 65,000 rpm at 5 °C and subsequent filtration (0.20 µm filter, Millipore) on ice. The soluble extract was passed over a 5 ml HisTrap FF Crude column (nickel sepharose 6 fast flow resin; GE Healthcare) at a flow rate of ~10 ml/min, at 5 °C, on an ÄKTA Express purification system (GE Healthcare). The column was washed with 10 column volumes of buffer A containing 10 mM imidazole, followed by 10 column volumes of buffer A containing 70 mM imidazole. EBV R2 protein was eluted with buffer A containing 500 mM imidazole, with protease inhibitor tablets (1 tablet/50 ml). The combined elution fractions (volume ca. 10 ml) were gel-filtered on a Superdex S-200 HiLoad 16/60 column (GE Healthcare) on an ÄKTA Express system at 5 °C in a buffer of 20 mM Hepes, pH 7.5, 300 mM NaCl, 10% (v/v) glycerol (buffer B) with 0.5 mM TCEP. Protein samples were concentrated with a 10 kDa molecular-weight cut-off concentrator (Amicon Ultra-15, Millipore). Metal-free EBV R2 was purified analogously in buffer B with 0.5 mM TCEP and 25-30 mM EDTA. The gel-filtered protein samples were incubated for 15-20 hrs in this solution, then concentrated to ~2-3 ml and dialysed (Slide-A-Lyzer, 7 kDa molecular-weight cut-off, Pierce) twice against 5 l of buffer B at 5 °C.
Sample preparation
Protein samples of metal-free EBV R2 were concentrated to ca. 1 mM, with concentrations determined using the theoretically calculated absorption coefficient of EBV R2 at 280 nm [61]. Reconstitution of the dimetal-oxygen cluster and generation of the tyrosyl radical site in EBV R2 were performed by addition of Fe2+ ((NH4)2Fe(SO4)2·6H2O, in a ca. 7:1 molar ratio to the R2 homodimer) in the presence of naturally present O2. All samples were incubated at 273 K for 10 minutes. The final volume of the EPR samples was ~200 µL in 50 mM HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid), pH 7.5, 100 mM KCl, 20% (v/v) glycerol.
Samples of active EBV R2 for resonance Raman spectroscopy were prepared in the same way as for the EPR measurements, except that the protein was first transferred to a buffer of 50 mM Tris (tris(hydroxymethyl)aminomethane)-HCl, pH 7.5, 100 mM KCl. Samples of active EBV R2 in deuterated buffer (50 mM Tris-HCl, 100 mM KCl, pD = 7.9) for resonance Raman spectroscopy were prepared by deuterating the metal-free protein with D2O (99.9% D, Cambridge Isotope Laboratories) through repeated dilution and re-concentration in this buffer using Amicon Ultra-15 concentrators (Millipore), before reconstituting the diiron-oxygen cluster as described above. The diferrous R2 samples were prepared under anaerobic conditions in air-tight vessels by several rounds of vacuum and argon exchange using the Schlenk technique. 200 µL of apo EBV R2 (150 µM) was reduced with 2 µL of 10 mM dithionite and 2 µL of 5 mM methyl viologen (reductant mediator). 5 µL of ferrous solution (42 mM) was added to fully reconstitute the dimetal site, followed by incubation for 10 min at 273 K. The samples were finally transferred to anaerobic EPR tubes. Samples of EBV R2 for measurement of the mixed-valence signal were prepared as described [43], both anaerobically and aerobically; 1-2 mM dithionite was used as reductant and 2 mM phenazine methosulfate as reductant mediator.
EPR experiments
EPR spectra were recorded at X-band on a Bruker Elexsys 560 EPR spectrometer fitted with a Bruker ER4116DM dual-mode cavity. All EPR samples contained 20% (v/v) glycerol for vitrification during the low-temperature recordings. EPR signals were measured at different microwave powers to avoid microwave power saturation and were quantified by comparing double integrals of spectra with a standard of 1 mM Cu2+-EDTA in 50 mM HEPES, pH 7.5, 20% (v/v) glycerol; all spectra used for quantification were measured under identical, non-saturating microwave power. First-derivative EPR spectra were recorded at different microwave powers (P) and at various temperatures (see graphs) to determine the microwave power at half saturation (P1/2) for each temperature. The data were fit with the function S/√P = C/[1 + (P/P1/2)]^(b/2), where S denotes the double-integrated intensity of the EPR signal and b is a parameter related to the type of relaxation: b equals 1 for completely inhomogeneous relaxation and 3 for entirely homogeneous relaxation. The simulated EPR spectra (at X-band and at 285 GHz) were computed with the program SIM, written by Weihe, in order to extract numerical values of the spin-Hamiltonian parameters from the experimental EPR spectra [62,63].
High-field EPR measurements
The low-temperature 285 GHz spectra were obtained with a 95 GHz Gunn oscillator (Radiometer Physics, Germany) coupled to a frequency tripler as the frequency source, and a superconducting magnet with a maximum field of 12 T at 4.2 K (Cryogenics Consultant, UK) for the main magnetic field.
The detection of light transmitted through the sample was performed with a 'hot electron' InSb bolometer as described [32,34].
Resonance Raman spectroscopy
A three-stage laser system was employed as the light source for the Raman spectroscopy: a Spectra-Physics Millennia Pro 12sJS Nd:YVO4 solid-state laser (6.5 W at 532 nm) pumped a Sirah Matisse TR Ti:Sa ring laser producing 1 W at 820 nm, which was frequency-doubled to 410 nm using a Spectra-Physics Wavetrain doubler (550-990 nm), yielding 20 mW of laser light with a line width <4 MHz. The applied power of the 410 nm laser at the sample was ~5 mW. For the Raman measurements, a Jobin Yvon Horiba T64000 instrument, operated as a single spectrograph and equipped with a 410 nm Kaiser Optical holographic Super-Notch filter, was used in order to minimize loss of the Raman light. The narrow bandwidth and the small shifts of the Raman peaks investigated required an entrance slit width of 100 µm and a grating of 3600 grooves/mm. Sixty scans of 60 seconds were averaged for each Raman spectrum. Data reproducibility was evaluated by triplicate measurements on protein samples from different preparations. The samples, with a volume of ca. 50 µL, were kept in EPR tubes cooled with liquid N2 in a quartz cold-finger EPR cryostat. The fluorescence background that inevitably occurred was subtracted by fitting with a polynomial function, and the frequency scale was calibrated using 4-acetamidophenol. Under these settings the accuracy of the Raman peak frequencies was within an error range of ±1.1 cm^-1. The reference peak values were obtained from published tables [64].
Computational procedures
The theoretical modeling of the tyrosyl radical was carried out by density functional theory (DFT) in the gas phase using a simplified system, the p-ethylphenoxyl radical (neutral form), with the unrestricted B3LYP functional (exchange: 0.2000 Hartree-Fock, 0.0800 Slater and 0.7200 Becke; correlation: 0.8100 LYP and 0.1900 VWN1RPA), the Euler-Maclaurin-Lebedev (70,302) grid and the 6-31G(d,p) basis set, as implemented in the computational package Spartan 08/10. The molecular structures in the presence and absence of an H-bonded water molecule were fully optimized (root-mean-square gradient below 10^-7), with and without the C6-C1-Cα-Cβ torsional angle constraint (η), followed by frequency calculations in order to derive vibrational frequencies and (unscaled) Raman scattering activities. More accurate spin densities, used for the estimation of the hyperfine A_H tensor components in EBV R2, were derived using the entire tyrosyl radical, without and with a water molecule located at 2.60 Å from the tyrosyl oxygen, at the UB3LYP/6-311++G(d,p) level, complemented by Raman frequency calculations at the same level of theory. Details are provided in Information S1 and the Methods section.
UV/VIS light spectrophotometric assays
Light absorption spectra of purified histidine-tagged EBV R2 were measured on a Hewlett-Packard 8452 diode-array spectrophotometer in the wavelength range 250-700 nm using a thermostated water bath. Experiments under anaerobic conditions were carried out in cuvettes capped with a rubber septum that had been deaerated with argon for ≥2 hrs. Additions of deaerated solutions of EBV R2 and radical scavengers were made through the septum with gas-tight syringes purged with deoxygenated water.
Supporting Information
Information S1 Theoretical DFT Analyses. (DOC)
Combining Globally Rigid Frameworks
Here it is shown how to combine two generically globally rigid bar frameworks in d-space to get another generically globally rigid framework. The construction is to identify d+1 vertices from each of the frameworks and erase one of the edges that they have in common.
Introduction and definitions
Suppose that a finite configuration p = (p1, p2, . . . , pn) of labeled points in Euclidean d-dimensional space E^d is given, together with a corresponding graph G whose vertices correspond to the points of p. Each edge of G, called a member, is designated as a cable, strut, or bar. All this data is denoted G(p), and it is called a tensegrity, or, if all the members of G are bars, a bar framework. We say the tensegrity G(p) dominates the tensegrity G(q), and write G(q) ≤ G(p), for two configurations q and p, if

|pi - pj| ≥ |qi - qj| for {i, j} a cable,
|pi - pj| ≤ |qi - qj| for {i, j} a strut, and      (1)
|pi - pj| = |qi - qj| for {i, j} a bar.

Basic previous results
There has been a lot of work developing computationally feasible criteria for both local and global rigidity, involving purely combinatorial calculations on the graph G and numerical criteria involving, additionally, the configuration p. A graph G is called m-connected if it takes the removal of at least m vertices to disconnect G. For example, in the plane E^2 there is a popular algorithm, the pebble game, to compute, for a bar framework, whether G(p) is locally rigid when p is generic. This algorithm is purely combinatorial, depends only on the graph G, and is polynomial in n, the number of vertices of G. For information about this theory, see [12,13,17,22]. For all dimensions, determining whether a given bar framework G(p) is locally rigid at a generic configuration is also quite feasible, although it is not known to be feasible purely combinatorially. For every bar framework G(p) in E^d with n ≥ d vertices, there is an associated e-by-dn matrix R(p), the rigidity matrix, such that G(p) is locally rigid in E^d if and only if the rank of R(p) is dn - d(d+1)/2. In order to understand some of the results about global rigidity it is helpful to look at the case of tensegrities, and for that it is helpful to understand stresses and stress matrices. For any tensegrity G(p), a stress ω = (. . . , ωij, . . . ) is a scalar ωij = ωji associated to each member {i, j} that connects vertex i to vertex j of G. If vertex i is not connected to vertex j, then ωij = 0. We say that a stress ω for the tensegrity or framework G(p) is an equilibrium stress if for all j the following vector equation holds:

Σi ωij (pi - pj) = 0.      (2)

If G(p) is a tensegrity, we say that ω is a proper stress if ωij ≥ 0 for all cables {i, j}, and ωij ≤ 0 for all struts {i, j}. If ω is a stress for G(p), and G has n vertices, form an n-by-n symmetric matrix Ω, called the stress matrix, as follows: each off-diagonal entry of Ω is -ωij, and the diagonal entries are chosen so that the row and column sums are 0. Figure 2 shows a simple example of a tensegrity with a proper equilibrium stress indicated, together with its stress matrix. In order to understand a fundamental theorem that implies universal global rigidity, we define the following concept. Let v1, . . . , vk be vectors in E^d. Regard these vectors as points in the real projective space RP^(d-1) of lines through the origin in E^d. We say that v1, . . .
, vk lie on a conic at infinity if, as points in RP^(d-1), they lie on a conic (or quadric) hypersurface. For example, in the plane E^2, a conic at infinity consists of at most two points. In 3-space E^3, if we project the vectors into a plane not through the origin, the conic is the usual notion of a conic, including the degenerate case of two lines. The following fundamental result has motivated many of the later results about global rigidity; it can be found in [3,7].
Theorem 1. Suppose G(p) is a tensegrity in E^d, where the affine span of p is d-dimensional, with a proper equilibrium stress ω whose stress matrix Ω is such that
1.) Ω is positive semi-definite,
2.) the rank of Ω is n - d - 1, and
3.) the member directions of G(p) do not lie on a conic at infinity.
Then G(p) is universally globally rigid.
In many cases, Condition 3.) is easy to verify; the difficulty usually lies with Condition 1.) and Condition 2.). When the affine span of p is d-dimensional, the rank of Ω is at most n - d - 1, because of the equilibrium conditions (2). When the three conditions of Theorem 1 are satisfied we say that the tensegrity is super stable. A partial converse to Theorem 1 is the following result of S. Gortler, A. D. Healy, and D. Thurston [10]. Theorem 2. Let G(p) be a universally globally rigid bar framework in E^d, where p is generic and G has at least d + 2 vertices. Then G(p) is super stable. So this means that, under the conditions of Theorem 2, there is an equilibrium stress such that the three conditions of Theorem 1 hold. So if the bars are converted to cables or struts following the sign of that stress, the bar constraints can be replaced by the much weaker tensegrity inequality constraints in (1). Figure 1(a) and Figure 1(b) are super stable, while Figure 1(c) satisfies Condition 1.) and Condition 3.), but not Condition 2.); indeed, Figure 1(c) is not even globally rigid in the plane. In order to understand Condition 2.) and use it, it helps to interpret the rank condition on Ω. One very useful way to do this uses the following concept. Suppose p is a configuration with n vertices in E^d with an equilibrium stress ω. We say the configuration p is universal with respect to ω if, whenever q is another configuration on the same number of vertices such that ω is an equilibrium stress for q, the configuration q is an affine image of the configuration p. In other words, there is a d-by-d matrix A and a vector v ∈ E^d such that Api + v = qi for all i = 1, . . . , n. The following result from [3] relates the notion of a universal configuration to the rank of the stress matrix. We assume that the affine span of the configuration p is d-dimensional. Proposition 3. A non-zero equilibrium stress ω for a configuration p with n vertices in E^d is universal if and only if the rank of the associated stress matrix Ω is n - d - 1. A basis of the kernel of Ω, ker(Ω), including the vector of all ones, can be used to construct a universal configuration, as shown in [3]. For example, in E^d, when the configuration p is universal with respect to the stress corresponding to Ω, the d vectors consisting of the i-th coordinates of p, for i = 1, . . . , d, together with the vector of n ones, form a basis of ker(Ω). When the emphasis is on a fixed configuration rather than a fixed equilibrium stress, we say Ω is of maximal rank if its rank is n - (d + 1).
Combining tensegrities
The stress matrix, since it is symmetric, can be regarded as a quadratic form on the space of all configurations p, and we can add these quadratic forms as functions. (Technically, though, it is the tensor product of Ω with the identity matrix Id, Ω ⊗ Id, that corresponds to the quadratic form on the configurations p.)
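Before continuing, here is a concrete numerical illustration of these definitions (not taken from the references; the configuration is an arbitrary, near-generic K4 in the plane). The sketch computes an equilibrium stress from the kernel of R(p)^T, assembles the stress matrix Ω, and checks the maximal-rank condition n - (d + 1) of Proposition 3.

```python
import numpy as np

# K4 in the plane: n = 4 vertices, all 6 edges; d = 2.
p = np.array([[0.0, 0.0], [3.0, 0.1], [1.1, 2.9], [2.0, 1.0]])
n, d = p.shape
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

# Rigidity matrix R(p): one row per member, dn columns.
R = np.zeros((len(edges), d * n))
for row, (i, j) in enumerate(edges):
    R[row, d * i:d * i + d] = p[i] - p[j]
    R[row, d * j:d * j + d] = p[j] - p[i]

# Equilibrium stresses form the kernel of R(p)^T; here rank R = dn - 3 = 5,
# so the stress space is 1-dimensional and the last left-singular vector spans it.
U, s, Vt = np.linalg.svd(R)
omega = U[:, -1]
assert np.allclose(R.T @ omega, 0.0, atol=1e-10)   # equation (2)

# Stress matrix: off-diagonal entries -omega_ij, diagonal makes row sums 0.
Omega = np.zeros((n, n))
for (i, j), w in zip(edges, omega):
    Omega[i, j] = Omega[j, i] = -w
for i in range(n):
    Omega[i, i] = -Omega[i].sum()

# Maximal rank n - (d + 1) = 1; ker(Omega) contains the all-ones vector
# and both coordinate vectors of p, as described in the text.
print(np.linalg.matrix_rank(Omega))                # -> 1
print(np.allclose(Omega @ np.ones(n), 0.0), np.allclose(Omega @ p, 0.0))
```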
When we add positive semi-definite quadratic forms, the sum is positive semi-definite, and Condition 3.) is also easy to verify in most cases. It is also possible to check Condition 2.), when it is true. For example, it is easy to see that Figure 1(c) is obtained by superimposing the rightmost strut in Figure 1. Proposition 4. Suppose that G1(p) and G2(q) are two super stable tensegrities in E^d with at least d+1 vertices in common, such that the d+1 vertices do not lie in a (d - 1)-dimensional hyperplane, and such that one cable in G1 overlaps with a strut in G2. Then the tensegrity G(p ∪ q) obtained by superimposing their common vertices and members, but erasing the one common cable and strut, is also super stable. It is understood that in G, if two cables overlap, the resulting member in G is a cable; if two struts overlap, the resulting member is a strut; and if another cable and strut overlap, the resulting member can be either a cable or a strut, or can disappear, depending on the stresses of G1(p) and G2(q). Figure 3 shows an example of this. The example of Figure 3 is one case of a Cauchy polygon, and Proposition 4 is explained in more detail in [3]. Note that with this process it is necessary to match a strut with a cable.
Globally rigid generic bar frameworks
For bar frameworks the story for global rigidity is different. The starting point is to assume that the configuration p is generic, which has advantages and disadvantages. An advantage is that, in principle, generic global rigidity in E^d can be verified with the help of some numerical calculation, but the downside is that the generic condition is hard to work with computationally. For the case of local rigidity, the condition of being generic can be replaced by some polynomial conditions on the coordinates that are to be avoided. For the case of global rigidity in E^d, for d ≥ 3, there are also some polynomial conditions to be avoided, but they seem to be intrinsically difficult to calculate. The following basic result can serve as a starting point. The "if" part of the statement is due to [5], and the "only if" part is due to S. Gortler, A. D. Healy, and D. P. Thurston [10]. Theorem 5. Let G(p) be a bar framework at a generic configuration p in E^d with n ≥ d + 2 vertices. It is globally rigid in E^d if and only if there is a non-zero equilibrium stress whose stress matrix Ω has rank n - d - 1. The only globally rigid (generic) frameworks G(p) not covered by Theorem 5 are those where G is the complete graph on fewer than d + 2 vertices. Note that Theorem 5 essentially involves Condition 2.) of Theorem 1; Condition 3.) follows easily from the generic hypothesis and the equilibrium stress. Note also that a consequence of Theorem 5 (its "only if" part, together with the fact that the stress rank condition is a generic property) is that if G(p) is globally rigid at one generic configuration p, then G(q) is globally rigid at every other generic configuration q. Furthermore, although generic configurations are hard to produce concretely, it is enough to verify, for some configuration p, that the rank of the rigidity matrix R(p) is dn - d(d+1)/2 and that the rank of a stress matrix is n - d - 1, as mentioned in [5,8]. In dimension two the situation is even better: global rigidity depends on the local rigidity properties of G(p) and on the combinatorics of G only. If G(p) is a bar framework in E^d that is locally rigid and remains locally rigid after the removal of any bar, we say that G(p) is redundantly rigid in E^d. Theorem 6. A bar framework G(p) in E^2, at a generic configuration p with at least 4 vertices, is globally rigid in E^2 if and only if G is 3-connected and G(p) is redundantly rigid in E^2.
The "only if" part of Theorem 6 is due to B. Hendrickson in [14]. The "if" part of Theorem 6 is by A. Berg and T. Jordan; B. Jackson and T. Jordan; R. Connelly [2,15,5]. The pebble game of [17] provides an efficient purely combinatorial algorithm to compute generic redundant rigidity in the plane, and the computation of connectedness is known to have efficient polynomial time algorithms, so Theorem 6 essentially provides a computationally effective method for computing generic global rigidity in the plane. We say that a graph G has the Hendrickson property in E d if G is (d + 1)connected and G(p) is redundantly rigid in E d , when p is generic. In [14] B. Hendrickson shows the following: Theorem 7. If a bar framework G(p) with p generic is globally rigid in E d then G(p) is redundantly rigid and (d + 1)-connected. Originally Hendrickson conjectured the converse of Theorem 7 for d ≥ 3, but that is false since, in [4], it is shown that the complete bipartite graph K(5, 5) has the Hendrickson property in E 3 , but it is not globally rigid in Combining generic globally rigid bar frameworks In E d for d ≥ 3, there is no known efficient deterministic combinatorial algorithm to compute generic global rigidity. So it is reasonable to consider special combinatorial ways to create generically globally rigid bar frameworks from others especially in the spirit of Section 3. One very natural way to combine two frameworks is to assume some overlap of the vertices and remove some of the members joining the common vertices. If some members belong to one side, but not the other, the following natural result by K. Ratmanski [20] is useful. Theorem 8. Suppose that G 1 (p) and G 2 (q) are globally rigid bar frameworks in E d with d + 1 vertices (or more) in common such that p ∪ q is generic. Let G be the graph obtained by taking the union of their vertices and members, but deleting those members from G 2 not in G 1 . Then the bar framework G(p ∪ q) is also globally rigid in E d . Proof. This follows directly from the statement of global rigidity. Suppose that the framework G(p ∪ q) is equivalent to G(p ∪q) in E d . Since G 1 (p) is globally rigid, the configurations p andp are congruent. So all the lengths of members in p ∩ q are preserved. So q andq are congruent since G 2 (q) is globally rigid. Since p ∪ q are generic and there are d + 1 vertices in common, p ∪q is congruent to p ∪ q. In order to treat the case when we delete a common member, first consider the following. We need an elementary Lemma from linear algebra. Lemma 9. Suppose that Ω 1 and Ω 2 are two n-by-n symmetric matrices, such that the dimension of ker Ω 1 ∩ ker Ω 2 is k, and the rank of rank{Ω i } = r i , i = 1, 2, where r 1 + r 2 = n − k. Then Proof. We next apply this to stress matrices. Lemma 10. Suppose that G 1 (p) and G 2 (q) are two bar frameworks in R d , with n 1 and n 2 vertices, respectively, that share exactly d+1 vertices not lying in a (d − 1)-dimensional hyperplane, and with corresponding stress matrices Ω 1 and Ω 2 . Extend Ω 1 toΩ 1 to include the vertices q of G 2 not in G 1 , but with 0 stress on all the extra pairs of vertices. Similarly extend Ω 2 toΩ 2 . If each Ω i has maximal rank n i − (d + 1), then for all values of t = 0, 1, tΩ 1 + (1 − t)Ω 2 has maximal rank n 1 + n 2 − 2(d + 1). The union of the vertices of p and q, p∪q, is a configuration that satisfies the equilibrium equations of the stresses corresponding to bothΩ 1 andΩ 2 . 
If p̄ ∪ q̄ is another configuration that satisfies the equilibrium equations of the stresses corresponding to both Ω̃1 and Ω̃2, then p̄ is an affine image of p and q̄ is an affine image of q, since p and q are universal with respect to the stresses corresponding to Ω1 and Ω2, respectively. Thus, by extending the correspondence between p ∩ q and p̄ ∩ q̄, we get an affine map from p ∪ q to p̄ ∪ q̄. Thus p ∪ q corresponds to a basis for the intersection of the kernels of Ω̃1 and Ω̃2. The affine spans of p and q are both d-dimensional, with d + 1 affinely independent points in the intersection, so p ∪ q has a d-dimensional affine span. In other words, ker Ω̃1 ∩ ker Ω̃2 has dimension d+1. Then Lemma 9 implies the conclusion with k = d + 1 and rank{Ω̃i} = ri, i = 1, 2, since n = n1 + n2 - (d + 1) and r1 + r2 = n1 - (d + 1) + n2 - (d + 1) = n - (d + 1).
The main theorem
Theorem 11. Suppose that G1(p) and G2(q) are globally rigid bar frameworks in E^d, with p ∪ q generic, exactly d + 1 vertices in common, each with at least d + 2 vertices, and a bar {i, j} common to G1 and G2. Then the bar framework G(p ∪ q) obtained by superimposing their common vertices and bars, but erasing the bar {i, j}, is also globally rigid in E^d. Proof. By Theorem 5 there are non-zero stress matrices Ω1 for G1(p) and Ω2 for G2(q) such that rank{Ω1} = n1 - (d + 1) ≥ 1 and rank{Ω2} = n2 - (d + 1) ≥ 1, where n1 is the number of vertices of G1 and n2 is the number of vertices of G2. Then Lemma 10 implies that for t ≠ 0, 1, tΩ̃1 + (1 - t)Ω̃2 has maximal rank n1 + n2 - 2(d + 1). Let ωij(1) and ωij(2) be the stresses on the bar {i, j} corresponding to Ω1 and Ω2, respectively. If either ωij(1) = 0 or ωij(2) = 0, Theorem 8 implies that G(p ∪ q) is globally rigid in E^d. Otherwise, by rescaling Ω1 and Ω2 if necessary, we can assume that ωij(1) = 1 and ωij(2) = -1. Then Lemma 10, with t = 1/2, implies that there is a stress matrix of maximal rank in which the stress on {i, j} is 0, which allows us to remove that bar. Theorem 5 then applies again to show that the resulting framework, with {i, j} deleted, is globally rigid in E^d. Figure 3 is a typical example of Theorem 11 in the plane, where the members are interpreted as bars. It would be interesting to consider the case when there are more than d+1 vertices in common, but the method here does not seem to apply directly, since the linear combinations of the two stresses may all be of lower rank, and we cannot zero out the stress on a given member while keeping the maximal rank condition. For example, when there are d + 2 vertices in common in E^d, and there is a vertex in the intersection of degree d + 1, no maximal-rank linear combination of the two stresses can zero out the stress on one bar. (This is an observation of Tibor Jordán.) Question 1. Suppose that we combine two graphs that have the Hendrickson property as in Theorem 11. Does the resulting framework have the Hendrickson property? It seems that the connectivity property holds for the resulting framework; I don't know about the redundant rigidity property for d ≥ 3. For d = 2, the statement of Question 1 is true because the Hendrickson property and generic global rigidity are equivalent by [15]. Moreover, for d = 2, the redundant rigidity condition holds by itself, without the need of the connectivity condition, by a recent result of Bill Jackson and Tibor Jordán. The proof of Lemma 9 and Lemma 10 was inspired by a draft lemma in [6] that was incorrect.
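The construction in Theorem 11 can also be checked numerically. The sketch below (an illustration added here, not part of the original argument) combines two copies of K4 in the plane that share d + 1 = 3 vertices and the bar {0, 1}, rescales the two stresses so they cancel on that bar, and verifies that the averaged, padded stress matrix has the maximal rank n - (d + 1) = 2 required by Theorem 5 for the combined framework with {0, 1} erased.

```python
import numpy as np

def stress_matrix(p, edges):
    """Equilibrium stress (from ker R(p)^T) and stress matrix for one K4."""
    n, d = p.shape
    R = np.zeros((len(edges), d * n))
    for row, (i, j) in enumerate(edges):
        R[row, d * i:d * i + d] = p[i] - p[j]
        R[row, d * j:d * j + d] = p[j] - p[i]
    omega = np.linalg.svd(R)[0][:, -1]          # 1-dimensional stress space
    Omega = np.zeros((n, n))
    for (i, j), w in zip(edges, omega):
        Omega[i, j] = Omega[j, i] = -w
    Omega[np.diag_indices(n)] = -Omega.sum(axis=1)
    return omega, Omega

# Five points; framework 1 is K4 on vertices 0..3, framework 2 is K4 on
# {0, 1, 2, 4}; they share vertices 0, 1, 2 and the bar {0, 1}.
pts = np.array([[0.0, 0.0], [3.0, 0.1], [1.1, 2.9], [2.0, 1.0], [0.7, -1.8]])
K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]

w1, Om1 = stress_matrix(pts[[0, 1, 2, 3]], K4)
w2, Om2 = stress_matrix(pts[[0, 1, 2, 4]], K4)

# Rescale so the stresses on the shared bar {0,1} are +1 and -1.
Om1 = Om1 / w1[K4.index((0, 1))]
Om2 = -Om2 / w2[K4.index((0, 1))]

# Pad to the 5-vertex union (zero stress on extra pairs), then average (t = 1/2).
Pad1, Pad2 = np.zeros((5, 5)), np.zeros((5, 5))
Pad1[np.ix_([0, 1, 2, 3], [0, 1, 2, 3])] = Om1
Pad2[np.ix_([0, 1, 2, 4], [0, 1, 2, 4])] = Om2
Om = 0.5 * Pad1 + 0.5 * Pad2

print(abs(Om[0, 1]) < 1e-9)          # stress on the erased bar {0,1} cancels
print(np.linalg.matrix_rank(Om))     # -> 2 = n - (d + 1), as Lemma 10 predicts
```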
The author is very grateful to Dylan Thurston and Tibor Jordán for pointing out a previously incorrect statement (and, of course, proof) of Lemma 9. The author also thanks Igor Gorbovickis for several useful comments and corrections. For other related results, see [1,8,11,16,18,19,21].
Comparative Study and Analysis of Variability Tools
The dissertation provides a comparative analysis of a number of variability tools currently in use. It serves as a catalogue for practitioners interested in the topic. We compare a range of modelling, configuration, and management tools for product line engineering. The tools surveyed are compared against the following criteria: functional aspects, non-functional aspects, governance issues, and technical aspects. The outcome of the analysis is provided in tabular format.
Achievable Goals:
- To understand and analyse the conceptual work on variability management in software product line engineering.
- To perform an extensive search of variability management tools for the purpose of analysis.
- To analyse the tools based on the different aspects mentioned in the scope of the work.
- To report the results of the analysis in a well-understood document format.
Required Resources: In order to perform the study and analysis of the proposed work, access to material on the subject published in books, journals, online news and technical papers is necessary. The required resources for collecting this material are: a personal computer with a minimum configuration (sufficient to browse the internet), access to a library for books, and access to a digital library to search for published journals.
Survey of the Variability Management Tools
Managing variability has become a necessary business requirement in software product lines. This is due to the fact that the current trend of moving variability from hardware to software leads industries to postpone design decisions until they become economically feasible. As mentioned in the introductory part, this project focuses mainly on identifying different types of variability tools and on analysing them on the basis of functional, non-functional, governance and technical aspects. Some of the tools identified during the online tool survey are described in this chapter, with a brief explanation of each. In total, 14 tools were identified; they are presented in the list below.
GEARS Tool: Gears (1) is a commercial SPL (software product line) development tool developed by BigLever Inc (15). It enables the modelling of optional and varying features, which are used to differentiate the products in a portfolio. The Gears feature model uses high-level typing (sets, enumerations, records, Booleans, integers, floats, characters, strings), distinguishing between "features" at the domain modelling level and "variation points" at the implementation level (source code, requirements, test cases, documentation). In Gears:
- Set types allow the selection of optional objects.
- Enumeration types allow the selection of one and only one alternative.
- Booleans represent singular options.
- Records represent mandatory lists of features.
Gears variation points are inserted to support implementation-level variation. Components with Gears variation points become reusable core assets that are automatically composed and configured into product instances. Gears gives developers a conventional way of working on Gears assets, with the expectation of implementing the variation points to support the feature model variations that are in the scope of their first asset. Dependencies in Gears are expressed as relational assertions; very simple, conventional requires and excludes dependencies are used.
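As a minimal sketch of how such requires/excludes assertions can be evaluated against a product feature profile, consider the following Python fragment. It is purely illustrative, with invented feature names and rule format; it is not the Gears language or API.

```python
# Illustrative only: invented feature names and rule format, not Gears syntax.
RULES = [
    ("requires", "heated_seats", "power_supply_12v"),
    ("excludes", "manual_gearbox", "automatic_gearbox"),
]

def violations(selected: set) -> list:
    """Return human-readable violations of requires/excludes assertions."""
    problems = []
    for kind, a, b in RULES:
        if kind == "requires" and a in selected and b not in selected:
            problems.append(f"{a} requires {b}")
        elif kind == "excludes" and a in selected and b in selected:
            problems.append(f"{a} excludes {b}")
    return problems

print(violations({"heated_seats"}))
# -> ['heated_seats requires power_supply_12v']
```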
Variation points and feature models are fully user-programmable to arbitrary levels of sophistication and complexity. The Gears approach defines a product feature profile for each product, selecting the desired choices in the feature model. A product configurator automatically produces the individual products in the portfolio by assembling the assets and customizing the variation points within those assets according to the feature profile. Gears modules can be mapped to any existing modularity capabilities in software. Gears models can be composed into units that can be treated as standalone "product lines"; these product lines can in turn be composed from modules and other nested product lines. Aspect-oriented features are captured in Gears "mix-ins", which allow crosscutting features to be imported into one or more modules for use in implementation variation points in those modules. The tool also supports the definition of hierarchical product lines by nesting one product line into another (1). Two types of views and editor styles are supported, and they can be switched dynamically: a) a syntactically and semantically well-defined text view, and b) a context-sensitive structural tree view. Gears uses file- and text-based configuration and composition. This language-independent approach allows users to translate legacy variation as well as implement new variations. Gears supports all of the above with multiple binding times in one product line. For runtime binding, Gears typically influences the runtime behaviour indirectly, through statically instantiated configuration files or database settings, which can be set dynamically to make feature selections at runtime. Because software mass customization can be adopted quickly for a product line, Gears enables organisations to use software mass customization technology in an easy way. Proactive, reactive and extractive adoption approaches can be used, depending on the particular organisation, and they are not mutually exclusive. Gears has been used in systems with millions of LoC with no observed limitation on scalability (1).
COVAMOF: COVAMOF (2,32) (ConIPF Variability Modelling Framework) is a variability modelling approach that represents variation points and variants on all abstraction layers, supports the modelling of relations between dependencies, provides traceability, and offers a hierarchical organization of variability (2). Five different kinds of variation points are supported in COVAMOF: 1. optional, 2. alternative, 3. optional-variant, 4. variant, and 5. value. The first variation point type refers to the selection (zero or more) from the one or more associated variants. The COVAMOF variability view (CVV) represents the view of the variability of the product family artefacts and unifies the variability on all layers of abstraction. The CVV models the dependencies that occur in industrial product families to restrict the binding of one or more variation points (2). Simple dependencies are expressed by a Boolean expression, and the CVV specifies a function, valid, indicating whether a dependency is violated or not. In addition to Booleans, dependencies and constraints can also contain integer values, with operators like ADD, SUBTRACT, etc.
Boolean and numerical values are used together in operators like GREATER THAN, where numerical values are the input and Boolean values are the output. Complex dependencies are defined in COVAMOF as dynamically analyzable dependencies, and the CVV contains, for each dynamically analyzable dependency, the properties stated below (2).
- Aspect: Each dependency is associated with an aspect that can be expressed by a real value.
- Valid range: The dependency specifies a function to {true, false} indicating whether a value is acceptable.
- Associations: The CVV distinguishes three types of associations for dynamic dependencies: predictable, directional and unknown.
For communication between tools, COVAMOF provides a graphical representation and an XML representation. The Mocca tool has been developed to manage multiple views of the CVV, the COVAMOF variability view. Mocca supports the management of the CVV from the variation point view and the dependency view. Mocca is implemented in Java as an extension to the Eclipse 3.0 platform. There are also recent improvements in the COVAMOF-VS tool suite, a set of add-ins for Microsoft Visual Studio .NET. COVAMOF-VS provides two main graphical views, the variation point view and the dependency view, as a way to maintain an integrated variability model. Finally, specific plug-ins can be added to support different variability implementation mechanisms (2).
VMWT (Variability Modelling Web Tool): VMWT is a research prototype developed at the University Rey Juan Carlos of Madrid. This first prototype (http://triana.escet.urjc.es/VMWT/) is a web-based tool built with PHP and Ajax and running over Apache 2.0. VMWT stores and manages variation points and variants following a product line approach and enables the creation of product line projects with which a set of existing reusable assets can be associated. Before configuring a particular code component, numeric (quantitative) values, ranges of values, or an enumerated list can be specified. Once all the variants have been added, the variation points are added to the code components. VMWT supports dependency rules and constraints over the variation points and variants already defined. The following Boolean relationships are allowed: AND, OR, XOR and NONE. In addition, more complex dependencies can be defined, such as requires and excludes. The tool supports constraint and dependency checking and can compute the number of allowed configurations, which is quite useful when the cost of the products to be engineered needs to be estimated. Finally, a FODA tree is visualized for selecting the options for each product, and the selected configuration is then displayed to the user. The variation points and variants selected are included in a file attached to each code component, and documentation of the product line can be automatically generated as PDF documents.
AHEAD Tool Suite (Algebraic Hierarchical Equations for Application Design): The AHEAD Tool Suite (AHEAD TS) was developed to support the development of product lines using compositional programming techniques (34). AHEAD TS has been used in distinct domains: i. to produce applications where features and variations are used in the production process (35); ii. to produce a product line of portlets. The production process in a software product line requires the use of features that have to be modelled as first-class entities. AHEAD distinguishes between "product features" and "built-in features".
The former characterize the product as such; the latter refer to variations in the associated production process. The production processes are specified using Ant, a popular scripting language from the Java community. AHEAD uses a step-wise refinement process based on the GenVoca methodology for incrementally adding features to the products belonging to a system family. The refinements supported by AHEAD (20) are packaged in layers. The base layer contains the base artefacts with specific features. The AHEAD production process distinguishes two stages: the intra-layer production process specifies the tasks for producing a set of artefacts within a layer or upper layers, while the inter-layer production process defines how layers should be intertwined to obtain the final product. An extension to AHEAD is described in (36), where a tool called XAK was developed for composing base and refinement artefacts in XML format. AHEAD TS was refactored into features to allow the integration with XAK. The feature refactoring approach used in XAK decomposes legacy applications into sets of feature modules which can be added to a product line. AHEAD doesn't require manual intervention during the derivation process.
CONSUL-Based Tools: Variability management tools have to serve two different classes of users; the first class is formed by the developers of the variable artefacts. As a complete tool chain, CONSUL (5,33) supports both classes. The modular implementation of CONSUL allows flexible combination of the required services and user interfaces to build different tools. The current application family consists of the following three tools: 1. Consul@GUI, 2. Consul@CLI, 3. Consul@Web.
Consul@GUI: The main application for developers, Consul@GUI, is an interactive modelling tool for CONSUL models. It allows creating and editing the models but can also be used in the deployment of the developed software for generating the customized software. The screenshot shows the Consul@GUI of the cosine domain with several features selected (CONSUL process overview (5)). The configuration is not valid, since there is still an open alternative; this is indicated by the background colours of the two features. Once a valid configuration has been found, the generation process can be started.
Consul@CLI: Based on CONSUL, a customization tool with a command-line interface has been built as well. This tool can be used, e.g., together with make to provide automated customization when re-building a software system.
Consul@Web: It is also possible to make software customization available via web browsers. A demonstration as a Java applet can be found on the pure-systems website; it allows the configuration, building and downloading of pure via a Java-enabled web browser.
Feature Modelling Tool: This feature modelling tool (21) allows us to create feature models from inside the Visual Studio IDE. Using this tool we can visualize a) an indented list and b) a tree structure. The figure shows the hierarchy in the feature modelling tool (6), where the nodes in the left window represent the features. The central window represents the modeller's design area, where features can be added, modified, or deleted. It is a tree-type representation where the links symbolize the hierarchy and the nodes symbolize features (22). (Example of a feature model in the editor view, Fig. 3.) The tool supports cardinality-based feature modelling, specialization of feature diagrams, and configuration based on feature diagrams. This is an Eclipse plug-in for feature modelling.
The plug-in brings the Eclipse (16) platform closer to the software product line and generative development communities; providing tool support for feature modelling as an Eclipse plug-in is particularly attractive for the following reason (OOPSLA'04): integrating feature modelling into the development environment helps to optimally support modelling of variability in different artefacts.

Example of a feature model in the editor view: when the user clicks on a feature, an auxiliary window shows information about the node. Note that feature dependencies are not available in this model.

Pure::variants: pure::variants (7) is a commercial tool supporting feature modelling and configuration using tree-view rendering. pure::variants does not support cloning, but it allows modelling global constraints between features and offers interactive, constraint-based configuration using a Prolog-based constraint solver. It was created by pure-systems GmbH, founded in 2001. It is basically an Eclipse application; Eclipse is an open source community whose projects are focused on building an open development platform comprised of extensible frameworks, and the tool's main functionality is to serve as a framework for the design of product line architectures (23). Like the previous tool, it places the items as nodes in an indented list. A check box placed near each component is used to configure a product line from the feature model; thus the user can display the final result of a product line by selecting a configuration with these check boxes (7). pure::variants also adds the possibility of representing the model as a graph visualization, and common editing operations such as modification and deletion are supported by the tool.

FAMA tool suite: This is a tool for the automated analysis of variability models. The application provides an extensible framework for easily reading variability models and automating the configuration of a final product. FAMA (9) has been implemented as a complete tool for the analysis and editing of feature models. FAMA supports cardinality-based feature modelling, export/import of feature models from XML and XMI, and analysis operations on feature models. Like the majority of feature modelling applications, the FAMA tool suite uses a GUI as the representation of the model. The difference in this case lies in the modelling process: the user has to develop the structure of the feature model by writing it in XML. The tool then reads the document and visualizes its content, as in the figure below, allowing the user to interact with the representation.

Figure: FAMA analysis view and modelling view (Eclipse plug-in).

Here, too, nodes represent the features, relations and cardinalities of the relations in the model. FAMA integrates different solvers in order to combine the best of all of them in terms of performance; the current framework integrates CSP, SAT and BDD Java solvers to perform the analysis tasks. One advantage of FAMA is its ability to select automatically, at execution time, the most efficient solver for the operation requested by the user. FAMA (13) has two main functionalities: visual model creation/editing and automated model analysis. Once the user has created or imported a cardinality-based feature model, the analysis capability can be used.
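FAMA's automatic solver selection can be pictured with the short Java sketch below. This is not FAMA's real API; the operation names and the operation-to-solver pairings are assumptions made purely to illustrate the dispatch idea.

    // Illustrative per-operation solver selection in the spirit of FAMA's
    // multi-solver framework; not FAMA's actual API or real pairings.
    import java.util.Map;

    public class SolverSelector {
        enum Solver { CSP, SAT, BDD }
        enum Operation { VALID_MODEL, NUMBER_OF_PRODUCTS, FILTER }

        // Hypothetical table of the most efficient solver per operation.
        private static final Map<Operation, Solver> BEST = Map.of(
            Operation.VALID_MODEL, Solver.SAT,
            Operation.NUMBER_OF_PRODUCTS, Solver.BDD,
            Operation.FILTER, Solver.CSP);

        // Chosen automatically at execution time, per the requested operation.
        static Solver select(Operation op) {
            return BEST.getOrDefault(op, Solver.CSP);
        }

        public static void main(String[] args) {
            System.out.println(select(Operation.NUMBER_OF_PRODUCTS)); // BDD
        }
    }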
Most of the operations identified on feature models are currently implemented.

Figure: FAMA preference page, the property page used to set the configuration options.

Kumbang Tools: Kumbang Tools (17,31) is an application package consisting of the Kumbang Configurator and the Kumbang Modeller. These tools are designed for configuring software product families. The tool takes a configuration model as input and offers the user the possibility to make configuration decisions. The tool is implemented as plug-ins for the Eclipse IDE (18). Among its views, one shows the configuration status, and the Properties view shows the properties corresponding to the item selected in the features/components view. To configure the initialized model and make it complete for exporting, you need to edit features' attributes and/or add new components to the configuration. Edit attributes: usually the first step in modifying the configuration is editing the attributes, which can be done in the features view.

XToF: The purpose of the XToF (10,25) tool is to let programmers define, maintain, visualise and exploit precise traceability links between a feature diagram and the code base of a software product line. XToF provides enhanced functionality by leveraging two components: 1) TagSEA, an Eclipse plug-in developed at the University of Victoria, whose purpose is to support navigation and knowledge sharing in collaborative program development; and 2) SPLAR, a Java library developed at the University of Waterloo that automates various FD analyses. The section below presents the requirements, the implementation of the initial tool chain together with its limitations, and the new prototype designed to overcome those limitations. Each is described in turn.

Requirements: The goal of the collaboration was to turn a flight-grade satellite communication library into a software product line that would support the following requirements:
- allow mass-customization of the library, meaning the ability to efficiently derive products that contain only the features required for a specific space mission;
- be compliant with the quality standards and regulations in place for flight software;
- have a minimal impact on current development practices;
- automate the solution as much as possible.

The tagging language: A feature tag is an annotation of a block of C code with the names of the features that require the block to be present. If none of the features listed in a tag is included in a particular product, then the tagged code block will not be part of the source code generated for this product. Tags can be nested, and a whole file can be tagged with a special annotation. Untagged code is assumed to be needed regardless of the selected features. (A sketch of this tag-and-prune scheme is given after the XToF functionality overview below.)

Limitations of the tool-chain: The tool-supported process described in the previous sections turned out to be effective in meeting the requirements set out by the organisation; nevertheless, several limitations were identified. Tighter integration: communication between the tools was performed only through file exchange. Although this did not impede usage of the tool chain, it was recognised that an integrated environment, where loosely coupled tools play together, could be a significant enhancement. Legibility: according to the company's developers, the legibility of the source code was not reduced by the tags.
Indeed, the tagging language was designed to be concise and is rendered in a different colour in most code editors. Portability: although pruning dead code is most usually required in embedded systems, where C dominates, C is not the only language used in embedded systems. Additionally, the "tag and prune" approach has a wider applicability than embedded systems, hence the idea of extending the approach to other languages. Tagging overhead: the programmers who used the tool-chain estimated that the overhead due to the tags during the domain implementation phase was 20 to 25% with respect to a tag-free implementation of a "maximal" product.

Figure: XToF's main screen.

Functionally, XToF (27), the new prototype, is meant to support the activities depicted in a single integrated environment and to overcome the limitations described in the previous section.

Components and principles of XToF: The opportunity for re-implementing the original tool chain came from the discovery of an open source Eclipse plug-in called TagSEA. TagSEA was developed to support asynchronous and collaborative program development; it enhances navigation and knowledge distribution in the code based on tags placed by the programmers. The approach and the tool are originally unrelated to software product lines, but turned out to be applicable in this context. XToF uses the capabilities of TagSEA to manage tagging and tags. TagSEA defines waypoints as "locations of software model elements". The notion of a waypoint as a point of interest has been extended to a designated area of interest in order to capture blocks of code associated with feature tags. TagSEA provides mechanisms to filter tags and waypoints and to navigate to a waypoint; XToF then links TagSEA waypoints to features and blocks of code.

Current functionalities: The supported functionalities are loading the FD, tagging code fragments, navigation and visualization, configuring and pruning, and detecting defective tagging. Each is explained below. Loading the FD: To be displayed and configured in the tool, the FD has to be loaded. XToF expects it as an XML file in the SXFM format. The file can be created in any text editor, but can be more easily produced by the web-based visual FD editor SPLOT (26), the front-end to SPLAR. Once the FD is loaded, XToF displays it and lets the users add tags, navigate and configure. The loaded FD is copied to the project folder and its path is saved as a property of the project. The FD is thus made available to all project contributors, who can work in parallel. Tagging code fragments: To reduce the time needed to tag blocks of source code, XToF uses auto-completion from Eclipse. While typing a tag, feature names are displayed and, when selected, directly added to the tag. Navigation and visualization: XToF feature tags behave like regular TagSEA waypoints. The user can list the locations of feature tags, navigate to a tagged code fragment, and display it. Some visualizations have been developed to answer simple questions such as "which blocks are associated with a set of tags?" and "which set of tags is associated with a line of source code?". To answer the first question, the user can select the tags in XToF and the tagged blocks of source code are highlighted. Another mechanism provides the opposite function, i.e. answers the second question: the features corresponding to the current line in the active editor window are highlighted in the FD.
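The tag-and-prune scheme introduced earlier can be sketched as follows, in Java (one of the two languages XToF supports). The "//#tag"/"//#endtag" comment syntax, the pruning helper, and the feature names in the usage example are all invented for this sketch; they are not XToF's or TagSEA's actual notation, only an illustration of the semantics described above: a tagged block survives if at least one of its features is selected, untagged code survives in every product, and tags may be nested.

    // Illustrative tag-and-prune pass over feature-tagged source code.
    // The "//#tag"/"//#endtag" syntax is invented for this sketch and is
    // not the exact notation used by XToF or TagSEA.
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;
    import java.util.Set;

    public class TagPruner {
        static String prune(List<String> lines, Set<String> selected) {
            StringBuilder out = new StringBuilder();
            Deque<Boolean> keep = new ArrayDeque<>(); // one entry per nested tag
            for (String line : lines) {
                String t = line.trim();
                if (t.startsWith("//#tag ")) {
                    // Block is kept only if every enclosing block is kept and
                    // at least one feature listed in the tag is selected.
                    boolean enclosing = keep.stream().allMatch(b -> b);
                    boolean any = false;
                    for (String f : t.substring(7).split(","))
                        any |= selected.contains(f.trim());
                    keep.push(enclosing && any);
                } else if (t.startsWith("//#endtag")) {
                    keep.pop();
                } else if (keep.stream().allMatch(b -> b)) {
                    // Kept: untagged code, or code inside selected blocks.
                    out.append(line).append('\n');
                }
            }
            return out.toString();
        }

        public static void main(String[] args) {
            List<String> src = List.of(
                "void send() {",
                "//#tag Telemetry, Housekeeping",
                "    packHeader();",
                "//#endtag",
                "    transmit();",
                "}");
            // Neither Telemetry nor Housekeeping is selected for this
            // product, so packHeader() is pruned; the rest survives.
            System.out.println(prune(src, Set.of("Downlink")));
        }
    }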
Configuring and pruning: Configuring and pruning are now integrated. The configuration interface is based on the FD: clicking on a feature allows the user to toggle it from deselected to selected and conversely. Each decision made on the diagram is propagated by SPLAR to ensure the validity of the configuration. Once the configuration is completed, the mission-specific implementation can be generated.

Figure: Code highlighting in XToF.

Portability: XToF takes advantage of the plug-in platform provided by Eclipse to support languages other than Java. Two languages are currently supported: Java and C.

PLUSEE: The scope of the PLUSEE (11,28) (HICSS'07) includes the product line engineering and product derivation phases. Product line engineering: a product line multiple-view model, which addresses the multiple views of a software product line, is modelled and checked for consistency between the multiple views. The product line multiple-view model and architecture are captured and stored in the product line reuse library. Product derivation: a target system multiple-view model is configured from the product line multiple-view model. The user selects the desired features for the product line member and the tool configures the target system architecture. PLUSEE represents second-generation product line engineering tooling which builds on experience gained in previous research, in particular with the earlier knowledge-based software engineering environment (KBSEE). Whereas the KBSEE proof-of-concept prototype demonstrated that product line derivation from a product line feature model, architecture and components was feasible, it suffered from some serious limitations. Firstly, it used a structured analysis tool as a front end, and therefore had to rely on graphical editors for data flow diagrams and entity-relationship diagrams, which lacked the richness needed to model object-oriented product lines. Secondly, although a product line repository was used, it was developed in an ad-hoc way and lacked an underlying metamodel to formally describe the product line artefacts and their relationships. This experience with KBSEE guided the design decisions for the development of PLUSEE. Both the Rose and Rose RT commercial CASE tools were used as a graphical interface to this prototype. Rose supports all the views of the standard UML notation, but it does not generate an executable architecture from the product line multiple-view model. On the other hand, Rose RT generates an executable architecture from the product line multiple-view model and simulates the product line architecture, although it does not support all the views of the standard UML. To take advantage of both Rose and Rose RT, two separate, very similar versions of PLUSEE were developed. The Knowledge-Based Requirement Elicitation Tool (KBRET) and the GUI developed in previous research were used without change. KBRET assists a user in selecting the optional features of each target system: it conducts a dialog with a human target system requirements engineer, presenting the user with the optional features that can belong to the target system.

DecisionKing: The DecisionKing (12) tool was developed to support the integrated modelling approach. The application is based on the Eclipse platform.

Figure: Editing a decision model in DecisionKing.

The tool has been implemented in a highly iterative process with continuous feedback from Siemens VAI engineers.
Early versions of the modelling approach were tested using prototypes built with MS Excel. The suitability, adequacy, and usability of the approach have since been tested by engineers at Siemens VAI, who have been using the tool to create variability models for different subsystems of the caster automation software. The figure above shows a snapshot of the modelling shell in DecisionKing (30). The decisions described are listed in the left pane; the right pane shows a decision viewer graphically visualizing dependencies among decisions. Different tabs allow importing and capturing the assets of the product line. For example, the component tab allows importing components from an existing configuration and specifying their links to the decision model, while the document tab is used to organize fragments of the documentation. Complex relationships between decisions and assets are expressed in a simple rule language; it will be replaced in the near future with an off-the-shelf engine to ensure scalability. Feedback from the industry partners shows that the modelling approach works well for capturing variability both from customer/marketing as well as from technical perspectives, but it is unrealistic to assume that such a model can be created and evolved by an individual or by a small team. The knowledge required to build such a model is typically spread across the minds of numerous heterogeneous stakeholders and different teams responsible for various parts of the system. DecisionKing therefore allows modelling different parts of a large variability model separately and merging the parts into one integrated model later on. For this purpose, one team may build only the asset model, or even only a partial asset model or decision model. The different parts of the model are then merged using the model merger. Engineers can mark certain elements in the variability model as "public", meaning that these can be used in other variability models; other elements are listed as references, and a team of stakeholders responsible for a certain variability model can refer to elements of other variability models. In the figure, an example with two parts of a variability model is depicted: model 1 imports DBProcessDisplay, a component defined public in model 2; similarly, model 2 refers to the FileManager component, which is defined public in model 1. The merger can combine the two models by resolving these references (a sketch follows at the end of this subsection). Many problems can occur while merging different models, for example missing references, multiple occurrences of the same element, or ambiguity in the mapping of referenced elements. When conflicts cannot be resolved automatically, the merger relies on input from the user.

Plug-in mechanism in DecisionKing: The second architecture-level variability mechanism ensures extensibility. DecisionKing is based on a plug-in architecture allowing arbitrary external tools to communicate and interact with it. This enables users to develop and integrate company-specific functionality. This feature has been used in three cases so far: (i) existing assets and their relationships can be imported automatically from existing configurations to populate the variability model; (ii) the language used to describe rules and constraints for relationships between decisions and assets is provided via a plug-in, and the current rule language is intended to be replaced with a more powerful language based on the JBoss rule engine; (iii) third-party model differencing can be used, as demonstrated with a plug-in (29) integrated via this mechanism.
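To picture the merge step, here is a hedged Java sketch of resolving references against elements marked public, using the DBProcessDisplay/FileManager example from above. The class shapes and the conflict handling are our own simplification, not DecisionKing's implementation.

    // Illustrative merge of two partial variability models by resolving
    // references against public elements, in the spirit of DecisionKing's
    // model merger; a simplification, not the tool's actual code.
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ModelMerger {
        record Element(String name, boolean isPublic) {}
        record PartialModel(String id, List<Element> elements, List<String> references) {}

        static Map<String, Element> merge(List<PartialModel> models) {
            Map<String, Element> visible = new HashMap<>();
            for (PartialModel m : models)
                for (Element e : m.elements())
                    if (e.isPublic() && visible.put(e.name(), e) != null)
                        // Multiple occurrences of the same element: one of the
                        // conflicts that may require input from the user.
                        throw new IllegalStateException("duplicate: " + e.name());
            for (PartialModel m : models)
                for (String ref : m.references())
                    if (!visible.containsKey(ref))
                        // Missing reference: another conflict case.
                        throw new IllegalStateException(m.id() + " misses " + ref);
            return visible;
        }

        public static void main(String[] args) {
            var m1 = new PartialModel("model 1",
                List.of(new Element("FileManager", true)),
                List.of("DBProcessDisplay"));
            var m2 = new PartialModel("model 2",
                List.of(new Element("DBProcessDisplay", true)),
                List.of("FileManager"));
            System.out.println(merge(List.of(m1, m2)).keySet());
        }
    }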
BVR TOOL (Base-Variation-Resolution): The BVR (14) (NIK'06) approach depends on the possibility of establishing and maintaining the relations between the variation models and the base model, and between the resolution and variation models. To explore this concept, a prototype tool called the Object-Oriented Feature Modeller (OOFM) has been built. For this prototype implementation, Java was used as the language of the base models, but it could just as well have been UML. The BVR approach is defined by a metamodel divided into three parts. The base model can be any model in a given language. The variation model contains variation elements, where each element refers to the base model element that is subject to variation (implying that elements that are not referenced are not subject to variation). This relationship has a zero-to-one cardinality, as not all model elements are affected by variability. Variation elements only record that the referenced model elements may be affected by variations; the information contained in the base model element is not duplicated. Variation is specified in a variability specification; it may in general involve other model elements and affect a number of variation elements. Variability specifications come in two kinds: variability constraints represent constraints on valid resolutions and distinguish valid resolution models from invalid ones; transformers have concrete transformations associated with them. When values are bound to transformers (from the resolution element), this defines the transformation of the variation model and the base model into a specific model. The OOFM prototype tool was made in parallel with the development of the BVR approach; therefore its variation model has a slightly different set of metaclasses. According to the variation model of the OOFM, a model contains exactly one product, and a product may have zero or many features, each containing zero or many features; that is, a feature can contain other features. Waterproof is an example of a feature that contains two other features: Depth and Time. Feature cardinality is represented as mandatory ([1..1]) or optional ([0..1]), and group feature cardinality as alternative (<1-n>); a feature is mandatory, optional or alternative. Feature choices are stored in a list in the container feature object. For example, the sub-feature Depth has the choices 50 and 100, which are kept in the choice list in the feature object Depth; the Depth choices state the waterproof depth (in metres) of a watch. Similarly, the sub-feature Time has a choice list that contains the choices 0, 5, 10 and 15; the Time choices (in hours) tell us how many hours a waterproof watch can be under water before it no longer resists water. The links between the variation model and the base model indicate to which element of the base model the variation applies. Feature definition in OOFM is not totally automatic: OOFM has the ability to recognize and display all object fields it can define as features. The resulting resolution model will contain the variable features from the variation model and those object fields of the base model that were not defined as features. The tool is implemented as an Eclipse plug-in based upon the Eclipse Modeling Framework (EMF). The feature modelling editor is based upon a metamodel corresponding to the variation model part of the BVR metamodel; this is done by defining the metamodel in terms of annotated Java classes and using the generator for tree-oriented model editors provided by EMF. The Java Development Tools (JDT) are used to represent base Java programs and generated Java programs in terms of JDT objects.
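The watch example can be made concrete with a short Java sketch of an OOFM-style variation model, with choice lists held in the container feature object as described above. The class shapes are our own simplification of the described metamodel, not OOFM's actual metaclasses.

    // Sketch of an OOFM-style variation model for the watch example;
    // a simplification, not OOFM's real metaclasses.
    import java.util.List;

    public class WatchModel {
        enum Cardinality { MANDATORY, OPTIONAL, ALTERNATIVE }

        record Feature(String name, Cardinality cardinality,
                       List<Integer> choices, List<Feature> subFeatures) {}

        public static void main(String[] args) {
            // Waterproof contains two sub-features, Depth and Time.
            Feature depth = new Feature("Depth", Cardinality.ALTERNATIVE,
                List.of(50, 100), List.of());       // waterproof depth in metres
            Feature time = new Feature("Time", Cardinality.ALTERNATIVE,
                List.of(0, 5, 10, 15), List.of());  // hours under water
            Feature waterproof = new Feature("Waterproof", Cardinality.OPTIONAL,
                List.of(), List.of(depth, time));
            System.out.println(waterproof.name() + " has "
                + waterproof.subFeatures().size() + " sub-features");
        }
    }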
FIT (Feature Implementation Time): Current industrial software systems are usually built incrementally; there is rarely a software product that is built as a final release from its first edition. Products are usually enhanced and features added to them continuously over time. Planning further releases of products, the features to be implemented in them, and their timing is a key step for the success and sustainability of a product line. The feature implementation time should therefore be captured within the variability model, as it contributes to the process of product versioning.

N.F (Negative Features): The development of variability models has traditionally depended purely on the features that are to be supported by a product line, while little attention is paid to the features which are not supported. Product lines range from low-end products to high-end ones, and negative features are features that are not supported by a given product. In such cases the product architecture should be designed in a way that prohibits the enabling of such features by the end user of the product.

A.F.N (Alternative Feature Names): In a software product line life cycle, variability management exists in many areas, from requirements to architecture design and implementation. Different people will use different ways to identify variability and to express features, so the same feature may have different names in different teams, something that needs to be watched carefully.

F.C (Feature Cardinality): It is always desirable to delay design decisions for as long as economically feasible. One potential solution to alleviate the effect of open variation points is to attach a limited number of possible variants that can be bound to a given variation point; this is usually referred to as feature cardinality.

NON-FUNCTIONAL CRITERIA: This section covers the relationships between two or more features. These relationships are classified based on their type and how they affect other features within the variability model as well as the system architecture.

F.D (Feature Dependencies): Features in the same feature model affect each other in a number of ways. Some features cannot be supported unless other features are supported in a product; other features may conflict and cannot be supported in the same product at the same time. Other forms of dependency include weaker relationships, such as when the inclusion of some feature recommends the inclusion/exclusion of another. Dependencies can be quite difficult to model, especially those that relate to quality attributes. Hence, dependencies should not only be represented as first-class citizens in any variability model, but the technique used for capturing dependencies should also allow for complex dependency representation (a sketch of a simple requires/excludes check follows at the end of this subsection).

F.I (Feature Interaction): In feature models, the presence or absence of some features may affect other features; feature interaction is concerned with how different feature combinations affect the system architecture. Different feature combinations might lead to the inclusion of different architectural components and configurations.
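As a simple illustration of requires/excludes dependencies checked against a feature selection, consider the generic Java sketch below; the representation and the example feature names are invented and are not tied to any tool surveyed here.

    // Generic requires/excludes dependency check over a feature selection;
    // illustrative only, with made-up feature names.
    import java.util.List;
    import java.util.Set;

    public class DependencyChecker {
        record Dependency(String from, String to, boolean excludes) {}

        static List<String> violations(Set<String> selected, List<Dependency> deps) {
            return deps.stream()
                .filter(d -> selected.contains(d.from()))
                // requires is violated when the target is absent,
                // excludes when the target is present
                .filter(d -> d.excludes() == selected.contains(d.to()))
                .map(d -> d.from() + (d.excludes() ? " excludes " : " requires ") + d.to())
                .toList();
        }

        public static void main(String[] args) {
            List<Dependency> deps = List.of(
                new Dependency("VideoCall", "Camera", false),    // requires
                new Dependency("BasicUI", "Touchscreen", true)); // excludes
            // Camera is missing, so the requires-dependency is reported.
            System.out.println(violations(Set.of("VideoCall", "BasicUI"), deps));
        }
    }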
T.F (Tangled Features): A key phase in software product line engineering is the mapping of the selectable and configurable features to their corresponding implementation components. The encapsulation of features exhibiting non-functional properties is often limited due to their crosscutting nature. One way to deal with crosscutting features is Aspect-Oriented Development (AOD), which allows isolating, and thereby encapsulating, the implementations of crosscutting concerns in class-like modularization units called aspects.

B.F (Behavioural Features): Capturing behaviour is one of the crucial parts of the management level of the variability model, because some variability requirements encompass behavioural information that cannot be captured using traditional approaches; capturing information relating to data flows and data paths is another example. Many approaches have been proposed to capture behaviour, from UML state charts to use case diagrams within the multiple views of the variability model.

GOVERNANCE ISSUES: This section deals with business concerns of the software product line in general as well as the construction and management of the variability model.

C/B Analysis (Cost/Benefit): Finding and documenting the cost and benefit of realizing each feature provides valuable input to the overall project. The cost of realizing a feature could be captured in the form of a financial estimate or the man/month effort needed. The benefit could range from allowing lower implementation costs and faster time-to-market to enhancing market share and increasing the competitive edge of the product line. It is generally not an easy task to specify the cost/effort and benefit involved in realizing a given feature, but adequate estimates can be obtained using information gathered and experience gained from previous similar projects.

O/C.S.F (Open/Closed Set of Features): In industrial projects it is very hard for the architect to start with a comprehensive and complete set of features. Instead, features are continuously added to the initial feature model over time, even after the design process is completed, and it is very hard to design a system around an open and changing set of features that can be modified at any time. To overcome this problem, some organisations differentiate between two types of features: closed features, which cannot be changed or modified by the architect or development team and serve as the core of the product or product line; and open features, which can change or be altered with advances in technology without affecting the overall system. Open features can be altered by the project manager, architect, or development team, depending on the nature of the feature.

M.V (Multiple Views): Different stakeholders are typically interested in different views of the product line variability model, so it is very important to present the extracted information in multiple views for different groups of stakeholders such as users, system analysts, and developers. The main challenge of multiple views is preserving consistency; for this purpose, meta-views can be introduced to check for inconsistencies.

M.U & A.C (Multiple Users & Access Control): Following on from multiple views, each view will be targeted at a specific user group.
It is very important that a variability management solution provides access control to the variability model data, so that users can only see information relevant to their view and can only modify properties that are within their remit.

Conclusion: This survey covered tools that deal with variability management in software product lines, describing tools from various approaches and their working conditions. The functional criteria record whether or not each tool supports the functional strategies, and the non-functional criteria do the same for the non-functional concerns. The visualization area deals with the visualization type of each tool and its type of representation, e.g. tree, indented list, or graphical. Governance issues cover how well the tools support the governance concerns stated in this report, and the technical area describes each tool's working nature and tooling approach. With this information, industrial practitioners can select their tools more easily. The tools included in this report cover tooling approaches such as modelling, configuration and management, and information about the phases of the software product line life cycle covered by each tool has also been presented.